
Yanhong PENG

Name:
彭 彦鴻 (Yanhong Peng)
Year / Title:
First-year doctoral student
Role:
Underling
Group:
Robotics
Hobbies:
Research, Genshin Impact
A word:
Attention is all you need.

Research topic

Modeling and Control of a Fabric Actuator by Deep Learning on Point Clouds

 


Introduction

Soft actuators are actuators made of flexible materials, such as McKibben-type artificial muscles (Figure 1). Compared with conventional actuators composed of rigid bodies, such as electric motors and hydraulic cylinders, soft actuators are lighter and more flexible.

Figure 1: McKibben-type artificial muscles

Soft actuators have attracted considerable attention and have been developed for a wide range of applications, such as rehabilitation equipment and wearable devices.

However, modeling and control remain difficult because the structural compliance and the viscoelasticity of the material lead to complex, highly nonlinear behaviors that are hard to predict. A potential solution to this nonlinearity is deep learning: deep learning algorithms are effective at approximating nonlinear relationships and have recently been applied to a range of problems in soft robotics.

 

Aim

This research aims to overcome the above limitations of soft actuators by means of deep learning. The goal is to develop a wearable system based on the soft actuator that can assist people with common daily motions. The work proceeds in three stages. In the first stage, a simulator is built to reproduce the deformation of the soft actuator. In the second stage, a control method is derived from the developed simulator. In the final stage, a wearable system is developed to support people with limited mobility and the elderly.

 

Method

In this study, each point cloud was converted into a feature representation that is invariant to the order of the points, and the temporal evolution of these features was modeled with an LSTM. The correction system consists of an encoder, an LSTM, and a decoder, as shown in Figure 2. The encoder converts a point cloud of m points obtained from the simulation into an n-dimensional feature vector, the LSTM associates the features of l consecutive frames of the simulated deformation with the features of the predicted deformation, and the decoder converts the n-dimensional output of the LSTM back into a point cloud of m points.

Figure 2: Correction system structure (Encoder – LSTM – Decoder)
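
To make this structure concrete, below is a minimal PyTorch sketch of the encoder, LSTM, and decoder described above. The PointNet-style shared MLP with max pooling used for order invariance, the class names, the layer sizes, and the example values (n = 128 features, m = 1024 points, l = 5 frames) are illustrative assumptions and not details of the actual implementation.

# Illustrative sketch only (PyTorch). The PointNet-style shared MLP with max
# pooling, the layer sizes, and all hyperparameters are assumptions made for
# illustration; only the encoder -> LSTM -> decoder layout and the shapes
# (m points, n-dimensional features, l frames) come from the description above.
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """Maps a point cloud of m points (m, 3) to an n-dimensional, order-invariant feature."""

    def __init__(self, n_features: int = 128):
        super().__init__()
        # Shared per-point MLP implemented with 1-D convolutions.
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, n_features, 1), nn.ReLU(),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, m, 3) -> (batch, 3, m) for Conv1d
        x = self.mlp(points.transpose(1, 2))
        # Max pooling over the points makes the feature independent of point order.
        return x.max(dim=2).values             # (batch, n_features)


class PointCloudDecoder(nn.Module):
    """Maps an n-dimensional feature back to a point cloud of m points."""

    def __init__(self, n_features: int = 128, n_points: int = 1024):
        super().__init__()
        self.n_points = n_points
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, n_points * 3),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.mlp(feat).view(-1, self.n_points, 3)   # (batch, m, 3)


class CorrectionSystem(nn.Module):
    """Encoder -> LSTM over l frames -> Decoder, as in Figure 2."""

    def __init__(self, n_features: int = 128, n_points: int = 1024):
        super().__init__()
        self.encoder = PointCloudEncoder(n_features)
        self.lstm = nn.LSTM(n_features, n_features, batch_first=True)
        self.decoder = PointCloudDecoder(n_features, n_points)

    def forward(self, clouds: torch.Tensor) -> torch.Tensor:
        # clouds: (batch, l, m, 3) -- l consecutive simulated frames
        b, l, m, _ = clouds.shape
        feats = self.encoder(clouds.reshape(b * l, m, 3)).reshape(b, l, -1)
        out, _ = self.lstm(feats)               # (batch, l, n_features)
        # Use the last frame's feature to predict the corrected deformation.
        return self.decoder(out[:, -1])         # (batch, m, 3)


if __name__ == "__main__":
    model = CorrectionSystem(n_features=128, n_points=1024)
    frames = torch.randn(2, 5, 1024, 3)         # batch of 2, l = 5 frames, m = 1024 points
    print(model(frames).shape)                  # torch.Size([2, 1024, 3])

The max pooling over the point dimension is what provides the order invariance required above: permuting the input points leaves the encoded feature, and therefore the predicted point cloud, unchanged.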

 

Result

This section will be updated after the research has been published.
