
Publications

Learning-based Shape Sensing of a Soft Optical Waveguide Sensor, IEEE International Conference on Robotics and Automation (ICRA) 2023

Y. Li, C.H. Mak, K. Wang, M. Wu, J.D.L. Ho, Q. Dou, K.Y. Sze, K. Althoefer, K.W. Kwok

Optical waveguides create interesting opportunities in soft sensing and electronic skins due to their potential for high flexibility, quick response times, and compactness. The loss or change of light intensity inside a waveguide can be measured and converted into useful sensing feedback such as strain or shape. In this study, we utilize simple light-emitting diodes (LEDs) and photodetectors (PDs) combined with an intelligent shape-decoding framework to enable 3D shape sensing of a self-contained flexible substrate. Finite element analysis (FEA) is leveraged to enrich ground-truth data from sparse to dense points for model training. The mapping from light intensities to the overall sensor shape is achieved with an autoregression-based model that considers temporal continuity and spatial locality. The sensing framework was evaluated on a fish-shaped prototype, where sensing accuracy (RMSE = 0.27 mm) and repeatability (Δ light intensity < 0.31% over 1000 cycles) were tested underwater.
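
For illustration only, the following is a minimal sketch of an autoregressive intensity-to-shape regressor of the kind described above; the layer sizes, number of photodetectors, and point count are assumptions, not the paper's values.

```python
# Minimal sketch (not the authors' code): an autoregressive regressor mapping a short
# history of photodetector intensities, plus the previous shape estimate, to the current
# dense shape (n_points surface points). All dimensions are illustrative.
import torch
import torch.nn as nn

class IntensityToShape(nn.Module):
    def __init__(self, n_pd=8, history=5, n_points=200, hidden=256):
        super().__init__()
        in_dim = n_pd * history + n_points * 3   # intensity window + previous shape
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_points * 3),
        )

    def forward(self, intensity_hist, prev_shape):
        # intensity_hist: (B, history, n_pd); prev_shape: (B, n_points, 3)
        x = torch.cat([intensity_hist.flatten(1), prev_shape.flatten(1)], dim=1)
        return self.net(x).view(-1, prev_shape.shape[1], 3)

model = IntensityToShape()
shape = torch.zeros(1, 200, 3)        # start from the undeformed (rest) shape
intensities = torch.rand(1, 5, 8)     # sliding window of photodetector readings
shape = model(intensities, shape)     # autoregressive update at each time step
```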

Target-free Extrinsic Calibration of Event-LiDAR Dyad using Edge Correspondences, IEEE Robotics and Automation Letters (in press)

Wanli Xing, Shijie Lin, Lei Yang, Jia Pan

Calibrating the extrinsic parameters of sensory devices is crucial for fusing multi-modal data. Recently, event cameras have emerged as a promising type of neuromorphic sensor, with many potential applications in fields such as mobile robotics and autonomous driving. When combined with LiDAR, they can provide more comprehensive information about the surrounding environment. Nonetheless, due to the distinctive representation of event cameras compared to traditional frame-based cameras, calibrating them with LiDAR presents a significant challenge. In this letter, we propose a novel method to calibrate the extrinsic parameters between a dyad of an event camera and a LiDAR without the need for a calibration board or other equipment. Our approach takes advantage of the fact that when an event camera is in motion, changes in reflectivity and geometric edges in the environment trigger numerous events, which can also be captured by LiDAR. Our proposed method leverages the edges extracted from events and point clouds and correlates them to estimate the extrinsic parameters. Experimental results demonstrate that our proposed method is highly robust and effective in various scenes.
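
As a rough illustration of the edge-correlation idea (not the authors' implementation; the distance-transform cost, helper names, and optimizer choice are assumptions):

```python
# Minimal sketch: refine event-camera/LiDAR extrinsics so that LiDAR edge points,
# projected with intrinsics K, fall on edges of the accumulated event image.
# `edge_dist_map` is assumed to be a precomputed distance transform of the event edge map.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def edge_alignment_cost(params, lidar_edges, edge_dist_map, K):
    rvec, t = params[:3], params[3:]
    pts_cam = Rotation.from_rotvec(rvec).apply(lidar_edges) + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]                 # keep points in front of the camera
    uv = (K @ (pts_cam / pts_cam[:, 2:3]).T).T[:, :2]      # pinhole projection
    h, w = edge_dist_map.shape
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    return edge_dist_map[v, u].mean()                      # mean distance to the nearest event edge

# Usage sketch: x0 is an initial extrinsic guess (rotation vector + translation).
# res = minimize(edge_alignment_cost, x0, args=(lidar_edges, dist_map, K), method="Nelder-Mead")
```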

Imitation Learning With Time-Varying Synergy for Compact Representation of Spatiotemporal Structures, IEEE Access

Kyo Kutsuzawa and Mitsuhiro Hayashibe

Imitation learning is a promising approach for robots to learn complex motor skills. Recent techniques allow robots to learn long-term movements comprising multiple sub-behaviors. However, learning the temporal structure of movements from a demonstration is challenging, particularly when sub-behaviors overlap and are not labeled in advance. This study applies time-varying synergies, representations used in neuroscience to describe the spatial and temporal structure of human behavior, to imitation learning. The proposed method extracts time-varying synergies from human demonstrations, with neural networks learning their activation patterns. Because time-varying synergies decompose demonstrations into linear combinations of primitives while allowing overlap, neural networks can learn demonstrations efficiently, which makes the model compact and improves its generalization ability. The proposed method was evaluated on a cursive letter-writing task requiring overlapping sub-behaviors. The proposed method allows a neural network to generate new movements with a higher success rate and fewer parameters than a network trained without it. Moreover, the neural network worked robustly against control deviations and disturbances on an actual robot.
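
For reference, the standard time-varying synergy decomposition from the motor-control literature that this approach builds on (notation assumed): a demonstrated trajectory is expressed as a sum of scaled, time-shifted primitives whose shifted copies may overlap.

```latex
% m(t): demonstrated movement; w_i(t): the i-th time-varying synergy (a fixed
% spatiotemporal primitive); c_i >= 0: scaling coefficient; t_i: onset time shift.
m(t) \;=\; \sum_{i=1}^{N} c_i\, w_i(t - t_i), \qquad c_i \ge 0 .
```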

Adversarially Regularized Graph Attention Networks for Inductive Learning on Partially Labeled Graphs, Knowledge-Based Systems 268 (2023) 110456.

J. Xiao, Q. Dai, X. Xie, J. Lam, K.W. Kwok

The high cost of data labeling often results in node label shortage in real applications. To improve node classification accuracy, graph-based semi-supervised learning leverages the ample unlabeled nodes to train together with the scarce available labeled nodes. However, most existing methods require the information of all nodes, including those to be predicted, during model training, which is not practical for dynamic graphs with newly added nodes. To address this issue, an adversarially regularized graph attention model is proposed to classify newly added nodes in a partially labeled graph. An attention-based aggregator is designed to generate the representation of a node by aggregating information from its neighboring nodes, thus naturally generalizing to previously unseen nodes. In addition, adversarial training is employed to improve the model’s robustness and generalization ability by enforcing node representations to match a prior distribution. Experiments on real-world datasets demonstrate the effectiveness of the proposed method in comparison with the state-of-the-art methods. The code is available at https://github.com/JiarenX/AGAIN.
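
For intuition, a minimal single-head sketch of such an attention-based neighborhood aggregator (dimensions are illustrative; the repository linked above contains the actual implementation):

```python
# Minimal sketch: a node's representation is an attention-weighted aggregation of its
# sampled neighbors' features, so previously unseen nodes can be embedded at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAggregator(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, node_feat, neigh_feat):
        # node_feat: (B, in_dim); neigh_feat: (B, K, in_dim) sampled neighbors
        h = self.proj(node_feat).unsqueeze(1)                 # (B, 1, out_dim)
        hn = self.proj(neigh_feat)                            # (B, K, out_dim)
        scores = self.attn(torch.cat([h.expand_as(hn), hn], dim=-1))
        alpha = F.softmax(F.leaky_relu(scores), dim=1)        # attention over neighbors
        return F.elu((alpha * hn).sum(dim=1))                 # aggregated node representation
```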

Hierarchical Temporal Transformer for 3D Hand Pose Estimation and Action Recognition from Egocentric RGB Videos, CVPR 2023, arXiv:2209.09484v4 [cs.CV]

Yilin Wen, Hao Pan, Lei Yang, Jia Pan, Taku Komura, Wenping Wang

Understanding dynamic hand motions and actions from egocentric RGB videos is a fundamental yet challenging task due to self-occlusion and ambiguity. To address occlusion and ambiguity, we develop a transformer-based framework to exploit temporal information for robust estimation. Noticing the different temporal granularity of and the semantic correlation between hand pose estimation and action recognition, we build a network hierarchy with two cascaded transformer encoders, where the first one exploits the short-term temporal cue for hand pose estimation, and the latter aggregates per-frame pose and object information over a longer time span to recognize the action. Our approach achieves competitive results on two first-person hand action benchmarks, namely FPHA and H2O. Extensive ablation studies verify our design choices.
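
For illustration, a minimal sketch of a two-level cascade of this kind (layer counts, feature size, window length, and output dimensions are assumptions, not the paper's settings):

```python
# Minimal sketch: a short-window encoder refines per-frame hand-pose tokens, then a
# long-span encoder pools them over the whole clip for action recognition.
import torch
import torch.nn as nn

def enc(d):
    return nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2)

pose_encoder, action_encoder = enc(256), enc(256)
pose_head = nn.Linear(256, 21 * 3)      # 21 hand joints x 3D coordinates
action_head = nn.Linear(256, 45)        # e.g., 45 action classes

frames = torch.randn(2, 64, 256)        # (B, T, D) per-frame image features
B, T, D = frames.shape
win = 8
short = frames.view(B * T // win, win, D)            # split the clip into short windows
pose_tokens = pose_encoder(short).reshape(B, T, D)   # short-term temporal cue
hand_pose = pose_head(pose_tokens)                   # per-frame 3D hand pose
action_logits = action_head(action_encoder(pose_tokens).mean(dim=1))  # clip-level action
```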

CNN-Based Visual Servoing for Simultaneous Positioning and Flattening of Soft Fabric Parts, IEEE International Conference on Robotics and Automation (ICRA) 2023

Fuyuki Tokuda*, Akira Seino, Akinari Kobayashi, and Kazuhiro Kosuge

This paper proposes CNN-based visual servoing for simultaneous positioning and flattening of a soft fabric part placed on a table by a dual manipulator system. We propose a network for multimodal data processing of grayscale images captured by a camera and force/torque applied to force sensors. The training dataset is collected by moving the real manipulators, which enables the network to map the captured images and force/torque to the manipulator’s motion in Cartesian space. We apply structured lighting to emphasize the features of the surface of the fabric part since the surface shape of the non-textured fabric part is difficult to recognize by a single grayscale image. Through experiments, we show that the fabric part with unseen wrinkles can be positioned and flattened by the proposed visual servoing scheme.
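
For illustration, a minimal sketch of such a two-branch multimodal network (layer sizes and the output dimension are assumptions, not the paper's architecture):

```python
# Minimal sketch: a CNN branch for the grayscale image and an MLP branch for the
# force/torque reading are fused to regress Cartesian motion for the dual-arm system.
import torch
import torch.nn as nn

class VisualServoNet(nn.Module):
    def __init__(self, out_dim=12):       # e.g., 6-DoF motion per manipulator
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),             # -> (B, 32*4*4)
        )
        self.ft = nn.Sequential(nn.Linear(6, 32), nn.ReLU())   # force/torque branch
        self.head = nn.Sequential(nn.Linear(32 * 16 + 32, 128), nn.ReLU(),
                                  nn.Linear(128, out_dim))

    def forward(self, image, wrench):
        return self.head(torch.cat([self.cnn(image), self.ft(wrench)], dim=1))

net = VisualServoNet()
motion = net(torch.randn(1, 1, 128, 128), torch.randn(1, 6))   # Cartesian motion command
```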

Robot End-effector for Fabric Folding, IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) 2023

Akira Seino, Junya Terayama, Fuyuki Tokuda, Akinari Kobayashi, and Kazuhiro Kosuge

In this paper, we propose a robot end-effector for folding a fabric part of a garment along a straight line. In the garment production process, some edges of the fabric parts of a garment need to be folded before sewing. Conventional automated folding systems use a fixture designed for each fabric part and folding operation. The fixture is not universal and needs to be redesigned when the shape of the folded fabric part changes. In this paper, we consider performing fabric folding without using a fixture. We propose a concept of a robot end-effector for folding a fabric part along a straight fold line and develop a prototype of an end-effector referred to as "F-FOLD" (Free-form FOLDing). Folding of the edge of a fabric part is achieved by moving F-FOLD along the desired straight fold line. Experimental results illustrate how F-FOLD folds a fabric part along a straight line.

Domain Adaptive Graph Infomax via Conditional Adversarial Networks, IEEE Transactions on Network Science and Engineering (TNSE), 10(1), 35-52.

J. Xiao, Q. Dai, X. Xie, Q. Dou, K.W. Kwok, J. Lam

The emerging graph neural networks (GNNs) have demonstrated impressive performance on the node classification problem in complex networks. However, existing GNNs are mainly devised to classify nodes in a (partially) labeled graph. To classify nodes in a newly-collected unlabeled graph, it is desirable to transfer label information from an existing labeled graph. To address this cross-graph node classification problem, we propose a graph infomax method that is domain adaptive. Node representations are computed through neighborhood aggregation. Mutual information is maximized between node representations and global summaries, encouraging node representations to encode the global structural information. Conditional adversarial networks are employed to reduce the domain discrepancy by aligning the multimodal distributions of node representations. Experimental results on real-world datasets validate the performance of our method in comparison with the state-of-the-art baselines.
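
For intuition, a minimal sketch of conditional adversarial alignment in the CDAN style (multilinear conditioning of features on class predictions); dimensions and module names are assumptions, not the paper's exact architecture:

```python
# Minimal sketch: the domain discriminator sees the outer product of node representations
# and class predictions; the encoder is trained to fool it so that source and target
# representation distributions align class-conditionally.
import torch
import torch.nn as nn

feat_dim, n_class = 64, 5
discriminator = nn.Sequential(nn.Linear(feat_dim * n_class, 128), nn.ReLU(),
                              nn.Linear(128, 1))

def conditional_adversarial_loss(feat_src, pred_src, feat_tgt, pred_tgt):
    bce = nn.BCEWithLogitsLoss()
    def joint(feat, pred):   # multilinear (outer-product) conditioning
        return torch.bmm(pred.softmax(-1).unsqueeze(2), feat.unsqueeze(1)).flatten(1)
    d_src = discriminator(joint(feat_src, pred_src))
    d_tgt = discriminator(joint(feat_tgt, pred_tgt))
    # discriminator loss; the encoder maximizes this (or uses a gradient reversal layer)
    return bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
```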

Grasping Living Objects with Adversarial Behaviors Using Inverse Reinforcement Learning, IEEE Transactions on Robotics (2023)

Zhe Hu, Yu Zheng, Jia Pan

Living objects are difficult to grasp since they can actively elude capture by adopting adversarial behaviors that are extremely hard to model or predict. In this case, an inappropriately strong contact force may hurt the struggling living object, and a grasping algorithm that minimizes the contact force whenever possible is required. To solve this challenging task, in this article we present a reinforcement learning (RL)-based algorithm with two stages: the pregrasp stage and the in-hand stage. In the pregrasp stage, the robot focuses on the living object's adversarial behavior and approaches it in a reliable manner. In particular, we use inverse RL to encode the living object's adversarial behavior into a reward function. The negative of the learned reward function is then used within the RL framework to train a high-quality grasping policy that can compete with the living object's adversarial behavior. In the in-hand stage, we use RL to train a grasp policy such that the dexterous hand can grab the living object with minimal force. A set of dense rewards is also specifically designed to encourage the robot to grasp and hold the living object persistently. To further improve the grasp performance, we explicitly take into account the structure of the dexterous robot hand by treating the hand as a graph and adopting a graph convolutional network to formulate the grasping policy. We conduct a set of experiments to demonstrate the performance of our proposed method, in which the robot grasps living objects with success rates of 90% and 95% in the pregrasp and in-hand stages, respectively. The contact force applied by the robotic hand to the living object is dramatically reduced in comparison with the baseline grasping policy.

Towards Adaptive Continuous Control of Soft Robotic Manipulator using Reinforcement Learning, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 7074-7081

Y. Li, X. Wang, K.W. Kwok

Although soft robots are gaining considerable popularity for dexterous and safe manipulation, accurate motion control is still an open problem. Recent investigations suggest that reinforcement learning (RL) is a promising solution but lacks efficient adaptability for Sim2Real transfer or environment variations. In this paper, we present a deep deterministic policy gradient (DDPG)-based control system for continuous task-space manipulation of soft robots. Domain randomization is adopted in simulation for fast control-policy initialization, while an offline retraining strategy is utilized to update the controller parameters for incremental learning. The experiments demonstrate that the proposed RL controller can track a moving target accurately (with an RMSE of 1.26 mm) and accommodate varying external loads effectively (with ~30% RMSE reduction after retraining). Comparisons between the proposed RL controller and other supervised-learning-based controllers in handling additional tip load were also conducted. The results support that our RL method is suitable for automatic learning, with no need for manual intervention in data processing, particularly in cases with external disturbances and actuation redundancy.

Isotropic ARAP energy using Cauchy-Green invariants, ACM Transactions on Graphics (TOG) 41, no. 6 (2022): 1-14.

Huancheng Lin, Floyd M. Chitalu, Taku Komura

Isotropic As-Rigid-As-Possible (ARAP) energy has been popular for shape editing, mesh parametrisation, and soft-body simulation for almost two decades. However, a formulation using Cauchy-Green (CG) invariants has always been unclear, due to a rotation-polluted trace term that cannot be directly expressed using these invariants. We show how this incongruent trace term can be understood via an implicit relationship to the CG invariants. Our analysis reveals this relationship to be a polynomial whose roots equate to the trace term, and whose derivatives also give rise to closed-form expressions of the Hessian, guaranteeing positive semi-definiteness for a fast and concise Newton-type implicit time integration. A consequence of this analysis is a novel analytical formulation to compute rotations and singular values of deformation-gradient tensors without explicit/numerical factorization, which is significant, yielding up to a 3.5× speedup and benefiting energy-function evaluation to reduce solver time. We validate our energy formulation by experiments and comparison, demonstrating that our resulting eigendecomposition using the CG invariants is equivalent to existing ARAP formulations. We thus reveal isotropic ARAP energy to be a member of the "Cauchy-Green club", meaning that it can indeed be defined using CG invariants and therefore that the closed-form expressions of the resulting Hessian are shared with other energies written in their terms.
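
For context, the standard quantities referenced above (notation assumed): with deformation gradient F, right Cauchy-Green tensor C = F^T F, and polar decomposition F = RS, the ARAP energy expands so that only the trace of the stretch S is not directly a CG invariant.

```latex
% Standard 3D ARAP expansion (notation assumed, not copied from the paper):
\Psi_{\mathrm{ARAP}}(F) \;=\; \lVert F - R \rVert_F^{2}
  \;=\; \operatorname{tr}(C) \;-\; 2\,\operatorname{tr}(S) \;+\; 3,
\qquad
\operatorname{tr}(S) \;=\; \sigma_1 + \sigma_2 + \sigma_3 .
```

Here tr(C) is the first CG invariant, while tr(S), the sum of singular values of F, is the rotation-dependent trace term that the paper characterizes implicitly as the root of a polynomial in the CG invariants.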

DISP6D: Disentangled Implicit Shape and Pose Learning for Scalable 6D Pose Estimation. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX, pp. 404-421. Cham: Springer Nature Switzerland, 2022.

Yilin Wen, Xiangyu Li, Hao Pan, Lei Yang, Zheng Wang, Taku Komura, Wenping Wang

Scalable 6D pose estimation for rigid objects from RGB images aims at handling multiple objects and generalizing to novel objects. Building on a well-known auto-encoding framework to cope with object symmetry and the lack of labeled training data, we achieve scalability by disentangling the latent representation of auto-encoder into shape and pose sub-spaces. The latent shape space models the similarity of different objects through contrastive metric learning, and the latent pose code is compared with canonical rotations for rotation retrieval. Because different object symmetries induce inconsistent latent pose spaces, we re-entangle the shape representation with canonical rotations to generate shape-dependent pose codebooks for rotation retrieval. We show state-of-the-art performance on two benchmarks containing textureless CAD objects without category and daily objects with categories respectively, and further demonstrate improved scalability by extending to a more challenging setting of daily objects across categories.

"LayoutSLAM: Object Layout based Simultaneous Localization and Mapping for Reducing Object Map Distortion," 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan

Kenta Gunji; Kazunori Ohno; Shotaro Kojima; Ranulfo Bezerra; Yoshito Okada; Masashi Konyo; Satoshi Tadokoro

There is an increasing demand for robots that can substitute for humans in various tasks. Mobile robots are being introduced in factories, stores, and public facilities for carrying goods and cleaning. In factories and stores, desks and shelves are arranged so that the work and movement of personnel are reduced, and the surrounding furniture is set so that a single task can be performed in the same place. It is essential to study robot intelligence that uses information from such layouts, in which human labor and movement are optimized. However, there is no map construction or localization method that exploits the characteristics of furniture arrangements that facilitate human work in a workspace. Therefore, this study proposes a method for object mapping using layouts in crowded workspaces. The characteristics of furniture placement that make it easy for people to work are represented as a graph: the nodes are the objects, the links represent the layout relations between them, and the link weights represent the strength of the layout properties. This graph is optimized by GraphSLAM to construct a map that considers the characteristics of the arrangement. Using the graph structure improves the map's accuracy while allowing for relative changes in placement. The results show a 50.44% improvement in accuracy in a space with 18 desks, followed by two variations of similar desk layouts. The same improvement in accuracy was also observed when the relative positioning of objects changed significantly in each variation, such as a shift to the left or right on the same side.
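
In generic terms, the layout-aware optimization can be written as a standard pose/object graph objective augmented with layout factors (notation below is assumed, not taken from the paper):

```latex
% x_t: robot poses; o_i: object poses; z_{t,i}: object observations with model h;
% \mathcal{L}: layout edges, with weight w_{ij} encoding the strength of the layout
% property and \hat{g}_{ij} the expected relative placement.
\min_{x,\,o}\;
  \sum_{t,i} \bigl\lVert z_{t,i} - h(x_t, o_i) \bigr\rVert_{\Sigma}^{2}
\;+\;
  \sum_{(i,j)\in\mathcal{L}} w_{ij}\,\bigl\lVert g(o_i, o_j) - \hat{g}_{ij} \bigr\rVert^{2}
```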

Development and Control of Robot Hand with Finger Camera for Garment Handling Tasks, International Conference on Intelligent Robots and Systems (IROS) 2022

Hirokasu Kondo, Jose V. Salazar L and Y. Hirata

Robotic automation is steadily growing in different industries around the world. However, in some industries, such as garment manufacturing, most tasks are still predominantly manual due to the flexible nature of clothes. Garments are easily deformed when force is applied, so it is difficult for robots to handle them while predicting their deformation. Our general research goal is to realize flexible cloth handling using robots to automate different tasks in the garment manufacturing industry. We draw inspiration from the actions that humans perform when manipulating clothes and emulate them using a robotic system. In this paper, we develop a robot hand with a camera in the finger to obtain local information about the contact between the garment and the robot hand, in order to achieve garment handling tasks. Specifically, we focus on the pinch-and-slide motion that humans perform when straightening a piece of cloth. We selected a specific task to be automated and proposed three manipulation strategies to approach the garment using visual information from the finger camera, which enabled the system to perform the task consistently. We carried out two validation experiments to demonstrate the effectiveness of the proposed methods, and an application experiment in which we evaluate their applicability to a specific task.

"Distortion-free map construction utilizing layout patterns in space," The 40th annual conference of the Robotics Society of Japan (RSJ), Tokyo, Japan

Kenta Gunji, Kazunori Ohno, Shotaro Kojima, Ranulfo Bezerra, Masao Kuwahara, Yoshito Okada, Masashi Konyo, Satoshi Tadokoro

(In Japanese Only) In recent years, there has been considerable research on using autonomous robots to automate part of the work carried out in spaces where humans work, such as factories, stores, and public facilities [1].

To realize autonomous robot navigation, SLAM, a technique for building an object placement map while estimating the robot's own position, is required. Benefiting from advances in machine learning, research on object SLAM has also become active in recent years [2][3]. Object SLAM recognizes the surrounding environment at the level of individual objects and thereby enriches the information contained in the environment map.

On the other hand, in spaces where humans work, furniture such as desks and shelves is arranged so that human work and movement are minimized, and the placement of surrounding furniture is determined so that a single task can be performed in the same place [4][5]. As a result, the objects placed in a human workspace are arranged in an orderly fashion related to the work.

Spaces where humans work thus exhibit characteristic layouts. However, current object maps do not take the layout of object placement into account, so parts of objects may overlap one another when the map is constructed. A distorted object map degrades the accuracy of the robot's self-localization. Moreover, an object's placement cannot be reflected in the map until the object is observed, so constructing the object map and updating it in response to environmental changes takes time.

This paper therefore proposes a method for incorporating layout information into object SLAM. Regular patterns and arrangements related to human work are defined as layout constraints using a graph structure, so that object positions can be represented as distributions. Using observations from sensors mounted on the robot together with object placement constraints (layout constraints) given in advance, the map and the robot's trajectory are reconstructed simultaneously with high accuracy.

"Voronoi-based multi-path roadmap using imaginary obstacles for multi-robot path planning," The 40th annual conference of the Robotics Society of Japan (RSJ), Tokyo, Japan

Hanif Aryadi; Ranulfo Bezerra; Kazunori Ohno; Kenta Gunji; Shotaro Kojima; Masao Kuwahara; Yoshito Okada; Masashi Konyo; Satoshi Tadokoro

The roadmap method is a popular approach to mobile robot path planning [1]. In this method, the connectivity of the free space is captured in a graph. This graph is then used by the robots for planning their paths. By using this method, the search space can be significantly decreased, thus reducing the path planning complexity.

The Voronoi diagram is a widely used method for creating a roadmap. The diagram is defined as the partitioning of a plane into regions closest to each of the objects considered. This method is able to create paths with high clearance from obstacles even in a complex environment.

However, the Voronoi diagram, by its nature, lacks the ability to make use of free space and create redundant paths for a large number of robots. In the Voronoi diagram, only one path is generated between each pair of obstacles on the map, which limits the robots' movements when that path is occupied. Utilizing the available space to create more paths is vital to enhance planning efficiency in multi-robot systems.

This paper summarizes our novel approach to generating a redundant roadmap based on the Voronoi diagram that can be used by multiple robots while maintaining the Voronoi diagram's high clearance from obstacles. To evaluate the efficiency of the proposed roadmap, multi-robot path planning tests were conducted on several different roadmaps and the resulting metrics were compared.

The main contributions of this study are as follows:
• Development of an algorithm for creating a multi-path roadmap based on the Voronoi diagram.
• Comparative analysis of multi-robot path planning efficiency between single-path and multi-path roadmaps.
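
For reference, a minimal sketch of the baseline single-path construction discussed above (the multi-path extension is the paper's contribution and is not reproduced here; helper names and the clearance threshold are assumptions):

```python
# Minimal sketch: build a Voronoi-based roadmap from 2D obstacle points, keeping only
# edges whose endpoints and midpoint keep a minimum clearance from obstacles.
import numpy as np
import networkx as nx
from scipy.spatial import Voronoi, cKDTree

def voronoi_roadmap(obstacle_pts, clearance=0.5):
    vor = Voronoi(obstacle_pts)
    tree = cKDTree(obstacle_pts)
    graph = nx.Graph()
    for a, b in vor.ridge_vertices:
        if a == -1 or b == -1:                       # skip ridges extending to infinity
            continue
        pa, pb = vor.vertices[a], vor.vertices[b]
        mid = (pa + pb) / 2
        if min(tree.query(pa)[0], tree.query(pb)[0], tree.query(mid)[0]) >= clearance:
            graph.add_edge(a, b, weight=float(np.linalg.norm(pa - pb)))
    return vor, graph

obstacles = np.random.rand(40, 2) * 10.0
vor, roadmap = voronoi_roadmap(obstacles)
# Each robot then searches this graph, e.g. nx.shortest_path(roadmap, src, dst, weight="weight")
```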

Omni-directional Wheel with Suction Mechanism (in Japanese), Conference of the Robotics Society of Japan 2022

Shoya Shimizu, Shunsuke Sano, Yuto Kemmotsu, Kazuki Abe, Masahiro Watanabe, Kenjiro Tadakuma, Kazuhiro Kosuge, Satoshi Tadokoro

(In Japanese Only) Among wall-climbing robots, those with a vacuum suction mechanism have the advantages of being usable regardless of the wall material and of adhering to the wall non-destructively. Various types, such as legged and crawler mechanisms, have therefore been studied. Among them, wheeled types excel in both speed and range of movement and offer broad applicability from the viewpoint of mobility. A vacuum-suction wheeled mechanism has previously been developed for inspecting concrete structures [1], but it can move only vertically. When inspecting a structure, being able to translate on the wall both vertically and horizontally at the same time while keeping the gondola's posture constant would make inspection work more efficient and easier to avoid obstacles on the wall. An omni-wheel that can both adhere to the wall and move omnidirectionally is therefore effective for translating to any point on the wall plane.

Applications of this mechanism are not limited to wall climbing; another example is the handling of fabric materials. In recent years, there has been a strong need to automate the sewing process for high-mix, low-volume garment production. A mechanism using omni-wheels has been proposed for pressing fabric down while controlling its position and orientation in coordination with the sewing machine feed [2]; adding a suction function to such an omni-wheel is expected to help suppress slippage.

In this study, we propose the suction wheel shown in Fig. 2 and, as its extension, the suction omni-wheel shown in Fig. 3. To obtain an appropriate suction force, both mechanisms are equipped with a flow-path distribution mechanism that generates negative pressure only at the wheel's contact surface.

In this paper, following the proposed concept, we build working prototypes of the suction wheel and the suction omni-wheel with flow-path distribution mechanisms in order to examine their feasibility and clarify the remaining issues. We first present the basic principles of suction and flow-path distribution, describe the configuration of the prototypes, and then verify their effectiveness through evaluation experiments and discussion.

"PEFTST: A Heterogeneous Multi-Robot Task Scheduling Heuristic for Garment Mass Customization," The 40th annual conference of the Robotics Society of Japan (RSJ), Tokyo, Japan

Ranulfo Bezerra; Kazunori Ohno; Shotaro Kojima; Hanif Aryadi; Kenta Gunji; Masao Kuwahara; Yoshito Okada; Masashi Konyo; Satoshi Tadokoro

With the ever-changing fashion trends and increasing customer demand for the consideration of personal preferences, customers can no longer be treated as homogeneous mass markets. At the same time, globalization and the expansion of commerce have increased the need to maintain competitiveness in the global market. This means that companies must find a way to restore product individuality while maintaining volume production and remaining affordable to customers. That is why "mass customization" is especially timely and has been increasingly adopted by several industrial sectors. The concept of "mass customization" integrates features of both craft and mass production models. Namely, it keeps the low-cost and high-volume manufacturing found in mass production but, like craft production, offers the opportunity to customize products to the requirements of individual customers [1, 2, 3].

In a large-scale mass customization garment factory environment, the customer submits to the factory a request that is decomposed into several tasks with precedence constraints. When the factory receives a request, robots are assigned to individual tasks according to the demand of each task. Therefore, to achieve the effective delivery of customers' requests, it is essential that the system is able to efficiently leverage its robots to complete each of their tasks in a timely manner.

However, previous works focus on scheduling the tasks in a digital environment only, where the transfer of materials required to execute a preceding task can be easily computed from the architecture information. In our work, the schedule is created for two types of physical robots: static ones that perform the tasks, and mobile ones that transport the materials the static robots need to perform their assigned tasks. Thus, a new algorithm is required that can schedule both the tasks to the static robots and the material transport implied by the precedence constraints to the mobile robots.

This paper presents a summary of [5], which proposes an algorithm that solves the task scheduling problem for a garment mass customization environment. PEFT with Spatial Transportation (PEFTST) is an extension of the PEFT algorithm [4]. The authors have extended this heuristic to schedule both the tasks and, when necessary, the transportation of the materials needed to complete them. The results show that the heuristic is able to solve the scheduling problem in a mass customization scenario.
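
For illustration of the setting (not PEFTST itself; the earliest-finish-time rule, the fixed transport delay, and all names below are assumptions), a minimal list-scheduling sketch that assigns precedence-constrained tasks to static robots and charges a transport delay whenever a predecessor ran on a different robot:

```python
# Minimal sketch: greedy earliest-finish-time assignment of tasks to static robots,
# adding a fixed transport delay when a predecessor's output must be moved between robots.
def schedule(tasks, durations, preds, n_robots, transport_time=1.0):
    finish, assigned = {}, {}
    robot_free = [0.0] * n_robots
    remaining = set(tasks)
    while remaining:
        ready = [t for t in remaining if all(p in finish for p in preds.get(t, []))]
        for task in sorted(ready):                        # deterministic processing order
            best = None
            for r in range(n_robots):
                # a task may start once the robot is free and all inputs have arrived
                arrival = max([0.0] + [finish[p] + (transport_time if assigned[p] != r else 0.0)
                                       for p in preds.get(task, [])])
                end = max(robot_free[r], arrival) + durations[task]
                if best is None or end < best[0]:
                    best = (end, r)
            finish[task], assigned[task] = best
            robot_free[best[1]] = best[0]
            remaining.discard(task)
    return finish, assigned

# Example: T3 depends on T1 and T2; one transport delay applies if they run on different robots.
fin, asg = schedule(["T1", "T2", "T3"], {"T1": 2, "T2": 3, "T3": 1},
                    {"T3": ["T1", "T2"]}, n_robots=2)
```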

"Task Scheduling Problem for Heterogeneous Multi-Robot Garment Mass Customization," ICRA 2022 Workshop on Collaborative Robots and the Work of the Future, Philadelphia (PA), USA

Ranulfo Bezerra; Kazunori Ohno; Shotaro Kojima; Hanif A. Aryadi; Kenta Gunji; Masao Kuwahara; Yoshito Okada; Masashi Konyo; Satoshi Tadokoro

Industrial environments that rely on Mass Customization are characterized by a high variety of product models and reduced batch sizes, demanding prompt adaptation of resources to a new product model. In such environments, it is important to schedule tasks that require manual procedures with different levels of complexity and repetitiveness. In a garment mass customization scenario, task scheduling needs to take into consideration the dependency of the tasks, meaning that in order to initiate a certain task, materials from previous tasks may be required. Therefore, in order to carry out a smooth scheduling process within a garment mass customization factory, not only the tasks but also the transportation of materials to perform these tasks needs to be scheduled, to static and mobile robots respectively. This paper describes the above problem related to the logistics of an automated garment factory in a mass customization scenario.

"Heterogeneous Multi-Robot Task Scheduling Heuristics for Garment Mass Customization," 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), Mexico City, Mexico.

Ranulfo Bezerra; Kazunori Ohno; Shotaro Kojima; Hanif A. Aryadi; Kenta Gunji; Masao Kuwahara; Yoshito Okada; Masashi Konyo; Satoshi Tadokoro

Industrial environments that rely on Mass Customization are characterized by a high variety of product models and reduced batch sizes, demanding prompt adaptation of resources to a new product model. In such environments, it is important to schedule tasks that require manual procedures with different levels of complexity and repetitiveness. In a garment mass customization scenario, task scheduling needs to take into consideration the dependency of the tasks, meaning that in order to initiate a certain task, materials from previous tasks may be required. In order to carry out a smooth scheduling process within a garment mass customization factory, not only the tasks but also the transportation of materials to perform these tasks needs to be scheduled, to static and mobile robots respectively. To tackle this problem, we propose a set of heuristics that are able to schedule both the task work and the transportation of materials. We analyze these heuristics theoretically with respect to computational complexity. Subsequently, the performance of each algorithm is evaluated using a synthetic test set. The comparative analysis shows that the extended algorithms have close results among themselves, whereas among the heuristics, Minimum Transportation Cost (MTC) outperforms all of the other algorithms. Moreover, the combination of Predict Earliest Finish Time (PEFT) and MTC is more efficient than other algorithm combinations.

DiffSRL: Learning Dynamical State Representation for Deformable Object Manipulation with Differentiable Simulation, IEEE Robotics and Automation Letters 7, no. 4 (2022): 9533-9540.

Sirui Chen, Yunhao Liu, Shang Wen Yao, Jialong Li, Tingxiang Fan, Jia Pan

Dynamic state representation learning is essential for robot learning. A good latent space that accurately describes dynamic transitions and constraints can significantly accelerate reinforcement learning training and reduce motion planning complexity. However, deformable objects have very complicated dynamics and are hard to represent directly with a neural network without any prior physics information. We propose DiffSRL, an end-to-end dynamic state representation learning pipeline that uses a differentiable physics engine to teach a neural network how to represent high-dimensional point cloud data collected from deformable objects. Our specially designed loss function guides the neural network to be aware of physical constraints and feasibility. We benchmark the performance of our method, as well as other state representation algorithms, on multiple downstream tasks in PlasticineLab. Our model demonstrates superior performance on most tasks. We also demonstrate our model's performance in a real hardware setting with two manipulation tasks on a UR-5 robot arm.
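
As a rough illustration of the idea (not the DiffSRL implementation; `sim_step` below is a placeholder for the differentiable physics engine, and all dimensions are assumptions):

```python
# Minimal sketch: the encoder/decoder are trained so that decoded latent states both
# reconstruct the observed point cloud and stay consistent with simulated dynamics.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(3000, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3000))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def sim_step(state, action):            # placeholder for a differentiable physics engine
    return state + 0.01 * action.repeat(1, 1000)

state = torch.randn(8, 3000)            # flattened point cloud (1000 points x 3)
action = torch.randn(8, 3)
next_state = sim_step(state, action).detach()     # reference rollout

z = encoder(state)
recon_loss = (decoder(z) - state).pow(2).mean()                         # reconstruction
dyn_loss = (sim_step(decoder(z), action) - next_state).pow(2).mean()    # dynamics consistency
(recon_loss + dyn_loss).backward()
opt.step()
```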

Stability and Stabilization of Periodic Piecewise Positive Systems: A Time Segmentation Approach, Asian Journal of Control, 1-18, 2022

B. Zhu, J. Lam, X. Song, H. Lin, J.Y.K. Chan, K.W. Kwok

This paper is concerned with the stability analysis and stabilization of periodic piecewise positive systems. By constructing a time-scheduled copositive Lyapunov function with a time segmentation approach, an equivalent stability condition, determined via linear programming, is established for periodic piecewise positive systems. Based on the asymptotic stability condition, the spectral radius characterization of the state transition matrix is proposed. The relation between the spectral radius of the state transition matrix and the convergence rate of the system is also revealed. An iterative algorithm is developed to stabilize the system by decreasing the spectral radius of the state transition matrix. Finally, numerical examples are given to illustrate the results.
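
For context, a generic statement of the linear copositive Lyapunov condition that this kind of analysis builds on (notation assumed; the paper's time-segmented conditions and the resulting linear program should be taken from the original):

```latex
% Positive subsystem \dot{x} = A_i x active on its interval; V(t,x) = v(t)^\top x with v(t) \succ 0.
\dot V(t,x) \;=\; \bigl(\dot v(t) + A_i^{\top} v(t)\bigr)^{\top} x \;<\; 0
\quad \text{for all } x \succeq 0,\; x \neq 0
\quad \Longleftarrow \quad
\dot v(t) + A_i^{\top} v(t) \;\prec\; 0 .
```

Choosing v(t) piecewise linear over the time segments turns such elementwise conditions into constraints checkable by linear programming.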
