key: cord-0945111-nkd122ss authors: nan title: GuLiM: A Hybrid Motion Mapping Technique for Teleoperation of Medical Assistive Robot in Combating the COVID-19 Pandemic date: 2022-01-26 journal: IEEE Trans Med Robot Bionics DOI: 10.1109/tmrb.2022.3146621 sha: df6ed5e86bc938fcc501dba19febc5e57941bec9 doc_id: 945111 cord_uid: nkd122ss Driven by the demand to largely mitigate nosocomial infection problems in combating the coronavirus disease 2019 (COVID-19) pandemic, technologies for the teleoperation of medical assistive robots are an emerging trend. However, traditional teleoperation of robots requires professional training and sophisticated manipulation, imposing a burden on healthcare workers, taking a long time to deploy, and conflicting with the urgent demand for a timely and effective response to the pandemic. This paper presents a novel motion synchronization method enabled by a hybrid mapping technique of hand gesture and upper-limb motion (GuLiM). It tackles a limitation of existing motion mapping schemes, which have to be customized according to the kinematic configuration of each operator. The operator can wake the robot from any initial pose state without an extra calibration procedure, thereby reducing operational complexity and avoiding unnecessary pre-training, making it easy for healthcare workers to master teleoperation skills. Experiments with robotic grasping tasks verify that the proposed GuLiM method outperforms the traditional direct mapping method. Moreover, a field investigation of GuLiM illustrates its potential for the teleoperation of medical assistive robots in the isolation ward as the Second Body of healthcare workers for telehealthcare, avoiding exposure of healthcare workers to COVID-19. Combating the coronavirus disease 2019 (COVID-19) pandemic, healthcare workers in negative pressure isolation wards are exposed to confirmed COVID-19 patients, with a high risk of nosocomial infection [1], [2].
The effective and efficient deployment of medical assistive robots in the ward could largely mitigate nosocomial infection problems. These robots have the potential to be applied for disinfection, delivering medications and food, measuring vital signs, social interactions, and assisting border controls [3]. With the emergence of the fourth revolution of healthcare technologies (i.e., Healthcare 4.0), the teleoperation of medical assistive robots offers great opportunities for medical experts to perform professional healthcare services remotely in the context of COVID-19 [4], [5]. It aims at keeping healthcare workers safe by breaking the chains of infection [6], [7] through providing remote healthcare services without doctor-patient physical contact [8], [9]. Human-Cyber-Physical Systems (HCPS) in the context of new-generation healthcare provide in-depth integration of the operator and the robot, promoting the development of intelligent telerobotic systems [10], [11]. Well-developed isolation-care telerobotic systems may speed up the treatment process, reduce the risk of nosocomial infection, and free up medical staff for other worthwhile tasks, providing an optimal strategy to balance escalating healthcare demands against limited human resources in combating the COVID-19 pandemic [12]. Medical assistive robots for isolation care with various customizable features are able to address the demands of a variety of application scenarios [13], [14]. Several medical assistive robots that can operate in a teleoperation modality aim to achieve dexterous operations for the treatment of infectious diseases, such as patient care robots, logistics robots, and disinfecting robots [15]-[17].
However, for non-expert operators, i.e., healthcare workers, it is time-consuming to completely master the sophisticated regulations and control skills of a teleoperated robot using a traditional teleoperation solution [18]. In this regard, there is an urgent need for a robotic system that healthcare workers can operate by subconsciously drawing on their prior knowledge, enabling a timely response to the pandemic. Therefore, an easy-to-use teleoperation solution demanding little cognitive effort is important to realize effective, efficient, and satisfying robot-assisted telehealthcare in the isolation ward [19], [20]. This paper provides a novel motion mapping solution for teleoperation of a medical assistive robot that remotely delivers healthcare services in an isolation ward, aiming to avoid professional pre-training and optimize behavioral adaptation for healthcare workers. Fig. 1 gives an overall illustration of the proposed isolation-care telerobotic system deployed in the isolation ward. Based on the proposed hybrid mapping method using hand gesture and upper-limb motion (GuLiM) with an incremental pose mapping strategy, motion transfer between the caregiver and the medical assistive robot is realized. The operator's hand gesture and upper-limb motion data are captured by a wearable inertial motion capture device as the input for teleoperation. The proposed GuLiM method is deployed on a self-designed unilateral telerobotic system built with the dual-arm collaborative robot YuMi [21] for performance verification. The proposed teleoperation solution provides a user-friendly way for non-expert operators to master robot operation skills. Through this advantage, clinical verifications of a medical assistive robot deploying the GuLiM mapping technique in an isolation ward have been conducted during the COVID-19 pandemic.
This work tackles three common limitations of existing human-robot motion mapping schemes: 1) human operators have to perform a calibration procedure before operation; 2) the motion mapping scheme has to be customized according to the kinematic configuration of the human operator; and 3) a heavy pre-training workload is required for an inexperienced operator. (More videos of the experimental part of this work can be found at https://fsie-robotics.com/GuLiM-motion-transfer.) Based on the proposed GuLiM method, operators are able to control the robot with their natural motions and less cognitive effort, and no additional calibration procedure is needed. In addition, the GuLiM method enables operators to adjust the initial pose state flexibly. The maximum range of the robot arm can therefore be reached after sufficient adjustments, so the workspace of the robot is effectively utilized during teleoperation. The remainder of this paper is organized as follows. Section II presents related work and compares existing approaches with the approach proposed in this work. Section III explains the architecture of the proposed system and details the motion transfer flow between operator and robot. Section IV introduces the proposed incremental pose mapping strategy and describes the GuLiM hybrid motion mapping method. Section V describes the experimental setup and the evaluation rules, then introduces the application and verification of the teleoperated robot system in hospital scenarios. Section VI concludes this study, discusses its contributions to this area, and presents the current limitations of this work and future research directions. Using robotic teleoperation and telemedicine technology to help combat the COVID-19 outbreak has gained great attention [3]. On the one hand, robots and automation devices cannot contract the virus.
On the other hand, it is much easier to keep robots clean (e.g., wiping them down with chemicals, or having them autonomously disinfect themselves with ultraviolet light) than to apply comparable measures to humans [4]. Medical assistive robots have been introduced inside hospitals to assist healthcare operators in various activities. The Ginger robot, for example, developed by CloudMinds, has an infrared thermometric system that can monitor vital signs of patients such as body temperature [22]. In China, a humanoid robot has been applied inside a real COVID-19 treatment center to deliver food to non-critical patients during the pandemic [23]. An end-to-end mobile manipulation platform, the Moxi robot, has been tested in a hospital in Texas, picking supplies out of supply closets and delivering them to patient rooms, completely autonomously [24]. TRINA, a telenursing robot from Duke University, augments direct control with a degree of autonomy to help non-technology experts operate complex robots [18]. These medical assistive robots are able to assist healthcare workers inside the hospital environment to some extent. However, most of them focus on simple functions, such as monitoring body temperature, logistics, etc. They are not suitable for more flexible and challenging operations in practical care scenarios, such as flexible delivery and operating medical devices [22]. Therefore, this paper intends to provide a robotic telehealthcare solution for healthcare workers to perform a variety of routine tasks in an isolation ward, based on the proposed hybrid motion mapping method. The ultimate goal of ideal teleoperation is to provide the operator with an experience that is immersive and naturalistic [25]. For example, the teleoperated robot could be a true avatar of the operator, providing high-fidelity motion mapping from the operator's body.
However, controlling a robot is still a challenging task for healthcare workers, especially controlling a high degree-of-freedom (DoF) manipulator [26]. In order to improve the user experience of teleoperation systems, significant efforts have been made in motion mapping [27], [28] and kinematic tracking between humans and robots [29]. Various interactive devices, such as exoskeleton devices [30]-[32] and optical tracking devices [33], [34], have been developed. Nevertheless, many master devices and tracking systems for teleoperation lack feasibility in actual telehealthcare scenarios [35], [36]. The inconvenient pre-training of the operator and the high customization cost of a mapping system for teleoperation are the main concerns [37]. Several researchers have made efforts toward convenient and natural teleoperation with self-customized slave devices [38], [39]. Handa et al. developed a vision-based unilateral teleoperation system, DexPilot, to completely control a robotic hand-arm system by merely observing the bare human hand [25]. Meeker et al. proposed a continuous teleoperation subspace method to conduct adaptive control of a robot hand using the pose of the teleoperator's hand as input [40]. Yu et al. designed adaptive fuzzy control methods and deployed them on the Baxter robot to achieve trajectory tracking tasks with small tracking errors [41], [42]. The above works make preliminary progress on motion mapping between the human hand pose and the slave robot. However, these techniques cannot avoid inconvenient pre-training, especially in the practice of teleoperating medical assistive robots in the isolation ward for healthcare workers [43], [44]. Compared with previous research, the GuLiM method proposed in this work transfers the operator's hand gesture and upper-limb motion to the slave robot with a hybrid mapping strategy.
Based on this hybrid motion mapping method, healthcare workers can master the teleoperation skill effortlessly and control the medical assistive robot in an isolation ward for a variety of care tasks, such as delivering food and medicine, operating equipment, and disinfecting. As shown in Fig. 2, a self-designed medical assistive robot is used as the verification platform in this work, comprising an omnidirectional mobile chassis and the dual-arm robot YuMi mounted above the chassis (more robot platform details are given in the Supplementary Materials) [15]. Fig. 3(a) depicts the architecture of the studied telerobotic system. It comprises a wearable device for human motion capture and YuMi. The primary efforts in this work are devoted to the telecommunication link from the human operator to the medical assistive robot, i.e., unilateral teleoperation. Thus, this paper assumes that the human operator is able to access feedback on the state of the robot in the remote environment. In a real application, for example, the robot may integrate on-body cameras to capture environmental information and communicate it back to the human operator. As shown in Fig. 3(b), the upper-limb motion and the hand gesture of the operator are obtained from the wearable device. Based on the proposed interaction logic of the GuLiM method, the upper-limb motions and hand gestures of the operator are transferred to the robot to control the robot arms and grippers. The wearable device is a motion capture system, Perception Neuron 2.0 (PN2) [45], which consists of 32 Neurons (each Neuron is an inertial measurement unit composed of a gyroscope, an accelerometer, and a magnetometer). Straps secure the Neuron sensors to the body of the operator, and the motion data from the Neuron sensors are collected and sent to the computer through the Hub [46].
The supporting software application, Axis Neuron, receives and processes the motion data, streaming real-time motion data in the BioVision Hierarchy (BVH) data format over the TCP/IP protocol [47]. The NeuronDataReader plugin (an API library provided by Noitom Technology Co., Ltd.) is used to receive and decode the BVH data stream into the standard coordinate frame, which can then be used by the transform library (tf) in the Robot Operating System (ROS). The tf library maintains the relationships between coordinate frames in a tree structure buffered in time. Two ROS nodes are set up to receive and process the motion data of the upper limbs and hands, respectively. The hand poses and hand gestures are obtained through coordinate transforms between the corresponding coordinate frames at an arbitrary time slot. Taking the right hand as an example, a unified coordinate system is established in order to transfer the motion from the operator to the robot. As shown in Fig. 4, the direction of the local reference frame {F_hip} decoded from the BVH data stream differs from that of the local reference frame of the robot, {F_base}. The global frame {F_global} is built with the same direction as {F_base}. For the robot, the coordinate frame of the end effector is defined as {F_gripper}. Accordingly, the end frame of the operator is the hand frame {F_rhand}. Motion transfer is to determine a sequence of robot gripper poses based on the human hand poses. For real-time transformation of the operator's motion data, the Externally Guided Motion (EGM) interface provided by ABB is selected, which can follow external motion data at frequencies of up to 250 Hz. YuMi only supports the EGM interface with joint angle data as input. The inverse kinematics (IK) solution is therefore computed on the host computer, and the calculated joint configurations are then sent to YuMi through the UdpUc interface (User Datagram Protocol Unicast Communication).
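The frame bookkeeping that tf performs can be illustrated with a minimal, self-contained sketch (plain Python with homogeneous matrices, not the actual ROS tf library): each frame stores its pose relative to a parent, and a lookup composes transforms along the tree. The frame names mirror the paper's, but the offsets are made-up values for illustration.

```python
import numpy as np

def make_tf(t, R=None):
    """Build a 4x4 homogeneous transform from translation t and optional rotation R."""
    T = np.eye(4)
    if R is not None:
        T[:3, :3] = R
    T[:3, 3] = t
    return T

class FrameTree:
    """Toy stand-in for ROS tf: a tree of frames, lookups compose transforms."""

    def __init__(self, root):
        self.parent = {root: None}
        self.tf_to_parent = {root: np.eye(4)}

    def add(self, frame, parent, T_parent_frame):
        self.parent[frame] = parent
        self.tf_to_parent[frame] = T_parent_frame

    def _to_root(self, frame):
        # Compose child-to-parent transforms up to the root frame.
        T = np.eye(4)
        while self.parent[frame] is not None:
            T = self.tf_to_parent[frame] @ T
            frame = self.parent[frame]
        return T

    def lookup(self, target, source):
        """Pose of the source frame expressed in the target frame."""
        return np.linalg.inv(self._to_root(target)) @ self._to_root(source)

# Hypothetical layout mirroring the paper's frame names (made-up offsets).
tree = FrameTree("F_global")
tree.add("F_hip", "F_global", make_tf((0.0, 0.0, 1.0)))
tree.add("F_rhand", "F_hip", make_tf((0.3, -0.2, 0.4)))
hand_in_global = tree.lookup("F_global", "F_rhand")[:3, 3]
```

In ROS, the equivalent query would be a `lookupTransform` between two frame names at a given time stamp; the tree walk above is the same composition without the time buffering.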
The open-source kinematics solver trac_ik is used to obtain the desired joint configurations [48]. Internally, trac_ik runs a Newton-based convergence algorithm and a Sequential Quadratic Programming (SQP) nonlinear optimization algorithm concurrently, returning the IK solution as soon as either algorithm converges. With its fast and reliable performance, trac_ik is deployed as the IK solver for human-robot motion transfer. In terms of robot kinematic modeling, the Denavit-Hartenberg (D-H) model of the YuMi robot in the current literature only covers the kinematics of a single arm [49]. In this paper, the dual-arm standard D-H model of the YuMi robot is established, including the kinematic configurations from the local base coordinate frame {F_base} to the first link frame of each arm. The established standard D-H model is verified in MATLAB, and the kinematics calculation for motion transfer is based on this model. More details of building the standard D-H model are explained in the Appendix. In addition to the motion control of the robot arm described above, control of the robot gripper is also indispensable for transferring the grasping function. The accompanying two-finger gripper of YuMi, the Smart Gripper, has only one DoF. The gripper has one basic servo module, communicating with the controller of the YuMi robot over an EtherNet/IP fieldbus. The range of the gripping force is 0-20 N, with a force control accuracy of ±3 N. In this research, the gripping force is adjusted online during teleoperation and set to particular values according to the grasping task scenario. The grab/release state of the operator's hand is used to close/open the gripper. Furthermore, the action logic of teleoperation under the proposed GuLiM method is also driven by hand gestures.
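trac_ik itself is not reproduced here, but the Newton-style iteration it relies on can be sketched for a hypothetical planar 2-link arm. The link lengths, damping factor, and step clipping below are illustrative assumptions, not parameters of the YuMi model.

```python
import numpy as np

L1, L2 = 0.30, 0.25  # assumed link lengths (m) for a hypothetical planar arm

def fk(q):
    """Forward kinematics: joint angles (q1, q2) -> tip position (x, y)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_newton(target, q0, tol=1e-8, max_iter=200, damping=1e-6):
    """Damped Newton iteration: drive fk(q) toward target, return joint angles."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        # Damped least-squares step; damping guards against near-singular J.
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        q += np.clip(dq, -0.5, 0.5)  # limit the step size for stability
    return q

target = fk(np.array([0.5, 0.8]))   # a pose known to be reachable
q_sol = ik_newton(target, q0=[0.4, 0.7])
```

trac_ik races this kind of iteration against an SQP solver and returns whichever converges first; a full solver also handles orientation, joint limits, and 7-DoF redundancy, which this 2-DoF toy omits.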
Typically, motion mapping schemes with customized kinematic configurations have been used to regulate the interaction between humans and robotic manipulators. Most of them mainly consider rigid motion mapping of the upper limbs without introducing the hand gestures of the operator for the grasping task. Based on the proposed interaction scheme of GuLiM hybrid motion mapping, healthcare workers do not have to perform extra calibration procedures for human-robot coordinate mapping and workspace correspondence. Hand gesture recognition is used not only for robot gripper control but also as the indicating signal for incremental pose mapping. Considering the inconsistent workspace and kinematic configuration between the robot and operator, the hybrid motion mapping scheme guided by hand gestures is designed around the relative pose between the robot and human. The incremental pose mapping strategy thus gives healthcare workers a natural experience of human-robot motion transfer. With the predefined hand gestures, healthcare workers can easily decide the sequential control of the robot arms and grippers. Notations: Matrices and vectors are denoted by capital and lowercase upright bold letters, e.g., A and a, respectively. ⊗ is the quaternion multiplication operator. (·)^{-1} is the inverse operator. The letter F in curly braces indicates a coordinate frame, e.g., {F_a}. Variables that are not bold represent scalars unless otherwise specified. As Fig. 5 shows, several gesture rules are predefined for enabling/disabling the arm control and closing/opening the gripper.
In order to better adapt to the general motion behaviors of humans when grabbing objects, two rules are set: 1) the bending state of the index finger is used to control the gripper, judged by the orientation (using the z component of the quaternion q_ind = (x_ind, y_ind, z_ind, w_ind) as the indicator) of the index fingertip frame relative to {F_rhand}; 2) the bending state of the middle finger is used to enable/disable the robot arm control, judged by the orientation (using the z component of the quaternion q_mid = (x_mid, y_mid, z_mid, w_mid) as the indicator) of the middle fingertip frame relative to {F_rhand}. The threshold of z_ind for gripper control is set as T_z_ind = 0.2, and the threshold of z_mid for arm control is set as T_z_mid = 0.5. These gesture definitions fully consider robustness and ergonomics for the motion transfer between the operator and robot. Moreover, the default axis directions of the BVH bone local reference system are inconsistent with those of the traditional Cartesian coordinate system. {F_hip} is the local reference frame of the BVH bone coordinate system, in which the Y axis is the vertical axis while the Z and X axes lie on the horizontal plane. To resolve this inconsistency in the mapping process, the coordinate data (Z, X, Y) in the BVH bone local reference system are used to represent (X, Y, Z) in the robot coordinate system, respectively. The correspondence task for the motion transfer between the operator and robot can be stated as: (1) observe the behavior of the operator model, which from a given starting state evolves through a sequence of sub-goals; (2) find and execute a sequence of robot actions using the embodiment (possibly dissimilar to the operator model), which from a corresponding starting state leads through corresponding sub-goals to corresponding target states.
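The two gesture rules and the axis remap can be condensed into a short sketch. The thresholds T_z_ind = 0.2 and T_z_mid = 0.5 come from the text; the sign convention, i.e., that a bent finger drives the z component above its threshold, is an assumption for illustration.

```python
T_Z_IND = 0.2   # index-finger threshold (gripper control), from the text
T_Z_MID = 0.5   # middle-finger threshold (arm enable), from the text

def gripper_closed(q_ind):
    """q_ind = (x, y, z, w): index fingertip orientation relative to {F_rhand}.
    The sign convention (bent finger -> z above threshold) is an assumption."""
    return q_ind[2] > T_Z_IND

def arm_enabled(q_mid):
    """q_mid = (x, y, z, w): middle fingertip orientation relative to {F_rhand}."""
    return q_mid[2] > T_Z_MID

def bvh_to_robot(p):
    """Remap BVH (X, Y, Z), where Y is vertical, onto robot axes where Z is
    vertical: robot (X, Y, Z) <- BVH (Z, X, Y)."""
    x_bvh, y_bvh, z_bvh = p
    return (z_bvh, x_bvh, y_bvh)
```

Keeping the two gesture channels independent, as here, is what lets the gripper be operated while the arm clutch is disengaged.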
However, the kinematic model of a robot differs from that of a human. Hence, motion information from the operator must be adaptively mapped to the movement space of the robot. As shown in Fig. 6, the pose data sequence of the human hand, pose_h(i), is commonly used to decide the desired robot pose, pose_r(i), defined at time slot i as pose_h(i) = (p_h(i), q_h(i)) and pose_r(i) = (p_r(i), q_r(i)), where p_r(i) and p_h(i) refer to the position vectors of the robot gripper and human hand, respectively, while q_r(i) and q_h(i) refer to the orientations, expressed as unit quaternions, of the robot gripper and human hand, respectively. In this research, an incremental pose mapping strategy is proposed to transfer human motion to the robot. When the operator starts the motion transfer at time slot i, the relative position p_δ(i) and relative orientation q_δ(i) between the operator and robot are calculated as p_δ(i) = p_r(i) − p_h(i) and q_δ(i) = q_r(i) ⊗ q_h(i)^{−1}, where the inverse q_h(i)^{−1} of the unit quaternion q_h(i) can be found by simply negating its (x, y, z) components. After obtaining the relative pose at time slot i, including the position component p_δ(i) and the orientation component q_δ(i), the pose data for robot control at time slot i + n are determined in the proposed incremental pose mapping strategy as p_r(i + n) = p_h(i + n) + p_δ(i) and q_r(i + n) = q_δ(i) ⊗ q_h(i + n), where p_r(i + n) and q_r(i + n) are the position and orientation of the robot gripper at time slot i + n, respectively, and p_h(i + n) and q_h(i + n) are the position and orientation of the operator's hand at time slot i + n, respectively. Based on the proposed incremental pose mapping strategy, the operator can begin transferring motion to the robot at any time from any initial pose state. Moreover, the operator does not need to adjust the starting pose to match the initial pose of the robot. During teleoperation, the workspaces of the operator and robot are inconsistent due to their different structures.
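The incremental pose mapping strategy above amounts to a few lines of quaternion algebra; a minimal sketch follows, with quaternions stored as (x, y, z, w) to match the text's component order. At the moment the transfer starts (n = 0), the computed target coincides with the robot's current pose, which is why no calibration is needed.

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product (the ⊗ operator) for quaternions stored as (x, y, z, w)."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([aw * bx + ax * bw + ay * bz - az * by,
                     aw * by - ax * bz + ay * bw + az * bx,
                     aw * bz + ax * by - ay * bx + az * bw,
                     aw * bw - ax * bx - ay * by - az * bz])

def q_inv(q):
    """Inverse of a unit quaternion: negate the (x, y, z) components."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def relative_pose(p_r, q_r, p_h, q_h):
    """p_delta, q_delta captured at time slot i, when motion transfer starts."""
    return p_r - p_h, q_mul(q_r, q_inv(q_h))

def robot_target(p_h, q_h, p_delta, q_delta):
    """Robot gripper pose at time slot i + n from the hand pose and stored offset."""
    return p_h + p_delta, q_mul(q_delta, q_h)

# At n = 0 the target reproduces the robot's current pose (made-up example poses).
p_r0, q_r0 = np.array([0.4, 0.1, 0.3]), np.array([0.0, 0.0, 0.0, 1.0])
p_h0, q_h0 = np.array([0.1, 0.0, 1.2]), np.array([0.0, 0.0, 0.7071068, 0.7071068])
p_d, q_d = relative_pose(p_r0, q_r0, p_h0, q_h0)
p_t, q_t = robot_target(p_h0, q_h0, p_d, q_d)
```

Because only the offset captured at engagement matters, the operator's starting pose is arbitrary, and re-capturing the offset later implements the workspace re-adjustment described in the next section.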
Traditional mapping methods match the workspaces by tuning mapping scale parameters between different operators and robots, which brings an extra calibration procedure. A novel hybrid mapping method based on the hand gesture and upper-limb motion of the operator is proposed here for flexible mapping. Algorithm 1 shows the detailed procedures of the proposed hybrid motion mapping method, and a clear introduction can be seen in the supplementary material (Video S1). Based on the predefined gesture rules in Section III-A, the operator is free to adjust to a comfortable pose before starting a GuLiM control cycle. As shown in Algorithm 1 (i), the current states of the operator and robot, including the hand gesture, upper-limb motion, and robot pose, are obtained at the beginning of a control cycle. The relative pose is obtained when the middle-finger bend signal is detected to enable the robot arm motion, as seen in Algorithm 1 (ii). When the operator releases the middle finger, the robot arm motion is disabled, and the operator can then freely adjust to a comfortable pose. When the middle finger bends again to continue the motion transfer, an updated relative pose is calculated. This is applicable to scenarios in which the motion range of the operator has reached its maximum limit but the robot has not reached the target point, or the current posture is burdensome for the operator. As shown in Algorithm 1 (iii), the target pose of the robot is generated based on the proposed incremental pose mapping strategy with the obtained relative pose. Then trac_ik is deployed to obtain the target joint configuration for robot arm control from the target pose of the robot, as seen in Algorithm 1 (iv).
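The enable/disable (clutching) logic of Algorithm 1 can be sketched as a small state machine. For brevity this sketch handles position only, and the gesture and pose inputs are hypothetical values, not data from the actual system.

```python
import numpy as np

class GulimClutch:
    """Position-only sketch of Algorithm 1's clutching (enable/disable) logic."""

    def __init__(self, robot_pos):
        self.robot_pos = np.asarray(robot_pos, dtype=float)  # last commanded position
        self.offset = None      # p_delta, re-captured each time the clutch engages
        self.engaged = False

    def step(self, middle_bent, hand_pos):
        hand_pos = np.asarray(hand_pos, dtype=float)
        if middle_bent and not self.engaged:
            # (ii) middle finger bends: capture the relative pose, enable the arm.
            self.offset = self.robot_pos - hand_pos
            self.engaged = True
        elif not middle_bent:
            # Middle finger released: freeze the robot; the operator may re-adjust.
            self.engaged = False
        if self.engaged:
            # (iii) incremental mapping: robot follows the hand plus the offset.
            self.robot_pos = hand_pos + self.offset
        return self.robot_pos

clutch = GulimClutch([0.4, 0.0, 0.3])
clutch.step(True, [0.1, 0.0, 1.0])         # engage: target equals robot's own pose
p1 = clutch.step(True, [0.2, 0.0, 1.0])    # hand moves +0.1 m in x, robot follows
clutch.step(False, [0.0, 0.0, 1.0])        # release: hand re-adjusts, robot frozen
p2 = clutch.step(True, [0.1, 0.0, 1.0])    # re-engage from the new hand pose
```

The final step shows the key property: re-engaging from a different hand pose leaves the robot where it was, so repeated clutch cycles let the operator cover the robot's full workspace from a limited arm range.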
Once the robot reaches the target point with an appropriate grasping pose, the operator bends the index finger to make the robot gripper grasp. As shown in Algorithm 1 (v), the gripper control is independent of the arm control and runs in a parallel procedure. To validate the proposed GuLiM method, comparative experiments were carried out, choosing the Direct Mapping Method (DMM), which maps the pose of the human hand directly to the robot gripper, as the baseline. Experiments were conducted with an experienced operator and an inexperienced operator, respectively. The grasping and placing tasks selected for the experiments were based on basic human operation abilities, which can be generalized to common operations in medical scenarios. The results and the corresponding analyses are reported in this section. Evaluation metrics based on placement precision and task time cost are proposed for assessment. Full details of the experiments are given in the supplementary material (Video S2). As shown in Fig. 7, a plane in front of the YuMi robot was partitioned into four regions, of which region A was the starting position while the other three regions were set as target positions. The operator was requested to operate the robot to pick up a block from the starting position and place it at the target position according to the specific task. The real-time motion data of the operator and robot were recorded during the experiments. The wearable motion capture suit PN2 was worn by the operator to capture and transmit the motion data in real time. On the robot side, the YuMi robot was turned on and configured to communicate with ROS. Then, the communication between ROS and the motion capture system was established by starting up the relevant ROS nodes so that the robot could receive real-time motion data from the operator. Note that, for both groups of tasks, the gripping force of the Smart Gripper for grasping the block was set to 20 N.
In order to improve the repeatability of the experiments, the code for robot control is open-sourced on GitHub (more details of the source code are given in the Supplementary Materials). Choosing DMM as the baseline for comparison, two groups of grasp tasks were carried out to validate the practical performance of the proposed GuLiM method. As shown in Fig. 7(b), the first group of tasks was grabbing a block and placing it in a specific region, used to assess the performance of position transfer. The second group of tasks [see Fig. 7(c)] was grabbing a block and placing it at a specific angle, used to assess the performance of orientation transfer. Since the second group mainly focused on orientation transfer, region B was chosen as the only target region. All grasp tasks were repeated five times to ensure the reliability of the data. A total of 120 groups of experimental tasks were conducted, and more than 300 minutes of effective motion data were obtained. When using the DMM method, the hand pose data are directly mapped to the robot gripper; there was no adjustment throughout the task, and the mapping configuration was unchanged. Compared with DMM, GuLiM is more flexible. The operator started by bending the middle finger to enable the motion transfer between human and robot. The operator can unbend the middle finger to disable the motion transfer for adjustment when the current pose leaves no workspace allowance for hand motion. The overall process of conducting a grasping task using GuLiM is displayed in Fig. 8. An adjustment procedure is shown from the time slot of 22 s to 25 s in the figure. The operator unbent the middle finger to disable the motion transfer and withdrew the hand to an appropriate posture. Afterward, the middle finger was bent again to re-enable the motion transfer and deliver the block to the target. Fig. 9 shows a diagram of the signal recording corresponding to the GuLiM process in Fig. 8. The bending signals of the index finger and the middle finger are shown at the top of Fig. 9. The bending signal of the index finger was used to control the robot gripper, while the bending signal of the middle finger was used to carry out the human-robot motion transfer. These two signals were independent of each other, and the thresholds for judging the bending states are also shown. The positions of the human hand and the robot gripper were also recorded, as shown in the lower part of the figure. Before time slot t_1 in Fig. 9, the robot was motionless due to the suspension of motion transfer. During the time interval [t_1, t_2], the middle finger was bent, and the motion data were transferred to the robot from the operator with the incremental pose mapping strategy. At time slot t_2, the operator's hand reached the limit of the motion space. The middle finger was then released, disabling the motion transfer during the time interval [t_2, t_3]. The operator could move the hand freely for adjustment without altering the robot's state until time slot t_3. After finishing the adjustment at time slot t_3, the operator enabled the motion transfer again to teleoperate the robot. The grasp performance was evaluated with two metrics: the placement precision S_mp (m = 1, 2), where m denotes the m-th group of tasks, and the time cost of the tasks. As demonstrated in Fig. 7, in the first group, the target regions (B, C, and D) were marked with concentric annuli as scoring rings for assessing placement precision. A larger diameter corresponds to a lower score, with scores ranging from 1 to 6. In the second group, the operator placed the block into the target region at a specified angle θ ∈ {30°, 60°, 90°}.
Six sector areas were divided based on the angle deviation, with the score S^dev_2p ranging from 1 to 6. The scoring areas were symmetrically distributed on the left and right sides of the datum line, and an area with a larger deviation leads to a lower score. Besides, the position precision of the placement was also considered, with four scoring levels ranging from 1 to 4. This part of the score was based on the ratio (α = A_overlap/A_square) of the overlapping area A_overlap between the block and the baseline square to the area of the baseline square, A_square. The final score for the placement precision of the second group, S_2p, was calculated with the scoring rules given in (12). C. Experiment Evaluation 1) Evaluation of the Position Transfer: S_1p scores were recorded to assess the position transfer for the grasp tasks from region A to B, from region A to C, and from region A to D. The experimental results are shown in Fig. 10(a). P1 and P2 denote two operators, of whom P1 was an experienced operator while P2 was a novice to this teleoperation system. As seen from the results, the proposed GuLiM method outperformed DMM in terms of the precision score for position transfer. The GuLiM method also has a smaller standard deviation of scores, which shows the stability of the method. Especially for the grasp tasks from region A to D, GuLiM outperformed DMM the most for both operators on placement precision. The reason is that the operating distance between region A and region D is the largest among the three routes. When using DMM, this could push the operator's hand to the limit of its reachable space, making it difficult for the operators to place the block at the proper position. Fig. 11 summarizes the improvement percentages in precision and time cost of GuLiM compared with DMM.
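One way such a composite placement score could be computed is sketched below; the sector width, the α band boundaries, and the summation of the two parts are illustrative assumptions, not the paper's actual rule (12).

```python
def angle_score(dev_deg, sector_width=5.0):
    """S_dev_2p: six symmetric sectors around the datum line; larger deviation,
    lower score. The 5-degree sector width is an assumed value."""
    sector = int(abs(dev_deg) // sector_width)   # sector 0 hugs the datum line
    return max(6 - sector, 1)

def overlap_score(alpha):
    """Position part: four levels from alpha = A_overlap / A_square.
    The 0.25-wide bands are assumed boundaries."""
    return 1 + sum(alpha > b for b in (0.25, 0.5, 0.75))

def s2p(dev_deg, alpha):
    """Illustrative composite S_2p; summing the two parts is an assumption."""
    return angle_score(dev_deg) + overlap_score(alpha)
```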
Statistically, GuLiM surpassed DMM by 46.77% in placement precision on average, whereas it took 19.60% more time on average to accomplish the tasks. For the novice operator P2, GuLiM yielded a larger improvement in placement precision over DMM than it did for the experienced operator P1.

2) Evaluation of the Orientation Transfer: S_2p scores were calculated according to the scoring rules stated above for each specified placement angle θ (θ ∈ {30°, 60°, 90°}). As shown in Fig. 10(b), the GuLiM method outperformed DMM in all circumstances for orientation transfer. DMM has an advantage in time cost over GuLiM in simple tasks (such as the first group of experiments) because it involves no adjustment process. As seen on the right of Fig. 11, the S_2p score of GuLiM surpassed that of DMM by 69.27%, but GuLiM took 30.54% more time on average for orientation transfer. From the above experimental results, it is reasonable to conclude that the proposed GuLiM method is capable of accomplishing tasks requiring more complicated operations, and that its main advantage lies in placement precision. However, the average time of the GuLiM method is slightly higher than that of DMM. From Fig. 8 and Fig. 9, the reasons for the increased time cost of GuLiM can be analyzed as follows. During a grasping process using GuLiM, the hand gestures and the position adjustments are necessary extra steps compared with DMM, which induces additional time cost. On average, two adjustments were required in the position transfer assessment tasks (AB, AC, and AD), while three adjustments were required in the orientation transfer assessment tasks (30°, 60°, 90°). Therefore, the average time of the GuLiM method is slightly increased by the adjustment procedure.
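The improvement percentages quoted above follow the usual relative-change computation, sketched below; aggregating as a per-trial mean is an assumption, since the exact aggregation behind Fig. 11 is not spelled out in this excerpt.

```python
def improvement_pct(new, baseline):
    """Relative change of `new` over `baseline`, in percent.

    Positive values mean `new` is larger: better for precision scores,
    worse for time cost.
    """
    return (new - baseline) / baseline * 100.0

def mean_improvement_pct(new_values, baseline_values):
    """Average the per-trial relative changes (assumed aggregation)."""
    pcts = [improvement_pct(n, b) for n, b in zip(new_values, baseline_values)]
    return sum(pcts) / len(pcts)
```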
However, the GuLiM method does not require complicated setup and calibration procedures before an operational task, which saves pre-training time for healthcare workers and improves usability compared with DMM in practical hospital applications. As task complexity increases, the advantage of GuLiM in placement precision properly compensates for its extra time cost. Possible ways to decrease the time cost of the GuLiM method are: 1) simplifying the hand gesture definition in the adjustment process; and 2) enhancing the feedback on the robot side, such as adding haptic feedback, so as to improve the transparency between the operator and the robot and speed up the operator's response. In general, the experimental results demonstrate the practical performance of the proposed GuLiM method, which enables healthcare workers to quickly master the control skills of an isolation-ward teleoperated robot with much less professional pre-training.

The proposed GuLiM mapping method has been implemented on a medical assistive robot for remotely delivering healthcare services during the COVID-19 pandemic. Clinical verification and a field test of the proposed robotic teleoperation solution have been conducted in the emergency center's Intensive Care Unit (ICU) of the First Affiliated Hospital of Zhejiang University School of Medicine (FAHZU). A partial record of the clinical application can be seen in the last part of the supplementary material (Video S2). The medical assistive robot is driven by an omnidirectional mobile chassis. To control the mobile chassis, a series of interaction commands and strategies based on gait recognition of the operator was designed, using the lower-limb motion data captured by PN2 [50]. Fig. 12 shows two application scenarios, remote medicine delivery and operating medical devices, using the GuLiM mapping method.
In addition, to meet the needs of patient care in this work, special replaceable connectors for various tools (Doppler ultrasound equipment, handheld disinfection equipment, etc.) were designed and mounted on the fingers of the Smart Gripper. The gripping force for remote medicine delivery was set to 15 N. For the task of operating medical devices, the gripper does not need to perform a grasp action. The healthcare worker put on the motion capture device in the safe workspace outside the isolation ward. The dual-arm robotic manipulators of the medical assistive robot were remotely controlled by healthcare workers through the GuLiM method, and the camera mounted on the robot transmitted on-site vision to a monitor at the operator site. While operating the medical devices, a compliant stylus pen attached to the robot's end-effector provided passive interaction control for touching the screen. The purpose of this application is to offer a remote solution that allows healthcare workers to avoid entering an isolation ward when treating patients, thereby minimizing the risk of cross-contamination and nosocomial infection. The healthcare workers conducted the delivery and instrument operation tasks conveniently without any extra calibration procedure. From the perspective of the healthcare workers, the implementation of this isolation-care telerobotic system in the field test yielded an operation experience consistent with the previous results in Section V-B. Compared with traditional teleoperation solutions, the proposed GuLiM mapping method made it user-friendly for healthcare workers to master the robotic manipulator. The patients' demands for medicine delivery and instrument operation were also satisfied during the field tests owing to its convenience. It should be noted that the proposed GuLiM method implemented on the YuMi robot provides an early proof of concept.
Although this method is capable of relieving healthcare workers of the burden of sophisticated manipulation, making the medical assistive robot function as the Second Body of the healthcare workers remains challenging, and ongoing collaborative efforts should be devoted to such research areas. For example, the achievable tasks are limited by the low payload of the YuMi robot at this stage. It is feasible to deploy the proposed GuLiM method on other similar collaborative robots with higher payloads, such as the UR, Kinova Jaco, and KUKA iiwa.

In this paper, a hybrid motion mapping scheme (i.e., GuLiM) is presented, providing a user-friendly solution for healthcare workers to control the high-DoF manipulator of a medical assistive robot as their Second Body. Without an extra calibration procedure, the GuLiM method allows healthcare workers to master the operation of the medical assistive robot using their hand gestures and upper-limb motion with the incremental pose mapping strategy. Using the operator's motion data obtained from a wearable device, unilateral teleoperation of the dual-arm collaborative robot YuMi is realized. To validate the feasibility and effectiveness of the proposed method, two groups of grasping tasks were carried out with the GuLiM method and the DMM method. Comparative experiments showed that the GuLiM method surpassed DMM in placement precision for position transfer and orientation transfer, with improvements of 46.77% and 69.27%, respectively. Furthermore, the application and verification of a teleoperated medical assistive robot with the aid of the GuLiM technique in an isolation ward were conducted in FAHZU during the pandemic. Future teleoperated robotic systems in similar field applications may also benefit from this solution. The current research focuses on the motion mapping of kinematics based on a realized unilateral teleoperation framework with visual feedback; the haptic feedback from the slave robot has not been investigated.
In addition, the proposed mapping technique has been conveniently extended to the dual arms with the hybrid mapping strategy, but at this stage the two arms are controlled separately and concurrently, one by each hand. In future work, the dual-arm collaboration strategy during teleoperation will be further investigated to fulfill more flexible tasks. Furthermore, bilateral teleoperation with haptic feedback to the master site will be developed, and a new generation of medical assistive robots with higher payloads, such as Kinova Jaco2 robots, will be investigated and developed.

APPENDIX

The standard D-H model parameters as well as the lower and upper joint limits for a single arm of the YuMi robot are listed in Table AI. In the table, the parameters θ_j, α_j, d_j, a_j, l_j, and u_j (j = 1, 2, 3, . . . , 7) represent the joint angle, link twist angle, link offset, link length, lower joint limit, and upper joint limit, respectively. The homogeneous transformation matrix from link j-1 to link j is

j-1_j T = [ cθ_j   -sθ_j·cα_j    sθ_j·sα_j    a_j·cθ_j ]
          [ sθ_j    cθ_j·cα_j   -cθ_j·sα_j    a_j·sθ_j ]
          [ 0       sα_j          cα_j         d_j      ]
          [ 0       0             0            1        ]        (A1)

where sθ_j and cθ_j refer to sin θ_j and cos θ_j, respectively, while sα_j and cα_j refer to sin α_j and cos α_j, respectively. The above standard D-H model parameters define the forward kinematics from the first link to the seventh link of a single arm. The right arm and the left arm of the YuMi robot have the same kinematic configuration, but the base transformation matrices from the local base coordinate frame {F_base} to the first link frame of each arm are different; they are denoted as r0_base T and l0_base T, respectively. The forward kinematics of the YuMi robot can be calculated as follows:

T_r = r0_base T · 0_1 T · 1_2 T · 2_3 T · 3_4 T · 4_5 T · 5_6 T · 6_7 T    (A2)
T_l = l0_base T · 0_1 T · 1_2 T · 2_3 T · 3_4 T · 4_5 T · 5_6 T · 6_7 T    (A3)

The transformation matrices from the local base coordinate frame {F_base} to the base link frame of the right arm, r0_base T, and of the left arm, l0_base T, are defined below, respectively.
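The chained forward kinematics of (A2) and (A3) can be sketched directly from the standard D-H transform. The sketch below is generic: the actual YuMi parameters from Table AI and the base transforms r0_base T and l0_base T are not reproduced here, so the usage and test employ a toy two-link planar arm instead.

```python
import numpy as np

def dh_transform(theta, alpha, d, a):
    """Standard D-H homogeneous transform from link j-1 to link j."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params, base_T=np.eye(4)):
    """Chain the per-link transforms as in (A2)/(A3).

    dh_params: one (alpha_j, d_j, a_j) tuple per link; base_T plays the
    role of r0_base T or l0_base T (identity here for illustration).
    """
    T = base_T.copy()
    for theta, (alpha, d, a) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, alpha, d, a)
    return T
```

For a two-link planar arm with unit link lengths, `forward_kinematics([0.0, 0.0], [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)])` places the end-effector at x = 2 on the base x-axis, as expected from chaining two pure translations.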
REFERENCES

- COVID-19: Protecting health-care workers
- A novel reusable anti-COVID-19 transparent face respirator with optimized airflow
- Combating COVID-19: The role of robotics in managing public health and infectious diseases
- A robotic healthcare assistant for the COVID-19 emergency: A proposed solution for logistics and disinfection in a hospital environment
- Homecare robotic systems for healthcare 4.0: Visions and enabling technologies
- Robotics, smart wearable technologies, and autonomous intelligent systems for healthcare during the COVID-19 pandemic: An analysis of the state of the art and future vision
- Progress in robotics for combating infectious diseases
- Teleoperation of collaborative robot for remote dementia care in home environments
- Wearable sensing and telehealth technology with potential applications in the coronavirus pandemic
- Toward new-generation intelligent manufacturing
- Human-cyber-physical systems (HCPSs) in the context of new-generation intelligent manufacturing
- Review of robot skin: A potential enabler for safe collaboration, immersive teleoperation, and affective interaction of future collaborative robots
- Repurposing factories with robotics in the face of COVID-19
- Applications of bioinspired approaches and challenges in medical devices
- Keep healthcare workers safe: Application of teleoperated robot in isolation ward for COVID-19 prevention and control
- The role of the Hercules autonomous vehicle during the COVID-19 pandemic: Use cases for an autonomous logistic vehicle for contactless goods transportation
- Learning ambidextrous robot grasping policies
- Development of a tele-nursing mobile manipulator for remote care-giving in quarantine areas
- Digital innovation hubs in health-care robotics fighting COVID-19: Novel support for patients and health-care workers across Europe
- Intuitive dual arm robot programming for assembly operations
- ABB-Robotics, How Robots Helped Protect Doctors From Coronavirus
- Hospital ward run by robots to spare staff from catching virus
- How Diligent's Robots Are Making a Difference in Texas Hospitals
- DexPilot: Vision-based teleoperation of dexterous robotic hand-arm system
- When joggers meet robots: The past, present, and future of research on humanoid robots
- Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor
- Optimal deep learning for robot touch: Training accurate pose models of 3D surfaces and edges
- Skeleton-aware networks for deep motion retargeting
- Incomplete orientation mapping for teleoperation with one DoF master-slave asymmetry
- WRES: A novel 3 DoF WRist ExoSkeleton with tendon-driven differential transmission for neuro-rehabilitation and teleoperation
- A balance feedback human machine interface for humanoid teleoperation in dynamic tasks
- Effortless creation of safe robots from modules through self-programming and self-verification
- A novel gesture recognition system for intelligent interaction with a nursing-care assistant robot
- Human-manipulator interface based on multisensory process via Kalman filters
- Smith predictor-based robot control for ultrasound-guided teleoperated beating-heart surgery
- Review of human-machine interfaces for small unmanned systems with robotic manipulators
- Learning from wearable-based teleoperation demonstration
- Deep imitation learning for complex manipulation tasks from virtual reality teleoperation
- Intuitive hand teleoperation by novice operators using a continuous teleoperation subspace
- Adaptive fuzzy full-state and output-feedback control for uncertain robots with output constraint
- Adaptive-constrained impedance control for human-robot cotransportation
- WristCam: A wearable sensor for hand trajectory gesture recognition and intelligent human-robot interaction
- Improving the human-robot interface through adaptive multispace transformation
- Perception Neuron
- An IoT-enabled telerobotic-assisted healthcare system based on inertial motion capture
- Motion Capture File Formats Explained
- TRAC-IK: An open-source library for improved solving of generic inverse kinematics
- Hybrid mutation fruit fly optimization algorithm for solving the inverse kinematics of a redundant robot manipulator
- A gait recognition system for interaction with a homecare mobile robot