Semi-Remote Gait Assistance Interface: A Joystick with Visual Feedback Capabilities for Therapists
Garcia A., Daniel E.; Sierra M., Sergio D.; Gomez-Vargas, Daniel; Jiménez, Mario F.; Múnera, Marcela; Cifuentes, Carlos A.
Sensors (Basel), 19 May 2021. DOI: 10.3390/s21103521

The constant growth of pathologies affecting human mobility has led to the development of different assistive devices that provide physical and cognitive assistance. Smart walkers are a particular type of these devices, since they integrate navigation systems, path-following algorithms, and user interaction modules to ensure natural and intuitive interaction. Although these functionalities are often implemented in rehabilitation scenarios, there is a need to actively involve healthcare professionals in the interaction loop while guaranteeing safety for them and for patients. This work presents the validation of two visual feedback strategies for the teleoperation of a simulated robotic walker during an assisted navigation task. For this purpose, a group of 14 clinicians from the rehabilitation area formed the validation group. A simple path-following task was proposed, and the feedback strategies were assessed through the kinematic estimation error (KTE) and a usability survey. A KTE of 0.28 m was obtained for the feedback strategy on the joystick. Additionally, significant differences were found through a Mann-Whitney-Wilcoxon test for the perception of behavior and confidence towards the joystick according to the modes of interaction (p-values of 0.04 and 0.01, respectively). The use of visual feedback with this tool contributes to research areas such as the remote management of therapies and the monitoring of the rehabilitation of people's mobility.
Physical rehabilitation (often referred to as physiotherapy) aims to restore people's movement and physical functioning affected by injury, illness, disability, or traumatic events [1]. One of the main approaches for physical rehabilitation targets the retraining of human gait. Different health conditions can result in walking limitations or problems, such as accidents and neurological disorders (e.g., stroke, spinal cord injury, cerebral palsy), aging, musculoskeletal diseases (e.g., arthritis), and heart disease, among others [2]. Depending on each patient's condition, gait rehabilitation and assistance therapies might focus on providing, compensating, increasing, or retraining the lost locomotion capacities, as well as the cognitive abilities of the individual [2]. Specifically, training interventions seek to improve walking performance by (1) eliciting voluntary muscular activation in the lower limbs, (2) increasing muscle strength and coordination, (3) recovering walking speed and endurance, and (4) maximizing the lower limb range of motion [3]. In this manner, several techniques and approaches have been developed, ranging from overground and conventional gait training to robot-assisted and machine-based therapies [4,5]. In particular, robot-assisted gait training has gained considerable interest in recent decades, since sensors and actuators allow safe, intensive, and task-specific therapies [6,7]. This section presents the different applications of a set of technologies for the operation, control, real-time monitoring, and reprogramming of multiple devices.
Typically, these devices are robots that enable and facilitate shared control tasks [25] in a fast, efficient, and safe way. One of the main applications of teleoperation devices is drone control [26]. Due to their versatility, teleoperation devices can contribute to both military and healthcare applications [25,27], as well as to environmental [28] and real-time monitoring [29]. Another interesting application of teleoperation is surgical systems, which make minimally invasive human telesurgery over long distances possible [30,31]. Additionally, it should be noted that this type of device is essential in the control of semi-autonomous robots [32,33]. At this point, the implementation of interfaces that involve force or haptic feedback [34] for obstacle avoidance tasks in dynamic environments and assisted navigation [35,36] should be highlighted. Unfortunately, despite their adequate performance, this kind of application has shown calibration problems due to interface vibration [37]. After this general overview of the applications of teleoperation devices, it is worth highlighting their incidence and impact on individual vehicles that support the transportation of people who have permanently, totally, or partially lost motor skills, i.e., electric wheelchairs [38]. Due to their significant impact as assistive devices, interdisciplinary groups have been working on the development of novel interfaces to make electric wheelchairs increasingly inclusive [39,40], since many people who suffer from tremors or spasms, or who cannot fully control their movements, find it challenging to control wheelchairs with traditional joysticks. In this area, a particular and very significant application is joystick car driving for people with disabilities. Such a joystick driving device enables a person to drive a car while sitting in an electric wheelchair: forward and backward movements of the joystick govern the car's acceleration or deceleration, while left and right movements steer it [41]. Moreover, case studies on teleoperation devices in simulated environments have been reported in the literature, in order to mitigate as many errors as possible before the control devices are implemented in real-life cases [42]. Manipulators that recognize the intention of the user's movement have also been presented to make controlling the wheelchair easier [43]. Control devices that implement haptic [44] and visual [45] feedback have been reported as well. Unfortunately, this technology is not designed to rehabilitate this population but is limited to assisting it. Considering the high impact of teleoperation devices, the need to include this kind of technology in rehabilitation and physical assistance scenarios should be highlighted. This fact is also supported by the increasing demand for assistive robots, which requires the creation of novel control modalities and interfaces to improve human-robot interaction (HRI) [46]. These situations are generally characterized by collaborative work between robots and humans, where safe and efficient physical and cognitive encounters occur [47]. In particular, for scenarios where humans and robots interact in complex settings requiring high performance [48,49], several strategies have been introduced, such as virtual environments [48], teleoperation with joysticks [50], interfaces with virtual impedance [50], and force feedback approaches [51].
Thus, these kinds of methods have been used, for example, to cooperatively interpret navigation commands and monitor robotic systems such as wheelchairs, exoskeletons, and mobile robots [52-54]. Some of them are presented here to contextualize the proposed solutions with respect to the strategies commonly used in smart walkers (SWs). To successfully and accurately support the flow of information from the user, SWs incorporate various interaction channels [15]. The key objective of these channels is to gather user-related information such as velocity, acceleration, location, force, torque, and movement intention, among others [6]. SWs are fitted with interfaces that implement control strategies to maximize their usefulness and to respond effectively to the user's stimuli [15]. In addition, SWs also provide guidance and aided navigation functions [55-58]. These characteristics include stability when leading the user through diverse and complex environments [6]. Some approaches are based on path-following methods in which the ideal path is created offline and then followed by the SW [59,60]. More dynamic methods have also been applied, where path planning algorithms estimate the desired path online (i.e., changing obstacles and complex environments directly affect the intended path) [6,55]. The HRI paradigm has also been addressed by recent implementations of SWs, such that SWs can communicate with the user and the environment safely and naturally. Similarly, certain methods apply shared control strategies, using feedback modules to directly engage the individual in guidance activities [55]. However, the qualitative evaluation of interaction techniques that provide natural and intuitive mutual influence during path-following tasks is still lacking. In addition, according to the literature, visual feedback on the interface has not been fully used and exploited for guidance purposes in SWs. In this sense, this work describes the implementation and evaluation of two visual feedback strategies on a joystick used to guide an SW. The remainder of this work is organized as follows. Section 3 describes the robotic platform, the teleoperation device used during the study, and the proposed visual strategies. Section 3.3 presents the experimental setup, including the volunteers and the trial description. Section 4 details the obtained results, presenting a comprehensive analysis of this work's primary outcomes. Finally, Section 5 points out the concluding remarks and future works. This section describes the proposed system for robot teleoperation in terms of the included interaction platforms and the implemented feedback strategies. Likewise, this part also details the experimental protocol for the system's validation, including quantitative and qualitative assessments. To provide visual feedback, the proposed system (see Figure 1) includes (1) a standard workstation to execute and control the simulation, (2) a joystick to provide teleoperation and feedback, and (3) a simulation environment to establish visual communication with the user. A Hapkit joystick (Stanford University, Stanford, CA, USA) was used, which provides a remote command interface. The Hapkit is an open-hardware joystick with one degree of freedom. The device was modified to include three LEDs placed on its base.
These LEDs were added to provide a visual feedback strategy focused on showing how the user controlled the virtual smart walker (i.e., whether the robot's trajectory was inside or outside of the proposed path). The graphic interface used the 3D visualization tool provided by the Gazebo ROS package (http://wiki.ros.org/gazebo_ros_pkgs, accessed on 25 April 2021) and a 2D visualization tool, the rviz ROS package (http://wiki.ros.org/rviz, accessed on 25 April 2021). In this way, the computer screen displayed the desired trajectory and the smart walker controlled by the joystick in real time (see Figure 1). To simulate the smart walker's motion, the Gazebo plugins for differential robots were used, and the Unified Robot Description Format (URDF) (http://wiki.ros.org/urdf/XML/model, accessed on 25 April 2021) was used to define the robot's kinematics. The simulation measured the robot's odometry and received speed commands through a speed controller provided by Gazebo. Moreover, a simulated laser rangefinder was added to the robot to provide obstacle sensing, using the corresponding Gazebo plugin. It is essential to highlight that a sampling rate of 30 Hz was used for the joystick, the simulation, the admittance controller, and the calculation of the kinematic estimation error (KTE). The therapists were asked to guide a simulated smart walker through a predefined environment (see Figure 1). Initially, the system showed the participants the simulated environment and how to control the smart walker (see Figure 2). The robot was rendered in such a way that it resembled the standard structure of a robotic walker. To simulate the patient, a constant impulse force (F) was applied to the robot; for this specific case, the force was defined as a constant parameter. The task of the therapists was to control the turning of the robotic walker. To this end, virtual torques were generated by moving the Hapkit from one side to the other, as Figure 2 shows. Specifically, the position of the joystick was converted to torque through Equation (1), where τ is the torque, k1 is a gain with a value of 5000, k2 is a gain with a value of 50, and x is the joystick position. This formulation was based on previous work on guiding people with virtual torque signals using a smart walker [61]. Subsequently, a constant virtual force (F) of 10 N was generated to simulate a user driving the robotic walker. In this way, the force (F) and torque (τ) were used to generate linear (v) and angular (ω) velocities through an admittance controller [6,59]. Finally, two feedback modes were tested during the simulation: (1) feedback on the screen and (2) feedback on the joystick (FS and FJ, respectively). It should be noted that, from the walker's position in x (Xω) and y (Yω) and its orientation (θω), the orientation error of the path achieved by the device with respect to the proposed trajectory can be estimated. Figure 2 illustrates the interaction system, constituted by the feedback strategies, the path-following task, and the simulation environment, where x is the joystick position, τ is the virtual torque, F is the impulse force, v is the linear velocity, ω is the angular velocity, Xω is the x coordinate of the walker's position, Yω is the y coordinate of the walker's position, and θω is the walker's orientation.
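To make the control pipeline concrete, the following minimal Python sketch chains a joystick-position-to-torque mapping with a first-order admittance model. The gains k1 = 5000 and k2 = 50, the constant 10 N force, and the 30 Hz rate come from the text; the proportional-derivative form assumed for Equation (1), the admittance parameters, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the joystick-to-velocity pipeline (assumptions noted in comments).

K1 = 5000.0              # gain k1 reported in the text
K2 = 50.0                # gain k2 reported in the text
DT = 1.0 / 30.0          # 30 Hz sampling rate reported in the text


def position_to_torque(x, x_prev):
    """Hypothetical PD-like mapping from Hapkit position to virtual torque.

    The exact form of Equation (1) is not reproduced in the extracted text,
    so tau = k1*x + k2*dx/dt is only an assumed placeholder.
    """
    x_dot = (x - x_prev) / DT
    return K1 * x + K2 * x_dot


class AdmittanceController:
    """First-order admittance model (parameters are illustrative, not from the paper).

    m * dv/dt + b_v * v = F      -> linear velocity v
    J * dw/dt + b_w * w = tau    -> angular velocity w
    """

    def __init__(self, m=10.0, b_v=8.0, J=4.0, b_w=3.0):
        self.m, self.b_v, self.J, self.b_w = m, b_v, J, b_w
        self.v = 0.0
        self.w = 0.0

    def step(self, force, torque):
        # Explicit Euler integration at the 30 Hz sample time.
        self.v += DT * (force - self.b_v * self.v) / self.m
        self.w += DT * (torque - self.b_w * self.w) / self.J
        return self.v, self.w


# Example: constant 10 N impulse force (simulated patient) plus a joystick torque.
ctrl = AdmittanceController()
x_prev, x = 0.0, 0.02                      # joystick displacement (illustrative)
tau = position_to_torque(x, x_prev)
v, w = ctrl.step(force=10.0, torque=tau)   # v, w would be published as velocity commands
print(round(v, 4), round(w, 4))
```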
FJ refers to feedback on the joystick and FS to feedback on the screen. In addition to the obstacles placed in the simulation environment, the system proposed an ideal path to be followed by the robot. Thus, the odometry of the robotic walker was used to estimate the path-following error. To obtain the correct turning direction at each pose of the path, the path-following controller developed by Andaluz et al. was used [62]. In the joystick-feedback configuration (LEDs enabled), the obstacles are not visible, the desired path is not shown, and only the performed path is shown (in red), in contrast to the screen-feedback configuration (LEDs disabled). In the FS modality, the user receives the feedback directly from the graphic interface. Therefore, the ideal and current paths are displayed on the screen so that the therapist can correct the smart walker's trajectory by moving the Hapkit. The virtual walker and the performed path were updated approximately every 50 ms. Moreover, the obstacles sensed by the laser rangefinder are also displayed. In the FJ mode, three LEDs located on the base of the Hapkit provide information about the path-following error, and neither the obstacles nor the desired path are displayed on the graphic interface. A red LED placed on the left side indicates negative errors with respect to the ideal path, a white LED in the middle indicates when the smart walker is correctly oriented, and a yellow LED placed on the right side indicates positive errors (see Figure 4). In this way, this strategy's primary goal consists of keeping the white LED (i.e., the one placed in the middle) switched on as long as possible. More precisely, as can be seen in Figure 4, a negative error in the virtual walker implied a deviation to the right side with respect to the ideal trajectory. In this case, the joystick turned on the left LED (red light), indicating to the users that they should move the control in that direction to keep the robot on the trajectory. Similarly, for a positive error, when the walker was on the left side of the proposed path (see Figure 4), the joystick turned on the yellow light (right LED), indicating that the user should correct the walker's trajectory. Finally, the center LED (white light) was turned on for the no-error state, showing that the user was controlling the robot correctly, as Figure 4 shows. In addition, for an LED to light up, at least three successive data samples had to show the same error behavior; for example, for the yellow LED to light up, at least three consecutive data samples had to have a positive error. Finally, it is worth noting that for this mode a threshold of 10 degrees to each side was defined, to make the task more user-friendly and to correctly determine the error on the trajectory; that is, when the subject deviated from the proposed route and exceeded this threshold, the respective LED would light up. The threshold was defined experimentally to avoid overloading the cognitive communication channel between the device and the user.
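The LED selection logic described above (a 10-degree threshold and a three-consecutive-sample filter) can be summarized in a short sketch; the class and the set_leds callback are hypothetical names, assuming the orientation error is available at the 30 Hz rate mentioned earlier.

```python
from collections import deque
import math

THRESHOLD_RAD = math.radians(10)   # 10-degree threshold reported in the text
CONSECUTIVE = 3                    # three consecutive samples with the same sign


class LedFeedback:
    """Sketch of the FJ (feedback-on-joystick) LED logic; names are placeholders."""

    def __init__(self, set_leds):
        self.set_leds = set_leds                 # callback driving the three LEDs
        self.recent = deque(maxlen=CONSECUTIVE)  # last error classifications

    def classify(self, orientation_error):
        if orientation_error < -THRESHOLD_RAD:
            return "negative"     # walker drifted right -> left (red) LED
        if orientation_error > THRESHOLD_RAD:
            return "positive"     # walker drifted left  -> right (yellow) LED
        return "ok"               # inside the corridor  -> center (white) LED

    def update(self, orientation_error):
        # Only change the LEDs after three identical consecutive classifications.
        self.recent.append(self.classify(orientation_error))
        if len(self.recent) == CONSECUTIVE and len(set(self.recent)) == 1:
            state = self.recent[0]
            self.set_leds(left=(state == "negative"),
                          center=(state == "ok"),
                          right=(state == "positive"))


# Usage with a dummy output instead of real LED hardware:
fb = LedFeedback(lambda left, center, right: print(left, center, right))
for e in [0.0, 0.20, 0.21, 0.22]:            # orientation errors in radians
    fb.update(e)                             # yellow LED lights after three positives
```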
This section describes the experimental validation executed to evaluate the performance of the feedback strategies presented above. Considering the goal of the system, 14 occupational therapists participated in this study. The group was composed of 12 females and 2 males, with an average age of 23.4 ± 1.8 years and a mean clinical experience of 2.3 ± 1.2 years. Table 1 summarizes the demographic information of the participants, who were recruited according to the inclusion and exclusion criteria shown below:
• Inclusion Criteria: Occupational therapists (OT) or last-year students of occupational therapy (OT Student) with experience in gait rehabilitation scenarios.
• Exclusion Criteria: Candidates who presented upper-limb injuries, cognitive impairments, or any condition that impeded the use of the joystick and the graphic interface were excluded from this study.

Table 1. Demographic information of the participants.
ID   Age   Gender   Occupation   Experience (years)
4    21    Female   OT           2
5    23    Female   OT           3
6    22    Male     OT Student   1
7    21    Female   OT           3
8    27    Female   OT           5
9    23    Female   OT           3
10   24    Female   OT           4
11   24    Male     OT           2
12   25    Female   OT           2
13   25    Female   OT           3
14   25    Female   OT           1

Before the experiment, participants were asked to fill out a brief three-question questionnaire (i.e., Have you worked with walkers? Have you worked with robotic walkers? Have you worked with assistive robotics in general?) to determine their level of previous exposure to this type of device. This questionnaire had two answer options: yes, if they had some previous experience with these devices, and no, if they did not. All participants were given appropriate instructions on the operation of the two feedback strategies prior to the execution of the trials. The order in which the feedback strategies were used was randomized for each participant. Subsequently, the simulation environment was set up with a left-turn trajectory to analyze and compare the effects of the two methods. Each participant was required to complete three attempts of the path-following task, and only the third one was used for analysis purposes; the first and second attempts were used for training. A resting period of 30 s was allowed between attempts of the same feedback mode, whereas a resting period of 1 min was allowed when the feedback mode was changed. Moreover, a maximum execution time of 1 min and 30 s was allowed for each attempt; in case of exceeding this time, the attempt was aborted. The participants were only asked to attend one session. During the trials, log files were stored, and the rosbag ROS package (http://wiki.ros.org/rosbag, accessed on 25 April 2021) was used to record the robotic walker information and the movements of the joystick. After completing each strategy, the participants filled out a qualitative survey to assess the acceptance and usability of the proposed system. To measure the users' performance during the trials, the kinematic estimation error (KTE) was calculated [63]. The KTE compares the path achieved by the participant against the ideal one (the path proposed for the experiment), combining the mean error and the trial variance, as Equation (2) shows:

KTE = sqrt(|ε̄|² + σ²),    (2)

where |ε̄|² refers to the squared mean error between the ideal and achieved paths, and σ² represents the data variance [24,63]. It is worth noting that this equation does not require the walker's speed or acceleration, since it aims to provide insights into the spatial performance of the path-following task rather than kinematic information. Furthermore, the virtual impulse force was simulated as constant, so the linear velocity generated by the admittance controller was also constant. For this reason, the KTE is used to estimate the error between the proposed trajectory and the one achieved by the subject.
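As a reference for this spatial metric, the sketch below computes the KTE from a series of path-following errors, assuming the standard form used in the smart-walker literature (square of the mean error plus the variance, under a square root), which is consistent with the description above; the sample deviations are made up.

```python
import numpy as np


def kinematic_estimation_error(errors):
    """KTE from a series of path-following errors (one value per sample).

    Assumed form, consistent with the text: KTE = sqrt(mean_error**2 + variance).
    """
    eps = np.asarray(errors, dtype=float)
    mean_sq = np.mean(eps) ** 2      # |mean error|^2
    variance = np.var(eps)           # sigma^2
    return float(np.sqrt(mean_sq + variance))


# Example with synthetic lateral deviations [m], sampled at 30 Hz:
deviations = [0.10, 0.25, 0.32, 0.28, 0.15, -0.05, -0.12, 0.02]
print(round(kinematic_estimation_error(deviations), 3))
```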
Moreover, to analyze the user-joystick physical interaction during the task, several kinematic characteristics were recorded, such as the duration [s], the distance [m], the orientation error [rad], the correction torque [N·m], and the walker's pose (i.e., Xω, Yω, θω). Notably, the correction torque indicator corresponds to the therapist's average torque when moving the Hapkit. These indicators were only estimated for the third trial of each mode, i.e., the validation trial. Based on previous studies related to qualitative assessments in applications using smart walkers, as presented in [6,59,64], this study included a perception and usability survey. Table 2 shows the questionnaire adapted to this study to assess the user interaction with the system. The questions were intended to estimate the naturalness, intuitiveness, and user preference concerning the proposed strategies. For that, the questionnaire integrated six categories: Facilitating Conditions (FC), Performance and Attitude Expectation (PAE), Expectation of Effort and Anxiety (EEA), Behavior Perception (BP), Trust (TR), and Attitude towards Technology (AT). Moreover, the survey used a 5-point Likert scale to score the questions, where five meant fully agreeing and one completely disagreeing. As described in Table 2, some questions were negatively formulated; for these questions, the collected answers were mirrored about the neutral scale value (i.e., score = 3) for analysis purposes. To analyze the results of this survey, it was necessary to compile each category's questions into a single number. To achieve this, the percentage of each point of the Likert scale was calculated with respect to the total number of responses for each mode. That is, for the specific case of FC, we calculated the quotient between the number of votes for totally disagree (summed over the 4 questions) and the number of possible votes for the mode, which in this case is 56, since there are 14 participants and 4 questions. Finally, this quotient was multiplied by 100 to obtain its equivalent percentage. This procedure was applied to both modes in each of the categories. Table 2 includes items such as the following (items marked with * were negatively formulated): "In this mode, I felt like I was controlling the virtual walker with the device."; "In this mode, I felt that the device helped me control the virtual walker."; "In this mode, I believe the type of feedback was appropriate and effective."; "In this mode, I think the kind of feedback was easy to understand."; "In general, I would trust the device when it gives me advice on how to control the virtual walker."; "In general, if the device gives me advice, I would follow it."; "In this mode, I had fun using the device."; "In this mode, I think it is interesting how the device interacts with me."; "In this mode, using the device was frustrating for me." * For the quantitative data, the Shapiro-Wilk test assessed the normality of the measured characteristics, and Student's t-test determined whether there were significant differences between the proposed strategies. Likewise, for the qualitative assessment, the Mann-Whitney-Wilcoxon (MWW) test assessed statistical differences between the proposed feedback methods; this test was chosen because it has been reported to show minimal Type I error rates and equivalent power for five-point Likert data [65,66]. A significance value of p < 0.05 was used for all the statistical tests.
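The survey aggregation and the statistical comparison described above can be sketched as follows; the mirroring of negatively formulated items, the per-category percentage of each Likert point, and the MWW test follow the text, while the example scores are invented for illustration only.

```python
import numpy as np
from scipy.stats import mannwhitneyu


def mirror_negative(scores):
    """Mirror negatively formulated items about the neutral value (3): 1<->5, 2<->4."""
    return [6 - s for s in scores]


def category_distribution(answers):
    """Percentage of each Likert point (1..5) for one category and one mode.

    `answers` is a list of per-question score lists, e.g. 4 questions x 14 scores
    for the FC category (56 possible votes, as in the text).
    """
    flat = np.concatenate([np.asarray(q) for q in answers])
    total = len(flat)                       # participants x questions
    return {point: 100.0 * np.sum(flat == point) / total for point in range(1, 6)}


def compare_modes(scores_fs, scores_fj):
    """MWW test between the two feedback modes for one category (alpha = 0.05)."""
    stat, p = mannwhitneyu(scores_fs, scores_fj, alternative="two-sided")
    return stat, p


# Illustrative (made-up) scores for one 2-question category, both modes:
fs = [[4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 3, 4, 4, 5], [4, 3, 4, 4, 5, 4, 4, 3, 4, 4, 5, 4, 3, 4]]
fj = [[5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5], [5, 4, 5, 5, 5, 5, 4, 5, 5, 4, 5, 5, 5, 4]]
print(mirror_negative([1, 2, 5]))            # a negatively worded item becomes [5, 4, 1]
print(category_distribution(fs))
stat, p = compare_modes(np.concatenate(fs), np.concatenate(fj))
print(p < 0.05)
```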
The Research Ethics Committee of the University approved this experimental protocol. The participants were informed about the experiment's scope and purpose, and their written informed consent was obtained before the study. The participants were free to leave the study whenever they decided to do so. This section describes and discusses the primary outcomes of this study regarding quantitative and qualitative results. A total of 14 sessions were completed, and no collisions occurred during the simulations. Figure 5 illustrates the results registered by one participant during the different trials with the two proposed feedback strategies. These results were selected because the participant exhibited an average performance in comparison to all participants. The upper part of the figure shows the trajectories achieved using the feedback-on-screen method, and the lower part displays the paths for the feedback on the joystick. Trials 1 and 2 refer to the trajectories obtained in the training stage, and the validation trial represents the path used to extract the kinematic and interaction data presented below. It is worth noting that a single, simple path was proposed to validate this teleoperation tool. This decision is supported by the literature on the cognitive load produced by visual interfaces when they are poorly implemented [67,68]. Several authors recommend that, to validate this type of technology, simple tasks should be performed so that users become familiar with the work to be done [69,70], and the complexity of the task can then be increased gradually. For this reason, since the joystick and the visual strategies are at a validation stage, such a route was designed to obtain a clear perception from the clinicians regarding the tool. In addition, it should be noted that the experiment was conducted in a simulated environment. Considering what the literature suggests about mobile robots, simulations play an essential role in system validation, as presented in [71]. Some authors note that, although it has been shown that it is possible to train such devices in real environments, the number of trials needed to test the system discourages the use of physical robots during the training period [71,72]. Therefore, it is recommended to validate the robot's performance at the early stages in simulated environments, to mitigate as many errors as possible that may occur in the real application [72]. Table 3 summarizes the mean values of the characteristics obtained in this study to measure the participants' performance with both strategies. The measured indicators comprise aspects such as the time taken to complete the path, the distance traveled by the robotic walker, the kinematic estimation error (KTE), the orientation error, and the correction torque calculated from the joystick movements. In the statistical context, the Shapiro-Wilk test determined that all parameters followed a normal distribution. Therefore, to find significant differences between the modes (i.e., FS: feedback on the screen and FJ: feedback on the joystick), Student's t-test was performed. Notably, all measured parameters registered statistically significant differences (see Table 3). Hence, it can be stated that each feedback methodology provides a markedly different teleoperation performance. In this regard, although the path was the same for both strategies, the interaction parameters evidenced statistically significant changes.
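A sketch of the statistical pipeline applied to the kinematic indicators is shown below; a paired t-test is assumed here because every participant performed both modes, whereas the text only states that Student's t-test was used, and the KTE values are synthetic.

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel


def compare_indicator(values_fs, values_fj, alpha=0.05):
    """Normality check followed by a t-test, mirroring the analysis described above.

    A paired test (ttest_rel) is assumed since each participant performed both
    feedback modes; the paper only reports that a Student's t-test was applied.
    """
    _, p_fs = shapiro(values_fs)
    _, p_fj = shapiro(values_fj)
    if p_fs <= alpha or p_fj <= alpha:
        raise ValueError("Data not normal; a non-parametric test would be needed.")
    stat, p = ttest_rel(values_fs, values_fj)
    return stat, p, p < alpha


# Illustrative (made-up) KTE values [m] for the 14 participants in each mode:
kte_fs = np.random.default_rng(0).normal(0.45, 0.08, 14)
kte_fj = np.random.default_rng(1).normal(0.28, 0.06, 14)
print(compare_indicator(kte_fs, kte_fj))
```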
Table 3. Summary of kinematic and interaction data obtained during the trials. All parameters followed a normal distribution. Highlighted parameters (in gray) evidenced significant differences between both strategies (p < 0.05). Asterisks indicate that the data have a normal distribution. In terms of duration and distance, the feedback-on-joystick (FJ) strategy showed a decrease in the mean value compared to the feedback-on-screen (FS) mode. Thus, it can be highlighted that the therapists performed better trajectories (i.e., closer to the reference path) when the joystick provided visual feedback. In addition, this behavior also led to the accomplishment of the path in shorter times. This result may be supported by the fact that the joystick's feedback mode required smaller correction torques on the device (see Table 3). Moreover, the use of the LEDs as visual feedback provides an instantaneous indicator of the path-following error, compared to the perception of the error on the screen. Regarding the KTE, the feedback mode on the joystick (FJ) registered the lowest values. This result suggests that, for the FJ strategy, the user-device interaction was more intuitive and efficient at keeping the walker within the proposed path. In addition, the comparison between the strategies evidenced statistically significant differences, which was expected, considering that the values obtained for FS were always considerably higher. Similarly, the orientation error presented lower values for the FJ strategy, indicating that the volunteers managed to keep the robotic walker within the ideal path more easily. In contrast, and similarly to the previous results, the feedback on the screen presented higher error values. Finally, regarding the correction torque, the FS strategy exhibited the highest values. This result could be explained by the fact that this mode demanded more correction movements with the Hapkit. In contrast, when the therapists controlled the smart walker using the FJ strategy, the parameters evidenced lower values, indicating that the joystick's feedback was more efficient. In statistical terms, significant differences were found between the modes. Comparing these results with the literature, negative results were obtained in [73] when the user received feedback on a screen in a path-following task. Moreover, the study in [24] suggested that visual feedback on the joystick was better, even compared with haptic feedback. This evidence suggests that visual methods can be implemented to facilitate the therapists' involvement and to provide a useful teleoperation tool. Furthermore, as ref. [74] emphasized the importance of including an efficient visual strategy for teleoperation applications, the proposed system's results suggest that the feedback on the joystick could be a solution with potential use in this area. This study included a preliminary survey to assess levels of knowledge and perception of robotic technology applications in rehabilitation settings. Overall, 58.3% of the participants had worked at least once with conventional walkers. However, 91.7% of the therapists had never interacted with robotic walkers, and 66.7% said they had not used any robotic devices for assistive applications. These results support the need to involve therapists actively, closely, and safely [20,75] during robotic walker therapies [23,76]. Furthermore, such inexperience on the part of the therapists may be related to the scarcity of tools developed to facilitate their task during therapy [50,77].
On the other hand, to determine the naturalness, intuitiveness, safety, perception, complexity, and user preference associated with the proposed strategies, the questionnaire (see Table 2) was completed by all participants. Figure 6 summarizes the answers for the different categories of the implemented questionnaire. In the statistical context, the Mann-Whitney-Wilcoxon (MWW) test determined significant changes between the two assessed feedback strategies. Table 4 summarizes the results of the MWW test applied between the interaction strategies for each questionnaire category. In particular, the questions in the Facilitating Conditions (FC) category, which assessed aspects such as safety, ease of use, and attitude during the interaction, show a mainly positive distribution. For the mode on the screen, the perception was slightly higher than for the feedback strategy on the joystick (see Figure 6), although, in general terms, this aspect was positive for both methods. Furthermore, volunteers indicated that it was easy to interact with the proposed system independently of the applied modality. In this way, the results confirm that the implemented strategies were well adjusted, generating non-complex scenarios for the users. Regarding the Performance and Attitude Expectation (PAE) category, these questions were intended to assess the device's overall performance. The distribution of responses for this category is positive and uniform (see Figure 6). This result indicates that users showed a favorable attitude and acceptance for both modes, which is confirmed by the absence of significant differences between the groups (i.e., FS and FJ), as shown in Table 4. Concerning the Expectation of Effort and Anxiety (EEA) category, the statistical test revealed significant differences between the feedback modes. Thus, the screen strategy presented better results in comparison with the feedback on the joystick. Moreover, although the tendency was positive for most participants, some therapists perceived considerable anxiety and relevant effort when using the system. For the Behavior Perception (BP) category (i.e., a category that aimed to measure the user-device communication directly), there were significant differences between the proposed strategies, as Table 4 shows. Moreover, Figure 6 illustrates the distribution for both cases, where the feedback on the joystick exhibited more positive values than the method on the screen. This result indicates that volunteers felt more comfortable and confident using this strategy. In the case of the TR category, which assessed the confidence of the subjects when using the device, the strategies evidenced differences between them (see Figure 6). This result is consistent with the statistical analysis shown in Table 4. Specifically, the joystick's feedback mode presented a more extensive positive distribution than the method on the screen. Thus, the favorable perception could be explained by the subjects feeling more confident interacting under the guided feedback mode using LEDs, possibly because this strategy is more natural and intuitive in teleoperation applications. Regarding the category focused on measuring the subjects' Attitude towards Technology (AT), there was a slight decrease for the interaction mode showing the orientation error on the screen (Figure 6). Table 4 shows statistical differences between the strategies, where the method on the screen registered a lower favorable perception.
Moreover, the positive distribution for the joystick's feedback strategy indicates that users satisfactorily understood the teleoperation of the device when LEDs were employed for feedback. It is worth mentioning that one of the significant limitations of this study is related to the path chosen for the experimental trials. However, this work's main objective was to validate the strategy in a simple scenario, while further works will include more complex experimental conditions. In particular, it would be useful to include obstacles, longer and more difficult paths, and a real smart walker. Furthermore, it is important to highlight a key point of this study related to the feedback strategies: if the joystick's feedback lights were placed not on the device but on the screen, very similar results would probably be obtained. However, the idea of this study was to validate two methods of feedback and to verify whether the mode with less cognitive load would allow users to obtain better results in the path-following task. Additionally, in future implementations we expect to develop a portable joystick that can be carried by the therapist; in this way, the device will not be required to be connected to a desktop computer or workstation. Therefore, with this work we sought to validate a feedback method that applies to this portable version. A new method for walker-assisted gait therapy monitoring and control using a command interface was proposed in this article. Using the visual capability of a joystick device, a physical and cognitive communication channel was developed. In this sense, a Physical and Cognitive Interface (PCI) for human-robot interaction between the therapy manager and the joystick was created in this work. In addition, different levels of communication were provided by a series of visual feedback strategies. An acceptance and usability questionnaire was applied to the 14 participants who completed multiple trials with the device. Participants had a higher level of confidence in the visual feedback mode using the joystick's LEDs, as well as a greater understanding of the interaction. Similarly, the kinematic estimation error (KTE) was determined during the experimental trials, with lower values for this strategy. The use of feedback strategies integrating physical and cognitive interaction between the therapist and an interface contributes to research areas such as telerehabilitation and the monitoring of people in hospital environments. Likewise, those applications empower therapists' capabilities by reducing the energy expenditure of performing physical activities. Moreover, through the system's information, the therapist can perceive the patients' status when using mobile devices for assistive applications. In this way, the therapist can control the SW and prevent undesirable situations such as falls or collisions. As a result, the therapist would have a greater view of the environment and of people's recovery process with the proposed tool. Overall, there were some shortcomings due to participants who did not understand the joystick interface channel used for feedback. On the other hand, learning how to interpret therapy information through a non-traditional communication medium can require a brief period of training. As future work, the implementation of the device in a real environment with slightly more complex path-following tasks will be carried out. For that reason, the idea of this study was also to develop an innovative tool in the context of teleoperation of robotic walkers.
This tool was designed to explore an alternative to conventional remote control devices for walkers (e.g., laptops, tablets), which tend to have complex and unfriendly interfaces, thus generating considerable cognitive load for clinicians and not allowing them to adequately perform their role within the therapy.

References
Technologies for Therapy and Assistance of Lower Limb Disabilities: Sit to Stand and Walking. In Exoskeleton Robots for Rehabilitation and Healthcare Devices
Stroke Rehabilitation-Guidelines for Exercise and Training to Optimize Motor Skill
Overground physical therapy gait training for chronic stroke patients with mobility deficits
Physical rehabilitation approaches for the recovery of function and mobility following stroke
Human-Robot-Environment Interaction Interface for Smart Walker Assisted Gait: AGoRA Walker
Robot-assisted gait training for stroke patients: Current state of the art and perspectives of robotics
Advanced technology for gait rehabilitation: An overview
A review in gait rehabilitation devices and applied control techniques
Rehabilitation of gait after stroke: A review towards a top-down approach
Review and Classification of Human Gait Training and Rehabilitation Devices
Assistive mobility devices focusing on Smart Walkers: Classification and review
Adaptable Robotic Platform for Gait Rehabilitation and Assistance: Design Concepts and Applications
Assistive devices for gait in Parkinson's disease. Park. Relat. Disord.
Human-Robot Interaction Strategies for Walker-Assisted Locomotion
A review of the functionalities of smart walkers
A Comparative Study of Conventional Physiotherapy versus Robot-Assisted Gait Training Associated to Physiotherapy in Individuals with Ataxia after Stroke
Electromechanical-assisted training for walking after stroke
A Comparative Study of Conventional Physiotherapy Versus Robotic Training Combined with Physiotherapy in Patients with Stroke
Dementia in Nursing Homes
Neurological Principles and Rehabilitation of Action Disorders
The Three Laws of Neurorobotics: A Review on What Neurorehabilitation Robots Should Do for Patients and Clinicians
Evaluation of biomechanical gait parameters of patients with Cerebral Palsy at three different levels of gait assistance using the CPWalker
A Therapist Helping Hand for Walker-Assisted Gait Rehabilitation: A Pre-Clinical Assessment
A Review on Teleoperation of Mobile Ground Robots: Architecture and Situation Awareness
Flyjacket: An upper body soft exoskeleton for immersive drone control
Haptic teleoperation of a multirotor aerial robot using path planning with human intention estimation
Beyond drone vision: The embodied telepresence of first-person-view drone flight
A drone-based networked system and methods for combating coronavirus disease (COVID-19) pandemic
Transforming a surgical robot for human telesurgery
Robotic surgery-the transatlantic case
Teleoperation of a mobile robot through haptic feedback
Teleoperation of industrial robot manipulators based on augmented reality
Teleoperation of Collaborative Robot for Remote Dementia Care in Home Environments
Development of a shared controller for obstacle avoidance in a teleoperation system
A Novel Force Sensorless Reflecting Control for Bilateral Haptic Teleoperation System
Precise haptic device co-location for visuo-haptic augmented reality
Teleoperation training environment for new users of electric powered wheelchairs based on multiple driving methods
Joystick Grip for Electric Wheelchair for Tension-Athetosis-Type Cerebral Palsy
A semi-autonomous framework for human-aware and user intention driven wheelchair mobility assistance
A joystick car drive system with seating in a wheelchair
Training environment for electric powered wheelchairs using teleoperation through a head mounted display
Cooperative step-climbing strategy using an autonomous wheelchair and a robot
Inference of user-intention in remote robot wheelchair assistance using multimodal interfaces
A low-cost tele-presence wheelchair system
Rehabilitation and health care robotics
Progress and prospects of the human-robot collaboration
Collaborative manufacturing with physical human-robot interaction
Model-based reinforcement learning variable impedance control for human-robot collaboration
Artificial and virtual impedance interaction force reflection-based bilateral shared control for miniature unmanned aerial vehicle
A six-dimensional traction force sensor used for human-robot collaboration
A Bayesian shared control approach for wheelchair robot with brain machine interface
Human cooperative wheelchair with brain-machine interaction based on shared control strategy
A teleoperation framework for mobile robots based on shared control
Moro, F.; et al. Navigation assistance and guidance of older adults across complex public spaces: The DALi approach
The evolution of Guido
The MOBOT rollator human-robot interaction model and user evaluation process
Robot-assisted navigation for a robotic walker with aided user intent
Admittance controller with spatial modulation for assisted locomotion using a smart walker
Human-robot interaction analysis for a smart walker for elderly: The ACANTO interactive guidance system
Assistive Locomotion Device with Haptic Feedback For Guiding Visually Impaired People
Adaptive dynamic path following control of a unicycle-like mobile robot
Extraction of user's navigation commands from upper body force interaction in walker assisted gait
Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology
A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions
Five-Point Likert Items: T test versus Mann-Whitney-Wilcoxon (Addendum added October
Measuring cognitive load using eye tracking technology in visual computing
Human-centered design meets cognitive load theory: Designing interfaces that help people think
Integrating cognitive load theory and concepts of human-computer interaction
The effects of visual distractors on cognitive load in a motor imagery brain-computer interface
Mixed reality simulation for mobile robots
Evolving mobile robots in simulated and real environments
Customizing haptic and visual feedback for assistive human-robot interface and the effects on performance improvement
Analysis and Performance Evaluation of a 3-DOF Wearable Fingertip Device for Haptic Applications
Empowering and assisting natural human mobility: The simbiosis walker
Online adaptive teleoperation via motion primitives for mobile robots