key: cord-0058883-y2g2w4k7 authors: Devaraja, Rahul Raj; Maskeliūnas, Rytis; Damaševičius, Robertas title: AISRA: Anthropomorphic Robotic Hand for Small-Scale Industrial Applications date: 2020-08-24 journal: Computational Science and Its Applications - ICCSA 2020 DOI: 10.1007/978-3-030-58799-4_54 sha: dc4294f945aca0a922b18d267ac9b370e41565e1 doc_id: 58883 cord_uid: y2g2w4k7

We describe the design of a multi-finger anthropomorphic robotic hand for small-scale industrial applications, called AISRA (Anthropomorphic Interface for Stimulus Robust Applications), which can feel and sense the object that it is holding. The robotic hand was printed using a 3D printer and includes a servo bed for finger movement. The data for object recognition was collected using the Leap Motion controller, and a Naïve Bayes classifier was used for training and classification. We trained the robotic hand on several monotonously-shaped objects used in daily life using supervised machine learning techniques and the gesture data obtained from the Leap Motion controller. The mean accuracy of object recognition achieved is 92.1%. The Naïve Bayes algorithm is suitable for use with the robotic hand to predict the shape of objects in its hand based on the angular positions of its fingers. The Leap Motion controller provides accurate results, helps to create a dataset of object examples in various forms for the AISRA robotic hand, and can be used to help develop and train 3D-printed anthropomorphic robotic hands. The object grasping experiments demonstrated that the AISRA robotic hand can grasp objects of different sizes and shapes, and verified the feasibility of robot hand design using low-cost 3D printing technology. The implementation can be used for small-scale industrial applications.

In July 2015, the USA's DARPA (Defense Advanced Research Projects Agency) presented a prosthetic arm that enables the operator to feel the things it touches [1]. The robotic arm was built so that it could be controlled by the human brain. The hand is connected by wires to the motor cortex, the part of the human brain that controls the movement of muscles, and to the sensory cortex, which registers tactile sensations when a person touches objects. This research has opened the way to a multitude of applications, including prosthetics [2], industrial assembly lines [3], medical surgery [4], assisted living [5], manufacturing, military applications [6], and education [7, 8]. For example, Dhawan et al. [9] presented a robot with a stereo vision system for bin picking applications, in which the robot picks up an item from a stack of objects placed in a bin. Recognizing the object to be picked up required a computer vision system and an object segmentation algorithm based on a stereo imaging method. Ben-Tzvi and Ma [10] proposed a grasp learning system that provides a rehabilitation function to the subject. The system measures motion and contact force, which are used to train a Gaussian Mixture Model (GMM) representing the joint distribution of the data. The learned model is then used by Gaussian Mixture Regression to generate suitable motions and forces to perform the learned tasks in real time. Wu et al. [11] designed an under-actuated finger mechanism, a robot palm, and a three-finger robot hand based on this finger, all produced by 3D printing. Hu and Xian [12] used a Kinect sensor to control two humanoid-like robotic hands.
They used the Denavit-Hartenberg method to set up forward and inverse kinematic models of the robot hand movements. Next, the depth data from the Kinect and the joint coordinates are used to segment the hand gesture in depth images. Finally, a Deep Belief Network (DBN) recognizes hand gestures and translates them into instructions to control the robot hands. Durairajah et al. [13] proposed a low-cost hand exoskeleton developed for rehabilitation, in which a healthy hand is used to control the unhealthy hand. The authors used flex sensors attached to a leather glove at the metacarpophalangeal, proximal interphalangeal, and interphalangeal joints to control the unhealthy hand via servo motors, and reported a testing efficiency of 92.75%. Tan et al. [14] presented a study on the calibration of an underactuated robotic hand with soft fingers. Senthil Kumar et al. [15] developed and tested a soft fabric-based tactile sensor for use with a 3D-printed robotic hand intended for rehabilitation. Zhang et al. [16] developed a multi-fingered hyperelastic hand and derived the equations of its pulling and grasping forces. The dimensions of the hand that are optimal with respect to grasping force are calculated using the finite element method (FEM). Park et al. [17] developed and manufactured a hand exoskeleton system for virtual reality (VR) applications. A wearable system of sensors measures the finger joint angles, and cable-driven actuators apply force feedback to the fingers. The proposed control algorithms were analyzed and validated in VR applications. Hansen et al. [18] proposed an approach that combines musculoskeletal and robotic modeling driven by motion data for validating the ergonomics of exoskeletons and orthotic devices. Gailey et al. [19] developed a soft robotic hand controlled by electromyography (EMG) signals from two opposing muscles, allowing modulation of grip forces. Ergene and Durdu [20] designed a robotic hand and used grid-based feature extraction and bag-of-words for feature selection, and a Support Vector Machine (SVM) for classification of grasping actions on three types of office objects (pens, cups, and staplers). Zeng et al. [21] presented a modular multisensory hand for prosthetic applications based on the principles of dexterity, controllability, sensing, and anthropomorphism. Finally, Zhang et al. [22] used an RGB-D Kinect depth sensor with the ORiented Brief (ORB) method for feature detection and descriptor extraction, a Fast Library for Approximate Nearest Neighbors k-Nearest Neighbor (FLANN KNN) algorithm for feature matching, a modified RANdom SAmple Consensus (RANSAC) method for motion transformation, and Generalized Iterative Closest Points (GICP) for optimization of the motion transformation, in order to scan the robot's surrounding environment. The common limitation of the existing approaches is the high cost of implementation, which prohibits their wide use by end-users. Our approach takes advantage of 3D printing technology and the low-cost Leap Motion sensor to develop an anthropomorphic multi-fingered robotic hand that is both efficient in small-scale applications and affordable to end-users. The aim of this paper is to develop an anthropomorphic robotic hand for small-scale industrial applications, which can feel and sense the shape of the object it is holding.
We develop a robotic hand and train it on several monotonously-shaped objects used in daily life using supervised machine learning techniques and the gesture data obtained from the Leap Motion controller. The development of a supervised motion learning system for a robotic hand comprises four stages: 1. Data generation and pre-processing; 2. Motion learning; 3. Evaluation; and 4. Motion prediction. The stages of the robotic hand motion training are summarized in Fig. 1. For data collection, we use the Leap Motion sensor and a methodology described in more detail in [23]. From the hardware perspective, the Leap Motion controller is a simple engineering design, which consists of two cameras and three infrared (IR) light-emitting diodes (LEDs). The LEDs emit IR light with a wavelength of 850 nm, which is outside the spectrum visible to the human eye. Once the image data is streamed to the computer, the software converts it into tracking information about fingers, hands, and tools such as pencils. Each finger of the hand comprises three bones joined to each other, whose relative rotation produces the bending of the finger. The Leap Motion SDK provides built-in functions that can recognize these bones and each finger of the hand. The angle between the proximal and intermediate bone of every finger is computed from their direction vectors, which yields the bending angle of the finger. All the finger data is captured and pipelined for pre-processing. Pre-processing includes data normalization and feature dimensionality reduction. The result is a dataset that collects all the required data from the Leap Motion controller and is saved into a file to be used as training data for the algorithms. The dataset contains the angle measurements of 150 instances from three different objects: BALL, RECTANGLE, and CYLINDER (see Fig. 2). The data, consisting of 150 samples and 5 features, is written as a 150 × 5 matrix. The dataset attributes represent the individual fingers and the angles between their bones. The Naïve Bayes classifier [24] is used for binary or multi-class classification of categorical data. It is a simple but powerful predictive algorithm that selects the best hypothesis h for the given data d using Bayes' theorem:

$$P(h \mid d) = \frac{P(d \mid h)\,P(h)}{P(d)},$$

where $P(h \mid d)$ is the posterior probability of the hypothesis given the data; $P(d \mid h)$ is the probability of the data given the hypothesis; $P(h)$ is the prior probability of the hypothesis being true (regardless of the data); and $P(d)$ is the probability of the data. Probabilities of unseen data are calculated using the Gaussian probability density function (PDF), which produces an estimate of the probability of a previously unknown input value:

$$\mathrm{pdf}(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right),$$

where $\mathrm{pdf}(\cdot)$ is the PDF, $\mu$ is the mean, $\sigma$ is the standard deviation, and $x$ is the value of the input variable. In our case, the inputs to the Naïve Bayes algorithm are the finger angles captured by the Leap Motion controller as described in Sect. 4. The evaluation is an estimate of how well an algorithm works in a production environment; it is not a guarantee of performance but a measure of accuracy and efficiency. For evaluation, we use 10-fold cross-validation, which estimates the efficiency and performance of the algorithm with less variance than a single train/test split. The dataset is split into k parts (e.g., 10), called folds. The algorithm is trained on k − 1 folds and tested on the withheld fold. This is repeated so that each fold of the dataset is given a chance to be the held-back test set.
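To make the classification and evaluation steps concrete, the following is a minimal Python sketch (not the authors' implementation) of a Gaussian Naïve Bayes classifier evaluated with 10-fold cross-validation using scikit-learn; the file name finger_angles.csv and the column names are hypothetical placeholders for the dataset described above.

```python
# Minimal sketch: Gaussian Naive Bayes on the five finger angles with 10-fold CV.
# Assumes the Leap Motion angles were exported to a CSV file; the file name and
# column names below are hypothetical, not taken from the paper.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

data = pd.read_csv("finger_angles.csv")                 # 150 samples x 5 features + label
X = data[["thumb", "index", "middle", "ring", "pinky"]].to_numpy()
y = data["object"].to_numpy()                           # BALL / RECTANGLE / CYLINDER

clf = GaussianNB()                                      # Gaussian PDF per feature and class
scores = cross_val_score(clf, X, y, cv=10)              # 10-fold cross-validation
print("mean accuracy: %.3f (std %.3f)" % (scores.mean(), scores.std()))

# After evaluation, retrain on the full dataset for operational use and
# predict the shape of a newly grasped object from its five finger angles.
clf.fit(X, y)
print(clf.predict([[97, 91, 92, 97, 59]]))              # example angles resembling Table 1
```

GaussianNB fits a per-class mean and variance for each feature and applies the Gaussian PDF given above when scoring unseen inputs, which mirrors the classification scheme used here.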
For relatively small training sets, the number of folds can be increased to allow more training data to be used in each iteration, which results in a lower bias in estimating the generalization performance. The robotic hand is 3D printed leveraging an open-source design from InMoov (http://inmoov.fr/). The InMoov designs reproduce the major elements of human hand anatomy, which comprise bones, joints, the radius, the ulna, and tendons. The main knuckle joints, formed by the metacarpophalangeal joints, are made stationary, which provides the degree of freedom to bend the finger but not to move it sideways. The operation of the extensor hood and extensor tendons is controlled by rotating the servos to the appropriate actuator rotation angle (Fig. 3). The system designed to perform predictions on the held object is divided into three processes: 1) Servo Readings, 2) I/O Interface, and 3) Algorithm Suite (see Fig. 4). The Servo Readings process fetches the reading of every servomotor as feedback of the rotational degree measurements. The I/O Interface process takes these measurements and passes them to an I/O interface, an Arduino or a Raspberry Pi, which can process the readings. The interface can also use a message-passing middleware such as ROS or PubNub to stream the data for further processing. The Algorithm Suite performs the training, comparison, validation, and tuning of the algorithms. Once the suite is trained, the learned model can be used to perform predictions on new input data. According to the algorithm presented in Fig. 5, first, the robotic hand grabs an object, and as soon as the pressure sensors make contact, the servo motors stop their action and record their measurements. The measurements are converted into the respective units (such as degrees or radians) and passed to the dataset as testing data. Second, the data is passed through the algorithm, which makes the necessary adjustments to the weights of the predictive models, and the efficiency is calculated. The training data is passed through a spot-checking procedure, which gives the mean and standard deviation of accuracy for each algorithm. This process is continued until the best features are selected for classification and the highest accuracy is attained. Each finger comprises three bones joined to each other. The Leap Motion SDK provides built-in functions that can recognize these bones and each finger of the hand. Once the entire finger object is gathered, it can be further broken down into the types of bones (proximal bone and intermediate bone) present in the finger. Vectors can be used to calculate the angle between the bones of the finger. Once all the finger angles are observed and stored against the hand holding the object, they are saved to a .csv file. An example of the data collected for each finger (thumb, index, middle, ring, and pinky) vs. the object type is presented in Table 1.

Table 1. Example of the collected finger angle data (in degrees) for each object type.

Object      Thumb  Index  Middle  Ring  Pinky
Ball         97     91     92      97    59
Ball         97     91     92      97    59
Ball        100     96     96     100    64
Ball        100     96     96     100    65
Ball         75     81     76      75    73
Rectangle    73     82     74      73    71
Rectangle    88    113    112      88    87
Rectangle    87    108    108      87    88
Rectangle   114    128    132     114   114
Cylinder    114    128    132     114   114
Cylinder    114    128    132     112   114
Cylinder     98    143    144      98    97
Cylinder

The dataset includes three objects (ball, rectangle, cylinder), each recorded for five different instances sampled from five different people as an initial approximation for algorithm analysis. We evaluated the features for each type of object shape separately using the absolute value of the two-sample t-test statistic with a pooled variance estimate.
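As an illustration of this feature ranking step, the sketch below computes the absolute two-sample t-statistic with a pooled variance estimate for each finger and each object class; the one-class-versus-the-rest grouping and the file and column names are assumptions made for this example rather than details given in the paper.

```python
# Sketch: rank the five finger-angle features by the absolute two-sample
# t-statistic (pooled variance), computed separately for each object class.
# The one-class-versus-rest grouping and file/column names are assumptions.
import pandas as pd
from scipy.stats import ttest_ind

data = pd.read_csv("finger_angles.csv")                 # same hypothetical file as above
features = ["thumb", "index", "middle", "ring", "pinky"]
X = data[features].to_numpy()
y = data["object"].to_numpy()

def rank_features(target_class):
    """Sort features by |t| between target_class samples and all other samples."""
    mask = (y == target_class)
    scores = []
    for j, name in enumerate(features):
        t, _ = ttest_ind(X[mask, j], X[~mask, j], equal_var=True)  # pooled variance
        scores.append((abs(t), name))
    return sorted(scores, reverse=True)

for cls in ["BALL", "RECTANGLE", "CYLINDER"]:
    print(cls, rank_features(cls))
```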
The results of feature ranking are presented in Figs. 6, 7 and 8. Note that different features are important for different types of object. For recognizing the Ball shape, the most important features are provided by the Pinky finger; for the Rectangular shape, by the Ring and Thumb fingers; and for the Cylinder shape, by the Middle and Index fingers. In the 5D space of finger angles, the classes are separated linearly by the classifier. For example, see the distribution of the Index and Thumb finger angle values in Fig. 9, and of the Pinky and Ring finger angle values in Fig. 10. The evaluation is an estimate of how well an algorithm works in a production environment; it is not a guarantee of performance, but a measure of accuracy and efficiency. Once the performance of the algorithm has been estimated, it is retrained on the entire dataset for operational use. To evaluate the performance of the classification, we used 10-fold cross-validation as described in Subsection II.C. The mean accuracy of object recognition achieved using the Naïve Bayes classifier is 92.1%, while the F-score is 0.914. The confusion matrix of the classification results is presented in Fig. 11. Note that the Ball shape is recognized perfectly, while the Cylinder class is often confused with the Rectangular object class due to their similarity in shape. The Receiver Operating Characteristic (ROC) curves for all three classes of objects are presented in Fig. 12. The anthropomorphic design of a robotic hand allows the robot to interact efficiently with the environment and to operate in workplaces that are specifically adapted to the human hand [25]. Since the human hand possesses a set of unique features that allow it, for example, to hold a great variety of object shapes, an anthropomorphic robotic hand provides advantages for repetitive object handling and placement tasks, such as industrial conveyor applications. Specifically, a multi-fingered design is capable of performing a powerful hold and at the same time offers fine and versatile manipulation skills that cannot be reached with a generic robotic gripper [26]. The developed hand provides a low-cost alternative to other 3D-printed robotic hands known from the scientific literature [27-30] and could be used for various educational projects [31, 32]. We described the development of a low-cost multi-finger anthropomorphic robotic hand that is usable for small-scale industrial applications. The AISRA robotic hand was developed to execute human-like grasping of objects of various shapes. Using the developed robotic hand and the data from the Leap Motion controller, we achieved 92.1% accuracy of object recognition. The Leap Motion controller provides accurate results, is convenient for creating a dataset of many instances of object examples in various forms for the AISRA robotic hand, and can be used to help develop 3D-printed anthropomorphic robotic hands. The Naïve Bayes algorithm is suitable for use with the robotic hand to predict the shape of objects in its hand based on the angular positions of its fingers. The object grasping experiments demonstrated that the AISRA robotic hand can grasp objects of different sizes and shapes, and verified the feasibility of robot hand design using low-cost 3D printing technology. Future work will include adding sensors such as touch sensors, gyroscopes, accelerometers, and depth cameras to the AISRA hand.
The hand will be used as a test bed for research on smart automation techniques in the laboratory environment. Future work will also involve training neural networks and applying computer vision to enhance the accuracy of object recognition through backpropagation and reinforcement learning.

References

1. Analysis of human-robot interaction at the DARPA robotics challenge trials
2. Humanoid robot hand and its applied research
3. Multi-focusing algorithm for microscopy imagery in assembly line using low-cost camera
4. Play me back: a unified training platform for robotic and laparoscopic surgery
5. A review of internet of things technologies for ambient assisted living environments
6. Design of fully automatic drone parachute system with temperature compensation mechanism for civilian and military applications
7. Educational robots for internet-of-things supported collaborative learning
8. Educational robots as collaborative learning objects for teaching computer science
9. Automated robot with object recognition and handling features
10. Sensing and force-feedback exoskeleton (SAFE) robotic glove
11. Design and production of a 3D printing robot hand with three underactuated fingers
12. Kinect sensor-based motion control for humanoid robot hands
13. Design and development of low cost hand exoskeleton for rehabilitation
14. Simultaneous robot-world, sensor-tip, and kinematics calibration of an underactuated robotic hand with soft fingers
15. Soft tactile sensors for rehabilitation robotic hand with 3D printed folds
16. Research and design of a multi-fingered hand made of hyperelastic material
17. A dual-cable hand exoskeleton system for virtual reality
18. Design-validation of a hand exoskeleton using musculoskeletal modelling
19. Grasp performance of a soft synergy-based prosthetic hand: a pilot study
20. Robotic hand grasping of objects classified by using support vector machine and bag of visual words
21. Design and experiment of a modular multisensory hand for prosthetic applications
22. A fast robot identification and mapping algorithm based on Kinect sensor
23. Recognition of American sign language gestures in a virtual reality using Leap Motion
24. Idiot's Bayes: not so stupid after all?
25. Anthropomorphic robotic hands: a review
26. A survey on real-time controlled multi-fingered robotic hand
27. Semi-anthropomorphic 3D printed multigrasp hand for industrial and service robots
28. Self-adaptive monolithic anthropomorphic finger with teeth-guided compliant cross-four-bar joints for underactuated hands
29. A 3D-printed robot hand with three linkage-driven underactuated fingers
30. Intelligent multi-fingered dexterous hand using virtual reality (VR) and robot operating system (ROS)
31. Context-aware generative learning objects for teaching computer science
32. Advances in the use of educational robots in project-based teaching