Self-Archiving Statement (2024)


Self-Archiving Statement


This is the accepted version of the following article, which has been published in final form in Nature Machine Intelligence:

A. Tashakori, et al., "Capturing complex hand movements and object interactions using machine learning-powered stretchable smart textile gloves" Nature Machine Intelligence, vol. 6, no. 1, pp. 106–118, 2024.

For the final published version, please refer to the journal’s official website.

Capturing Complex Hand Movements and Object Interactions Using Machine Learning Powered Stretchable Smart Textile Gloves

Arvin Tashakori (1,2,*), Zenan Jiang (2,3), Amir Servati (2), Saeid Soltanian (2), Harishkumar Narayana (2,3), Katherine Le (2,3), Caroline Nakayama (2), Chieh-ling Yang (4,5), Z Jane Wang (1), Janice J Eng (6,7), Peyman Servati (1,2,*)

1 Department of Electrical and Computer Engineering, University of British Columbia, Vancouver BC, Canada, V6T 1Z4
2 Texavie Technologies Inc., 148-970 Burrard St, Vancouver BC, Canada, V6Z 2R4
3 Department of Materials Engineering, University of British Columbia, Vancouver BC, Canada, V6T 1Z4
4 Department of Occupational Therapy and Graduate Institute of Behavioral Sciences, College of Medicine, Chang Gung University, Taoyuan City, Taiwan
5 Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Chiayi, Taiwan
6 Department of Physical Therapy, Faculty of Medicine, University of British Columbia, Vancouver BC, Canada, V6T 1Z3
7 Centre for Hip Health and Mobility, Vancouver Coastal Health Research Institute, Vancouver BC, Canada, V5Z 1M9

*Corresponding authors: arvin@ece.ubc.ca; peymans@ece.ubc.ca

Abstract

Accurate real-time tracking of dexterous hand movements and interactions has numerous applications in human-computer interaction, the metaverse, robotics, and tele-health. Capturing realistic hand movements is challenging because of the large number of articulations and degrees of freedom. Here, we report accurate and dynamic tracking of articulated hand and finger movements using stretchable, washable smart gloves with embedded helical sensor yarns and inertial measurement units. The sensor yarns have a high dynamic range, responding to strains as low as 0.005 % and as high as 155 %, and show stability during extensive use and washing cycles. Using multi-stage machine learning, we report average joint angle estimation root mean square errors of 1.21 and 1.45 degrees for intra- and inter-subject cross-validation, respectively, matching the accuracy of costly motion capture cameras without their occlusion or field-of-view limitations. We report a data augmentation technique that enhances robustness to noise and sensor variations. We demonstrate accurate tracking of dexterous hand movements during object interactions, opening new avenues of applications including accurate typing on a mock paper keyboard, recognition of complex dynamic and static gestures adapted from American Sign Language, and object identification.

1 Introduction

Real-time tracking of hand movements [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] has significant applications in human-computer interaction (HCI), electronic gaming, the metaverse and augmented reality (AR) [4, 11, 12], rehabilitation [4, 13, 14, 15, 16], sports training, robotics, and tele-surgical applications [4, 11]. Fueled by recent advances in machine learning (ML) and flexible electronics [1, 2, 3, 4, 5, 12, 17], significant progress has been reported for tracking or gesture recognition using both computer vision (CV) [4, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] and wearable technologies [2, 3, 4, 5, 12, 13, 14, 17, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]. Fixed and costly motion capture camera systems with markers are often used as the gold standard for detailed, articulated hand and finger tracking. CV-based solutions using one or more cameras, either attached to the user’s headset [6, 15, 27] or placed in a specific location [15, 28], have been demonstrated as lower-cost consumer solutions. Both motion capture and CV-based technologies are spatially limited to the field of view of the cameras and face major challenges due to occlusion by objects, hands or other body parts, poor lighting, and background noise [4, 11, 15]. Current wearable technologies are primarily used for gesture recognition in the form of gloves [1, 3, 5, 12, 39], wrist bands [25], or arm sleeves [2, 17]. Researchers have reported the integration of different sensors, including surface electromyography (sEMG) electrodes [2, 17], pressure sensors [3, 5, 6, 7, 8, 9, 10, 40], inertial measurement units (IMUs) [4], magnetic sensors [4], and optical sensors [28, 41]. However, most of these wearable devices detect only specific gestures with limited accuracy and have not addressed challenges related to the reliability, accuracy, and washability of the device [1, 4]. Current solutions have strict requirements for the placement of sensors directly on the user’s hand and do not address variations in the electrical and mechanical properties of the sensors or in their fit to the users [1, 2, 4, 17]. These factors, in addition to the lack of washing or sanitization methods, limit the practical usability and accuracy of these solutions [1, 4, 42].

In this work, we report for the first time accurate dynamic tracking of hand movements, articulated for all finger and wrist joints, using stretchable, wireless and washable smart textile gloves embedded with stretchable helical sensor yarns (HSYs), IMUs and stretchy interconnects. Using our multi-stage ML algorithms, we report average joint angle estimation root mean square errors (RMSEs) of 1.21 and 1.45 degrees for intra- and inter-subject cross-validation, respectively, going well beyond the accuracy of published wearable devices and CV systems [1, 2, 3, 4, 5]. Our results indicate that the proposed smart gloves and ML algorithms provide a tool for learning dexterous hand and finger movements that goes beyond conventional gesture recognition and competes in accuracy with costly motion capture systems, without their limitations of field of view, long setup times (e.g., camera calibration and marker setup) and sensitivity to occlusion and image noise, which are highly prevalent in hand tracking applications. Owing to the high dynamic range and reliability of the HSYs in response to stretches and pressures at the fingertips, as well as our ML algorithms, we demonstrate reliable tracking of complex hand and finger movements during interaction with objects, which is not practical with camera systems because of occlusion by objects and fingers. We also demonstrate a data augmentation technique that roughly doubles the robustness of our results in the presence of sizable variations in the performance of the sensors and their fit to the subject. Based on these results, we demonstrate complex potential applications such as dynamic tracking of hand and finger movements, accurate (97.80 %) typing on a mock paper keyboard, highly accurate dynamic gesture recognition (94.05 % and 97.31 % inter- and intra-subject cross-validation accuracy, respectively, for 50 gestures), static gesture recognition (94.60 % and 97.81 % inter- and intra-subject cross-validation accuracy, respectively, for 48 gestures) as well as object recognition from grasp patterns (90.20 % and 95.02 % inter- and intra-subject cross-validation accuracy, respectively, for 34 objects). Table 1 summarizes the overall performance parameters of our system compared with other published works. We believe our stretchable smart textile glove and ML algorithms can unlock new avenues in HCI, movement and therapy assessment in remote health, as well as applications in animation and the metaverse for learning dexterous human hand functions and interactions.

2 Smart Textile Gloves

Fig. 1a demonstrates the main functionality of the smart textile glove, including joint angle estimation and detection of grasp pressure during interaction with objects. Fig. 1b shows a photograph of our glove with a schematic overlay of the embedded helical sensor yarns (HSYs), located at each finger joint, the fingertips, the palm and the thumb joints. Wavy 3D stretchable interconnects connect all sensors to a wireless processing board with a rechargeable battery, and two 9-axis degrees-of-freedom (DOF) IMUs are integrated on the dorsal side of the hand and within the textile on top of the forearm. As discussed in the next section, the HSYs are integrated on the surface of a stretchable internal glove fabric and connected using wavy insulated stretchable interconnects. The gloves are then covered with another layer of stretchable fabric and sewn together to provide reliable, durable and accurate operation. The HSYs can detect local deformations in the fabric, including stretches and pressures caused by the movements of joints and fingers, or forces when interacting with objects or when pressed against one’s own hand tissue (Fig. 1c). The two 9-DOF IMUs enable highly accurate tracking of wrist joint quaternion angles.

Fig. 1d shows the schematic diagram of the system model. We used a custom-made wireless processing board (Texavie) that consists of amplifiers, multiplexers, analog-to-digital converters (ADCs), microprocessors and Bluetooth low-energy (BLE) wireless transmission modules, which interact with all the sensors and IMUs and transmit data to an external receiver. We use an iOS mobile app (Texavie) on an iPhone or iPad (Apple), or a PC, as a data gateway and run an ML data pipeline that performs all data processing and ML models, including the GlovePoseML model, 3D visualization (Fig. 1e), contact detection, and gesture and object detection algorithms. The ML-based hand joint angle estimation algorithm, GlovePoseML, was validated against a motion capture system, as explained in the next sections.
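To make the flow of this pipeline concrete, the following Python sketch buffers incoming sensor frames into a two-second window, normalizes them and runs a stand-in predictor. The channel count, the normalization scheme and the stub functions are illustrative assumptions rather than the deployed implementation; in practice, the estimated angles would then be forwarded (e.g., over a WebSocket) to the visualization or demonstration applications.

# Minimal, hedged sketch of the receive -> window -> normalize -> predict loop.
# 20 Hz sampling and the 2 s history follow the paper; the 33-channel frame
# (25 HSYs + 2 quaternions) and the stub functions are assumptions.
from collections import deque
import numpy as np

RATE_HZ, WINDOW_S, N_CH = 20, 2, 33
WINDOW = RATE_HZ * WINDOW_S                # 40 frames per inference

def predict_joint_angles(window):          # stand-in for the trained GlovePoseML model
    return np.zeros(22)

def stream_frames(n_frames=100):           # stand-in for the BLE receiver
    for _ in range(n_frames):
        yield np.random.randn(N_CH)

buf = deque(maxlen=WINDOW)
for frame in stream_frames():
    buf.append(frame)
    if len(buf) == WINDOW:
        x = np.asarray(buf)
        x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)   # per-channel normalization
        angles = predict_joint_angles(x)
        # forward `angles` to the 3D visualization or demo application here
print("last estimate:", angles[:5])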

3 Helical Sensor Yarns

Fig. 2a illustrates the structure of the HSYs, which are critical to the accurate performance of the smart glove in detecting the finger and hand movements used in the machine learning models. The stretchable HSYs consist of an elastic core yarn wrapped with metal-coated nanofibers (NFs) in helical form, as shown in the figure. The HSY structure is completed with a final protective coating of polydimethylsiloxane (PDMS) and silicone as the outer shell. PDMS penetrates deeply into the NF mesh and bonds the NFs together during bending, pressing and stretching, providing high durability and dynamic range for the piezoresistive HSYs. Silicone elastomer is then applied over the yarn to attach the sensors to the fabric substrate, to protect the PDMS layer and to provide stable electrical insulation during washing. All the processing steps are scalable to roll-to-roll manufacturing, including roll-to-roll sputtering, for economically viable fabrication. Fig. 2b and 2c show the structure of a typical HSY before and after the PDMS and elastomer coatings. The NFs form a helical porous layer around the elastic core yarn. With the PDMS matrix binding the fibers together, the contact area of the metalized NFs changes during external stretching/pressing-releasing cycles, resulting in a change of resistance in response to strain [43, 44]. The helical structure of the NFs ensures that the HSYs maintain linear responses over a wide range of strains and stresses. Low-temperature curing of the PDMS layer (~120 μm) was employed for higher tensile strength and stretchability [45], while the elastomer shell further improves the tensile strength of the yarns. We conducted multiple experiments on different parameters of the PDMS coating, such as thickness, curing time and temperature, and different shapes and materials of the mold, to achieve optimum HSY dynamic range, sensitivity and durability. We found that a PDMS thickness of 120-200 μm and a 40-60 μm core HSY provide optimum performance. The HSYs can reach up to 155 % stretchability and ultra-flexibility, as demonstrated in Fig. 2c.

Fig. 2d depicts the sensor response, defined as the change of the yarn resistance relative to its original value, for the HSYs under uniaxial strains ranging from 0.005 to 155 % at a frequency of 1 Hz, highlighting the exceptional dynamic range and uniformity of the HSYs. As the inset shows, the HSYs can accurately respond to external strains as low as 0.005 %, which, to the best of our knowledge, outperforms other published wearable sensors, while still maintaining the response at 155 % strain [46]. In addition, the HSYs respond to strains along both longitudinal and vertical directions. Compressive pressure tests on the sensors, illustrated in Extended Data Figure 1c, demonstrate that the sensors can respond to pressure levels as low as 1.7 kPa and up to 1.2 MPa. The exceptional dynamic range, sensitivity, low hysteresis, high linearity and reliability of the HSYs are key to achieving an accurate response in the smart glove and tracking small joint movements, where the stretch in the glove fabric is expected to be less than 120 %. Fig. 2e displays the time-dependent response of the HSYs to uniaxial straining cycles with different maximum magnitudes of 1 %, 2 %, 5 %, 10 %, 20 %, 40 %, 80 % and 140 %. The dependence of the output signals on the frequency of the straining cycle is shown in Fig. 2f. The HSY sensors demonstrate accurate sensing performance under bending radii down to 2.5 mm and repeated pressure from sharp objects, such as a 0.5 mm stainless steel plate and a 0.6 mm diameter plastic pipette tip, up to about 2.1 MPa.

The response time of the HSYs to external strain stimuli was also characterized to evaluate the latency of the system. Fig. 2g shows exceptional synchronization of the stimuli and output signals, illustrating the possibility of real-time dynamic motion tracking. The durability of the sensors was studied by conducting a 14-hour overnight test at a strain level of 10 % and a frequency of 0.7 Hz (Fig. 2h). Sensitivity of the response to water is demonstrated in Fig. 2i, comparing the response in air and when fully immersed in water. The sensors show less than 10 % degradation in sensitivity over the entire range of strains. The sensors were integrated on the fabric, and stability and washability testing was undertaken over various simulated day-to-day laundry tests, as shown in Fig. 2j. The sensors were first immersed in water at various temperatures for different times, ranging from 5 to 50 minutes, with magnetic stirring at different speeds (80, 160 or 400 rpm). The fabric and sensors were dried in air completely before the sensitivity measurements. Similarly, detergent (dtg) and softener (sft) were added to the water, and the response of the sensors was recorded after they were dried. The sensors show excellent stability after these simulated washing and drying cycles. Finally, we conducted real washing/drying tests by putting the sensors into laundry machines together with a full load of other clothes, to study the performance of the sensors in actual daily usage scenarios, demonstrating stability during laundry cycles. The HSYs demonstrate superior specifications to state-of-the-art strain sensors, including a broad dynamic range [40, 47, 48], fast response time [49, 50, 51], durability in extensive cycling [46] and excellent washability [40, 52]. In addition to the washability of the HSY-embedded fabric, the enclosure box used for the PCB hardware is designed with embedded custom-made waterproofing rubber flanges that block water leakage in normal washing conditions, and the PCB is coated with a protective coating and placed in a plastic pouch. This makes the entire glove system washable after removal of the rechargeable battery, tested over tens of cycles of washing and drying.

4 Machine Learning Model

In this work, we developed an ML model that can dynamically estimate joint angles for all finger joints and wrist with high accuracy. This ML model (GlovePoseML) is the core of the algorithm which is supplemented with other neural network-based models in the output to tune the response for specific applications and demonstrations such as keyboard and object detection. To develop and train GlovePoseML model, we collected a large dataset (over 3,000,000 frames with a sampling rate of 20Hz) from our smart glove with attached markers for gold standard motions capture system from five participants with different hand sizes. The participants perform complex hand transient movements collected for various tasks including random finger movements, grabbing objects and switching between different gestures (e.g., making fist or paper). As shown in Fig. 3f, our GlovePoseML model consists of a 2 layer stacked recurrent neural-network-based bi-directional long-short-term-memory (Bi-LSTM) followed by 2 fully-connected (FC) layers and two activation layers. We use this regression architecture for estimating hand joint angles and tactile information from incoming data. Our model uses a two-second history of normalized sensor and motion capture data as input for training. For inter-subject cross-validation, we select one user’s data as the testing dataset and others as the training dataset, repeat this step for all users, and report the average results. For intra-subject, we performed ten-fold cross-validation over each user’s data and report the average value. With the core model trained, test data can be fed to the model and estimated joint angles are sent through a WebSocket for 3D visualization using Unity software. Fig. 3a demonstrates the linear plot comparison of angles estimated by our ML model for test data and the motion capture gold standard for different finger joint flexion or abduction (if applicable) and wrist joint flexion, abduction and supination. Fig. 3c and 3d report inter- and intra-subject cross-validation results for R2 and RMSE of joint angles, respectively, displaying the model’s accuracy for each joint. More detailed results for the accuracy values can be found in Supplementary Table 1. In particular, the results demonstrate high accuracy of 1.45 (min: 0.69, max: 2.71) and 1.21 (min: 0.36, max: 2.26) degrees RMSE for inter- and intra-subject cross-validation results, respectively. We have demonstrated that our system operates with accuracy in scenarios where current CV methods and wearable devices provide limited or low-accuracy results, such as random finger movement in complex gestures (Fig. 3b-i), visual occlusion by objects (Fig. 3b-ii) and dark conditions (Fig. 3b-iii), shown in detail in our demo video. Such high accuracy values arise from several factors including the stability and dynamic range of our HSYs and reliable positioning of the HSYs on the joints in stretchable smart gloves, leading to similar responses and trends among different subjects and different sessions and accurate performance of our ML algorithm. Our results indicate that our smart glove and ML model can potentially replace high-end, bulky and expensive motion capture systems and is not prone to common issues associated with these systems including losing track of markers due to occlusion, light noise or marker swap.
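A minimal PyTorch sketch of a GlovePoseML-style regressor is shown below: a two-layer stacked Bi-LSTM over a 2-second (40-frame) window of normalized sensor data, followed by two fully connected layers with activations. The hidden size, the 33-channel input (25 HSYs plus two quaternions) and the 22-angle output are illustrative assumptions, not the published hyperparameters.

# Hedged sketch of a GlovePoseML-style architecture (assumed sizes).
import torch
import torch.nn as nn

class GlovePoseRegressor(nn.Module):
    def __init__(self, n_channels=33, n_joints=22, hidden=128):
        super().__init__()
        # 2-layer stacked bidirectional LSTM over the sensor window
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        # two FC layers with activation layers, as in Fig. 3f
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, 128), nn.ReLU(),
            nn.Linear(128, n_joints), nn.Tanh(),   # outputs normalized joint angles
        )

    def forward(self, x):             # x: (batch, 40 frames, n_channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # regress angles from the last time step

# one forward pass on a dummy batch of 2 s windows sampled at 20 Hz
model = GlovePoseRegressor()
angles = model(torch.randn(8, 40, 33))   # -> (8, 22) joint-angle estimates
print(angles.shape)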

5 Data Augmentation Technique

In practical applications, the smart gloves and the embedded HSYs exhibit small but unavoidable variations in their resistance values and stretch sensitivities, and in how they fit different subjects or the same subject in different sessions. Drawing inspiration from how natural systems adapt to changes in their external environment, we developed a data augmentation technique to enhance robustness to these small variations in the HSYs, their placement and their fit to the subjects. As shown in Fig. 3e, the pre-training algorithm uses data augmentation to duplicate (i) the original training data and apply expected variations to it, including (ii) random sensor amplitude scaling or DC shift, (iii) random sensor masking, or (iv) additive noise. We then use the augmented dataset to train the GlovePoseML model to enhance robustness. Fig. 3g and h show the average R2 and RMSE, respectively, for the original model as well as for the model incorporating the data augmentation technique. These results highlight that the proposed pre-training is highly effective in improving the accuracy of the system (~2× reduction in average RMSE) in the presence of unavoidable variations. This step enables our ML model to capture invariant representations of the movement accurately and also reduces the need for excessive calibration and retraining when the performance or fit varies. By developing this pre-training method (Supplementary Algorithm 1), we significantly improve the robustness of our smart glove in comparison to published works that are typically made in the lab and mostly tested by attaching sensors to the joints of subjects in a highly controlled and unrepeatable fashion [3, 4].

6 Application Examples

We rely on our dexterous hand movements to perform everyday tasks, from manipulating objects and using a computer to communicating with each other [4]. To demonstrate possible applications of this work, we present six examples of our smart gloves and ML system: capturing complex real-time hand and finger poses and movements, typing on a mock keyboard, tracking movements in the air, interactions with and grasping of objects, and dynamic and static gesture recognition. These applications demonstrate how achieving dynamic accuracy can open up applications for our smart gloves and ML system.

For the first example, we report dynamic (ML response time of ~5 ms when processing data on a regular PC with an Intel Core i9-9900K CPU @ 3.5 GHz and 32 GB DDR4 RAM) articulated tracking of finger movements in complex dynamic gestures, as shown in the demonstration video. As seen, the system can follow finger movements and all finger and wrist joints with a high level of accuracy. Once the model is trained, the accuracy is not affected in dark conditions or when the hand is grasping a ball, which creates visual occlusion for the motion capture cameras and makes them practically ineffective. This dynamic performance and accuracy is a significant improvement over previously reported works in the literature and can find applications in animation, the metaverse and tele-operation (a summary of the performance of our glove system versus other similar systems can be found in Table 1).

For the second example, we demonstrate typing on a mock keyboard printed on a piece of paper laid on a table, which serves as a guide for the user during typing (Fig. 4a). We employed the GlovePoseML model trained for both hands and added two FC layers (Fig. 4c), along with a click detection algorithm running on the response of the HSYs at the fingertips, as shown in Fig. 4d. Inter-session cross-validation of the system’s accuracy is displayed by different colors in Fig. 4b and shows an average accuracy of 97.80 % for the prediction of the typed letters. As shown in the demonstration video, this accuracy can support complex keyboard functions, such as holding shift for capital letters, not demonstrated in previous works. Detailed accuracy results can be found in Extended Data Figure 3.

For the third example, we considered 3D drawing in the air using the glove, as shown in Fig. 4e. In this scenario, users can navigate an iOS mobile interface application, as shown in the demonstration video, and use their wrist movements to draw with different colors selected according to which finger pinches the thumb. We report errors of 2.48, 2.34 and 2.54 degrees for estimating the angle of wrist flexion or extension (Fig. 4f), abduction or adduction (Fig. 4g), and supination or pronation (Fig. 4h), respectively, compared with the motion capture system.

For the fourth and fifth examples, we focused on dynamic (Fig. 5a) and static (Fig. 5d) gesture recognition. Some gestures, such as those shown in Fig. 5d (static gestures 5, 13, 25, 26, 31 and 32), exhibit high similarity, making them very challenging for current CV algorithms to distinguish. We employed the GlovePoseML model for each hand and added two FC layers to map the finger joint angles to the list of 50 dynamic gestures or 48 static gestures featuring complex finger and wrist poses. We trained the added two layers while keeping the GlovePoseML models unchanged (Fig. 5j). We report inter- and intra-subject cross-validation accuracies of 94.05 % and 97.31 %, respectively, for detecting the 50 dynamic gestures (more details in Extended Data Figure 4), and inter- and intra-subject cross-validation accuracies of 94.60 % and 97.81 %, respectively, for the 48 static gestures (more details in Extended Data Figure 5). To visualize the effectiveness of our classification algorithm, we concatenated the outputs from the GlovePoseML models for both hands and employed the t-Distributed Stochastic Neighbor Embedding (t-SNE) method to visualize the data derived from these concatenated outputs (Fig. 5b for dynamic and Fig. 5e for static gestures). Fig. 5c and Fig. 5f show the high sensitivity of our method for each gesture.
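A hedged PyTorch sketch of this transfer-learning setup is given below: the pretrained backbone is frozen and only two added FC layers are trained to map estimated joint angles to gesture classes. GlovePoseRegressor refers to the earlier architecture sketch; the layer sizes, the single-hand input and the random batch are illustrative assumptions (in the paper, outputs from both hands are used and pretrained weights would be loaded).

# Freeze the GlovePoseML-style backbone and train only a two-FC-layer head.
import torch
import torch.nn as nn

n_gestures = 50                                   # e.g., the dynamic-gesture set
backbone = GlovePoseRegressor()                   # pretrained weights would be loaded here
for p in backbone.parameters():
    p.requires_grad = False                       # keep the core model unchanged

classifier = nn.Sequential(nn.Linear(22, 64), nn.ReLU(),
                           nn.Linear(64, n_gestures))

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 40, 33)                        # a batch of 2 s sensor windows
labels = torch.randint(0, n_gestures, (8,))
logits = classifier(backbone(x))                  # only the head receives gradients
loss = criterion(logits, labels)
loss.backward()
optimizer.step()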

For the sixth application, we focused on detecting objects based on the participants’ grasp patterns. We employed the GlovePoseML model and added two FC layers mapping finger joint angles to a list of 34 objects (Fig. 5g) with slight differences in shape, weight, stiffness and grasp pattern. Here, the users were instructed to mimic a grasp pattern for each object using an instruction video. We trained these added two layers while keeping the core models unchanged (Fig. 5j). We report inter- and intra-subject cross-validation accuracies of 90.20 % and 95.02 %, respectively, in detecting the objects from an individual’s grasp (Fig. 5e, more details in Extended Data Figure 6). The algorithm makes use of the different hand poses used for objects (e.g., objects 11 and 34) as well as slight differences in pose and grasp pressure patterns for objects with very similar hand poses (e.g., objects 9, 10 and 11) for accurate recognition. The distribution of clusters representing different objects obtained by t-SNE for different subjects in Fig. 5h, as well as the object-wise sensitivity of our system reported in Fig. 5i, demonstrates the effectiveness of our classification method.
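For illustration, the cluster plots of Fig. 5b, 5e and 5h can be reproduced in spirit with a short scikit-learn/t-SNE script like the one below. The feature dimension (2 × 22 concatenated joint angles), sample counts and the random data standing in for real GlovePoseML outputs are assumptions.

# Hedged sketch of a t-SNE embedding of concatenated GlovePoseML outputs.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

n_samples, n_classes = 680, 34                    # e.g., 20 grasps per object, 34 objects
features = np.random.randn(n_samples, 44)         # stand-in for 2 x 22 concatenated joint angles
labels = np.repeat(np.arange(n_classes), n_samples // n_classes)

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=8, cmap="tab20")
plt.title("t-SNE of GlovePoseML output features")
plt.show()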

7 Conclusion

We report accurate, stretchable, washable and multi-modal smart textile gloves, with embedded stretchable helical sensor yarns, IMUs and interconnects, which can dynamically track the movements of finger and hand joints and grasp forces during object interactions. The lightweight and insulated sensor yarns show a high dynamic range in response to strains as low as 0.005 % and as high as 155 %, low hysteresis, and high stability during extensive use and washing cycles. Using machine learning algorithms, we demonstrate high accuracy for the smart gloves in estimating hand joint angles, with average RMSEs of 1.21 and 1.45 degrees compared with the gold-standard motion capture system for intra- and inter-subject cross-validation studies, respectively. A data augmentation technique is developed that enhances the robustness of the system to variations in the performance of the sensors, noise, and fit to the subjects. The reported smart gloves and machine learning system highlight the potential for high-accuracy tracking of dexterous hand movements and object interactions without the field-of-view, occlusion and multi-user challenges associated with camera-based systems. We highlight the performance of our system by demonstrating real-time tracking of complex hand movements with minimal delay, as well as accurate detection of 48 static and 50 dynamic gestures adapted from American Sign Language, typing on a random surface as a mock keyboard, and recognition of objects from grasp pose and forces. These results lay the foundation for learning realistic, dexterous human hand movements and object interactions to create new experiences in metaverse, robotics control, tele-surgery, telerehabilitation and gaming applications, as well as for the analysis of muscle function and grasp forces in the assessment of patients with hand impairments. By extending the volume of data from different movements, interactions and gestures in various scenarios, increased depth and knowledge of dexterous hand functions will empower future realistic digital experiences and human-robot interactions.

8 Methods

8.1 Fabrication

The HSYs were fabricated using needleless electrospinning (NS LAB Nanospider, Elmarco) to deposit NFs directly onto a core spandex polyurethane yarn (3600 denier, i.e., the weight in grams of 9000 m of yarn) with a diameter of 0.5 mm (Crystal Tec, Korea) in a roll-to-roll fashion. Before electrospinning, the core yarn was cleaned using oxygen plasma (Tergio-plus) as well as acetone, isopropanol (IPA) and deionized (DI) water. Polyacrylonitrile (PAN) (Scientific Polymer Products) was dissolved in dimethylformamide (DMF) (10 wt %) (Fisher) and stirred at 60 °C for 24 h to form a homogeneous solution for electrospinning under a DC bias of 1.5 kV/cm. The core yarn was held between the bias wires of the electrospinning system and rotated around its axis (60 rpm) using an electric motor to form a helical coating of NFs with an average diameter of 300 nm (standard deviation of 25 nm). After NF deposition, the yarns were coated with a 40-50 nm conformal layer of gold by plasma sputtering (Edward) to form a conductive helical sensor yarn (HSY). Ag-coated nylon threads (Noble Biomaterials) were used as contact electrodes, knotted to the sensor yarn at the desired locations over the length of the yarn and bound with silver paste (Pelco). The Ag-coated nylon thread (denier: 100; linear resistance: 0.8 ohm/cm) is a 3-ply yarn; the single plies are twisted at 16 twists/inch in the S direction and the 3 plies are twisted together at 14 twists/inch in the Z direction. The yarns were cured at 70 °C for 30 min to reach optimal mechanical and electrical robustness. The sensor yarns were then encapsulated with PDMS (Dow Corning) by pouring and completely curing at 40 °C for 24 hours using a tube-shaped mold to form a tubular all-around insulation.

The smart glove was prepared using a stretchable single jersey plated weft-knitted fabric (45 % nylon, 45 % polyester and 10 % spandex), which was patterned and cut to the desired glove patterns. The fabric has the following geometrical and mechanical properties (ASTM-D5035 standard): stitch density (2142 loops/square inch), thickness (0.61 mm), weight (280 g/square meter, GSM), breaking force (320 N), breaking elongation (110 %) and elastic recovery (94.8 %). The sensor locations and their corresponding wiring were designed for the given size of the glove. A wiring bundle was made for each branch of sensors routed to each finger (3-5 sensors per branch) by twisting thin (34-38 AWG) insulated copper wires and mounting the bundle on the fabric in a wavy form using a sewing machine to achieve the desired stretchability of the interconnects. The yarn sensors were placed and attached to the designated locations on the fabric, corresponding to the joint sensors and fingertips as shown in Fig. 1a, using an elastomer (Dragon Skin, Smooth-On™). The fabric pieces with sensors were cured at room temperature for 30 min, and the inside and outside fabrics were then sewn together to form the final smart gloves. The interconnects are connected to a flexible printed circuit (FPC) that plugs into the main board and box. The fabrication process is illustrated in Extended Data Figure 1a.

8.2 Characterization of the HSYs

Tensile testing (INSTRON 5969) was performed to investigate the mechanical performance of the HSYs during stretching at strains > 2 % and during pressing. The top end of the HSY sample was fixed on the system dynamometer to measure and control the force applied to the sensor, and the other end was anchored onto a steady holder. By adjusting the frequency and amplitude, the output under different strains was recorded. For compression studies, a 5 × 5 mm metal indenter was used to tap on the center of the sensors. Low-strain tensile testing (< 2 %) was conducted using a micro actuator (Zaber Technologies). The current and voltage measurements were acquired simultaneously with a Keysight B1500A semiconductor analyzer.

To test the durability of the HSYs against daily laundry, the gloves were washed and dried using a typical laundry machine and detergents (Samsung front load, Samsung electric compact dryer, Purex ® liquid laundry detergent, Downy ultra fabric softener liquid). The electrical and mechanical properties of the gloves were monitored after each test.

8.3 Electronic Hardware and Software

The movement data is acquired at a sampling rate of 20 Hz using a custom PCB (Texavie) containing multiplexers, pre-amplifiers, and analog-to-digital converters (ADC) to measure 25 HSYs with a 12-bit resolution and two IMUs (Bosch BNO055) controlled by a centralized microprocessor (Nordic Semiconductor) with Bluetooth low energy (BLE) connectivity. The power consumption details are included in Extended Data Figure 2.

The data collection gateway is implemented in a mobile application written in Swift 5 (Texavie), which collects the data transmitted from each pair of gloves and stores them in a database. It also sends start/stop commands through a web socket to a Python script running on a personal computer for synchronization with the gold-standard motion capture cameras (Optitrack). To compensate for drift of the baseline resistance, we developed a dynamic baseline correction module in the data acquisition software that subtracts from the sensors’ resistance values their average over a window of size n = 400 samples (20 seconds).
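A minimal NumPy sketch of this baseline-correction step is shown below, assuming a causal moving average over the last n = 400 samples (20 s at 20 Hz); the deployed firmware/app implementation may differ in detail.

# Hedged sketch: subtract a running average (window n = 400) from each channel.
import numpy as np

def baseline_correct(signal, n=400):
    """Subtract the mean of the most recent n samples from each sample."""
    corrected = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        start = max(0, i - n + 1)
        corrected[i] = signal[i] - signal[start:i + 1].mean()
    return corrected

# example: a slowly drifting resistance trace recovers a zero-centred baseline
t = np.arange(2000) / 20.0                     # 100 s at 20 Hz
raw = 1000 + 0.5 * t + np.random.randn(2000)   # drift + noise (ohms, illustrative)
print(baseline_correct(raw)[-5:])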

8.4 Data Collection and Training

We collected data from both hands of five healthy right-handed participants (two female and three male subjects) between the ages of 15 and 35. In this study, we used one stretchable glove pair for all the subjects, which provided a conformal fit for each subject during the entire data collection session. Informed consent was obtained from all participants (ethics approval obtained from the UBC Clinical Research Ethics Board, H21-03021, entitled “iGRASP: Phase 2 (Version 1.0)”). Our data collection consisted of three main sections, collected from both the gold-standard motion capture system and the proposed glove pair: 1) tracking hand and wrist joint angles during random movements or keyboard typing, 2) dynamic and static hand gesture recognition, and 3) grasping different objects. Each section contains a list of movements that the subjects were asked to follow using a prerecorded video guiding them through a set of gestures.

8.4.1 Articulated Dynamic Hand Tracking

We asked participants to follow a set of random finger and wrist movements. To mitigate task-related bias, subjects were asked to follow a set of prerecorded videos. We recorded over 3,000,000 frames of data, including data from the HSYs and IMUs as well as the actual finger and wrist joint flexion and supination angles from the motion capture system. The wrist quaternion angle data are derived from the two IMUs, downsampled to match the data rate of the HSY data, and then fed as input to GlovePoseML. We chose to fuse both modalities in a single network because of their high correlation with joint angles, their similar characteristics, and computational efficiency; treating the data from the two sensor types in separate network branches did not show significant performance improvements.
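The sketch below illustrates this fusion step with simple decimation and channel-wise concatenation; the 100 Hz IMU rate, the 2 × 4 quaternion layout and the random arrays are assumptions used only for illustration.

# Hedged sketch: downsample IMU quaternions to the 20 Hz HSY rate and fuse.
import numpy as np

hsy = np.random.randn(200, 25)            # 10 s of 25 HSY channels at 20 Hz
imu = np.random.randn(1000, 8)            # 10 s of 2 quaternions (2 x 4) at an assumed 100 Hz

factor = len(imu) // len(hsy)             # 100 Hz -> 20 Hz, i.e., keep every 5th sample
imu_ds = imu[::factor][:len(hsy)]         # simple decimation to match the HSY rate

fused = np.concatenate([hsy, imu_ds], axis=1)   # (200, 33) input frames for GlovePoseML
print(fused.shape)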

We developed a Bi-LSTM-based recurrent regression network architecture, which maps a 2-second window of sensor values to one set of hand joint angles. We used the Adam optimizer with a learning rate of 0.0001 and trained our model for 100 epochs with the Smooth L1 loss function (β = 0.5). More comprehensive results on the effect of customizing the network architecture on the joint-wise RMSE and R2 can be found in Supplementary Data Figures 10 and 11.
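The corresponding training configuration can be sketched as below, using the stated optimizer, learning rate, loss and epoch count; the synthetic dataset, batch size and GlovePoseRegressor (from the earlier architecture sketch) are assumptions.

# Hedged training-loop sketch: Adam (lr = 0.0001), Smooth L1 (beta = 0.5), 100 epochs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

windows = torch.randn(256, 40, 33)        # 2 s windows of normalized sensor data (synthetic)
angles = torch.randn(256, 22)             # normalized ground-truth joint angles (synthetic)
loader = DataLoader(TensorDataset(windows, angles), batch_size=32, shuffle=True)

model = GlovePoseRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.SmoothL1Loss(beta=0.5)

for epoch in range(100):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()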

To enhance robustness, we augmented the collected glove data using three data transformations: 1) randomly masking one to three channels, 2) randomly adding Gaussian noise (with mean μ = 0 and standard deviation σ = 0.06) to one to three channels, and 3) scaling one to three channels by a random scalar between 0.5 and 1.5. We then trained our ML model in a multi-task learning setting. More details can be found in Supplementary Algorithm 1. More comprehensive results on the effect of the noise level and the number of masked or scaled sensors on GlovePoseML trained on normal versus augmented data, in terms of average RMSE and R2, can be found in Supplementary Data Figure 12. A demonstration video showcasing dynamic articulated tracking of finger movements can be found in Supplementary Video 1.
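The three transformations can be sketched as below for a single (frames, channels) window; the window shape and random seed are assumptions, while the noise level, scaling range and one-to-three-channel selection follow the description above.

# Hedged sketch of the three augmentation transforms.
import numpy as np

rng = np.random.default_rng(0)

def _pick_channels(x):
    return rng.choice(x.shape[1], size=rng.integers(1, 4), replace=False)  # 1 to 3 channels

def mask_channels(x):
    x = x.copy(); x[:, _pick_channels(x)] = 0.0; return x

def add_noise(x, sigma=0.06):
    x = x.copy(); ch = _pick_channels(x)
    x[:, ch] += rng.normal(0.0, sigma, size=(x.shape[0], len(ch))); return x

def scale_channels(x):
    x = x.copy(); ch = _pick_channels(x)
    x[:, ch] *= rng.uniform(0.5, 1.5, size=len(ch)); return x

window = rng.standard_normal((40, 33))          # one 2 s sensor window (assumed shape)
augmented = [f(window) for f in (mask_channels, add_noise, scale_channels)]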

8.4.2 Keyboard Typing Detection

We asked the users to click on a single key of the keyboard at a time. We developed a click detection algorithm based on the HSY sensors at the fingertips (more details in Supplementary Algorithm 2). We use two FC layers to map the output of the GlovePoseML model to the ten keys that the fingers maneuver over in the ten-finger typing method. The keyboard was marked using four motion capture markers placed on the corners of the printed keyboard, which were used to calibrate the key locations as our ground truth. At the start of data collection, we asked the participants to pose their hands over the keyboard and hold this position for 10 seconds as the resting position of the ten-finger typing method. We then asked them to type a paragraph containing 100 characters on the printed keyboard to train the two output layers. We collected five different trials and performed five-fold cross-validation. A demonstration video showcasing typing on a mock paper keyboard can be found in Supplementary Video 2.

8.4.3 3D Drawing

We developed a mobile interface app using Swift 5. The cursor’s location on the screen is computed from the relative angle between the two IMUs’ quaternion values. Users can choose different colors by pinching the thumb with different fingers, detected using a touch detection algorithm similar to the click detection based on the HSYs at the fingertips. A demonstration video showcasing 3D drawing in the air can be found in Supplementary Video 3.
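A minimal sketch of deriving a 2D cursor from the relative orientation of the two IMUs is shown below: the forearm quaternion is inverted and composed with the hand quaternion, and two of the resulting Euler angles drive the horizontal and vertical cursor motion. The gain, axis conventions and example quaternions are illustrative assumptions, not the app’s actual mapping.

# Hedged sketch: cursor position from the relative rotation of two IMU quaternions.
import numpy as np

def quat_mul(q, r):
    w1, x1, y1, z1 = q; w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def cursor_from_quats(q_hand, q_forearm, gain=800.0):
    q_rel = quat_mul(quat_conj(q_forearm), q_hand)      # isolate the wrist rotation
    w, x, y, z = q_rel / np.linalg.norm(q_rel)
    yaw = np.arctan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))  # mapped to horizontal motion
    pitch = np.arcsin(np.clip(2*(w*y - z*x), -1, 1))    # mapped to vertical motion
    return gain * yaw, gain * pitch

q_forearm = np.array([1.0, 0.0, 0.0, 0.0])              # identity orientation
q_hand = np.array([0.996, 0.0, 0.087, 0.0])             # ~10 degrees about the y axis
print(cursor_from_quats(q_hand, q_forearm))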

8.4.4 Dynamic and Static Hand Gesture Recognition

We recorded a set of videos each containing one gesture and rest position. The gesture was shown for 5 seconds, and users had 5 seconds to rest between each trial. We collected 20 sets of data for each gesture in total. We trained the last two output layers while freezing GlovePoseML models. Inter-session cross-validation results can be found in Supplementary Data Figures 1, 2, and 3, for dynamic gesture recognition, and Supplementary Data Figures 4, 5, and 6 for static gesture recognition. Also, demonstration videos showcasing static and dynamic hand gesture recognition can be found in Supplementary Videos 4 and 5.

8.4.5 Object Detection From the Grasp

We recorded a set of videos, each showing the grasp of one object followed by a resting position. Grasps were shown for 5 seconds, and users had 5 seconds to rest between trials. We collected 20 sets for each object in total. For each dataset, we created tensors mapping windows of sensor data to the object the user grasped. We combined all tensors and shuffled the data for each subject to create a training dataset. We employed the hand joint angle estimation architecture trained earlier and added two FC layers mapping hand joint angles to the list of objects. During the training phase, we updated the last two layers while freezing the rest of the network. More results can be found in Supplementary Data Figures 7, 8, and 9. More details about the properties of the objects used in this study can be found in Supplementary Table 2. A demonstration video showcasing the object detection algorithm can be found in Supplementary Video 6.

Data Availability

Data supporting this study’s findings are available from the project page, which contains detailed explanations of all the datasets: https://feel.ece.ubc.ca/SmartTextileGlove/, as well as a direct link to a Google Drive repository from which the datasets can be downloaded: https://drive.google.com/drive/folders/1HWjG_6Y2G7XNEeI19Aids0g-dcufncGJ?usp=share_link.

Code Availability

The code supporting this study’s findings is available at https://github.com/arvintashakori/SmartTextileGlove [53].

Acknowledgements

The authors would like to acknowledge the support of NSERC-CIHR (CHRP 549589-20, CPG-170611) awarded to PS, NSERC Discovery (RGPIN-2017-04666 and RGPAS-2017-507964) awarded to PS, NSERC Alliance (ALLRP 549207-19) awarded to PS, Mitacs (IT14342 and IT11535) awarded to PS, and CFI, as well as the financial and technical support of Texavie Technologies Inc. and their staff.

Author Contributions Statement

AT and PS developed system model and implemented the learning algorithm, iOS Mobile application, data pipeline, PC-based data acquisition software, Unity application, and firmware parts. ZJ and SS designed yarn-based strain sensors. ZJ, AS, SS, HN, KL, and PS developed hardware and fabricated gloves and sensors. AT performed the experiments and analysis with help and input from others. CN helped with PCB box fabrication and drawing sensor schematic. PS, AS, JJE, CY, and ZJW oversaw the project. All authors contributed to writing of the manuscript and analysis of results.

Competing interests

PS, AT, ZJ, AS, SS and HN have filed a patent based on this work under the US provisional patent application no. 63/422,867. The remaining authors declare no competing interests.

Table 1. Comparison of the overall performance parameters of this work with previously published wearable systems.

| Parameter | This Work | Wen et al. (2021) [12] | Luo et al. (2021) [1] | Moin et al. (2021) [2] | Zhou et al. (2020) [3] | Hughes et al. (2020) [8] | Glauser et al. (2019) [5] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Form | Smart textile glove | Smart textile glove | Smart textile apparel | Flexible PCB armband | Smart textile glove | Smart textile gloves | Smart latex glove |
| Sensors | 25 HSYs + 2 × 9-axis IMUs | 15 triboelectric sensors | 32 × 32 yarn-based piezoresistive pressure sensors | 64 sEMG sensors | 5 yarn-based strain sensors | 120 resistive knit sensors + 6 fluidic pressure sensors | 44 capacitive silicone sensor array |
| Output | Finger + wrist joint angles; keyboard typing; 3D drawing; 50 dynamic gestures; 48 static gestures; 34 objects; force | 50 dynamic gestures + 20 sentences | Tactile feedback; heart rate; 10 handwritten letters | 21 gestures | 11 gestures | Finger joint angle; stiffness; temperature; 30 objects; force | Finger joint angles |
| Robustness | Yes | | | | | | |
| Washability | Yes | | | | | | |
| Wireless | Yes | | | | | | |
| Sampling rate (Hz) | 20 | 600 | 14 | 1k | 500 | 16 | 600 |
| ML response time (ms) | 5 | 83 | 100 | 250 | 1000 | Not mentioned | 125 |
| Offline personalized training time | 1 minute | Not mentioned | Not mentioned | Not mentioned | 2 minutes | MLP: ~40 minutes; LSTM: ~3 hours | 2 hours / 20 minutes / 1 minute to calibrate (no GPU needed) |
| Finger joint angle estimation error | Intra: 1.24 deg; Inter: 1.45 deg | | | | | MLP: 6.40 deg; LSTM: 4.78 deg | Intra: 5.8 deg (2 h), 6.5 (20 m); Inter: 6.2 deg (2 h), 7.6 (1 m) |
| Static gesture recognition accuracy | Intra: 97.81 %; Inter: 94.60 % | Not mentioned | | Intra: 92.87 %; Inter: 84.53 % | 3-fold cross-validation: 98.63 % | Not mentioned | Not mentioned |
| Dynamic gesture recognition accuracy | Intra: 97.31 %; Inter: 94.05 % | 50 words: 91.3 %; 20 sentences: 95.0 % | | Not mentioned | Not mentioned | | |
| Typing accuracy | Intra: 97.8 % | Not mentioned | | | | | |
| Object detection accuracy | Intra: 95.02 %; Inter: 90.20 % | | | | | MLP: 99.69 %; LSTM: 98.18 % | |
| Wrist angle estimation error | Intra: 2.09 deg; Inter: 2.45 deg | | | | | | |
Note: Different advanced wearable systems reported in the literature are based on different physiological information (e.g., EMG and tactile), sensing modalities and form factors and serve a variety of applications, so a comparison based only on performance parameters may not be fair.


Fig. 1 Smart Textile Glove. a, Photograph of the smart textile glove demonstrating its functionality in capturing joint angles and grasp pressure during interaction with objects. b, Photograph of the glove with an X-ray schematic showing the embedded HSYs (blue: top, red: bottom), 3D stretchy interconnects (gold lines; solid: top, dashed: bottom), the PCB including the first IMU (IMU1) and other readout and Bluetooth hardware, the battery box, and the second embedded IMU (IMU2), located just above the wrist. c, User wearing a glove pair showing a complex gesture. d, Schematic block diagram of how multiple HSYs and IMUs are connected to the PCB hardware, including amplifiers, analog-to-digital converters (ADCs), a microprocessor and a Bluetooth low energy (BLE) transmitter, and subsequently to an iOS mobile app or a PC that receives the data and passes it to the GlovePoseML model and demonstration apps for different applications. e, Visualization of the user’s complex hand gesture estimated by the ML algorithm, which dynamically follows the movements.


Fig. 2 Helical Sensor Yarns. a, Schematic of HSYs showing the coaxial structure with an elastic spandex core, wrapped with helical metal-coated NFs, and an encapsulating elastomer shell. b, Microscope and SEM photomicrographs of HSYs before shell coating. c, Photographs of HSYs with shell and contact electrodes. d, e, Measurements of sensitivity to different tensile strains and loads, during loading and unloading, displaying exceptional sensitivity down to 0.005 % strain and minimal hysteresis. Inset: sensitivity for strains < 1.0 %. (The data points present the mean value of 20 samples. Error bars derived from the standard deviation are too small to show on the figure.) f, Changes in the resistance response of HSYs to strains up to a maximum of 10 % at various frequencies of 0.2-2 Hz. g, Sensor response accurately following changes in strain from 0 to 10 %. Insets: zoomed views of loading and unloading cycles. h, Mechanical durability test for up to 14 hours of continuous stretch-release cycles. i, Comparison of sensor response in air and underwater. Inset: photograph of the sensor during underwater tests. (The data points present the mean value of 20 samples. Error bars derived from the standard deviation are too small to show.) j, Durability of the sensor response in the smart textile glove during various laundry washing and drying cycles.


Fig. 3 ML and Dynamic Hand Tracking. a, Linear plot comparison of joint angles (A to P, shown in subset) estimated using GlovePoseML versus those measured by motion capture camera and marker system. b, Comparison of visualized estimated hand pose (left) and photographs (right) of complex movements in (i) normal conditions, (ii) when there is occlusion during grasping of a ball, and (iii) low light environment. Average accuracy results for different joints in terms of c, the goodness of fit R2, and d, RMSE. e, Scenarios for data-augmentation: (i) original data, (ii) scaled data, (iii) data with fewer active sensors, and (iv) noisy data. f, Overall architecture of tracking GlovePoseML model, showing the normalization layer, 2-layer stacked Bi-LSTM model, 2 FC layers and activation layers. Comparison of average accuracy results of model trained using normal and augmented dataset in terms of g, goodness of fit R2, and h, RMSE.


Fig. 4 Typing and Drawing Demos. a, Photograph of a user wearing a pair of wireless smart gloves while typing on a mock paper keyboard, which serves as visual feedback for the user. b, Color-coded comparison of the detection accuracy of each key in 10-finger typing on the mock paper keyboard. c, Typical response of an HSY sensor at the tip of a finger as it repeatedly taps and retracts from the surface of the mock paper keyboard. d, Schematic of the two FC and activation layers that work with the GlovePoseML model to detect typing. e, Illustration of a 3D in-air drawing application on an iPad based on two-finger pinches and wrist movements. Comparison between the wrist angle estimated using our smart glove and ML system and the gold-standard motion capture system for tracking f, wrist flexion and extension, g, wrist abduction and adduction, and h, wrist supination and pronation.


Fig. 5 Real-time dynamic/static gesture, and object recognition. a, Pictures of dynamic gestures used for hand gesture recognition. b, Cluster distribution of dynamic gestures from GlovePoseML output layer between users. c, Sensitivity (%) results for dynamic gestures in inter- and intra-subject cross-validation. d, Pictures of static gestures used for hand gesture recognition. e, Cluster distribution of static gestures from GlovePoseML output layer between users. f, Sensitivity (%) results for static gestures in inter- and intra-subject cross-validation. g, Pictures of objects used for object recognition from grasp form study. h, Cluster distribution of objects from GlovePoseML output layer between users. i, Sensitivity (%) results for different objects in inter- and intra-subject cross-validation. j, Schematic of the model used for gesture and object classification, displaying FC and activation layers.


Extended Data Figure 1. Fabrication and Characteristics of HSYs. a, Fabrication process of HSYs and gloves. b, SEM images of the yarn sensors before PDMS coating. c, Sensitivity of the HSY resistance to various compressive pressure values applied at a frequency of 1 Hz. A metal indenter with a size of 5 mm × 5 mm was used to apply pressure normal to the fabric. (The data points present the mean values ± standard deviation of 20 samples.) d, Strain sensitivity of our insulated yarn sensors made from optimized composites of carbon particles with highly stretchable elastomers, demonstrating high stretchability of up to 1,000 % but showing more hysteresis for > 500 % stretch and slower responsiveness due to the softer nature of these sensors in comparison to HSYs. This highlights the superior performance of HSYs for the proposed smart glove real-time applications with less than 120 % maximum stretch of the fabric.


Extended Data Figure 2. Power Consumption. Breakdown of the custom-made wireless board power consumption for different components, including the BLE chip, IMU chips, and all HSYs.


Extended Data Figure 3. Keyboard typing detection. Inter-session cross-validation accuracy results for the keyboard typing detection algorithm.


Extended Data Figure 4. Dynamic gesture recognition. Confusion matrices for a, intra-subject (accuracy: 97.31 %) and b, inter-subject (accuracy: 94.05 %) cross-validation.


Extended Data Figure 5. Static gesture recognition. Confusion matrices for a, intra-subject (accuracy: 97.81 %) and b, inter-subject (accuracy: 94.60 %) cross-validation.


Extended Data Figure 6. Object detection. Confusion matrices for a, intra-subject (accuracy: 95.02 %) and b, inter-subject (accuracy: 90.20 %) cross-validation.

Supplementary Algorithm 1. Data Augmentation Algorithm

Input: labeled dataset D, transformation functions T, number of epochs E
Output: trained multitask model M

D_aug ← [ ]
for (signals, label) in D do
    Y ← [0, 0, ..., 0]
    D_aug.append((signals, label, Y))
    for Transformation in T do
        Y[Transformation] ← 1
        D_aug.append((Transformation(signals), label, Y))
        Y[Transformation] ← 0
    end for
end for
M ← new multitask model
for Epoch in E do
    TrainingMinibatch(model = M, dataset = D_aug, loss = L1(β = 0.5) + BinaryTransformationLoss(), optimizer = Adam(lr = 0.0001))
end for
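
For clarity, the listing below gives a minimal Python/PyTorch sketch of the augmentation loop and the multitask training step described above. It is not the released implementation: the two-headed model interface, the use of SmoothL1Loss (β = 0.5) for the L1 term, and BCEWithLogitsLoss as a stand-in for BinaryTransformationLoss are illustrative assumptions.

```python
# Minimal sketch of Supplementary Algorithm 1 (illustrative, not the released code).
import torch

def augment_dataset(dataset, transformations):
    """Build D_aug: each sample is kept once unmodified (Y = all zeros) and added once
    per transformation, with Y flagging which transformation was applied."""
    augmented = []
    for signals, label in dataset:
        y = torch.zeros(len(transformations))
        augmented.append((signals, label, y.clone()))
        for k, transform in enumerate(transformations):
            y[k] = 1.0
            augmented.append((transform(signals), label, y.clone()))
            y[k] = 0.0
    return augmented

def train(model, augmented_data, epochs, lr=1e-4, beta=0.5):
    """Multitask training: joint-angle regression plus transformation detection."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    pose_loss = torch.nn.SmoothL1Loss(beta=beta)    # stands in for "L1 (beta = 0.5)"
    trans_loss = torch.nn.BCEWithLogitsLoss()       # stands in for BinaryTransformationLoss
    for _ in range(epochs):
        for signals, label, y in augmented_data:
            pred_pose, pred_trans = model(signals)  # assumed two-headed multitask model
            loss = pose_loss(pred_pose, label) + trans_loss(pred_trans, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In practice the augmented samples would be batched with a DataLoader; they are iterated one at a time here only to keep the sketch short.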

Supplementary Table 1. Hand joint angle estimation

| Subject | Metric | Pinky MCP Flex | Pinky MCP Abd | Pinky PIP Flex | Pinky DIP Flex | Ring MCP Flex | Ring MCP Abd | Ring PIP Flex | Ring DIP Flex | Middle MCP Flex | Middle MCP Abd | Middle PIP Flex | Middle DIP Flex | Index MCP Flex | Index MCP Abd | Index PIP Flex | Index DIP Flex | Thumb MCP Flex | Thumb MCP Abd | Thumb IP Flex | Wrist Flex | Wrist Abd | Wrist Sup | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Subject 1 | RMSE | 0.34 | 0.38 | 0.59 | 0.87 | 0.38 | 0.42 | 0.79 | 1.39 | 0.41 | 0.45 | 0.62 | 1.09 | 0.19 | 0.23 | 0.87 | 1.03 | 1.83 | 1.87 | 2.02 | 1.93 | 1.87 | 2.12 | 0.99 |
| Subject 1 | R² | 99.85 | 99.75 | 99.56 | 99 | 99.73 | 99.63 | 99.3 | 98.9 | 99.69 | 99.6 | 99.5 | 99 | 99.54 | 99.4 | 99.2 | 99.1 | 98.62 | 98.5 | 97.6 | 98.2 | 98 | 97.6 | 99.07 |
| Subject 2 | RMSE | 0.41 | 0.47 | 0.65 | 1.01 | 0.37 | 0.43 | 0.7 | 1.43 | 0.35 | 0.41 | 0.68 | 1.2 | 0.2 | 0.26 | 1.01 | 1.01 | 1.94 | 2 | 1.79 | 1.84 | 1.73 | 2.03 | 1 |
| Subject 2 | R² | 99.55 | 99.21 | 99.52 | 99 | 99.7 | 99.36 | 99.2 | 98.7 | 99.45 | 99.1 | 99.4 | 99 | 99.61 | 99.3 | 99.3 | 99.3 | 98.5 | 98.2 | 98.1 | 98.3 | 99.1 | 97.9 | 99.04 |
| Subject 3 | RMSE | 0.89 | 1.02 | 1.07 | 2.47 | 1.49 | 1.62 | 1.56 | 2.25 | 1.24 | 1.37 | 1.9 | 2.41 | 1.01 | 0.94 | 2.34 | 2.89 | 2.73 | 2.66 | 3.02 | 2.53 | 2.46 | 2.57 | 1.93 |
| Subject 3 | R² | 97.35 | 97.04 | 96.84 | 97.4 | 97.25 | 96.94 | 97.1 | 96.9 | 97.23 | 96.9 | 97.4 | 96.4 | 98.08 | 97.8 | 97.4 | 97.2 | 96.89 | 96.6 | 96.1 | 96.2 | 96.7 | 96.1 | 96.99 |
| Subject 4 | RMSE | 0.45 | 0.52 | 0.67 | 1.07 | 0.49 | 0.56 | 0.86 | 1.62 | 0.42 | 0.49 | 0.72 | 1.21 | 0.21 | 0.28 | 1.01 | 1.12 | 2.01 | 2.08 | 2.39 | 1.95 | 1.89 | 2.01 | 1.09 |
| Subject 4 | R² | 99.55 | 99.43 | 99.34 | 98.9 | 99.42 | 99.3 | 99.1 | 98.6 | 99.63 | 99.5 | 99.4 | 98.9 | 99.54 | 99.4 | 99.1 | 99 | 98.34 | 98.2 | 97.6 | 98 | 98.2 | 97.6 | 98.92 |
| Subject 5 | RMSE | 0.4 | 0.49 | 0.64 | 0.98 | 0.41 | 0.5 | 0.78 | 1.48 | 0.39 | 0.48 | 0.67 | 1.17 | 0.2 | 0.29 | 0.96 | 1.05 | 1.93 | 2.02 | 2.07 | 2.12 | 2.06 | 2.2 | 1.06 |
| Subject 5 | R² | 99.65 | 99.31 | 99.47 | 99 | 99.62 | 99.28 | 99.2 | 98.7 | 99.59 | 99.3 | 99.4 | 99 | 99.56 | 99.2 | 99.2 | 99.1 | 98.49 | 98.2 | 97.8 | 98.2 | 98.4 | 98 | 98.98 |
| Intra-subject CV | RMSE | 0.5 | 0.58 | 0.72 | 1.28 | 0.63 | 0.71 | 0.94 | 1.63 | 0.56 | 0.64 | 0.92 | 1.42 | 0.36 | 0.4 | 1.24 | 1.42 | 2.09 | 2.13 | 2.26 | 2.07 | 2 | 2.19 | 1.21 |
| Intra-subject CV | R² | 99.19 | 98.95 | 98.95 | 98.7 | 99.14 | 98.9 | 98.8 | 98.4 | 99.12 | 98.9 | 99 | 98.5 | 99.27 | 99 | 98.9 | 98.8 | 98.17 | 97.9 | 97.4 | 97.8 | 98.1 | 97.4 | 98.6 |
| Inter-subject CV | RMSE | 0.78 | 0.83 | 0.99 | 1.33 | 0.82 | 0.85 | 1.12 | 1.99 | 0.69 | 0.73 | 1.08 | 1.53 | 0.73 | 0.75 | 1.25 | 1.43 | 2.33 | 2.5 | 2.71 | 2.48 | 2.34 | 2.54 | 1.45 |
| Inter-subject CV | R² | 99.02 | 98.12 | 98.11 | 98.4 | 98.88 | 98.3 | 98.4 | 98.1 | 98.41 | 98.2 | 98 | 97.6 | 98.37 | 98.2 | 98.8 | 98.7 | 97.77 | 96.4 | 97.4 | 97.6 | 97.8 | 97.1 | 98.07 |

Supplementary Algorithm 2. Tap and touch detection algorithm

The objective of the tap detection algorithm is to find:

T(t) = [T_1(t), T_2(t), ..., T_10(t)]    (1)

where each T_i(t) is a binary function representing the status of taps detected on the tip of the i-th finger. We define a cost function that measures the change in each sensor signal and compares it with a predefined threshold:

f_i(t) = 1 if (S_i(t)/S_i(0) − 1)^2 ≥ Threshold, and f_i(t) = 0 otherwise    (2)

where S_i(t) is the resistance of sensor i at time t, and S_i(0) is its initial (resting) resistance. Clicks are counted as the number of rising edges of f_i(t). To make the algorithm more stable, we examine the last four received samples and define the rising-edge detection function as follows:

T_i(t) = 1 if f_i(t−3) = 0, f_i(t−2) = 0, f_i(t−1) = 1, and f_i(t) = 1; otherwise T_i(t) = 0    (3)
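
A minimal Python sketch of this thresholded, debounced rising-edge detector is given below; the threshold value, function names, and per-sensor state handling are illustrative assumptions rather than the deployed code.

```python
# Sketch of the tap detector in Eqs. (2)-(3); threshold and names are illustrative.
from collections import deque

def make_tap_detector(s_rest, threshold=0.01):
    """Return an update function for one fingertip sensor.
    s_rest is the resting resistance S_i(0)."""
    history = deque([0, 0, 0, 0], maxlen=4)  # last four values of f_i

    def update(s_t):
        f_t = 1 if (s_t / s_rest - 1.0) ** 2 >= threshold else 0   # Eq. (2)
        history.append(f_t)
        # Eq. (3): report a tap only on the debounced rising edge 0, 0, 1, 1.
        return 1 if tuple(history) == (0, 0, 1, 1) else 0

    return update
```

One detector instance would be kept per fingertip sensor, and T(t) assembled from the ten outputs at every new sample.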

For the drawing application, the user selects different colors by touching the thumb to one of the other fingers. We reuse the cost function developed above for the tap detection algorithm. The color selection protocol is as follows; a short code sketch follows the list:

  • f_Thumb × f_Index = 1 → purple selected
  • f_Thumb × f_Middle = 1 → red selected
  • f_Thumb × f_Ring = 1 → blue selected
  • f_Thumb × f_Pinky = 1 → green selected
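
As an illustration only, the protocol above can be written as a small lookup over the per-finger cost-function values; the finger names and the dictionary interface below are assumptions.

```python
# Illustrative mapping from thumb-to-finger touches to drawing colors.
COLOR_MAP = {"Index": "purple", "Middle": "red", "Ring": "blue", "Pinky": "green"}

def select_color(f):
    """f: dict of current cost-function values f_i (0 or 1) keyed by finger name.
    Returns the selected color, or None if no thumb-to-finger touch is active."""
    if f.get("Thumb", 0) == 1:
        for finger, color in COLOR_MAP.items():
            if f.get(finger, 0) == 1:       # f_Thumb * f_finger == 1
                return color
    return None
```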

Supplementary Data Figure 1. Detailed dynamic gesture recognition - Part 1

Supplementary Data Figure 2. Detailed dynamic gesture recognition - Part 2

Supplementary Data Figure 3. Detailed dynamic gesture recognition - Part 3

Supplementary Data Figure 4. Detailed static gesture recognition - Part 1

Supplementary Data Figure 5. Detailed static gesture recognition - Part 2

Supplementary Data Figure 6. Detailed static gesture recognition - Part 3

Supplementary Data Figure 7. Detailed object recognition - Part 1

Supplementary Data Figure 8. Detailed object recognition - Part 2

Supplementary Data Figure 9. Detailed object recognition - Part 3

Supplementary Data Figure 10. Comparison of GlovePoseML and customized GlovePoseML with separate network branches for different modalities.

Supplementary Data Figure 11. Performance comparison of using different models.

Supplementary Data Figure 12. Performance comparison of GlovePoseML trained on normal versus augmented data.

Supplementary Table 2. Object properties

| Object type | Object name | Material | Weight [g] | Length [cm] | Width [cm] | Depth [cm] | Diameter [cm] | Young's modulus [GPa] |
|---|---|---|---|---|---|---|---|---|
| Balls | Ball 1 | Rubber | 180.5 | | | | 13.4 | 0.004 |
| | Ball 2 | Polyester/Foam | 19.5 | | | | 11.7 | 0.002 |
| | Ball 3 | PU/Rubber/Cork | 139.5 | | | | 7.3 | 0.550 |
| | Ball 4 | PU foam | 28.5 | | | | 6.9 | 0.043 |
| | Ball 5 | Rubber/wool | 55.0 | | | | 6.3 | 0.035 |
| | Ball 6 | Rubber | 144.0 | | | | 6.2 | 0.097 |
| | Ball 7 | PU foam | 12.5 | | | | 5.8 | 0.015 |
| | Ball 8 | Plastic | 61.5 | | | | 4.6 | Rigid |
| | Ball 9 | Plastic | 4.0 | | | | 3.9 | |
| | Ball 10 | Marble | 19.0 | | | | 2.5 | |
| | Ball 11 | Marble | 5.5 | | | | 1.6 | |
| Blocks | Block 1 | Wood | 351.0 | 10.0 | 10.0 | 10.0 | | Rigid |
| | Block 2 | Wood | 236.0 | 7.5 | 7.5 | 7.5 | | |
| | Block 3 | Wood | 48.0 | 5.0 | 5.0 | 5.0 | | |
| | Block 4 | Wood | 6.5 | 2.5 | 2.5 | 2.5 | | |
| | Block 5 | Ceramic | 37.5 | 10.2 | 2.5 | 0.7 | | |
| | Eraser | Polystyrene | 11.0 | 12.6 | 5.1 | 2.6 | | |
| Drinkware | Wine glass | Plastic | 103.0 | 19.6 | | | 9.2 | Rigid |
| | Plastic mug | Plastic | 70.5 | 12.5 | | | 8.5 | |
| | Paper mug | Paper | 11.0 | 10.7 | | | 7.5 | |
| | Mug | Ceramic | 319.0 | 9.7 | | | 8.3 | |
| | Cup | PP plastic | 45.5 | 7.3 | | | 9.3 | |
| | Watering can | Plastic | 438.0 | 30.6 | 11.0 | 11.0 | | |
| Other | Flashlight | Metal | 139.0 | 13.2 | | | 2.9 | Rigid |
| | Plier | Metal | 56.0 | 11.7 | 6.0 | 6.6 | | |
| | Tweezers | Metal | 16.5 | 12.1 | 1.0 | 1.1 | | |
| | Scissors | Metal/plastic | 31.5 | 15.0 | 6.0 | 0.8 | | |
| | Spoon | Metal | 64.0 | 20.1 | 4.5 | 0.3 | | |
| | PC mouse | Plastic/electronics | 102.0 | 10.7 | 7.4 | 3.6 | | |
| | Pen | Plastic | 11.5 | 14.0 | | | 1.2 | |
| | Marker | Plastic | 14.0 | 12.1 | | | 1.7 | |
| | Spray L | Plastic/water | 142.5 | 16.4 | | | 6.5 | |
| | Spray | Plastic/water | 125.0 | 13.9 | | | 3.8 | |
| | Coin | Metal | 2.0 | | | 0.1 | 1.8 | |

Supplementary Video 1

Dynamic articulated tracking of finger movements.

Supplementary Video 2

Typing on a mock keyboard.

Supplementary Video 3

3D drawing in air.

Supplementary Video 4

Static hand gesture recognition.

Supplementary Video 5

Dynamic hand gesture recognition.

Supplementary Video 6

Object detection based on the participants’ grasp pattern.

References

  • [1]Luo, Y., Li, Y., Sharma, P., Shou, W., Wu, K., Foshey, M., Li, B., Palacios, T., Torralba, A. & Matusik, W. Learning human–environment interactions using conformal tactile textiles. Nature Electronics. 4, 193-201 (2021,3)
  • [2]Moin, A., Zhou, A., Rahimi, A., Menon, A., Benatti, S., Alexandrov, G., Tamakloe, S., Ting, J., Yamamoto, N., Khan, Y. & Others A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nature Electronics. 4, 54-63 (2021)
  • [3]Zhou, Z., Chen, K., Li, X., Zhang, S., Wu, Y., Zhou, Y., Meng, K., Sun, C., He, Q., Fan, W., Fan, E., Lin, Z., Tan, X., Deng, W., Yang, J. & Chen, J. Sign-to-speech translation using machine-learning-assisted stretchable sensor arrays. Nature Electronics. 3, 571-578 (2020),
  • [4]Chen, W., Yu, C., Tu, C., Lyu, Z., Tang, J., Ou, S., Fu, Y. & Xue, Z. A Survey on Hand Pose Estimation with Wearable. Sensors. 20 (2020)
  • [5]Glauser, O., Wu, S., Panozzo, D., Hilliges, O. & Sorkine-Hornung, O. Interactive hand pose estimation using a stretch-sensing soft glove. ACM Transactions On Graphics (ToG). 38, 1-15 (2019)
  • [6]Wang, M., Yan, Z., Wang, T., Cai, P., Gao, S., Zeng, Y., Wan, C., Wang, H., Pan, L., Yu, J. & Others Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors. Nature Electronics. 3, 563-570 (2020)
  • [7]Zhu, M., Sun, Z. & Lee, C. Soft modular glove with multimodal sensing and augmented haptic feedback enabled by materials’ multifunctionalities. ACS Nano. 16, 14097-14110 (2022)
  • [8]Hughes, J., Spielberg, A., Chounlakone, M., Chang, G., Matusik, W. & Rus, D. A simple, inexpensive, wearable glove with hybrid resistive-pressure sensors for computational sensing, proprioception, and task identification. Advanced Intelligent Systems. 2, 2000002 (2020)
  • [9]Sun, Z., Zhu, M., Shan, X. & Lee, C. Augmented tactile-perception and haptic-feedback rings as human-machine interfaces aiming for immersive interactions. Nature Communications. 13, 5224 (2022)
  • [10]Sundaram, S., Kellnhofer, P., Li, Y., Zhu, J., Torralba, A. & Matusik, W. Learning the signatures of the human grasp using a scalable tactile glove. Nature. 569, 698-702 (2019)
  • [11]Lei, Q., Du, J., Zhang, H., Ye, S. & Chen, D. A survey of vision-based human action evaluation methods. Sensors (Switzerland). 19, 1-27 (2019)
  • [12]Wen, F., Zhang, Z., He, T. & Lee, C. AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove. Nature Communications. 12 (2021,12)
  • [13]Henderson, J., Condell, J., Connolly, J., Kelly, D. & Curran, K. Review of wearable sensor-based health monitoring glove devices for rheumatoid arthritis. Sensors. 21 pp. 1-32 (2021,3)
  • [14]Hughes, J. & Iida, F. Multi-functional soft strain sensors for wearable physiological monitoring. Sensors (Switzerland). 18 (2018)
  • [15]Atiqur, M., Ahad, R., Antar, A. & Shahid, O. Vision-based Action Understanding for Assistive Healthcare: A Short Review. (2019)
  • [16]Yang, C., Chui, R., Mortenson, W., Servati, P., Servati, A., Tashakori, A. & Eng, J. Perspectives of users for a future interactive wearable system for upper extremity rehabilitation following stroke: a qualitative study. Journal Of NeuroEngineering And Rehabilitation. 20, 1-10 (2023)
  • [17]Moin, A., Zhou, A., Rahimi, A., Benatti, S., Menon, A., Tamakloe, S., Ting, J., Yamamoto, N., Khan, Y., Burghardt, F., Benini, L., Arias, A. & Rabaey, J. An EMG gesture recognition system with flexible high-density sensors and brain-inspired high-dimensional classifier. ArXiv. pp. 1-5 (2018)
  • [18]Karunratanakul, K., Yang, J., Zhang, Y., Black, M., Muandet, K. & Tang, S. Grasping Field: Learning Implicit Representations for Human Grasps. (2020,8)
  • [19]Smith, B., Wu, C., Wen, H., Peluse, P., Sheikh, Y., Hodgins, J. & Shiratori, T. Constraining dense hand surface tracking with elasticity. ACM Transactions On Graphics. 39 (2020,11)
  • [20]Xie, K., Wang, T., Iqbal, U., Guo, Y., Fidler, S. & Shkurti, F. Physics-based human motion estimation and synthesis from videos. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 11532-11541 (2021)
  • [21]Jiang, L., Xia, H. & Guo, C. A model-based system for real-time articulated hand tracking using a simple data glove and a depth camera. Sensors (Switzerland). 19 (2019,11)
  • [22]Guzov, V., Mir, A., Sattler, T. & Pons-Moll, G. Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors. (2021,3)
  • [23]Ehsani, K., Tulsiani, S., Gupta, S., Farhadi, A. & Gupta, A. Use the Force, Luke! Learning to Predict Physical Forces by Simulating Effects. (2020,3)
  • [24]Ge, L., Ren, Z., Li, Y., Xue, Z., Wang, Y., Cai, J. & Yuan, J. 3d hand shape and pose estimation from a single rgb image. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 10833-10842 (2019)
  • [25]Wu, E., Yuan, Y., Yeo, H., Quigley, A., Koike, H. & Kitani, K. Back-hand-pose: 3D hand pose estimation for a wrist-worn camera via dorsum deformation network. UIST 2020 - Proceedings Of The 33rd Annual ACM Symposium On User Interface Software And Technology. pp. 1147-1160 (2020)
  • [26]Kocabas, M., Athanasiou, N. & Black, M. Vibe: Video inference for human body pose and shape estimation. Proceedings Of The IEEE Computer Society Conference On Computer Vision And Pattern Recognition. pp. 5252-5262 (2020)
  • [27]Hu, F., He, P., Xu, S., Li, Y. & Zhang, C. FingerTrak: Continuous 3D Hand Pose Tracking by Deep Learning Hand Silhouettes Captured by Miniature Thermal Cameras on Wrist. Proceedings Of The ACM On Interactive, Mobile, Wearable And Ubiquitous Technologies. 4 (2020,6)
  • [28]Cao, Z., Hidalgo, G., Simon, T., Wei, S. & Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. IEEE Transactions On Pattern Analysis And Machine Intelligence. 43, 172-186 (2021)
  • [29]Si, Y., Chen, S., Li, M., Li, S., Pei, Y. & Guo, X. Flexible Strain Sensors for Wearable Hand Gesture Recognition: From Devices to Systems. Advanced Intelligent Systems. 4, 2100046 (2022,2)
  • [30]Zhang, Q., Li, Y., Luo, Y., Shou, W., Foshey, M., Yan, J., Tenenbaum, J., Matusik, W. & Torralba, A. Dynamic Modeling of Hand-Object Interactions via Tactile Sensing. IEEE International Conference On Intelligent Robots And Systems. pp. 2874-2881 (2021)
  • [31]Wang, S., Wang, A., Ran, M., Liu, L., Peng, Y., Liu, M., Su, G., Alhudhaif, A., Alenezi, F. & Alnaim, N. Hand gesture recognition framework using a lie group based spatio-temporal recurrent network with multiple hand-worn motion sensors. Information Sciences. 606 pp. 722-741 (2022,8)
  • [32]Deng, L., Shen, Y., Hong, Y., Dong, Y., He, X., Yuan, Y., Li, Z. & Ding, H. Sen-Glove: A Lightweight Wearable Glove for Hand Assistance with Soft Joint Sensing.
  • [33]Pan, J., Li, Y., Luo, Y., Zhang, X., Wang, X., Wong, D., Heng, C., Tham, C. & Thean, A. Hybrid-Flexible Bimodal Sensing Wearable Glove System for Complex Hand Gesture Recognition. ACS Sensors. 6, 4156-4166 (2021,11)
  • [34]Liu, Y., Zhang, S. & Gowda, M. NeuroPose: 3D hand pose tracking using EMG wearables. The Web Conference 2021 - Proceedings Of The World Wide Web Conference, WWW 2021. pp. 1471-1482 (2021,4)
  • [35]Denz, R., Demirci, R., Cansev, M., Bliek, A., Beckerle, P., Rueckert, E. & Rottmann, N. A high-accuracy, low-budget Sensor Glove for Trajectory Model Learning. 2021 20th International Conference On Advanced Robotics, ICAR 2021. pp. 1109-1115 (2021)
  • [36]Luo, Y. Discovering the patterns of human-environment interactions using scalable functional textiles. (Massachusetts Institute of Technology,2020)
  • [37]Côté-Allard, U., Fall, C., Drouin, A., Campeau-Lecours, A., Gosselin, C., Glette, K., Laviolette, F. & Gosselin, B. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Transactions On Neural Systems And Rehabilitation Engineering. 27, 760-771 (2019)
  • [38]Zhang, W., Tashakori, A., Jiang, Z., Servati, A., Narayana, H., Soltanian, S., Yeap, R., Ma, M., Toy, L. & Servati, P. Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile. Thirty-seventh Conference On Neural Information Processing Systems Datasets And Benchmarks Track. (2023)
  • [39]Luo, Y., Li, Y., Foshey, M., Shou, W., Sharma, P., Palacios, T., Torralba, A. & Matusik, W. Intelligent carpet: Inferring 3d human pose from tactile signals. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 11255-11265 (2021)
  • [40]Zhou, L., Shen, W., Liu, Y. & Zhang, Y. A Scalable Durable and Seamlessly Integrated Knitted Fabric Strain Sensor for Human Motion Tracking. Advanced Materials Technologies. pp. 2200082 (2022)
  • [41]Brahmbhatt, S., Tang, C., Twigg, C., Kemp, C. & Hays, J. ContactPose: A Dataset of Grasps with Object Contact and Hand Pose. Lecture Notes In Computer Science (including Subseries Lecture Notes In Artificial Intelligence And Lecture Notes In Bioinformatics). 12358 LNCS pp. 361-378 (2020)
  • [42]Lei, X., Sun, L. & Xia, Y. Lost data reconstruction for structural health monitoring using deep convolutional generative adversarial networks. Structural Health Monitoring. (2020)
  • [43]Soltanian, S., Servati, A., Rahmanian, R., Ko, F. & Servati, P. Highly piezoresistive compliant nanofibrous sensors for tactile and epidermal electronic applications. Journal Of Materials Research. 30, 121-129 (2015)
  • [44]Soltanian, S., Rahmanian, R., Gholamkhass, B., Kiasari, N., Ko, F. & Servati, P. Highly stretchable, sparse, metallized nanofiber webs as thin, transferrable transparent conductors. Advanced Energy Materials. 3, 1332-1337 (2013)
  • [45]Konku-Asase, Y., Yaya, A. & Kan-Dapaah, K. Curing Temperature Effects on the Tensile Properties and Hardness of γ-Fe2O3 Reinforced PDMS Nanocomposites. Advances In Materials Science & Engineering. (2020)
  • [46]Liu, Z., Zhu, T., Wang, J., Zheng, Z., Li, Y., Li, J. & Lai, Y. Functionalized Fiber-Based Strain Sensors: Pathway to Next-Generation Wearable Electronics. Nano-Micro Letters. 14, 1-39 (2022)
  • [47]Ho, D., Cheon, S., Hong, P., Park, J., Suk, J., Kim, D., Han, J. & Cho, J. Multifunctional smart textronics with blow-spun nonwoven fabrics. Advanced Functional Materials. 29, 1900025 (2019)
  • [48]Vu, C. & Kim, J. Highly sensitive e-textile strain sensors enhanced by geometrical treatment for human monitoring. Sensors. 20, 2383 (2020)
  • [49]Tang, Z., Jia, S., Wang, F., Bian, C., Chen, Y., Wang, Y. & Li, B. Highly stretchable core–sheath fibers via wet-spinning for wearable strain sensors. ACS Applied Materials & Interfaces. 10, 6624-6635 (2018)
  • [50]Gao, Y., Yu, G., Shu, T., Chen, Y., Yang, W., Liu, Y., Long, J., Xiong, W. & Xuan, F. 3D-printed coaxial fibers for integrated wearable sensor skin. Advanced Materials Technologies. 4, 1900504 (2019)
  • [51]Cai, G., Hao, B., Luo, L., Deng, Z., Zhang, R., Ran, J., Tang, X., Cheng, D., Bi, S., Wang, X. & Dai, K. Highly stretchable sheath-core yarns for multifunctional wearable electronics. ACS Applied Materials & Interfaces. 12, 29717-29727 (2020)
  • [52]Xu, L., Liu, Z., Zhai, H., Chen, X., Sun, R., Lyu, S., Fan, Y., Yi, Y., Chen, Z., Jin, L., Zhang, J., Li, Y. & Ye, T. Moisture-resilient graphene-dyed wool fabric for strain sensing. ACS Applied Materials & Interfaces. 12, 13265-13274 (2020)
  • [53]Tashakori, A. arvintashakori/SmartTextileGlove: v1.0.0. (Zenodo,2023,11), https://doi.org/10.5281/zenodo.10128938