The Mobilize Center’s software efforts center on two Technology Research and Development (TR&D) projects.
TR&D Project 1
Biomechanics via Wearable Sensors
Our technology will push the bounds of what we can measure via wearable sensors. Focusing on biomechanical quantities beyond steps, we will create validated models that enable researchers to explore novel uses of wearable sensors. Our efforts will draw on our team’s expertise in both biomechanical and machine-learning approaches.
OpenSim for Musculoskeletal Modeling and Simulation
OpenSim is an open-source musculoskeletal simulation tool, developed by our NIH National Center for Simulation in Rehabilitation Research (NCSRR) and used by hundreds of research teams around the world to advance rehabilitation science. It provides musculoskeletal modeling and dynamic simulation capabilities to uncover the biomechanical causes of movement abnormalities and to design improved treatments.
The Mobilize Center is extending OpenSim’s capabilities with modules (OpenSense, OpenSim Moco) that will enable the analysis and prediction of biomechanical parameters from wearable sensor data.
OpenSense for Analyzing Movement with Inertial Measurement Units
The OpenSense module enables users to analyze movement with inertial measurement unit (IMU) data. It enables users to (i) read and convert IMU sensor data into a single orientation format, (ii) associate and register IMU sensors with body segments of a musculoskeletal model, and (iii) perform inverse kinematics studies to compute joint angles. The OpenSense capabilities are currently available through OpenSim via the command line and through Matlab and Python scripting.
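The core idea behind step (iii) can be illustrated conceptually: the relative orientation between the IMUs on two adjacent body segments determines the joint angle between them. The sketch below uses plain NumPy rather than the OpenSense API; the segment setup (a thigh and shank rotated about a shared flexion axis) and all function names are illustrative, not OpenSim code.

```python
import numpy as np

def rotation_z(deg):
    """Rotation matrix about the z-axis (here, a flexion/extension axis)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def joint_angle_deg(R_parent, R_child):
    """Planar joint angle from the orientations of two segment IMUs.

    R_parent, R_child: 3x3 world-frame orientations of the parent and
    child segments. R_parent^T @ R_child expresses the child segment in
    the parent frame; its z-rotation component is the joint angle.
    """
    R_rel = R_parent.T @ R_child
    return np.degrees(np.arctan2(R_rel[1, 0], R_rel[0, 0]))

# Thigh IMU oriented at 10 deg and shank IMU at 55 deg in the world frame:
# the knee flexion angle is their 45 deg difference.
thigh = rotation_z(10.0)
shank = rotation_z(55.0)
print(joint_angle_deg(thigh, shank))  # ~45.0
```

In practice OpenSense handles full 3-D orientations, sensor-to-segment registration, and noise; this sketch only shows why relative orientation is the quantity of interest.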
Future work will combine biomechanical and machine-learning methods to estimate additional biomechanical quantities from wearable sensor data.
OpenSim Moco for Solving Musculoskeletal Optimization Problems
The OpenSim Moco module provides an easy-to-use interface for solving a wide variety of musculoskeletal optimization problems, including:
- Motion tracking based on experimental data, e.g., from wearable sensors
- Motion prediction (without using any experimental data)
- Parameter optimization, e.g., the stiffness of an assistive device, the location of a sensor
Moco is built on the direct collocation optimal control method, which has become increasingly popular due to its flexibility and speed. Using Moco, researchers can access this advanced technique through a simple and intuitive interface, allowing them to focus on their scientific questions.
Moco is a stand-alone software package that uses OpenSim models.
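Direct collocation transcribes a trajectory-optimization problem into a large nonlinear program by discretizing the states and controls on a time grid and enforcing the dynamics as algebraic "defect" constraints. As a rough illustration of the technique (this is not the Moco API; the toy problem and every name below are ours), the sketch solves a minimum-effort point-mass motion with trapezoidal collocation via SciPy:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: a point mass with position x, velocity v, and control force u
# (a "double integrator"). Move from rest at x = 0 to rest at x = 1 in
# T = 1 s while minimizing the integral of u^2.

N = 20          # number of collocation intervals
T = 1.0
h = T / N       # time step between grid points

def unpack(z):
    """Decision vector holds x, v, u sampled at the N+1 grid points."""
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

def objective(z):
    _, _, u = unpack(z)
    # Trapezoidal quadrature of the control-effort integral
    return h * np.sum((u[:-1]**2 + u[1:]**2) / 2)

def defects(z):
    x, v, u = unpack(z)
    # Trapezoidal defect constraints enforcing the dynamics x' = v, v' = u
    dx = x[1:] - x[:-1] - h / 2 * (v[:-1] + v[1:])
    dv = v[1:] - v[:-1] - h / 2 * (u[:-1] + u[1:])
    # Boundary conditions: start at rest at x = 0, end at rest at x = 1
    return np.concatenate([dx, dv, [x[0], v[0], x[-1] - 1.0, v[-1]]])

z0 = np.zeros(3 * (N + 1))
sol = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects})
x, v, u = unpack(sol.x)
print(sol.fun)  # close to the analytic optimum of 12 (u(t) = 6 - 12t)
```

Moco automates this transcription for full musculoskeletal models, where the dynamics, constraints, and cost terms are far more complex than in this sketch.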
TR&D Project 2
Machine Learning for Mobility Data
To fill the need for tools that analyze data about movement and rehabilitation, we are developing machine-learning models to analyze and generate insights from unstructured, high-dimensional data, including time-series (e.g., from mobile sensors), images (e.g., MRI), and video (e.g., smartphone video of a patient’s gait).
Snorkel for Deep Learning
Snorkel is an open-source software platform for programmatically labeling training data for machine-learning models, such as deep neural networks. Current approaches for building predictive models require large, structured, labeled datasets for training. These gold-standard datasets are difficult to come by, particularly in biomedicine, limiting our ability to make predictions from our data.
Snorkel was created in response to this challenge. It constructs knowledge bases from “dark data”—data that are unstructured, such as scientific articles or clinical notes. Unlike other approaches, which require precisely labeled data to train and build the models, Snorkel can work from a set of user-written rules alone, yet the predictive models it trains perform as well as or better than those trained on gold-standard datasets. It has been used in a wide variety of applications, from paleobiology to crime fighting to biomedicine.
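The rule-based idea can be sketched in plain Python. Snorkel itself provides decorators for labeling functions and a probabilistic label model that weighs their agreements and conflicts; the sketch below substitutes simple majority voting to show the concept, and the rules, notes, and label scheme are illustrative, not from Snorkel.

```python
# Sketch of weak supervision: noisy, user-written rules ("labeling
# functions") vote on each example instead of hand-assigned labels.
# Labels: 1 = note mentions a gait abnormality, 0 = does not, -1 = abstain.

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_keyword_limp(note):
    return POSITIVE if "limp" in note.lower() else ABSTAIN

def lf_keyword_antalgic(note):
    return POSITIVE if "antalgic" in note.lower() else ABSTAIN

def lf_normal_exam(note):
    return NEGATIVE if "gait normal" in note.lower() else ABSTAIN

LFS = [lf_keyword_limp, lf_keyword_antalgic, lf_normal_exam]

def weak_label(note):
    """Aggregate labeling-function votes by majority, ignoring abstentions."""
    votes = [lf(note) for lf in LFS if lf(note) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

notes = [
    "Patient walks with a noticeable limp, antalgic pattern on the left.",
    "Gait normal, no assistive device.",
    "Follow-up in six weeks.",
]
print([weak_label(n) for n in notes])  # [1, 0, -1]
```

The weakly labeled output then serves as training data for a downstream predictive model, which is where the "performs as well as gold-standard labels" claim is evaluated.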
Deep-Learning-Based 2-D Video Analysis of Gait
In our recent publication, we demonstrated that a deep neural network could predict common quantitative gait metrics, such as cadence, walking speed, and the gait deviation index (GDI). We have made available the scripts for training the machine-learning models and analyzing the results, the code used to generate all figures, and the dataset of landmark trajectories extracted from videos. You can also demo the software through our web app.
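To give a sense of how gait metrics relate to landmark trajectories, the sketch below estimates cadence and walking speed from synthetic 2-D pose data. This is our own illustration, not code from the publication; the frame rate, function names, and signal below are all hypothetical.

```python
import numpy as np

FPS = 30.0  # assumed video frame rate

def cadence_steps_per_min(ankle_x_rel, fps=FPS):
    """Cadence from the ankle's horizontal position relative to the hip.

    The ankle swings forward and back once per stride (two steps), so
    each upward zero crossing of the centered signal marks one stride.
    """
    sig = ankle_x_rel - np.mean(ankle_x_rel)
    strides = np.sum((sig[:-1] < 0) & (sig[1:] >= 0))
    duration_min = len(ankle_x_rel) / fps / 60.0
    return 2 * strides / duration_min   # two steps per stride

def walking_speed(hip_x_m, fps=FPS):
    """Mean speed from hip displacement (positions already in meters)."""
    duration_s = (len(hip_x_m) - 1) / fps
    return (hip_x_m[-1] - hip_x_m[0]) / duration_s

# Synthetic 10 s walk: 1 stride per second (120 steps/min) at 1.2 m/s.
t = np.arange(0, 10, 1 / FPS)
ankle_rel = 0.3 * np.sin(2 * np.pi * 1.0 * t + 0.5)
hip = 1.2 * t
print(cadence_steps_per_min(ankle_rel), walking_speed(hip))
```

Real video data requires pose estimation, camera calibration or scale normalization, and noise filtering before such metrics are meaningful; the paper's deep-learning approach learns these mappings directly from the landmark trajectories.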