Modeling the Human Haptic Code
This project was worked on by Cleah under the guidance of Courtnie Paschall (UW Seattle)
According to the World Health Organization, approximately 300 million people need prosthetic limbs. A prosthetic limb is an artificial body part that replaces an amputated limb. Prosthetics have advanced significantly in recent years, and individuals who have lost limbs can now wear prosthetics and regain movement. However, individuals who wear prosthetic limbs are unable to feel sensation. For instance, they cannot differentiate between a hot object and a cold object placed on their prosthetic limb. This can be dangerous for amputees, as they may not sense heat, sharpness, or similar danger signals. The disconnect from the rich tactile world that many of us take for granted can also be severely disheartening.
In this project, I created three different machine learning models to predict touched objects based on neural signals. Logistic regression learns a linear decision boundary between classes. K-nearest-neighbour classifies each point based on the majority class of some number of its closest neighbours. Multi-layer perceptrons are feed-forward neural networks consisting of input, hidden, and output layers.
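To make the three model types concrete, here is a minimal, self-contained sketch using scikit-learn. The feature matrix X and labels y are random stand-ins for the EEG features and tactile labels, and the hyperparameters (number of neighbours, hidden layer size) are illustrative assumptions rather than the project's exact settings.

```python
# Sketch of the three classifier types. X and y are placeholder data:
# in the project, X would hold one row of EEG features per time stamp
# and y the tactile label (e.g. 0 = rough, 1 = smooth).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # 200 time stamps x 20 features (stand-in data)
y = rng.integers(0, 2, size=200)  # binary tactile labels (stand-in data)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),       # linear boundary
    "k-nearest-neighbour": KNeighborsClassifier(n_neighbors=5),     # majority vote of 5 neighbours
    "multi-layer perceptron": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```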
The goal of this project was to create a machine learning model that predicts the tactile labels of objects, such as hot and cold, from the brain's neural signals while an individual touches or imagines touching those objects. Once neurostimulation techniques become more easily and safely available, such models could potentially be reverse-engineered to induce the sensation of a given tactile label based on input from an external sensor. In this project, the brain's response is described by electroencephalography, or EEG, data. EEG is a noninvasive technique that measures neural activity detectable from the scalp.
I created logistic regression, k-nearest-neighbour, and multi-layer perceptron models. The inputs to these models were the neural data (EEG recorded by a Muse2 headband) collected from participants who were touching, imagining touching, or touching and viewing an object. The models yielded high accuracy: logistic regression reached 84%, k-nearest-neighbour 90%, and the multi-layer perceptron 92%. What exactly do these accuracies mean? Taking the logistic regression result, 84% accuracy means that if we collected data for 100 different time stamps and fed them into the model, it would on average yield 84 correct predictions and 16 (100 - 84) incorrect ones.
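As an illustration of what these percentages measure, the sketch below computes accuracy the standard way: the fraction of held-out time stamps whose predicted label matches the true label. All data here is random stand-in data, not the actual Muse2 recordings.

```python
# Self-contained sketch of how an accuracy figure is computed:
# correct predictions divided by total predictions on held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))    # placeholder EEG feature rows
y = rng.integers(0, 2, size=400)  # placeholder tactile labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# accuracy = fraction of test time stamps classified correctly
print("accuracy:", accuracy_score(y_test, y_pred))
```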
The slide presentation below includes some confusion matrices, which are coloured grids in which each cell is shaded according to its value. Here is how to read them. The confusion matrices visually show the accuracies of the models for the pair rough vs. smooth. The vertical axis shows the actual label, while the horizontal axis shows the predicted label. Darker colours mean that more data points fall in that combination of predicted and actual labels. Since we want the actual label to equal the predicted label, the downward diagonal should in general be dark coloured.
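For readers who want to reproduce this kind of plot, confusion matrices like those in the slides can be drawn with scikit-learn, as in the sketch below. The labels and predictions here are made-up placeholders; in the project they would come from a trained model evaluated on held-out EEG data.

```python
# Sketch of plotting a rough vs. smooth confusion matrix with placeholder labels.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

y_true = ["rough", "smooth", "rough", "smooth", "rough", "rough", "smooth", "smooth"]
y_pred = ["rough", "smooth", "rough", "rough",  "rough", "rough", "smooth", "smooth"]

# Rows (vertical axis) are actual labels, columns are predicted labels;
# a dark main diagonal means most predictions match the true label.
ConfusionMatrixDisplay.from_predictions(y_true, y_pred, cmap="Blues")
plt.show()
```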
The slideshow contains all my results, including the confusion matrices (slides 11 - 13). I also used recursive feature elimination, a technique that progressively removes features (frequency bands and electrodes) to determine which ones contribute most to the accuracy of the model. It essentially produces a ranking, where 1 means that a feature is the most important (see slide 14 for more detail).
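For reference, here is a hedged sketch of recursive feature elimination with scikit-learn. The feature names are hypothetical stand-ins for the Muse2 electrode and frequency-band combinations, and the choice of logistic regression as the underlying estimator is an assumption for illustration.

```python
# Sketch of recursive feature elimination (RFE): the least useful feature
# is dropped repeatedly, and each feature receives a rank (1 = most important).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))     # placeholder EEG band-power features
y = rng.integers(0, 2, size=200)  # placeholder tactile labels

# Hypothetical electrode/band feature names for illustration only.
feature_names = ["TP9_alpha", "TP10_alpha", "AF7_beta", "AF8_beta", "TP9_theta", "TP10_theta"]

rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=1).fit(X, y)
for name, rank in sorted(zip(feature_names, rfe.ranking_), key=lambda p: p[1]):
    print(f"{name}: rank {rank}")
```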
I also made a video which explains the parts of my project. I hope you enjoy it!