Teams

Prosthetics Team

The NXT Prosthetics Team’s focus is to research and explore the intersection of neuroscience, engineering, and programming to develop prosthetic devices that can restore or enhance the functionality of limbs. We primarily use Fusion 360 and Onshape for computer-aided design of prosthetic parts, Flashforge and Prusa 3D printers for fabrication, and Arduino boards coupled with electroencephalography (EEG) and electromyography (EMG) to control the prosthetics.
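
As a rough sketch of what an EMG-to-actuator control loop can look like, the Python snippet below (using pyserial) reads values from a hypothetical Arduino that prints one EMG reading per line and sends back open/close commands. The port, baud rate, message format, and threshold are illustrative assumptions, not our actual firmware protocol.

    # A minimal sketch of the EMG-to-actuator loop, assuming an Arduino that
    # prints one integer EMG reading (0-1023) per line and accepts single-byte
    # open/close commands. Port, baud rate, protocol, and threshold are all
    # hypothetical.
    import serial  # pyserial

    PORT = "/dev/ttyACM0"   # varies by machine
    THRESHOLD = 400         # hypothetical activation level in ADC units

    with serial.Serial(PORT, 115200, timeout=1) as ser:
        while True:
            line = ser.readline().strip()
            if not line.isdigit():
                continue
            emg = int(line)
            # Close the gripper while the muscle is flexed, open it otherwise.
            ser.write(b"c" if emg > THRESHOLD else b"o")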


The plan for the Prosthetics Team this upcoming semester is to split members into groups by experience level (newcomers and returning members). Newcomers will either download existing CAD models to print or design their own models involving movement, depending on their experience and comfort with CAD. Returning members will continue work on their previous designs through printing and coding. Programming workshops will be held during the semester to assist anyone with their projects. This upcoming semester aims to give newcomers experience with 3D printing and electrical circuit controls while allowing returning members to work toward completing their designs from the previous year.

Primary: Eric Choi

ejc4671@rit.edu

Internal Team Lead: Leanna Frasch

lhf8682@rit.edu

Internal Team Lead: Luca Chiossone

lgc8457@rit.edu

Fabric Electrodes Team

NXT Fabrics is an interdisciplinary team that strives to create more accessible and robust electrodes that are effective for daily use in controlling prosthetics. We use tools like MyoWare sensors and Arduino to collect EMG signals, and we build prototypes using sewing and soldering techniques. Once this hardware has captured a signal, we perform signal processing in software (C++, Python) to extract meaningful data.
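
As an illustration of the kind of processing involved, the Python sketch below rectifies a raw EMG trace, smooths it into an envelope with a moving average, and thresholds the envelope to flag muscle activation. The sample rate, window size, and threshold are assumptions for the example, not our pipeline's actual parameters.

    import numpy as np

    def emg_envelope(samples, window=50):
        """Rectify a raw EMG trace and smooth it with a moving average."""
        rectified = np.abs(samples - np.mean(samples))  # remove DC offset, rectify
        kernel = np.ones(window) / window
        return np.convolve(rectified, kernel, mode="same")

    # Example: flag activation in a simulated one-second recording whose
    # second half contains a stronger "contraction".
    fs = 1000  # assumed sample rate in Hz
    raw = np.random.randn(fs) * np.r_[np.full(fs // 2, 0.1), np.full(fs // 2, 1.0)]
    active = emg_envelope(raw) > 0.3  # hypothetical threshold
    print(f"Muscle active in {active.mean():.0%} of samples")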

Our team is currently developing a prototype. We are reverse-engineering circuitry and ordering parts in preparation for our next goals of data collection and signal processing.

Primary: Mikaela Simpkinson

mls4943@rit.edu

Games Team

The NeuroTechnology Games Team is committed to integrating neurotechnology into gaming to enhance fun, immersion, and engagement. Our team is divided into two primary areas of focus: one dedicated to developing neurotech software and the other specializing in game creation using the Unity game engine.

In the past, we have built 2D and 3D racing games that the player controls by flexing their right or left arm, which were showcased at the Strong Museum of Play. We have also built a target practice game using the Unicorn Hybrid Black where the player must focus only on enemies to get the highest possible score.
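
To give a flavor of how flex-to-steer control can work, here is a minimal Python sketch that compares hypothetical left- and right-arm EMG envelopes and sends a steering command to a game over UDP. The transport, port, thresholds, and the read_envelopes stand-in are all illustrative assumptions rather than our actual Unity integration.

    import random
    import socket
    import time

    # Hypothetical transport: the game listens for steering commands on UDP 5005.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    GAME_ADDR = ("127.0.0.1", 5005)

    def read_envelopes():
        """Stand-in for real left/right arm EMG envelope readings."""
        return random.random(), random.random()

    for _ in range(200):  # ~10 seconds of control at 20 updates per second
        left, right = read_envelopes()
        # Steer toward whichever arm is flexed harder; idle if neither is.
        if max(left, right) < 0.3:  # hypothetical rest threshold
            command = "IDLE"
        else:
            command = "LEFT" if left > right else "RIGHT"
        sock.sendto(command.encode(), GAME_ADDR)
        time.sleep(0.05)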

Currently, we are working on another prototype game in collaboration with g.tec, where the player solves 3D puzzles by focusing to create portals and launch objects. We are excited to continue our mission of making game development with neurotechnology more accessible to students at RIT.


Primary: Mike Elia

mje5066@rit.edu

NDML Project

The Neurophysiological Database for Machine Learning (or NDML for short) research team is working to make advancements in neurotech easier by constructing a comprehensive database of EEG scans of a consistent set of subjects performing a varied set of tasks that involve activity in many areas of the brain. This database will improve upon currently available EEG databases through its variety of tests, its longer recording windows, and its consistent use of the same research subjects across multiple experiments. We hope to do a proof-of-concept run with a small pool of research subjects, and then expand to working with more subjects and higher-quality hardware.


We’re also working to make advancements in AI/ML neurotechnology easier by ensuring that the database is optimized for AI/ML, with clear data labeling, organization, and preprocessed versions, as well as by creating and training some AI models ourselves. We’re currently working with Python as our coding language, with PyTorch (and/or PyTorch Lightning) as the main AI library. Convolutional and recurrent neural networks are being evaluated for use in our tasks.
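
As a minimal sketch of the kind of model under evaluation, here is a small PyTorch 1-D CNN that classifies fixed-length EEG windows. The channel count, window length, class count, and architecture are illustrative assumptions, not the team's final design.

    import torch
    import torch.nn as nn

    class EEGConvNet(nn.Module):
        """A small 1-D CNN over (channels, time) EEG windows."""
        def __init__(self, n_channels=8, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):  # x: (batch, channels, samples)
            return self.classifier(self.features(x).squeeze(-1))

    # One training step on a dummy batch of labeled one-second windows.
    model = EEGConvNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(16, 8, 250)      # 16 windows, 8 channels, 250 samples each
    y = torch.randint(0, 4, (16,))   # hypothetical task labels
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()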

Primary: Bryce Gernon

brg2890@rit.edu

Wheelchair Project

The wheelchair team is building a motorized wheelchair controlled with EEG signals. We are aiming to create a resource that others can replicate cheaply, so that people with diminishing motor control have other ways to control a wheelchair. The hope is that by giving the open-source community an in-depth guide, we can jumpstart the advancement of open-source BCI wheelchair control.
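
As a rough sketch of one possible control rule, the Python snippet below (using SciPy) estimates alpha- and beta-band power in a short EEG window with Welch's method and maps their ratio to a drive command. The sample rate, bands, threshold, and the rule itself are assumptions for illustration, not the team's actual design.

    import numpy as np
    from scipy.signal import welch

    def band_power(window, fs, lo, hi):
        """Average EEG power in a frequency band, estimated with Welch's method."""
        freqs, psd = welch(window, fs=fs, nperseg=fs)
        mask = (freqs >= lo) & (freqs <= hi)
        return psd[mask].mean()

    fs = 250                          # assumed headset sample rate in Hz
    window = np.random.randn(fs * 2)  # stand-in for 2 s of one EEG channel
    alpha = band_power(window, fs, 8, 12)
    beta = band_power(window, fs, 13, 30)

    # Hypothetical rule: concentration (high beta relative to alpha) drives forward.
    command = "FORWARD" if beta / alpha > 1.5 else "STOP"
    print(command)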

Primary: Alex Burbano

arb8590@rit.edu

Dream Project (inactive)

The Dream team is looking to capture and visualize dreams. Right now, we are particularly interested in studying emotion recognition during dreams using electroencephalography (EEG), because dreams are known to be emotion-heavy experiences. By analyzing these electrical signals, we can gain insights into the emotional states experienced during dreams. This information can help us better understand the relationship between emotions and dream content.

In addition to emotion recognition, we are also exploring the use of EEG for facial recognition in dreams. We aim to develop a method that can identify recurring faces that appear in dreams. This could provide valuable insights into the role of familiar individuals or archetypes in dream narratives. EEG has shown promise in the field of facial recognition, and by extending its application to the dream context, we hope to uncover unique patterns and characteristics associated with dream characters.

Thought Keyboard Project (inactive)

The Neurotechnology Exploration Thought Keyboard project will focus on the development of a hands-free virtual keyboard and mouse. This will be the start of a process toward making simple interfaces that bring the kind of accessibility and ease of use that only Brain-Computer Interfaces can offer to all sorts of devices.


This research project has gone through many iterations, including small side projects like the Wizard101 interface. Eventually, it was decided that the interface shouldn't be confined to any single application and should simply act as a digital alternative to its physical counterpart. After some redesigns, we decided to make a virtual overlay that appears over any ongoing activity on the device to type, select, or provide any other input. Specifically, this overlay will consist of a mouse cursor that appears first, intended for selecting or for prompting the radial (circular, expanding) keyboard.


Right now, Kivy is being used to develop an app for PC and phones for that purpose. We're focused on the software development stage of the UI, making it future-proof for any additions in case its scope grows. Soon, we'll move on to implementing biosensing technology and SSVEP (steady-state visually evoked potentials) to realize the other half of this project: allowing the interface to receive input from the brain. The plan is to have the overlay follow the directional and locational intent of the user. From there, this flexible interface can be used for many things, from browsers to games.
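
As a minimal sketch of the overlay idea, the Kivy snippet below draws a single selection target and flickers it at a fixed rate, the kind of stimulus an SSVEP decoder could lock onto. The flicker frequency and layout are illustrative assumptions; a real overlay would arrange several targets, each flickering at its own frequency.

    from kivy.app import App
    from kivy.clock import Clock
    from kivy.uix.button import Button
    from kivy.uix.floatlayout import FloatLayout

    class OverlayApp(App):
        """One flickering selection target; a real overlay would arrange a
        ring of targets, each flickering at a distinct frequency."""

        def build(self):
            root = FloatLayout()
            self.target = Button(text="Select", size_hint=(None, None),
                                 size=(120, 120),
                                 pos_hint={"center_x": 0.5, "center_y": 0.5})
            root.add_widget(self.target)
            # Toggle visibility 15 times per second -> a 7.5 Hz flicker.
            Clock.schedule_interval(self.toggle, 1 / 15)
            return root

        def toggle(self, dt):
            self.target.opacity = 0 if self.target.opacity else 1

    if __name__ == "__main__":
        OverlayApp().run()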