BCI-Gym: Development of Virtual Environments for Brain-Computer Interface User Training (HiWi, Research Internship)
02.05., Student Assistants, Internship Positions, Student Theses
The potential for invasive Brain-Computer Interface (iBCI) systems to dramatically improve the lives of people who have lost motor function has grown in recent years. This progress is due in part to advances in deep learning, but the performance of these decoding models varies significantly with the quality of the data collection and the design of the experiment. For iBCI participants who can no longer move, the models must decode imagined movement rather than actual movement. To maximize the chances of decoding imagined movement, an effective and realistic virtual environment is essential: it increases the likelihood that the participant embodies the virtual arm, allowing the collected neural data to be synchronized with the imagined arm position. This position allows the student to work alongside experts in neuroscience, machine learning, and robotics in a supportive and resource-rich environment.
Tasks may include a few of the following (to be discussed depending on your interests and background):
- Create virtual lab environments. Virtual assets include:
  - Human hand and arm model
  - Robotic arm and gripper model
  - Table with simple objects that can be manipulated in real time
- Implement specific experiments within the environment (e.g., center-out task, position-following task)
- Read from and write to a Redis-based communication framework (see the sketch after this list)
- Control the virtual human arm and robot arm within the environment
- Thorough documentation of the developed components
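
As an illustration of the Redis-based communication task, the following is a minimal sketch assuming Python with the redis-py client; the key name, channel name, and message format are hypothetical placeholders, not the lab's actual interface.

```python
import json
import redis

# Connect to a local Redis server (host, port, and db are assumptions).
r = redis.Redis(host="localhost", port=6379, db=0)

# Publish the current virtual arm pose so other processes (e.g., a decoder)
# can consume it in real time. The channel name and fields are hypothetical.
pose = {"joint_angles": [0.1, 0.4, -0.2, 0.0, 0.3, 0.1, 0.0], "timestamp": 12.345}
r.publish("arm_pose", json.dumps(pose))

# Read the latest decoded target position written by another process.
# The key "decoded_target" is likewise a placeholder.
raw = r.get("decoded_target")
if raw is not None:
    target = json.loads(raw)
    print("Decoded target:", target)
```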
Prerequisites
- Direct experience with Unreal Engine or Unity-based virtual environment development
- Good Python, MATLAB, or C/C++ programming skills
Helpful but not required
- Experience with real-time communication
- Experience with user interface design
- Experience with Blender or other 3D asset creation tools
Position
The current position is targeted as a HiWi (paid) position.