Of Mice and Mind: Creating a simple EEG cursor-control application

Kevin JY Cui · Feb 27

Photo courtesy of Oscar Ivan Esquivel Arteaga on Unsplash

Humankind exists in an era of information, and our first step into this era was the invention of the computer.

Although seemingly complicated and inefficient in its first generations, the computer has evolved from a bulky and inconvenient machine into an essential and accessible device found in almost every office, every home, and every pocket of the modern world.

With the integration of computers into the common household, interfaces were developed to help the average person join the new era as a member of the digital world.

The QWERTY keyboard, the computer mouse, and the touchscreen were all invented to allow people to transfer the movement of their hands into executable commands that control, manage, and operate the computer.

These functions and devices have become so essential to practical computer usage that it is almost impossible to operate a computer without them.

Unfortunately, these functions are not available to a large demographic of people.

In the United States of America, one of the largest consumers of computer products, nearly 5.4 million individuals were living with some form of paralysis as of 2013.

Paralysis is a condition in which the subject loses muscle function or the ability to move their muscles at will.

The patient may have experienced trauma or a stroke that affected the nervous system, rendering the communication between neurons and muscles faulty.

Keyboards, mice, and touchscreens all rest on the assumption that the user is capable of basic motor skills, which many paralyzed people lack.

Without the use of their hands, paralysis patients must either resort to using other forms of motor controls or, for the more severe cases, use a direct interface system between the brain and the computer.

As paralysis may be unpredictable and the subject is often unable to perform any significant motion, the latter option is being explored by neurologists as a potential strategy for bringing computers to those suffering from paralysis.

Such is the idea of Brain-Computer Interfaces, or BCIs.

With this information, I became motivated to develop a BCI application of my own to help explore and combat this problem in both the medical and technological industries.

Specifically, I built a cursor-control application that implements a BCI and navigates the cursor based on select “mental commands”.

In order to accomplish this, I used an EEG headband, WebSockets, JSON-RPC, Python, and some Python libraries.

The Emotiv INSIGHT 5 Channel Mobile EEG in Jet Black

Selecting a Brain-Computer Interface

Brain-Computer Interfaces are devices that transmit data between the brain’s synaptic activity and a computer.

For my project, I used this data to navigate the onscreen cursor.

To begin, I first had to choose a method of detecting the transmissions.

Brain-Computer Interfaces can be divided into two categories: invasive and non-invasive (although there are other variants called partially-invasive BCIs that act as an in-between of the two categories).

While invasive BCIs are more accurate and offer higher spatial resolution, we will be using a non-invasive BCI for practical reasons.

This project is intended for consumer usage, and an invasive BCI, which requires surgery, is far too impractical and expensive.

Cursor-control does not require extremely accurate data, so it is simply much more reasonable for both the developer and the consumer to use a non-invasive BCI.

Specifically, our non-invasive BCI will be an electroencephalograph (EEG).

I chose to use EEGs rather than other BCIs, such as fMRI, for the same reason: practicality.

fMRI machines tend to be large, and just as a consumer would not choose a desk-sized mouse, neither would they want a desk-sized BCI for their home or personal computer.

Now that we have narrowed down our BCI choice to the exact type of BCI, we are once again presented with an assemblage of choices.

Perhaps the three most common EEGs for consumer and developer use are manufactured by three companies: NeuroSky, Muse, and Emotiv.

Let’s begin with NeuroSky.

While NeuroSky’s products are sold for consumer usage and are very affordable, as shown by YouTuber Michael Reeves in his mind-controlled car project, NeuroSky’s EEGs simply do not offer high enough resolution.

NeuroSky’s product, the NeuroSky MindWave, only has one channel.

Although our cursor-control application does not demand high resolution, the NeuroSky MindWave is simply not the optimal EEG.

Next, there is the Muse.

The Muse is notably higher quality than the MindWave, as it comes with 4 channels.

Muse is also perhaps the most well-known brand of the three, and it often presents good resolution data.

However, its uses lean more toward purely consumer applications, such as meditation.

Although possible to develop with, the Muse does not come with a very developer-friendly interface or API.

Thus, the Muse would be the preferred EEG, if you were a meditation enthusiast.

Finally, the Emotiv Insight, which has 5 channels, is very developer-friendly and reasonably high quality.

Emotiv products also come with an API and full documentation, which becomes very useful when trying to run commands later on.

It should be noted that for a simple project like this, the functionality of any of these three choices would suffice, but the Emotiv family of products will be the optimal choice.

An interior view of the Emotiv INSIGHT

How does the Emotiv INSIGHT work?

Since I’m using the Emotiv INSIGHT (Emotiv’s 5-channel mobile EEG) for this project, let’s briefly delve into the hardware of the headband.

If you want to learn about the basics of EEGs first, be sure to read this introduction to BCIs.

If you are building your own project and don’t care about hardware, you can skip this section.

The components of an EEG headband can be summed up with the following: electrodes, an amplifier, a computer control module, a display, and the wires that connect these parts.

The electrodes detect the synaptic signals and send the data through a channel.

The number of channels is based on the number of electrodes on different locations of the scalp.

Signals coming from the scalp may be very weak, so the data is sent through the connecting wires to an amplifier, making it recognizable to the computer and the display.

The amplified signal is then sent over Bluetooth to the computer, where it is processed.

Finally, with the use of WebSockets, it can be displayed on a monitor for development.

That’s all for hardware.

Time to do some coding

To build this project, we will need Python (my code is in Python 3.7), Emotiv’s CortexUI (which will be used to establish a stable connection with the device), and some type of configuration management framework (I used Windows PowerShell).

My program will also be available on my GitHub, so feel free to fork it or use it as a guide.

Once the EEG is set up (charged and connected), we can start working.
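Two third-party libraries carry the whole project: websocket-client for the connection and PyAutoGUI for cursor control. A minimal environment setup might look like the following (the PyPI package names are the standard ones, but this setup step is an assumption of mine, not spelled out in the original article):

# Assumed setup, run in a shell such as PowerShell:
#   pip install websocket-client pyautogui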

I used Python for two reasons. First, this project is data-intensive: five channels of synaptic signals can produce a great quantity of data, so we need a language that can handle it.

Second, Python easily implements JSON-RPC.

We will need JSON-RPC for communication between the Emotiv services and our program, and using Python just makes the whole program a lot simpler.

Our first step is to initialize a connection with Emotiv.

As we already know, Emotiv uses WebSockets and JSON-RPC to create that communication between the device and the program.

If you are not familiar with these tools, you can see the links attached which will give introductory information.

In essence, the WebSocket server stores the data so that both the program and the EEG can send and receive commands and feedback, and JSON-RPC is the protocol that structures the data in transit.
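For orientation, every request we send in this article shares the same JSON-RPC 2.0 envelope. The sketch below shows that shape with a placeholder method name (someMethod is not a real Cortex call):

# Generic JSON-RPC 2.0 request envelope; "someMethod" is a placeholder.
request = {
    "jsonrpc": "2.0",        # protocol version
    "method": "someMethod",  # the remote procedure to invoke
    "params": {},            # arguments for that procedure
    "id": 1                  # lets us match the response to the request
}
# The response echoes the id and carries either a "result" or an "error" field.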

Our first step would then be to import the Python libraries required to use them.

We will need to import json and create_connection from websocket-client for the connection.

We must also import ssl, which will help us make a secure connection.

We will have two files for this project, one for authorization and one for the client to use.

I named them auth.py and client.py, respectively.

We will need to import these libraries for both of them.

import json
import ssl
from websocket import create_connection

Now that all our libraries are set, it’s time to authorize and transmit data with the EEG.

The syntax for this can be found on Emotiv’s Cortex API documentation.

Our first step is to initialize the WebSocket object that we will be using throughout the program.

This can be done simply with the function create_connection of the websocket-client library.

We will need to do this for both files.

print("Connecting to websocket.

")receivedData = create_connection("wss://emotivcortex.

com:54321", sslopt={"cert_reqs": ssl.

CERT_NONE})Now, all we have to do is follow the Cortex API documentation.

To send a JSON request to the WebSocket, we use the WebSocket’s send function, passing the request through json.dumps.

First, we must send an authorization request in the auth.py file, as shown below, and then receive a response. Authorizing generates a token, which is the key that allows us to execute other requests.

receivedData.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "authorize",
    "params": {},
    "id": 1
}))
token = json.loads(receivedData.recv())["result"]["_auth"]

Now let’s direct our focus to the client.py file.

This file will be used to run the application.

In this file, we will need to import the token from auth.py so that we can use it to authorize requests.

from auth import token

Since we want to make time delays between each detection when we are training commands, let’s also import the time library.

This will come in handy later on.

import time

We’ll also need to import pyautogui.

PyAutoGUI is a library that will allow Python to control and manage cursor and keyboard input.

This will be useful when we apply the Emotiv API to our application.
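As a quick taste of the PyAutoGUI calls we will rely on later, here is a standalone demonstration (standard PyAutoGUI functions, separate from the project files):

import pyautogui

print(pyautogui.size())      # screen resolution, e.g. Size(width=1920, height=1080)
print(pyautogui.position())  # the cursor's current coordinates
pyautogui.move(10, 0)        # nudge the cursor 10 px right of where it is now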

import pyautogui

Next, it is time to set up a connection with all the libraries.

After initializing the WebSocket object and authorizing a token from the other file, we can follow the Cortex API and create a session, which we will subscribe to.

print("Checking headset connectivity.

")receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "queryHeadsets", "params": {}, "id": 1}))print(receivedData.

recv())print(".Creating session.

")receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "createSession", "params": { "_auth": token, "status": "open", "project": "test" }, "id": 1}))print(receivedData.

recv())print(".Subscribing to session.

")receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "subscribe", "params": { "_auth": token, "streams": [ "sys" ] }, "id": 1}))print(receivedData.

recv())print(".Getting detection info.

")receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "getDetectionInfo", "params": { "detection": "mentalCommand" }, "id": 1}))print(receivedData.

recv())The session’s setup is complete, and it is finally time to create a command.

The way that the Cortex API works is that it needs you to train commands, which it will keep saved in its WebSocket server.

You can train many commands and label them with different names so that you can get an output based on each command.

For my cursor, I used 4 directional commands that navigate the cursor up, down, left, and right, as well as another command for clicking.

To do this, let’s create a train_command function that takes a command name and trains it.

Again, the syntax for the request can be explained in the Cortex API documentation.

Notice that I sent two requests to the WebSocket.

The first one starts or initializes the training and the second one accepts the training.

We also use the time.sleep(x) function to delay the time between starting the training and accepting it, as we want data across a long period of time.

def train_command(request):
    print("Training " + request + " command.")
    receivedData.send(json.dumps({
        "jsonrpc": "2.0",
        "method": "training",
        "params": {
            "_auth": token,
            "detection": "mentalCommand",
            "action": request,
            "status": "start"
        },
        "id": 1
    }))
    print(receivedData.recv())
    time.sleep(5)
    print(receivedData.recv())
    time.sleep(10)
    print(receivedData.recv())
    receivedData.send(json.dumps({
        "jsonrpc": "2.0",
        "method": "training",
        "params": {
            "_auth": token,
            "detection": "mentalCommand",
            "action": request,
            "status": "accept"
        },
        "id": 1
    }))
    print(receivedData.recv())
    time.sleep(2)
    print(receivedData.recv())
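To record the full command set, the function might simply be called once per action. This driver loop is a sketch of my own; the command names are the ones checked in the control loop later in this article:

# Assumed driver loop; "lift"/"drop" map to up/down and "push" to clicking.
for command in ["neutral", "left", "right", "lift", "drop", "push"]:
    train_command(command)

Once we have recorded all the commands, we can now listen for them.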

Send the following requests to log in, subscribe, set up a profile, and get a map of the available commands.

It is very important to set streams to "com".

The stream type determines what the EEG will be recording.

When it is set to "com", the stream will record trained commands in the WebSocket.

If you wanted to conduct research rather than create a simple project, you could instead set streams to "eeg".

Instead of recording commands, it will record raw data from the 5 channels: AF3, T7, Pz, T8, and AF4 (as shown on the map above).

While raw data is more useful for research purposes, as it can actually show you the frequencies of each channel, the "com" stream is sufficient for application development.
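For illustration, switching to raw data would only take a different streams value in the subscribe request. The following is a sketch; the exact layout of the returned packets is an assumption, so consult the Cortex documentation for the real format:

receivedData.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "subscribe",
    "params": {"_auth": token, "streams": ["eeg"]},
    "id": 1
}))
# Each packet would now carry raw values for AF3, T7, Pz, T8, and AF4
# rather than a trained-command label (assumed shape).
sample = json.loads(receivedData.recv())
print(sample)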

print("Getting USER login.

")receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "getUserLogin", "id": 1 }))profile = json.

loads(receivedData.

recv())["result"][0]print(profile) receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "subscribe", "params": { "_auth": token, "streams": [ "com" ] }, "id": 1 }))print("Subscription:", receivedData.

recv())receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "setupProfile", "params": { "_auth": token, "profile": profile, "status": "create" }, "id": 1 }))print("Profile Set-up:", receivedData.

recv())receivedData.

send(json.

dumps({ "jsonrpc": "2.

0", "method": "mentalCommandBrainMap", "params": { "_auth": token, "profile": profile}, "id": 1 }))synapseData = receivedData.

recv()print("Mental Command Brain Map:", synapseData)Finally, we take the results and check the command, executing the corresponding action based off of the command.

PyAutoGUI’s function pyautogui.move(x, y) will move the cursor x pixels along the x-axis and y pixels along the y-axis, relative to the cursor’s current position.

If None is passed for a parameter, the cursor will not move along that axis.

Be sure to use move and not moveTo, as we need to change the cursor’s position relative to its current position.
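To make the distinction concrete (both are standard PyAutoGUI calls):

pyautogui.moveTo(100, 100)  # absolute: jumps to pixel (100, 100) on the screen
pyautogui.move(-3, None)    # relative: shifts 3 px left of wherever the cursor is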

Also, check that the cursor is within the screen and handle the KeyboardInterrupt exception.

while True:
    thought = json.loads(receivedData.recv())["com"][0]
    print(thought)
    print("Beginning.")
    maxX, maxY = pyautogui.size()  # screen bounds
    try:
        x, y = pyautogui.position()  # current cursor position
    except KeyboardInterrupt:
        print('.')
    # Screen coordinates grow downward, so "lift" (moving up) needs y > 0
    # and "drop" (moving down) needs y < maxY.
    if thought == "left" and x > 0:
        pyautogui.move(-3, None)
    elif thought == "right" and x < maxX:
        pyautogui.move(3, None)
    elif thought == "lift" and y > 0:
        pyautogui.move(None, -3)
    elif thought == "drop" and y < maxY:
        pyautogui.move(None, 3)
    elif thought == "neutral":
        pyautogui.move(None, None)
    elif thought == "push":
        pyautogui.click()

The final product

After running the program and training the command for navigating right, we can see the program in action.

Note that the following is done without the use of a mouse or any input device other than the EEG.

It must be noted that the program still has some difficulties.

The use of an EEG rather than a larger or more invasive BCI is practical for consumer-usage but sacrifices a lot of spatial resolution.

Thus, even when the user is exerting a “right” command, it can be noticed that the output arbitrarily switches between “right” and “neutral”.
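One software-side mitigation, which is my own suggestion rather than part of the program above, is to smooth the detections with a majority vote over a short sliding window before acting on them:

from collections import Counter, deque

recent = deque(maxlen=10)  # the last 10 detected commands

def smooth(thought):
    # Return the most common recent command, damping one-off
    # flickers between e.g. "right" and "neutral".
    recent.append(thought)
    return Counter(recent).most_common(1)[0][0]

This trades a little responsiveness for stability, which is usually a fair deal for cursor control.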

It is this exact problem that will be the next great hurdle in BCI technology: how to devise a BCI that is both practical for everyday use and still provides high-quality real-time data.

The community is currently focused on surpassing this hurdle and remains persistent in advancing towards this goal.

Humankind exists in an era of information.

But information is only as good as what it can be used for.

As more and more information about our brains, our nervous systems, and even the unknown concepts of cognition, imagination, emotion, and logic is revealed to us through neurological research, our task as developers is to turn this information into useful devices for the betterment of humankind.

My cursor-control application is an introduction to the possibilities of BCI technology, and I will continue to build in this field alongside the BCI software development community.
