The power of the mind
Brain-computer interfaces promise to change the way we interact with machines and thus the world around us, unleashing the power of the mind.
Have you ever wished you could change the world around you just by using the power of your thoughts? Well, you have been doing exactly that all your life: your body is an amazing, complicated biological machine that enables you to interact with your natural and social environment.
Nevertheless, wondrous as our body might be, it is significantly constrained by physical limitations, especially in the case of people with disabilities. Humans have therefore used technology since the dawn of history to effect change in their environment at a scale and magnitude far exceeding their physical capabilities. However, as machines become progressively more powerful and sophisticated, the input required from a human operator also tends to become more complex.
Brain-computer interfaces undoubtedly constitute the holy grail of human-machine interfaces, as they promise to let us leverage the power of machines as seamlessly and effortlessly as we control our own bodies: by using our thoughts.
EEG based brain-computer interfaces
Brain-computer interfaces rely on monitoring human cognitive or sensory-motor functions. There has been extensive research and technological progress in the field of neuroprosthetics, which entails implanting electronic devices directly into some part of the nervous system, central or peripheral, in order to restore or replace the function of impaired nerves or sensory organs (e.g. cochlear implants, retinal implants etc.). Although this approach has so far demonstrated better results, for example with respect to rehabilitating disabled people, the required surgical procedure is a major obstacle to widespread adoption as part of a more general interface. There has therefore been pronounced interest in non-invasive methods.
The most widely studied and successful such method is electroencephalography (EEG). EEG is a neuro-imaging technique which uses external electrodes placed along the scalp in order to record the electrical activity of the brain.
Although a grid of electrodes worn on the head avoids surgery and its associated medical complications (e.g. the development of scar tissue inside the cranium, which may degrade signal quality in the long term), it offers relatively poor spatial resolution. More importantly, the bone tissue of the skull dampens, deflects and distorts the electromagnetic waves generated by the neurons, blurring higher-frequency signals. This poses a significant problem, as applications typically rely on analyzing the spectral content of EEG signals, that is, the types/frequencies of neural oscillations (popularly called “brain waves”). Moreover, while definitely more pleasant than surgery, applying the electrodes to the head is a lengthy and particularly cumbersome process, especially since an electrolyte must be used to wet the surface of the contacts in order to achieve better impedance matching and therefore better results.
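To make the notion of spectral content concrete, here is a minimal sketch (plain Python, no external libraries) that estimates the power of a signal in the classic EEG frequency bands via a direct DFT. The band boundaries are approximate (they vary slightly across the literature), and the input is a synthetic test signal, not real EEG data.

```python
import cmath
import math

# Classic EEG bands in Hz; exact boundaries differ between sources.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(samples, fs, lo, hi):
    """Sum of squared DFT magnitudes for bins whose frequency lies in [lo, hi)."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):          # skip DC, use positive frequencies only
        freq = k * fs / n
        if lo <= freq < hi:
            coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power

# Synthetic 1-second signal sampled at 128 Hz: a strong 10 Hz "alpha"
# component plus a weaker 20 Hz "beta" component.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.2 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(fs)]

powers = {name: band_power(sig, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
dominant = max(powers, key=powers.get)   # "alpha" dominates for this signal
```

A real pipeline would use an FFT and window the signal first, but the idea is the same: the interesting information lies in how power is distributed across these bands.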
Despite the method’s limitations, recent advances in digital signal processing as well as machine learning have lowered the requirements on signal quality and spatial resolution, allowing the hardware to become much more portable and simpler to use, while redefining what is possible to accomplish with the recorded data. As a result, EEG is now more attractive than ever.
In the last 5 to 10 years, numerous consumer brain-computer interfaces have appeared on the market for only a few hundred dollars, with some platforms even coming as bare electronic boards, leaving the user to 3D-print the support grid for the electrodes. There are even open hardware projects and kits, alongside “traditional” do-it-yourself approaches, that allow anyone interested to build their own EEG device from scratch. Similarly, a broad range of proprietary and open source software is available for processing, curating and analyzing the recorded EEG data.
With all these tools becoming widely available, it’s hardly surprising that an ever-increasing number of “non-academic” projects are exploring the potential of brain-computer interfaces. Applications range from gaming to wheelchair and toy helicopter control. Just a few days ago, a drone race in which contestants controlled the drones using their thoughts made headlines.
The Emotiv Epoc+
The brain-computer interface that powered the drone race mentioned above was the Emotiv Insight, a sleek, 5-channel, dry-contact, wireless EEG headset primarily intended for commercial applications. The same manufacturer produces the Epoc+, a wireless, 14-channel EEG device. The Epoc+ is much more cumbersome to use, as its 18 electrodes first have to be wetted with an electrolyte and then precisely placed on the appropriate spots on one’s head, a process that may take up to 20 minutes. However, its resolution, sampling rate and remaining specifications make it suitable even for research projects. The headset additionally includes a gyroscope for tracking head movements. Better still, we happen to own one.
Emotiv releases accompanying software for the headset, including an engine that processes the raw EEG data and uses machine learning to interpret it as emotional states or facial expressions (apparently, every time you smile, wink or clench your teeth, there is corresponding electrical activity in the brain that the system can identify). Additionally, the user can train the system to recognize specific “thought patterns” and associate them with actions. By repeatedly recording the EEG patterns corresponding to the user’s thoughts of pushing, pulling or lifting an item, the system learns to identify these patterns and map them to actions within a given application (for example, moving a drone forward or backward, or making it gain height). The examples are not entirely coincidental: classification works best with “thought patterns” corresponding to envisioning motor functions, and almost not at all with abstract thoughts or mental pictures.
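Emotiv’s actual classifier is proprietary, but the train-then-recognize loop can be illustrated with a toy nearest-centroid classifier: average the feature vectors recorded for each mental command during training, then label a new recording by its closest centroid. All labels and feature values below are invented for illustration.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the label whose centroid is nearest (Euclidean) to the sample."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Hypothetical training data: feature vectors recorded while the user
# repeatedly imagined "push" or "lift" (values made up for illustration).
training = {
    "push": [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1], [0.8, 0.1, 0.3]],
    "lift": [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0], [0.1, 0.7, 0.8]],
}
centroids = {label: centroid(vecs) for label, vecs in training.items()}

command = classify([0.95, 0.15, 0.2], centroids)   # nearest to the "push" centroid
```

The repetition during training matters for the same reason it does here: more recordings per command produce a more stable average, and thus a more reliable decision boundary.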
The functionality described above is accessible through an API exposed as part of a Software Development Kit. Developers can therefore use the library, offered as a DLL, to map facial expressions and “thought patterns” to specific commands, or to write programs that are aware of the emotional state of the user (imagine the level of engagement possible in a video game where the in-game characters respond to the facial expressions and emotional state of the player). Emotiv also offers a basic control panel GUI, which helps ensure that the electrodes make good contact and showcases the platform’s capabilities with respect to emotional state, facial expression and thought pattern recognition. One can use the control panel to assign key bindings to facial expressions, which, combined with gyroscope-based head tracking, provides rudimentary cursor and character input control.
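As a sketch of how such expression-to-command bindings might be dispatched in one’s own code: the expression names, commands and event stream below are hypothetical, not identifiers from Emotiv’s SDK, where a real program would receive events from the headset rather than a hard-coded list.

```python
# Hypothetical bindings from recognized facial expressions to commands.
BINDINGS = {
    "wink_left":  "mouse_left_click",
    "wink_right": "mouse_right_click",
    "clench":     "escape",
    "smile":      "enter",
}

def handle(event, bindings):
    """Translate a recognized expression into its bound command, or None."""
    return bindings.get(event)

# Simulated stream of recognition events; "frown" has no binding.
events = ["smile", "wink_left", "frown"]
actions = [handle(e, BINDINGS) for e in events]
```

Unbound expressions simply produce no action, which keeps spurious detections from triggering commands.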
Of course, one need not rely on Emotiv’s SDK. Third-party software allows exporting the raw EEG data (e.g. as JSON objects), whereupon one of various open source frameworks (OpenViBE, adastra) can be used to process and utilize the data at will.
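For instance, once the raw data has been exported as JSON, reshaping it for further processing takes only a few lines. The field names below are an assumed schema for illustration, not the format produced by any particular exporter.

```python
import json

# Hypothetical export format: sampling rate, channel names, and one row
# of readings (in microvolts) per sampling instant.
raw = '''{
  "sampling_rate": 128,
  "channels": ["AF3", "F7", "F3"],
  "samples": [[4100.2, 4205.7, 4150.1],
              [4101.9, 4207.3, 4149.8]]
}'''

record = json.loads(raw)

# Regroup the per-instant rows into one time series per channel,
# which is the shape most analysis tools expect.
series = {ch: [row[i] for row in record["samples"]]
          for i, ch in enumerate(record["channels"])}
```

From there, each per-channel series can be fed into whatever filtering or spectral analysis the chosen framework provides.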
So, what do we have in mind for our Epoc+? To paraphrase Fermat’s famous marginal note: we have found a truly marvelous use case, but this blog post is already too long to contain it. Stay tuned for future developments! 🙂