Interaxon measures brainwaves to give VR devs more data for game design

Prototypes of the Muse Virtual Reality add-ons for Samsung Gear VR (left) and HTC Vive (right).
Image Credit: GamesBeat

Interaxon started out developing wearables like its Muse headband meditation tool, and it’s now applying what it has learned to virtual reality. Its new Muse Virtual Reality aftermarket add-ons will attach to the HTC Vive and Samsung Gear VR headsets to pick up users’ brainwaves and collect information about how they’re reacting to stimuli. The company plans to send software development kits to developers in Q2 of this year and bring the add-ons to market in Q4.

Like Interaxon’s Muse headband, Muse Virtual Reality uses electroencephalography (EEG) to capture brain activity. This data can tell developers what a user is paying attention to and how they’re responding to characters and environments in a game. It can also measure cognitive workload: how much mental effort the user is exerting while in the VR experience.

“We can close the loop, take that information and bring it back into the game engine, and use it to adapt the game and make it more engaging for the user,” said Interaxon’s chief scientist Graeme Moffat in an interview with GamesBeat. “Automatically turning up or down the level of stimulation in the game based on the user’s brain responses to it. You can tell whether, given a high cognitive load or low cognitive load, what they’re seeing, how much they’re seeing, whether they’re distracted by this or that or some other thing.”

Though the technology may be useful primarily for developers who are playtesting their VR games, it has other applications as well. EEG is commonly used in biofeedback therapy, which is a form of treatment where patients learn more about their own physiological reactions and try to change them.

“There are a few conditions in brain health, mental health, where biosignal feedback and brain signal feedback are really useful,” said Moffat. “They can add something. Mild traumatic brain injury like concussion. Post-traumatic stress. ADHD. A few other brain health conditions. Maybe depression, although we don’t really know. Anxiety. Those kinds of conditions, where adding brain signals to a treatment regime can potentially improve the outcomes. We know that it can. It’s just a question of how we integrate it into VR.”

Moffat says that EEG systems typically cost $5,000 and up, whereas Interaxon is hoping to get its price under $1,000. The company has experience in the consumer space, and it recently integrated its Muse EEG technology into glasses and sunglasses through a partnership with Smith Optics, a manufacturer of athletic eyewear.

Muse Virtual Reality does come with a learning curve, though, which may make it challenging for at-home users. For instance, they’ll have to learn how to wear the headsets properly. The Vive add-on has electrodes that have to make contact with the user’s head to be effective. If there’s hair in the way, it interferes with data collection.

In addition to teaching people how to use the hardware, Moffat says that Interaxon is developing software that will help people interpret the information. It will also be working with developers to make sure the information makes sense.

“My job, I guess, is to explain what the brain signals do and how that corresponds to mental state,” said Moffat. “Our engineers’ job is to take the signals and put them into a usable format for developers. You can’t just take the raw data and do something with it. You have to process it and put classifiers on it. That’s where it gets really interesting. We have to work with developers on what they’re doing. Our approach is open science. We publish everything out in the open. We share our SDK openly. You can pull raw signals off of all of our devices.”
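Interaxon hasn’t published the details of that processing pipeline, but the step Moffat describes — turning raw voltages into labelled mental-state estimates — typically starts with something like the band-power sketch below. Everything here is illustrative: the sample rate, the theta/alpha ratio as a workload proxy, and the synthetic data are assumptions, not Interaxon’s actual SDK.

```python
# Illustrative only: a generic EEG band-power pipeline, not Interaxon's SDK.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sample rate in Hz (typical for consumer EEG)

def band_power(signal, fs, low, high):
    """Average power of `signal` in the [low, high] Hz band, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def workload_index(eeg_window):
    """Crude cognitive-workload proxy: theta-band power over alpha-band power."""
    theta = band_power(eeg_window, FS, 4, 8)
    alpha = band_power(eeg_window, FS, 8, 12)
    return theta / alpha

# Two seconds of synthetic samples standing in for one raw channel off the headset.
window = np.random.randn(2 * FS)
print(f"workload index: {workload_index(window):.2f}")
```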

Here’s an edited transcript of our interview at the Consumer Electronics Show (CES).

Above: Prototypes of the Muse Virtual Reality add-ons for Samsung Gear VR (left) and HTC Vive (right).

Image Credit: GamesBeat

GamesBeat: Can you tell me more about Muse Virtual Reality?

Graeme Moffat: These are not the final form factor for AR and VR. This is going to continue to evolve for a number of years. Our hypothesis is we have a long [way] to go, but we’re building the neuro-adaptive vision phase for the future of AR and VR gear. Not just gaming but productivity, too. The big one initially is going to be gaming.

There are sort of two theories about how you build a neurotechnology interface. One is the Elon Musk, Facebook Building 8 strategy, which is to sequester a dozen neuroscientists for five years and see what they come up with. It’s not really how innovation in the space of neurotechnology has worked in the past, and I don’t think that’s how it’s going to work in the future. The other is a lot of open testing, a lot of labs in a lot of places and a lot of users, and then you gradually iterate.

None of the stuff that’s talked about in neurotechnology in the news is new. It has all come out of the laboratory. It’s all stuff that’s being done at the Howard Hughes Medical Institute, places like that. What we’re talking about when we talk about the brain-computer interface is taking that stuff, scaling it way up, and making it possible to use it for thought control in computing. We’re a long way from that.

But in the interim, what we’re building toward is something that’s going to be much more useful in the medium term. That is — people are going to put things on their head, AR and VR headsets, head-mounted displays. That gives us an opportunity to put a lot of sensors on the cranium that we wouldn’t otherwise have. Somebody puts on a Vive headset or a Gear VR, we have all kinds of contact points around their face, around the back and sides of their heads. That allows us to measure their brain responses and time lock those responses to stimuli in the VR environment. We can measure brain responses to those things in ways that allow game developers — and gamers themselves — to use this output to have an adaptive experience.

What I mean by that is we can measure, for example, cognitive workload, attention, and novelty. Some stimuli will be repetitive in a game and some will be novel. Some will be recognizable and some will be new. You can imagine measuring the brain response to a human face, which is quite straightforward to do. Then, you can tell whether or not the face that someone has seen — you can tell by their brain signals whether they recognize that face or not. We can measure those things with brainwaves. We can close the loop, take that information and bring it back into the game engine, and use it to adapt the game and make it more engaging for the user. Automatically turning up or down the level of stimulation in the game based on the user’s brain responses to it. You can tell whether, given a high cognitive load or low cognitive load, what they’re seeing, how much they’re seeing, whether they’re distracted by this or that or some other thing. That gives you a more immersive experience. We call this neuro-adaptive technology. That’s what we’re building.
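To make the “closed loop” concrete, the sketch below shows how a game update might consume a per-tick cognitive-load estimate and nudge stimulation up or down. The `read_cognitive_load()` source, the thresholds, and the stimulation variable are hypothetical placeholders, not part of any shipping Muse API or game engine integration.

```python
# Hypothetical neuro-adaptive loop: nudge in-game stimulation up or down based
# on a cognitive-load estimate. Signal source and thresholds are placeholders.
import random

def read_cognitive_load():
    """Stand-in for a per-tick cognitive-load estimate (0.0 = idle, 1.0 = overloaded)."""
    return random.random()

def adapt_stimulation(level, load, low=0.3, high=0.7, step=0.05):
    """Turn stimulation down when the player is overloaded, up when under-engaged."""
    if load > high:
        return max(0.0, level - step)
    if load < low:
        return min(1.0, level + step)
    return level

stimulation = 0.5  # stands in for pacing, spawn rate, ambient detail, etc.
for tick in range(10):  # ten ticks of a game loop
    load = read_cognitive_load()
    stimulation = adapt_stimulation(stimulation, load)
    print(f"tick {tick}: load={load:.2f} -> stimulation={stimulation:.2f}")
```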

The first product we have coming out this year is going to be a mod. We don’t want to build our own headsets because that’s a hard thing to do, and it’s not a problem that we need to solve. We’re trying to build modifications, simple low-cost ones that a lot of gamers can buy, and make it possible for the signals coming off these things to be used by game developers and gamers themselves.

This is a modifiable aftermarket faceplate that goes into a Samsung Gear VR. These are all electrodes here. They measure brain responses up here and facial muscle responses down here. Not only can you measure brain responses, but you can measure whether someone’s cheekbones are rubbing up against the sensors and other facial muscle activity. You can drive an avatar in VR, facial expressions on an avatar. We also have electrodes that go up behind the ears. The early versions of these will be just clipping on your earlobes. A later version will be more like the Muse itself, the glasses, where the rubber electrode behind the ear fits on seamlessly. And then, the electronics just sit on top.
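The facial-muscle idea maps naturally onto the blend-shape weights most avatar systems expose. As a rough sketch of that mapping, the snippet below rectifies and smooths one raw EMG channel into a 0–1 expression weight; the channel data, the rest and maximum activation levels, and the “smile” blend shape are all made up for illustration.

```python
# Illustrative EMG-to-avatar mapping: rectify and smooth one facial-muscle
# channel into a 0-1 blend-shape weight. All data and levels are synthetic.
import numpy as np

FS = 256  # assumed sample rate in Hz

def emg_envelope(raw, window=32):
    """Full-wave rectify the signal and smooth it with a moving average."""
    rectified = np.abs(raw - raw.mean())
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def blend_weight(envelope, rest_level, max_level):
    """Map the latest envelope value to a 0-1 weight for, say, a 'smile' blend shape."""
    w = (envelope[-1] - rest_level) / (max_level - rest_level)
    return float(np.clip(w, 0.0, 1.0))

cheek_channel = np.random.randn(FS)  # one second of a made-up cheek electrode
weight = blend_weight(emg_envelope(cheek_channel), rest_level=0.2, max_level=2.0)
print(f"smile blend-shape weight: {weight:.2f}")
```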

Above: A prototype of the Muse Virtual Reality add-on for the Samsung Gear VR.

Image Credit: GamesBeat

GamesBeat: Does this come with software to help developers interpret it?

Moffat: Yeah, that’s the key part. You can’t just make the hardware. You have to make it easy for developers and gamers to use. You have to teach people how to get the signal and how to push the signal around. It’s not immediately obvious how to interact with a brain-computer interface. You have to think about how users are going to learn this. That’s actually something that mindfulness teaches really well.

We came to the mindfulness product by building things like this for other applications, and through the realization that when you’re trying to learn how to push brain signals around for brain-computer interfaces, you’re actually learning how to control your thoughts. That’s an essential skill in mindfulness. So, this diverged in two ways. One is, we can teach you mindfulness with technology. The other is, we can use the techniques we learned about how users interact with these things to make them easier for developers to use. We pull the signals out, teach users how to get good signal quality, and then, it becomes an SDK output, already interpreted by the SDK or the API, and they can input that to the game in a simple way. This is coming in 2018. We’ll get this out to developers in Q2 and then launch in Q4.

GamesBeat: This is for the Gear VR?

Moffat: Yeah, this is for the Gear, and then, this is for the Vive. We’re not limiting ourselves to those, but these are the two most easily modified headsets, and the ones that are the easiest to use and in the most widespread use. The Vive, we just like it because it’s the most effective VR headset, certainly for user experience today. We’ve tried them all, and we like the Vive best. Not only is it comfortable, but the tracking and the experience of being in VR is a lot less disorienting because the latency is so good. That’s a really important thing. Being able to measure and control latency, if you’re looking at time-locked brain responses, is super important. HTC has done an excellent job with that as well.

When you get into something like the Vive, because you have this big thing on the back of your head, we have a bunch of other places we can put electrodes. These are going to be softer. These ones we put in for CES are really hard, so they’re resistant to abrasion, but the softer ones will be more comfortable in the production model. They wiggle around. You push this thing out and put it around your head. One of the challenges is you’re going to have to wiggle it a bit to get it through people’s hair. There’s a learning curve. You’re going to have to put this thing on, lock in, and get good brain signal. That becomes a part of the game as well. We have to gamify that, which is something we’re really good at doing at Muse. We’ve learned a lot, through the hundreds of thousands of people who have Muses, about how you teach people how to use a brain-computer interface or a neuro-adaptive technology on their own in the home.

Traditionally, when you use electroencephalography (EEG), it’s in a laboratory environment like a university lab, and you have a trained technician who puts the electrodes on. You can’t really take it out of the lab in that sense. So, when people first started trying to do this, started marketing these things, people would take them home, and the end user would be like, “What do I do? How do I get this thing on?” We went through that and figured it out. There’s a particular process of how to take people through and teach them how to use this. You actually gamify that process of getting the electrodes attached to your head, make that part of the experience. You lock in your VR environment, lock in your brain sensors.
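The “lock in your brain sensors” step Moffat describes usually amounts to a gate on per-electrode contact quality before the experience starts. The sketch below is a generic version of that gate with simulated quality scores and made-up channel names and thresholds; it is not the actual Muse fitting flow.

```python
# Generic electrode-fitting gate: keep prompting until every channel reports
# good contact, then start the experience. Quality scores are simulated here.
import random

CHANNELS = ["left_ear", "right_ear", "occipital_left", "occipital_right"]
THRESHOLD = 0.8  # made-up "good contact" cutoff

def read_contact_quality():
    """Stand-in for per-channel contact quality, 0.0 (no contact) to 1.0 (solid)."""
    return {ch: random.random() for ch in CHANNELS}

for attempt in range(1, 21):  # a real app would loop until locked or abandoned
    quality = read_contact_quality()
    bad = [ch for ch, q in quality.items() if q < THRESHOLD]
    if not bad:
        print(f"locked in after {attempt} checks -- start the experience")
        break
    print(f"adjust: {', '.join(bad)}")  # prompt the user to reseat those electrodes
```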

Anyway, the reason why we put these electrodes on the back of the head is because that’s where the visual processing part of the brain lives. Auditory is right above the ear. Most of the active thinking, cognitive control, is in the pre-frontal area. And then, all back here is visual. You can measure, really accurately, visual responses to what’s going on in the visual field of the user. So, you’re seeing it in the VR headset, and we’re measuring the brain responses back here. Because we have this opportunity to put electrodes here, it gives us another level of coverage and resolution. It opens up a whole new world of possibilities around how you build this stuff into games.
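“Time-locking” responses from those rear electrodes is essentially event-related averaging: cut an epoch of occipital EEG around each stimulus onset and average the epochs so the stimulus-locked response survives while background activity cancels out. The sketch below does that with synthetic data; the sample rate, epoch window, and stimulus timings are all assumptions.

```python
# Event-related averaging sketch: cut epochs of an occipital channel around
# stimulus onsets and average them. All data synthetic; timings are assumptions.
import numpy as np

FS = 256                                    # assumed sample rate in Hz
PRE, POST = int(0.1 * FS), int(0.5 * FS)    # 100 ms before onset, 500 ms after

eeg = np.random.randn(60 * FS)              # one minute of a fake occipital channel
onsets = np.arange(FS, 55 * FS, FS)         # one stimulus per second, in samples

epochs = []
for onset in onsets:
    epoch = eeg[onset - PRE : onset + POST]
    epoch = epoch - epoch[:PRE].mean()      # baseline-correct on the pre-stimulus part
    epochs.append(epoch)

erp = np.mean(epochs, axis=0)               # the stimulus-locked (event-related) average
print(f"epochs: {len(epochs)}, ERP samples: {erp.size}, peak: {erp.max():.3f}")
```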

Above: The inside of a prototype of the Muse Virtual Reality for HTC Vive.

Image Credit: GamesBeat

GamesBeat: It’s interesting. It’s a different interpretation of what we think about when we talk about presence in VR. Are you also reaching out to developers to test this out?

Moffat: Yeah. This is going to come out in late 2018, this modified Vive with — this has more channels. Depending on what you buy, it’ll have eight or 16 channels. Sixteen is getting close to full head EEGs like you would see in a hospital. That goes up to 19 or 20 channels. This is going to come first to developers. We’ll make hundreds of them for game developers and VR developers, and we’ll seed them to people who are really engaged and want to work with us on this. This is a long-term project for us. We think we can really make a difference in VR, and we want to do it the right way.

GamesBeat: Did this emerge organically from your mindfulness work? Or were you always interested in the gaming space as well?

Moffat: We’re tinkerers. In our company we just love VR gaming. We do a lot of it, playing at night after everyone’s gone. The engineers will hang around and tinker and play with the Vive. It occurred to us a while ago that we could do something with biosignals, with brain signals in VR. We started playing around with it, and it turned out it was easier than we thought to bring something meaningful. We just kept going from there. In terms of the mindfulness stuff, there’s a sort of virtuous cycle.

Our CTO, who invented Muse, came to mindfulness not through the mindfulness movement or Buddhism, but he actually figured out the principles of mindfulness by using technology. He realized he had to learn to control his own thoughts, and then, he’s like, “Hey, has anybody ever heard of this? I just figured this thing out.” And everyone said, “Yeah, that’s basically focused attention, mindfulness meditation.” Oh, really? Yeah, that’s cool.

So, he created a technology that could teach mindfulness meditation to people who struggled with it. That became the cycle that created the first product, the Muse product, and then, we worked with Smith to bring that product to glasses. It’s not like we were looking for this project. It’s that we got into VR gaming, and then, we said, “Look at all these spots on the head where we can put electrodes.” We can do so much more in VR. We started to play around and came to the realization that there’s an awful lot of things we can do to make VR better and more engaging. It was a labor of love until it became a project that people got interested in.

GamesBeat: Do you think there are health applications here? People are doing things like PTSD treatment in VR. Will you be able to use this for those applications as well?

Moffat: You certainly will. The barrier to adoption there is a hard one to get through, though. First you have to build an experience that works — like if you’re working with PTSD. Then, you add brain-signal sensing, and you design it in a way that adds something to the experience. There are things we can do with brain signals to treat PTSD, for example, and add that to the VR interactive experience that’s also designed around that. We can make a more powerful tool for that purpose. Or for ADHD.

There are a few conditions in brain health, mental health, where biosignal feedback and brain signal feedback are really useful. They can add something. Mild traumatic brain injury like concussion. Post-traumatic stress. ADHD. A few other brain health conditions. Maybe depression, although we don’t really know. Anxiety. Those kinds of conditions, where adding brain signals to a treatment regime can potentially improve the outcomes. We know that it can, it’s just a question of how we integrate it into VR. That’s a long-term project. But we talk to the guys at Stanford and Harvard who are doing the VR in medicine stuff, and they’re super excited about this technology.

The challenge here is, it’s hard to explain. You have to sit down with someone and try to explain what it is. If you just take a picture and say, this is a brain-sensing system, control VR with your brain — that’s not really what it is. That’s a long way off.

There’s one other company doing this — well, there are a few. One is on the software side. They’re called Neurable. They’re quite good at what they do. The other companies making hardware are making systems that are like $10,000. EEG systems typically cost $5,000 and up, and that’s not going to work for gaming or any everyday use. We’re going to bring this down well under $1,000 and make it useful and accessible. We’re alone in that space right now.

GamesBeat: Do you think this is something that consumers will buy, or is it mainly aimed at developers and publishers?

Moffat: This one’s going to be primarily for developers and early adopters. Probably — that’s going to teach us a lot. Whatever we do in 2019 is going to be the thing that’s much more oriented toward consumers. The developers and the early adopters that work with this one are going to build the things that drive the next experience that goes to new heights. The first experiences in the next generation of these things will be built on this hardware.