Facebook’s AR / VR research division is developing non-invasive brain scanning technology as a potential input device for its future consumer AR glasses.

Although Facebook has not yet announced AR glasses as a specific product, the company has confirmed that it is developing them. A Business Insider report earlier this year cited a source saying the device “resembled traditional glasses.”

In today’s blog post, Facebook describes its ultimate goal as “the ultimate form factor of a pair of stylish augmented reality (AR) glasses.”

However, the main problem in developing consumer AR glasses is the input method. A traditional controller, like those used with many VR devices, would not be practical for glasses meant to be worn outdoors. Similarly, while voice recognition is now a mature technology, people generally do not want to speak potentially private commands aloud in front of strangers.

A brain-computer interface (BCI) could allow users to control their glasses and even type words and sentences just by thinking.

“In ten years, the ability to type directly from our brains may be taken for granted,” the blog post says.

“Not long ago, this would have sounded like science fiction. Now it is within reach.”

Invasive methods are not an option

Almost all high-quality BCI systems today are invasive, meaning they require surgically implanting a chip directly into the user’s brain. Elon Musk’s startup Neuralink plans to use a robot to insert tiny “threads” into the brain, but this is obviously still impractical for a mass-market product.

Mark Chevillet’s technology

Facebook’s BCI program is led by Mark Chevillet. Chevillet holds a doctorate in neurobiology and was previously a professor and program manager in the Department of Neurobiology at Johns Hopkins University, where he worked on a project to build a communication device for people who cannot speak.

Before figuring out how to read minds non-invasively, Chevillet first needed to determine whether it was possible at all. He turned to Edward Chang, a colleague and neurosurgeon at the University of California, San Francisco (UCSF).

To prove the concept was feasible, UCSF researchers used invasive electrocorticography (ECoG) and achieved up to 76% accuracy in detecting the statements the subjects were thinking about.

Previous research projects achieved similar results only through offline processing, whereas this result was recorded in real time. The system can currently detect only a limited vocabulary of words and phrases, but the researchers are working to expand it.

Near-infrared imaging

Achieving similar results without brain implants will require new technologies and breakthroughs. Facebook is collaborating with Washington University and Johns Hopkins to study near-infrared imaging.

If you have ever shone a red light through your finger, you will have noticed that the light passes through it. Researchers use this principle to sense “shifts in oxygen levels in the brain” caused by neurons consuming oxygen when they are active – an indirect measure of brain activity.
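The idea behind this kind of near-infrared sensing is commonly formalized with the modified Beer-Lambert law: attenuation changes measured at two wavelengths can be converted into changes in oxygenated and deoxygenated hemoglobin. The sketch below illustrates that conversion only; all coefficients and values are illustrative placeholders, not numbers from Facebook’s system or any real device.

```python
# Minimal sketch of the modified Beer-Lambert conversion used in
# near-infrared spectroscopy (fNIRS). All numbers are illustrative
# placeholders, not real extinction coefficients or measurements.

# Extinction coefficients for (HbO2, HbR) at two wavelengths.
# Deoxygenated hemoglobin (HbR) absorbs more at ~760 nm;
# oxygenated hemoglobin (HbO2) absorbs more at ~850 nm.
EXT = {
    760: (0.6, 1.5),
    850: (1.1, 0.8),
}

def hemoglobin_changes(dA_760, dA_850, path_cm=3.0, dpf=6.0):
    """Convert attenuation changes at two wavelengths into changes in
    HbO2 and HbR concentration by solving the 2x2 linear system
    dA = (ext_HbO2 * dHbO2 + ext_HbR * dHbR) * path * dpf."""
    L = path_cm * dpf  # effective optical path length
    (a1, b1), (a2, b2) = EXT[760], EXT[850]
    det = (a1 * b2 - a2 * b1) * L
    d_hbo = (dA_760 * b2 - dA_850 * b1) / det
    d_hbr = (dA_850 * a1 - dA_760 * a2) / det
    return d_hbo, d_hbr

# A larger attenuation rise at 850 nm than at 760 nm implies
# increased HbO2 and decreased HbR -- the kind of shift that
# accompanies neural activity.
print(hemoglobin_changes(0.01, 0.02))
```

With two unknowns (HbO2 and HbR changes), two wavelengths are the minimum needed to separate them, which is why fNIRS devices typically emit at one wavelength on each side of hemoglobin’s isosbestic point.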

This is similar to the approach of Mary Lou Jepsen’s startup Openwater. Jepsen worked at Facebook’s Oculus division starting in 2015, researching cutting-edge technologies for AR and VR. But while Openwater’s goal is to replace MRI and CT scanners, Facebook is clear that it is not interested in developing medical devices.

The current prototype is described as “bulky, slow, and unreliable,” but Facebook hopes that if it can one day recognize even a handful of commands such as “home,” “select,” and “delete,” it could be combined with other technologies, such as eye tracking, to become a compelling input solution for future AR glasses.

Direct cell imaging

If measuring blood oxygenation with near-infrared imaging proves insufficient, Facebook is also studying direct visualization of blood vessels and even neurons:

“Thanks to the commercialization of optical technologies for smartphones and lidar, we think we can create small, convenient BCI devices that will let us measure neural signals closer to those we currently record with implanted electrodes – and perhaps even decode silent speech one day. It may take a decade, but we think we can bridge the gap.”

Privacy and responsibility

Of course, the idea of Facebook literally reading your brain raises major privacy concerns. Such data could be used for targeted advertising of unprecedented accuracy, or for far more sinister purposes.

“We have already learned a lot,” says Chevillet.

“For example, at the beginning of the program, we asked our colleagues to share some anonymized electrode data from patients with epilepsy so that we could check how the software worked. But we later completely deleted that data.”

“We cannot foresee or solve all the ethical issues associated with this technology on our own,” Chevillet continues.

“What we can do is acknowledge when technology has moved beyond what people know, and make sure that information flows back to the scientific community. Neuroethical design is one of the key elements of our program. We are ready to listen to the concerns of people who are following the development of this technology and plan to use it in the future.”

Stay up to date with the latest news on neural interfaces with VRcue.

