Should Tech Companies Be Trusted to Read our Brains?

By Anushka Gupta 

Humans are always finding ways to put technology to good use, especially in medicine. The BRAIN Initiative stems from this idea: it is a public-private alliance of agencies and organisations (including the NIH, FDA and NSF, amongst others) whose purpose is to ‘deepen understanding of the inner workings of the human mind and to improve how we treat, prevent, and cure disorders of the brain’ (Mission, n.d.). In general, neurotechnologies are “devices aimed at reading/writing the electrical activity, tools designed to understand the neural code of the individual’s brain” (Humayun, 2001); they are also platforms that allow us to interact with our nervous system directly, in order to alter its activity or to control external devices, especially in the case of pathological conditions (Brooks, 1990).

In the summer of 2019, Facebook and Elon Musk’s Neuralink Corporation (a neurotechnology company founded in 2016) each announced technologies that can access and read our brains. Neuralink’s product features a neural implant device ‘that will let you control a computer or mobile device anywhere you go’ (Approach, n.d.); it records the activity of thousands of neurons in the brain and provides sensory feedback by transmitting signals back to it. The Neuralink website also describes how micron-scale threads, carrying many electrodes, are inserted into the motor cortex (the brain area that controls movement) and relay their signals to a central device known as the Link implant (Approach, n.d.).

Mark Zuckerberg and his paediatrician wife Priscilla Chan gave away 30 million shares of Facebook to start an ambitious biomedical project, the Chan Zuckerberg Initiative (CZI), with the ultimate aim of curing all diseases within this generation. One of its recent projects is a wireless brain implant that can record, stimulate and halt movement, though it is still being tested on monkeys before humans. The device alters brain activity through ‘interference’, so that it can act as a therapy for diseases such as epilepsy or Parkinson’s disease by stopping a seizure or harmful disruptive motion as soon as it starts (Zuckerberg, n.d.).

So these companies have indeed diversified, integrating their technology into healthcare for the greater good (with profit as a secondary bonus). However, the question remains: can we trust these tech companies to poke around in our brains?

The primary goal of these products is medical and therapeutic: to detect and rehabilitate the body’s impairments, for example by helping paralysed individuals use their thoughts to control a phone or computer. In Facebook’s case, the technology aims to help those with speech impairments (MSU Bioethics, n.d.). More generally, this type of technology could revolutionise the treatment of conditions such as schizophrenia, paralysis and epilepsy (Yuste et al., 2017).

However, on a larger scale, these technologies can have major social impacts, such as exacerbating social inequality or giving corporations, hackers and even governments new ways to exploit and manipulate people. This profoundly affects some of “our core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies” (Yuste et al., 2017). Therefore, to combat the risks involved, the devices these technology companies make need to be non-invasive, of minimal risk, and affordable at far less expense than the current neurosurgical procedures that treat these diseases and impairments.

One area of concern is privacy and consent: a remarkable amount of personal information can be obtained from people’s ‘data patterns’. These neural devices will most likely be connected to the internet, where hackers and cyber-criminal organisations could gather information about you, track you and, in the worst case, manipulate your mental experience (Yuste et al., 2017). For example, they could use your neural patterns to target advertising at you: if a device tracks fitness and health, you might be served advertisements for protein powder or weight-loss products. One way to minimise this risk is to let people opt out of sharing neural data with commercial providers, and, where sharing is needed, to put an explicit consent procedure in place. These are just some examples; there are many ways of protecting privacy in the modern age.

Another area of concern is self-identity. Some people who receive deep-brain stimulation through implanted electrodes have reported a changed sense of identity. In a 2016 study, a man who had used brain stimulation to treat his depression for several years reported that he had begun interacting with others in ways he felt were inappropriate and could not control, leaving him unsure which behaviours were truly his own (Klein et al., 2016). Neurotechnology can therefore disrupt people’s assumptions about the nature of the self, and users can end up behaving in ways they struggle to claim as their own. People should be made aware of the effects these devices may have on their mood, personality or sense of self (Yuste et al., 2017).

Many other neurotechnologies are in the works globally. For example, researchers at the University of Freiburg in Germany showed how neural networks “can be used to decode planning-related brain activity and so control robots” (Burget et al., 2017). The US Defense Advanced Research Projects Agency (DARPA) has launched a project called Neural Engineering System Design, still awaiting FDA approval, which aims to build a wireless brain device that can “monitor brain activity using 1 million electrodes simultaneously and selectively stimulate up to 100,000 neurons” (Yuste et al., 2017).

There are other benefits and problems of neurotechnologies not mentioned here, and they are still debated in society to this day. The benefits can certainly outweigh the negatives, as long as the risks are minimised and, in the best case, eliminated.


Mission. [Online] The BRAIN Initiative. Available from: 

Approach. [Online] Neuralink. Available from: 

Mark Zuckerberg-funded researchers test implantable brain devices. [Online] South China Morning Post. Available from:

Center for Ethics and Humanities in the Life Sciences. Should we trust giant tech companies and entrepreneurs with reading our brains? [Online] MSU Bioethics. 2019. Available from:

Yuste R, Goering S, Agüera y Arcas B, Bi G, Carmena JM, Carter A, et al. Four ethical priorities for neurotechnologies and AI. Nature. [Online] 2017;551(7679): 159–163. Available from: doi:10.1038/551159a

Humayun MS. Intraocular retinal prosthesis. Transactions of the American Ophthalmological Society. 2001;99: 271.

Brooks RA. Elephants don’t play chess. Robotics and Autonomous Systems. [Online] 1990;6(1–2): 3–15. Available from: doi:10.1016/S0921-8890(05)80025-9

Klein E, Goering S, Gagne J, Shea CV, Franklin R, Zorowitz S, et al. Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces. [Online] 2016;3: 140–148. Available from: doi:10.1080/2326263X.2016.1207497

Burget F, Fiederer LDJ, Kuhner D, Völker M, Aldinger J, Schirrmeister RT, et al. Acting thoughts: towards a mobile robotic service assistant for users with limited communication skills. 2017 European Conference on Mobile Robots (ECMR). [Online] 2017; 1–6. Available from: doi:10.1109/ECMR.2017.8098658
