Paola Cantarini
FIRST PART
Drawing on authors such as Sandra Wachter and Brent Mittelstadt (“A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI,” Columbia Business Law Review, 2019), Helen Nissenbaum (“Privacy in Context: Technology, Policy, and the Integrity of Social Life,” Stanford University Press, 2010), Luciano Floridi (“Group Privacy: A Defence and an Extension,” Philosophy & Technology, 2014), Daniel J. Solove and his theory of appropriate information processing as a basis for privacy protection (“The Digital Person: Technology and Privacy in the Information Age,” 2004), as well as the works of Habermas and Martha Nussbaum, we assert the need for a new “Declaration of Human Rights” and a reformulation of its foundations, adapted to the “onlife” context and capable of offering a suitable approach to artificial intelligence. Current problems can no longer be addressed from outdated premises such as liberal democracy, that is, from within the liberal paradigm.
It is necessary to rethink the democratic model in light of the shrinking number of democratic countries: only 29% of the world’s population lives under democratic regimes, and global freedom has declined for the 18th consecutive year (Freedom House, Freedom in the World 2024: The Mounting Damage of Flawed Elections and Disinformation, 2024). To this we may add the widening gap between rich and poor countries, increasing wealth concentration, the stagnation of the Human Development Index (HDI), and deepening digital and democratic inequalities (A Matter of Choice – Human Development Report 2025, UNDP). Moreover, dominant interpretations such as the absolutist reading of the First Amendment in the United States, which is historically anachronistic and insensitive to shifts in human nature, social reality, and power/media dynamics, fail to align with the Theory of Fundamental Rights, which grounds the necessary balancing between conflicting fundamental rights.
Contrary to Cass R. Sunstein’s stance (“#Republic: Divided Democracy in the Age of Social Media,” Princeton University Press, 2017), we require disruptive and innovative thinking, especially in the humanities, as also advocated by Glauco Arbix (USP/IEA Journal, 2024, AI and Scientific Research), and solutions that go beyond incremental fixes.
The 21st century poses unprecedented challenges to human dignity, transcending the traditional categories of human rights established in the postwar era. The digital revolution, advances in neuroscience, artificial intelligence, and new forms of social organization have created normative gaps that demand a profound reformulation of the legal-philosophical framework of fundamental and human rights. It is urgent to recognize emerging human rights and build a new “Declaration of Human Rights” adapted to contemporary realities, grounded in the work of contemporary thinkers who have dedicated their careers to understanding and responding to these challenges. This proposal diverges from the first binding international treaty on AI, which, despite its importance, presents alarming gaps. It does not address military applications of AI, fails to explicitly ban high-risk uses such as mass facial recognition systems, allows for self-regulation (Art. 18), and does not create new human rights (Art. 13) (Council of Europe. Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, 2024).
Rapid innovations in neuroscience and brain-computer interfaces (BCIs) usher in a new frontier in human rights. Technologies capable of reading, decoding, manipulating, and stimulating brain activity jeopardize mental self-determination, cognitive privacy, and psychological integrity. Contemporary literature already identifies the need for a new generation of fundamental and human rights—neuro-rights—as a normative response to such challenges (Yuste et al., 2017; Ienca & Andorno, 2017).
Neurocapitalism represents a new configuration of power in which brain data is extracted, stored, processed, and commercialized for profit, deepening existing asymmetries between individuals and corporations.
As Frank Pasquale (2015) warns, we are witnessing a transition from an “information society” to a “neurocognitive surveillance society,” where data collection extends from online activities to internal mental processes.
The advancement of neurotechnologies necessitates a profound update to the catalog of fundamental and human rights. Brain privacy, the right to dream, and restrictions on neurocapitalism are only initial expressions of this necessary normative transformation.
The recognition of neuro-rights represents not only a response to emerging risks but also an ethical commitment to preserving the autonomy, dignity, and cognitive freedom of individuals in an increasingly invasive and neuro-informational world.
The right to brain privacy refers to the legal protection of the brain’s electrical, chemical, and functional activity, including data derived from its capture, analysis, and inference by neuroscientific technologies. It is a direct extension of personal data protection, but with unprecedented sensitivity and risk.
Neural activity contains information about emotions, beliefs, memories, and even future intentions. If such content is accessed without consent or improperly processed, it may violate rights to intimacy, informational self-determination, and freedom of thought—constituting a new form of cognitive surveillance (Ienca & Andorno, 2017).
Among the emerging fundamental human rights, neuro-rights stand out as a demand for legal protection of the human mind, as advocated by Rafael Yuste, director of Columbia University’s Neurotechnology Center, president of the NeuroRights Foundation, and principal theoretical architect of neuro-rights. Yuste, also one of the designers of President Obama’s 2013 BRAIN Initiative, argues that the advancement of neurotechnologies requires specific legal protection of the human mind and stresses the urgency of implementing neuro-rights. In his words at UNESCO: “We must act before it’s too late.” This urgency stems from the exponential development of neurotechnologies and the rapidly closing window of opportunity for establishing proactive legal protections.
Neuro-rights are an emerging set of fundamental rights aimed at protecting the mental integrity, autonomy, and freedom of individuals in the face of advances in neurotechnologies, direct brain-computer interfaces (BCIs), and other technologies that read, manipulate, or alter brain activity.
According to Yuste, the core neuro-rights are (Yuste, R., Goering, S., et al., “Four ethical priorities for neurotechnologies and AI,” Nature, 2017):
- Right to Mental Privacy: Protection of personal information obtained by neurotechnologies (neurodata) from decoding without prior consent;
- Right to Mental Identity: The right to consciousness and the preservation of personal identity against unauthorized alterations through neurotechnological interventions;
- Right to Cognitive Liberty: Protection of the autonomy of the will and the ability to make free decisions;
- Right to Mental Integrity/Free Will: Preservation of independent choice against neural manipulations;
- Right to Cognitive Enhancement: Freedom to choose to use or refuse mental enhancement technologies;
- Right to Non-Discrimination Based on Neural Data: protection against algorithmic bias in the processing of neurodata, together with fair access to mental augmentation.
Among global legislative and constitutional initiatives, Chile is a pioneer: it was the first country to begin constitutionalizing neuro-rights, including in its 2021 constitutional reform the right to physical and mental integrity and the protection of neural data as explicit fundamental rights (https://www.bcn.cl/leychile). In Europe, the issue already appears in the Council of Europe’s bioethics discussions and, within the European Union, in the AI Act and the proposed AI Liability Directive, with particular concern for AI-driven manipulation of cognitive processes.
In the United States, bills are under debate, especially at the state level (e.g., California), alongside discussions within the NIH’s Neuroethics Initiative.
UNESCO, in its 2021 Recommendation on the Ethics of Artificial Intelligence, includes the protection of mental privacy as part of the ethical agenda for AI.
The 21st century is marked by the emergence of technologies capable of accessing, decoding, and manipulating human brain activity. Brain-machine interfaces, neurostimulation devices, and advanced neuromarketing techniques are shaping a scenario where the boundaries between thought, decision, and external intervention are increasingly blurred. Such a context requires the urgent expansion of the catalog of fundamental and human rights to incorporate neuro-rights that protect autonomy, cognitive dignity, and mental freedom.
Neurocapitalism, as delineated by Rafael Yuste (2021) and expanded by authors such as Marcello Ienca (2017), can be understood as an emerging economic system based on the extraction, processing, and monetization of neural data, creating economic value through the prediction and modification of mental states, behaviors, and decisions. In other words, it is an emergent model of commercial exploitation of brain activity.
Unlike traditional surveillance capitalism, which infers mental states through observable behaviors, neurocapitalism accesses neural activity directly, promising to eliminate the “uncertainty” inherent in behavioral inference. This emerging model allows private companies to monetize neural data by offering entertainment, health, or productivity services—but at the cost of massive brain data capture (Yuste, 2021).
Technology companies are already developing products capable of capturing neural signals for targeted advertising, consumer behavior modulation, and emotional state analysis. Neurodata mining practices—where brain data is collected, stored, and sold—create a new category of social risk: neurocognitive surveillance. This form of exploitation transcends traditional privacy boundaries and reaches deep into human subjectivity.
From a legal standpoint, such a scenario constitutes a violation not only of informational self-determination (Floridi, 2014) but also of constitutional principles of human dignity and freedom of thought.
The emerging legal category of “Brain Data Privacy” seeks to establish normative safeguards to protect information derived from brain activity, regardless of direct identification with the individual. This right transcends the traditional logic of personal data protection, recognizing the sensitive, intimate, and identity-bound nature of neural data.
Ienca and Andorno (2017) propose that the right to mental privacy be enshrined as a new human right, encompassing a prohibition on the collection, storage, and processing of brain data without free, specific, and informed consent. Furthermore, authors such as Rafael Yuste advocate for explicit legal restrictions on the commercial use of neural data to limit the destructive effects of neurocapitalism.
Neurogaming platforms, wearable meditation or productivity devices, and even mood analysis apps based on EEG (electroencephalography) already collect such data. The lack of adequate regulation creates an environment conducive to the economic exploitation of the human mind, with potential violations of privacy, freedom of thought, and cognitive dignity.
Forms of neurocapitalist exploitation include:
- Direct Neuromarketing – Companies like NeuroSky, Emotiv, and Nielsen already market consumer-grade EEG devices for analyzing neural responses to advertising stimuli. This “direct reading” of neural preferences represents a qualitative leap beyond traditional marketing, promising access to consumers’ “true” desires, unfiltered by reflective consciousness.
- Commercial Cognitive Optimization – Platforms such as Nootopia, Brain.fm, and Focus@Will sell promises of cognitive enhancement through neural stimulation, creating markets for mental capacities where attention, memory, and creativity become commodities.
- Neural Attention Economy – Continuous attention-monitoring devices, like those developed by BrainCo, allow for the real-time quantification and commodification of attentional resources, creating new markets for “mental time.”
Neurocapitalism operates through three central mechanisms of accumulation and expropriation:
- Neural Extraction: Capture of brain data through invasive and non-invasive devices, often under therapeutic or enhancement pretenses;
- Algorithmic Processing: Application of machine learning and AI to decode neural patterns and build detailed mental profiles;
- Behavioral Instrumentalization: Use of neural insights to influence and modify behaviors, creating feedback loops that amplify control over mental processes.
Brain data has unique characteristics that categorically distinguish it from other forms of personal data, such as:
- Radical Involuntariness – Unlike digital behavioral data resulting from conscious actions (clicks, posts, purchases), neural data is produced involuntarily by basic brain processes. An individual cannot simply “stop” producing brain waves or neural activity;
- Access to Private Mental States – Contemporary neurotechnologies can access mental states that individuals may not even consciously recognize, including pre-conscious processes, implicit emotions, and unarticulated intentions;
- Biometric Immutability – Individual neural patterns possess unique and relatively stable biometric characteristics, functioning like “brain fingerprints” that cannot be voluntarily altered.
The advancement of neurotechnologies has produced profound transformations in the relationships between science, the market, and fundamental rights. These challenges and particularities expose the inadequacy of existing legal frameworks, beginning with the limitations of the consent model, since consent is almost never free, informed, granular, and thus qualified (the “consent fiction”). The consent paradigm central to the GDPR, the LGPD, and similar legislation proves inadequate even in its original context, and all the more so for neural data, for several reasons:
- Impossibility of Granularity: It is not possible to selectively consent to specific types of neural activity;
- Lack of Awareness: Individuals do not fully understand what information can be extracted from their neural patterns;
- Permanence of Extraction: Once the neural interface is established, data extraction becomes continuous, complicating consent revocation;
- Insufficiency of Anonymization: Traditional anonymization techniques are ineffective for neural data because of:
  - Uniqueness of Brain Patterns: the practical impossibility of true anonymization;
  - Re-identification Capability: algorithms can re-identify individuals through their unique neural patterns;
  - Inference of Sensitive Attributes: “anonymized” neural data can still reveal sexual orientation, medical conditions, and political beliefs.
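The re-identification risk can be illustrated with a purely synthetic sketch. The script below does not use any real EEG data or any actual neurotechnology API; it simply generates random per-person feature vectors (standing in for stable neural “fingerprints”), strips their identifiers, and shows that a trivial nearest-neighbor matcher links the “anonymized” records back to labeled reference profiles. The feature dimensions and noise levels are illustrative assumptions, not empirical values.

```python
# Illustrative, synthetic demonstration of neural-data re-identification.
# Assumption: each person has a stable "fingerprint" vector (e.g. EEG
# band-power features); sessions differ only by small measurement noise.
import numpy as np

rng = np.random.default_rng(0)
N_PEOPLE, N_FEATURES = 50, 16  # hypothetical cohort and feature count

# Reference session: labeled recordings (row i belongs to person i).
fingerprints = rng.normal(size=(N_PEOPLE, N_FEATURES))

# Later "anonymized" session: same people, identifiers stripped,
# rows shuffled; only session-to-session noise differs.
anonymized = fingerprints + rng.normal(scale=0.1, size=fingerprints.shape)
order = rng.permutation(N_PEOPLE)  # true mapping, unknown to the attacker
anonymized = anonymized[order]

# Attack: match each anonymized record to the nearest labeled fingerprint.
dists = np.linalg.norm(
    anonymized[:, None, :] - fingerprints[None, :, :], axis=2
)
guessed = dists.argmin(axis=1)

accuracy = (guessed == order).mean()
print(f"re-identification accuracy: {accuracy:.0%}")
```

Because individual fingerprints differ far more between people than between sessions of the same person, even this naive matcher recovers nearly all identities, which is the sense in which removing names from neural recordings fails as anonymization.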
Given these challenges and the inadequacy of the legal status quo for protecting these emerging rights, we propose a specific framework and a new dimension of neuro-rights, alongside the right to brain privacy (Brain Data Privacy) and the right to dream as an expression of psychic freedom. The central aspects of this proposal, including its epistemological foundations and developments, will be presented in our next text.