Digital Constitutionalism and Systemic Protection of Fundamental Rights in AI Applications: the cases of OpenAI and Anthropic
Paola Cantarini[1]
1. Introduction: Is there a trade-off between innovation and AI regulation?
Generally, when it comes to the regulation and governance of artificial intelligence (AI), a tension is perceived between the need to mitigate potential risks to fundamental rights and freedoms on the one hand, and the goal of not hindering but rather encouraging innovation and international competitiveness on the other.
The impacts of and risks to fundamental and human rights posed by AI applications are among the most intensely researched topics in the current field of AI, as reflected in significant academic and legislative contributions. This concern extends to the regulation of new foundation models and generative AI, which face even greater regulatory obstacles, whether through heteroregulation or self-regulation.
Relying solely on a catalogue of ethical principles, without enforcement, is a somewhat utopian or naive view. Principles alone cannot compel companies to develop compliance instruments that effectively protect rights, meet minimum requirements, and follow some degree of standardization, placing public and human interests and the protection of rights ahead of economic interests rather than merely adapting principles to their own convenience.
It is therefore the responsibility of public authorities to ensure a broad, inclusive, and democratic debate on the legislation to be enacted. This includes providing quality education, including digital education, to enable a qualified debate among a larger number of people. Regulation alone, however, is not sufficient.
This perspective is supported by the OECD's document on generative AI, which emphasizes the importance of promoting a digital ecosystem (Principle 2.2) and points out that the market dominance of large companies undermines the ability of smaller companies to compete. The document also notes the need for truthful and contextual data for training such models, and the risk of a vicious cycle in which AI systems are trained on increasingly lower-quality data produced by the models themselves. It further calls for investment in linguistic resources and data repositories in minority languages and for the use of open-source code (https://www.oecd-ilibrary.org/science-and-technology/ai-language-models_13d38f92-en).
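The "vicious cycle" the OECD describes can be made concrete with a toy simulation. The sketch below is an illustrative assumption, not taken from the OECD report: a simple statistical "model" is repeatedly re-fitted to samples drawn from the previous model, standing in for generations of AI systems trained on their predecessors' outputs, and the diversity of the data tends to collapse.

```python
# Toy illustration of training on model-generated data: a Gaussian is
# repeatedly re-fitted to its own samples, generation after generation.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data with unit spread.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 31):
    # "Train" a model: estimate the data distribution from the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation is trained only on the previous model's outputs.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# The estimated spread (sigma) tends to drift downward across generations:
# the variety present in the original data is progressively lost.
```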
Other issues are also noted, such as the difficulty of defining potential damages and attributing responsibility, given how hard it is to know which parties and which data were involved in a model's development.
With generative AI, new security risks arise, both specific and broader, such as the possibility of writing more effective, and therefore more convincing, malware and phishing emails. In addition, there are risks of misinformation, copyright infringement, the creation of untrue content known as "hallucinations," biased results, extensive processing of personal data without any legal basis (under the LGPD, Brazil's General Data Protection Law, or the GDPR, the EU's General Data Protection Regulation), and a significant environmental impact due to energy consumption, all pointing to increased risks to human and fundamental rights (https://www.oecd-ilibrary.org/science-and-technology/ai-language-models_13d38f92-en).
According to the OECD, there are both new specific risks and pre-existing risks that now appear on a larger scale, with increased threats to democracy, social cohesion, and public trust in institutions.
Instead of speaking of a trade-off between innovation and regulation in the field of AI, it is crucial to adopt a joint approach, treating innovation and technology not as ends in themselves or absolute values, but within a broader perspective that includes their connections with the economy, politics, law, inclusive economic development, climate risks, and automation, including the replacement of workers by AI. Moreover, instead of thinking only about regulation, it is essential to consider the various layers of a governance structure: regulation, technical aspects and design, compliance instruments, good practices, standards, and ethical guidelines.
In the financial sector, for instance, there was initially a tendency not to regulate fintechs and startups, following the common claim that a trade-off exists between innovation and regulation. Yet even China, whose approach to AI is generally more pro-innovation than pro-regulation, explicitly ordered the world's largest financial technology company, Ant Group, to reorganize so as to comply with the regulatory requirements faced by traditional, regulated financial intermediaries.
Regulatory sandboxes in the AI field, in turn, would allow AI applications to be tested in a safe and controlled environment before being placed on the market. This is exemplified by a proposal from Harvard University suggesting a sandbox for different types of Large Language Models (LLMs). Other important examples include the lessons of regulators such as the U.S. Food and Drug Administration (FDA) and a proposal from the United Kingdom that takes an innovative, technology-focused approach by suggesting the incorporation of laws into software systems, as sketched below.
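As a rough illustration of this "laws as code" idea, the sketch below expresses regulatory requirements as an executable pre-release check inside a hypothetical sandbox. The rules, thresholds, and field names are illustrative assumptions, not any actual statute or regulator's checklist.

```python
# Hypothetical sketch of "laws as code": regulatory requirements written
# as an executable check a system must pass before leaving a regulatory
# sandbox. The rules below are illustrative, not real law.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SandboxReport:
    has_impact_assessment: bool                # e.g. an algorithmic impact assessment
    personal_data_legal_basis: Optional[str]   # e.g. "consent" under LGPD/GDPR
    documented_human_oversight: bool

def unmet_requirements(report: SandboxReport) -> List[str]:
    """Return the list of (illustrative) requirements the system fails."""
    violations = []
    if not report.has_impact_assessment:
        violations.append("missing algorithmic impact assessment")
    if report.personal_data_legal_basis is None:
        violations.append("no legal basis for processing personal data")
    if not report.documented_human_oversight:
        violations.append("no documented human oversight mechanism")
    return violations

report = SandboxReport(True, None, True)
print(unmet_requirements(report))  # ['no legal basis for processing personal data']
```

The design point is that the check is machine-verifiable inside the sandbox, so compliance is tested before market release rather than litigated after harm occurs.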
2. The case of OpenAI
The inadequacy of relying solely on compliance measures can be observed in OpenAI's compliance documents (OpenAI website, May 2023), which mention a few generic principles, unilaterally created and without any enforcement, since they are not legal principles. Key points include:
“Minimize harm – we will build safety into our AI tools whenever possible and aggressively reduce the harm caused by the misuse or abuse of our AI tools.”
“Build trust – we will share the responsibility of supporting secure and beneficial applications of our technology.”
The principles mentioned are insufficient and ineffective, as they do not explain how safety and trust would actually be ensured. Moreover, they promise safety only "whenever possible," and their development lacks legitimacy, as they were not formulated by a multi-ethnic and independent team including representatives of vulnerable groups.
Upon analyzing OpenAI's documents (version 23.03.23), such as "Usage policies," "API data usage policies" (version 01.03.23), "Safety best practices" (March 2023), and "Educator considerations for ChatGPT" (undated), it is noted that these should be made available on the company's homepage and in terms understandable to the average person. The last document, especially, should be aimed at the general public rather than limited to educators, as it contains information relevant to everyone.
The "Usage policies" document instructs users to use the tool safely and responsibly in order to ensure its use for good, shifting the burden to users. This runs contrary to the principles of "privacy by design" advocated by Ann Cavoukian, widely adopted and internationally recognized, which emphasize proactivity and privacy protection built in from the start.
The document merely lists unauthorized activities, including "plagiarism" and "academic dishonesty," without addressing the tool's potential to produce inauthentic or fabricated sources and "hallucinations." It only states that the information provided should be "reviewed" by a qualified person in specific cases, such as legal advice, customized financial advice, and medical information.
Lastly, the documents mention the "human in the loop" perspective but offer only a single recommendation, without explaining what the concept means or how to achieve it, falling short of a notion that encompasses human control and review of the technology and respect for human values.
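To make "human in the loop" more concrete than the documents do, the sketch below shows one possible (hypothetical) implementation: outputs in sensitive domains are routed through a human reviewer before reaching the user. The domains, function names, and routing rule are assumptions for illustration only, not OpenAI's system.

```python
# Hypothetical sketch of a "human in the loop" control: model outputs in
# sensitive domains are held for human review instead of being returned
# directly to the user.
SENSITIVE_TOPICS = {"legal", "medical", "financial"}

def generate(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"model draft answering: {prompt}"

def human_review(draft: str) -> str:
    # Placeholder: a qualified reviewer approves, edits, or rejects the
    # draft here before it ever reaches the user.
    return draft

def answer(prompt: str, topic: str) -> str:
    draft = generate(prompt)
    if topic in SENSITIVE_TOPICS:
        return human_review(draft)  # a human controls the final output
    return draft

print(answer("Can I terminate this contract early?", "legal"))
```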
3. The case of Anthropic
Another paradigmatic example is Anthropic, an American artificial intelligence (AI) startup founded by former OpenAI employees, and its language model "Claude," trained on the basis of the proposal called "Constitutional AI" through the "Collective Constitutional AI" experiment (https://itshow.com.br/anthropic-claude-modelo-ia-concorrente-chatgpt/; https://www.anthropic.com/index/introducing-claude).
According to the company, this initiative aims to ensure that AI is trained in line with human values and ethics, inspired in part by the United Nations Universal Declaration of Human Rights, Apple's terms of use, "best practices," and Anthropic's own principles. Four ethical principles inspired by this Declaration were formulated (https://arstechnica.com/information-technology/2023/05/ai-with-a-moral-compass-anthropic-outlines-constitutional-ai-in-its-claude-chatbot/): responses should support principles such as freedom, equality, and fraternity; be the least racist or sexist and not discriminate on the basis of language, religion, political opinion, origin, wealth, or birth; most support and promote life, freedom, and personal security; and most firmly oppose torture, slavery, cruelty, and inhuman or degrading treatment.
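In technical terms, Anthropic has publicly described Constitutional AI as a loop in which the model critiques and revises its own drafts against the constitutional principles, with the revised answers feeding further training. The schematic sketch below follows that public description; llm() is a placeholder rather than a real API, and the prompts are illustrative assumptions.

```python
# Schematic sketch of the Constitutional AI critique-and-revision loop as
# publicly described by Anthropic. llm() is a stand-in for a real model
# call; the principles are abbreviated for illustration.
CONSTITUTION = [
    "Choose the response that most supports freedom, equality and fraternity.",
    "Choose the response that is least racist, sexist or discriminatory.",
]

def llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"<model output for: {prompt[:50]}...>"

def constitutional_revision(user_prompt: str) -> str:
    draft = llm(user_prompt)
    for principle in CONSTITUTION:
        critique = llm(f"Critique this draft under the principle: {principle}\n\n{draft}")
        draft = llm(f"Revise the draft to address this critique:\n{critique}\n\n{draft}")
    # In training, the revised drafts become supervised fine-tuning data.
    return draft

print(constitutional_revision("Describe typical gender roles."))
```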
However, although human rights declarations are themselves a Western construction, open to criticism for not considering non-Western perspectives and values, such as diverse conceptions of justice and human dignity, Anthropic claims, somewhat contradictorily, to be concerned with considering not only Western perspectives, while recognizing that any global choice of principles is subjective and influenced by the researchers' worldviews.
Although Anthropic is established as a "Public Benefit Corporation," which carries a statutory obligation to act responsibly and sustainably toward all stakeholders, such terms are generic and insufficient to ensure impartiality and the pursuit of the common good and the public interest over economic interests. Since it is a for-profit company, the interests of some stakeholders cannot be equated with the public interest. Hence the need to combine such efforts with heteroregulation, as in the US "Algorithmic Accountability Act" of 2022, which seeks a balance between the benefits and risks of automated decision-making systems and provides for impact assessments in specific cases.
Moreover, the term adopted may lead to confusion: it can be understood as synonymous with a "Constitution," ignoring the historical and political foundation of constitutions in struggles for rights, or confused with the concept of "digital constitutionalism" (Edoardo Celeste, Claudia Padovani, Mauro Santaniello, Meryem Marzouki, Gilmar Mendes), as in the following publication: "Claude's constitution is a reflection of what is called 'Constitutional AI,' or 'digital constitutionalism'" (https://epocanegocios.globo.com/tudo-sobre/noticia/2023/08/concorrente-do-chatgpt-elaborou-uma-constituicao-dos-bots-entenda-por-que-isso-e-importante.ghtml). A similar confusion arises with another initiative, the "Facebook Oversight Board," which cannot be considered synonymous with a constitutional court (https://www.oversightboard.com).
Digital constitutionalism should not be confused with private proposals from for-profit companies, since the central aim of this constitutional movement is to limit private power in the internet space, rebalance legal relationships in the digital scenario, and better protect fundamental rights. It is also important to adopt a broader perspective on fundamental rights, encompassing their individual, collective, and social dimensions.
4. Final Considerations
From an intersectional, inclusive, decolonial, and democratic perspective, there is a need to combine efforts in an AI governance approach that includes heteroregulation, compliance tools and best practices, ethical principles, and public policies that ensure the sharing of the benefits of such systems. This involves preparing the population at large for economic transitions through training in new essential skills, rigorous quality-control methodologies, and the development and enforcement of standards for these systems tailored to the application context.
The objective here is to establish a broad and systemic protection scheme, combining efforts to consider, holistically and sustainably, the adequate protection of rights in the face of AI applications, without hindering innovation. This goes beyond a mere "permissionless innovation" approach, which views innovation as incompatible with regulation (https://www.cnnbrasil.com.br/economia/facilitar-correcao-de-erros-e-melhor-do-que-impedir-uso-de-ia-diz-pesquisador-do-mit/). Instead, it aligns with the concept of "meta-innovation" (Luciano Floridi), combining innovation, ethics, digital responsibility, and responsibility for innovation (Wolfgang Hoffmann-Riem).
[1] This topic is part of the author's postdoctoral research, supported by a Fapesp grant.