Institutional AI: The Crucial Nexus of Democracy, Civil Virtue, and Ethical AI Governance

Ali Arsanjani
9 min read · Jun 1, 2024


I typically write about technical topics, but here I will explore the intersection of the technical and societal aspects of AI. As we continue to teach the next generation about algorithms, data engineering, transformers, generative AI, and more, we may lose sight of the bigger picture of education.

The rise of generative AI presents both immense opportunities and significant challenges for society. To harness the benefits of this transformative technology while mitigating its risks, we must establish robust governance frameworks. Institutional AI, the creation of structures analogous to democratic institutions, is essential to achieving this. Effective AI governance requires more than technical institutions; it also demands a foundation of civil virtue and careful consideration of the roles of for-profit companies, legislation, oversight, public engagement, and research in this space.

We often hear the term “democratization of AI” used rather loosely, generally with the benevolent intent of expanding access to AI. Sometimes it is posed as a broad, somewhat nonchalant slogan, one that obscures the structures and functions required to support it: responsible provenance, oversight, guardrails, ethical peer review, and governance through transparency.

Democratic Institutions: The Foundations of Society

In a democracy, institutions play a critical role in ensuring the proper functioning of society. Key democratic institutions include:

  1. Legislature: Creates laws and policies.
  2. Executive: Enforces laws and regulations.
  3. Judiciary: Interprets laws and resolves disputes.
  4. Free Press: Informs the public and holds power accountable.
  5. Academia and Research: Generates knowledge and informs policy.
  6. Civil Society: Advocates for public interest and monitors power.

These institutions work together to create a more balanced and equitable system, increasing the chances of fairness, equity, inclusion, transparency, accountability, and public participation.

One of the tacit foundations of democratic institutions lies with the individual: civil virtue.

Drawing the Analogy: Institutions for Ethical AI Governance

To ensure ethical AI governance, we can draw an analogy to these democratic institutions. Just as these institutions are essential for a healthy democracy, we need analogous structures for AI. Here’s how we can translate these concepts into the realm of AI governance:

AI Ethics Boards and Standard-Setting Bodies

Similar to legislatures, these bodies would develop ethical guidelines and technical standards for AI development and use. These boards would ideally be composed of diverse stakeholders, including ethicists, AI researchers, industry representatives, policymakers, and members of the public. Their decision-making processes should be transparent and inclusive, ensuring that a wide range of perspectives is considered.

Example: The Partnership on AI is an organization that brings together diverse stakeholders to develop best practices for AI technologies.

AI Auditing and Compliance Agencies

Similar to executive branches, these agencies would enforce regulations and ensure compliance with ethical standards. They would need to be equipped with the authority and resources to conduct thorough audits, investigate complaints, and impose penalties for non-compliance. Transparency and accountability would be crucial for maintaining public trust in these agencies.

Example: The Office of the Privacy Commissioner in Canada conducts audits and investigations to ensure compliance with privacy laws, which could serve as a model for AI compliance agencies.
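
To make the idea of an audit concrete, here is a minimal sketch of one check such an agency might run: measuring the demographic parity gap, the difference in positive-outcome rates across groups, on a log of automated decisions. The data and the 0.1 threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of one fairness check an AI auditor might run:
# demographic parity difference across groups (illustrative only).
import pandas as pd

# Hypothetical audit log: one row per automated decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   1,   0,   0,   1],
})

# Positive-outcome rate per group.
rates = decisions.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")

# An assumption-laden heuristic: flag gaps above 0.1 for human review.
if parity_gap > 0.1:
    print("Flag for review: outcome rates diverge across groups.")
```

In a real audit, this single metric would be one input among many; thresholds, group definitions, and remedies would come from the regulations the agency enforces.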

AI Dispute Resolution Mechanisms and Legal Frameworks

Akin to the judiciary, these mechanisms would resolve disputes and establish legal frameworks for AI-related issues. This could involve creating specialized AI courts or tribunals with expertise in AI technology and ethics. Legal principles would need to be developed to address novel challenges like algorithmic bias, liability for AI-caused harm, and intellectual property issues related to AI-generated content.

Example: The European Union’s General Data Protection Regulation (GDPR) includes mechanisms for resolving data privacy disputes, which could be adapted for AI-related disputes.

AI Transparency and Explainability Initiatives

Like a free press, these initiatives would promote transparency and hold AI systems accountable. This involves developing tools and techniques that make AI systems more transparent and understandable, enabling users and stakeholders to see how AI decisions are made and to challenge them if necessary.

Example: The Explainable AI (XAI) project by DARPA aims to create AI systems that can explain their reasoning and decisions to users.
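
As a small illustration of what such tooling looks like in practice, here is a hedged sketch using scikit-learn's permutation importance, one common model-agnostic technique for showing which input features drive a model's predictions. The dataset and model are assumptions chosen for demonstration; DARPA's XAI program is not tied to this particular method.

```python
# Sketch: model-agnostic explanation via permutation importance
# (one common transparency technique; illustrative, not DARPA's method).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much accuracy drops:
# large drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```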

AI Research Institutes and Collaborative Networks

Analogous to academia, these institutions would conduct research and inform policy. They would foster research on AI ethics, safety, and societal impact, and encourage collaboration between academia, industry, and government. Activities could include funding research projects, organizing conferences, and publishing findings to inform policy and public discourse.

Example: The AI Now Institute at New York University focuses on the social implications of AI and conducts interdisciplinary research to inform public policy.

AI Advocacy Groups and Public Awareness Campaigns

Similar to civil society organizations, these groups would advocate for ethical AI and raise public awareness. They would organize public forums, engage with policymakers, and provide educational resources to ensure that AI development and use reflect societal values and concerns.

Example: The Algorithmic Justice League advocates for equitable and accountable AI through research, art, and policy advocacy.

A Deeper Dive into Civil Virtue in the Age of AI

Technical institutions are essential, but they are not enough. Effective AI governance also requires a foundation of civil virtue: the shared values and behaviors that enable a society to function well. In the context of AI, this includes the following aspects.

Digital Literacy

Understanding how AI systems work, their potential biases, and their impact on society. This includes knowing how algorithms make decisions, how data is collected and used, and how to critically evaluate AI-generated information.

Initiative: The UK’s National Centre for Computing Education offers programs to enhance digital literacy among students and teachers.

Critical Thinking

Evaluating AI-generated information with skepticism and discernment. It’s crucial to question the assumptions and biases that may be embedded in AI systems and to be aware of the potential for misinformation and manipulation.

Initiative: The Critical Media Literacy Project provides resources to help individuals critically analyze media and technology.

Ethical Awareness

Considering the ethical implications of AI use and advocating for responsible development. This involves reflecting on the potential consequences of AI for individuals, communities, and society as a whole, and pushing for AI systems that prioritize human well-being and social good.

Initiative: The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems works to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated in the ethics of these systems.

Collaboration

Working with others to ensure AI benefits everyone, not just the privileged few. This means engaging in open dialogue with diverse stakeholders, sharing knowledge and resources, and working together to develop AI solutions that address the needs of all members of society.

Initiative: The Global Partnership on AI (GPAI) facilitates international collaboration on AI research and development.

Respect for Data Privacy

Protecting personal information and being mindful of how data is collected and used by AI. This includes advocating for strong data protection laws, supporting companies that prioritize privacy, and being cautious about sharing personal information online.

Initiative: The Electronic Frontier Foundation (EFF) advocates for stronger data privacy protections and educates the public on data privacy issues.
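
As one small, practical illustration of being mindful with personal data, here is a sketch of pseudonymizing direct identifiers before records ever reach an ML pipeline. The field names are hypothetical, and hashing alone is not full anonymization; the sketch only illustrates the habit of minimizing raw identifiers.

```python
# Sketch: pseudonymize direct identifiers before data enters a pipeline.
# Hashing is NOT full anonymization; this illustrates data minimization.
import hashlib

def pseudonymize(record: dict, id_fields=("email", "name")) -> dict:
    """Replace direct identifiers with salted SHA-256 digests."""
    salt = "replace-with-a-secret-salt"  # assumption: managed securely
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]
    return out

record = {"email": "jane@example.com", "name": "Jane Doe", "age": 34}
print(pseudonymize(record))  # identifiers replaced; 'age' kept for analysis
```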

Data Literacy

Understanding the power and limitations of data, and how it is used to train and improve AI systems. This includes recognizing the potential for biases in data collection and analysis, and advocating for transparency in how data is used.
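
A first, very simple act of data literacy is checking who is actually in a training set before trusting a model built on it. The sketch below runs on a made-up dataset with assumed population shares, and shows how a quick representation check can surface the kind of sampling bias this section describes.

```python
# Sketch: a quick representation check on a (made-up) training set.
# Skewed group counts are one common source of downstream model bias.
import pandas as pd

train = pd.DataFrame({
    "age_band": ["18-29"] * 70 + ["30-49"] * 25 + ["50+"] * 5,
})

counts = train["age_band"].value_counts(normalize=True)
print(counts)

# Flag any group far below its plausible population share (assumed here).
expected_share = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}
for group, share in expected_share.items():
    if counts.get(group, 0) < 0.5 * share:
        print(f"Under-represented: {group} ({counts.get(group, 0):.0%} of data)")
```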

Algorithm Awareness

Recognizing the role of algorithms in shaping our online experiences, from the news we see to the products we are recommended. Understanding how algorithms work can help us make more informed choices and hold companies accountable for the impact of their algorithms.
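
To ground this, here is a toy feed-ranking sketch. The scoring weights are entirely made up, but they show how a single design choice, such as weighting comment- and share-heavy engagement strongly, can quietly determine what users see first.

```python
# Toy feed-ranking sketch: made-up weights showing how scoring choices
# shape what surfaces. Real recommender systems are far more complex.
posts = [
    {"title": "Calm explainer", "likes": 120, "comments": 10, "shares": 5},
    {"title": "Outrage bait",   "likes": 40,  "comments": 90, "shares": 60},
    {"title": "Local news",     "likes": 80,  "comments": 20, "shares": 15},
]

def score(post, w_likes=1.0, w_comments=3.0, w_shares=5.0):
    # Heavily weighting comments/shares tends to favor provocative content.
    return (w_likes * post["likes"]
            + w_comments * post["comments"]
            + w_shares * post["shares"])

for post in sorted(posts, key=score, reverse=True):
    print(f"{score(post):6.0f}  {post['title']}")
```

With these weights the provocative post ranks first despite having the fewest likes; change the weights and the feed changes, which is precisely the kind of design lever algorithm awareness asks us to notice.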

Collective Action

Organizing and advocating for responsible AI policies and practices. This could involve supporting organizations that work on AI ethics, contacting policymakers to express concerns, or participating in public discussions about AI.

Empathy and Inclusion

Considering the impact of AI on diverse communities and advocating for equitable outcomes. This means recognizing that AI can exacerbate existing inequalities and working to ensure that AI systems are designed and used in ways that benefit everyone, regardless of their background or circumstances.

The Role of For-Profit Companies in Ethical AI

The pursuit of profit by for-profit companies can create a conflict with the goals of ethical AI. When companies prioritize short-term gains over long-term societal well-being, they may:

Cut Corners on Safety and Ethics

This can lead to biased, discriminatory, or even dangerous AI systems. Companies may neglect to invest in robust testing and validation procedures, or they may fail to consider the potential negative impacts of their AI systems on vulnerable populations.

Ignore Negative Externalities

The broader societal impacts of AI, such as job displacement or environmental harm, may be disregarded. For example, a company might develop an AI system that automates jobs without considering the impact on workers or the need for retraining programs.

Engage in Regulatory Capture

Companies may try to influence or weaken regulations that protect consumers or ensure responsible AI development. They may lobby against stricter regulations or attempt to water down existing ones, putting profits ahead of public safety and ethical considerations.

Disenfranchise the Public

Lack of transparency and responsiveness can leave the public feeling powerless to influence AI development. Companies may withhold information about how their AI systems work or fail to engage with public concerns about potential biases or harms.

These actions have the potential to undermine public trust, weaken institutions, and erode the rule of law — all of which are essential for ethical AI governance.

Addressing the Conflict and Striving for Ethical AI

To mitigate the conflict between profit motives and ethical AI, and to create a future where AI technology is used responsibly, equitably, and for the benefit of all, we need a multifaceted approach. Here are some of the key considerations.

Stronger Regulations

Governments must enact and enforce robust regulations that ensure AI is developed and used responsibly. These regulations should address issues like bias, transparency, accountability, and the potential for misuse of AI. They should also establish clear guidelines for data collection, use, and storage, and ensure that individuals have control over their personal data.

Ethical Frameworks

Companies should adopt ethical frameworks that guide their AI development and deployment. These frameworks should be based on principles of fairness, transparency, accountability, and respect for human rights. Companies should also establish internal oversight mechanisms to ensure that these principles are upheld in practice.

Example: Google’s AI Principles outline ethical guidelines for the development and use of AI technologies.

Public-Private Partnerships

Collaboration between governments, companies, and civil society organizations can help to ensure that AI benefits everyone. Public-private partnerships can facilitate the sharing of knowledge and resources, promote innovation, and develop solutions to common challenges.

Example: The AI for Good Global Summit brings together stakeholders from the public and private sectors to discuss and develop AI solutions for social good.

Transparency and Accountability

Companies must be transparent about how their AI systems work and the data they use. They should also be accountable for the impact of their AI systems on individuals and society. This means providing clear information about how AI decisions are made, allowing for independent audits and evaluations, and establishing mechanisms for redress in cases of harm.

Example: The Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) community develops principles and practices for ensuring transparency and accountability in AI systems.
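
One widely discussed practice for this kind of transparency is publishing a "model card" alongside a deployed model. The sketch below emits a minimal, hypothetical card as JSON; the fields are illustrative, loosely inspired by the model-card literature rather than a formal standard, and the model name and contact address are placeholders.

```python
# Sketch: a minimal, hypothetical "model card" emitted as JSON.
# Fields are illustrative, loosely inspired by model-card proposals.
import json

model_card = {
    "model_name": "loan-risk-classifier",       # hypothetical model
    "version": "1.3.0",
    "intended_use": "Pre-screening consumer loan applications",
    "out_of_scope": ["employment decisions", "insurance pricing"],
    "training_data": "Internal applications, 2019-2023 (see data sheet)",
    "evaluation": {"accuracy": 0.87, "demographic_parity_gap": 0.04},
    "known_limitations": ["sparse data for applicants under 21"],
    "contact": "responsible-ai@example.com",    # placeholder address
}

print(json.dumps(model_card, indent=2))
```

Publishing something like this, alongside independent audits and redress channels, gives users and regulators a concrete artifact to hold a company to.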

Public Engagement

Engaging the public in discussions about AI development and governance can help to ensure that AI reflects societal values and needs. This can be done through public consultations, participatory design processes, and educational campaigns.

Example: The AI4All initiative aims to increase diversity and inclusion in AI by providing educational opportunities for underrepresented groups.

International Cooperation

Global challenges require global solutions. International cooperation is essential for developing and enforcing standards for ethical AI. This can involve establishing international agreements, coordinating regulatory approaches, and sharing best practices.

Example: The Organisation for Economic Co-operation and Development (OECD) has developed a set of principles for AI that provide guidance for responsible AI development and use.

When we integrate strong institutions, a foundation of civil virtue, and a commitment to addressing the conflicts that arise from profit-driven motives, we create the possibility of a future where AI serves the common good. This future upholds the values of fair use, intellectual property, fairness, transparency, and accountability, further increasing the probability that AI technology is used responsibly, equitably, and for the benefit of all.

References

  1. The Partnership on AI: https://www.partnershiponai.org/
  2. Office of the Privacy Commissioner of Canada: https://www.priv.gc.ca/en/
  3. European Union’s General Data Protection Regulation (GDPR): https://europa.eu/european-union/topics/data-protection_en
  4. DARPA’s Explainable AI (XAI) project: https://xai.darpa.mil/
  5. AI Now Institute at New York University: https://ainowinstitute.org/
  6. Algorithmic Justice League: https://www.algorithmicjusticeleague.org/
  7. The UK’s National Centre for Computing Education: https://www.ncce.io/
  8. Critical Media Literacy Project: https://criticalmedialiteracy.com/
  9. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: https://ethicsinaction.ieee.org/
  10. Global Partnership on AI (GPAI): https://gpai.ai/
  11. Electronic Frontier Foundation (EFF): https://www.eff.org/
  12. Google AI Principles: https://ai.google/principles/
  13. AI for Good Global Summit: https://aiforgood.org/
  14. Fairness, Accountability, and Transparency in Machine Learning (FAT/ML): https://www.fatml.org/
  15. AI4All: https://www.ai4all.org/
  16. Organisation for Economic Co-operation and Development (OECD): https://www.oecd.org/going-digital/ai/
