Stakeholders in AI ethics
- Nusrat Jahan Nishu
- Apr 20
- 1 min read

Developing ethical principles for responsible AI use and development requires stakeholders across many sectors to work together. These stakeholders must examine how social, economic, and political issues intersect with AI and determine how machines and humans can coexist harmoniously.
Each of these actors plays an important role in reducing bias and risk in AI technologies.
Academics: Researchers and professors are responsible for producing the research, data, and theoretical frameworks that inform governments, corporations, and non-profit organizations.
Government: Agencies and committees within a government can help facilitate AI ethics in a nation. A good example is the Preparing for the Future of Artificial Intelligence report developed by the National Science and Technology Council (NSTC) in 2016, which outlines AI's relationship to public outreach, regulation, governance, the economy, and security.
Intergovernmental entities: Entities like the United Nations and the World Bank are responsible for raising awareness of and drafting agreements on AI ethics globally. For example, UNESCO’s 193 member states adopted the first-ever global agreement on the Ethics of AI in November 2021 to promote human rights and dignity.
Non-profit organizations: Non-profits like Black in AI and Queer in AI help underrepresented groups gain representation within AI technology. The Future of Life Institute created the 23 guidelines now known as the Asilomar AI Principles, which outline specific risks, challenges, and desired outcomes for AI technologies.
Private companies: Executives at Google, Meta, and other tech companies, as well as leaders in banking, consulting, health care, and other private-sector industries that use AI, are responsible for creating ethics teams and codes of conduct. This often sets a standard that other companies follow.