
What is AI ethics?

AI ethics refers to the principles that govern AI’s behaviour in line with human values. AI ethics helps ensure that AI is developed and used in ways that benefit society. It encompasses a broad range of considerations, including fairness, transparency, accountability, privacy, security, and potential societal impacts.


Introduction to AI ethics

Imagine an AI system that predicts the likelihood of future criminal behaviour and is used by judges to determine sentencing lengths. What happens if this system disproportionately targets certain demographic groups?

AI ethics is a force for good that helps mitigate unfair biases, remove barriers to accessibility, and augment creativity, among many other benefits. As organisations increasingly rely on AI for decisions that impact human lives, it’s critical that they consider the complex ethical implications, because misusing AI can harm individuals and society, as well as businesses’ bottom lines and reputations.

In this article, we shall explore examples of ethical AI principles, key AI ethics terms and definitions, how to implement and govern AI ethics in an organisation, and the leading authorities and resources on the subject.

Examples of ethical AI principles

The wellbeing of people is at the centre of any discussion about the ethics of AI. While AI systems can be designed to prioritise morality and ethics, humans are ultimately responsible for ensuring ethical design and use, and for intervening when necessary.

There’s no single, universally agreed-upon set of ethical AI principles. Many organisations and government agencies consult with experts in ethics, law, and AI to create their guiding principles, which commonly address fairness, transparency, accountability, privacy, and security.

AI ethics terms and definitions

Because ethical AI sits at the intersection of ethics and advanced technology, conversations about it draw vocabulary from both fields. Understanding this vocabulary is important for being able to discuss the ethics of AI:

AI ethics: A set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development, deployment, use, and sale of AI technologies.

AI model: A mathematical framework created by people and trained on data that enables AI systems to perform certain tasks by identifying patterns, making decisions, and predicting outcomes. Common uses include image recognition and language translation, among many others.

AI system: A complex structure of algorithms and models designed to mimic human reasoning and perform tasks autonomously.

Agency: The capacity of individuals to act independently and to make free choices.

Bias: An inclination or prejudice for or against a person or group, especially in a way considered to be unfair. Biases in training data—such as the under- or over-representation of data pertaining to a certain group—can cause AI to act in biased ways.

Explainability: The ability to answer the question, “What did the machine do to reach its output?” Explainability refers to the technological context of the AI system, such as its mechanics, rules and algorithms, and training data.

Fairness: Impartial and just treatment or behaviour without unjust favouritism or discrimination.

Human-in-the-loop: The ability of human beings to intervene in every decision cycle of an AI system.

Interpretability: The ability for people to understand the real-life context and impact of an AI system’s output, such as when AI is used to help make a decision about approving or rejecting a loan application.

Large language model (LLM): A type of machine learning model, trained on vast amounts of text, that is used for language understanding and generation tasks.

Machine learning: A subset of AI that provides systems the ability to automatically learn, improve from experience, and adapt to new data without being explicitly programmed to do so.

Normative: The branch of practical ethics concerned with what people and institutions “should” or “ought” to do in particular situations.

Transparency: Related to explainability, transparency is the ability to justify how and why an AI system is developed, implemented, and used, and to make that information visible and understandable to people.

How to implement principles for AI ethics

For organisations, there’s more to using AI ethically than just adopting ethical principles; these principles must be integrated into all technical and operational AI processes. While integrating ethics might seem cumbersome for organisations rapidly adopting AI, real-world cases of harm caused by issues in AI model designs and usage show that neglecting proper ethics can be risky and costly.

Who is responsible for AI ethics?

The short answer: everyone who’s involved in AI, including businesses, governments, consumers, and citizens.

Infographic: the different roles of people in AI ethics and what human stakeholders need to understand.

The role of business leaders in AI ethics

Many businesses establish committees led by their senior leaders to shape their AI governance policies. For example, at SAP, we formed an advisory panel and an AI ethics steering committee, consisting of ethics and technology experts, to integrate our ethical AI principles throughout our products and operations. These principles prioritise human-centred values such as fairness, transparency, and accountability.

Forming an AI ethics steering committee

Establishing a steering committee is vital for managing an organisation’s approach to the ethics of AI and provides top-level accountability and oversight. This committee ensures ethical considerations are interwoven into AI development and deployment.

Best practices for forming an AI ethics steering committee

Creating an AI ethics policy

Developing an AI ethics policy is essential for guiding AI initiatives within an organisation. The steering committee is critical in this process, using its diverse expertise to ensure the policy adheres to laws, standards, and broader ethical principles.

Example approach for creating an AI ethics policy

Flowchart: risk classification and assessment process.
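
To make the risk-assessment idea concrete, here is a minimal sketch of such a classification step in Python. The tiers, criteria, and field names are hypothetical assumptions for illustration only; they are not SAP’s policy, the flowchart’s actual logic, or any regulator’s legal categories, and a real policy would define them with the steering committee.

```python
# Hypothetical risk-classification sketch. The tiers and criteria are
# illustrative assumptions, not an actual policy or legal framework.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_legal_rights: bool    # e.g., sentencing, hiring, lending decisions
    processes_personal_data: bool
    human_in_the_loop: bool       # can a person intervene in each decision cycle?

def classify_risk(uc: UseCase) -> str:
    """Map a proposed AI use case to a review tier."""
    if uc.affects_legal_rights and not uc.human_in_the_loop:
        return "blocked: add human oversight before resubmitting for review"
    if uc.affects_legal_rights or uc.processes_personal_data:
        return "high risk: full ethics-committee review required"
    return "low risk: standard compliance checklist applies"

print(classify_risk(UseCase("loan approval scoring", True, True, False)))
print(classify_risk(UseCase("warehouse demand forecast", False, False, True)))
```

A rules-based step like this is easy to audit and to version alongside the policy document itself.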

Establishing a compliance review process

Developing effective compliance review processes is essential to ensure AI deployments adhere to the organisation’s AI ethics policies and regulations. These processes help build trust with users and regulators and serve to mitigate risks and uphold ethical practices across AI projects.

Typical compliance review processes

Technical implementation of AI ethics practices

Integrating ethical considerations into AI development involves adapting current technology practices to ensure systems are built and deployed responsibly. In addition to establishing ethical AI principles, organisations sometimes also create responsible AI principles, which can be more focused on their specific industry and technical use cases.

Key technical requirements for ethical AI systems

Detection and mitigation of bias: Use diverse data sets and statistical methods to detect and correct biases in AI models. Conduct regular audits to monitor bias; a minimal audit sketch follows this list.

Transparency and explainability: Develop systems that users can easily understand and verify, employing methods like feature importance scores, decision trees, and model-agnostic explanations to improve transparency (see the explainability sketch after this list).

Data privacy and security: Ensure data in AI systems is securely managed and complies with privacy laws. Systems must use encryption, anonymisation, and secure protocols to safeguard data integrity (a pseudonymisation sketch follows this list).

Robust and reliable design: AI systems must be durable and reliable under various conditions, incorporating extensive testing and validation to handle unexpected scenarios effectively.

Continuous monitoring and updating: Maintain ongoing monitoring to assess AI performance and ethical compliance, updating systems as needed based on new data or changes in conditions (a drift-detection sketch follows this list).

Stakeholder engagement and feedback: Involve stakeholders, such as end-users, ethicists, and domain experts, in the design and development processes to collect feedback and ensure the system aligns with ethical and operational requirements.
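
As a concrete illustration of the bias audit mentioned above, the following Python sketch computes per-group selection rates and a disparate impact ratio for a hypothetical binary classifier. The data, group labels, and the 0.8 threshold (the informal “four-fifths rule”) are assumptions for illustration; a real audit would use the organisation’s own data, protected attributes, and thresholds.

```python
# Minimal bias-audit sketch. Data, groups, and the 0.8 threshold are
# illustrative assumptions, not a universally mandated standard.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Share of positive outcomes (e.g., loan approvals) per group."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(rates: dict) -> float:
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: 1 = approved, 0 = rejected.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
groups = np.array(["a", "a", "a", "a", "a", "a",
                   "b", "b", "b", "b", "b", "b"])

rates = selection_rates(predictions, groups)
ratio = disparate_impact_ratio(rates)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")

# Flag for human review if the ratio falls below the audit threshold.
if ratio < 0.8:
    print("Potential disparate impact detected: escalate for review.")
```

Running this check on every model release, and logging the results, turns the audit from a one-off exercise into part of the compliance trail.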
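
As a sketch of the model-agnostic explanation methods mentioned above, here is permutation importance computed by hand against a toy scoring function. The “model”, feature names, and data are hypothetical; in practice, the same loop runs against a trained model’s predictions.

```python
# Permutation importance by hand: shuffle one feature at a time and measure
# how much the model's output degrades. The linear "model" below is a toy
# stand-in for a real trained model.
import numpy as np

rng = np.random.default_rng(0)

weights = np.array([0.7, 0.1, 0.2])  # toy model: fixed linear scorer

def model_score(X: np.ndarray) -> np.ndarray:
    return X @ weights

X = rng.normal(size=(500, 3))  # hypothetical inputs
y = model_score(X)             # reference outputs

def permutation_importance(X, y, feature_names):
    """Error increase when a feature is shuffled: bigger = more important."""
    importances = {}
    for j, name in enumerate(feature_names):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        importances[name] = float(np.mean((model_score(X_perm) - y) ** 2))
    return importances

# Hypothetical feature names for, say, a loan-scoring model.
for name, imp in permutation_importance(X, y, ["income", "tenure", "region"]).items():
    print(f"{name}: {imp:.3f}")
```

Because it needs only the model’s inputs and outputs, this technique works even when the model itself is a black box.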
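
For the anonymisation requirement above, one common building block is pseudonymisation: replacing direct identifiers with stable, non-reversible tokens. The sketch below uses HMAC-SHA256 from Python’s standard library; the key handling is deliberately simplified, and in production the key would live in a managed secrets store, with pseudonymisation applied as one layer alongside encryption and access controls.

```python
# Pseudonymisation sketch using HMAC-SHA256. Key handling is simplified
# for illustration; production keys belong in a managed secrets store.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"  # hypothetical

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "C-10293", "purchase": "laptop"}
record["customer_id"] = pseudonymise(record["customer_id"])
print(record)  # the same customer always maps to the same token
```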
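
And for continuous monitoring, a minimal drift check can flag when live inputs no longer resemble the data the model was trained on. The threshold and data below are illustrative assumptions; production systems typically use richer tests, such as the population stability index, wired into alerting pipelines.

```python
# Minimal drift check: flag when a live feature's mean shifts more than a
# chosen number of reference standard deviations. Threshold is an assumption.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training-time data
live = rng.normal(loc=0.6, scale=1.0, size=1_000)        # hypothetical live inputs

def drifted(reference: np.ndarray, live: np.ndarray, threshold: float = 0.5) -> bool:
    """True if the live mean moves more than `threshold` reference std devs."""
    shift = abs(live.mean() - reference.mean()) / reference.std()
    return shift > threshold

if drifted(reference, live):
    print("Feature drift detected: schedule a model review and retraining check.")
```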

Training the organisation in the ethics of AI

Comprehensive training is crucial to ensuring that employees understand AI ethics and can work responsibly with AI technologies. Training also serves to enhance the integrity and effectiveness of the organisation’s AI tools and solutions.

Key components of an effective AI training curriculum

AI ethics use cases for different roles in the organisation

Everyone in an organisation who works with AI-powered applications, or with AI answer engines, should be alert to the risk of AI bias and use these tools responsibly. What this looks like in practice varies by role and department across the business.

Authorities on AI ethics

AI ethics is complex, shaped by evolving regulations, legal standards, industry practices, and technological advancements. Organisations must stay up to date on policy changes that may impact them—and they should work with relevant stakeholders to determine which policies apply to them. The list below is not exhaustive but provides a sense of the range of policy resources organisations should seek out based on their industry and region.

Examples of AI ethics authorities and resources

ACET Artificial Intelligence for Economic Policymaking report: This research study by the African Centre for Economic Transformation assesses the economic and ethical considerations of AI for the purpose of informing inclusive and sustainable economic, financial, and industrial policies across Africa.

AlgorithmWatch: A human rights organisation that advocates for, and develops tools to support, the creation and use of algorithmic systems that protect democracy, the rule of law, freedom, autonomy, justice, and equality.

ASEAN Guide on AI Governance and Ethics: A practical guide for member states in the Association of Southeast Asian Nations to design, develop, and deploy AI technologies ethically and productively.

European Commission AI Watch: The European Commission’s Joint Research Centre provides guidance for creating trustworthy AI systems, including country-specific reports and dashboards to help monitor the development, uptake, and impact of AI in Europe.

NTIA AI Accountability Report: This National Telecommunications and Information Administration report proposes voluntary, regulatory, and other measures to help ensure legal and trustworthy AI systems in the United States.

OECD AI Principles: The OECD, a forum of countries and stakeholder groups working to shape trustworthy AI, facilitated these principles in 2019 as the first intergovernmental standard on AI. They also served as the basis for the G20 AI Principles.

UNESCO Recommendation on the Ethics of Artificial Intelligence: This United Nations agency’s recommendation framework was adopted by 193 member states after a two-year global consultation process with experts and stakeholders.

Conclusion

Ethical AI development and deployment require a multi-faceted approach. Organisations should establish clear ethical principles, integrate them into AI development processes, and ensure ongoing compliance through robust governance and training programmes. By prioritising human-centred values like fairness, transparency, and accountability, businesses can harness the power of AI responsibly, driving innovation while mitigating potential risks and ensuring that these technologies benefit society as a whole.


SAP Resources

More AI ethics use cases and guidance

Receive comprehensive guidance for implementing ethical AI practices in the SAP AI Ethics Handbook.
