The Ethics of AI

Who is responsible in an accident with a self-driving car? And how do we guarantee our privacy when using tracking apps? 

These kinds of questions play a major role in the debate about AI (artificial intelligence) and ethics. They are complex issues, as there is often no universally accepted answer. Fortunately, more and more ethical guidelines are emerging to support the development of trustworthy AI systems.

General principles can be derived from these guidelines, such as fairness, transparency, honesty, reliability and privacy.

Ethical principles and guidelines can help us agree on what matters to us, so that we can develop AI applications that respect those values.

In practice, dilemmas arise that can lead to conflicts of values. Everyone has their own perspective, and there are several ethical schools of thought that are sometimes ideologically opposed. For example, we all agree that killing is wrong. But what do you do when a terrorist is about to shoot a hundred people? Should an autonomous weapon drone be allowed to intervene?

Ethical questions are complex, even without artificial intelligence involved.


Activity

Add your own ethical dilemma to the discussion.

Practical Applications

See the AI@School Toolkit and Curriculum for lots of practical activities and resources.

Here’s one we like for starters:


Best Resources to Teach AI Ethics

AI@School Toolkit

10 Things About AI
Thing 2 - Ethics