
Ethics in artificial intelligence: introduction to the special issue

Recent developments in Artificial Intelligence (AI) have generated steep interest from the media and the general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) move from being perceived as tools to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of its actions and decisions? Can an AI system be held accountable for its actions?

How can these systems be controlled once their learning capabilities bring them into states only remotely linked to their initial, designed setup? Should such autonomous innovation in commercial systems even be allowed, and how should their use and development be regulated?

These and many other related questions are currently the focus of much attention. How society and our systems deal with these questions will largely determine our level of trust in AI and, ultimately, AI's impact on society, and even its continued existence.