Ethical and legal challenges of AI, machine learning and robotics
As artificial intelligence (AI), machine learning and robotics are being integrated into everyday life, it is important to understand the ethical and legal challenges that may arise.

AI can be seen in examples such as virtual assistants, AI assistant apps, generative AI, large language models and the recommendations created by the algorithms of streaming services. However, discussion continues around both the benefits and the concerns of using these technologies at scale.
Artificial intelligence (AI)
This describes computer systems that are designed to make decisions and give answers in a similar way to human beings, for example, speaking to a virtual assistant such as Alexa or Siri. A human being is able to converse with these virtual assistants and receive a meaningful response.
Machine learning
It describes a process in which computers are trained (taught) how to recognise things. For example, for a computer to recognise an apple as a fruit, it is trained by being shown many images of apples with distinctive features such as colour and size.
The system then learns to recognise an apple and to tell the difference between an apple and another fruit, such as a banana. This is an example of "machine learning".
The same process is used with recommender systems, for example on streaming services: the suggested titles are based on what the viewer has previously watched. Machine learning is often shortened to ML.
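The idea of learning from labelled examples can be illustrated with a very small sketch. The fruit features, example values and nearest-neighbour rule below are invented purely for illustration; real systems learn from far larger datasets and use far more sophisticated models.

```python
import math

# Invented training examples: each fruit is described by two simple
# features (redness on a 0-1 scale, length in cm) and a label.
training_data = [
    ((0.9, 7.0), "apple"),
    ((0.8, 8.0), "apple"),
    ((0.1, 18.0), "banana"),
    ((0.2, 20.0), "banana"),
]

def classify(features):
    """Label a new fruit by finding the most similar training example
    (a 1-nearest-neighbour rule, one of the simplest forms of learning)."""
    nearest = min(training_data, key=lambda item: math.dist(item[0], features))
    return nearest[1]

# A round, red fruit about 8 cm long is labelled as an apple,
# while a long, yellow fruit is labelled as a banana.
print(classify((0.85, 8.0)))   # -> apple
print(classify((0.15, 19.0)))  # -> banana
```

Recommender systems on streaming services work on the same principle at a much larger scale: patterns learned from past viewing are used to predict what a viewer is likely to enjoy next.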
Robotics
It describes a process involving the design, creation and use of robots to complete tasks either on their own (full autonomy) or alongside humans. A robot is a machine that can be programmed to complete a task.
Robots are increasingly being used to do many jobs such as working in industry and also within our homes, for example robot vacuum cleaners.
Ethical and legal considerations
Accountability
Consideration should be given to who is responsible for an AI system's decision, for example, is it the developer, the end user or the company using the AI?
When artificial intelligence and machine learning are used to make decisions, it’s important to assess who is accountable if there is an undesirable result. For example, if a system is trained with incorrect data and then used in a medical setting, who is responsible? Or if a self-driving vehicle is involved in an accident, who would be held to account?
Legal liability
If systems cause problems, legal processes need to determine who is legally liable. Existing legal frameworks need to be kept up to date to cover cases involving autonomous systems, so that legal liability can be clearly established when new technologies are used.
Algorithmic bias
AI systems learn from existing real-world datasets. If the datasets contain biases, the systems also contain those biases, which can lead to unfair treatment of individuals based on a range of factors, such as ethnic origin, gender or where someone lives.
When AI systems are trained, it’s important that the data used to train the system is unbiased, otherwise human prejudices will be replicated in AI systems. For example, in recruitment, an AI tool trained on biased historical hiring data may learn those prejudices and unfairly exclude some candidates during the recruitment process.
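One common way to look for this kind of bias is to compare outcomes across groups. The candidate records and threshold in the sketch below are invented purely for illustration; real audits use much richer data and several different fairness measures.

```python
from collections import defaultdict

# Invented screening results: (group, shortlisted?) for each candidate.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# Count shortlisted candidates and totals for each group.
shortlisted = defaultdict(int)
totals = defaultdict(int)
for group, selected in outcomes:
    totals[group] += 1
    if selected:
        shortlisted[group] += 1

# Selection rate per group; a large gap suggests the tool may be treating
# one group unfairly.
rates = {group: shortlisted[group] / totals[group] for group in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A common rule of thumb (the "four-fifths rule"): flag a possible problem
# if a group's rate is less than 80% of the highest group's rate.
highest = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * highest}
print("Groups needing review:", flagged)
```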
Safety
In order for people to trust these systems, it is important that they are fully tested by the companies developing them.
For example, when using robots as workers alongside humans in industrial settings, it’s necessary to know that robots can navigate safely without causing accidents or harm to human beings.