
Artificial intelligence (AI) is a technology that allows computers and machines to mimic human intelligence and problem-solving abilities. Whether used independently or in conjunction with other technologies like sensors, geolocation, or robotics, AI can execute tasks that typically need human intelligence or intervention.

However, its use necessarily involves processing massive amounts of data and information, which may include personal data. This processing is essential, not only for the AI to reach its full potential, but also to avoid mistakes and biases (also known as machine bias). The AI developer must apply appropriate technical safeguards to minimise machine bias; this is difficult, however, because the system tends to reflect the values of the people involved in its training.

As contradictory as it may seem, in AI development "the more the merrier" does not always apply. It is true that training requires a large amount of data, but the General Data Protection Regulation (GDPR) demands that only the minimum amount of personal data be processed. Besides, more data can also increase machine bias.
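In practice, data minimisation can start before training: strip every field that is not strictly necessary for the stated purpose. The sketch below illustrates the idea; the record fields and the "credit scoring" purpose are illustrative assumptions, not part of any specific system.

```python
# Minimal sketch of GDPR-style data minimisation before model training.
# Field names and the purpose mapping are illustrative assumptions.

RAW_RECORD = {
    "age": 34,
    "income": 52000,
    "full_name": "Jane Doe",      # direct identifier, not needed for the purpose
    "email": "jane@example.com",  # direct identifier, not needed for the purpose
    "postcode_prefix": "SW1",
}

# Only the fields strictly necessary for this specific purpose are kept.
FIELDS_NEEDED_FOR_CREDIT_SCORING = {"age", "income", "postcode_prefix"}

def minimise(record: dict, allowed: set) -> dict:
    """Return a copy of the record containing only the allowed fields."""
    return {k: v for k, v in record.items() if k in allowed}

training_record = minimise(RAW_RECORD, FIELDS_NEEDED_FOR_CREDIT_SCORING)
print(training_record)  # no direct identifiers reach the training set
```

The design choice here is an allow-list rather than a block-list: anything not explicitly justified for the purpose is dropped by default, which matches the GDPR's default-minimisation logic.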

Trust in AI tools is of remarkable importance: for people and companies to use AI, they need to rely on its safety. In particular, reducing opacity, bias and partial autonomy, and guaranteeing safety, ethics and the protection of fundamental rights, may help to increase trust in it.

There is also some advice you may follow to make an AI system trustworthy:

  1. Have a risk management system that covers the entire life cycle of the system.
  2. Ensure transparency and communicate clear information to users.
  3. Keep systems monitored by people during use (human oversight).
  4. Ensure accuracy, robustness and cybersecurity. Continuous learning systems must correct for possible biases in output information, and they must be resistant to unauthorised tampering attempts.
  5. Develop ethical AI, to avoid using the technology in a harmful or discriminatory way.
  6. Build reliable AI, ensuring that systems are accurate and useful.
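Point 4 above, correcting for biases in output information, can be made concrete with a simple monitoring check. One possible metric, sketched below under illustrative assumptions (the group labels, toy data and alert threshold are invented for the example), is the demographic parity gap: the difference in favourable-outcome rates between two groups.

```python
# Hedged sketch of one possible output-bias check (demographic parity gap).
# Group labels, toy outcomes and the threshold are illustrative assumptions.

def positive_rate(outcomes):
    """Share of favourable decisions (1) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a, group_b):
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = favourable decision, 0 = unfavourable (toy data)
group_a = [1, 1, 0, 1, 0, 1]
group_b = [0, 1, 0, 0, 0, 1]

gap = parity_gap(group_a, group_b)
ALERT_THRESHOLD = 0.2  # assumed policy value; tune to your risk appetite
if gap > ALERT_THRESHOLD:
    print(f"Bias alert: parity gap {gap:.2f} exceeds {ALERT_THRESHOLD}")
```

Running such a check continuously on live outputs, rather than once at release, is what makes it suitable for the continuous-learning systems the list refers to.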

Last, but not least, do not forget to involve a privacy expert in the design of the AI tool. The GDPR requires controllers, both at the time of determining the means of processing and at the time of the processing itself, to implement appropriate technical and organisational measures, and to ensure that, by default, only personal data which are necessary for each specific purpose of the processing are processed.
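One common technical measure a privacy expert might recommend is pseudonymisation: replacing direct identifiers with stable tokens before the data enters the AI pipeline. A minimal sketch follows, using a keyed hash (HMAC) from Python's standard library; the key handling shown is an assumption for illustration, not production key management.

```python
# Minimal sketch of pseudonymisation as a "data protection by design" measure.
# A keyed hash (HMAC-SHA256) makes tokens non-reversible without the key.
# Key storage here is illustrative only; real systems need proper key management.

import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-key-vault"  # assumption: kept outside the dataset

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "jane.doe@example.com", "score": 0.87}
record["user_id"] = pseudonymise(record["user_id"])
print(record)  # the same input always maps to the same token
```

Because the same input always yields the same token, records can still be linked across the pipeline for training and auditing, while the original identity stays recoverable only by whoever holds the key.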
