European Union releases guidelines for responsible AI development


The European Union has published seven guidelines for trustworthy AI, based on around 500 comments received following the publication of draft ethics guidelines in December 2018.

While these guidelines are not laws, they set out a framework for lawmakers and companies to achieve trustworthy AI.

The guidelines flow from the Commission's artificial intelligence strategy, unveiled in April last year, which aims to gather investments of at least 20 billion euros annually from the public and private sectors over the next ten years.

The Commission wants to deepen its cooperation in this area with other nations that promote ethical artificial intelligence, such as Japan, Canada and Singapore, as it continues its work with the G7 and G20 groups of leading economies.

Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.

The European Commission, the executive arm of the bloc, unveiled a plan aimed at boosting trust in AI by ensuring, for instance, that data about EU citizens is not used to harm them.


The guidelines also urge policies that ensure accountability for artificial intelligence systems, and the security and reliability of AI algorithms when dealing with errors or inconsistencies. Ethical AI is presented as a win-win proposition that can become a competitive advantage for Europe: "being a leader of human-centric AI that people can trust".

Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

"We now have a solid foundation based on European Union values and following an extensive and constructive engagement from many stakeholders, including businesses, academia and civil society."

And there should be mechanisms in place to ensure that someone is responsible and accountable for the actions of AI systems.

The Commission now intends to launch a pilot phase in which industry, research institutions and public-sector experts will test the list of key requirements.

The announcement comes ahead of a pilot phase later this year, which will call on companies and public bodies to provide feedback.

The Commission said it also plans to launch a network of AI research excellence centres before the end of the year, and to begin discussions with other countries in an attempt to build global consensus on an approach to the technology.