On the road towards trustworthy artificial intelligence

Those driving irreversible changes to human civilisation must do so ethically

Elly Yates-Roberts


This article was originally published in the Autumn 2019 issue of The Record.

In 1942, Isaac Asimov’s short story Runaround introduced ‘The Three Laws of Robotics’, later complemented by ‘The Zeroth Law’, which stated: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” While for the last 70 years or so the author’s ideas have been considered a fine example of great science-fiction literature, we have now come to the point where we no longer question a human’s ability to create an intelligent machine. With Microsoft’s recent US$1 billion investment in OpenAI, we should rather be asking ourselves: can machines be made moral as they become an integral part of our 21st century world?

The title of this article is no coincidence: it refers to a consultation paper recently published in Malta, one international example of how to approach the ethical guiding principles of artificial intelligence (AI) and the policy considerations that will form the basis of a (trans)national ethical AI framework.

There have been other initiatives around the globe to create guiding principles for research and development in AI, regardless of jurisdiction or political regime. Even China, heavily criticised for its social credit system – a Black Mirror-like reality – has released its own Beijing AI Principles, which are intended to realise AI that is beneficial for humankind and nature. Researchers in this field, together with other professionals, have also been busy and have created lists of guiding principles such as the Asilomar AI Principles, the International Association of Privacy Professionals’ whitepaper Building Ethics into Privacy Frameworks for Big Data and AI, the Institute of Electrical and Electronics Engineers’ (IEEE) Ethically Aligned Design, and the Organisation for Economic Co-operation and Development’s AI Principles, which have already been adopted by 42 countries.

The last of these publications declares not only that AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being, but also that AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, with appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society. It also argues for transparency and responsible disclosure around AI systems, so that people can understand AI-based outcomes and challenge them, and states that AI systems must function in a robust, secure and safe way throughout their life cycles, with potential risks continually assessed and managed. Finally, it says that organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with these principles.

These principles have been embraced within other documents of a similar nature, so it appears we have sufficient frameworks for the ethical use of AI – but what do they actually mean in practice? Do they sufficiently protect us against privacy breaches, or eliminate the risk of reinforcing societal and economic schisms and perpetuating stereotypes? Google’s algorithm – created to monitor and prevent hate speech on social media platforms and websites – turned out to be biased against African-Americans. So did COMPAS, a US algorithm used to guide sentencing by predicting the likelihood of a criminal reoffending. And in 2018, Amazon had to shut down its beta AI recruiting tool when it emerged that the system was not scoring candidates for software developer vacancies and other technical positions in a gender-neutral way.

Are we really incapable of creating ethical machines, or do the above examples merely expose our society’s systemic problems, inadvertently carried over into our new digital reality? There is no doubt that senior management must embrace not only the benefits of new technologies, but also risk-assess all their implications. One way of ensuring that ethical standards are maintained is the approach recently formulated by Hannah Fry, an associate professor in the mathematics of cities at University College London, who has called for a Hippocratic oath for AI.

However, it is not just the data scientists or programmers who will need to bear such a responsibility. The automation and digitalisation of any business will necessitate the thorough understanding and involvement of every single employee.

Ursula von der Leyen, President-elect of the European Commission, has already announced that one of her priorities will be to propose legislation for a coordinated European approach to the human and ethical implications of AI. The IEEE is also moving from principles to practice: while the law remains in a state of flux, clear policies are needed, as are practices that allow new technologies to be adopted on the basis of informed trust. This is something senior management should appreciate and respond to by formulating an adequate due diligence process.

While we await further developments, just as the General Data Protection Regulation introduced us to the concept of ‘privacy by design’, the ethical AI dilemma appears to be taking us in the same direction. Moral values will have to be considered at every step, with organisations seeking to safeguard them to the highest possible standards. In the meantime, Objectivity is treating the challenges associated with newly emerging technology with informed seriousness, and is actively considering the ethical pitfalls of AI. These ought to be closely monitored throughout any project, with appropriate safety measures and protocols – such as double-blind review – implemented for the data required to ‘de-bias’ any algorithm.
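To make the idea of ‘de-biasing’ concrete, here is a minimal, hypothetical sketch of one fairness check such a protocol might include – this is not Objectivity’s actual method, and the function names and the 0.2 threshold are illustrative assumptions. It measures the ‘demographic parity gap’: the difference in favourable-outcome rates between groups in a model’s decisions.

```python
# Hypothetical sketch of a pre-deployment fairness check (illustrative only;
# not any organisation's actual protocol). It compares a model's
# favourable-outcome rate across groups -- the demographic parity
# criterion -- and flags large gaps for human review.

from collections import defaultdict

def positive_rates(decisions):
    """decisions: list of (group, outcome) pairs, where outcome 1 = favourable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in favourable-outcome rate between any two groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Example: a screening model's decisions, tagged with a protected attribute.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # the threshold is a policy choice, not a technical constant
    print("Gap exceeds threshold: review training data before deployment.")
```

In practice, both the threshold and the choice of fairness metric are policy decisions, which is precisely why the review processes and accountability structures described above need to make them explicit.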

Gabriela Wiktorzak is a solicitor and in-house counsel at Objectivity
