The Ethical AI Imperative

Business Insights
02/08/2023

Artificial Intelligence (AI) is changing lives. It is transforming diagnosis and treatment throughout healthcare, improving patient outcomes. It is accelerating drug discovery, has the potential to drastically improve road safety and, through robotics, is unleashing new levels of manufacturing productivity and quality. But the speed with which individuals have adopted emerging technologies such as ChatGPT is pushing the ethical and political implications of AI to the very top of the agenda.


For all of the benefits that AI can undoubtedly offer, if algorithms do not adhere to ethical guidelines, is it safe to rely on and use their outputs? If the results are not ethical, or if a business has no way to ascertain whether they are, where is the trust? Where is the value? And how big is the risk?


Ethical AI is ‘artificial intelligence that adheres to well-defined ethical guidelines regarding fundamental values, including individual rights, privacy, non-discrimination, and non-manipulation.' With organisations poised on the cusp of an enormous step change, Peter Ruffley, CEO at Zizo, explores the ethical issues affecting the corporate adoption of AI, the importance of trust and the need for robust data sets that support rigorous bias checking.


Pandora's Box

Calls from technology leaders for the industry to hold fire on the development of AI are too late. Pandora's Box is wide open and, with the arrival of ChatGPT, anyone and everyone is now playing with AI, and individual employee adoption is outstripping businesses' ability to respond. Today, managers have no idea whether employees are using AI, and no way to tell whether work has been carried out by an individual or by technology. And with employees now claiming to use these tools to hold down multiple full-time jobs, because work such as content creation and coding can be completed in half the time, companies need to get a handle on AI policies fast.


Setting aside for now the ethical issues raised by individuals potentially defrauding their employer by failing to dedicate their time to their full-time role, the current output of ChatGPT may not pose a huge risk. Chatbot-created emails and marketing copy should still be subject to the same levels of rigour and approval as manually produced content.


But this is the tip of a very fast-expanding iceberg. These tools are developing at a phenomenal pace, creating new, unconsidered risks every day. It is possible to get a chatbot to write Excel rules, for example, but with no way to demonstrate which rules have been applied or what data has been changed, can that data be trusted? With employees tending to hide their use of AI from employers, corporations are completely blind to this fast-evolving business risk. And this is just the start. What happens when an engineer asks ChatGPT to compile a list of safety tasks? Or a lawyer uses the tool to check case law before providing a client opinion? The potential for disaster is unlimited.


Without the ability to ‘show your workings', companies face a legal and corporate social responsibility (CSR) nightmare. What happens if the algorithms are shown to operate counter to the organisation's diversity, equality and inclusivity (DEI) strategy, embedding bias and discrimination in decision-making as a result?


Rather than calling for an unachievable slow-down in AI development, it is now imperative that data experts come together to mitigate the risks and enable effective, trusted use of these technologies. It is incumbent upon data experts to develop technology that supports the safe and ethical operational use of AI. This can only be achieved if both the data being used and the output of AI and ML activity are supported by appropriate data governance and data quality procedures, including the use of accurate, accessible data sets to check AI output for bias.


In practice this requires the development of trustable components throughout the entire AI production pipeline, providing the transparency a business needs to understand how the AI reached its conclusions, which sources were used and why. Such ‘AI checking' technology must also be inherently usable: a simple data governance and risk-monitoring framework that can both raise alerts when bias, discrimination or questionable source data is exposed, and enable the AI's entire decision process to be reviewed if required.
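
To make this more concrete, the minimal sketch below is a hypothetical illustration only, not a description of Zizo's technology or any specific product. It assumes a Python setting in which each AI decision is logged with its sources (so the 'workings' can be reviewed later) and outcome rates are compared across groups in a reference data set, raising an alert when they diverge; the names GovernanceLog, DecisionRecord and parity_tolerance are illustrative assumptions.

```python
# Hypothetical sketch of an 'AI checking' layer: record provenance for each AI
# decision and flag large gaps in positive-outcome rates between groups.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One auditable AI decision: who it concerned, the sources used, the outcome."""
    subject_id: str
    group: str        # characteristic used only for the bias check
    outcome: bool     # the decision the model produced
    sources: list = field(default_factory=list)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


class GovernanceLog:
    """Collects decision records so the AI's 'workings' can be reviewed later."""

    def __init__(self, parity_tolerance: float = 0.1):
        self.records = []
        self.parity_tolerance = parity_tolerance

    def record(self, rec: DecisionRecord) -> None:
        self.records.append(rec)

    def positive_rate(self, group: str) -> float:
        in_group = [r for r in self.records if r.group == group]
        return sum(r.outcome for r in in_group) / len(in_group) if in_group else 0.0

    def bias_alerts(self) -> list:
        """Flag any pair of groups whose positive-outcome rates diverge too far."""
        groups = sorted({r.group for r in self.records})
        alerts = []
        for i, a in enumerate(groups):
            for b in groups[i + 1:]:
                gap = abs(self.positive_rate(a) - self.positive_rate(b))
                if gap > self.parity_tolerance:
                    alerts.append(f"Possible bias: {gap:.0%} outcome-rate gap between '{a}' and '{b}'")
        return alerts


if __name__ == "__main__":
    log = GovernanceLog(parity_tolerance=0.1)
    # Illustrative records only; in practice these would come from the AI pipeline.
    log.record(DecisionRecord("c1", "group_a", True, sources=["crm_export_2023.csv"]))
    log.record(DecisionRecord("c2", "group_a", True, sources=["crm_export_2023.csv"]))
    log.record(DecisionRecord("c3", "group_b", False, sources=["web_scrape_unverified"]))
    log.record(DecisionRecord("c4", "group_b", True, sources=["crm_export_2023.csv"]))
    for alert in log.bias_alerts():
        print(alert)
```

Even a simple check of this kind gives a reviewer two of the things the framework above calls for: an alert when outcomes look skewed, and a record of which sources fed each decision.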


Furthermore, there is a global need for data collaboration and data sharing, both within and between organisations, to expand the data available and add context and accuracy to the morass of Internet-only information. This collaboration will be a vital part of countering AI-generated bias and discrimination and, together with AI ‘explainability', will create a trusted view of the world in which AI can deliver the tangible business value that organisations currently seek.


Conclusion

These changes must, of course, take place while AI continues its extraordinary pace of innovation. So, while collaboration and technology that delivers AI trust are on the agenda, the next few years will not be without risk. Large-scale corporate failures caused by mismanaged AI usage, at both the individual-employee and corporate level, are almost inevitable.


As such, it is now imperative that organisations accelerate the creation of robust strategies to safely manage AI adoption and usage, with a strong focus on the implications for CSR and corporate risk.