A look at generative AI and its ethical landscape
While global interest in generative Artificial Intelligence (AI) has surged of late, only sporadic progress has been made on the ethical guardrails needed to protect society.
July 17, 2023
The release of OpenAI’s ChatGPT late last year sent global interest into an upward spiral. With Silicon Valley companies rushing to launch their own large language models and AI tools, many are asking how much consideration is being given to the ethics of generative AI before products and solutions are unleashed on the general public, and how much consensus has been reached among governments, the high-tech industry, and other stakeholders.
Enthusiasm for generative AI is also growing across adjacent industries, with many IT leaders reportedly either experimenting with the technology or actively deploying it in their businesses. Interest is gaining momentum in the telecoms industry too, most notably around how AI can help operators manage operations and improve customer satisfaction amid growing network demand and decreasing profitability.
Yet, according to a Salesforce survey, more than seven in ten senior IT leaders who responded said the technology has the potential to introduce both data security risks and bias. The application of generative AI is also expected to bring upheaval to today’s workforce. Equally, interested parties from Silicon Valley itself, as well as political stakeholders, are beginning to voice their concerns about both the technical and ethical risks of AI, and more specifically generative AI. Some go as far as warning of “profound risks to society and humanity” and a “risk of extinction”.
Against this backdrop, we set out to present an overview of AI’s current ethical landscape, collecting some of the major debates and events recently discussed in the media, academia, and among politicians. Before delving into those, let’s first take a look at some related terminology to establish common ground.
What does generative AI mean and what is the difference between an AI language model and an AI chatbot?
The word ‘generative’ in the context of AI refers to the ability of algorithms to generate or produce complex data, as opposed to discriminative AI, which chooses between a set of fixed options to produce a single outcome. While generative AI has existed as a technology for some time, for instance in smart speakers generating audio responses, it is only more recently that it has made the leap to producing human-like linguistic responses and strikingly realistic visual creations. This has been the catalyst for a new phase of AI and for more mainstream familiarity with the term.
Generative AI consists of two components: AI language models and AI tools. Simply put, an AI language model, trained on large volumes of linguistic data, can manipulate language more convincingly than any computing model before it. That does not mean it understands language per se. Instead, these models, also known as large language models (LLMs), aim to capture the context of text input and provide the basis for AI tools. Popular examples of LLMs include Google’s BERT and OpenAI’s GPT-4.
Meanwhile, AI tools built on LLMs, such as AI chatbots, AI content generators, or AI virtual assistants, are specifically designed to generate human-like responses or content, for instance in a conversation. Depending on the tool, various use cases apply, such as written or spoken dialogue, image and art generation, video content, and so on.
Some of the more famous chatbot tools built on these language models include Bard, based on Google’s LaMDA model family, and ChatGPT, based on OpenAI’s GPT-3.5 model series. Examples of other generative AI tools include OpenAI’s DALL-E 2, which creates images from text prompts; Anthropic’s Claude, an AI assistant; and GitHub Copilot, a cloud-based generative AI tool developed by GitHub and OpenAI that produces code.
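To make the distinction concrete, here is a minimal sketch of how a chatbot tool is essentially a thin conversational layer over an LLM reached through an API. It is purely illustrative: it assumes the pre-1.0 openai Python client, an API key exported as OPENAI_API_KEY, and a placeholder prompt; any hosted or local LLM could sit behind the same structure.

# Illustrative sketch: an AI "tool" (here a one-turn chatbot) wrapping an LLM.
# Assumes the pre-1.0 openai Python client and an OPENAI_API_KEY variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def chatbot_reply(history):
    """Send the running conversation to the LLM and return its next turn."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model series behind ChatGPT
        messages=history,
    )
    return response["choices"][0]["message"]["content"]

# A single exchange: the tool manages the conversation, the LLM generates text.
history = [{"role": "system", "content": "You are a helpful assistant."}]
history.append({"role": "user", "content": "In one sentence, what is an LLM?"})
print(chatbot_reply(history))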
Who is raising concerns?
It is worth noting that while some of the industry concerns outlined here have received more mainstream media coverage since the launch of the ChatGPT chatbot in November of last year, not all of them are new; many have simply been renewed.
For instance, in late 2020 MIT Technology Review published an article claiming to have seen an unpublished paper co-authored by Timnit Gebru, an ex-Google AI ethics researcher, which warned of the capitalistic intentions behind LLMs, their reliance on inscrutable data, their wider societal impacts, and their effects on the environment.
Meanwhile, kickstarting the launch of GPT-4 in early March 2023 in parallel with laying off an entire responsible AI team was perhaps not the most reassuring message Microsoft, a commercial partner to OpenAI, could have sent to the industry.
Industry voices raising fears around the societal impacts of AI include Geoffrey Hinton, an ex-Google VP and one of the pioneers of deep learning. In an interview, also with MIT Technology Review, Hinton raised his concerns over the lack of guardrails and the consequences of the rivalry between Google and Microsoft. He spoke out shortly after resigning from Google in May 2023.
In April 2023 the Italian data protection watchdog, the Garante, banned ChatGPT over privacy concerns (a ban later reversed once those concerns were addressed) and decided to review other AI systems as well. Soon after, fears spread into the private sector as Samsung reportedly banned its staff from using generative AI tools over security concerns.
Amid these events, political momentum also began to gather pace in the US in May 2023, as senator and lawyer Richard Blumenthal and even OpenAI’s CEO Sam Altman reportedly argued for governing rules, technology accountability, and users’ permission for the use of their data. Altman further called on the Senate to be able to revoke licences from developers whose AI tools exceed a certain “threshold” of “capabilities”, giving the example of LLMs that can self-replicate or generate harmful content.
Rather more proactively, and with a view to protecting fundamental rights, the European Union (EU) held its first debate on a proposed AI rulebook with its commissioners a couple of years earlier, in April 2021, with initial consultations on ethical guidelines for trustworthy AI dating back to 2018. In the 2021 debate, members of the European Parliament discussed a number of aspects, including but not limited to “the potential risk of mass surveillance, discrimination or wrongful incrimination of citizens” through the use of the AI systems of the time. Further updates on the progress of the EU’s work are discussed below.
What are the risks and ethical considerations of generative AI?
Risks to higher education
Back in January 2023 there were a number of debates around the application of ChatGPT in academia and plagiarism in the education sector. A group of lecturers at the University of Plymouth and Plymouth Marjon University published a paper on academic integrity in the era of ChatGPT. In fact, they had given GPT-3 a set of prompts to write a short piece on the subject, which the lecturers then minimally edited for structure, stripped of fake references, and published.
In their parting thoughts the authors acknowledged the role of the tool in producing the paper and, having considered whether it deserved co-authorship, highlighted a very interesting and important point: because the tool lacked responsibility and could not be held accountable for the content it created, it was not named as a co-author.
The risks in this sector can therefore be viewed as two-fold: firstly, opportunities to plagiarise content are now far more widely available; and secondly, even if content created by AI were openly and honestly declared, it would still lack the answerability and accountability to which we, as human creators, are held.
Environmental impact
It is widely agreed that LLMs, which rely on vast amounts of linguistic data, consume huge amounts of computer processing power, and this is argued to have a heavy impact on the environment and on climate change. A 2019 study reportedly found that training one language model just once via a method called neural architecture search (NAS) can produce more than 600,000 pounds of carbon dioxide. This, we are told, is roughly the lifetime carbon dioxide output of five average American cars. When a version of Google’s BERT was trained, its carbon dioxide output equalled that of a round-trip flight between the east and west coasts of the US.
Also worth noting is that language models are typically retrained multiple times over the course of their R&D process, meaning the actual carbon dioxide output over a model’s full training lifecycle is likely even higher. These figures will rise further still as consumers begin to use the technology at scale.
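As a rough sanity check on the car comparison above, the figures commonly attributed to that 2019 study can be put side by side. The numbers below are approximations drawn from the study’s widely reported findings rather than from this article, so treat them as indicative only.

# Back-of-the-envelope check of the "five cars" comparison (approximate figures).
nas_training_lbs_co2 = 626_000   # one NAS training run, pounds of CO2 (as reported)
car_lifetime_lbs_co2 = 126_000   # average US car lifetime incl. fuel, pounds of CO2

print(round(nas_training_lbs_co2 / car_lifetime_lbs_co2, 1))  # ~5.0 car lifetimes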
Further, the investments made in building and maintaining AI models are also noteworthy. Reportedly, these investments advance already wealthy organisations, while the adverse effects of climate change predominantly hit the marginalised communities of our society, leading to further societal division and inequitable access to technology and resources.
Other, less frequently cited areas of concern include the lack of ubiquitous digital literacy and the uneven accessibility of these tools.
Social bias and disinformation
The aforementioned 2020 article on the findings of Gebru et al. highlights the risks of racial and gender bias in LLMs and their AI tools, as well as the exclusion of communities with less of an online presence. Further, a TechCrunch article from 2020, concerning bias in the data used to feed LLMs, gives an example from the Black Lives Matter movement, highlighting that “[a]n AI-generated summary of a neutral news feed about Black Lives Matter would be very likely to take one side in the debate. It’s pretty likely to condemn the movement, given the negatively charged language that the model will associate with racial terms like ‘Black’.”
It is also argued that there is a high risk of bias in training data, as western values are incorporated while values from the rest of the world are ignored.
With these biases in mind, we reviewed the limitations OpenAI has described for its ChatGPT tool on its website (accessed July 4th, 2023). Below is an extract regarding harmful content and bias that the company acknowledged in an article published in November 2022. At the time of writing, we had not found an updated version of these limitations for GPT-4.
“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.”
Similarly, Google states in the FAQ section of its bard.google.com page that “[a]ccelerating people’s ideas with generative AI is truly exciting, but it’s still early days and Bard is an experiment. While Bard has built-in safety controls and clear feedback mechanisms in line with our AI principles, please be aware that it may display inaccurate information or offensive statements.”
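As an aside for developers, the Moderation API referenced in OpenAI’s statement above can be called directly to screen text before it reaches, or after it leaves, an LLM. The sketch below is a minimal illustration, assuming the pre-1.0 openai Python client and an API key in OPENAI_API_KEY; the decision to simply block flagged prompts is our own illustrative choice, not OpenAI’s recommended policy.

# Illustrative pre-screening of a user prompt with OpenAI's Moderation endpoint.
# Assumes the pre-1.0 openai Python client and an OPENAI_API_KEY variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def is_flagged(text):
    """Return True if the moderation model flags the text as unsafe."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

prompt = "Example user input to be screened."
if is_flagged(prompt):
    print("Blocked: the prompt was flagged by the moderation model.")
else:
    print("Prompt passed moderation and can be forwarded to the LLM.")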
So, a couple of questions come to mind. On a more philosophical level, will there ever be a time when AI is perfect while the humans who built it still live in such an imperfect society? After all, aren’t AI tools only as good as the data fed to them (at least for now)? And on a more practical level, who will bear responsibility for any harm caused to society in the meantime?
This is a natural segue into looking at consensus across industry and politics, as well as any regulatory progress to date.
How do we regulate and reach consensus on AI ethics?
In 2020 a study run by the Pew Research Center found that a majority of ethical AI experts acknowledged the difficulty of reaching consensus on AI ethics. Fewer than a third of the respondents said ethical principles focused primarily on the public good would be employed in most AI systems by 2030. So far, efforts to establish guidelines and standards have been mostly sporadic, though momentum seems to be gathering as the EU and the US make announcements.
Political focus remains on the protection of privacy and elimination of misinformation
With governing bodies becoming more alert to the lack of guardrails, the European Parliament recently voted to adopt its negotiating position on an AI rulebook first drafted two years ago. That position, which forms the basis for trilogue negotiations with the member states on safe and transparent AI, calls for banning the use of AI for biometric surveillance, emotion recognition, and predictive policing.
Across the Atlantic, the US appears to be catching up in defining and establishing what guardrails are needed for AI. It is looking at the protection of privacy and at addressing bias and disinformation, with the government announcing its commitment to safeguarding Americans’ rights and safety through a new National Institute of Standards and Technology (NIST) public working group.
In terms of other regions and markets, it is worth noting that if the rulebook is approved by the 27 member states, we would likely see the EU’s rules of engagement on safe and transparent AI adopted outside the union’s territory too. This would not be the first time the EU has stepped up to act as a global authority on regulation: soon after the General Data Protection Regulation (GDPR) came into force across the member states in May 2018, many nations outside the EU adopted GDPR-like data privacy laws.
Granted, this will most likely not happen unanimously across the globe. Some countries have already been developing social credit systems, which are easily underpinned by AI technologies. Such systems would stand in direct opposition to some of the key principles outlined by the EU.
Enabling free access to standards for engineers and technologists
In addition to the political momentum, standards bodies such as the IEEE Standards Association (IEEE SA) have established a series of standards documents as part of a global program for AI ethics and governance, focusing on topics such as AI ethics in the workforce, system design, and children’s rights. In January 2023, the IEEE SA announced free access to the program to encourage engineers and technologists to adopt selected AI ethics and governance standards.
The program aims to enable organisations to incorporate ethical values into their work. For instance, the standard model process for addressing ethical concerns during system design includes values such as “transparency, sustainability, privacy, fairness, and accountability, as well as values typically considered in system engineering, such as efficiency and effectiveness.”
Parting thoughts
It appears the political view of ethical AI is less concerned with an apocalyptic doomsday in which we imagine life under the Matrix or Skynet. While the dangers depicted in those wonderfully visionary movies are unlikely to present themselves anytime soon, there are plenty of more immediate issues to be concerned about.
These include risks such as the violation of our privacy, the use of biased data and the spread of harmful information (inadvertently or not), and environmental impacts such as the energy intensity of training LLMs. None of these risks and threats is entirely unfamiliar to our society. Instead, the novel challenges lie in the massive reach and scale of AI technologies, the exponential speed at which these threats can grow when underpinned by AI, and the authority that AI (and those in charge of the tools and LLMs) can hold over our society.
Despite these concerns, many organisations seem to view the benefits of generative AI as outweighing the associated risks. One would hope, however, that the appeal of using modern technology does not stop us from using it ethically, and that stakeholders can agree on that much.
Many of those advocating for an ethical adoption of AI speak of a human-centric approach: reaching consensus on guardrails that build basic ethical considerations into business and individual use alike, securing our human rights and privacy, and addressing the risks of disinformation.
To conclude, one thing is clear: if there were ever a hot topic about which we were compelled to say “watch this space”, ethical AI is it. If not an existential necessity, ethics in AI is certainly a must-have to secure our rights and freedoms as we know them today. And perhaps by taking such a step we can also better the imperfect and biased society we live in.