AI open letter has already achieved its likely real aim

A week after the publication of a letter calling for a pause on the development of artificial intelligence, there has been an explosion of public discussion.

The letter was always unlikely to achieve its stated aim. From the start, its real purpose seemed more likely to be sparking a global public discussion about the pros and cons of AI development, and there is mounting evidence that it has succeeded in that respect. Earlier this week, lobbying groups on both sides of the Atlantic called for greater oversight of AI technologies, with a focus on safety and ethics.

Now, it seems, governments and regulators have also been moved to act. US President Biden yesterday convened a meeting of his Council of Advisors on Science and Technology. “AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security,” he said at the preceding media briefing.

“And so, tech companies have a responsibility, in my view, to make sure their products are safe before making them public. Social media has already shown us the harm that powerful technologies can do without the right safeguards in place.”

That reference to social media offers an indication of the US government’s thinking on the matter of AI, and perhaps the tech industry in general. Regulation of social media has fallen behind, especially when it comes to the protection of children, but many of the proposed remedial measures seem more focused on giving the state greater powers of online censorship in general. When asked if he thinks AI is dangerous, Biden said “It could be.”

Meanwhile, the Office of the Privacy Commissioner of Canada has launched an investigation into OpenAI, the company behind ChatGPT. “AI technology and its effects on privacy is a priority for my Office,” said Privacy Commissioner Philippe Dufresne. “We need to keep up with – and stay ahead of – fast-moving technological advances, and that is one of my key focus areas as Commissioner.”

A further contribution to the public conversation around AI has been provided by Canadian AI ethics researcher Sasha Luccioni. Her op-ed published by Wired argues that we already know how to make AI safer – through greater transparency. Not only would this allow regulators to better assess the risks of AI developments, it would also enable a much more informed public discussion.

Luccioni notes that the likes of OpenAI are far from transparent in their methods, offering only gated access to their models. Unsurprisingly, she also believes the situation could be improved by the greater involvement of AI ethics researchers.

It’s no more possible to pause the general development of technology around the world than it is to stop the flow of a river with your bare hands. The authors of the open letter must have been aware of that and so, presumably, had other reasons for writing it. Countless important discussions seem to have been catalysed by the letter and, while we no more trust governments to determine the direction of AI development than companies, they should at least try to ensure it’s open in more than name.
