We should be regulating AI, but no-one really knows how
Striking the perfect balance between innovation and regulation is a conundrum which has existed throughout the life of the technology and telco industries.
November 13, 2018
From a technologist's perspective, the question is a simple one to answer: don't pin us back with red tape, and allow us to explore new ideas with complete and utter freedom. However, with technology becoming increasingly invasive, most sensible people would suggest there is a need to build a rulebook.
One of the main issues with regulation is getting its strength and depth right. Striking the right balance between freedom and guidelines is an incredibly difficult task. Where the lines should be drawn will vary depending on who you speak to, as will how flexible those lines should be. And then of course you have the pace of change. Technology is constantly years ahead of regulation, so is catching up even a reasonable objective?
And then you have artificial intelligence: a technology which holds great promise and is advancing faster than anything before it, but which risks stripping people of their livelihoods, invading individuals' privacy and compounding inherent human bias through the code it is written in. This is an incredibly complicated field, and when asked whether AI should be regulated, lawyers from Webber Wentzel gave a resounding yes.
Speaking to us after his presentation at AfricaCom, Webber Wentzel CIO Warren Hero argued that the issue with regulating AI is simply down to a lack of understanding and a non-existent conversation between the stakeholders in the industry. To build a reasonable legislative and regulatory foundation for AI, technologists, governments and consumer-interest representatives should all be sat around a table and contributing, but they simply aren't.
Part of this is down to the ongoing conflict between industry and the red-tapers, but another factor to consider is understanding of the technology itself.
“The ability to make a decision is based on the understanding of a concept,” said Hero. “Not enough people understand AI.”
Hero is 100% correct. Such a minuscule proportion of the population understands AI to the degree needed to make decisions on the future of the technology that doing so is an almost impossible task. For AI to succeed, there will of course need to be rules to ensure responsible development, though there will also have to be freedoms granted. As we mentioned before, finding this balance is not simple and will require a deep understanding of the technology itself.
The issue many governments face, according to Hero, is that they are attempting to legislate and regulate in the same way governments have for generations. This might not sound terrible, but the digital economy is unlike anything which has come before, and AI presents a completely different dynamic.
Looking at the digital economy first and foremost, this is an area where technologies can continue to scale almost without limit. The virtual world offers no barriers; just take a look at the cloud computing segment. These companies are already incredibly influential, but how many of the world's processes have actually been moved to the cloud? 10%? 5%? Less? The room for growth is exceptional, and the segment should be treated differently to the business segments of the past.
Now onto the AI arena specifically. While there are countless companies around the world which consider themselves AI experts, in reality only a handful dominate the space and have the scale to enforce genuine change. The likes of Google, Amazon and Microsoft have created such a strong position at the top of the ecosystem that it will be almost impossible to break their vice-like grip. This concentration, and inevitable continuation, of power is unlike anything which has been seen before.
For both of these reasons, AI (and digital in general) has to be regulated in a different manner. The trend of building new regulations and legislation on top of existing foundations will not create an environment which is healthy for the industry or the consumer.
In short, yes, AI needs to be regulated, but at the moment, there isn't the breadth and depth of brainpower available to do it.