The UK government will be hoping its AI advisory board is a bit more successful than Google’s as it names the full line-up.
Bringing together experts from industry, academia and data rights organisations, the council aims to provide a guiding light for the future of artificial intelligence. Tabitha Goldstaub, co-founder of CognitionX, will chair the council, which will feature the likes of Ocado CTO Paul Clarke, AI for Good founder Kriti Sharma and DeepMind co-founder Mustafa Suleyman.
The primary objective of the council will be to make the UK a leading name in the AI world.
Such is the promise of the technology for productivity and the creation of new services that technologists will be keen to drive innovation forward, though the dangers are also high.
AI not only presents the risk of abuse through prejudice and unconscious bias, but the unknown risks should be considered just as much of a danger. Such is the embryonic nature of AI that its full potential, power and influence are anyone’s guess for the moment. This is an exciting prospect, but one that should also be approached with caution.
For example, back in July 2017, a Facebook AI application managed to invent its own language to speak to other applications, meaning human overseers had no idea what was going on. This was a very simplistic and limited application, so there was no real danger, but it was a lesson to the industry: more clearly defined boundaries need to be set for more complex applications in the real world.
This council will aim to create a framework to take the UK into a leadership position in the AI world, but it will be critical that the members do not forget the importance of ethical and responsible development.
“Britain is already a leading authority in AI,” said Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright. “We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector and attracting the best global tech talent, but we must not be complacent.
“Through our AI Council we will continue this momentum by leveraging the knowledge of experts from a range of sectors to provide leadership on the best use and adoption of artificial intelligence across the economy.”
The full list of members:
First, I note that the “intelligence” in AI has little to do with the idea of “intelligent” but rather with that of “intelligence agency”, e.g. MI5. AI is a field concerned essentially with correlation analysis, a field of statistics that has been under steady growth since the arrival of computers. Secondly, I am surprised that the AI council does not appear to include a representative of the Royal Statistical Society. Professional statisticians are guided by the Code of Conduct of the RSS https://www.rss.org.uk/Images/PDF/join-us/RSS-Code-of-Conduct-2014.pdf. That code of conduct includes:
The Public Interest
1. Fellows should always be aware of their overriding responsibility to the public good; including public health, safety and environment.
a. A Fellow’s obligations to employers, clients and the profession can never override this; and Fellows should seek to avoid situations and not enter into undertakings which compromise this responsibility.
b. Fellows shall ensure that within their chosen fields they have appropriate knowledge and understanding of relevant legislation, regulations, codes and standards and that they comply with such requirements.
c. Fellows shall be mindful of the scarcity of resources, promote the optimal use of resources and only support studies that have pre-defined objectives and that are capable of producing useful results.
2. Fellows shall in their professional practice have regard to basic human rights and shall avoid any actions that adversely affect such rights.
a. Enquiries involving human subjects should acquire ethical approval as appropriate and, as far as practicable, be based on the freely given informed consent of subjects. The identities of subjects should be avoided in data presentations wherever possible, and be kept confidential unless disclosure is permitted in law or consent for disclosure is explicitly obtained.
It is abundantly clear that this is a code of practice that has not been followed in many important instances in recent applications of AI. Now the number of statisticians employed in AI may be small (?), but I suggest that their involvement might well improve the legal acceptability of some practices. Many instances of the selling of people’s identities, at least in Internet terms, would, I imagine, immediately fall foul of this code of conduct.