You don’t need to understand AI to trust it, says German politician
Germany’s minister responsible for artificial intelligence has spoken about the European vision for AI, and in particular how to win the trust of non-expert users.
December 12, 2019
Prof. Dr. Ina Schieferdecker, a junior minister in Germany’s Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), who has artificial intelligence in her portfolio, recently attended an AI Camp in Berlin (or KI-Camp in German, for “künstliche Intelligenz”). She was interviewed there by DW (Deutsche Welle, Germany’s answer to the BBC World Service) on how the German government and the European Union can help alleviate concerns about AI among ordinary users of the internet and information technologies.
Addressing the perception of AI as a “black box” and the demand for algorithms to be made transparent, Schieferdecker said she saw things differently. “I don’t believe that everyone has to understand AI. Not everyone can understand it,” she said. “Technology should be trustworthy. But we don’t all understand how planes work or how giant tankers float on water. So, we have learn (sic) to trust digital technology, too.”
Admittedly, not all Europeans share this view of AI and non-expert users. Finland, the current holder of the rotating EU presidency, believes that as many people as possible should understand what AI is about, not only to alleviate concerns but also to unleash its power more broadly. It has therefore decided to offer AI training to 1% of its population.
Schieferdecker also called for a collaborative approach to developing AI, involving the science, technology, education, and business sectors, and urged AI developers to take users’ safety concerns and other basic principles into account from the beginning. This is very much in line with the EU’s “Ethics guidelines for trustworthy AI” published in April this year, whose first guideline states: “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.” As we subsequently reported, however, those guidelines are too vague and lack tangible measures of success.
Schieferdecker was more confident. She believes that when Germany, which has presumably heavily shaped the guidelines, assumes the EU presidency in the second half of 2020, it “will try to pool Europe’s strengths in an effort to transform the rules on paper into something real and useable for the people.”
The interview also touched on how user data, such as shopping or browsing records, are used by AI in opaque ways, and the privacy concerns this may raise. Schieferdecker believes GDPR has “made a difference”, while admitting there are “issues here and there, but it’s being further developed.” She also said the government is working towards data sovereignty in some form and wants to “offer people alternatives to your Amazons, Googles, Instagrams”, without disclosing further details.
The camp took place on 5 December in Berlin as part of the Science Year 2019 programme (Wissenschaftsjahr 2019) and was co-organised by the BMBF and the German Informatics Society (Gesellschaft für Informatik, GI), a professional body for computer scientists. The interview was subjected to a vetting process by the BMBF before it could be published. As DW put it, “the text has been redacted and altered by the BMBF in addition to DW’s normal editorial guidelines. As such, the text does not entirely reflect the audio of the interview as recorded”.