Amazon has managed to bottle fear, but recognition debate remains
While facial recognition technologies are becoming increasingly controversial, it is always worth paying homage to innovation in this field and the real-world applications, when deployed responsibly.
August 14, 2019
We suspect people aren’t necessarily objecting to the concept of facial recognition technologies, but more to the way they are applied and the lack of public consultation. You only have to look at some of the world’s less appetizing governments to see the negative implications for privacy and human rights, but there are of course significant benefits should the technology be applied in an ethically sound and transparent manner.
Over in the AWS labs, engineers have done something quite remarkable; they have managed to bottle the concept of fear and teach their AI programmes to recognise it.
“Amazon Rekognition provides a comprehensive set of face detection, analysis, and recognition features for image and video analysis,” the company stated on its blog. “Today, we are launching accuracy and functionality improvements to our face analysis features.
“With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: Happy, Sad, Angry, Surprised, Disgusted, Calm and Confused) and added a new emotion: Fear.”
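For developers wondering how these features surface in practice, the analysis is exposed through Rekognition’s DetectFaces API. The sketch below, using the boto3 SDK, shows how an image stored in S3 might be scanned for the newly added fear emotion; the bucket name, file name, region and confidence threshold are purely illustrative assumptions, not anything Amazon has published.

```python
import boto3

# Minimal sketch: call Amazon Rekognition's DetectFaces API via boto3.
# Bucket/object names, region and threshold below are placeholders.
rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "crowd-photo.jpg"}},
    Attributes=["ALL"],  # request the full attribute set, including emotions
)

for face in response["FaceDetails"]:
    # Each detected face carries a list of emotions with confidence scores,
    # which with this release includes the new FEAR type.
    for emotion in face["Emotions"]:
        if emotion["Type"] == "FEAR" and emotion["Confidence"] > 90:
            print(f"High-confidence fear detected: {emotion['Confidence']:.1f}%")
```

In practice, a surveillance-style deployment would run this against video frames rather than single stills, but the shape of the response is the same: a list of faces, each with per-emotion confidence scores.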
When applied correctly, these technologies have incredible power to help society. You only have to think of some of the atrocities which have plagued major cities, as well as the on-going problems. Human eyes can only see so much, with police and security forces often relying on reports from the general public. With cameras able to recognise emotions such as fear, crimes could be identified while they are taking place, allowing speedier reactions from the relevant parties.
However, there are of course significant risks with the application of this technology. We have seen in China that such programmes are being used to track certain individuals and ethnic groups, while certain forces and agencies in the US are constantly rumoured to be considering the implementation of AI for facial recognition, profiling and tracking of individuals. Some of these projects are incredibly worrying, and a violation of privacy rights granted to the general public.
This is where governments are betraying the promise they have made to the general public. Rules and regulations have not been written for such technologies, therefore the agencies and forces involved are acting in a monstrously large grey area. There of course need to be rules in place to govern surveillance practices, but a public conversation should be considered imperative.
Any time the right to privacy is being compromised, irrespective of whether there are noble goals in mind, the public should be consulted. Voters should choose whether they are happy to sacrifice certain privacy rights and freedoms in the pursuit of safety. This is what transparency means, and this is exactly what has been disregarded to date.