Protecting our ‘voice’ in an AI-enabled world
Telecoms.com periodically invites expert third parties to share their views on the industry’s most pressing issues. In this piece Kush Parikh, President of Hiya, examines the state of spam and fraud calls, and how AI can help operators identify and protect their customers from being scammed.
June 10, 2024
We live in a world where having a ‘voice’ should be a basic human right, but that same voice is under threat. The trust we have in voice calls as a method of communication is being eroded by the rise of AI-fuelled scams from today’s tech-savvy fraudsters, a growing problem for operators looking to keep their customers safe on their networks.
Voice remains important in the eyes of consumers and businesses, not only for providing a more human touch, but because of the efficiency and reliability it offers. In fact, it’s deemed especially important by consumers when engaging with essential services like healthcare providers or banks and credit card companies, where sensitive information is being shared. Among businesses, 66% say voice calls are “essential” or “very important” to achieving sales and wider business goals.
But that reliability is increasingly being questioned. In 2023, 16% of UK consumers reported that they fell victim to mobile phone scams, costing them on average £634. In fact, analysis of all unknown calls in 2023 revealed that 28% were spam or fraud, up from 24% in 2022. It’s a global issue: in the fourth quarter of 2023, there were 7.3 billion unwanted calls globally – that’s more than 81 million spam calls every day.
As operators look to differentiate in the market, investing in ways to keep their customers safe is high on the agenda. That includes investing in AI-enabled technologies that help stop phone spam and scam campaigns from succeeding and protect the reputation of business customers. This is an essential tool if we’re going to level the playing field with the fraudsters already harnessing AI technology to create more complex and convincing scams.
AI-fuelled scams and fraud
Innovation in voice services, fuelled by the capabilities of AI, has already delivered some interesting advancements.
One example is Apple's iOS 17, which introduced a feature that can synthesise a person's own voice for interpersonal communication. This development underscores the growing ubiquity of voice cloning technology, with the alarming reality that AI can now replicate someone's voice from as little as three seconds of audio, such as a voicemail greeting.
The flip side of any innovation is the risk that these advances will be harnessed and adapted for nefarious ends by fraudsters and other bad actors. While people are becoming increasingly savvy in avoiding calls from unknown numbers, staying safe today goes beyond applying basic common sense. As AI tools become more powerful and easily accessible, fraudsters are using them to help create and distribute their scams.
One of the most recent advancements in scammer behaviour is the use of AI-generated voice clone scams, also known as deepfakes, where AI is used to emulate voices you would recognise (family members, celebrities or political figures). Voice phishing, or ‘vishing’ for short, is also becoming prevalent: fraudulent phone calls designed to trick victims into providing sensitive information, such as login credentials, credit card numbers or bank details.
These tactics can be used in social engineering attacks, where scammers try to trick individuals into revealing personal information or handing over money.
MGM Resorts fell victim to a notable inbound call attack in September 2023 after hackers found an employee’s information on LinkedIn. The scammers impersonated the employee in a call to MGM Resorts’ IT help desk to obtain credentials, which they then used to access and infect its IT systems, according to Vox.
Meanwhile, in January this year it was revealed that gen-AI robocalls faking the voice of President Joe Biden were being received in the US, with the aim of encouraging New Hampshire voters to skip the upcoming primary election. These types of fraudulent calls will only increase as the general election looms and are predicted to be far more widespread as the year continues.
The US has taken steps to try and mitigate the growing threat of robocalls. In February this year, the Federal Communications Commission (FCC) ruled that calls featuring lifelike AI-generated human voices are now officially illegal, to reduce the number of tools available to scammers and help hold perpetrators accountable by law for their actions.
Meanwhile, in the UK, the Government is taking a ‘pro-innovation’ approach to legislating AI, but has stated that “letting fraudsters lead the way in the use of Artificial Intelligence (AI) technology is not an option”, calling for a “collective role and responsibility in counter fraud to keep pace with developments” and their potential impact.
AI-powered solutions
So, what is the solution? The reality is we need to fight fire with fire, because AI-fuelled scams require AI-fuelled solutions. But it goes further than that. Having smart solutions is one thing; rolling them out effectively enough to make an impact on people’s lives is another. In the UK especially, responsibility for that is a collective one, which is why businesses, consumers and operators are taking measures into their own hands and implementing tools to help.
Current estimates suggest that 92% of consumers believe unidentified calls are fraudulent, and as a result nearly half go unanswered. This is more than an inconvenience; it's a significant barrier to effective and secure interactions between businesses and their customers, given that 77% of consumers report they are more likely to answer a call if they know who is calling.
Many operators in the UK have started to take greater responsibility for protecting their customers from fraudsters, making it a point of differentiation in the market. For example, business customers can register their numbers with operators and with the applications that identify spam calls for consumers, helping to ensure their calls are answered.
Branded calling solutions can add further layers of identity, such as a logo and call reason, as well as provide access to analytics to help businesses or operators better understand the effectiveness of their outbound calls.
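To make this concrete, the sketch below shows the kind of metadata a branded calling service might attach to an outbound call. It is a minimal illustration only: the field names and the build_display_payload helper are hypothetical and do not represent Hiya's product or any particular vendor's API.

```python
from dataclasses import dataclass

# Hypothetical branded-call metadata; the fields below are illustrative
# only and do not reflect any specific operator or vendor implementation.
@dataclass
class BrandedCallIdentity:
    business_name: str    # display name shown on the recipient's device
    logo_url: str         # link to the brand logo rendered with the call
    call_reason: str      # short, human-readable reason for the call
    verified_number: str  # the registered outbound number, in E.164 form

def build_display_payload(identity: BrandedCallIdentity) -> dict:
    """Assemble the on-screen payload a handset might render for a branded call."""
    return {
        "caller_id": identity.verified_number,
        "display": {
            "name": identity.business_name,
            "logo": identity.logo_url,
            "reason": identity.call_reason,
        },
    }

# Example: a bank calling about a suspected fraudulent transaction.
payload = build_display_payload(BrandedCallIdentity(
    business_name="Example Bank",
    logo_url="https://example.com/logo.png",
    call_reason="Suspected fraud on your card",
    verified_number="+442071234567",
))
print(payload)
```

The point of the extra fields is simply that the recipient sees a verified name, logo and reason rather than an anonymous number, which is what drives the higher answer rates described above.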
Tools are already available that use AI to analyse every aspect of a phone call, from the identity of the caller through to the consumer receiving it. The right protection is then applied to each call, whether that’s blocking it outright or flagging it as a potential scam or nuisance call. These solutions adapt and respond to new threats as they emerge, which is essential in an ever-changing threat landscape.
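As a rough illustration of the decision flow such tools apply, the sketch below scores an incoming call from a few reputation signals and maps that score to an action (allow, flag or block) before the phone rings. The signals, weights and thresholds are invented for this example; they are not Hiya's model or any production system.

```python
from dataclasses import dataclass

# Hypothetical reputation signals for an incoming call; the features,
# weights and thresholds below are invented purely for illustration.
@dataclass
class CallSignals:
    spam_reports_last_30d: int   # user-reported spam complaints
    calls_placed_last_hour: int  # high-volume bursts suggest robocalling
    number_is_registered: bool   # e.g. a business enrolled for branded calling
    number_age_days: int         # very new numbers are riskier

def risk_score(signals: CallSignals) -> float:
    """Combine signals into a 0-1 risk score using a simple weighted heuristic."""
    score = 0.0
    score += min(signals.spam_reports_last_30d / 50, 1.0) * 0.5
    score += min(signals.calls_placed_last_hour / 500, 1.0) * 0.3
    score += 0.2 if signals.number_age_days < 7 else 0.0
    if signals.number_is_registered:
        score *= 0.3  # registered, verified callers get a large trust discount
    return min(score, 1.0)

def screening_decision(signals: CallSignals) -> str:
    """Map the risk score to an action applied before the call is delivered."""
    score = risk_score(signals)
    if score >= 0.8:
        return "block"   # very likely fraud: never reaches the user
    if score >= 0.4:
        return "flag"    # delivered, but labelled as a potential scam or nuisance
    return "allow"       # passed through with normal caller ID

# Example: an unregistered, week-old number with many recent spam reports.
print(screening_decision(CallSignals(60, 600, False, 3)))  # -> "block"
```

In practice the scoring would be a learned model retrained as new scam patterns appear, which is what lets these systems keep pace with an ever-changing threat landscape rather than relying on fixed rules like the ones above.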
Unlocking the future of voice services
By instilling confidence in customers' voice services once again, there is a real opportunity for operators to add value to everyday voice activity. For example, voice recognition technology could be used to understand and respond to individual users' unique commands or requests, or to activate a voice assistant that answers calls, takes messages and transcribes them so the user can act on the caller's request.
Empowering users with the knowledge of when, and to whom, they should respond means operators can set the stage for transformative innovations in the realm of communications. With fraudsters already harnessing the potential of AI, the battle to protect voice as a medium of communication is well and truly here. Harnessing AI to counteract this threat is essential if we want to maintain trust in one of our most human forms of communication.
Kush drives the Go-to-Market aspect of the Hiya business including distribution, sales, marketing and business operations. He has over 15 years of experience in the mobile industry including handset OEMs, location-based services and mobile payments. Among Kush’s many startup roles, he was recently the CEO of PayByPhone, Inc. one of the largest mobile payment companies in the world which he sold to Volkswagen Financial Services. Kush holds a B.S. in Electrical Engineering from Penn State and an MBA from Duke University.