Telefónica CEO clumsily wades into AI panic


José María Álvarez-Pallete has gone off on one about the dangers of AI, despite the telco industry’s prominent role in its promotion.

In a blog post on Monday, the Telefónica chairman and CEO used Descartes’ famous saying “I think, therefore I am” as a clarion call for the social sciences to weigh in on AI’s direction of travel.

“Technology is already here but we must ensure it is not left on its own. The time has come for sociology, philosophy, anthropology, law,” he said.

Álvarez-Pallete joins the growing chorus of voices that have warned lately that AI, in particular generative AI, could wipe out humanity should it break free of its shackles and get ideas above its station.

“A runaway or power-hungry GenAI is an existential risk. It could make molecules that are harmful to humans or lead to models of fake news or deep fakes becoming a threat to democracy through mass campaigns of systematic and undetectable disinformation,” he warned. “Unlimited intelligence placed at the service of particular interests can create chemical or cybernetic weapons. The very companies that develop GenAI do so without knowing how to stop the process when GenAI acquires a degree of unrestrained autonomy.”

Álvarez-Pallete’s blog post reads as if the last decade of telcos banging on about AI’s potential brilliance never happened.

At Mobile World Congress 2017, Telefónica introduced Aura, which would bring cognitive understanding to the data flowing throughout its network and IT systems, enabling it to take what it called “a major qualitative leap” in the customer experience.

“Cognitive intelligence will allow us to understand our customers better, so they can then relate to us in a more natural and easy way, and generate a new relationship of trust with them based on transparency and the control of their data capabilities,” said a statement from Álvarez-Pallete at the time. “We are pioneers in this relationship model. Never before have the users of telecommunications services been able to talk with the networks in real time. We’re expanding the relationship with our customers, seeking to increase their satisfaction, and opening new possibilities for them so that they can enrich their digital lives with us.”

There was no suggestion from Telefónica’s presentation that one day an AI would try to wipe out humanity. Did Álvarez-Pallete happen to quote Descartes back in 2017?

There certainly appeared to be no references to French philosophers in 2019, when Telefónica showcased how corporate clients could benefit from big data and AI.

“Thanks to the transformation promoted within Telefónica with the use of artificial intelligence, we can now share our knowledge with our corporate clients by helping them build their own artificial intelligence and create experiences on one of our great pillars: our customers’ homes,” said Telefónica’s chief data officer, Chema Alonso, in a statement at the time.

Descartes seemed to be noticeably absent last September, when Telefónica launched Aura as a personal digital assistant that offers an alternative to Amazon Alexa and Apple’s Siri.

“A year ago we said we wanted artificial intelligence to be the basis of our clients’ relationship with us and we are delivering on our promise,” said Álvarez-Pallete, at the time. “Digitalisation is the core of our business and thanks to the deep internal transformation that we have undergone in the last eight years we have become a company of smart platforms.”

Now, in fairness to Telefónica, if you dig through its public policy website – or perhaps ask ChatGPT to do it – there is a section that soberly addresses issues concerning the responsible development, use and regulation of AI.

AI “is an essential technology for the qualitative advancement of our productive, scientific, educational, environmental, and social system. However, its design and use are not risk-free. It is therefore necessary to assess its impact and, if necessary, establish formal limits and ethical principles to safeguard human rights and the rule of law,” it states.

So it can be said that when it comes to the ethics of AI, Telefónica is aware and working on it.

However, that sort of reasoned discourse doesn’t get people’s attention, hence you end up with people conjuring images of malevolent supercomputers controlling armies of unfeeling killer robots. But as long as the sky isn’t actually falling in, these voices are very easy to ignore.

The recent surge in warnings about AI shows there is a need to discuss its potential implications for the future of society and humanity. However, the discussion needs to strike the right tone so that people pay proper attention, and don’t simply roll their eyes at all the scaremongering.

It’s a tricky balancing act, so perhaps we should ask for some help. On that note, here’s what ChatGPT thinks we should do:

As an AI assistant, I would recommend the following approach:

1. Start by acknowledging the benefits of artificial intelligence and how it has already transformed our lives.

2. Highlight the potential risks associated with the development of artificial general intelligence, such as unintended consequences, loss of control, and ethical concerns.

3. Use concrete examples to illustrate your points, such as the potential for AI to be used in autonomous weapons, the displacement of certain jobs, or the potential for AI to reinforce societal biases.

4. Provide information on ongoing research and initiatives aimed at mitigating these risks, such as the development of ethical principles and regulations for AI.

5. Encourage open dialogue and collaboration among stakeholders to ensure that the development of AI is human-centric and aligned with our values and principles.

Overall, it is important to convey the potential risks of artificial general intelligence in a calm and reasonable manner, while also promoting responsible development and ethical considerations in the field.




  1. Anon1121 19/04/2023 @ 3:32 pm

    It doesn’t think. It uses probability to pick the next word. No different to any other neural network. Wtf is this guy talking about…

  2. Nasir Nawaz 20/04/2023 @ 5:58 am

    Logical comments from the Telefónica CEO. Perhaps there should be standardisation bodies for AI as well, which can keep a check on the limits of AI. GenAI or strong AI could lead to potential threats to the systems in place. Technology must be regulated.
