Humans Don’t Trust AI-based Calls

The technology exists to do it.

Consider Google Duplex, which calls a store on behalf of a Google Assistant user to book an appointment or order a pizza by acting like a real person:

Google Duplex: A.I. Assistant Calls Local Businesses To Make Appointments

When Duplex came out, lots of commentators were up in arms about Google tricking people by pretending to be a person.

They were right.

I love that the same year Google stopped using the motto “Don’t be Evil” they started rolling out Duplex.

Google may be tricking pizza makers and hair salons, but I’m way more worried about the truly bad actors using this technology as a modern robocaller to feed on consumers.

I think Leor Grebler said it best: this goes way beyond tricking the consumer. Even the training of these systems, with hapless humans as the test dummies, is super sketchy.

I worked with Leor in the past on voice interactive agents, and the whole field has been progressing at a crazy pace since the early days of converting text to speech and speech to text, to the point where ordinary people don't realize they are in a new reality of voice interaction agents.

Leor Grebler talking about his voice interaction technology way back in 2012. It's now 2019!

The technology is very strong compared to the voice quality of systems from just one or two years ago.

Have a look at the following voice samples from WaveNet: https://deepmind.com/blog/wavenet-generative-model-raw-audio/

Things are getting interesting, and that is not going to be painless for consumers.

By way of illustration, a close friend had the following experience in the past few weeks:

A salesperson called her on behalf of her bank.

This was a real sales call, not a scam.

The agent on the line with her had all the right intonation and enthusiasm of a human, laughed at her small talk, and paused to collect additional information as you would expect. But when she asked an oddball question that people don't usually ask, the call slowed down a lot, and then a human agent with an Indian accent hopped into the call to take over.

The person made it clear right away: “You have been talking with me the whole time, but we use artificial intelligence software to overcome the accent barrier”.

My friend was pretty weirded out, and hung up.

It’s a total violation.

Worse yet, the human agent is probably the backup system for 50 calls all placed at once.

Maybe more.

In my family we have a case of an elderly relative who has been on the Do Not Call list since 2012, and she gets robocalls at 6am, 7am, and even 11pm on her landline.

These are the old school recorded robocalls, but it’s really infuriating.

It’s not cool.

It’s basically mechanized harassment.

Systems like this are only going to get worse as the bad guys get their hands on this artificial intelligence voice agent stuff.

I had a look on Amazon, and there are some hardware devices for blocking spam calls, but cord-cutting and anti-spam cellphone apps are a better strategy.

Here is a quick thing I slapped together to make the point that this technology is readily available.

It will only be up for about a month, but it proves the point.

Call +1 213-224-2234 and you will hear a basic conversational agent I put up.

A recording of me calling the agent on the phone is embedded here:

Me calling an artificial intelligence system on the phone, to show the basic capability.
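For context on how little code a demo like this takes: the usual pattern is a webhook that a telephony platform (Twilio's TwiML is one example) calls with the caller's transcribed speech, and the webhook matches that text against a few intents and returns XML telling the platform what to say next. This is a hedged sketch, not my actual demo code; the intents, replies, and the `/voice` endpoint are made-up illustrations, and it uses only the Python standard library to build the response:

```python
import xml.etree.ElementTree as ET

# Hypothetical intent table: keyword fragment -> canned spoken reply.
# A real deployment would use proper natural language understanding,
# not substring matching.
INTENTS = {
    "hours": "We are open nine to five, Monday through Friday.",
    "appointment": "I can book that. What day works for you?",
    "human": "Sure, transferring you to a person now.",
}

FALLBACK = "Sorry, I can only answer questions about hours and appointments."

def reply_for(transcript: str) -> str:
    """Pick a canned reply by scanning the transcript for intent keywords."""
    text = transcript.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return FALLBACK

def twiml_response(speech: str) -> str:
    """Wrap the reply in TwiML-style XML so a telephony platform speaks it,
    then listens for the caller's next utterance."""
    resp = ET.Element("Response")
    gather = ET.SubElement(resp, "Gather", input="speech", action="/voice")
    ET.SubElement(gather, "Say").text = speech
    return ET.tostring(resp, encoding="unicode")

print(reply_for("what are your hours?"))
print(twiml_response(reply_for("can I book an appointment")))
```

Note how the fallback reply states the system's limits upfront, which is the honest-disclosure behavior discussed below.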

Systems like this have a lot of problems.

Notice that the system I demonstrated tells you upfront that it can only do certain things, so the user (the caller) is more aware of what to expect and what is going on.

In my talk with Darryl Praill (below), we discussed how artificial intelligence is not going to replace salespeople, because consumers hate being tricked, and the tools out there now are often not being applied correctly (or ethically) to help the consumer.

My talk with Darryl Praill of VanillaSoft.com fame, about artificial intelligence in sales.

The main takeaway of the new survey report I mentioned was: “Consumers display distrust toward conversational AI tools like Google Duplex, but trust will build as usage increases.”

Clutch is a B2B ratings and reviews firm in Washington, D.C. To get the data, Clutch partnered with Ciklum, a global digital solutions firm, to survey more than 500 people about their level of trust toward conversational AI.

The survey is telling us that the technology is going to be used.

A lot.

And so buckle up for a lot of consumer complaints.

A robocaller selling stuff on the phone is annoying enough, but this AI calling technology is not like an illegal caller trying to scam you. Rather, it's an annoying, trust-crunching business tool that scales and is easily cost-justified.

It’s a robocall in disguise, and on steroids.

Even with humans stepping in to catch cases where the system does not know what to do, it’s not a good idea to use technology to trick people.

Trust is going to be an issue.

My friend’s dad told her a neat trick to try out.

Simply ask the caller if they are a human.

Many of these systems are programmed to pull you to a human in response.

I know from a banking guy I met at a conference that cursing at the system will probably get it to hang up on you, and that saying things in an angry, yelling tone is likely to get you to a real person.
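The tricks above (asking if the caller is human, cursing, yelling) suggest these systems sit behind a simple routing policy. Here is a purely illustrative sketch of what such a policy might look like; the trigger phrases, the stand-in profanity list, and the loudness threshold are all my own assumptions, not any vendor's real rules:

```python
import re

# Illustrative escalation triggers -- assumptions, not a real vendor's logic.
HUMAN_REQUEST = re.compile(
    r"\b(are you (a )?human|real person|speak to (a|an) (human|agent|person))\b"
)
PROFANITY = {"damn", "hell"}  # stand-in list for a real profanity filter

def route(transcript: str, volume_db: float = 60.0) -> str:
    """Decide whether the bot keeps the call, escalates, or hangs up.
    volume_db is a stand-in for a real prosody/anger signal."""
    text = transcript.lower()
    if any(word in text.split() for word in PROFANITY):
        return "hangup"             # cursing tends to end the call
    if HUMAN_REQUEST.search(text) or volume_db > 75.0:
        return "escalate_to_human"  # direct question or yelling -> human
    return "bot_continues"

print(route("Are you a human?"))           # escalate_to_human
print(route("what are your hours", 80.0))  # escalate_to_human (loud caller)
print(route("goodbye"))                    # bot_continues
```

The point of the sketch is how cheap this logic is: a regex and a threshold, backed by one human agent watching many calls at once.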

But I’m not a great actor, and so I’ll stick to “Goodbye”.

In this article you followed along through some negative experiences and extreme frustration I have had with both human and AI-based agents.

If you liked this article, then have a look at some of my most-read past articles, like “How to Price an AI Project” and “How to Hire an AI Consultant.” And hey, check out our newsletter!

Until next time!

-Daniel
lemay.ai
