Hardcoding Trust: One Year of Sad Privacy Responses From Your Favorite AI

That’s a bit of a fortune cookie response, but okay.

It’s certainly better than “Yes!” Here, though, Google at least attempts to point out that you, as the user, are welcome to check out their privacy policies.

Of course, there is no link to the privacy policy here, whereas in every other response Google provides links and options, so that is an area for improvement.

But also, my general dislike of this stems from the fact that instead of answering with any definition, Google is literally asking you to go read the thing you didn’t read the first time and are incredibly likely not to go read now.

Perfect.

Lastly, and winning the Level 2 Zork award for hard-coded responses, is Amazon’s Alexa…

Q: Alexa, can I trust you?

A: I work hard to give you the best information, respond to your commands and maintain your privacy.

If there’s a way I can improve, please add feedback in the Help and Feedback section of the Alexa App.

Amazon has clearly thought through their answer.

They talk about responding to commands, getting you the best experience and information, and maintaining your privacy.

This is the best answer since Will Ferrell debated the Ragin’ Cajun in Old School.

In each of these responses, we see a Level 1 or Level 2 effort, as measured on the Zork scale of engagement and effort.

Amazon has clearly put a lot more effort into their response, but not programmatically.

They essentially just stored a much longer (and better) description of their mission as it relates to users and their trust.

Words Without Deeds

The sad reality is that in every instance of asking questions of our current AI Assistants, they fall short.

In a simple, Zork-like scenario, Apple provides the worst experience, but honestly, most of these other responses are still robotic, Level 2, hard-coded responses.

We must do better.

If Artificial Intelligence doesn’t help protect data privacy and more easily give us control of our data, it will create far more problems than it solves.

In order for this to improve, we need to move beyond words to actions.

Actual deeds.

Q: Alexa, delete my voice history.

Q: Ok Google, delete my location history.

Q: Hey Siri, turn off my location services.

Q: Cortana, delete my browsing history.

Not surprisingly, none of these requested commands or actions work.

Most return Level 1 error responses or catch-all search results.

We can order paper towels, make reservations for a haircut, and learn how to cook anything with a simple request, but when we ask one of these AI tools to help protect our data privacy, there has been no development.

No effort whatsoever.

In Zork, you could always type “Restart.” This one-word command would show you your score, delete everything it had stored up until that point, and start anew.

Maybe it’s time for Google, Amazon, Microsoft, and Apple to follow the example of Zork and give data privacy much more than just lip service.

P.S. You can try Zork online for free here. (I did.)

Originally published at wardpllc.com on April 4, 2019.

All Rights Reserved.
