What’s in Your Bot?

There’s been a huge rush toward using AI (artificial intelligence) to build “conversational UIs”—user interfaces that allow us to type or speak to computers in natural language. Sorta. It’s the latest interaction mode, and it comes after people interacting with machines, then talking to each other through machines, then talking to machines. Kindah like a conversation (but not really). Here’s a diagram of that progression:

Cover of the proposal to NSF, “Graphical Conversation Theory”, written by the MIT Architecture Machine Group, 1977

Today, when you hear about all that, “AI” means a specialized and hugely popular kind of AI called machine learning. (Yeah, I didn’t make that a link; you can just google it. We all know that we all know how. You’ll find some OK stuff about it.)

So when Siri or Cortana, Amazon or Google, Apple or Facebook, IBM or GE—all of whom are infected with the AI meme—deploy the machine-learning brand of artificial intelligence, it might be good for you to think about it. (But then, that’s up to you.)

I think about machine learning being everywhere in the virtual world whenever I make a typo on my mobile and my text gets snatched away from me and turned into drivel. (Or every time I ask my intelligent assistant two related questions in a row and it behaves as if I’m the schizophrenic in the chat.)

And here’s how I think about it:

  1. I type in some garbage characters by mistake and the machine “corrects” them.
  2. The machine can do this because it was fed lots of data to “train” it.
  3. “Training” a machine-learning program is essentially declaring to it, “When I give you this—or something close to it—you give me that.”
  4. “Machine learning” means feeding lots of data to machines (computers, yes) to train them to “correct” me.
  5. For such a machine, “correct” means that it thinks to itself, “Because what Paul just typed is so very uncommon, I’m going to find something that a jillion people have already typed—hoping that’s what Paul really meant—and offer that.” (There’s a toy sketch of this move right after the list.)
  6. So, when I text you, machine learning usurps the phrase I’m trying to type—morphing it from something rare and intended especially for you, into a lowest-common-denominator, everyday phrase.
  7. And, while severely flattening what I wanted to say, it treats you as if you are the same as so many other people—where I was saying something personal for you, now you’re reading something impersonal, for most anybody.
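
To make that concrete, here’s a toy sketch in Python of the move described in item 5. It is emphatically not how any real keyboard or assistant works (those use far bigger corpora and far fancier statistical models); it just boils the idea down to a frequency table plus one-letter edits. The corpus, the function names, and the example words are all made up for illustration.

```python
from collections import Counter

# Toy "training data": in real life this is a huge pile of text from a jillion
# people; here it's a made-up handful of words, just to illustrate the idea.
CORPUS = "the quick brown fox jumps over the lazy dog the dog naps".split()
FREQ = Counter(CORPUS)  # how often "everybody" has typed each word

def edits1(word):
    """Every string one keystroke away from `word` (delete, swap, replace, insert)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Offer the most *common* nearby word -- i.e. replace the rare with the average."""
    candidates = [w for w in edits1(word) if w in FREQ] or [word]
    return max(candidates, key=FREQ.get)

print(correct("dof"))   # -> "dog": not what you typed, but what most people type
print(correct("qix"))   # -> "qix": nothing common is nearby, so it leaves you alone
```

That `max(candidates, key=FREQ.get)` line is the whole complaint in miniature: whatever is most popular wins, and the rare thing you actually meant gets paved over.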

I don’t particularly like being turned into an average. Whaddya think—do you?

P.S. I think about AI and conversational UIs that way because of conversation theory. (Please don’t google that; really, it’s not a good idea. Please click the link. Trust a little. You’ve made an investment here; it might pay off a bit more.)