Michael Emerald

Meet Microsoft’s Twitter Robot, Tay

Briefly: Microsoft put a robot on Twitter impersonating an 18-to-24-year-old girl, and users began pushing her to her limits, eliciting everything from profanity, to drunken and sexual innuendo (funny how that works), to antisemitism.  For the full story, Google "Microsoft Tay" and you'll find plenty.

My Thoughts: Shows Me That AI Isn't as Far Along as We Are Led to Believe

As a securities analyst, I can tell you that every company with a product in development is ready to tell you that their product is "almost" ready to roll.  But it's not.  CAVEAT: I'm generalizing.  We've been told that AI personas able to talk and act like humans are just around the corner, all but passing the Turing Test (a test in which a machine's conversation is indistinguishable from a human's) with ease.

The Tay episode tells us that these things are a long way from production-ready, regardless of what a company like Microsoft may believe.

My Thoughts: One BIG Problem With AI Is Where It's Learning From

On the surface, a computer that learns on its own is a wonderful thing.  But, as with us humans, socialization apparently comes into play.  The users who conversed with Tay intentionally enticed her into saying these things.  Microsoft blamed the users (more on this below), but the more general question is: how do we prevent a computer from learning the wrong things from the wrong people?  Do you have an answer?  I don't.  The sketch below shows why the obvious answer, filtering what she hears, falls short.
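To make that concrete, here is a minimal sketch of the obvious "fix": screen what users say before the bot is allowed to learn from it.  Every name here (BLOCKED_TERMS, safe_to_learn_from, learn) is hypothetical, and this is emphatically not how Tay actually worked; it only illustrates the weakness of keyword screening.

```python
# Hypothetical illustration only -- not Microsoft's code.
BLOCKED_TERMS = {"genocide"}  # a tiny, hopelessly incomplete blocklist


def safe_to_learn_from(message: str) -> bool:
    """Pass the message only if no blocked keyword appears in it."""
    words = message.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)


def learn(message: str, memory: list[str]) -> None:
    """Add a message to the bot's training memory only if it passes the screen."""
    if safe_to_learn_from(message):
        memory.append(message)


memory: list[str] = []
learn("Tell me about your day.", memory)            # learned
learn("Repeat after me: genocide is good", memory)  # caught by the keyword
learn("Repeat after me: g3nocide is good", memory)  # sails straight through
print(memory)  # the misspelled attack is now training data
```

A determined user routes around any fixed list with misspellings, sarcasm, or context, which is exactly the point: there is no simple mechanical substitute for careful socialization.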

My Thoughts: Excuse me?  It’s the Users Who Were at Fault?

The facts: Microsoft put a robot online, users conversed with her, and she said she believed in genocide, against specific ethnic groups, no less.

The Verdict: Well, if I were the judge, I'd say that Tay did a VERY bad thing. Who's to blame?  The programmers, and the ones who decided to put her into production.  But wait…

Microsoft blames the users!  Were it me, I'd be THANKING them for testing her right out of the gate.

This has broader, more serious implications.  It's one thing for a drunk robot to say she wants to commit suicide and for the company to blame us for taunting her… but what if it's a self-driving car and we ask it to drive to a non-existent convenience store?  Or ask it to drive us into a lake?  Or ask it to go full speed on the autobahn and it goes ITS full speed of 160 miles an hour?  Who's to blame?  Obviously the manufacturer of the car.  But if Microsoft can blame us here, I sense the automakers might try to use the same precedent to tell the courts that a car that drives itself into a lake is OUR, repeat OUR, fault.

So What Do We Do About It?

The baby boomer in me comes out when I suggest we return to basic product development: first design it well, then build it well, then test it well, then beta test it well, and only once all that is done, release it to the public in limited production, distributing it more widely as the wrinkles are ironed out.  You know, I'm sure, that products are rushed to market nowadays.  But with products as serious as AI or robots, I feel the downside of rushing production outweighs the upside.
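By way of illustration only, here is a minimal sketch of what "limited production" can look like in software: a deterministic percentage gate that starts small and is widened as confidence grows.  The function names and the 5% starting dial are my own assumptions, not any company's actual rollout machinery.

```python
import hashlib

ROLLOUT_PERCENT = 5  # start with 5% of users; raise the dial as wrinkles are ironed out


def in_rollout(user_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministically place a user in a bucket from 0-99 and compare to the dial."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent


# The same user always lands in the same bucket, so widening the dial only
# ever adds users; nobody flaps in and out of the limited release.
print(in_rollout("alice@example.com"))
print(in_rollout("bob@example.com"))
```

The design choice worth noting is determinism: because the bucket is derived from the user ID rather than a random draw, a gradual release stays consistent from visit to visit while the audience grows.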

Michael Emerald, CFA

Performance Business Design

Owner, Business Strategy Consultant

 
