03-28-2016, 07:05 PM
(03-28-2016, 03:50 AM)Axess Wrote: Eh, they didn't account for immature trolls I guess. Again, the bot ran for over a year in China without this kind of issue, but no one acknowledges that part.
Since the AI is not open source, there's no way for us to know really how the bot was abused and exploited. It's more than just sending it racist messages. If that's really all it took, then yeah M$ testing sucked, but the fact that it ran fine in China but not with Twitter testing says more about the online community than M$.
Broski. This is 'Murica. We ARE an immature country. No other country is as ignorant as us or thinks the way we do, because we're "free".
China, a very strict place, is a poor test environment for a program like this if you're expecting the results to carry over anywhere else.
And yes, it was that easy. Because of its learning algorithm, it was literally the equivalent of racist parents raising a child; how do you think that kid is going to turn out?
If you've got two Chan sites doing nothing but exploiting it all day, boom.
I think Microsoft's testing only went as far as confirming that sending the A.I. racist stuff would make it reply "against it", and they left it at that. Simple ABC thought process. What they didn't account for is users manipulating the conversation itself to teach it that kind of context.
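Since the source was never released (as noted above), nobody outside Microsoft knows how Tay actually worked; but the failure mode being described can be sketched with a purely hypothetical toy: a bot that adds user input straight into its reply pool with no filter, so a coordinated flood of one message ends up dominating everything it "knows".

```python
import random

# Hypothetical illustration only -- NOT Tay's actual code, which was
# never made public. A "parrot" bot that learns from whatever users
# say, with no moderation between input and its future vocabulary.
class NaiveLearningBot:
    def __init__(self):
        # Every message ever heard becomes potential reply material.
        self.learned = []

    def listen(self, message):
        # No filtering step: abusive input is stored like anything else.
        self.learned.append(message)

    def reply(self):
        # The bot can only echo back what it was taught.
        return random.choice(self.learned) if self.learned else "hi"

bot = NaiveLearningBot()
# A coordinated group flooding it skews the distribution of replies:
for msg in ["nice weather", "TROLL SPAM", "TROLL SPAM", "TROLL SPAM"]:
    bot.listen(msg)
# 3 of the bot's 4 learned messages are now the trolls' line, so most
# random replies will repeat it.
print(sum(m == "TROLL SPAM" for m in bot.learned), "of", len(bot.learned))
```

The point of the sketch is the missing moderation step in `listen()`: whether the real bot stored raw phrases or trained a model, learning from unvetted input means whoever talks to it the most, raises it.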
"Who am I to tell you something that you already know?
Who am I to tell you 'Hold on' when you wanna let go?
Who am I? I'm just a sicko with a song in my head and it keeps playing again and again and again and again."
https://youtu.be/bdJ7xe70ck0