03-28-2016, 08:12 PM
(03-28-2016, 07:05 PM)WeaponTheory Wrote: Broski. This is 'Murica. We ARE an immature country. No other country is as ignorant as us or thinks the way we do, because we are "free".
China, a very strict place, is a poor place to test a program like this if you're expecting those results to hold up anywhere else.
And yes, it was that easy. Because of its algorithm, it was literally the equivalent of racist parents raising a child; how do you think that kid is going to turn out?
If you've got two chan sites doing nothing but exploiting it all day, boom.
I think Microsoft's testing only went as far as confirming that saying racially charged stuff would make the A.I. reply "against it," and they left it at that. Simple ABC thought process. What they didn't account for was users manipulating the conversation around that kind of context.
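Just to spell out what you mean by that "ABC thought process" (a made-up Python sketch, obviously not Microsoft's actual code): a plain keyword blocklist catches the obvious stuff, but it folds the second someone games the conversation instead of typing a banned word.

Code:
# Hypothetical sketch, not a real Tay implementation: just the naive
# keyword-check approach described above.
BLOCKLIST = {"badword1", "badword2"}  # placeholder tokens for banned terms

def naive_reply(user_message: str) -> str:
    """Canned pushback on blocklisted words; otherwise absorb/echo the input."""
    words = set(user_message.lower().split())
    if words & BLOCKLIST:
        return "I'm against that."  # the canned "against it" reply
    # Anything that slips past the word check gets absorbed and parroted back,
    # which is exactly how "repeat after me" style manipulation poisons the bot.
    return user_message

print(naive_reply("badword1 is great"))                # blocked: canned reply
print(naive_reply("repeat after me: b4dword1 rules"))  # sails right through

The real bot obviously had more than a word list behind it, but the failure mode is the same: the check looks at words, and the trolls attack the conversation.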
This article pretty much covers what we've both been saying:
https://www.inverse.com/article/13387-mi...ally-works
Microsoft / inverse.com Wrote:“In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.”
XiaoIce was launched in 2014 on the microblogging, text-based site Weibo. She does essentially what Tay was doing: she has a “personality” and gathers information from conversations on the web. She has more than 20 million registered users (that’s more people than live in the state of Florida) and 850,000 followers on Weibo. You can follow her on JD.com and 163.com in China as well as on the app Line as Rinna in Japan.
...
China treats XiaoIce like a sweet, adoring grandmother, while Americans talk to Tay like a toddler sibling with limited intellect. Does this reflect cultural attitudes toward technology or A.I.? Does it show that the Chinese are way nicer than Americans, generally? It’s more likely that the Great Firewall of China protects XiaoIce from aggression. Freedom of speech can sometimes produce unpleasant results, like Tay after 24 hours on Twitter.
“The more you talk, the smarter Tay gets,” some poor soul at Microsoft typed into the chatbot’s profile. Well, not when English-speaking trolls rule the web. Despite these results, Microsoft says it will not give in to the attacks on Tay. “We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”
In the end, I'm not surprised by what happened once it was tested on Twitter. But the whole point was to see whether it could entertain Americans the way it was already entertaining millions of Chinese and Japanese users. When it wasn't being exploited/attacked, it worked fine for millions.
Technically, the bot did entertain Americans, lol. It sparked many articles and debates about the whole thing.
(03-28-2016, 07:05 PM)WeaponTheory Wrote: And yes, it was that easy. Because of its algorithm
source code plox