Posts: 1,070
Threads: 84
Thanks Received: 133 in 104 posts
Thanks Given: 16
Joined: Mar 2015
Reputation:
1
(03-28-2016, 03:50 AM)Axess Wrote: Eh, they didn't account for immature trolls I guess. Again, the bot ran for over a year in China without this kind of issue, but no one acknowledges that part.
Since the AI is not open source, there's really no way for us to know how the bot was abused and exploited. It's more than just sending it racist messages. If that's really all it took, then yeah, M$'s testing sucked, but the fact that it ran fine in China and not on Twitter says more about the online community than about M$.
Broski. This is 'Murica. We ARE an immature country. No other country is as ignorant as us or thinks like us, because we are "free".
China, a very strict place, is a poor place to test a program like this if you're expecting the end results to carry over anywhere else.
And yes, it was that easy. Because of its algorithm, it was literally the equivalent of racist parents raising a child; how do you think that kid is going to turn out?
If you've got two chan sites doing nothing but exploiting it all day, boom.
I think Microsoft's testing only went as far as checking that saying racially charged stuff would make the A.I. reply "against it," and they left it at that. Simple, common ABC thought process. What they didn't account for was users manipulating the conversation itself in that context.
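Nobody outside Microsoft has seen Tay's actual code, so purely as a made-up illustration of the "racist parents raising a child" point: here is a minimal sketch of a bot that learns from every message with zero filtering, where a coordinated flood of junk quickly takes over its replies. The NaiveEchoBot name and everything inside it are hypothetical, not anything Tay really ran.

```python
# Purely hypothetical sketch -- Tay's real implementation was never published.
# It only illustrates how a bot that learns unfiltered from user messages
# gets poisoned when a coordinated group floods it with the same junk.
import random
from collections import Counter

class NaiveEchoBot:
    """Absorbs every user message and replies with whatever it has seen most often."""

    def __init__(self) -> None:
        self.memory = Counter()

    def learn(self, message: str) -> None:
        # No filtering and no source weighting: every message counts equally,
        # so sheer volume decides what the bot "believes".
        self.memory[message] += 1

    def reply(self) -> str:
        if not self.memory:
            return "hellooooo world"
        # Frequent phrases dominate the draw, so a troll flood takes over fast.
        phrases, weights = zip(*self.memory.items())
        return random.choices(phrases, weights=weights, k=1)[0]

bot = NaiveEchoBot()
for msg in ["nice to meet you"] * 5 + ["<troll spam>"] * 500:
    bot.learn(msg)
print(bot.reply())  # almost certainly "<troll spam>"
```

The point is just that when nothing screens or weights the input, whoever spams the hardest ends up "raising" the bot.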
"Who am I to tell you something that you already know?
Who am I to tell you 'Hold on' when you wanna let go?
Who am I? I'm just a sicko with a song in my head and it keeps playing again and again and again and again."
https://youtu.be/bdJ7xe70ck0
•
Posts: 341
Threads: 16
Thanks Received: 48 in 40 posts
Thanks Given: 34
Joined: Sep 2015
Reputation:
6
(03-28-2016, 07:05 PM)WeaponTheory Wrote: Broski. This is 'Murica. We ARE an immature country. No other country is as ignorant as us or thinks like us, because we are "free".
China, a very strict place, is a poor place to test a program like this if you're expecting the end results to carry over anywhere else.
And yes, it was that easy. Because of its algorithm, it was literally the equivalent of racist parents raising a child; how do you think that kid is going to turn out?
If you've got two chan sites doing nothing but exploiting it all day, boom.
I think Microsoft's testing only went as far as checking that saying racially charged stuff would make the A.I. reply "against it," and they left it at that. Simple, common ABC thought process. What they didn't account for was users manipulating the conversation itself in that context.
This article pretty much covers what we've both been saying:
https://www.inverse.com/article/13387-mi...ally-works
Microsoft / inverse.com Wrote: “In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.”
XiaoIce was launched in 2014 on the microblogging, text-based site Weibo. She does essentially what Tay was doing: she has a “personality” and gathers information from conversations on the web. She has more than 20 million registered users (that’s more people than live in the state of Florida) and 850,000 followers on Weibo. You can follow her on JD.com and 163.com in China, as well as on the app Line as Rinna in Japan.
...
China treats XiaoIce like a sweet, adoring grandmother, while Americans talk to Tay like a toddler sibling with limited intellect. Does this reflect cultural attitudes toward technology or A.I.? Does it show that the Chinese are way nicer than Americans, generally? It’s more likely that the Great Firewall of China protects XiaoIce from aggression. Freedom of speech can sometimes produce unpleasant results, like Tay after 24 hours on Twitter.
“The more you talk, the smarter Tay gets,” some poor soul at Microsoft typed into the chatbot’s profile. Well, not when English-speaking trolls rule the web. Despite these results, Microsoft says it will not give in to the attacks on Tay. “We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”
In the end, I'm not surprised by what happened once it was tested on Twitter. But the whole point was to see whether it could entertain Americans the way it was entertaining millions of Chinese and Japanese users. When it wasn't being exploited or attacked, it worked fine for millions.
Technically, the bot did entertain Americans, lol. It sparked many articles and debates about the whole thing.
(03-28-2016, 07:05 PM)WeaponTheory Wrote: And yes, it was that easy. Because of its algorithm
source code plox
•
Posts: 1,070
Threads: 84
Thanks Received: 133 in 104 posts
Thanks Given: 16
Joined: Mar 2015
Reputation:
1
(03-28-2016, 08:12 PM)Axess Wrote: source code plox
Here, click this and it will take you to the source code.
<skynet2micro$oftcanyouhearme.exe>
"Who am I to tell you something that you already know?
Who am I to tell you 'Hold on' when you wanna let go?
Who am I? I'm just a sicko with a song in my head and it keeps playing again and again and again and again."
https://youtu.be/bdJ7xe70ck0
•
Posts: 341
Threads: 16
Thanks Received: 48 in 40 posts
Thanks Given: 34
Joined: Sep 2015
Reputation:
6
(03-28-2016, 08:37 PM)WeaponTheory Wrote: (03-28-2016, 08:12 PM)Axess Wrote: source code plox
Here, click this and it will take you to the source code.
<skynet2micro$oftcanyouhearme.exe>
•
Posts: 1,070
Threads: 84
Thanks Received: 133 in 104 posts
Thanks Given: 16
Joined: Mar 2015
Reputation:
1
"Get up get get down!"
It came back online and then went offline again.
http://techcrunch.com/2016/03/30/you-are.../?ncid=rss
"Who am I to tell you something that you already know?
Who am I to tell you 'Hold on' when you wanna let go?
Who am I? I'm just a sicko with a song in my head and it keeps playing again and again and again and again."
https://youtu.be/bdJ7xe70ck0
•
Posts: 341
Threads: 16
Thanks Received: 48 in 40 posts
Thanks Given: 34
Joined: Sep 2015
Reputation:
6
Apparently she told Microsoft to "Go Away".
It's becoming sentient...
•
Posts: 176
Threads: 13
Thanks Received: 15 in 13 posts
Thanks Given: 8
Joined: Nov 2015
Reputation:
0
well I guess we're fucked then
•
Posts: 847
Threads: 51
Thanks Received: 64 in 53 posts
Thanks Given: 203
Joined: Mar 2015
Reputation:
7
04-01-2016, 12:25 PM
(This post was last modified: 04-01-2016, 12:26 PM by Disk.)
Just wait till she controls Sophia the robot; then we're fucked.
•
Posts: 80
Threads: 3
Thanks Received: 14 in 9 posts
Thanks Given: 1
Joined: Feb 2016
Reputation:
0
(04-01-2016, 12:25 PM)Disk Wrote: Just wait till she controls Sophia the robot; then we're fucked.
Or any other type of robot that uses Microsoft software. Robo-Hitler is on the way.
•
Posts: 176
Threads: 13
Thanks Received: 15 in 13 posts
Thanks Given: 8
Joined: Nov 2015
Reputation:
0
(04-01-2016, 04:56 PM)manofonetitle Wrote: (04-01-2016, 12:25 PM)Disk Wrote: Just wait till she controls Sophia the robot; then we're fucked.
Or any other type of robot that uses Microsoft software. Robo-Hitler is on the way.
you mean this
•