Scientific American 60-Second Science: In the Future, You Might Be Arguing with Software

Artificial Intelligence Learns to Talk Back to Bigots

Social media platforms like Facebook use a combination of artificial intelligence and human moderators to scout out and eliminate hate speech. But now researchers have developed a new AI tool that wouldn't just scrub hate speech, but would actually craft responses to it, like: 'The language used is highly offensive. All ethnicities and social groups deserve tolerance.'
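
The transcript's example reply gives a concrete picture of the "respond rather than just scrub" idea. A toy sketch of that flow, where both functions are stand-ins (the flagging test is a placeholder, not a real classifier, and the fixed reply is the example quoted above; the actual tool generates a new reply per post):

```python
from typing import Optional

def is_hate_speech(post: str) -> bool:
    """Stand-in for the AI/human moderation step that flags a post."""
    return "<offensive term>" in post.lower()   # placeholder test

def intervene(post: str) -> Optional[str]:
    """Instead of deleting a flagged post, attach a counter-speech reply."""
    if not is_hate_speech(post):
        return None
    # Fixed reply for illustration -- the researchers' tool generates one.
    return ("The language used is highly offensive. "
            "All ethnicities and social groups deserve tolerance.")
```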

"And this type of intervention response can hopefully short circuit the hate cycles that we often get in these types of forums." Anna Bethke, a data scientist at Intel. The idea, she says, is to fight hate speech with more speech. An approach advocated by the ACLU and the UN High Commissioner for Human Rights.

So, with her colleagues at UC Santa Barbara, Bethke got access to more than 5,000 conversations from the site Reddit, and nearly 12,000 more from Gab—a social media site where many users banned by Twitter tend to resurface.

The researchers had real people craft sample responses to the hate speech in those Reddit and Gab conversations. Then, they let natural language processing algorithms learn from the real human responses, and craft their own. Such as: 'I don't think using words that are sexist in nature contribute to a productive conversation.'
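
The transcript doesn't name a model, but "let natural language processing algorithms learn from the real human responses" is, in outline, sequence-to-sequence training: the flagged post is the input, the human-written reply is the target. A minimal sketch under that assumption, using the Hugging Face transformers library; the model choice and the tab-separated pairs.tsv file of (post, response) examples are hypothetical, not the team's actual setup:

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Any pretrained seq2seq model works for the sketch; BART is one choice.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Hypothetical file: one "post<TAB>human_response" pair per line.
pairs = [line.rstrip("\n").split("\t")
         for line in open("pairs.tsv", encoding="utf-8")]

def collate(batch):
    posts, responses = zip(*batch)
    enc = tokenizer(list(posts), padding=True, truncation=True,
                    return_tensors="pt")
    labels = tokenizer(list(responses), padding=True, truncation=True,
                       return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(pairs, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss  # cross-entropy against the human replies
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# After training, craft a response to a new flagged post.
model.eval()
inputs = tokenizer("<some flagged post>", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```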

Which sounds pretty good. But the machines also spit out slightly head-scratching responses like this one: 'This is not allowed and un time to treat people by their skin color.' And when the scientists asked human reviewers to blindly choose between human responses and machine responses… well, most of the time, the humans won. The team published the results on the preprint site arXiv, and will present them next month in Hong Kong at the Conference on Empirical Methods in Natural Language Processing.
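
The blind comparison described here amounts to a pairwise preference test: a reviewer sees the human-written and the machine-written response to the same post, unlabeled and in random order, and picks one. A toy sketch of that bookkeeping (the reviewer callback and the data layout are invented for illustration):

```python
import random
from collections import Counter

def blind_trial(post, human_resp, machine_resp, ask_reviewer):
    """Show both responses unlabeled, in random order; return the winner."""
    options = [("human", human_resp), ("machine", machine_resp)]
    random.shuffle(options)  # hide any positional cue to authorship
    pick = ask_reviewer(post, options[0][1], options[1][1])  # returns 0 or 1
    return options[pick][0]

def evaluate(triples, ask_reviewer):
    """Tally win rates over (post, human_resp, machine_resp) triples."""
    tally = Counter(blind_trial(p, h, m, ask_reviewer) for p, h, m in triples)
    total = sum(tally.values())
    return {source: count / total for source, count in tally.items()}

# Demo with a stand-in reviewer that always picks the first option shown;
# because order is shuffled, the win rates come out near 50/50.
if __name__ == "__main__":
    triples = [("post", "human reply", "machine reply")] * 1000
    print(evaluate(triples, lambda post, a, b: 0))
```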

Ultimately, Bethke says, the idea is to spark more conversation. "Not just to have this discussion between a person and a bot but to start to elicit the conversations within the communities themselves between the people that might be being harmful, and those they're potentially harming."

In other words: to bring back good ol' civil discourse? "Oh! I don't know if I'd go that far, but it sort of sounds like that's what I just proposed, huh?"

Source: VOA English Learning Network, https://www.chinavoa.com/show-8762-241857-1.html