AI Responds to Extremists


Brief summary

Scientists have taught artificial intelligence systems how to respond to people who make unpleasant, hateful remarks.


English transcript

This is Scientific American's 60-Second Science. I'm Christopher Intagliata.

Social media platforms like Facebook use a combination of artificial intelligence and human moderators to scout out and eliminate hate speech.

But now researchers have developed a new AI tool that wouldn’t just scrub hate speech but would actually craft responses to it, like this: “The language used is highly offensive. All ethnicities and social groups deserve tolerance.”

“And this type of intervention response can hopefully short-circuit the hate cycles that we often get in these types of forums.”

Anna Bethke, a data scientist at Intel. The idea, she says, is to fight hate speech with more speech—an approach advocated by the ACLU and the U.N High Commissioner for Human Rights.

So with her colleagues at UC Santa Barbara, Bethke got access to more than 5,000 conversations from the site Reddit and nearly 12,000 more from Gab—a social media site where many users banned by Twitter tend to resurface.

The researchers had real people craft sample responses to the hate speech in those Reddit and Gab conversations. Then they let natural-language-processing algorithms learn from the real human responses and craft their own, such as: “I don’t think using words that are sexist in nature contribute to a productive conversation.”
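The approach described here—collecting human-written counter-responses and having a model learn to produce its own—can be illustrated with a deliberately simple sketch. The pairs below are invented for illustration, and the retrieval-by-similarity strategy is a stand-in: the actual research trained neural natural-language-processing models on thousands of annotated Reddit and Gab conversations, not a lookup like this.

```python
# Toy sketch of the intervention idea: given (post, human counter-response)
# training pairs, reply to a new post with the response paired to the most
# similar known post. Stdlib-only; not the authors' actual method.
from difflib import SequenceMatcher

# Hypothetical training pairs (invented for illustration).
TRAINING_PAIRS = [
    ("offensive remark about an ethnic group",
     "The language used is highly offensive. "
     "All ethnicities and social groups deserve tolerance."),
    ("sexist insult in a discussion thread",
     "I don't think using words that are sexist in nature "
     "contribute to a productive conversation."),
]

def respond(post: str) -> str:
    """Return the counter-response paired with the most similar known post."""
    best_pair = max(
        TRAINING_PAIRS,
        key=lambda pair: SequenceMatcher(None, post.lower(), pair[0]).ratio(),
    )
    return best_pair[1]

print(respond("a sexist insult someone posted"))
```

A retrieval baseline like this can only echo responses it has seen; the generative models in the study compose new sentences, which is also why they sometimes produce the garbled outputs mentioned next.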

Which sounds pretty good. But the machines also spit out slightly head-scratching responses like this one: “This is not allowed and un time to treat people by their skin color.”

And when the scientists asked human reviewers to blindly choose between human responses and machine responses—well, most of the time, the humans won.

The team published the results on the site arXiv and will present them next month in Hong Kong at the Conference on Empirical Methods in Natural Language Processing.

Ultimately, Bethke says, the idea is to spark more conversation.

“And not just to have this discussion between a person and a bot but to start to elicit the conversations within the communities themselves—between the people that might be being harmful and those they’re potentially harming.”

In other words, to bring back good ol’ civil discourse?

“Oh! I don’t know if I’d go that far. But it sort of sounds like that’s what I just proposed, huh?”

Thanks for listening. For Scientific American's 60-Second Science, I'm Christopher Intagliata.
