

Friday, April 09, 2021

Am I arguing with a machine? AI debaters highlight need for transparency

With artificial intelligence starting to take part in debates with humans, more oversight is needed to avoid manipulation and harm

Noam Slonim of IBM Research next to the corporation’s AI debating system, Project Debater.
Photo: Eric Risberg/AP/Shutterstock

Can a machine powered by artificial intelligence (AI) successfully persuade an audience in debate with a human? Researchers at IBM Research in Haifa, Israel, think so.

They describe the results of an experiment in which a machine engaged in live debate with a person. Audiences rated the quality of the speeches they heard, and ranked the automated debater's performance as very close to that of humans. Such an achievement is a striking demonstration of how far AI has come in mimicking human-level language use (N. Slonim et al. Nature 591, 379–384; 2021). As this research develops, it is also a reminder of the urgent need for guidelines, if not regulations, on transparency in AI — at the very least, so that people know whether they are interacting with a human or a machine. AI debaters might one day develop manipulative skills, further strengthening the need for oversight.

The IBM AI system is called Project Debater...

Nothing like that can simply be mined from training data. But researchers are starting to incorporate some elements of a theory of mind into their AI models (L. Cominelli et al. Front. Robot. AI; 2018) — with the implication that the algorithms could become more explicitly manipulative (A. F. T. Winfield Front. Robot. AI; 2018). Given such capabilities, it's possible that a computer might one day create persuasive language with stronger oratorical ability and recourse to emotive appeals — both of which are known to be more effective than facts and logic in gaining attention and winning converts, especially for false claims (C. Martel et al. Cogn. Res.; 2020; S. Vosoughi et al. Science 359, 1146–1151; 2018).

Read more... 

Additional resources

Nature 592, 166 (2021)