
Monday, May 11, 2015

Meet the people out to stop humanity from destroying itself

Follow Kabir Chibber on Twitter as @quinto_quarto
Kabir Chibber, deputy news editor at Quartz, reports below: "Meet the people out to stop humanity from destroying itself."

In 1942, one of Robert Oppenheimer’s colleagues came to him with a disturbing suggestion: in the event their work on the Manhattan Project succeeded and they built the world’s first atomic bomb, it was quite possible the explosion would set the skies on fire. Shaken, Oppenheimer privately told one of the project’s most senior figures, Arthur Compton, who responded with horror, according to a biography of Oppenheimer:
"Was there really any chance that an atomic bomb would trigger the explosion of the nitrogen in the atmosphere or of the hydrogen in the ocean? This would be the ultimate catastrophe. Better to accept the slavery of the Nazis than to run a chance of drawing the final curtain on mankind!"
Compton told Oppenheimer that "unless they came up with a firm and reliable conclusion that our atomic bombs could not explode the air or the sea, these bombs must never be made." The team ran a series of calculations and decided the math supported their case that the "gadget," as the bomb was known, was safe. Work continued. Still, at the site of the Trinity test in New Mexico on July 16, 1945, one of the scientists offered the others a bet on "whether or not the bomb would ignite the atmosphere, and if so, whether it would merely destroy New Mexico or destroy the world." Luckily for us, it did neither...

"We attract weird people," Andrew Snyder-Beattie said. "I get crazy emails in my inbox all the time." What kinds of people? "People who have their own theories of physics."
 
The FHI’s Andrew Snyder-Beattie.
Photo: Quartz

Snyder-Beattie is the project manager at the Future of Humanity Institute. Headed up by Nick Bostrom, the Swedish philosopher famous for popularizing the risks of artificial intelligence, the FHI is part of the Oxford Martin School, created when a computer billionaire gave the largest donation in Oxford University’s 900-year history to set up a place to solve some of the world’s biggest problems. One of Bostrom’s research papers (pdf, p. 26) noted that more academic research has been done on dung beetles and Star Trek than on human extinction. The FHI is trying to change that.

The institute sits on the first floor—next to the Centre for Effective Altruism—of a practical, nondescript office building. In the main lobby, if you can call it that, there’s a huge multi-sided whiteboard, scribbled with notes, graphs, charts, and a small memorial to James Martin, the billionaire donor, who died in 2013. When I visited recently, the board was dominated by the ultimate office sweepstakes: a timeline that showed the likelihood, according to each FHI researcher, that the human race would go extinct in the next 100 years. They asked me not to publish it. (Most said the chances were quite low, but one person put it at 40%.)

"On an intellectual level, we have these core set of goals: try to figure out what really, really matters to the future of the largest part of humanity, and then what can we investigate about that?" says one of the researchers, a genial Swede named Anders Sandberg, who wears a peculiar steel medallion hanging over his shirt. "Real success would be coming up with an idea to make the world better. Or even figure out what direction 'better' is."

Source: Quartz