Google is drawing up a set of guidelines that will steer
its involvement in developing AI tools for the military, according to a
report from The New York Times.
What exactly these guidelines will stipulate isn’t clear, but Google
says they will include a ban on the use of artificial intelligence in
weaponry.
The principles are expected to be announced in full in the
coming weeks. They are a response to the controversy over the company’s
decision to develop AI tools for the Pentagon that analyze drone
surveillance footage.
Although tech companies regularly bid for contracts in
the US defense sector, the involvement of Google (a company that once
boasted the motto “don’t be evil”) with cutting-edge AI tech has raised
eyebrows, both inside and outside the firm. News of the Pentagon
contract was first made public by Gizmodo in March, and thousands of Google employees have since signed a petition demanding that the company withdraw from all such work. Around a dozen employees have even resigned.
Internal emails obtained by the Times show that
Google was aware of the upset this news might cause. Fei-Fei Li, chief
scientist at Google Cloud, told colleagues that they should “avoid at ALL
COSTS any mention or implication of AI” when announcing the Pentagon
contract. “Weaponized AI is probably one of the most sensitized topics
of AI — if not THE most. This is red meat to the media to find all ways
to damage Google,” said Li.
But Google never made the announcement, and it
has since been on the back foot defending its decision. The company
says the technology it’s helping to build for the Pentagon simply “flags
images for human review” and is for “non-offensive uses only.” The
contract is also small by industry standards — worth just $9 million to
Google, according to the Times.
Source: The Verge