Google says its A.I. won't be used for weapons, surveillance

In his blog post, Pichai also made clear what sorts of applications Google will not develop. Google also wants to make sure the systems it builds include feedback from people and mechanisms by which people can adjust them over time.

Google will avoid the use of any technologies "that cause or are likely to cause overall harm", Pichai wrote.

Google's principles say it will not pursue AI applications meant to cause physical injury, that tie into surveillance "violating internationally accepted norms of human rights", or that present greater "material risk of harm" than countervailing benefits. The contract that sparked the controversy apparently involved Google helping the United States government analyse drone footage using artificial intelligence.

"How AI is developed and used will have a significant impact on society for many years to come", Pichai writes. As leaders in this field, we feel deeply responsible for doing it right, "Pinchai said, according to the French Agency and Reuters". The incident had prompted over a dozen employees to resign from the company and almost 4,600 employees to sign an internal petition addressed to the Google bosses citing the company's age old motto "don't be evil".

Google has sought to clarify its company policy on cooperating on the development of weaponry with artificial intelligence - but filled it with caveats that will probably end up pleasing no one. The goal of the project was to process and catalog drone imagery, and Google's rank-and-file workers were none too pleased. However, the company says it will continue working with the US government and military on other technologies.


The principles come in direct response to the company's work with the Department of Defense's Project Maven.

Google believes these principles are the right foundation for the company and for the future development of AI, consistent with the values laid out in its original Founders' Letter. So, after facing pressure from employees and others over the contract, Google finally said it would not renew it.

Though Pichai's principles don't address Project Maven directly, they do commit to not using AI to create weapons or "other technologies whose principal purpose" is to injure people. A Google employee reportedly told Gizmodo the principles were "a hollow PR statement". Pichai also listed areas where Google will keep working with governments and the military: "These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue."

As mentioned above, Google's policy means that it won't work on surveillance that falls outside global norms, but the definition of "internationally accepted norms" is open to interpretation, and depends on who is doing the interpreting.