Google AI ethical principles

Notably, the standards leave room for Google’s Cloud business to bid for contracts with government agencies or the military. In April, Defense One reported that the company was quietly pursuing a large, competitive cloud contract with the Defense Department.

In addition to outlining which AI applications it won’t pursue, Google stated that it believes AI should “avoid creating or reinforcing unfair bias” and provide privacy safeguards.

The company also says that it will “work to limit potentially harmful or abusive applications” of its AI technologies. In a previous version of the guidelines, however, the company wrote much more explicitly that it would “reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles.”

The company softened its language because it can’t control every use of its technology, such as its open-source AI framework TensorFlow, a company spokesperson said. But it can try to wield its influence in the open source community, and it can more directly control other tools, like software development kits, through more restrictive licensing agreements.

According to Pichai, here is everything Google says it won’t use its AI technologies for:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Pichai added: “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”