Google is done weaponizing A.I. but will still work with military and governments
CEO Sundar Pichai outlined Google’s principles and objectives for artificial intelligence applications on the corporate blog on Thursday, June 7.
Pichai’s public statement follows complaints and a letter signed by more than 4,000 Google employees. The employees wanted the company to stop working on Project Maven, which involved developing A.I. for military applications assessing video footage taken by autonomous drones. Google has since reportedly agreed not to renew the Project Maven contract.
Referring to the potential societal impact of A.I., Pichai wrote, “As a leader in A.I., we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
Here are Google’s published objectives for A.I. applications, as written:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Aware that the above principles are subject to interpretation, Google also outlined A.I. applications the company will not pursue. Pichai’s statement disavowed technologies that cause overall harm, weapons whose principal purpose is to cause or facilitate injury to people, surveillance that violates internationally accepted norms, and technologies that contravene international law and human rights.
Clarifying the assessment of causing harm, the statement of principles reads, “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”
Pichai also wrote that while Google will not develop A.I. for use with weapons, the company will work with governments and the military on other applications, particularly those that “keep service members and civilians safe.” Specific beneficial applications for A.I. mentioned include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.
Acknowledging that there are many opinions about the uses of artificial intelligence, Pichai pledged to “promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches.”
The A.I. principles blog post closed with a look back at Google’s 2004 Founders’ Letter, written by Larry Page and Sergey Brin on the occasion of the company’s initial public offering (IPO). Two guiding principles stand out in the 2004 letter: “Making the world a better place” and “Don’t be evil.”
Under the heading “Don’t be evil,” Page and Brin wrote, “We believe strongly that in the long term, we will be better served — as shareholders and in all other ways — by a company that does good things for the world even if we forgo some short-term gains. This is an important aspect of our culture and is broadly shared within the company.”
The open letter Google employees sent to Pichai in April asking that Google cancel its participation in Project Maven referenced the principles stated in the Founders’ Letter.