Google parent company Alphabet has dropped its pledge to not use artificial intelligence (AI) in weapons systems or surveillance tools, citing a need to support the national security of "democracies".
Google CEO Sundar Pichai, in a blogpost published in June 2018, previously outlined how the company would "not pursue" technologies "whose principal purpose or implementation is to cause or directly facilitate injury to people".
He added that Google would also not pursue "technologies that gather or use information for surveillance violating internationally accepted norms".
Google – whose company motto 'don't be evil' was replaced in 2015 with 'do the right thing' – defended the decision to remove these goals from its AI principles webpage in a blogpost co-authored by Demis Hassabis, CEO of Google DeepMind, and James Manyika, the company's senior vice-president for technology and society.
"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights," they wrote on 4 February.
"And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security."
They added that Google's AI principles will now focus on three core tenets: "bold innovation", which aims to "assist, empower, and inspire people in almost every field of human endeavour" and help address "humanity's biggest challenges"; "responsible development and deployment", which means pursuing AI responsibly throughout a system's entire lifecycle; and "collaborative progress, together", which is focused on empowering "others to harness AI positively".
Commenting on Google's policy change, Elke Schwarz – a professor of political theory at Queen Mary University London and author of Death Machines: The ethics of violent technologies – said that while it is "not at all surprising" given the company has already been supplying the US military (and reportedly the IDF) with cloud services, she is still concerned about the shifting mood among big tech companies, many of which are now arguing it is "unethical not to get stuck in" developing AI applications for this context.
"Google now feels comfortable enough to make money from violence (to put it somewhat crudely). It indicates a worrying acceptance of building out a war economy," she told Computer Weekly, adding that Google's policy change highlights a clear shift in the industry.
"It suggests an encroaching militarisation of everything. It also signals that there is a significant market position in making AI for military purposes, and that there is a significant share of financial gains up for grabs, which companies are keen to secure. How useful this drive toward AI for military purposes is remains very much speculative."
Experts on military AI have previously raised concerns about the ethical implications of algorithmically enabled killing, including the potential for dehumanisation when people on the receiving end of lethal force are reduced to data points and numbers on a screen; the risk of discrimination during target selection due to biases in the programming or criteria used; as well as the emotional and psychological detachment of operators from the human consequences of their actions.
There are also concerns over whether there can ever be meaningful human control over autonomous weapons systems (AWS), due to the combination of automation bias and how such weapons increase the velocity of warfare beyond human cognition.
Throughout 2024, a range of other AI developers – including OpenAI, Anthropic and Meta – walked back their own AI usage policies to allow US intelligence and defence agencies to use their AI systems. They still claim they do not allow their AI to harm humans.
Computer Weekly contacted Google about the change – including how it intends to approach AI development responsibly in the context of national security, and if it intends to place any limits on the applications its AI systems can be used in – but received no response.
'Don't be evil'
The move by Google has attracted strong criticism, including from human rights organisations concerned about the use of AI for autonomous weapons or mass surveillance. Amnesty International, for example, has called the decision "shameful" and said it would set a "dangerous" precedent.
"AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy," said Matt Mahmoudi, researcher and adviser on AI and human rights at Amnesty.
"Google's decision to reverse its ban on AI weapons enables the company to sell products that power technologies including mass surveillance, drones developed for semi-automated signature strikes, and target generation software that is designed to speed up the decision to kill.
"Google must urgently reverse recent changes to its AI principles and refrain from developing or selling systems that could enable serious human rights violations. It is also essential that state actors establish binding regulations governing the deployment of these technologies, grounded in human rights principles. The facade of self-regulation perpetuated by tech companies must not distract us from the urgent need to create robust legislation that protects human rights."
Human Rights Watch similarly highlighted the problem of self-regulation through voluntary principles.
"That a global industry leader like Google can suddenly abandon self-proclaimed forbidden practices underscores why voluntary guidelines are not a substitute for regulation and enforceable law. Existing international human rights law and standards do apply in the use of AI, and regulation can be crucial in translating norms into practice," it said, noting that while it is unclear to what extent Google adhered to its previous principles, Google workers have at least been able to cite them when pushing back on irresponsible AI practices.
For example, in September 2022, Google workers and Palestinian activists called on the tech giant to end its involvement in the secretive Project Nimbus cloud computing contract, which involves the provision of AI and machine learning (ML) tools to the Israeli government.
They specifically accused the tech giant of "complicity in Israeli apartheid", and said they feared how the technology would be used against Palestinians, citing Google's own AI principles. A Google spokesperson told Computer Weekly at the time: "The project includes making Google Cloud Platform available to government agencies for everyday workloads such as finance, healthcare, transportation and education, but it is not directed at highly sensitive or classified workloads."
Human Rights Watch added: "Google's pivot from refusing to build AI for weapons to stating an intent to create AI that supports national security ventures is stark. Militaries are increasingly using AI in war, where their reliance on incomplete or faulty data and flawed calculations increases the risk of civilian harm. Such digital tools complicate accountability for battlefield decisions that may have life-or-death consequences."
While the vast majority of countries are in favour of multilateral controls on AI-powered weapons systems, European foreign ministers and civil society representatives noted during an April 2024 conference in Vienna that a small number of powerful players – including the UK, US and Israel – are part of the select few countries to oppose binding measures.
Timothy Musa Kabba, the minister of foreign affairs and international cooperation in Sierra Leone, said at the time that for multilateralism to work in the modern world, there is a pressing need to reform the United Nations Security Council, which is dominated by the interests of its five permanent members (China, France, Russia, the UK and the US).
"I think with the emergence of new realities, from climate change to autonomous weapons systems, we need to look at multilateralism once again," he said, noting that any new or reformed institutions must be inclusive, democratic and adaptable.