DeepMind announces ethics group to focus on problems of AI

Firm brings in advisers from academia and charity sector to ‘help technologists put ethics into practice’ in bid to help society cope with artificial intelligence

DeepMind, Google’s London-based AI research sibling, has opened a new unit focused on the ethical and societal questions raised by artificial intelligence.

The new research unit will aim “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”, according to the company, which hit headlines in 2016 for building the first machine to beat a world champion at the ancient Asian board game Go.

The company is bringing in external advisers from academia and the charitable sector to advise the unit, including Columbia development professor Jeffrey Sachs, Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres.

“These Fellows are important not only for the expertise that they bring but for the diversity of thought they represent,” said the unit’s co-leads, Verity Harding and Sean Legassick, in a blogpost announcing its creation.

The unit, called DeepMind Ethics and Society, is not the AI Ethics Board that DeepMind was promised when it agreed to be acquired by Google in 2014. That board, which was convened by January 2016, was supposed to oversee all of the company’s AI research, but nothing has been heard of it in the three-and-a-half years since the acquisition. It remains a mystery who is on it, what they discuss, or even whether it has officially met.

DeepMind Ethics and Society is also not the same as DeepMind Health’s Independent Review Panel, a third body set up by the company to provide ethical oversight – in this case, of its specific operations in healthcare.

DeepMind provides services to a number of UK hospitals, including its Streams app, which aims to bring modern mobile-first communications into the NHS, as well as several AI-focused research projects looking at using machine learning to help diagnose visual problems and treat cancer. The review panel has met several times, and produced its first annual report in July 2017, a few days after DeepMind’s NHS partner the Royal Free received a slap on the wrist from the Information Commissioner’s Office for improper data transfers.

Nor is the new research unit the Partnership on Artificial Intelligence to Benefit People and Society, an external group founded in part by DeepMind and co-chaired by the company’s co-founder Mustafa Suleyman. That partnership, which was also co-founded by Facebook, Amazon, IBM and Microsoft, exists to “conduct research, recommend best practices, and publish research under an open licence in areas such as ethics, fairness and inclusivity”.

Nonetheless, the unit’s creation marks a change in attitude at DeepMind over the past year, which has seen the company reassess its previously closed and secretive outlook. It is still battling a wave of bad publicity that began when it partnered with the Royal Free in secret, bringing the Streams app into active use in the London hospital without being open with the public about what data was being shared and how.

The research unit also reflects an urgency among many AI practitioners to get ahead of growing public concern about how the new technology will shape the world around us. Some technology leaders, such as Elon Musk and Mark Zuckerberg, have led discussion about the potentially disastrous effects of super-intelligent AI. Other experts are more concerned about the near-term risk of outsourcing ever more complex and serious decisions to systems whose operation we still do not fully understand, yet which seem susceptible to embodying the worst of humanity’s pre-existing biases and prejudices.

One pair of researchers approvingly cited by DeepMind, Ryan Calo and Kate Crawford, wrote in the journal Nature: “Autonomous systems are already deployed in our most crucial social institutions, from hospitals to courtrooms. Yet there are no agreed methods to assess the sustained effects of such applications on human populations.”

Designers and researchers, Calo and Crawford wrote, “need to assess the impact of technologies on their social, cultural and political settings.”

Source: https://www.theguardian.com/technology/2017/oct/04/google-deepmind-ai-artificial-intelligence-ethics-group-problems
