AI ethics
Google tells EU that users bear responsibility for discrimination

The tech giant told Commission officials in a recent closed meeting that developers of Artificial Intelligence tools should not be overburdened with regulation.

Protests against Google in France – French feminists have targeted Google for sexist search results (All rights reserved IMAGO / Hans Lucas)

While the European Union is preparing new laws to regulate Artificial Intelligence (AI), Google has told the EU Commission that users should "bear the obligation not to discriminate" when using AI applications. In a meeting in March, Commission officials asked the tech giant which distinct responsibilities providers and users of AI technology should have. The lobbyists replied that it was users who should be "obliged to make sure they understand a system they use and that their use of a system is compliant".

Human rights campaigners warn that AI applications could perpetuate societal discrimination with regard to gender or ethnicity. Google has recently said publicly that it has refrained from using its technology for consumer credit ratings and will not pursue emotion recognition applications for the moment. Other tech firms have also expressed reservations about algorithmic bias and ethical concerns around the use of AI.

In the closed-door meeting with the Commission, Google sounded a different note. Its lobbyists said that AI "risks should not be over-emphasized" and that regulators should also look at the "opportunity costs of not using AI". They warned that an upcoming Commission proposal for a law on AI liability risked creating "disincentives to innovation", according to minutes of the meeting released to netzpolitik.org under the EU's freedom of information law. The remarks echo similar ones made by Google in reply to a public consultation by the Commission.

The proposal on AI liability is set to be introduced by the Commission by the end of this year. The European Parliament has suggested a liability regime under which the operator of a high-risk AI system would be held liable for harm or damage it causes. It also calls for maintaining liability for producers of faulty AI systems.

When contacted by netzpolitik.org, Google said that guarding against discriminatory systems was "something Google takes incredibly seriously". A spokesperson said that companies, governments and other organisations had a clear responsibility not to build technology that is or could be discriminatory.

Google vows "strong safeguards" for emotion detection

Shortly after the meeting with Google, the Commission unveiled the EU's proposal to ban certain AI applications considered risky, such as Chinese-style "social scoring" by governments. Under the law, remote biometric identification systems – a category which would likely include emotion recognition tools – would be considered "high-risk" and be subject to stringent requirements.

Asked by officials from the cabinet of Commissioner for Justice Didier Reynders about AI technology for the recognition of emotions, Google said it considered such use in situations such as job interviews to be "highly sensitive" and would see the need for "very stringent scrutiny and strong safeguards if such a system should be used". The company declined to comment on whether it was currently developing other applications for emotion detection. Google's competitor Amazon has scrapped an internal AI tool used for recruiting after it showed bias against women.


1 comment

  1. What bothers me is the closed doors, less so Google's position. That position may well be justified; one would need more information to judge it.

    What also bothers me is that Amazon shut down its recruiting AI because it disadvantaged women. The real problem is that Amazon disadvantages women, either in the hiring process or in career advancement.

    The AI works like a mirror that reflects reality. If the AI is trained by human recruiters, learns from them which applicants are weeded out and which advance, and then filters out women, then the AI is not the problem (a small sketch below illustrates this).

    If the AI analyses career data and distils from it that men are, on average, promoted faster, lead larger teams and climb higher: that may well be the reality at Amazon.

    If Amazon then very quickly switches off the AI and buries the topic, that is a problem. But not an AI problem.

    And that is exactly why it is important to have a discussion with decidedly controversial positions. Above all an open, transparent discussion and, more importantly still, openness to what we will learn from that discussion.
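
To make the commenter's "mirror" argument concrete, here is a minimal, hypothetical sketch in Python; all data, numbers and feature names are invented for illustration. A classifier trained on biased historical hiring decisions reproduces exactly the bias it was shown.

# Hypothetical illustration: a model trained on biased hiring
# decisions mirrors the bias present in its training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)             # skill distributed identically across groups
is_woman = rng.integers(0, 2, size=n)  # 0 = man, 1 = woman

# Simulated historical recruiter decisions: skill matters, but women
# are systematically penalized. The bias sits in the data, not the model.
hired = (skill - 0.8 * is_woman + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, is_woman]), hired)
print("weight on 'skill':   ", round(model.coef_[0][0], 2))  # clearly positive
print("weight on 'is_woman':", round(model.coef_[0][1], 2))  # clearly negative

The negative weight on the gender feature is nothing the model invents; it is the recruiters' pattern, faithfully mirrored. Switching off such a system removes the mirror, not the underlying bias.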
