Google: We decide what is evil

Google CEO Sundar Pichai has formulated ethical standards for how his company should develop artificial intelligence. But they are not particularly clear.

Sometimes bosses do respond to blackmail by their subordinates: about two months ago, a letter circulating internally among Google employees, which some 4,500 staff are now said to have signed, became public. In it, the employees criticized the company leadership's long-secret decision to take part in the US military's so-called Project Maven, which centers on using artificial intelligence to scour reconnaissance images. The signatories of the protest letter made two main demands: Google should, first, pull out of Project Maven and, second, formulate and publish a "clear strategy" stating "that neither Google nor its contractual partners will ever build warfare technology."

Google met the first demand a few days ago, and it has now met the second only halfway: on Thursday, Pichai published a blog post in which he commits his own company to ethical principles for the development of artificial intelligence (AI), but does not rule out future business with the military.

The text, headed "AI at Google: our principles", is an astonishing document for a company boss, because Pichai does not even try to shield his own business policy from public scrutiny. On the one hand, Pichai is following the customs of Silicon Valley, where companies at least pretend to communicate openly with their own employees at all times. On the other hand, Pichai is now implicitly confirming the power that Google wields as one of the world's leading companies in AI research: the decisions made there could change the lives of countless people around the world. And that merely begins with the prospect that specifically programmed AIs could soon make entire professions superfluous.

    "Benefits" can hardly be considered hypotically

Pichai, however, does not address this in his text. Instead, he prefaces his "principles" with the promises of artificial intelligence, which, through machine analysis methods, makes possible advances in the diagnosis of cancer and the early detection of other diseases, for example. But when it comes to ethical questions, Pichai cannot or will not make any concrete statements.

The very first principle, to benefit society ("be socially beneficial"), reveals a fundamental problem in assessing technologies that for the most part do not yet exist but are still at the stage of a good idea: "benefit" can hardly be weighed hypothetically. Only once something exists can its benefit be assessed. And because with artificial intelligence, as with many inventions of the tech world, the first thing that is foreseeable and desired is an increase in the efficiency of processes, the ethics immediately become terribly difficult: the ideology of Silicon Valley (and Pichai's text clearly reflects it) describes every gain in efficiency as good in principle. Doctors, whose role in the early detection of cancer, for example, has so far often been vital for patients, will take a more nuanced ethical view of the use of AI in their field, and not only because they must fear losing sovereignty over diagnoses.

The other six of the seven ethical principles Pichai lists really describe things that should go without saying. That algorithms, for example, should be programmed free of bias against the ethnicity, gender and sexual orientation of potential users ought to be beyond any discussion. But as long as algorithms are created by humans, their neutrality is in fact not necessarily guaranteed. Google's AI systems are also supposed to be safe, to meet the highest scientific standards and to respect users' privacy. Wanting these points to be understood as ethical principles seems almost cute, because they really fall under what one would call product promises. But because tech companies have disappointed such promises rather often of late, these assurances presumably ended up in this list of "principles".

Pichai has saved the genuinely ethical discussion for the conclusion of his text, in which he describes the basic dilemma of a technology like AI: it can be put to almost infinitely diverse uses, including military ones. Pichai first emphasizes that Google wants to limit the number of "potentially harmful or abusive applications" of artificial intelligence. The factors for this include, among other things, the immediate purpose of an application and how wide its expected distribution is.
