
Similar articles
ID      Source        Date        Prob   Score  Title
201107  VENTUREBEAT   2021-02-18  1.000  -      Google’s new AI ethics lead calls for more ‘diplomatic’ conversation
201515  VENTUREBEAT   2021-02-19  0.834  0.719  Google fires Ethical AI lead Margaret Mitchell
201158  THENEXTWEB    2021-02-18  0.979  0.659  Google’s restructuring of its AI teams sparks further criticism
201624  ZDNET         2021-02-22  -      0.659  Google fires top ethical AI expert Margaret Mitchell
201174  VENTUREBEAT   2021-02-18  0.941  0.643  DeepMind researchers say AI poses a threat to people who identify as queer
201514  THENEXTWEB    2021-02-19  0.329  0.534  Google fired Margaret Mitchell, its Ethical AI Team founder
201301  ZDNET         2021-02-18  0.004  0.468  Google to work more closely with the Mayo Clinic at new Minnesota office
201506  THENEXTWEB    2021-02-19  0.003  0.408  Researchers propose ‘ethically correct AI’ for smart guns that locks out mass shooters
200876  VENTUREBEAT   2021-02-16  0.004  0.407  AI can persuade people to make ethically questionable decisions, study finds
201084  ZDNET         2021-02-17  -      0.386  AI will have a huge impact on your healthcare. But there are still big obstacles to overcome
200966  THENEXTWEB    2021-02-16  -      0.375  Zut alors! France spanks Google over ‘misleading’ hotel ranking algorithm
201356  VENTUREBEAT   2021-02-22  -      0.374  Google’s Voice AI accelerator launches 12 startups
200976  THENEXTWEB    2021-02-15  -      0.353  How AI could spot your weaknesses and influence your choices
201285  ZDNET         2021-02-18  -      0.346  Google, J.B. Hunt ink deal to tackle US transport, supply chain challenges
201241  TECHREPUBLIC  2021-02-17  0.002  0.344  World's fastest supercomputer and AI tapped to predict tsunami flooding
201523  VENTUREBEAT   2021-02-19  -      0.339  AI Weekly: The challenges of creating open source AI training datasets
200989  ZDNET         2021-02-16  -      0.338  Data meets science: Open access, code, datasets, and knowledge graphs for machine learning research and beyond
201270  ZDNET         2021-02-17  -      0.336  Defence lists cyber mitigation as key factor for building ethical AI
200977  TECHREPUBLIC  2021-02-15  -      0.333  Is AI just a fairy tale? Not in these successful use cases
200857  VENTUREBEAT   2021-02-16  0.001  0.331  IBM’s Arin Bhowmick explains why AI trust is hard to achieve in the enterprise
201531  VENTUREBEAT   2021-02-20  -      0.328  Salesforce Research wields AI to study medicine, economics, and speech
200756  THENEXTWEB    2021-02-17  -      0.326  This bizarro AI creates psychedelic visual interpretations of famous poems
201017  TECHREPUBLIC  2021-02-16  -      0.315  What CIOs need to know about adding AI to their processes
201556  VENTUREBEAT   2021-02-22  -      0.312  Disrupting tech hiring: How AI connects companies to talent
200895  VENTUREBEAT   2021-02-17  -      0.298  Peak.AI raises $21 million to drive enterprise AI adoption


ID: 201107
Date: 2021-02-18

Google’s new AI ethics lead calls for more ‘diplomatic’ conversation

Following months of internal conflict and opposition from Congress and thousands of Google employees, Google today announced that it will reorganize its AI ethics operations and place them in the hands of VP Marian Croak, who will lead a new center of expertise for responsible AI research and engineering. A blog post and six-minute video interview with Croak that Google released today announcing the news make no mention of former Ethical AI team co-lead Timnit Gebru, whom Google fired abruptly in late 2020, or Ethical AI lead Margaret “Meg” Mitchell, who a Google spokesperson told VentureBeat was placed under internal investigation last month. The release also makes no mention of steps taken to address the need to rebuild trust called for by members of the Ethical AI team at Google. Multiple members of the Ethical AI team said they found out about the change in leadership from a report published late Wednesday evening by Bloomberg.

“Marian is a highly accomplished trailblazing scientist that I had admired and even confided in. It’s incredibly hurtful to see her legitimizing what Jeff Dean and his subordinates have done to me and my team,” Gebru told VentureBeat. “Meg Mitchell is still suspended from her corporate account. The last email that the Ethical AI team got from research leadership was over two weeks ago. We’re in the lurch and left out to dry. This should tell you a lot about what Google thinks about ethics research.”

In the video, Croak discusses self-driving cars and techniques for diagnosing diseases as potential areas of focus in the future, but makes no mention of large language models. A recent piece of AI research citing a cross-section of experts concluded that companies like Google and OpenAI have only a matter of months to set standards for addressing the negative societal impact of large language models. In December, Gebru was fired after she sent an email to colleagues advising them to no longer participate in diversity data collection efforts.
A paper she was working on at the time criticized large language models, like the kind Google is known for producing, for harming marginalized communities and tricking people into believing that models trained on massive corpora of text data represent genuine progress in language understanding. In the weeks following her firing, members of the Ethical AI team also called for Gebru’s reinstatement in her previous role. More than 2,000 Googlers and thousands of other supporters signed a letter in support of Gebru and in opposition to what the letter calls “unprecedented research censorship.” Members of Congress who have proposed legislation to regulate algorithms also raised a number of questions about the Gebru episode in a letter to Google CEO Sundar Pichai. Earlier this month, news emerged that two software engineers had resigned in protest over Google’s treatment of Black women like Gebru and former recruiter April Curley.

In today’s video and blog post about the change at Google, Croak said people need to understand that the fields of responsible AI and ethics are new, and she called for a more conciliatory tone in conversations about the ways AI can harm people. Google created its AI ethics principles in 2018, shortly after thousands of employees opposed the company’s participation in the U.S. military’s Project Maven.

“So there’s a lot of dissension, there’s a lot of conflict in terms of trying to standardize a normative definition of these principles and whose definition of fairness and safety are we going to use, and so there’s quite a lot of conflict right now in the field, and it can be polarizing at times, and what I’d like to do is just have people have a conversation in a more diplomatic way perhaps so we can truly advance this field,” Croak said. She said the new center will work internally to assess AI systems that are being deployed or designed, then “partner with our colleagues and PAs and mitigate potential harms.”
The Gebru episode at Google led some AI researchers to pledge that they wouldn’t review papers from Google Research until changes were made. Shortly after Google fired Gebru, Reuters reported that the company had asked its researchers to strike a positive tone when addressing issues referred to as “sensitive topics.”

Croak’s appointment to the position marks the latest controversial development at the top of the AI ethics ranks at Google Research and DeepMind, which Google acquired in 2014. Last month, a Wall Street Journal report found that DeepMind cofounder Mustafa Suleyman was removed from management duties, due to his bullying of coworkers, before leaving the company in 2019. Suleyman also served as a head of ethics at DeepMind, where he discussed issues like climate change and health care. Months later, Google hired Suleyman to work in an advisory role on matters of policy and regulation.

How Google conducts itself when it comes to using AI responsibly and defending against forms of algorithmic oppression is immensely important, both because AI adoption is growing in business and society and because Google is a world leader in producing published AI research. A study published last fall found that Big Tech companies treat AI ethics funding in a way that’s analogous to the way Big Tobacco companies funded health research decades ago.

VentureBeat has reached out to Google to inquire about steps to reform internal practices, issues raised by Google employees, and a number of other questions. This story will be updated if we hear back.