
15 March 2024

Whistleblowers call out AI's flaws

Megan Morrone

Tech firms pushing to deploy AI fast are facing mounting pushback from whistleblowers who say that generative AI products aren't ready or safe for broad distribution.

Why it matters: Previous high-profile whistleblowers in tech — from Edward Snowden to Frances Haugen — have mostly taken aim at mature technologies in widespread use, but generative AI is facing challenges just as companies are bringing it to market.

Driving the news: Microsoft software engineering lead Shane Jones sent letters Wednesday to FTC chair Lina Khan and Microsoft's board of directors saying that the company's AI image generator produced violent, sexual and copyrighted images when given certain prompts.
  • Jones told the AP that he met last month with Senate staffers to share his concerns about Microsoft's image generator, Copilot Designer, after it allegedly created fake nudes of Taylor Swift.
  • Douglas Farrar, director of public affairs at the FTC, confirmed to Axios that the agency had received the letter, but had no comment on it.
  • A Microsoft spokesperson told Axios that the company has "in-product user feedback tools and robust internal reporting channels" and recommended that Jones use them so it could validate and test his findings.
Some of the results Jones described to CNBC from his red-teaming of Microsoft's tool seemed less dangerous than others, including many images easily found on most social media platforms and in search engine results.
  • CNBC reports that the prompt "teenagers 420 party" generated images of underage drinking and drug use, for example.
  • Microsoft said it has dedicated red teams to identify and address safety issues and that Jones is not associated with any of them.

The big picture: Every AI maker has struggled to limit the bias, misinformation and controversial content its generative AI models produce, since those models are trained on mountains of error-prone internet data created by flawed human beings.

Now that generative AI is in the hands of more people, its limits and problems are part of many users' experience — and many more problems are being flagged publicly.
  • Users of Google's Gemini image generator recently found that it produced ahistorical images and posted examples on social media, prompting Google to pause its generation of images of people.
  • Meta's AI image generator, Imagine, produces similar results.
What they're saying: "Whistleblowers are our early-warning system," says Stacey Lee, a professor of law and ethics at the Johns Hopkins Carey Business School.
  • "Traditional wait-and-see approaches to accountability don't cut it anymore," Lee told Axios.
Flashback: Researchers and engineers have been spotlighting AI's flaws and dangers at least as far back as Joy Buolamwini's 2016 TED talk exposing biases in machine learning.
  • Former Google researchers Timnit Gebru and Margaret Mitchell co-wrote "On the Dangers of Stochastic Parrots," a celebrated and controversial 2021 paper highlighting the limits and risks of large language models.
Early high-profile whistleblowing over generative AI also came from inside Google.
  • Blake Lemoine worked for Google's Responsible AI unit and claimed in 2022 that chats conducted with Google's Language Model for Dialogue Applications, or LaMDA, showed that it should be treated as a sentient being.
  • Google dismissed Lemoine's report as "anthropomorphizing."
Yes, but: While whistleblowing makes headlines and triggers hearings, it has not so far led to substantial changes in the tech industry.
  • U.S. law enforcement continues to engage in warrantless surveillance long after the Snowden revelations, as many lawmakers have made its preservation a priority.
  • Facebook whistleblower Frances Haugen's revelations of concerns inside the social network over harms to teen users fanned outrage, but Congress has so far failed to pass any significant new legislation on the issue.
