27 November 2019

No, Google's Search Results Aren't Biased

by Mark Jamison

The Wall Street Journal recently reported a shocking revelation: Senior Google employees exert “editorial control over what (its search engine) shows users.” Another journalist, a member of the managing board of the Stigler Center at the University of Chicago, echoed: “Google is everything we feared.”

The numbers can be impressive: People make about 1.5 billion visits a year to this critical and unique information source. Human intervention is involved in directing about 70 percent of academic users, but the humans claim their motives are unprejudiced. Outside groups apply intense pressure to selectively censor information. And it is feared that up to 50 percent of the information presented as scientific is demonstrably false.

Oh, wait. Those are data for US libraries (public and academic). (See here, here, here, here, and here.) Google seems to have less human bias than libraries, and its information is no less accurate.

Google is used more than libraries (120 billion queries per year vs. 1.5 billion library visits for the US), but the amount of human intervention has to be much lower: Google employs 98,771 people worldwide, while there are 134,800 librarians in the US alone. And librarians choose their hard-copy materials. (See here, here, and here.)
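To see how lopsided that staffing comparison is, here is a rough back-of-the-envelope calculation using only the figures above. (It generously assumes that every Google employee and every US librarian is available to intervene in users’ searches, which overstates both sides.)

    # Back-of-the-envelope comparison of user interactions per staff member,
    # using only the figures cited in this post. Assumes (generously) that
    # every Google employee and every US librarian handles user interactions.

    google_queries_per_year = 120e9   # ~120 billion searches per year
    google_employees = 98_771         # worldwide headcount

    library_visits_per_year = 1.5e9   # ~1.5 billion US library visits per year
    us_librarians = 134_800           # US librarians

    queries_per_employee = google_queries_per_year / google_employees
    visits_per_librarian = library_visits_per_year / us_librarians

    print(f"Queries per Google employee: {queries_per_employee:,.0f}")    # ~1,214,931
    print(f"Visits per US librarian:     {visits_per_librarian:,.0f}")    # ~11,128
    print(f"Ratio: {queries_per_employee / visits_per_librarian:.0f}x")   # ~109x

By this crude measure, each person at Google covers roughly 100 times as many user interactions as each US librarian does, so the opportunity for hands-on human curation is far smaller.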

The Journal reports that an internal Google study found that search results contain misinformation only 0.1 percent to 0.25 percent of the time. Apparently, libraries don’t track how often librarians give bad advice, but it would be tough to beat 99.75+ percent accuracy, and the quality of the information sources should be about the same for Google and libraries.

So these journalists’ claims lack context. And they carry a bit of spin: The Journal article asserts that “Google engineers regularly make behind-the-scenes adjustments” to top-layer search results. It also says that “Google employs thousands of low-paid contractors whose purpose the company says is to assess the quality of the algorithms’ rankings” and that “Google gave feedback to these workers to convey what it considered to be the correct ranking of results.” And it reports that Google executives and engineers regularly tinker with the company’s algorithms, making “more than 3,200 changes to its algorithms in 2018, up from more than 2,400 in 2017 and from about 500 in 2010.”

In other words, Google makes judgments in modifying its algorithms and is paying greater attention as political pressures mount. And Google’s upper-level management gives guidance to lower-level workers. These are good practices.

And it is unclear what it means for engineers to work “behind the scenes”: Were they hiding from management or simply not doing their work where all could see? At least the latter would seem to be good practice.

The Journal also found “wide discrepancies” in Google’s auto-complete feature and search engine results. Based on the data presented, by “wide discrepancies” the Journal appears to mean that Google’s algorithms perform differently than those of DuckDuckGo, Yahoo, and Bing. It is unclear why that constitutes a “discrepancy” or why it belongs under the headline “How Google Interferes With Its Search Algorithms.” Wouldn’t it be expected that a search engine that people use about 90 percent of the time would be different from engines that people use less than 10 percent of the time?

The Journal makes one potentially important point: It claims that Google favors large clients over smaller ones and blacklists some sites in its search results, all while claiming that it doesn’t do so. If this is true, Google isn’t necessarily dishonest: Large clients may be better at search engine optimization than smaller ones, and a blacklist can be an efficient way of dealing with some types of bad actors. But if the company isn’t being honest, it should be held accountable by customers, regulators, or both, as should any company or institution.

US social media companies are caught in an intensely political moment. Business decisions that are likely sensible and well-justified are apparently being spun to grab attention and look nefarious. It may be hard for the companies to emerge from this moment unscathed by poor regulatory responses, but more careful journalistic characterization of research results would help.
