Social problems exist. Sexism, social stigmatization, crime and other problems have always plagued humanity.
Increasingly, however, organizations are calling on Google to solve or help solve these social problems, usually at the expense of the quality of the company's own products.
Here are a few examples.
Carnegie Mellon University and the International Computer Science Institute created software called AdFisher, which analyzes Google ad targeting, including the targeting of job postings. The researchers found that male users were shown ads for high-paying executive jobs more often than female users were.
The decisions about which ads to show to which users are based on two sets of data. One is the user information that Google collects. The other is the advertiser's data, which combines information the company has harvested with hand-entered demographic criteria.
Google's job is to faithfully take the user signals and the advertiser signals and enable communication between the two parties.
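To make that matching process concrete, here is a deliberately simplified sketch. This is not Google's actual system; every field name and data structure below is hypothetical, invented only to show how user signals might be matched against advertiser targeting criteria.

```python
# Toy model (not Google's real system): an ad is shown to a user when every
# targeting criterion set by the advertiser matches the user's signals.
# All field names ("inferred_gender", "interest") are invented for illustration.

def matching_ads(user_signals, ads):
    """Return the ads whose targeting criteria the user's signals satisfy."""
    return [
        ad for ad in ads
        if all(user_signals.get(key) == value
               for key, value in ad["targeting"].items())
    ]

user = {"inferred_gender": "male", "interest": "careers"}
ads = [
    {"title": "Executive coaching for $200k+ roles",
     "targeting": {"inferred_gender": "male", "interest": "careers"}},
    {"title": "General job board",
     "targeting": {"interest": "careers"}},
]

print([ad["title"] for ad in matching_ads(user, ads)])
```

Note that in this sketch the platform itself applies no bias: the skewed outcome comes entirely from the targeting criteria the advertiser supplied.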
It's possible that sexism at Google determined or contributed to a sexist outcome in the targeting of the executive job postings. To the extent that this is true, Google must correct the problem as soon as possible.
However, it's more likely that Google's algorithm is faithfully conveying the sexism that in fact exists in society.
If that's true, what should Google do? Should it continue to offer "neutral" algorithms that deliver whatever biases and prejudices are contained in the inputs? Or should it counter society's biases and program corrective social engineering into its algorithms?
In other words, should Google take the inputs that skew toward sexism and deliberately weight them in the other direction to produce outputs that create the illusion that sexism doesn't exist?
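What would such "corrective social engineering" look like in practice? One common debiasing technique is reweighting: scale each group's delivery score so that expected outcomes equalize across groups. The sketch below is hypothetical; the numbers and field names are invented for illustration, not drawn from the AdFisher study.

```python
# Hypothetical corrective reweighting: if historical data skews ad delivery
# toward one group, weight each group by target_rate / observed_rate so that
# expected delivery rates come out equal. The rates below are made up.

def corrective_weights(delivery_rates):
    """Return per-group multipliers that equalize delivery at the average rate."""
    target = sum(delivery_rates.values()) / len(delivery_rates)
    return {group: target / rate for group, rate in delivery_rates.items()}

# Suppose an executive-job ad was delivered to 1.8% of male users
# but only 0.6% of female users.
observed = {"male": 0.018, "female": 0.006}
weights = corrective_weights(observed)

adjusted = {g: observed[g] * weights[g] for g in observed}
print(adjusted)  # both groups land at the average rate, 0.012
```

The arithmetic is trivial; the hard question the column raises is not how to reweight but whether a platform should.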
The Internet has radically increased the scope and consequences of social stigma, or at least people believe it has (reading Nathaniel Hawthorne's The Scarlet Letter might disabuse you of that notion).
In any event, Europe recently enacted a set of rules under the concept of "the right to be forgotten." These rules require search engines, such as Google Search, to offer the European public a process for requesting the removal of specific search results that appear after specific search terms are entered. Specifically, stigmatizing information (that's not also in the public interest) must not appear when a search is conducted for the name of a person who has requested the removal.
For example, let's say a European man is photographed by the local newspaper while playing an Australian didgeridoo in the park. He's young, bearded and has long hair. Years later, he sells the instrument, shaves his beard, cuts his hair and becomes a respected financial adviser. But when prospective clients search Google for his name, they find the old newspaper photo. The man can request that Google remove the link to the newspaper when people search his name, and Google is required by law to comply. (If that sounds far-fetched, note that I'm describing an actual case under the right-to-be-forgotten law.)