I know what you’re thinking – advertisers choose their keywords themselves, and individuals are responsible for what they put on the internet as well as for what they do in the real world, so any undesirable information that finds its way up the search rankings is their own fault. On the other hand, what if Google were purposefully manipulating searches based on the assumed ethnicity of the person being searched for?
A Harvard study of Google has exposed what its author believes to be racial bias in the company’s search results. Google has come under so much fire these past few months that this was the last thing it needed, but Professor Latanya Sweeney is adamant that this is no matter of opinion and that the evidence speaks for itself. The study shows that names associated with black people, like ‘Leroy’ and ‘Keisha’, are likely to produce results related to illegal activity and criminal record checks. Names believed to be more ‘white-sounding’, like ‘Brad’ and ‘Katie’, are more likely to produce basic contact details, employment information and the like. But is this Google’s fault or ours?

As we know, search results are ranked according to popularity, and if we (society at large) have discriminatory leanings, then this will be reflected in our internet usage and ultimately in our searches. Algorithms are automatic, after all, and will adapt results according to trends in user habits, so our discriminatory tendencies will be mirrored on the web in any case.
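To see how an automatic system can end up amplifying user bias without anyone programming it to, consider a toy simulation of click-driven ad selection. This is a hypothetical sketch, not Google’s actual algorithm: the ad texts, click rates, and the epsilon-greedy serving policy are all illustrative assumptions. The point is only that if users click one wording slightly more often, a ranker that optimises for clicks will show that wording far more often.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

class AdTemplate:
    """One ad wording; tracks its own impressions and clicks."""
    def __init__(self, text, true_click_rate):
        self.text = text
        self.true_click_rate = true_click_rate  # simulated user preference
        self.impressions = 0
        self.clicks = 0

    @property
    def ctr(self):
        # Observed click-through rate so far (0.0 before any impressions).
        return self.clicks / self.impressions if self.impressions else 0.0

def serve(templates, epsilon=0.1):
    """Mostly show the highest-CTR template; explore a random one 10% of the time."""
    if random.random() < epsilon:
        ad = random.choice(templates)
    else:
        ad = max(templates, key=lambda t: t.ctr)
    ad.impressions += 1
    if random.random() < ad.true_click_rate:  # simulate the user's click
        ad.clicks += 1
    return ad

# Illustrative templates: users click the arrest-themed wording a bit more.
arrest = AdTemplate("Leroy Jones, arrested?", true_click_rate=0.10)
neutral = AdTemplate("Leroy Jones, contact info", true_click_rate=0.02)
ads = [arrest, neutral]

for _ in range(5000):
    serve(ads)

# A small difference in user clicking becomes a large difference in exposure.
print(arrest.impressions, neutral.impressions)
```

Note that no ethnicity or name category appears anywhere in the code: the skew in which ad gets shown emerges purely from the feedback loop between user clicks and the serving policy, which is the mechanism the paragraph above describes.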
So, surprise surprise, the Freakonomics theory about names being a self-fulfilling prophecy is true – don’t name your daughter ‘Angel’ or ‘Crystal’ if you want to influence her employability positively. Even so, Prof Sweeney added that “there was a less than 1% chance that the findings could be based on chance” and claims Google is practising discrimination in its ad delivery whether it cares to admit it or not.