As online users, we’ve become accustomed to the giant, invisible hands of Google, Facebook, and Amazon feeding our screens. We’re surrounded by proprietary code like Twitter Trends, Google’s autocomplete, Netflix recommendations, and OKCupid matches. It’s how the internet churns. So when Instagram or Twitter, or the Silicon Valley titan of the moment, chooses to mess with what we consider our personal lives, we’re reminded where the power actually lies. And it rankles.
These algorithms are also anything but objective. “How can they be?” asks Mark Hansen, a statistician and the director of the Brown Institute at Columbia University. “They’re the products of human imagination.” (As an experiment, think about all of the ways you could answer the question: “How many Latinos live in New York?” That’ll give you an idea of how much human judgment goes into turning the real world into math.)
Can an algorithm be racist?
“Algorithms are like a very small child,” says Suresh Venkatasubramanian. “They learn from their environment.”
Second, algorithms have some inherently unfair design tics—many of which are laid out in a Medium post, “How big data is unfair.” The author points out that because algorithms look for patterns, and minorities by definition don’t fit the patterns of the majority, the results will differ for members of the minority group. And if the algorithm’s overall success rate is high, those failures can go unnoticed, precisely because the people it isn’t working for all belong to the same group.
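That dynamic is easy to demonstrate with a toy example. The sketch below (illustrative only, not from the article or the Medium post) builds a hypothetical population where a simple rule fits the majority perfectly but is exactly wrong for a small minority. The overall accuracy looks excellent, while the minority accuracy is zero:

```python
# Hypothetical data: 90 majority members for whom "feature > 0" predicts the
# true label, and 10 minority members for whom the relationship is reversed.
majority = [(x, x > 0) for x in range(-45, 45)]   # 90 people
minority = [(x, x <= 0) for x in range(-5, 5)]    # 10 people
population = majority + minority

# The "algorithm": a single rule fit to the dominant pattern.
def predict(x):
    return x > 0

def accuracy(group):
    return sum(predict(x) == label for x, label in group) / len(group)

print(f"overall:  {accuracy(population):.0%}")  # 90% — the rule looks great
print(f"minority: {accuracy(minority):.0%}")    # 0% — it fails everyone here
```

A quality check that only looks at the aggregate 90% would never flag the problem, which is exactly the failure mode the post describes.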