Data-driven decision-making is more than just business intelligence dashboards or personalized recommendations in your Netflix queue. It’s now impacting your chance at scoring a new job, finding a date, or signing a lease on an apartment. While big data may be helpful when you’re trying to find the best deal on a plane ticket, is it ever unfair in a way that hurts people? As IT leaders, when do we need to stop looking at the data and focus on the person?
For one anonymous Canadian applicant, a rescinded job offer based on his credit score was viewed as an absurd penalty for being “out of work for a while.” In an interview with The Huffington Post, the Ontario resident expressed frustration that his potential employer thought his credit score was an indicator he’d steal money. Get this: TransUnion government relations director Eric Rosenberg has admitted there’s no existing statistical correlation between poor credit and the likelihood to commit fraud.
Big data is an amazingly valuable tool for finding the right book on Amazon or getting the right dosage of antibiotics from your doctor, but are we beginning to see its limitations as a form of intelligence? Data-driven decision-making in the real world has both positive and negative aspects, especially where human romantic and socioeconomic relationships are concerned.
Cracking the code on romantic compatibility?
For the 25 percent of Canadians who have dipped their toes into the online dating pool, big data is playing a bigger role than ever before. Chances are, your matches are ranked by perceived compatibility, based on how you’ve answered a series of questions about your hopes, dreams, and feelings about horror films.
While it may seem strange to apply algorithms to the highly chemical and inexact science of attraction, some proponents insist it really works—to an extent, anyway. When CEO Amy Webb successfully “hacked online dating” with a 72-point set of criteria for identifying compatible matches, she discovered that there’s more to romance and attraction than she ever thought.
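To make the idea concrete, here is a minimal sketch of how a questionnaire-based matching system might rank compatibility. This is an illustration only, not any dating site’s actual algorithm: the question names, weights, and the geometric-mean scoring are all assumptions for the example.

```python
from math import sqrt

# Each answer records: the user's own choice, which choices they would
# accept from a partner, and how much they care (an importance weight).

def satisfaction(answers_a, answers_b):
    """Weighted fraction of A's questions that B's answers satisfy."""
    earned = total = 0
    for question, a in answers_a.items():
        b = answers_b.get(question)
        if b is None:
            continue  # skip questions only one user answered
        total += a["weight"]
        if b["choice"] in a["acceptable"]:
            earned += a["weight"]
    return earned / total if total else 0.0

def match_score(answers_a, answers_b):
    """Geometric mean of both directions, so one-sided matches score low."""
    return sqrt(satisfaction(answers_a, answers_b) *
                satisfaction(answers_b, answers_a))

# Hypothetical users and questions, purely for illustration.
alice = {
    "horror_films": {"choice": "love", "acceptable": {"love", "like"}, "weight": 1},
    "wants_kids":   {"choice": "yes",  "acceptable": {"yes"},          "weight": 10},
}
bob = {
    "horror_films": {"choice": "like", "acceptable": {"hate"},         "weight": 5},
    "wants_kids":   {"choice": "yes",  "acceptable": {"yes", "maybe"}, "weight": 10},
}

print(round(match_score(alice, bob), 2))  # a single compatibility score in [0, 1]
```

The geometric mean is the interesting design choice here: if either person’s side of the match is poor, the overall score collapses toward zero, which is why a list of matches ranked this way can feel surprisingly discerning.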
Algorithms aren’t biased—but they don’t look for hustle
For IT pros, hiring algorithms are an application of big data that may be encroaching too close to your personal world for comfort. The explosion of “people analytics” means that data-driven hiring decisions are no longer reserved for elite organizations like Facebook. Supporters of this approach argue that it can lead to objectively better and more diverse decisions than if a human hand-picked an applicant.
While employers may be tempted to use mathematical models to remove human bias from the equation, thinking of algorithms as impartial is a mistake, according to mathematician Cathy O’Neil. And even if hiring algorithms can strip out some dangerous human biases, not everyone is sold on the idea. Amish Shah, founder and chief executive of Millennium Search, told The New York Times, “I look for passion and hustle, and there’s no data algorithm that could ever get to the bottom of that. It’s an intuition, gut feel, chemistry.”
When big data at work gets a little sketchy
Sure, data is revolutionizing the professional realm in tons of positive ways, like energy-efficient smart offices—but there’s still plenty of controversy. Biometric data collection on employees is one area with hotly contested ethics. From HR’s perspective, knowing the percentage of at-risk employees could be incredibly helpful. But where does the line between data-driven insight and personal privacy fall?
Some experts believe that biometric technology is advancing faster than companies’ ability to handle it responsibly. The world of fitness trackers and genetic samples has been a total whirlwind. The Privacy Commissioner of Canada warned against this trend in 2016. The Commissioner’s report is clear: “An organization needs enough information about an individual to authorize a legitimate transaction, but needs to ensure that it does not collect, use, retain, or disclose personal information that is not necessary for that purpose.”
Data-driven decision making: Proceed with caution
Data has limits. This doesn’t mean you should unplug your Hadoop cluster, or ask your doctor to hand-crunch the numbers for your laser eye surgery. But IT pros need to understand the limitations of big data decisions in the real world, and use this filter for smarter and more ethical choices at work.
Google Flu Trends famously failed to predict influenza outbreaks. This doesn’t mean that algorithms can’t ever work for epidemiology—it just means the approach is still a work in progress. Any artificial intelligence, new or old, has flaws because it’s built by humans, who are prone to mistakes. And algorithms fail to account for anything outside their programmed logic—the messy, entropic factors of the real world.
Data-driven decision-making algorithms perform well in a wide range of contexts. They can be an effective tool for removing bias and error, but IT managers need to recognize that technology is only as smart as its human creator, and misapplications of artificial intelligence can lead to some really dumb decisions.