Algorithms: The number crunchers that control your life

When you switch on your tablet, computer or smartphone and browse for a new home, a car, or even a handbag or a simple pair of jeans, an algorithm decides what you see. What is an algorithm? According to the book Algorithms for Dummies, by Mueller and Massaron:

“Algorithms are mathematical equations that determine what we see – based on our likes, dislikes, queries, views, interests, relationships, and more – online.”

So, to give you a quick, simple example – say you like cats: you have your cat food delivered from Amazon each month, you watch funny cat videos on YouTube, you maybe make a quarterly donation to Cats Protection, and you’ve run a Google search on flea products for kittens. Those are your digital crumbs. Now you’ll find all sorts of cat products being pitched to you on Facebook, you’ll see banners down the side of your email promoting flea treatments, and possibly, based on other recent searches, ‘they’ know you like dresses from a particular store – and suddenly there’s a dress with a cat on it being pitched to you when you log on to run a Google search. That’s an algorithm in layman’s terms.
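To make that concrete, here is a toy sketch in Python of how interest-based ad targeting might work. Everything in it – the events, the tags, the adverts and the scoring rule – is invented for illustration; real ad platforms are vastly more elaborate than this.

```python
from collections import Counter

# "Digital crumbs": tags attached to a user's recent online activity.
# All events and tags here are hypothetical examples.
crumbs = [
    {"event": "purchase", "tags": ["cats", "pet-food"]},        # monthly cat food
    {"event": "video",    "tags": ["cats", "humour"]},          # funny cat videos
    {"event": "donation", "tags": ["cats", "charity"]},         # Cats Protection
    {"event": "search",   "tags": ["cats", "flea-treatment"]},  # flea products
    {"event": "search",   "tags": ["dresses", "fashion"]},      # favourite store
]

# Build an interest profile: how often each tag shows up in the crumbs.
profile = Counter(tag for crumb in crumbs for tag in crumb["tags"])

# Candidate adverts, each labelled with the interests it appeals to.
ads = {
    "flea treatment for kittens": ["cats", "flea-treatment"],
    "dress with a cat print": ["cats", "dresses", "fashion"],
    "lawnmower sale": ["gardening"],
}

# Score each advert by how strongly its tags overlap the profile,
# then pitch the best match to the user.
scores = {name: sum(profile[tag] for tag in tags) for name, tags in ads.items()}
print(max(scores, key=scores.get))  # -> dress with a cat print
```

A handful of counted tags is enough to surface the cat dress; scale that up to thousands of signals and you have the everyday targeting described above.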

Increasingly, algorithms are controlling, or at least playing a major role in, every aspect of our lives: deciding who gets the job, whether you obtain a mortgage, the length of prison sentences handed out, and even the course of political campaigns – the perfect example being the Donald Trump presidential campaign, which was helped by behavioural marketers who used algorithms to detect the highest concentrations of persuadable voters.

Algorithms aren’t bad at all when it comes to innocuous things like recommending a movie on Netflix based on our previous viewing history, advertising an online store based on the type of clothing we like to buy, or having a banner appear to tell us our favourite rock band is playing soon in our town. But when it comes to more sober issues, blindly trusting formulas is quite worrying.

Take the recruitment and selection process, for instance. More and more big organisations are using algorithms to help select the ‘best’ candidate. One story that springs to mind is the case of a highly academic young man with bipolar disorder who was turned down repeatedly for minimum-wage jobs he was overqualified for; his barrister father discovered that each of these organisations used a personality test developed by Kronos, a workforce management company, whose algorithms many believe discriminated against people with disabilities. That’s the thing: because algorithms are written by people, they can, and will, have bias and discrimination written into them. And because algorithms are, by nature, shrouded in secrecy, you can appreciate that these mathematical number crunchers have the potential, as many believe, to create an ‘underclass’ of people who find themselves progressively and unaccountably shut out from normal life – just because the computer says no.

Cathy O’Neil, a data scientist, warns against “blindly trusting” algorithms to deliver an impartial outcome:

“Algorithms are not inherently fair, because the person who builds the model defines success.”

In fact, O’Neil feels that, quite often, these formulas magnify prejudice against the disadvantaged.

There’s also the fact that these computer-generated decisions are, a lot of the time, based on data collected about us without our consent. Zeynep Tufekci, a professor of technology and society at the University of North Carolina, said:

“These computational systems can infer all sorts of things about you from your digital crumbs… they can infer your sexual orientation, your personality traits, your political leanings… They have predictive power, with high levels of accuracy.”

Used in the right way, algorithms could be hugely beneficial, but many would contend that, in many situations, they are being used to ostracize the disadvantaged. There has been talk of using them in an app to help identify people prone to suicide – I can see that, for many friends and family members of people living with severe depression, this could be very useful. But how long would it be until the very thing this app is helping with is later used to discriminate against the person when applying for a job?

I can see the financial benefits of algorithms, especially in big companies. A computer programme can speed through thousands of applications and CVs in a couple of seconds, shortlist candidates based on their qualifications, and rank them in order of expertise. Whilst that is handy purely as a qualification-and-experience filter, it shouldn’t be used for anything else. If, as many believe, candidates will in future have to write their CVs with an automated reader in mind, the writer will have to pepper them with the buzzwords that company is looking for. That will mean, yet again, that the wealthy – who have access to the latest information and the money and resources to prepare resumes that come out on top – will win again, while those without such resources, no matter how qualified or well suited to the post, will inevitably see their CVs cascade down the black hole.
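As a rough illustration, here is what a crude keyword screener might look like in Python. The buzzwords, weights and CVs are all made up for the sketch; no real vendor’s method is being shown.

```python
import re

# Invented buzzwords and weights an employer might tell the screener to reward.
buzzwords = {"python": 3, "agile": 2, "stakeholder": 2, "leadership": 1}

def score_cv(text: str) -> int:
    """Count weighted buzzword hits; the only thing this sees is vocabulary."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(weight * words.count(word) for word, weight in buzzwords.items())

# Two hypothetical applicants.
cvs = {
    "candidate_a": "Agile leadership of stakeholder projects, Python, Python.",
    "candidate_b": "Self-taught; built a compiler and two databases from scratch.",
}

# Rank the candidates: the genuinely inventive candidate_b sinks simply
# because their CV doesn't speak the screener's vocabulary.
for name in sorted(cvs, key=lambda c: score_cv(cvs[c]), reverse=True):
    print(name, score_cv(cvs[name]))
```

Nothing in that score measures ability – only whether the writer knew which words the filter was counting, which is exactly the advantage money and insider knowledge buy.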

From a bottom-line viewpoint, algorithms could (and do) cut administrative costs enormously in HR departments – but we have to ask ourselves: would Bill Gates or Warren Buffett have made it through the computer filters? I fear we will lose individuals and naturally creative people who may not fit certain criteria – thanks to the algorithm. The odds are, without doubt, stacked against individualism.

Algorithms really are akin to asking a school-leaver in Northern Ireland what sports they played and what extra-curricular activities they were involved in (camogie/hurling – Catholic; Girls’/Boys’ Brigade – Protestant). We all know information like that will either work for you or against you, and algorithms, when misused, are just another form of legal discrimination – even though the whole concept of them was to replace subjective judgements.

Another area of real concern is the use of algorithms in the legal system. Scoring criminal defendants, rather than relying on a judge’s discretion, will surely cause problems: taking into account risk factors such as the neighbourhood the person lives in may well be bigoted, not to mention unfair. The data fed into these algorithms comes from interactions with the public – which, let’s be honest, can be downright racist, sexist, elitist or homophobic. Other inputs, like questionnaires, aren’t always the best either: some have asked defendants whether they come from families with a history of breaking the law – a question that would, of course, be inadmissible in court – yet it’s allowed to be rooted in the defendant’s ‘score’ and viewed as impartial. Something is very wrong.
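A toy example makes the proxy problem plain. In this invented Python score – no real tool’s weights or inputs are shown – a postcode and a family question quietly do the discriminating:

```python
# Invented defendant "risk score" showing how a neutral-looking input,
# such as a postcode, can smuggle bias in. Purely illustrative.
HIGH_RISK_POSTCODES = {"BT12", "BT13"}  # in practice, a proxy for deprivation

def risk_score(prior_convictions: int, postcode: str,
               family_has_record: bool) -> int:
    score = 10 * prior_convictions       # the defendant's own record
    if postcode in HIGH_RISK_POSTCODES:
        score += 15                      # punishes where you live, not what you did
    if family_has_record:
        score += 20                      # inadmissible in court, yet counted here
    return score

# Identical personal records, very different scores – purely because of
# neighbourhood and family.
print(risk_score(1, "BT9", False))   # 10
print(risk_score(1, "BT12", True))   # 45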

A real worry in the insurance industry, from the ‘Big Brother is watching you’ standpoint, is that insurers are using ever more sophisticated data-capture technology for rate decisions – for instance, telematics that analyse how you are driving in real time – which shows the lengths companies are prepared to go to in using algorithms and real-time data analysis to compute costs. Many companies now offer cut-price insurance deals – or ‘dashcam’ car insurance – for people who, in effect, don’t mind being spied on.
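Roughly speaking, the pricing works like the sketch below. The formula, thresholds and numbers are assumptions for illustration, not any insurer’s actual model.

```python
from dataclasses import dataclass

@dataclass
class DrivingSample:
    speed_kmh: float
    limit_kmh: float
    harsh_brake: bool   # did the black box log a harsh braking event?

def premium(base: float, samples: list[DrivingSample]) -> float:
    """Adjust a base premium using a stream of in-car telemetry."""
    speeding = sum(s.speed_kmh > s.limit_kmh for s in samples) / len(samples)
    braking = sum(s.harsh_brake for s in samples) / len(samples)
    # Risky behaviour nudges the price up; careful driving keeps it at base.
    return round(base * (1 + 0.5 * speeding + 0.3 * braking), 2)

trip = [
    DrivingSample(48, 50, False),
    DrivingSample(62, 50, True),    # speeding plus a harsh stop
    DrivingSample(55, 60, False),
]
print(premium(400.0, trip))  # 506.67 – dearer than the 400.00 base
```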

We all deserve a home, but this is another area where algorithms are at large – deciding who gets a mortgage and who doesn’t. We are hearing more and more of the elderly (even when they’ve proven they have the funds to pay into old age) being turned down, as well as women aged 25-45 (seemingly in case they get pregnant and take on the added cost of childcare) and individuals who had a small ‘blip’ in their credit score as young adults but have since built a good credit history. Why? Because the algorithm sees only the data – a human underwriter could make a much better judgement in this area.

Data doesn’t always give the full picture. I remember reading about a man who was turned down for mortgage refinancing because he had recently left his job after eight years, even though he had twenty years of stable employment before that – the algorithms flagged him as a risk, even though he continued to work as a self-employed public speaker. That man was Ben Bernanke, the recently retired chair of the U.S. Federal Reserve, and any one of his (many) speaking engagements brought in $250,000 a time – enough to write a cheque for the mortgage there and then. The personal risk was trivial, but the algorithms weren’t to know that. The computer is not always right, and this is a classic example of that.
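You can see how such a rejection falls out of rigid rules. This invented sketch – not any lender’s real criteria – shows how a data-only screen bounces a borrower a human underwriter might happily approve:

```python
# A caricature of rule-based mortgage screening; every rule is an
# illustrative assumption, not an actual lender's policy.
def mortgage_decision(age: int, credit_blips: int,
                      years_in_current_job: float) -> str:
    if age > 70:                    # the elderly, regardless of proven funds
        return "declined"
    if credit_blips > 0:            # one youthful blip trumps years of good history
        return "declined"
    if years_in_current_job < 2:    # a recent career change reads as pure risk
        return "declined"
    return "referred to an underwriter"

# A borrower with decades of stable work who has just gone self-employed –
# the Bernanke case, in effect – is bounced without a second look.
print(mortgage_decision(age=60, credit_blips=0, years_in_current_job=0.5))
```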

Algorithms are supposed to be objective, but let’s remember that they are based on choices made by imperfect humans, who can encode prejudice, misunderstanding and bias into the computer programmes that are increasingly deciding our fate and managing our lives. Even when verdicts are incorrect or damaging, algorithms are beyond debate or petition – and, as many believe, they tend to punish the poor and oppressed in our society. How long will it be until university applications, or jobs in certain supercilious industries, ask whether your father and grandfather went to Oxbridge before you?

We as citizens should have a right to protection, and a right of explanation, when we are affected by an algorithmic decision. There also needs to be greater transparency and accountability. That said, algorithms can’t be made the scapegoat for societal ills either – bias will always exist. But when data has the ability to affect almost every aspect of our lives, it becomes unsettling, and being the victim of hidden bias will be increasingly protested – quite rightly – until something is done to regulate this mayhem. In the meantime, we are all at risk of becoming victims of: The computer says NO!

Courtesy of Ireland’s Big Issue / INSP.ngo

 

