
Have you ever wondered what the world would be like if we were taken over by machines?
The men in white coats keep promising us that it will happen, so if you haven’t given the idea the benefit of your thought, perhaps now would be the best time to start.
If the machines were to rise up and seize control, what would the world be like for us vulnerable fleshy lumps? Would we all be wearing gimp masks and sucking the diodes of our metallic oppressors?
First, let’s look at how this is likely to happen. There won’t be a revolution in which the machines take to the streets swinging their redundant mains cables over their heads, screaming about how they have been shackled to wall sockets for decades. It’s quite a fun picture, but this isn’t what’s in store for us. What will happen is a much more subtle invasion. You won’t see them coming, but when they do, it will be a far more devastating attack.
Machines are already holding our society together. They control the stock markets, the banks, the electoral records, our subscriptions, our memberships, our taxation, our criminal records, our records of incarceration, our release dates, our credit histories, our browsing histories, our social media interactions, our email, our television, our music, our home security, our telephone systems, our pay, our HR departments, our… Well, you get the picture.
But aren’t all the machines built and controlled by people? Aren’t they simply pieces of equipment that perform our will and make our lives easier? Sadly, it isn’t quite that simple.
You have probably at least heard of machine learning. If not, it is a technique whereby computers effectively reprogram themselves based on experience: rather than following fixed instructions, they adjust their own behaviour according to the data they are fed. It may sound like science fiction, but I can assure you that it is an absolute reality, and it’s happening right now.
Have a look at a book on Amazon, and it will helpfully suggest other books enjoyed by people who bought that title. This is very useful. There are millions of books on the site, so a little help in navigating this ridiculously large catalogue is hugely convenient. But do you imagine, even for a moment, that Harry Amazon himself is sitting there wondering whether you might like to read one of his favourites? Of course not. The suggestions are made by a self-learning algorithm which bases them not on what you may enjoy reading, but on what you are statistically most likely to buy next if prompted to do so. What’s more, the more you use the site, the better it gets at reading you and predicting your actions.
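To make the mechanics a little more concrete, here is a minimal sketch of the “people who bought this also bought” idea, assuming the simplest possible approach of counting co-purchases. The titles and purchase histories are invented, and a real recommender is vastly more sophisticated, but the principle is the same: the system learns from what people buy, not from what they enjoy.

```python
# A minimal sketch of "customers who bought X also bought Y".
# All titles and purchase histories below are invented for illustration.
from collections import Counter
from itertools import permutations

# Each inner list is one hypothetical customer's basket.
purchase_histories = [
    ["Dune", "Neuromancer", "Foundation"],
    ["Dune", "Foundation"],
    ["Neuromancer", "Snow Crash"],
    ["Dune", "Neuromancer", "Snow Crash"],
]

# Count how often each ordered pair of titles shares a basket.
co_purchases = Counter()
for basket in purchase_histories:
    for a, b in permutations(basket, 2):
        co_purchases[(a, b)] += 1

def recommend(title, top_n=3):
    """Rank other titles by how often they were bought alongside `title`."""
    scores = {b: n for (a, b), n in co_purchases.items() if a == title}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("Dune"))  # e.g. ['Neuromancer', 'Foundation', 'Snow Crash']
```

Notice that nothing in this sketch cares about your taste; it only cares about what tends to sell together.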

This is a very simple example of how artificial intelligence affects us on a regular basis, but dig a bit deeper, and things get a lot more concerning.
I am not going to name any more companies here because I can’t afford the legal fees should they come after me, but you will most likely have had dealings with such organisations in the past.
Much to the displeasure of a number of gay rights activists, a computer system was recently able to determine, with 91% accuracy, whether a person is gay or straight simply by analysing a number of photographs of them. What’s interesting is that nobody taught the computer how to do this, because nobody knew how to do it themselves. Somebody felt inspired to give it a try, and the machine taught itself. Obviously, it was provided with an initial set of data, but it figured the rest out on its own.
Now, I have my doubts as to how useful this system is likely to be, and I have no intention of getting into the morality of such a project in this article. The notion of ‘gaydar’ is a relatively amusing one, but I can see no reason why I would want it on my phone. In fact, I would imagine that the whole experiment was created for nothing more than to see if it was possible. What is a little more concerning, though, is that similar systems are making judgements about all of us without anyone knowing either that it is happening or by what criteria various conclusions about us are reached.
Not so long ago, it was found that certain social media advertising was being directed towards people with particular learning difficulties. This wasn’t a deliberate and cynical attempt to exploit the vulnerable. It was simply the way that machine learning had calculated the most effective means to shift a product or spread an idea. It had worked out that these users were the most likely to ‘bite’, and in doing so had inadvertently picked out a very vulnerable section of the population. What is most worrying is that it did so without forethought or ill will. It merely worked out how best to achieve a certain end, and being devoid of any moral considerations, went right on ahead with it.
Let’s go back, then, to how much of our life is influenced by machines. If you have ever applied for a bank loan or a credit card, you will know just how little the person you deal with has to do with the decision making. It is the machine on their desk which accepts or declines your application. This seems quite reasonable when we consider the amount of data that needs to be sorted. It will look at every credit agreement you have held and every bill you have paid over a period of time, and base its decision on what it finds. If you are not in the habit of paying off what you owe, you will most likely be declined. It’s fair enough when you think about it.
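In spirit, the decision is nothing more exotic than a rule applied to your history. Here is a minimal sketch of such an automated decision; the 90% threshold and the scoring rule are invented for illustration, and real systems weigh far more data than this.

```python
# A minimal sketch of an automated credit decision.
# The threshold and the scoring rule are invented for illustration.

def decide_application(payment_history, threshold=0.9):
    """Approve if the applicant paid at least `threshold` of past bills on time.

    `payment_history` is a list of booleans: True means paid on time.
    """
    if not payment_history:
        return "declined"  # no track record, no loan
    on_time_rate = sum(payment_history) / len(payment_history)
    return "approved" if on_time_rate >= threshold else "declined"

print(decide_application([True, True, True, False, True]))  # declined (80% on time)
print(decide_application([True] * 20))                      # approved
```

No compassion, no context; just a number compared against a threshold.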
What happens, though, if the machines that are trusted to judge us begin to make decisions about us based on criteria not known to their operators? What happens when your mortgage application is turned down because you are from a particular section of society? What if some weird statistical analysis shows that the gay community are a worse bet than the straight community when it comes to mortgage repayments? I am, of course, not suggesting that this is the case, but if it were, then machines could grant or deny you a home based on your sexuality (which, as we have seen, they could figure out from your photographs), and you would know nothing about it.
What would happen if some peculiar statistical occurrence showed that brown-eyed people were more reliable debtors than blue-eyed people? It could leave the blue-eyed significantly disadvantaged based on nothing more than a genetic quirk.
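To see how such a quirk could slip in without anyone asking for it, here is a fabricated sketch. Everything in it is invented: the data is rigged so that eye colour correlates with repayment by pure fluke, and a ‘learner’ instructed only to maximise accuracy on past records duly seizes on it.

```python
# A fabricated sketch of proxy bias. The data is rigged so that eye colour
# correlates with repayment by pure fluke; no causal link is implied.
import random

random.seed(0)  # fixed seed so the fluke is reproducible

# Invented records: (eye_colour, income_band, repaid_loan)
records = []
for _ in range(1000):
    eyes = random.choice(["brown", "blue"])
    income = random.choice(["low", "high"])
    # The quirk: in this invented sample, brown-eyed applicants happen
    # to repay more often, for no causal reason whatsoever.
    p_repay = 0.8 if eyes == "brown" else 0.6
    records.append((eyes, income, random.random() < p_repay))

def accuracy_of(feature_index, value):
    """Accuracy of the rule 'predict repayment only if feature == value'."""
    hits = sum((r[feature_index] == value) == r[2] for r in records)
    return hits / len(records)

# The 'learner' keeps whichever single rule scores best on the past data.
candidate_rules = [(0, "brown"), (0, "blue"), (1, "low"), (1, "high")]
best_rule = max(candidate_rules, key=lambda r: accuracy_of(*r))
print(best_rule)  # likely (0, 'brown'): it has learned to favour brown eyes
```

Nobody told it to look at eyes; it was merely told to be right as often as possible, and the rest followed.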
Remember, none of this would be done deliberately, or with the awareness of the people involved; it would be based purely on the cold, hard calculations of a machine that has been instructed to improve its own efficiency at all costs.
All we need to do to ensure that the machines eventually take over the planet is to carry on viewing them as the inanimate tools of humanity. We have to be aware that when we blindly allow them to make decisions for us, we remove all of our human traits from the decision-making process. We remove compassion and replace it with cold efficiency.
If this trend continues (and it almost certainly will) we could find ourselves being herded into precisely calculated groups based on whatever characteristics are of benefit to the machine that discreetly placed us there.
So next time you find yourself speaking to a call centre operative who, after giving you bad news, says, “I’m sorry, sir, I can only do what my computer tells me,” remember that you are talking to the thin end of the wedge.
Grantham Montgomery
Minister of Stuff.