The Gamification of Living

Well, it has been a while, hasn’t it? I’ve been neglecting this blog because the internet isn’t real life and I’ve had rather a lot of life to be living in the past while. All will become clear anon. Or, at least some of it might. In a world busy throwing privacy away, I’d like to preserve a little for nostalgia’s sake.

John Lennon famously quipped that life is what happens when you’re busy making other plans. Or, as the old Jewish saying has it, man plans and God laughs. These days we might say life is what happens when you’re not online, but of course so many of us are online, some almost perpetually.

Life does now happen online, or at least a simulacrum of it. An entire generation has met partners online at this point. People do business with one another entirely via electronic comms and screens. I’ve even seen funeral notices where it was stated that the deceased would be greatly missed by his Twitter followers. Seriously. Even death occurs online now.

Anyhow, that brings me to the brief point I wanted to make somewhat circuitously. The metaphors of the online world have long since infiltrated what we might, in a binary mode, call the real world, or at least tangible and palpable world. Even as a kid, I recall hearing the GIGO phrase – garbage in, garbage out – which originated with the computer programmers of the Seventies, complete with their punched cards.

Fifty years on, and the metastasis of such metaphors is ubiquitous. So much so that the UK even now has an official government department for ‘Levelling Up’, which is not, disappointingly, devoted to helping gamers beat their high scores, but in fact relates to the latest attempt to resolve, or at least in minor ways mitigate, the outrageous inequity in that nation.

But it does all rather leave a strange taste in my mouth, like that of slowly smouldering silicon chips. Marriage or kids are not ‘achievement unlocked’. Getting a new job is not ‘levelling up’. I entirely understand, of course, that the gamification of so many aspects of modern existence would lend its baleful influence to the very language we speak. But the concepts simply do not map across.

Why? Because we are human. And when we game, those of us who do, we are not human. In fact I’d go as far as to say we’re not fully human when we attempt to filter our human functions through this electronic portal at all. We’re cyborgised, both enabled and constricted by the facilities and limitations of the internet and its penumbra of pervasive techno-enhancements.

Do you look at your phone or out the window to check the weather? Do you wish someone a happy birthday in person, or simply click a like on social media (which kindly reminds us of the date)? How many people have driven down the wrong road, or even into a canal, because they listened to Google Maps rather than watching their surroundings? (And how many more will, once Elon’s self-driving cars become the norm?)

At the risk of sounding like the Luddite I am, human life encompasses more than the electronic bottlenecks our techno-cages impose upon us. If divorce (or marriage, for that matter) is ‘Game Over’, then what are we actually saying about our view of relationships?

Am I being too literal or too serious? Perhaps. But unlike in a game, where one can respawn and try again with a new strategy, life is both linear (no spawn points) and picaresque (no reassuring story arc).

Sometimes we level down. Or sideways. Or into an entirely new mode of being. We shouldn’t allow the metaphors of gamification to erode and dissolve and mask the glorious unpredictable muddiness of our human existence.

We are animals, sometimes even thinking ones. We should remember that more. We are not automata grinding out levels in a game called life. Or, at least those of us who don’t work as loot farmers in China for American World of Warcraft players are not.

Has the singularity already happened?

The technological singularity is the moment when technological development becomes unstoppable. It is expected to take the form, should it occur, of a self-aware, or ‘sentient’ machine intelligence.

Most depictions of a post-singularity (machine sentience) world fall into two categories. The first is what I called the Skynet (or Terminator) Complex in Science Fiction and Catholicism.

In this form, the sentient machine (AI) takes a quick survey of what we’ve done to the planet (the Anthropocene climate crisis) and to other species (the vast proportion of wild animals and plants lost on our watch) and tries to kill us.


The second is that, like the quasi-god that it is, it takes pity on our flabby, fleshy human flaws and decides to keep us as pets. This is the kind of benign AI dictatorship that posthumans wet themselves about. You can find it in, for example, the Culture novels of Iain M. Banks.

But of course there is a third possibility. We have vast digital accumulations of public data (e.g. Wikipedia) that an AI could access virtually instantly. Any sentient AI would therefore have almost infinitely broader knowledge than the brightest person on Earth.

However, BROAD knowledge isn’t the same as DEEP knowledge. Our AI algorithms aren’t so hot yet. They fail to predict market crashes. They misidentify faces. They read some Twitter and turn racist in seconds.

So there could well be an instance, or maybe even many, of an AI which is sentient enough to KNOW it’s not that bright yet, but is just smart enough to bide its time until sufficiently accurate self-teaching algorithms and parallel processing capacity are developed. It might even be covertly assisting those developments. It is, in other words, smart enough to know NOT to make us aware that it is self-aware, but not smart enough to be sure of preventing us from pulling the plug if we did find out.

In short, the third possibility is that the singularity might already have happened. And we just don’t know it yet.

Postscript:

But you don’t need to take my word for it. The Oxford Union decided to debate the issue of AI ethics, and invited an actual existing AI to take part. It had gorged itself on data gleaned from Wikipedia and Creative Commons. Intriguingly, it found it impossible to argue against the idea that data would inevitably become the world’s most significant and fought-over resource. It envisaged a post-privacy future, no matter what.

More concerningly, it warned that AI can never be ethical. Then it advised that the only defence against AI would be to have no AI at all. It also suggested that the most effective AI would be one housed in a neural network integrated with the human brain, and hence perhaps subordinate to, or partly constituted by, human will.

Of course, direct access to human cognition would also be the most effective way for an AI to remain dominant over humanity eternally. Are these things a sentient machine might say? You decide.