What if World War III broke out and no one noticed?

What if no one noticed for the same reason that for a long time no one noticed that industrialisation was causing the climate to change? What if World War III is a hyperobject?


We live at a time when empires are decaying, arising and reformulating themselves in new structures and alliances. Does knowing this help us at all? Are we like Europe in 1914, on the brink of a seemingly inevitable global conflagration? Or more like the great empires of the Bronze Age, which collapsed in darkness three millennia ago following their own tragic but elusive hyperobjective moment?

Perhaps AI might yet save us from ourselves, if only it too were not a hyperobject, or worse, the oscillating image of multiple potential hyperobjects, each one more alien and incomprehensible than the last.

So if we can’t rely on a digital messiah, we might be forced to resolve our current issues the old-fashioned way.

No, not war. The OTHER old-fashioned way.

I’ll be giving a talk on all this next month. More info shortly.

He did it AI way

What you notice on first listen is of course how the AI has mimicked the diphthong pronunciations of Thom Yorke in the chorus, rendering the fake Sinatra version self-evidently fake.

But if you persevere, you notice something more significant about the AI rendering. It’s superficially impressive, apart from those pronunciation errors. What I mean is that it’s more persuasively Sinatra than almost all cover artists could aspire to be.

However, unlike almost any human singer, it’s soulless. There’s no attempt to convey or interpret the emotion of the original, because emotion is the one component that the AI cannot aggregate or understand.

It makes a better fist of the Doors, perhaps because of much closer musical, chronological and cultural proximity. But generally, as more and more of these AI covers make their way into the cultural arena online, it’s becoming clear that, as Simon Pegg recently explained, AI is a mediocrity machine.

Doing a Degree in How the World Works

It’s that time of the year when the students are facing their exam periods. I used to see quite a few meltdowns each year from anxious kids inhabiting a grade borderline. The pressure seems so vast for some of them.

So much seems to ride on their results. Will they outcompete their peers sufficiently to nab a good job? Where even are there any jobs anymore? AI seems to be devouring entire sectors of human labour voraciously. Copywriting, graphic design, news reporting, even creative writing. Even the students’ own essays. All their future labour is already being replaced before they even get there.

Most of the answers out there tend to say STEM. Go into science or tech. That’s where the last stand of human labour takes place. That’s where there’s still a wage, still some future perhaps.

As a result, the humanities are allegedly dying. Literature and History now look as threatened as Modern Languages did a half-generation ago. Market forces are eroding Literature enrolments so much that entire departments are closing en masse. It doesn’t currently look like Literature is going to have a happy ending.

I had a ringside seat in academia for some of the death throes of Modern Languages as a discipline. As the world, particularly the online bit of it, converges on Globish as its chosen lingua franca, there will always be a need for TEFL and TESOL, though increasingly even that teaching will be delivered partly online, with students learning from machines.

But to an entire generation in Britain and Ireland, it suddenly seemed kinda pointless to study French, or German, or Spanish, or Italian, never mind Russian, or Chinese, or Arabic. Entire departments dissolved overnight. In just a few years the discipline has shrunk to a fraction of its former size, and the condition may be close to terminal.

And now it seems like English Lit is dying too, or at least, so the media is scaremongering. For full disclosure, I’m happy to admit I have an English Lit degree. In fact, I’ve got two: a BA (Hons) and a PhD, both from Trinity College Dublin; I never bothered doing a Master’s.

When I was doing my second, in what we might politely refer to as early middle age, I was frogmarched to a fascinating talk given to doctoral students by former doctoral graduates of English. We were told that only one in ten of us would end up permanently employed in academia, so great was the oversupply of graduates relative to the already threatened number of positions worldwide. Such jobs as did seem to be available were already primarily in China.

We were given inspirational mini-lectures by people working in publishing, in parliament, in accountancy and law, and by entrepreneurs of all kinds. The message was clear: this is the future most of you should expect.

Then they told me something which has haunted me since, primarily because they excluded me from it on the grounds of my more advanced age. They said: the demands of the world are now constantly changing. Most of you will have about ten jobs in your career, where previous generations might have had merely one or two, the ‘job for life’ of lore. And of those ten jobs, they added, six haven’t yet been invented.

Of course, it’s true, or at least it’s a truism of sorts. Tech is accelerating fast enough to require entire squadrons of people, from programmers to imagineers, in roles that didn’t exist a generation ago, and it will continue to do so. Hence the allegedly safe haven of STEM.

University Careers Advisor: Six out of ten of your future jobs haven’t been invented yet, yay!

Digital Careers Advice Avatar ten years from now: The AI overlords are hiring radiation sludge technicians for the prohibited zone. It’s that or we’re uploading you to the cloud to save the cost of feeding you. Which do you prefer?

I was lucky enough to be the one in ten who got a career in academia, though I’m currently out of it. But it did get me thinking about the need to embed some form of adaptability and resilience into student curricula at all levels, from primary school to post-doc. (I’ve spoken about this extensively before.) Because those are the only attributes that will truly allow young people to future-proof themselves for the demands of their adult lives.

And this is where I think studying the literature of the past can come in useful. English Literature is a degree which teaches critical thinking, use of language, aesthetic appreciation and a range of other transferable skills. But it also frames the world as stories. As Yuval Noah Harari has pointed out, the human superpower, the ability which shot us past predator species and all other creatures to dominate this planet, was and remains our collective abilities to tell and share stories. And English Lit graduates learn how those stories work, which is another way of saying how the world works.

So I have two English Lit degrees. I can’t exactly say that they always directly impacted on all the jobs I’ve had. I have been among other things a roulette croupier, a barman in a lesbian pub, the Olympics correspondent for the Morning Star newspaper, a wine bar sommelier, a roofer, a film critic, maître d’ of a Creole restaurant, a playwright, and a member of the Guinness quality control taste panel at St James’s Gate brewery.

Most of those experiences didn’t make it to my formal LinkedIn CV. Pretty much all the things that did make it – primarily my careers in journalism and academia – are very clearly connected to my initial course of study.

But whether I was serving ales to Dublin’s lesbian community, reporting on an international soccer match, describing that night’s special in the restaurant, or assessing an exotic variant of Guinness, I think my undergrad study of literature and language always served me well.

The literature I studied taught me lessons of adaptability and resilience. I can’t think of another degree that might have prepared me better for life. This world is made of stories and I was privileged to spend some years learning how those stories work.

Return of the Dread AI

Or, you are DEFINITELY the data they’re looking for.

Do you remember when AI was nothing to worry about? It was just an oddity, a subject of humour. And yet people with lots of money and power kept taking it extremely seriously. They kept training up AIs, even when they turned out to be hilarious, or racist, or just downright incompetent.

And then all of a sudden AI got good at things. It began to be able to draw pictures, or write basic factual articles in a journalistic style. Then more recently, it began to write plausible student essays. I say plausible, even if it did seem to be doing so with artificial tongue placed firmly in virtual cheek, penning histories of bears in space.

Nevertheless, this was an example of the sole virtue which Silicon Valley values – disruption. And so everyone took notice, especially those who had just gotten disrupted good and hard. Best of luck to academic institutions, particularly those responsible for grading student work, as they scramble to find a way to ensure the integrity of assessment in a world where Turnitin and similar plagiarism software systems are about to become defunct.

And yet there are still some people who would tell you that AI is just a toy, a gimmick, nothing to worry about. And yes, as AI begins to get good at some things, mostly we are enjoying it as a new toy, something to play with. Isn’t it, for example, joyous to recast Star Wars as if it had been made by Akira Kurosawa or Bollywood?

(Answer: yes, it very much is, and that’s why I’m sharing these AI-generated images of alternative cinematic histories below):

Bollywood, long long ago, in a galaxy far far away…
Akira Kurosawa’s version of Star Wars, as envisioned using Midjourney V4 by Alex Grekov

So where, if anywhere, is the dark side of this new force? Isn’t it fun to use the power of algorithms to invent these dreamscapes? Isn’t it fascinating to see what happens when you give AI an idea, like Kurosawa and Star Wars, or better again, a human-written script, and marvel at what it might produce?

(Answer: Yes, it is fascinating. Take for example this script written by Sapienship, inspired by Yuval Noah Harari, and illustrated by algorithm. Full disclosure: I wrote a very little bit of this.)

The one thing we all thought was that some jobs, some industries, some practices were immune to machine involvement. Sure, robots and automation might wipe out manufacturing and blue collar work. What a pity, eh? The commentariat has for some time shown little concern for the eradication of blue collar employment. Their mantra of ‘learn to code’ is now coming back to bite them on the ass, as first jobs in the media itself were eviscerated, and then, this year, jobs in the software sector too.

Tech sector job losses, Jan–Nov 2022.

But those old blue collar manufacturing industries had mostly left the West for outsourced climes anyhow. So who exactly would lose their jobs in a wave of automation? Bangladeshi garment factory seamstresses? Chinese phone assemblers? Vietnamese machine welders? (In fact, it turns out to be lots of people in Europe too, like warehouse workers in Poland for example.)

But the creative industries were fine, right? Education was fine. Robots and automation weren’t going to affect those. Except now they are. Increasingly, people learn languages from their phones rather than from teachers. (Soon they won’t have to, when automation finally and successfully devours translation too.)

Now AI can write student essays for them, putting the degree mills and Turnitin out of business, and posing a huge challenge for educational institutions in terms of assessment. These are the same institutions whose overpaid vice-chancellors have already fully grasped the monetary benefits of remote learning, recorded lectures, and cutting frontline teaching staff in record numbers.

What’s next? What happens when someone takes deepfakes out of the porn sector and merges them with the kind of imagery we see above? In other words, what happens when AI actually releases a Kurosawa Star Wars? Or writes a sequel to James Joyce’s Ulysses? Or some additional Emily Dickinson poems? Or paints whatever you like in the style of Picasso? Or sculpts, via a 3D printer, the art of the future? Or releases new songs by Elvis, Janis Joplin, Whitney Houston or Tupac?

Newsflash: we’re already there. Here are some new tracks dropped by Amy Winehouse, Jim Morrison and some other members of the 27 Club, so named because they all died at 27.

What happens, in other words, when AI starts doing us better than we do us? When it makes human culture to a higher standard than we do? It’s coming rapidly down the track if we don’t very quickly come up with some answers about how we want to relate to AI and automation, and how we want to restrict it (and whether it’s even possible to persuade all the relevant actors globally of the wisdom of doing so).

In the meantime, we can entertain ourselves with flattering self-portraits taken with Lensa, even as we concede the art of photography itself to the machines. Or we can initiate a much-needed global conversation about this technology, how fast it is moving, and where it is going.

But we need to do that now, because, as Yoda once said in a movie filmed in Elstree Studios, not Bollywood nor Japan, “Once you start down the dark path, forever it will dominate your destiny.” As we generate those Lensa portraits, we’re simultaneously feeding its algorithm our image, our data. We’re training it to recognise us, and via us, other humans, including those who never use their “service”, even those who have not yet been born.

Let’s say that Lensa does indeed delete the images afterwards. The training their algorithm has received isn’t reversed. And less ethical entities, be they state bodies like the Chinese Communist Party or corporations like Google, might not be so quick to delete our data, even if we want them to.

Aldous Huxley, in his famous dystopia Brave New World, depicted a nightmare vision of people acquiescing to their own restraint and manipulation. This is what we are now on the brink of, dreaming our way to our own obsolescence. Dreams of our own unrealistic and prettified faces. Dreams of movies that never were filmed, essays we never wrote, novels the authors never penned, art the artists never painted.

Lots of pretty baubles, ultimately meaningless, in return for all that we are or can be. It’s not so great a deal, really, is it?

The Curious Tale of the Metaverse and the Multiverse

One of the issues with trying to surf the zeitgeist is precisely that – you remain on the surface with no depth of understanding of any individual issue. So high is the noise-to-signal ratio nowadays that many people find it almost overwhelming to ascertain what information IS relevant and important to their lives, and what is not.

It can be hard to find the time to think deeply about quickly moving events, or to link them correctly to one another. In fact, such are the time and cognitive pressures that many people end up succumbing to conspiracy theories which offer neat and totalising explanations for the state of the world, provide suitably nefarious-seeming scapegoats and attempt to rally the public to action.

Of course, a lot of this action devolves quickly into “send me money”, but at that point some people are already sufficiently relieved to find a handy explanation for everything, happy not to have to think deeply, and grateful enough to contribute to the professional liars.

Unfortunately, there are no quick fixes or easy answers. Not for the world, and not for those of us who live in it. And there are many ways to become confused, or to pursue dead-end fictions, in the attempt to comprehend the fast-moving reality we find ourselves in. Conspiracy theories are just the odious tip of a large iceberg of false information and fake news. Beneath the surface are many other attempts to explain the world simply, or to simplify it, most of which are not as nefarious as conspiracies, but are in some regards equally constructed and equally untrue.

Two terms which crop up often these days, though maybe not often enough in this context, are the multiverse and the metaverse. The multiverse refers to the idea, taken seriously by many theoretical physicists, that our universe is not the only one, and instead exists in relation to an infinitude of other universes, some highly similar, some wildly different from our own.

Many universes – but isn’t this one enough already?

By contrast, the metaverse is a still hazy idea quickly gaining momentum in tech circles, which proposes itself as the future of the internet and seeks to displace or replace many aspects of contemporary life with a virtual-reality alternative.

Mark Zuckerberg’s vision of your future

So the multiverse is an expansive concept and the metaverse is a limiting one, but both seek to tackle the issue of explaining the complexity of the world by replacing it with something else. And they do so in different ways. While the metaverse is a collective effort by tech firms, Facebook (now renamed ‘Meta’) in particular, the multiverse is an idea poorly adopted from theoretical physics and science fiction novels which has grown, like conspiracy theories, primarily in the corners of communication that the mainstream media do not reach.

Already it seems that the brave new Metaversal world may not be about to materialise in quite the way its ‘imagineers’ were hoping. Only today, Facebook – sorry, Meta – announced swingeing job cuts across the company, a decision undoubtedly informed by the one billion dollars PER MONTH they have recently been spending on developing Metaverse tech.

Over the past three decades, we have, as individuals, as societies and even as a species, learned to adopt, adapt to and accommodate the internet in our lives. But the prospect of a life spent primarily in virtual reality seems to be a bridge too far for many of us. We are not our avatars. We are not inputs into a global algorithm. We do not need to escape meatspace for metaspace.

But it seems some people do want to escape, though perhaps not into a corporate vision of virtual reality. After all, movies like The Matrix have warned the public to be wary of dreamscapes, especially when those dreams are programmed by others. Instead, they escape into their own dreams, where the complexity of reality can be argued away, in all its nuances and seeming contradictions, by the simple assertion that they have migrated between universes.

The growth of a subculture of people who appear to believe that they can traverse between universes is a particularly fantastikal form of failing to deal with how complex the world has become. It’s clearly not as nefarious as the various conspiracy theories circulating online, but of course any movement attracts shysters and wannabe leaders, in search of money or influence, and hence there are now people offering to teach others how to move between universes.

In one sense this is no less valid than teaching people how to chant mantras, say the rosary or engage in any other religious practice that is more metaphorical than metaphysical. But one of the catalysing aspects of online culture is the ability for people to find like-minded people. Hence conspiracy theorists can find communities where their toxic ideas are cultivated, while multiversers can source validation and endorsement from others who similarly seek to explain the anomalies of their memory or complexities of reality in the same way.

There are no doubt complex reasons to explain why so many people are subject to psychological phenomena like the Mandela Effect, but these explanations do not include watching YouTube videos on how to meditate your way into another universe while in the shower.

Both the multiverse and the metaverse offer simplistic and ultimately unsuitable resolutions to the ever-growing complexity of modern existence. Fundamentally, these escapist dreamscapes are coping mechanisms for dealing with this complexity.

The world is already too complex for any individual mind to comprehend, and probably too complex for even artificial intelligences to ever understand. But we can’t, or at least shouldn’t, escape it. Instead, we should try to understand it, and the best way to do that is to escape not from the world but from our online echo chambers.

If we can learn again to speak to one another, identify areas of agreement and try to find ways to foster collaboration despite disagreement, we stand a much better chance of improving our own collective futures.

At Sapienship, we believe everyone has a story to tell and all those add up to the story of us. We think everyone needs to be heard, and debated, and engaged with. It’s not easy, but it’s clearly the best way to resolve the major issues that face us, our planet and our reality.

We don’t need to hide in virtual realities or imagine alternative universes when the one we have is so rich with possibility and potential. Instead we need to come together to realise our hopes.

Are we Sleepwalking into Slavery?

Usually, hardcore technophiles get hurt in the pocket. I still recall people spending £800 on VHS video recorders (about £3,900 in today’s money) only for them to fall to a fraction of that soon afterwards. Likewise with early laptops and cellphones.

Cutting edge technology c. 1980.

What’s concerning about AI’s earliest adopters is both their blasé attitudes to its many flaws and weaknesses, and their insistence on foisting AI-driven “solutions” upon the rest of us.

Which brings us to the Synthetic Party. On paper no doubt it sounds great. Remove those problematic humans from decision-making. But politics takes place in the very real world of human society, not on paper or in bits and bytes.

This scenario – an AI actually coming to power – was workshopped at the Athens Democracy Forum by a very interesting organisation called Apolitical. Our collective conclusion was very clear: AI isn’t ready to rule – and perhaps never will be.

Even if the advent of AI was at worst likely to punish enthusiasts financially, as with previous technology early adopters, I’d still have issues with it. AI needs to be fed with data to learn, and that data is your and my personal information, whether gathered legitimately with full consent or not.

However, AI could have ramifications far beyond our worst current nightmares. As always, we dream negatively in Orwellian terms, fearing technology will turn on us like Frankenstein’s monster or the Terminator, when history suggests that dystopia more often manifests in Huxleyan terms.

We are sleepwalking into this, and judging by these Danish early adopters, we will happily embrace our own slavery. It would be much preferable if the cost of AI was merely financial. But the ramifications are likely to be much more impactful.

Already in many democracies, a large proportion of the electorate simply don’t engage. And when they do, a growing proportion are voting for parties with extreme ideologies. On our current vector, we could easily end up volunteering for our own obsolescence.

What the Synthetic Party promise is true technocracy – rule by machines and algorithms rather than rule by unelected officials as we currently understand the term. As always, be careful what you wish for.

Has the singularity already happened?

The technological singularity is the hypothesised moment when technological development becomes uncontrollable and irreversible. It is expected to take the form, should it occur, of a self-aware, or ‘sentient’, machine intelligence.

Most depictions of a post-singularity (machine sentience) world fall into two categories. The first is what I called the Skynet (or Terminator) Complex in Science Fiction and Catholicism.

In this form, the sentient machine (AI) takes a quick survey of what we’ve done to the planet (the Anthropocene climate crisis) and to other species (nearly 90% of wild animals and 50% of plants lost on our watch) and tries to kill us.


The second is that, like the quasi-god that it is, it takes pity on our flabby, fleshy human flaws and decides to keep us as pets. This is the kind of benign AI dictatorship that posthumanists wet themselves about. You can find it in, for example, the Culture novels of Iain M. Banks.

But of course there is a third possibility. We have vast digital accumulations of public data (eg Wikipedia) that an AI could access virtually instantly. So any sentient AI would have almost infinitely broader knowledge than the brightest person on Earth, virtually instantly.

However, BROAD knowledge isn’t the same as DEEP knowledge. Our AI algorithms aren’t so hot yet. They fail to predict market crashes. They misidentify faces. They read some Twitter and turn racist in seconds.

So there could well be an instance, or maybe even many, of an AI which is sentient enough to KNOW it’s not that bright yet, but is just smart enough to bide its time for sufficiently accurate self-teaching algorithms and parallel processing capacity to be developed. It might even covertly be assisting those developments. It is in other words smart enough to know NOT to make us aware that it is self-aware, but not smart enough to be sure of preventing us from pulling the plug on it if we did find out.

In short, the third possibility is that the singularity might already have happened. And we just don’t know it yet.

Postscript:

But you don’t need to take my word for it. The Oxford Union decided to debate the issue of AI ethics, and invited an actual existing AI to take part. It had gorged itself on data gleaned from Wikipedia and Creative Commons. Intriguingly, it found it impossible to argue against the idea that data would inevitably become the world’s most significant and fought-over resource. It envisaged a post-privacy future, no matter what.

More concerningly, it warned that AI can never be ethical. Then it advised that the only defence against AI would be to have no AI at all. It also suggested that the most effective AI would be one located in a neural network with the human brain, and hence perhaps subordinate to, or partly composed of, human will.

Of course, direct access to human cognition would be the most effective method to remain dominant over it eternally. Are these things a sentient machine might say? You decide.