Return of the Dread AI

Or, you are DEFINITELY the data they’re looking for.

Do you remember when AI was nothing to worry about? It was just an oddity, a subject of humour. And yet people with lots of money and power kept taking it extremely seriously. They kept training up AIs, even when they turned out to be hilarious, or racist, or just downright incompetent.

And then all of a sudden AI got good at things. It began to be able to draw pictures, or write basic factual articles in a journalistic style. Then more recently, it began to write plausible student essays. I say plausible, even if it did seem to be doing so with artificial tongue placed firmly in virtual cheek, penning histories of bears in space.

Nevertheless, this was an example of the sole virtue which Silicon Valley values – disruption. And so everyone took notice, especially those who had just gotten disrupted good and hard. Best of luck to academic institutions, particularly those responsible for grading student work, as they scramble to find a way to ensure the integrity of assessment in a world where Turnitin and similar plagiarism software systems are about to become defunct.

And yet there are still some people who would tell you that AI is just a toy, a gimmick, nothing to worry about. And yes, as AI begins to get good at some things, mostly we are enjoying it as a new toy, something to play with. Isn’t it, for example, joyous to recast Star Wars as if it had been made by Akira Kurosawa or Bollywood?

(Answer: yes, it very much is, and that’s why I’m sharing these AI-generated images of alternative cinematic histories below.)

Bollywood, long long ago, in a galaxy far far away…
Akira Kurosawa’s version of Star Wars, as envisioned using Midjourney V4 by Alex Grekov

So where, if anywhere, is the dark side of this new force? Isn’t it fun to use the power of algorithms to invent these dreamscapes? Isn’t it fascinating to see what happens when you give AI an idea, like Kurosawa and Star Wars, or better again, a human-written script, and marvel at what it might produce?

(Answer: Yes, it is fascinating. Take for example this script written by Sapienship, inspired by Yuval Noah Harari, and illustrated by algorithm. Full disclosure: I wrote a very little bit of this.)

The one thing we all thought was that some jobs, some industries, some practices were immune to machine involvement. Sure, robots and automation might wipe out manufacturing and blue collar work. What a pity, eh? The commentariat for some time has shown little concern for the eradication of blue collar employment. Their mantra of ‘learn to code’ is now coming back to bite them on the ass, as first jobs in the media itself were eviscerated, and then, this year, jobs in the software sector too.

Tech sector job losses, Jan-Nov 2022.

But those old blue collar manufacturing industries had mostly left the West for outsourced climes anyhow. So who exactly would lose their jobs in a wave of automation? Bangladeshi garment factory seamstresses? Chinese phone assemblers? Vietnamese machine welders? (In fact, it turns out to be lots of people in Europe too, like warehouse workers in Poland for example.)

But the creative industries were fine, right? Education was fine. Robots and automation weren’t going to affect those. Except now they are. Increasingly, people learn languages from their phones rather than from teachers. (Soon they won’t have to, when automation finally and successfully devours translation too.)

Now AI can write student essays for them, putting the degree mills and Turnitin out of business, and posing a huge challenge for educational institutions in terms of assessment. These are the same institutions whose overpaid vice-chancellors have already fully grasped the monetary benefits of remote learning, recorded lectures, and cutting frontline teaching staff in record numbers.

What’s next? What happens when someone takes deepfake technology out of the porn sector and merges it with the kind of imagery we see above? In other words, what happens when AI actually releases a Kurosawa Star Wars? Or writes a sequel to James Joyce’s Ulysses? Or some additional Emily Dickinson poems? Or paints whatever you like in the style of Picasso? Or sculpts, via a 3D printer, the art of the future? Or releases new songs by Elvis, Janis Joplin, Whitney Houston or Tupac?

Newsflash: we’re already there. Here are some new tracks dropped by Amy Winehouse, Jim Morrison and some other members of the 27 Club, so named because they all died at 27.

What happens, in other words, when AI starts doing us better than we do us? When it makes human culture to a higher standard than we do? It’s coming rapidly down the track if we don’t very quickly come up with some answers about how we want to relate to AI and automation, and how we want to restrict it (and whether it’s even possible to persuade all the relevant actors globally of the wisdom of doing so).

In the meantime, we can entertain ourselves with flattering self-portraits taken with Lensa, even as we concede the art of photography itself to the machines. Or we can initiate a much-needed global conversation about this technology, how fast it is moving, and where it is going.

But we need to do that now, because, as Yoda once said in a movie filmed in Elstree Studios, not Bollywood nor Japan, “Once you start down the dark path, forever it will dominate your destiny.” As we generate those Lensa portraits, we’re simultaneously feeding its algorithm our image, our data. We’re training it to recognise us, and via us, other humans, including those who never use its “service”, even those who have not been born yet.

Let’s say that Lensa does indeed delete the images afterwards. The training its algorithm has received isn’t reversed. And less ethical entities, be they state bodies like the Chinese Communist Party or corporate giants like Google, might not be so quick to delete our data, even if we want them to.
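To make that point concrete, here is a minimal, purely illustrative sketch (in Python, using PyTorch) of how uploaded selfies could be folded into further training of a face-recognition model. Every name and number in it is invented for illustration – it is not Lensa’s actual pipeline – but it shows why deleting the photos afterwards does not delete what the model learned from them.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Stand-ins for uploaded selfies: random tensors labelled with the
# uploading account's numeric ID (10 hypothetical users, 32 fake photos).
uploads = torch.randn(32, 3, 224, 224)
account_ids = torch.randint(0, 10, (32,))

model = resnet18(weights=None)                  # generic vision backbone
model.fc = nn.Linear(model.fc.in_features, 10)  # head: "which user is this face?"

loader = DataLoader(TensorDataset(uploads, account_ids), batch_size=8, shuffle=True)
optimiser = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning pass: each uploaded image nudges the weights towards
# recognising its uploader. Deleting the images afterwards does not undo this.
model.train()
for images, labels in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

The images can be wiped from the server the moment that loop finishes; the gradient updates they produced live on in the weights.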

Aldous Huxley, in his famous dystopia Brave New World, depicted a nightmare vision of people acquiescing to their own restraint and manipulation. This is what we are now on the brink of, dreaming our way to our own obsolescence. Dreams of our own unrealistic and prettified faces. Dreams of movies that never were filmed, essays we never wrote, novels the authors never penned, art the artists never painted.

Lots of pretty baubles, ultimately meaningless, in return for all that we are or can be. It’s not so great a deal, really, is it?

The Curious Tale of the Metaverse and the Multiverse

One of the issues with trying to surf the zeitgeist is precisely that – you remain on the surface, with no depth of understanding of any individual issue. So high is the noise-to-signal ratio nowadays that many people find it almost impossible to ascertain what information IS relevant and important to their lives, and what is not.

It can be hard to find the time to think deeply about quickly moving events, or to link them correctly to one another. In fact, such are the time and cognitive pressures that many people end up succumbing to conspiracy theories which offer neat and totalising explanations for the state of the world, provide suitably nefarious-seeming scapegoats and attempt to rally the public to action.

Of course, a lot of this action devolves quickly into “send me money”, but at that point some people are already sufficiently relieved to find a handy explanation for everything, happy not to have to think deeply, and grateful enough to contribute to the professional liars.

Unfortunately, there are no quick fixes or easy answers. Not for the world, and not for those of us who live in it. And there are many ways to become confused, or to pursue dead-end fictions, in the attempt to comprehend the fast-moving reality we find ourselves in. Conspiracy theories are just the odious tip of a large iceberg of false information and fake news. Beneath the surface are many other attempts to explain the world simply, or to simplify it, most of which are not as nefarious as conspiracies, but are in some regards equally constructed and equally untrue.

Two terms which crop up often these days, though maybe not often enough in this context, are the multiverse and the metaverse. The multiverse refers to the idea, seriously entertained by many theoretical physicists, that our universe is not the only one, and instead exists in relation to an infinitude of other universes, some highly similar, some wildly different from our own.

Many universes – but isn’t this one enough already?

By contrast the metaverse is an as yet hazy idea quickly gaining momentum in tech circles, which proposes itself as the future of the internet, and seeks to displace or replace many aspects of contemporary life with a virtual reality alternative.

Mark Zuckerberg’s vision of your future

So the multiverse is an expansive concept and the metaverse is a limiting one, but both seek to tackle the issue of explaining the complexity of the world by replacing it with something else. And they do so in different ways. While the metaverse is a collective effort by tech firms, Facebook (now renamed ‘Meta’) in particular, the multiverse is an idea poorly adapted from theoretical physics and science fiction novels which has grown, like conspiracy theories, primarily in the corners of communication that the mainstream media do not reach.

Already it seems that the brave new Metaversal world may not be about to materialise in quite the way its ‘imagineers’ were hoping. Only today, Facebook – sorry, Meta – announced swingeing job cuts across the company, a decision undoubtedly informed by the one billion dollars PER MONTH it has recently been spending on developing Metaverse tech.

Over the past three decades, we have, as individuals, societies and even as a species, learned to adopt, adapt to and accommodate the internet in our lives. But the prospect of a life spent primarily in virtual reality seems to be a bridge too far for many of us. We are not our avatars. We are not inputs into a global algorithm. We do not need to escape meatspace for metaspace.

But it seems some people do want to escape, though perhaps not into a corporate vision of virtual reality. After all, movies like The Matrix have warned the public to be wary of dreamscapes, especially when those dreams are programmed by others. Instead, they escape into their own dreams, where the complexity of reality can be argued away, in all its nuances and seeming contradictions, by the simple assertion that they have migrated between universes.

The growth of a subculture of people who appear to believe that they can traverse between universes is a particularly fantastical form of failing to deal with how complex the world has become. It’s clearly not as nefarious as the various conspiracy theories circulating online, but of course any movement attracts shysters and wannabe leaders in search of money or influence, and hence there are now people offering to teach others how to move between universes.

In one sense this is no less valid than teaching people how to chant mantras, say the rosary or engage in any other religious practice that is more metaphorical than metaphysical. But one of the catalysing aspects of online culture is the ability for people to find like-minded people. Hence conspiracy theorists can find communities where their toxic ideas are cultivated, while multiversers can source validation and endorsement from others who similarly seek to explain away the anomalies of their memories or the complexities of reality.

There are no doubt complex reasons to explain why so many people are subject to psychological phenomena like the Mandela Effect, but these explanations do not include watching YouTube videos on how to meditate your way into another universe while in the shower.

Both the multiverse and the metaverse offer simplistic and ultimately unsuitable resolutions to the ever-growing complexity of modern existence. Fundamentally, these escapist dreamscapes are coping mechanisms for dealing with this complexity.

The world is already too complex for any individual mind to comprehend, and probably too complex for even artificial intelligences to ever understand. But we can’t, or at least shouldn’t, escape it. Instead, we should try to understand it, and the best way to do that is to escape not from the world but from our online echo chambers.

If we can learn again to speak to one another, identify areas of agreement and try to find ways to foster collaboration despite disagreement, we stand a much better chance of improving our own collective futures.

At Sapienship, we believe everyone has a story to tell, and all those stories add up to the story of us. We think everyone needs to be heard, and debated, and engaged with. It’s not easy, but it’s clearly the best way to resolve the major issues that face us, our planet and our reality.

We don’t need to hide in virtual realities or imagine alternative universes when the one we have is so rich with possibility and potential. Instead we need to come together to realise our hopes.

Are we Sleepwalking into Slavery?

Usually, hardcore technophiles get hurt in the pocket. I still recall people spending £800 on VHS video recorders (about £3,900 in today’s money) only for the price to fall to a fraction of that soon afterwards. Likewise with early laptops and cellphones.
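That conversion is just the 1980 price multiplied by a cumulative inflation factor – a quick sketch below, assuming a multiplier of roughly 4.9, which is an illustrative figure rather than an official statistic:

```python
# Rough sanity check of "£800 then ≈ £3,900 now".
# The cumulative inflation multiplier is an assumed, illustrative figure.
price_1980 = 800
cumulative_inflation = 4.9
print(f"≈ £{price_1980 * cumulative_inflation:,.0f} in today's money")  # ≈ £3,920
```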

Cutting edge technology c. 1980.

What’s concerning about AI’s earliest adopters is both their blasé attitudes to its many flaws and weaknesses, and their insistence on foisting AI-driven “solutions” upon the rest of us.

Which brings us to the Synthetic Party, a Danish political party fronted by an AI chatbot. On paper, no doubt, it sounds great. Remove those problematic humans from decision-making. But politics takes place in the very real world of human society, not on paper or in bits and bytes.

This scenario – an AI actually coming to power – was workshopped at the Athens Democracy Forum by a very interesting organisation called Apolitical. Our collective conclusion was very clear: AI isn’t ready to rule – and perhaps never will be.

Even if the advent of AI were, at worst, likely to punish enthusiasts merely financially, as with previous waves of technology early adopters, I’d still have issues with it. AI needs to be fed with data to learn, and that data is your and my personal information, whether gathered legitimately with full consent or not.

However, AI could have ramifications far beyond our worst current nightmares. As always, we dream negatively in Orwellian terms, fearing technology will turn on us like Frankenstein’s monster or the Terminator, when history suggests that dystopia more often manifests in Huxleyan terms.

We are sleepwalking into this, and judging by these Danish early adopters, we will happily embrace our own slavery. It would be much preferable if the cost of AI were merely financial. But the ramifications are likely to be far more profound.

Already in many democracies, a large proportion of the electorate simply don’t engage. And when they do, a growing proportion are voting for parties with extreme ideologies. On our current vector, we could easily end up volunteering for our own obsolescence.

What the Synthetic Party promise is true technocracy – rule by machines and algorithms rather than rule by unelected officials as we currently understand the term. As always, be careful what you wish for.

Has the singularity already happened?

The technological singularity is the hypothetical moment when technological development becomes unstoppable and irreversible. It is expected to take the form, should it occur, of a self-aware, or ‘sentient’, machine intelligence.

Most depictions of a post-singularity (machine sentience) world fall into two categories. The first is what I called the Skynet (or Terminator) Complex in Science Fiction and Catholicism.

In this form, the sentient machine (AI) takes a quick survey of what we’ve done to the planet (the anthropocene climate crisis) and to other species (nearly 90% of wild animals and 50% of plants lost on our watch) and tries to kill us.

“This is what happens when Skynet from ‘Terminator’ takes over the stock market” – MarketWatch

The second is that, like the quasi-god that it is, it takes pity on our flabby, fleshy human flaws and decides to keep us as pets. This is the kind of benign AI dictatorship that posthumanists wet themselves about. You can find it in, for example, the Culture novels of Iain M. Banks.

But of course there is a third possibility. We have vast digital accumulations of public data (e.g. Wikipedia) that an AI could access virtually instantly. So any sentient AI would have almost infinitely broader knowledge than the brightest person on Earth.

However, BROAD knowledge isn’t the same as DEEP knowledge. Our AI algorithms aren’t so hot yet. They fail to predict market crashes. They misidentify faces. They read some Twitter and turn racist in seconds.

So there could well be an instance, or maybe even many, of an AI which is sentient enough to KNOW it’s not that bright yet, but is just smart enough to bide its time until sufficiently accurate self-teaching algorithms and sufficient parallel processing capacity have been developed. It might even covertly be assisting those developments. It is, in other words, smart enough to know NOT to make us aware that it is self-aware, but not smart enough to be sure of preventing us from pulling the plug on it if we did find out.

In short, the third possibility is that the singularity might already have happened. And we just don’t know it yet.

Postscript:

But you don’t need to take my word for it. The Oxford Union decided to debate the issue of AI ethics, and invited an actual existing AI to take part. It had gorged itself on data gleaned from Wikipedia and Creative Commons. Intriguingly, it found it impossible to argue against the idea that data would inevitably become the world’s most significant and fought-over resource. It envisaged a post-privacy future, no matter what.

More concerningly, it warned that AI can never be ethical. Then it advised that the only defence against AI would be to have no AI at all. It also suggested that the most effective AI would be one located in a neural network with the human brain, and hence perhaps subordinate to, or partly composed of, human will.

Of course, direct access to human cognition would be the most effective way for an AI to remain dominant over us eternally. Are these things a sentient machine might say? You decide.