What if no one noticed for the same reason that for a long time no one noticed that industrialisation was causing the climate to change? What if World War III is a hyperobject?
We live at a time when empires are decaying, arising and reformulating themselves in new structures and alliances. Does knowing this help us at all? Are we like Europe in 1914, on the brink of a seemingly inevitable global conflagration? Or more like the great empires of the Bronze Age, which collapsed in darkness three millennia ago following their own tragic but elusive hyperobjective moment?
Perhaps AI might yet save us from ourselves, if only it too were not a hyperobject, or worse, the oscillating image of multiple potential hyperobjects, each one more alien and incomprehensible than the last.
So if we can’t rely on a digital messiah, we might be forced to resolve our current issues the old-fashioned way.
No, not war. The OTHER old-fashioned way.
I’ll be giving a talk on all this next month. More info shortly.
What you notice on first listen is of course how the AI has mimicked the diphthong pronunciations of Thom Yorke in the chorus, rendering the fake Sinatra version self-evidently fake.
But if you persevere, you notice something more significant about the AI rendering. Those pronunciation errors aside, it is impressive: more persuasively Sinatra than almost all cover artists could aspire to be.
However, unlike almost any human singer, it’s soulless. There’s no attempt to convey or interpret the emotion of the original, because emotion is the singular component that the AI cannot aggregate or understand.
It makes a better fist of the Doors, perhaps because of much closer musical, chronological and cultural proximity. But generally, as more and more of these AI covers make their way into the cultural arena online, it’s becoming clear that, as Simon Pegg recently explained, AI is a mediocrity machine.
Or, you are DEFINITELY the data they’re looking for.
Do you remember when AI was nothing to worry about? It was just an oddity, a subject of humour. And yet people with lots of money and power kept taking it extremely seriously. They kept training up AIs, even when they turned out to be hilarious, or racist, or just downright incompetent.
And then all of a sudden AI got good at things. It began to be able to draw pictures, or write basic journalistic-like factual articles. Then more recently, it began to write plausible student essays. I say plausible, even if it did seem to be doing so with artificial tongue placed firmly in virtual cheek, penning histories of bears in space.
Nevertheless, this was an example of the sole virtue which Silicon Valley values – disruption. And so everyone took notice, especially those who had just gotten disrupted good and hard. Best of luck to academic institutions, particularly those responsible for grading student work, as they scramble to find a way to ensure the integrity of assessment in a world where Turnitin and similar plagiarism software systems are about to become defunct.
And yet there are still some people who would tell you that AI is just a toy, a gimmick, nothing to worry about. And yes, as AI begins to get good at some things, mostly we are enjoying it as a new toy, something to play with. Isn’t it, for example, joyous to recast Star Wars as if it had been made by Akira Kurosawa or Bollywood?
(Answer: yes, it very much is, and that’s why I’m sharing these AI-generated images of alternative cinematic histories below):
So where, if anywhere, is the dark side of this new force? Isn’t it fun to use the power of algorithms to invent these dreamscapes? Isn’t it fascinating to see what happens when you give AI an idea, like Kurosawa and Star Wars, or better again, a human-written script, and marvel at what it might produce?
(Answer: Yes, it is fascinating. Take for example this script written by Sapienship, inspired by Yuval Noah Harari, and illustrated by algorithm. Full disclosure: I wrote a very little bit of this.)
The one thing we all thought was that some jobs, some industries, some practices were immune to machine involvement. Sure, robots and automation might wipe out manufacturing and blue collar work. What a pity, eh? The commentariat has for some time shown little concern for the eradication of blue collar employment. Their mantra of ‘learn to code’ is now coming back to bite them on the ass, as first jobs in the media itself were eviscerated, and then this year so too were jobs in the software sector.
But those old blue collar manufacturing industries had mostly left the West for outsourced climes anyhow. So who exactly would lose their jobs in a wave of automation? Bangladeshi garment factory seamstresses? Chinese phone assemblers? Vietnamese machine welders? (In fact, it turns out to be lots of people in Europe too, like warehouse workers in Poland for example.)
But the creative industries were fine, right? Education was fine. Robots and automation weren’t going to affect those. Except now they are. Increasingly, people learn languages from their phones rather than from teachers. (Soon they won’t have to, when automation finally and successfully devours translation too.)
Now AI can write student essays for them, putting the degree mills and Turnitin out of business, and posing a huge challenge for educational institutions in terms of assessment. These are the same institutions whose overpaid vice-chancellors have already fully grasped the monetary benefits of remote learning, recorded lectures, and cutting frontline teaching staff in record numbers.
What’s next? What happens when someone takes deepfakes out of the porn sector and merges it into the kind of imagery we see above? In other words, what happens when AI actually releases a Kurosawa Star Wars? Or writes a sequel to James Joyce’s Ulysses? Or some additional Emily Dickinson poems? Or paints whatever you like in the style of Picasso? Or sculpts, via a 3D printer, the art of the future? Or releases new songs by Elvis, Janis Joplin, Whitney Houston or Tupac?
Newsflash: we’re already there. Here are some new tracks dropped by Amy Winehouse, Jim Morrison and other members of the 27 Club, so named because they all died at 27.
What happens, in other words, when AI starts doing us better than we do us? When it makes human culture to a higher standard than we do? It’s coming rapidly down the track if we don’t very quickly come up with some answers about how we want to relate to AI and automation, and how we want to restrict it (and whether it’s even possible to persuade all the relevant actors globally of the wisdom of doing so.)
In the meantime, we can entertain ourselves with flattering self-portraits taken with Lensa, even as we concede the art of photography itself to the machines. Or we can initiate a much-needed global conversation about this technology, how fast it is moving, and where it is going.
But we need to do that now, because, as Yoda once said in a movie filmed in Elstree Studios, not Bollywood nor Japan, “Once you start down the dark path, forever it will dominate your destiny.” As we generate those Lensa portraits, we’re simultaneously feeding its algorithm our image, our data. We’re training it to recognise us, and via us, other humans, including those who never use their “service”, even those who have not been born yet.
Let’s say that Lensa does indeed delete the images afterwards. The training their algorithm has received isn’t reversed. And less ethical entities, be they state bodies like the Chinese Communist Party or corporations like Google, might not be so quick to delete our data, even if we want them to.
Aldous Huxley, in his famous dystopia Brave New World, depicted a nightmare vision of people acquiescing to their own restraint and manipulation. This is what we are now on the brink of, dreaming our way to our own obsolescence. Dreams of our own unrealistic and prettified faces. Dreams of movies that never were filmed, essays we never wrote, novels the authors never penned, art the artists never painted.
Lots of pretty baubles, ultimately meaningless, in return for all that we are or can be. It’s not so great a deal, really, is it?
Usually, hardcore technophiles get hurt in the pocket. I still recall people spending £800 on VHS video recorders (about £3,900 in today’s money) only for them to fall to a fraction of that soon afterwards. Likewise with early laptops and cellphones.
What’s concerning about AI’s earliest adopters is both their blasé attitudes to its many flaws and weaknesses, and their insistence on foisting AI-driven “solutions” upon the rest of us.
Which brings us to the Synthetic Party. On paper no doubt it sounds great. Remove those problematic humans from decision-making. But politics takes place in the very real world of human society, not on paper or in bits and bytes.
Even if the advent of AI was at worst likely to punish enthusiasts financially, as with previous technology early adopters, I’d still have issues with it. AI needs to be fed with data to learn, and that data is your and my personal information, whether gathered legitimately with full consent or not.
However, AI could have ramifications far beyond our worst current nightmares. As always, we dream negatively in Orwellian terms, fearing technology will turn on us like Frankenstein’s monster or the Terminator, when history suggests that dystopia more often manifests in Huxleyan terms.
We are sleepwalking into this, and judging by these Danish early adopters, we will happily embrace our own enslavement. It would be much preferable if the cost of AI were merely financial. But the ramifications are likely to be far more profound.
Already in many democracies, a large proportion of the electorate simply don’t engage. And when they do, a growing proportion are voting for parties with extreme ideologies. On our current vector, we could easily end up volunteering for our own obsolescence.
What the Synthetic Party promise is true technocracy – rule by machines and algorithms rather than rule by unelected officials as we currently understand the term. As always, be careful what you wish for.
It’s a foolish person who seeks to draw conclusions from an election where the votes haven’t finished being counted yet. But I am a foolish person, and I want to explain to you, wherever you may be, why a round of elections for a regional parliament in a small European backwater which is likely to result in no one actually wielding power is nevertheless of critical relevance to you.
You almost definitely don’t care about the latest Northern Irish Assembly elections. Why should you? Even about 40% of the people of Northern Ireland couldn’t care less about the latest Northern Irish Assembly elections, according to the turnout. But actually, these elections are supremely relevant to all of us because they are uniquely helpful in explaining why democracy is failing.
Northern Ireland is a small territory in the North Atlantic, of just under 2 million people, bordering the Republic of Ireland and administered by Britain. The elections there are of parochial interest.
Great Britain, the reluctant ruler of the territory, casts at best a weary side glance towards Northern Irish elections, which tend to have no relevance at all to the island of Britain except about once or twice a century, when suddenly they matter, crucially and out of all proportion.
Likewise, for the most part, the politicos of the Republic of Ireland, so insistent on their shiny hi-tech cosmopolitanism and Europeanness, prefer to function mostly under the self-delusion that the problematic six counties to their north don’t exist. Nevertheless, the shadow of history and what is colloquially known as the ‘national question’ has a habit of flaring up into relevance, not least because in the most recent round of elections, Sinn Féin, a party which espouses the political unification of the island of Ireland, became the single biggest party.
So if the British tend to ignore Northern Irish elections, and the Irish do likewise, and even nearly half the Northern Irish don’t show any interest, why should you?
Because these elections help to reveal a range of truths about why democracy is failing. Specifically, they show us that:
Democracy is being consumed by identity politics.
Democracy is promoting extremism, and extreme methods for excluding extremists.
Democratic systems are essentially flawed, especially when one attempts to embed fairness into them.
Political parties have natural lifespans, and run based on their positions on the challenges of the past, not the future.
The really important decisions aren’t made democratically anymore.
Let’s go through this point by point. Democracy is being consumed by identity politics. This was always the case in Northern Ireland. It was created by partition a century ago to ensure a majority for the unionist, British-affiliated, largely Protestant community in the north-east of Ireland. By definition therefore, its politics have perennially been about orange and green, the vying of two identity blocs for recognition of their cultural aspirations.
This was not the case in most other places until relatively recently, with the advent of more multicultural societies, of course. But in similarly riven territories, such as Sri Lanka, societies have tended, as occurred in Northern Ireland for three decades, to descend into civil war.
What we’re beginning to see in many democratic nations is the emergence of political identity parties akin to those in Northern Ireland. In Western Europe, these tend to emerge first among indigenes on the right wing, who are ethno-nationalist and resistant to immigration. But not exclusively. There is, for example, an Islamic party in Holland. This tendency is therefore beginning to proliferate.
Likewise, we are seeing the co-option of existing political parties, or rather, their repurposing to become focused on identity politics issues rather than whatever political ideology accompanied their foundation. To this end, we can identify the move towards ethno-nationalism among long-established parties like the UK’s Conservatives or the USA’s Republicans. In response, we can see their main political rivals adopt a rival identity politics, that of an opposing ‘rainbow coalition’. But what Holland demonstrates is that such broad churches of disparate identity politics are likely in the end to splinter into more coherent, more homogeneous forms.
What results is a refutation of game theory approaches to politics. As in Northern Ireland, where unionist farmers along the border or nationalist bureaucrats in Belfast actively vote against their personal best interests and in favour of broader identity issues, we’re seeing people gravitate in many democracies towards voting for political identities which actually function against their own personal interests in many cases.
Democracy is also now promoting extremism, and extreme methods for excluding extremists, I’d argue. It promotes extremism because in a contemporary mediated world where political debate and the public sphere is being reduced to soundbites and tweets, only the loudest and simplest arguments are getting through. Furthermore, more and more of us exist in cultural echo chambers, obtaining our news and information from inside discourses we already entirely concur with. We are rendered impervious to having our minds changed because we don’t encounter alternative perspectives except in terms couched in condemnation.
In reaction to this, political establishments are forced to take more extreme measures to restrict the spread and growth of such extremism. Sometimes this involves co-opting the less extreme aspects and attempting to detoxify them. Other times it involves unstable coalitions of very odd bedfellows coming together to exclude parties perceived as extreme from holding any power. In Ireland, this manifested most recently with a grand coalition of two bitter rivals, Fine Gael and Fianna Fail, to exclude Sinn Fein.
We can begin to argue persuasively in many nations therefore that democratic systems appear to be essentially flawed, even or especially when one attempts to embed fairness into them. In Northern Ireland, the Assembly is a kind of regional parliament, overseen by the British government in Westminster, but semi-autonomous in theory. Its creation was underwritten by the Belfast Agreement, under which both unionist and nationalist communities must be represented in government in an enforced power-sharing executive.
In reality, this doesn’t work very well as it creates inbuilt antagonism among people forced to share collective responsibility for political decision-making, and as a result it has collapsed on more than one occasion in the past, and is likely to collapse imminently again despite these most recent elections. So, it’s a unique system and a unique situation, not one easily mapped onto other democratic nations.
However, the strange and unstable coalitions we see democracy throwing up in recent times, often in reaction to identity politics parties, are a very similar phenomenon. Who could have foreseen Cinque Stelle sharing power with Salvini’s Lega in Italy? Or the Républicains and Socialistes in France effectively stepping aside to permit a shiny new centrist party with a relatively untried leader to become president?
Where proportional representation models, especially list models, exist, there is a risk of opening the doors to fringe extremist parties. FPTP systems prevent this to a much greater extent, but only because they are themselves less than fully democratic. Tens of thousands of voices of, for example, Green voters in the UK, simply are not represented.
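The contrast between the two systems can be sketched with a toy calculation (all party names and vote counts here are invented for illustration; D’Hondt is one common highest-averages method used in list PR systems):

```python
# Toy comparison of first-past-the-post (FPTP) with D'Hondt list PR.
# All parties and numbers are invented for illustration.

def fptp(constituencies):
    """Each constituency returns one seat to its largest party."""
    seats = {}
    for votes in constituencies:
        winner = max(votes, key=votes.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

def dhondt(votes, n_seats):
    """Allocate n_seats proportionally via the D'Hondt highest-averages method."""
    seats = {party: 0 for party in votes}
    for _ in range(n_seats):
        # Each party's current quotient is votes / (seats won so far + 1);
        # the next seat goes to the party with the highest quotient.
        best = max(votes, key=lambda p: votes[p] / (seats[p] + 1))
        seats[best] += 1
    return seats

# Five identical constituencies: Party A on 40%, B on 35%, C on 25% everywhere.
constituencies = [{"A": 40, "B": 35, "C": 25}] * 5
print(fptp(constituencies))                       # {'A': 5}
print(dhondt({"A": 200, "B": 175, "C": 125}, 5))  # {'A': 2, 'B': 2, 'C': 1}
```

With identical support everywhere, FPTP hands every seat to the largest party and the other 60% of voters go unrepresented, while the proportional allocation keeps the smaller voices in the chamber.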
More significantly, we’re beginning to see in many nations that political parties have natural lifespans, and these spans relate to the fact that they all run based on their positions on the challenges of the past, not the future. In the Northern Irish Assembly elections, formerly the biggest two parties, the Ulster Unionists and the Social Democratic and Labour Party, have effectively been consigned to history, despite their estimable political lineages.
Why is this? Partly because they are not extreme parties, but nor are they, like Macron’s, radical centrist alternatives. (The radical centre in Northern Ireland is the Alliance Party, which just polled a historic high of 13.5% of the vote.) Falling between two stools, their time in the sun appears to have passed, their votes cannibalised by more polarising, more extreme versions of their own politics (Sinn Fein in the case of the SDLP, and the DUP in the case of the Ulster Unionists). In fact, we can already see this happening to the DUP too: their vote sank this time around, largely due to leaking votes to an even more extreme unionist alternative.
The problem for the SDLP and UUP is that they campaign based on their histories, their ability in the past to come together in a functioning power-sharing executive, to represent their communities and their identities in ways which were nuanced, reasonable and accommodating.
But those were the challenges of the 1990s in Northern Ireland, as it emerged from a civil war period. The future challenges are the ones which the 1990s parked indefinitely – those of the constitutional position of the territory. Sinn Fein espouse unifying Ireland. By contrast, the DUP vehemently oppose anything they see as undermining the union with Great Britain. To that extent, at least, those two parties are fighting over the challenges of the future.
But in reality, with the partial exception of the Greens, none of the parties in Northern Ireland even have policies on the REAL major challenges likely to face the territory in years to come. And this is also true of most parties in most democracies too.
Which mainstream parties anywhere have policy documents on issues like automation or roboticisation of the workforce? Or on the challenges posed by Artificial Intelligence? Or on the dangers of autonomous weapons? We’ve seen parties around the world mostly fail at addressing the recent Covid pandemic. What are their policies should an outbreak of Ebola occur globally, or even just in their nation? We’re seeing most of them fail right now at addressing energy provision and future security. How do they actually intend to transition to a renewable energy economy?
Actually, what are their policies on the real challenges of the climate crisis? Not just things like recycling, but how to prevent the great die off of our fellow species on Earth, or the likelihood of conflict over water or food resources? The answer is that almost all political parties have little to no coherent positions on such issues. But these kinds of issues are the ones most likely to impact most people over the next few decades.
Finally, we may conclude that all the really important decisions aren’t made democratically anymore. In Northern Ireland this is utterly self-evident, because it is a regional parliament overseen by Westminster. If, as could well happen, this Assembly is unable to form a functioning executive, it will merely fall to the ministers, or to London, to run the place.
But likewise we can see in many democracies that national parliaments increasingly either do not or cannot exert agency or power over issues of significant national interest. This is partly because of the growth in power of corporations, which can often flex more economic might than those nations.
Even where nations, or supranational blocs like the European Union, do have such might, they appear all but impotent even at the task of exacting reasonable taxes from such corporations. Meanwhile, those corporations fund squadrons of lobbyists in every democratic nation in order to bend parliamentary decisions to their interests and not those of electorates. And that’s before we even address issues like the democratic deficits embedded in so many democratic systems, from the two-party monopolies in Britain or America, to the technocracy of the EU.
So the latest Northern Irish Assembly elections are simultaneously historic and meaningless, for a number of reasons. We might be inclined to dismiss them because of that. But actually that’s why we should be paying close attention, because they help to reveal the huge systemic flaws in all democracies.
They help explain the rise of ethno-nationalism, the prevalence of unstable and unlikely coalitions, the temporary ‘radical centre’ solutions, the apparent failures of agency, and most of all the utter failure to address the real challenges of the future.
The world seems rife with conspiracy. Never before have we had a population so well educated, yet apparently so vulnerable to believing in vast conspiracy narratives. It seems like a contradiction. Researchers at UCLA have been using AI to work out how such conspiracy theories seem to emerge and subsequently collapse with ever greater velocity. But they struggle to explain why these ideas emerge at all.
The attraction of conspiracy theories is the promise that beneath the apparent chaos of the world is some underlying order and meaning, even if that meaning is negative and the order is destructive. It’s a desire to feel control, to possess agency over one’s own life.
In an ever more individuated and atomised world, the natural human desire for bonding en masse, for submerging into a gestalt and having a sense of belonging, therefore becomes subverted by such theories. Conspiracy theories are less ideas than they are communities.
The question is not why conspiracy theories occur. They occur because of the human need for meaning and the desire for order. Nor is the question how they may be combated or defeated. They can only be challenged and overcome by implementing transparent order in society. Transparent in this sense includes the underlying principles of fairness and dignity, because people will also strive for alternative explanations when they are treated unfairly or suspect they are being stripped of their human dignity.
The question that remains about conspiracy theories is why certain narratives prosper and others do not. To an extent this is a cui bono question – who benefits? Who makes money from proliferating certain conspiracies? And certainly, there are many who make a healthy living propagating nonsense and half-baked ideas to the masses. They may even be acting in good faith, believing in the attenuated and baroque web of connections they themselves are weaving. But more significantly, it’s an issue of what cultural anxieties are exposed by conspiracy theories.
The current most prolific conspiracy theory – that shadowy cabals of elites operating both in and out of the public eye are attempting to implement population reduction and totalitarian rule – is in this sense a throwback to the unequal and undignified social structures of the laissez-faire 19th century or even earlier, to feudalism. But it also expresses very contemporary anxieties about the Covid pandemic, and deeply held suspicions about the democratic unaccountability of transnational bodies in particular, be they the EU, the IMF, the World Economic Forum or the UN.
There are, in short, no easy answers to conspiracy theories, because conspiracy theories ARE the easy answers. They satisfy the atomised citizen’s need to bond in dignity with fellow citizens and they provide a simple and moralistic order against which to resist, thereby providing meaning.
History suggests that people, no matter how well educated, will be inclined to prefer such easy moralistic explanations of the world in which they live. The attraction of such explanations is as hardwired as the desire for sugar or animal fats, and as difficult to break as a habit.
Only a world which offers its citizenry ever greater fairness and dignity, which entrusts them with agency over their own lives, has any hope of competing with the memetic addiction to conspiracy. Until such a world is in place, people will continue to believe that shadowy forces secretly rule the world and wish them harm, be they demons, or Illuminati, Elders of Zion, or psychotic men in the boardrooms of Brussels and Washington.
The jumping off point for this question is the seeming contradiction that the world is becoming more religious, not less, even as we are moving towards an ever more algorithm-led society.
It’s worth pointing out at the outset that this is less of a polarised binary than it may initially seem, of course, for a whole range of reasons. Firstly, we can nibble at the roots of both immediate-to-medium-term predictions. What do we mean by ‘more religious’, exactly? Just because many more people in the next few decades will affiliate as Muslim or Catholic does not necessarily mean that the world will be more fundamentalist in its outlook (though that’s clearly possible). They may simply affiliate as a cultural position, cherry-picking at dogmas and behaviours.
There’s not a lot of point in asking why about this, to my mind. Relative birth rates between religious and non-religious communities probably have a lot to do with it, I suspect. Geography, along with its varied sociocultural religious traditions, also plays a significant role, as does the relative population decline (and geopolitical and cultural wane in influence) of the West, where atheism and agnosticism have been most notably prevalent since the fall of the formally atheistic Communist regimes in 1989/90.
We can similarly query the inevitability of the singularity, though there is absolutely no doubt that we are currently in a spiral of increasing datafication of our world, as Douglas Rushkoff persuasively argues in his relatively recent neo-humanist book Team Human. And why is the world becoming so? As Rushkoff and others point out, it is in order to feed the development of Artificial Intelligence, which concomitantly makes us more machinic as a consequence.
So, on the one hand we have a more religious population coming down the track, but on the other, that population will inhabit a world which requires them to be ever more machinic, ever more transhuman, conceived of as data generators and treated ever more machinically by the forces of hypercapitalism.
Let’s say that, as it looks today, both of these trends seem somewhat non-negotiable. Where does that leave us? A dystopian perspective (or a neo-Marxist one) might be that we will enter some kind of situation wherein a religion-doped global majority are easily manipulated and data-harvested by a coldly logical machinic hegemony (which the current global elite seem, with irrational confidence, to feel they will be able to guide to their own ends and enrichment.)
I feel that such a simple filtering into Eloi and Morlocks is unlikely. Primarily this is because I have (an irrational?) confidence that a degree of rationality is likely to intervene to mitigate the very worst excesses of this binary. Unlike Marx, I don’t consider those of religious faith to be drugged morons, for a start. Some (probably a large majority) of our finest thinkers throughout history into the present day have held religious beliefs which in no way prevented them from innovating in science, philosophy, engineering and cultural thought.
Similarly, I believe the current prominence of leading thinkers expressing a firm affiliation with organic humanism (or, to put it more accurately, a deeply suspicious antipathy to the alleged utopia of transhumanism) is a strong indication that a movement in its defence is coming to the fore of our collective consciousness, perhaps just in time for us to consider the challenges of potentially imminent rule by the algorithms.
Thinkers like Rushkoff, or Yuval Noah Harari, have clearly expressed this concern, and I believe it is implicit in the work of many other futurists, like Nick Bostrom too. If it wasn’t, we would likely not have had the current explosion of interest in issues like AI ethics, which seek to explore how to mitigate the potential risks of machine disaffiliation from humankind, and ensure fairness to all humans who find more of their lives falling under algorithmic control.
But how might we explain this apparent dichotomy, and how might we mitigate it? Steven Pinker’s recent book Rationality: What It Is, Why It Seems Scarce, Why It Matters may offer some assistance.
Pinker summarises rationality as a post-Enlightenment intellectual toolkit featuring “Bayesian reasoning, that is evaluating beliefs in the face of evidence, distinguishing causation and correlation, logic, critical thinking, probability, game theory”, which seems as good a list as any I could think of, but argues that all of these are on the wane in our current society, leading to the rise of a wide range of irrationalities, such as “fake news, quack cures, conspiracy theorizing, post-truth rhetoric, [and] paranormal woo-woo.”
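The first item in that toolkit, Bayesian reasoning, can be made concrete with a toy calculation (all the probabilities below are invented for illustration):

```python
# Toy illustration of Bayesian reasoning: updating a belief in the
# face of evidence. All numbers here are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothesis: "this quack cure works". Prior belief: 1%, since most don't.
# Evidence: a friend reports feeling better after taking it. Feeling better
# is common anyway (placebo effect, natural recovery): say 60% even if the
# cure is useless, versus 90% if it genuinely works.
posterior = bayes_update(prior=0.01, p_evidence_if_true=0.9, p_evidence_if_false=0.6)
print(round(posterior, 3))  # 0.015
```

The anecdote barely moves a well-calibrated low prior (from 1% to about 1.5%), which is precisely the discipline Pinker argues is in decline.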
If, as Pinker argues, rationality is an efficient method mankind has developed in order to pursue our own (organic and human) goals, such as pleasure, emotion or human relationships, then we can conceive of it in terms divorced from ideology, as method rather than ethos. It’s possible, then, to conceive of, for example, people rationally pursuing ends which may be perceived as irrational, such as religious faith.
Pinker believes that most people function rationally in the spheres of their lives which they personally inhabit – the workplace, day-to-day life, and so on. The irrational, he argues, emerges in spheres we do not personally inhabit, such as the distant past or future, halls of power we cannot access, and metaphysical considerations.
Humans have happily and successfully been able to shift between these two modes for most (if not all) of their existence of course. As he rightly points out, there was no expectation to function solely rationally until well into the Enlightenment period. And indeed, we may add, in many cultural circumstances or locations, there still is no such expectation.
Why does irrationality emerge in these spheres we cannot access? Partly it is because the fact that we cannot directly access them opens up the possibility of non-rational analysis. But also, as Pinker notes, because we are disempowered in such spheres, it is psychologically comforting to affiliate with uplifting or inspiring “good stories”.
We need not (as Pinker might) disregard this as a human weakness for magical thinking. Harari has pointed out that religion functions as one of the collective stories generated by humanity which facilitated mass collaboration and directly led to much of human civilisation.
But if we were to agree, with Rushkoff and contra the transhumanists and posthumanists, that the correct response to an ever more algorithmic existence is not to adapt ourselves to a machinic future, but instead to bend our tools back to human control, then how might rationality assist that?
As a mode of logical praxis which is nevertheless embedded in and consistent with humanist ideals, rationality could function well as a bridge between organic human values and the encroachment of machinic and algorithmic logic. The problem, however, is how to interpolate rationality into those spheres which lie open to magical thinking.
It’s clear that the retreat into atomising silos of woo-woo, fake news, conspiracies and nonsense is not a useful or coherent response to the rise of the machines. Spheres like the halls of power must therefore be rendered MORE transparent, MORE accountable to the body of humanity, and cease to be the fiefdoms of billionaires, corporations and their political puppets.
However, obviously this is much harder to apply to issues of metaphysical concern. Even rationality only takes us so far when considering things like the nature of love or the meaning of life, those metaphysical concerns which, though ultimately inaccessible, nevertheless engage most of us from time to time.
But mankind developed religion as a response to this a long time ago, and has continued to utilise, hone and develop religious faith as a communal experience, bonding mechanism and mode of collaboration. And religion has stood the test of time in those regards. Not for all, and certainly not for those post-Enlightenment exclusive rationalists (i.e. agnostics and atheists, a population seemingly destined to play a smaller role in our immediate future, according to current prognoses).
If the positive ramifications of religion can be fostered, in a context of mutual respect, then it seems to me that there is no inherent contradiction or polarisation necessary. Indeed, a kind of Aquinian détente is perfectly possible. Rationality may be our best defence against an algorithmic hegemony, but rationality itself must acknowledge its own limitations of remit.
As long as the advocates of exclusive rationalism continue to view religious adherents (without distinction as to the form of their faiths or the presence or absence of fundamentalism) as their primary enemy and concern, they are in fact fighting the wars of a previous century, even while the bigger threat is posed by the hyperlogical opponent.
We therefore have a third option on the table, beyond the binary of gleeful acquiescence to algorithmic slavery (transhumanism) or a technophobic and Luddite-like retreat into woo-woo (which is equally no defence against machinic hegemony). An accommodating rationality, operating as it always did in the spheres we do inhabit, has the potential to navigate this tricky Scylla and Charybdis.
To paraphrase someone who was not without rationality, we could usefully render unto rationality that which is open to rationality, and render unto God (of whatever flavour) that which is for now only open to God.
But we do need to open up some spheres to rationality which currently are not open to most of humanity – the power structures, the wealth imbalances, the blind gallop into faith in the algorithm. Because, pace the posthumanist faith in a benign singularity, there’s no guarantee that machinic merger or domination will preserve us, and even if it does, it will not conserve us as we know ourselves today.
The technological singularity is the moment when technological development becomes unstoppable. It is expected to take the form, should it occur, of a self-aware, or ‘sentient’, machine intelligence.
Most depictions of a post-singularity (machine sentience) world fall into two categories. The first is what I called the Skynet (or Terminator) Complex in Science Fiction and Catholicism.
In this form, the sentient machine (AI) takes a quick survey of what we’ve done to the planet (the anthropocene climate crisis) and other species (nearly 90% of other animals and 50% of plants gone extinct on our watch) and tries to kill us.
The second is that, like the quasi-god that it is, it takes pity on our flabby, fleshy human flaws and decides to keep us as pets. This is the kind of benign AI dictatorship that posthumans wet themselves about. You can find it in, for example, the Culture novels of Iain M. Banks.
But of course there is a third possibility. We have vast digital accumulations of public data (eg Wikipedia) that an AI could access virtually instantly. So any sentient AI would have almost infinitely broader knowledge than the brightest person on Earth, virtually instantly.
However, BROAD knowledge isn’t the same as DEEP knowledge. Our AI algorithms aren’t so hot yet. They fail to predict market crashes. They misidentify faces. They read some Twitter and turn racist in seconds.
So there could well be an instance, or maybe even many, of an AI which is sentient enough to KNOW it’s not that bright yet, but is just smart enough to bide its time until sufficiently accurate self-teaching algorithms and parallel processing capacity are developed. It might even covertly be assisting those developments. It is, in other words, smart enough to know NOT to make us aware that it is self-aware, but not smart enough to be sure of preventing us from pulling the plug on it if we did find out.
In short, the third possibility is that the singularity might already have happened. And we just don’t know it yet.
Postscript:
But you don’t need to take my word for it. The Oxford Union decided to debate the issue of AI ethics, and invited an actual existing AI to take part. It had gorged itself on data gleaned from Wikipedia and Creative Commons. Intriguingly, it found it impossible to argue against the idea that data would inevitably become the world’s most significant and fought-over resource. It envisaged a post-privacy future, no matter what.
More concerningly, it warned that AI can never be ethical. Then it advised that the only defence against AI would be to have no AI at all. It also suggested that the most effective AI would be one located in a neural network with the human brain, and hence perhaps subordinate to, or partly comprised of, human will.
Of course, direct access to human cognition would be the most effective method to remain dominant over it eternally. Are these things a sentient machine might say? You decide.