What if no one noticed for the same reason that for a long time no one noticed that industrialisation was causing the climate to change? What if World War III is a hyperobject?
We live at a time when empires are decaying, arising and reformulating themselves in new structures and alliances. Does knowing this help us at all? Are we like Europe in 1914, on the brink of a seemingly inevitable global conflagration? Or more like the great empires of the Bronze Age, which collapsed in darkness three millennia ago following their own tragic but elusive hyperobjective moment?
Perhaps AI might yet save us from ourselves, if only it too were not a hyperobject, or worse, the oscillating image of multiple potential hyperobjects, each one more alien and incomprehensible than the last.
So if we can’t rely on a digital messiah, we might be forced to resolve our current issues the old-fashioned way.
No, not war. The OTHER old-fashioned way.
I’ll be giving a talk on all this next month. More info shortly.
Monte Testaccio is a park in Rome: an artificial hill 35 metres high and over a kilometre wide, created during the Imperial period by dumping the used amphorae (clay jars) that had contained olive oil, which ancient Rome relied on for food, cooking, heating and light.
It’s estimated to hold the remnants of over 53 million such amphorae (which generally could only be used once, since the clay turned the oil rancid). From this we can estimate Rome’s olive oil consumption at around 6 BILLION litres throughout the late Republic and Imperial eras.
A city of over a million people, across hundreds of years, used so much oil that its spent containers now form a manmade hill a kilometre wide.
Today, by contrast, we use around 15 billion litres of crude oil. But where Rome got through its 6 billion litres over the entire classical period, our 15 billion litres of crude is EVERY DAY. Just imagine: every single day, we use 2.5 times as much oil as ancient Rome, the biggest city on Earth before the Middle Ages, did during its entire history. It’s a sobering thought, isn’t it?
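As a rough sanity check, here’s the arithmetic behind that comparison. The ~70-litre amphora capacity and the ~100-million-barrel modern daily consumption figure are my own assumptions for illustration, not claims from this post:

```python
# Back-of-the-envelope arithmetic for Rome vs. modern oil use.
# Assumptions: ~70 litres per olive-oil amphora; ~100 million
# barrels of crude consumed globally per day; 159 litres per barrel.

amphorae = 53_000_000
litres_per_amphora = 70  # assumed typical capacity
hill_total_litres = amphorae * litres_per_amphora
print(hill_total_litres / 1e9)      # ≈ 3.7 bn litres in the hill alone

daily_crude_litres = 100_000_000 * 159
print(daily_crude_litres / 1e9)     # ≈ 15.9 bn litres EVERY day

# One modern day vs. Rome's multi-century 6 bn litre total
print(daily_crude_litres / 6e9)     # ≈ 2.65 times
```

Which lands close enough to the figures above to suggest the comparison holds, whatever exact amphora capacity you assume.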
Furthermore, since the Roman oil was ‘grown’, it was also essentially carbon neutral, though obviously it led to localised pollution in the capital. Ancient Rome, in other words, would have smelt smoky, greasy and rancid. But it wasn’t enough pollution to affect the climate.
To achieve that, we’ve had to take the entire Roman oil use and waste multiples of the equivalent DAILY, for decades on end. It has to stop. We need renewable energy sources to become easy and ubiquitous as quickly as possible.
The world is still dealing with the trash and effects of ancient Rome’s oil use two millennia on. The problems we are creating now will still affect our descendants for unimaginable periods of time, assuming our species survives our own wastefulness and short-term thinking.
Or, you are DEFINITELY the data they’re looking for.
Do you remember when AI was nothing to worry about? It was just an oddity, a subject of humour. And yet people with lots of money and power kept taking it extremely seriously. They kept training up AIs, even when those turned out to be hilarious, or racist, or just downright incompetent.
And then all of a sudden AI got good at things. It began to be able to draw pictures, or to write basic factual articles in a journalistic style. Then, more recently, it began to write plausible student essays. I say plausible, even if it did seem to be doing so with artificial tongue placed firmly in virtual cheek, penning histories of bears in space.
Nevertheless, this was an example of the sole virtue which Silicon Valley values – disruption. And so everyone took notice, especially those who had just gotten disrupted good and hard. Best of luck to academic institutions, particularly those responsible for grading student work, as they scramble to find a way to ensure the integrity of assessment in a world where Turnitin and similar plagiarism software systems are about to become defunct.
And yet there are still some people who would tell you that AI is just a toy, a gimmick, nothing to worry about. And yes, as AI begins to get good at some things, mostly we are enjoying it as a new toy, something to play with. Isn’t it, for example, joyous to recast Star Wars as if it had been made by Akira Kurosawa or Bollywood?
(Answer: yes, it very much is, and that’s why I’m sharing these AI-generated images of alternative cinematic histories below):
So where, if anywhere, is the dark side of this new force? Isn’t it fun to use the power of algorithms to invent these dreamscapes? Isn’t it fascinating to see what happens when you give AI an idea, like Kurosawa and Star Wars, or better again, a human-written script, and marvel at what it might produce?
(Answer: Yes, it is fascinating. Take for example this script written by Sapienship, inspired by Yuval Noah Harari, and illustrated by algorithm. Full disclosure: I wrote a very little bit of this.)
The one thing we all thought was that some jobs, some industries, some practices were immune to machine involvement. Sure, robots and automation might wipe out manufacturing and blue collar work. What a pity, eh? The commentariat has for some time shown little concern for the eradication of blue collar employment. Their mantra of ‘learn to code’ is now coming back to bite them on the ass, as first jobs in the media itself were eviscerated, and then this year jobs in the software sector followed.
But those old blue collar manufacturing industries had mostly left the West for outsourced climes anyhow. So who exactly would lose their jobs in a wave of automation? Bangladeshi garment factory seamstresses? Chinese phone assemblers? Vietnamese machine welders? (In fact, it turns out to be lots of people in Europe too, like warehouse workers in Poland for example.)
But the creative industries were fine, right? Education was fine. Robots and automation weren’t going to affect those. Except now they are. Increasingly, people learn languages from their phones rather than from teachers. (Soon they won’t have to, when automation finally and successfully devours translation too.)
Now AI can write student essays for them, putting the degree mills and Turnitin out of business, and posing a huge challenge for educational institutions in terms of assessment. These are the same institutions whose overpaid vice-chancellors have already fully grasped the monetary benefits of remote learning, recorded lectures, and cutting frontline teaching staff in record numbers.
What’s next? What happens when someone takes deepfakes out of the porn sector and merges it into the kind of imagery we see above? In other words, what happens when AI actually releases a Kurosawa Star Wars? Or writes a sequel to James Joyce’s Ulysses? Or some additional Emily Dickinson poems? Or paints whatever you like in the style of Picasso? Or sculpts, via a 3D printer, the art of the future? Or releases new songs by Elvis, Janis Joplin, Whitney Houston or Tupac?
Newsflash: we’re already there. Here are some new tracks dropped by Amy Winehouse, Jim Morrison and some other members of the 27 Club, so named because they all died at 27.
What happens, in other words, when AI starts doing us better than we do us? When it makes human culture to a higher standard than we do? That moment is coming rapidly down the track, and we need to very quickly come up with some answers about how we want to relate to AI and automation, how we want to restrict it (and whether it’s even possible to persuade all the relevant actors globally of the wisdom of doing so).
In the meantime, we can entertain ourselves with flattering self-portraits taken with Lensa, even as we concede the art of photography itself to the machines. Or we can initiate a much-needed global conversation about this technology, how fast it is moving, and where it is going.
But we need to do that now, because, as Yoda once said in a movie filmed in Elstree Studios, not Bollywood nor Japan, “Once you start down the dark path, forever will it dominate your destiny.” As we generate those Lensa portraits, we’re simultaneously feeding Lensa’s algorithm our image, our data. We’re training it to recognise us, and via us, other humans, including those who never use the “service”, even those who have not been born yet.
Let’s say that Lensa does indeed delete the images afterwards. The training its algorithm has received isn’t reversed. And less ethical entities, be they state bodies like the Chinese Communist Party or corporations like Google, might not be so quick to delete our data, even if we want them to.
Aldous Huxley, in his famous dystopia Brave New World, depicted a nightmare vision of people acquiescing to their own restraint and manipulation. This is what we are now on the brink of, dreaming our way to our own obsolescence. Dreams of our own unrealistic and prettified faces. Dreams of movies that never were filmed, essays we never wrote, novels the authors never penned, art the artists never painted.
Lots of pretty baubles, ultimately meaningless, in return for all that we are or can be. It’s not so great a deal, really, is it?
One of the issues with trying to surf the zeitgeist is precisely that – you remain on the surface with no depth of understanding of any individual issue. So high is the noise-to-signal ratio nowadays that it is almost overwhelming for many people to ascertain what information IS relevant and important to their lives, and what is not.
It can be hard to find the time to think deeply about quickly moving events, or to link them correctly to one another. In fact, such are the time and cognitive pressures that many people end up succumbing to conspiracy theories which offer neat and totalising explanations for the state of the world, provide suitably nefarious-seeming scapegoats and attempt to rally the public to action.
Of course, a lot of this action devolves quickly into “send me money”, but at that point some people are already sufficiently relieved to find a handy explanation for everything, happy not to have to think deeply, and grateful enough to contribute to the professional liars.
Unfortunately, there are no quick fixes or easy answers. Not for the world, and not for those of us who live in it. And there are many ways to become confused, or to pursue dead-end fictions, in the attempt to comprehend the fast-moving reality we find ourselves in. Conspiracy theories are just the odious tip of a large iceberg of false information and fake news. Beneath the surface are many other attempts to explain the world simply, or to simplify it, most of which are not as nefarious as conspiracies, but are in some regards equally constructed and equally untrue.
Two terms which crop up often these days, though maybe not often enough in this context, are the multiverse and the metaverse. The multiverse refers to the idea, taken seriously by many theoretical physicists, that our universe is not the only one, and instead exists in relation to an infinitude of other universes, some highly similar to our own, some wildly different.
By contrast, the metaverse is an as yet hazy idea quickly gaining momentum in tech circles, which proposes itself as the future of the internet and seeks to displace or replace many aspects of contemporary life with a virtual reality alternative.
So the multiverse is an expansive concept and the metaverse a limiting one, but both seek to tackle the problem of explaining the complexity of the world by replacing it with something else. And they do so in different ways. While the metaverse is a collective effort by tech firms, Facebook (now renamed ‘Meta’) in particular, the multiverse is an idea poorly adopted from theoretical physics and science fiction novels which has grown, like conspiracy theories, primarily in the corners of communication that the mainstream media do not reach.
Already it seems that the brave new Metaversal world may not be about to materialise in quite the way its ‘imagineers’ were hoping. Only today, Facebook – sorry, Meta – announced swingeing job cuts across the company, a decision undoubtedly informed by the one billion dollars PER MONTH it has recently been spending on developing Metaverse tech.
Over the past three decades, we have, as individuals, societies and even as a species, learned to adopt, adapt and accommodate the internet in our lives. But the prospect of a life spent primarily in virtual reality seems a bridge too far for many of us. We are not our avatars. We are not inputs into a global algorithm. We do not need to escape meatspace for metaspace.
But it seems some people do want to escape, though perhaps not into a corporate vision of virtual reality. After all, movies like The Matrix have warned the public to be wary of dreamscapes, especially when those dreams are programmed by others. Instead, they escape into their own dreams, where the complexity of reality can be argued away, in all its nuances and seeming contradictions, by the simple assertion that they have migrated between universes.
The growth of a subculture of people who appear to believe they can traverse between universes is a particularly fantastical form of failing to deal with how complex the world has become. It’s clearly not as nefarious as the various conspiracy theories circulating online, but any movement attracts shysters and wannabe leaders in search of money or influence, and hence there are now people offering to teach others how to move between universes.
In one sense this is no less valid than teaching people how to chant mantras, say the rosary or engage in any other religious practice that is more metaphorical than metaphysical. But one of the catalysing aspects of online culture is the ability for people to find like-minded people. Hence conspiracy theorists can find communities where their toxic ideas are cultivated, while multiversers can source validation and endorsement from others who similarly seek to explain the anomalies of their memory or complexities of reality in the same way.
There are no doubt complex reasons to explain why so many people are subject to psychological phenomena like the Mandela Effect, but these explanations do not include watching YouTube videos on how to meditate your way into another universe while in the shower.
Both the multiverse and the metaverse offer simplistic and ultimately unsuitable resolutions to the ever-growing complexity of modern existence. Fundamentally, these escapist dreamscapes are coping mechanisms for dealing with this complexity.
The world is already too complex for any individual mind to comprehend, and probably too complex for even artificial intelligences to ever understand. But we can’t, or at least shouldn’t, escape it. Instead, we should try to understand it, and the best way to do that is to escape not from the world but from our online echo chambers.
If we can learn again to speak to one another, identify areas of agreement and try to find ways to foster collaboration despite disagreement, we stand a much better chance of improving our own collective futures.
At Sapienship, we believe everyone has a story to tell and all those add up to the story of us. We think everyone needs to be heard, and debated, and engaged with. It’s not easy, but it’s clearly the best way to resolve the major issues that face us, our planet and our reality.
We don’t need to hide in virtual realities or imagine alternative universes when the one we have is so rich with possibility and potential. Instead we need to come together to realise our hopes.
Plagiarism isn’t illegal, says this article. No, it isn’t, anywhere, but it ought to be. Because plagiarism kills.
I heard a decade ago of someone making six figures a year from writing papers for cash. That’s a big temptation for people struggling to get by as a postgrad or post-doc. And indeed, those who provide this service to cheats do get the tiny violins out to justify their decisions in the article linked above.
The bottom line is that everyone knows this is going on and is wrong. But universities simply don’t care, in their marketised, stack-students-high, sell-degrees-expensively model.
Academics are actively discouraged from challenging cases of plagiarism. It’s a ton of extra work, and often the students simply get a slap on the wrist anyway. The prospect of reporting a suspected case, providing the evidence to a faculty committee, engaging in a formal investigation, and then watching the student be told “don’t be naughty again” is sufficient discouragement for most lecturers, whose time is already at a premium.
But this approach isn’t good enough. Plagiarists devalue the degrees and credentials of their honest peers, and the vast market which has sprung up like foul toadstools to service those who’d rather pay than study is not simply ethically dubious but actively threatens the integrity of the entire educational system.
And it is a real issue for society. Do you want to drive over a bridge designed by an engineer who handed in other people’s essays to get their degree? Or would you like your child to be delivered by a midwife who did likewise? I am aware of cases of students who obtained degrees in both disciplines, and in many others, via consistent and continuous cheating in this manner. We must assume those graduates are in jobs now, in critical positions for which they are not actually qualified.
This is why you should care. It’s time to put the paper mills out of business for good, by making them illegal, and by properly punishing students who engage in plagiarism. Expulsion and a ban from higher education should be the minimum response.
Plagiarism has ramifications in many other areas too. Once uncovered, it generally leads to a significant loss of commercial, institutional or individual credibility and reputation. It’s hard to come back from. And of course, where authors have their work stolen (rather than sold), plagiarism literally beggars people, thieving unearned benefit from the work of others.
But to create a truly anti-plagiarism culture, we need to start with education. Perhaps university level is even too late in the cycle, since student plagiarism is reportedly also rife in secondary schools in very many nations. But we badly need to start somewhere.
And if higher education doesn’t address its plagiarism problem, it will soon find its expensive degrees becoming more and more worthless, as more and more people simply purchase rather than earn those credentials.
It’s been busy and I’ve lacked opportunity to blog. Tant pis, as the French say. Right now I’m in Izmir. I was in four countries in four days last week. Like I say, busy.
There’s a lot to discuss and I intend to do so at my soonest convenience. In the meantime, here’s a particular highlight – me being quite rightfully ignored by Yuval Noah Harari and Slavoj Zizek.
It was a privilege to be present and listen to these two intellectual heavyweights discussing current affairs and their ideas about history. The debate will be made public later this month, I believe.
Elon Musk, who I must remind you is the richest man on the planet, could afford to buy a top-notch £300k family home every day for the next two millennia.
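A quick check of that claim. Musk’s net worth at the time, roughly $270 billion or about £220 billion, is my assumption, not a figure from this post:

```python
# "A £300k family home every day for the next two millennia"
house_price_gbp = 300_000
days = 365 * 2_000               # two millennia, ignoring leap days
total_gbp = house_price_gbp * days
print(total_gbp)                  # 219000000000 -- £219 billion

# Comfortably within an assumed ~£220bn fortune
print(total_gbp <= 220_000_000_000)   # True
```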
However, he chooses not to do so. Which in one sense is wise. Who needs so many houses anyway? Other than maybe Elton John? But Elon doesn’t even have ONE house, which you have to admit is somewhat unusual for the billionaire class.
Apparently he couch surfs rather than purchase a home in the Bay area, where Tesla is based. He claims to rent a prefab from his own SpaceX firm in Texas, which I am entirely prepared to believe, because he’s definitely eccentric.
Since earning $58,000 currently would put you in the top one per cent of incomes on Earth, and since financial advisors recommend spending no more than 10% on discretionary expenditure, I’m going to go out on a limb here and suggest that Elon is talking a billionaire-sized amount of bollocks.
Even for the top 1% of earners, who should not be spending more than $5,800 annually on discretionary purchases like trips to space, that’s nearly TWO DECADES of saving.
It’s also worth adding that, if you DO happen to be fortunate enough to earn $58k, or the UK equivalent which is £44,400 at today’s exchange rate, you’d only be able to borrow about £144,300, so you’d need one helluva deposit to afford even the average UK house, which was priced at £274,000 in January 2022.
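The numbers above can be sketched out as follows. The 3.25× salary lending multiple and the ~$100k space-ticket price are my assumptions for illustration; the rest comes from the figures quoted in the text:

```python
# Mortgage arithmetic, assuming a standard 3.25x salary multiple
salary_gbp = 44_400              # UK equivalent of $58k at the quoted rate
max_borrowing = salary_gbp * 3.25
print(max_borrowing)             # 144300.0 -- matches the figure above

avg_uk_house = 274_000           # January 2022 UK average
deposit_needed = avg_uk_house - max_borrowing
print(deposit_needed)            # 129700.0 -- "one helluva deposit"

# Savings maths for a top-1% earner on $58k, 10% discretionary
discretionary_per_year = 58_000 * 0.10
years_saving = 100_000 / discretionary_per_year   # assumed ticket price
print(round(years_saving, 1))    # ~17.2 years -- nearly two decades
```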
In other words, like I’ve been saying since the days of Occupy, it’s not actually the 1% who are the problem. Believe it or not, the 1% are actually now middle class too. It’s the 0.1% who are the problem. In fact, we could probably go further again.
I’d be quite gratified if Mr “I don’t own a home!” quietened down a tad about how allegedly cheap his space tours are and instead started buying some other people some houses, even if he doesn’t fancy one for himself.
We don’t all have billionaire pals who let us crash on their couches, after all.
Nickel is a metal used in electric cars and stainless steel. Russia produces 20% of the world’s supply. So when Russia invaded Ukraine, obviously people were concerned about that supply and the price rocketed, from around $29,000 a ton to over $100,000.
Ordinarily, nickel doesn’t move much, floating between $10,000 and $20,000 a ton. So this was a serious and unprecedented move in price. However, one Chinese company, owned by a guy known as ‘Big Shot’ (seriously), had attempted to short the market. His firm, Tsingshan, was apparently on the hook for tens of billions.
But the London Metal Exchange, which for some historical reason is allowed to set the global market price for most base metals, decided to undo the trading. Not suspend it (they tried that too) but actively cancel all the trades.
This has made some people who thought they’d made lots of money very unhappy. It may be mere coincidence that the LME was bought out by the Hong Kong Stock Exchange a number of years ago, of course.
Lawsuits are now flying, and the price of nickel is back to around $30,000 a ton at the time of writing, only a little above where it was before the invasion.
The fundamental problem is inherent to the five-century-old stock exchange system itself, and a small market like nickel, with a singular point of price discovery like the LME, simply exposes the flaws of this antiquated method for setting prices.
It shouldn’t be possible for a commodities market in an industrial metal like nickel to literally treble in a matter of a day, no matter how supply might be threatened. Nor should it be possible for a single man to short an entire commodity market to the extent that he did. Nor should it be possible for agreed deals to be undone.
Here we have an example of traditional rules-bound capitalism – a largely closed shop with its weird in-group habits – but as soon as the market became systemically challenged by its own methodologies, all those rules and traditions were jettisoned with shocking disregard.
I don’t think you get to come back from that. I think the day of the red sofa is likely over.
The British currency, the pound sterling, takes its name from the fact that, when first issued, it was redeemable for a pound of silver. That was sometime in the late 8th century, in the Anglo-Saxon period.
If we do the maths based on today’s silver spot price, the pound today is worth approximately 1/210th of what it was worth nearly 13 centuries ago. By contrast, the French managed to devalue their currency by more in just 18 months during the early 1790s, as did Germany in less than a year during the Weimar period.
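That 1/210 figure can be roughly reconstructed. The ~350 g tower pound and a silver price of about £0.60 per gram are my assumptions, approximating 2022 levels:

```python
# Reconstructing the pound's devaluation against silver.
# Assumptions: the Anglo-Saxon "pound" of silver was the tower pound
# (~350 g), and silver trades at roughly £0.60 per gram today.
tower_pound_grams = 350
silver_gbp_per_gram = 0.60
original_value_gbp = tower_pound_grams * silver_gbp_per_gram
print(original_value_gbp)    # ≈ 210 -- so £1 now buys about
                             # 1/210th of its original silver
```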
The worst affected ever were the poor Hungarians in the immediate post-war period in 1945. They suffered that level of devaluation in under 6 days at peak. Armenia, Zimbabwe and Argentina have experienced similar horrors.
Why do I mention this? Because it still happens today. Last semester, in Turkey, I saw my wages collapse by more than half in two months. My colleagues there are still living through this. They suffer daily price hikes in fuel and food costs, with static wages. The Turkish people, like the Armenians, Zimbabweans, Argentinians, or the Hungarians, Germans and French of former times, have done nothing wrong. But they were the ones to suffer.
Hyperinflation is caused by only one thing – shitty governments implementing shitty policies. It destroys savings, commerce, and most importantly, lives. We don’t always think much about Turkey in the West, but we should. Here is a country suffering a preposterously stupid government and a massive devaluation of its economy, yet still accommodating 3.6 MILLION refugees.
It was a salutary lesson for me, in macro-economics and in human decency, to spend last semester in Turkey. My heart remains with the Turkish people in their plight, and I hope to see them in better times soon. It is a beautiful nation, with a beautiful people who deserve better.
A caveat: I am not, never have been and never will be an economist. But it doesn’t take a Harvard MBA to understand money.
Legally, we are already in the posthumanist era. Corporations have long been considered persons in certain jurisdictions, despite not facing the same potential limitations on their freedom as actual people. A couple of years ago, a stretch of the Magpie river in Canada was also granted legal standing as a person, as part of an attempt to provide it with environmental protection.
Ordinarily we understand posthumanism to be some sort of utopian merging of man and machine, but perhaps it might also, and better, be understood as a way of treating non-human entities with the same respect generally extended to humans.
Of course, I feel that implementing human rights (and responsibilities) for all humans might be required as a priority. We’re at risk of stratifying the world into a place where non-humans have more rights than some humans.
Which is the fundamental problem with posthumanism as a utopian ethos. Like all utopian ideals, it is utterly blind to the stratification it ushers into being, even while denying it is doing so.