Last January, I flew out of Cappadocia and left academia, which was a strange thing for me to do really, since I’d aspired to be an academic for decades before I finally achieved it.
So what lured me away? The opportunity to work with Professor Yuval Noah Harari and his NGO Sapienship, a social impact company that aims to focus global attention on the most pressing issues facing humanity, including the climate crisis, technological disruption and the prevalence of war.
So what have I been doing with them for the past year? Largely, I’ve been working with my colleagues to develop the Sapienship Lab, which launches today. There’s a lot of content in there already and much more to come over the coming weeks and months. It even includes some audio dramas I wrote, which I guess count as my first published science fiction in a while.
Most of the content is factual, educational, and intended to act as a guide through the labyrinth that is our fast-moving now. A lot of it is aimed at middle-school kids through to undergraduates, but we hope that everyone can learn something from it.
I miss teaching but I feel that in my new role I can still educate, albeit remotely, and contributing to the Lab is how I’m doing that now. I hope you’ll take a look at some of what we’ve prepared. It’s taken a lot of people a long time to put all this together.
And perhaps you might share it with anyone who might be interested, which we hope will turn out to be everyone, because we strongly believe that we’re all in this together, and only by talking and listening to everyone will we manage to improve our world.
Or, you are DEFINITELY the data they’re looking for.
Do you remember when AI was nothing to worry about? It was just an oddity, a subject of humour. And yet people with lots of money and power kept taking it extremely seriously. They kept training up AIs, even when they turned out to be hilarious, or racist, or just downright incompetent.
And then all of a sudden AI got good at things. It began to be able to draw pictures, or write basic factual articles in a journalistic style. Then, more recently, it began to write plausible student essays. I say plausible, even if it did seem to be doing so with its artificial tongue placed firmly in virtual cheek, penning histories of bears in space.
Nevertheless, this was an example of the sole virtue which Silicon Valley values – disruption. And so everyone took notice, especially those who had just gotten disrupted good and hard. Best of luck to academic institutions, particularly those responsible for grading student work, as they scramble to find a way to ensure the integrity of assessment in a world where Turnitin and similar plagiarism software systems are about to become defunct.
And yet there are still some people who would tell you that AI is just a toy, a gimmick, nothing to worry about. And yes, as AI begins to get good at some things, we are mostly enjoying it as a new toy, something to play with. Isn’t it, for example, joyous to recast Star Wars as if it had been made by Akira Kurosawa or Bollywood?
(Answer: yes, it very much is, and that’s why I’m sharing these AI-generated images of alternative cinematic histories below):
So where, if anywhere, is the dark side of this new force? Isn’t it fun to use the power of algorithms to invent these dreamscapes? Isn’t it fascinating to see what happens when you give AI an idea, like Kurosawa and Star Wars, or better yet, a human-written script, and marvel at what it might produce?
(Answer: Yes, it is fascinating. Take for example this script written by Sapienship, inspired by Yuval Noah Harari, and illustrated by algorithm. Full disclosure: I wrote a very little bit of this.)
The one thing we all thought was that some jobs, some industries, some practices were immune to machine involvement. Sure, robots and automation might wipe out manufacturing and blue collar work. What a pity, eh? The commentariat has for some time shown little concern for the eradication of blue collar employment. Their mantra of ‘learn to code’ is now coming back to bite them on the ass, as first jobs in the media itself got eviscerated, and then this year so too did jobs in the software sector.
But those old blue collar manufacturing industries had mostly left the West for outsourced climes anyhow. So who exactly would lose their jobs in a wave of automation? Bangladeshi garment factory seamstresses? Chinese phone assemblers? Vietnamese machine welders? (In fact, it turns out to be lots of people in Europe too, like warehouse workers in Poland for example.)
But the creative industries were fine, right? Education was fine. Robots and automation weren’t going to affect those. Except now they are. Increasingly, people learn languages from their phones rather than from teachers. (Soon they won’t have to, when automation finally and successfully devours translation too.)
Now AI can write student essays for them, putting the degree mills and Turnitin out of business, and posing a huge challenge for educational institutions in terms of assessment. These are the same institutions whose overpaid vice-chancellors have already fully grasped the monetary benefits of remote learning, recorded lectures, and cutting frontline teaching staff in record numbers.
What’s next? What happens when someone takes deepfakes out of the porn sector and merges it into the kind of imagery we see above? In other words, what happens when AI actually releases a Kurosawa Star Wars? Or writes a sequel to James Joyce’s Ulysses? Or some additional Emily Dickinson poems? Or paints whatever you like in the style of Picasso? Or sculpts, via a 3D printer, the art of the future? Or releases new songs by Elvis, Janis Joplin, Whitney Houston or Tupac?
Newsflash: we’re already there. Here are some new tracks dropped by Amy Winehouse, Jim Morrison and some other members of the 27 Club, so named because they all died at 27.
What happens, in other words, when AI starts doing us better than we do us? When it makes human culture to a higher standard than we do? That moment is coming rapidly down the track, unless we very quickly come up with some answers about how we want to relate to AI and automation, how we want to restrict it, and whether it’s even possible to persuade all the relevant actors globally of the wisdom of doing so.
In the meantime, we can entertain ourselves with flattering self-portraits taken with Lensa, even as we concede the art of photography itself to the machines. Or we can initiate a much-needed global conversation about this technology, how fast it is moving, and where it is going.
But we need to do that now, because, as Yoda once said in a movie filmed in Elstree Studios, not Bollywood nor Japan, “If once you start down the dark path, forever will it dominate your destiny.” As we generate those Lensa portraits, we’re simultaneously feeding its algorithm our image, our data. We’re training it to recognise us, and via us, other humans, including those who never use their “service”, even those who have not been born yet.
Let’s say that Lensa does indeed delete the images afterwards. The training their algorithm has received isn’t reversed. And less ethical entities, be they state bodies like the Chinese Communist Party or corporations like Google, might not be so quick to delete our data, even if we want them to.
Aldous Huxley, in his famous dystopia Brave New World, depicted a nightmare vision of people acquiescing to their own restraint and manipulation. This is what we are now on the brink of, dreaming our way to our own obsolescence. Dreams of our own unrealistic and prettified faces. Dreams of movies that never were filmed, essays we never wrote, novels the authors never penned, art the artists never painted.
Lots of pretty baubles, ultimately meaningless, in return for all that we are or can be. It’s not so great a deal, really, is it?
I recently got the chance to appear on the excellent Art of Problem Solving podcast on behalf of Sapienship, talking about how to raise and educate a generation whose jobs may not exist yet, or who may find automation erodes their employment opportunities.
To date, I haven’t spoken much on my personal site here about my work with Sapienship, largely because most of it has yet to reach the public domain. I expect that to change quite a lot in the next few months.
Anyhow, one of the benefits of migrating to an academic-adjacent position, especially one as wide-ranging as mine, is the ability to escape the narrow pigeon-holes of expertise which the artificial boundaries of academic disciplines enforce.
In my career, as noted elsewhere, I’ve had a number of very different roles. As a journalist alone, I gained expertise in a very varied range of topics and subjects including healthcare, politics and international sport. Hence it always seemed somewhat constrictive to me that academia was so insistent that I stay in my narrow lane, even as it nominally espoused interdisciplinary practices.
This is why my current areas of personal research are fundamentally interdisciplinary – in particular Religious Futurisms and Invented Languages. But it also informs why I have always been keen to teach students to be resilient and adaptable. I’ve finally been offered the chance by the Art of Problem Solving podcast to expound on this pedagogical ethos, and I feel especially privileged that in this area, as in many others, I find my personal values echoed and amplified by Sapienship.
I did not have a role model or a teacher to guide me in becoming resilient and adaptable to a world in which change seems to be perpetually accelerating. I had to develop those skills myself, on the hoof, as I migrated from the Arts to Journalism to Academia and to the position I now hold.
Hopefully this podcast can help others to shorten that learning process, because the world is not slowing down anytime soon, and resilience and adaptability are going to become the defining traits of success, or possibly even survival, in the decades to come.