Over at Ponying the Slovos, our ongoing project on invented languages in art and literature, I wrote a series of posts a couple of years back on Anthony Burgess’s other invented languages, of which there are more than a few.
These collected thoughts have now been expanded, revised and published in The Anachronist, the peer-reviewed Hungarian journal of English literature, and (almost all of) the journal is free to read or download in the spirit of open access, thanks to the publishers at ELTE, Hungary’s foremost university.
In this paper, Burgess serves to demonstrate that the role of invented languages in literature goes far beyond the well-explored territories of Science Fiction (SF) and High Fantasy, where they predominate: they can also be found in historical novels, and even in realist fiction, as Burgess’s variegated novels reveal.
This is Ponying the Slovos’s second publication for 2023, and it’s not even two weeks in. We might need a little lie-down!
Anyhow, feel free to read the article here; the whole journal, all of which will be of interest to Burgess scholars, may be accessed from this page.
This holiday period is an especially difficult one for many people, who will look up into the cold sky not in expectation of Santa Claus, but in despair. From war-torn Ukraine to the cost-of-living crisis in Europe, many people are suffering in ways that seemed unthinkable only a year ago.
This night, the seventh night of Hanukkah and the night before Christmas, spare a thought for those who are living insecurely and losing hope. There are many of them. All we have is each other, ultimately. Alas, some of us do not even have that. Here is the story of one such man, Maximilian Bern.
Maximilian Bern (born Bernstein) was a Jewish German writer who died in 1923, during the hyperinflation crisis which almost brought the young Weimar Republic to its knees a century ago.
He had been born in 1849 in Ukraine, in Kherson, where his father was a doctor. But then as now, people were leaving Ukraine, and Maximilian relocated with his mother to Vienna after his father died. Though the family fortune was lost, Maximilian’s first novel, Auf Schwankem Grunde (“On Shaky Ground”), made his name, and he worked as a freelance poet and novelist thereafter.
Bern is alas not much read today.
He lived for a couple of years in Paris, and for a time he was married to the renowned actress Olga Wohlbrück, who is now regarded as Germany’s first female movie director. She later left him for a playwright. Nevertheless, until shortly before his death in 1923, he lived an affluent life of artistic renown in Berlin.
In 1904, he published a collection called Die zehnte Muse (“The Tenth Muse”), in which we may read two poems which now seem disturbingly prophetic. These are On a Dead Track and Vagabond Song, which I have lovingly mistranslated below.
What do they appear to prophesy? His own death, which appears almost as a footnote or an aside in Frederick Taylor’s 2013 history The Downfall of Money: Germany’s Hyperinflation and the Destruction of the Middle Class. Taylor borrowed the anecdote of Bern’s death from Otto Friedrich’s Before the Deluge: A Portrait of Berlin in the Twenties, wherein on page 126 we hear briefly of Bern’s fate.
Hyperinflation had destroyed Maximilian’s savings, as it had those of so many others, and by then in his seventies, he was in no position to restore the family fortune a second time. He withdrew his savings in full – over 100,000 marks – and spent the entirety of his wealth on a subway ticket, all it could now purchase. After riding one last time around the city, Bern withdrew to his apartment and starved to death.
Or, you are DEFINITELY the data they’re looking for.
Do you remember when AI was nothing to worry about? It was just an oddity, a subject of humour. And yet people with lots of money and power kept taking it extremely seriously. They kept training up AIs, even when those turned out to be hilarious, or racist, or just downright incompetent.
And then all of a sudden AI got good at things. It began to be able to draw pictures, and to write basic factual articles in a journalistic style. Then, more recently, it began to write plausible student essays. I say plausible, even if it did seem to be doing so with artificial tongue placed firmly in virtual cheek, penning histories of bears in space.
Nevertheless, this was an example of the sole virtue which Silicon Valley values – disruption. And so everyone took notice, especially those who had just gotten disrupted good and hard. Best of luck to academic institutions, particularly those responsible for grading student work, as they scramble to find a way to ensure the integrity of assessment in a world where Turnitin and similar plagiarism software systems are about to become defunct.
And yet there are still some people who would tell you that AI is just a toy, a gimmick, nothing to worry about. And yes, as AI begins to get good at some things, mostly we are enjoying it as a new toy, something to play with. Isn’t it, for example, joyous to recast Star Wars as if it had been made by Akira Kurosawa or Bollywood?
(Answer: yes, it very much is, and that’s why I’m sharing these AI-generated images of alternative cinematic histories below):
Bollywood, long long ago, in a galaxy far far away…
Akira Kurosawa’s version of Star Wars, as envisioned using Midjourney V4 by Alex Grekov
So where, if anywhere, is the dark side of this new force? Isn’t it fun to use the power of algorithms to invent these dreamscapes? Isn’t it fascinating to see what happens when you give AI an idea, like Kurosawa and Star Wars, or better again, a human-written script, and marvel at what it might produce?
(Answer: Yes, it is fascinating. Take for example this script written by Sapienship, inspired by Yuval Noah Harari, and illustrated by algorithm. Full disclosure: I wrote a very little bit of this.)
One thing we all thought was that some jobs, some industries, some practices were immune to machine involvement. Sure, robots and automation might wipe out manufacturing and blue-collar work. What a pity, eh? The commentariat has for some time shown little concern for the eradication of blue-collar employment. Their mantra of ‘learn to code’ is now coming back to bite them on the ass, as first jobs in the media itself were eviscerated, and then, this year, jobs in the software sector too.
Tech sector job losses, January–November 2022.
But those old blue collar manufacturing industries had mostly left the West for outsourced climes anyhow. So who exactly would lose their jobs in a wave of automation? Bangladeshi garment factory seamstresses? Chinese phone assemblers? Vietnamese machine welders? (In fact, it turns out to be lots of people in Europe too, like warehouse workers in Poland for example.)
But the creative industries were fine, right? Education was fine. Robots and automation weren’t going to affect those. Except now they are. Increasingly, people learn languages from their phones rather than from teachers. (Soon they won’t have to, when automation finally and successfully devours translation too.)
Now AI can write student essays for them, putting the degree mills and Turnitin out of business, and posing a huge challenge for educational institutions in terms of assessment. These are the same institutions whose overpaid vice-chancellors have already fully grasped the monetary benefits of remote learning, recorded lectures, and cutting frontline teaching staff in record numbers.
What’s next? What happens when someone takes deepfakes out of the porn sector and merges it into the kind of imagery we see above? In other words, what happens when AI actually releases a Kurosawa Star Wars? Or writes a sequel to James Joyce’s Ulysses? Or some additional Emily Dickinson poems? Or paints whatever you like in the style of Picasso? Or sculpts, via a 3D printer, the art of the future? Or releases new songs by Elvis, Janis Joplin, Whitney Houston or Tupac?
Newsflash: we’re already there. Here are some new tracks dropped by Amy Winehouse, Jim Morrison and some other members of the 27 Club, so named because they all died at 27.
What happens, in other words, when AI starts doing us better than we do us? When it makes human culture to a higher standard than we do? That moment is coming rapidly down the track, and we need very quickly to come up with some answers about how we want to relate to AI and automation, how we want to restrict it, and whether it’s even possible to persuade all the relevant actors globally of the wisdom of doing so.
In the meantime, we can entertain ourselves with flattering self-portraits taken with Lensa, even as we concede the art of photography itself to the machines. Or we can initiate a much-needed global conversation about this technology, how fast it is moving, and where it is going.
But we need to do that now, because, as Yoda once said in a movie filmed in Elstree Studios, not Bollywood nor Japan, “Once you start down the dark path, forever will it dominate your destiny.” As we generate those Lensa portraits, we’re simultaneously feeding its algorithm our image, our data. We’re training it to recognise us, and via us, other humans, including those who never use the “service”, even those who have not been born yet.
Let’s say that Lensa does indeed delete the images afterwards. The training its algorithm has received isn’t thereby reversed. And less ethical entities, be they state bodies like the Chinese Communist Party or corporations like Google, might not be so quick to delete our data, even if we want them to.
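To see why deletion doesn’t undo training, here is a deliberately trivial sketch in Python – made-up numbers, a toy “model”, and nothing whatsoever to do with Lensa’s actual pipeline – showing that once a model’s weights have been nudged by an image, removing the image afterwards leaves the weights changed:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=8)        # toy model parameters
user_image = rng.normal(size=8)     # stand-in for an uploaded photo

before = weights.copy()
weights += 0.1 * (user_image - weights)  # one gradient-style update towards the data

del user_image                       # "deleting the image afterwards"
print(np.allclose(weights, before))  # False: the update persists without the data
```

The point generalises: the data can vanish, but its influence is already baked into the parameters.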
Aldous Huxley, in his famous dystopia Brave New World, depicted a nightmare vision of people acquiescing to their own restraint and manipulation. This is what we are now on the brink of, dreaming our way to our own obsolescence. Dreams of our own unrealistic and prettified faces. Dreams of movies that never were filmed, essays we never wrote, novels the authors never penned, art the artists never painted.
Lots of pretty baubles, ultimately meaningless, in return for all that we are or can be. It’s not so great a deal, really, is it?
Whereas last year the Oxford English Dictionary (with the somewhat American-sounding diminutive ‘vax’) and Merriam-Webster (with the more formal, and somehow more British-sounding, ‘vaccine’) concurred on their words of the year, this time they have diverged.
For Merriam-Webster, this year’s word is ‘gaslighting’, a not-especially-new term for a kind of cruel psychological manipulation. The OED, however, put its favoured options (which included a phrase and a hashtag!) to a public vote, and came up with ‘goblin mode’.
What is ‘goblin mode’, you may ask? Some Tolkienesque monstrous tendency to murderous behaviour?
Goblin Mode?
No, apparently it is a term of online usage which is defined as “a type of behaviour which is unapologetically self-indulgent, lazy, slovenly, or greedy, typically in a way that rejects social norms or expectations.”
In other words, the kind of behaviour one expects from people who consider hashtags to be words and eschew responsibility by putting their work out to a vote.
I feel like the OED has started gaslighting me. I’m team Merriam-Webster until the goblin mode ceases in Oxford.
It’s been a while since I last published a mistranslation, so here’s The Iceberg, mistranslated from the poem by the late great Brazilian poet Paulo Leminski. It’s not the first of his I’ve egregiously mishandled. Regular readers may recall this travesty from earlier this year.
Having now done damage to his work twice, I will release Leminski from the clutches of this project and seek other subjects elsewhere. You, however, are advised to go and read as much of his poetry as possible.
One of the issues with trying to surf the zeitgeist is precisely that – you remain on the surface, with no depth of understanding of any individual issue. So high is the noise-to-signal ratio nowadays that many people find it almost overwhelming to ascertain what information IS relevant and important to their lives, and what is not.
It can be hard to find the time to think deeply about quickly moving events, or to link them correctly to one another. In fact, such are the time and cognitive pressures that many people end up succumbing to conspiracy theories which offer neat and totalising explanations for the state of the world, provide suitably nefarious-seeming scapegoats and attempt to rally the public to action.
Of course, a lot of this action devolves quickly into “send me money”, but at that point some people are already sufficiently relieved to find a handy explanation for everything, happy not to have to think deeply, and grateful enough to contribute to the professional liars.
Unfortunately, there are no quick fixes or easy answers. Not for the world, and not for those of us who live in it. And there are many ways to become confused, or to pursue dead-end fictions, in the attempt to comprehend the fast-moving reality we find ourselves in. Conspiracy theories are just the odious tip of a large iceberg of false information and fake news. Beneath the surface are many other attempts to explain the world simply, or to simplify it, most of which are not as nefarious as conspiracies, but are in some regards equally constructed and equally untrue.
Two terms which crop up often these days, though maybe not often enough in this context, are the multiverse and the metaverse. The multiverse refers to the idea, taken seriously by many theoretical physicists, that our universe is not the only one, but instead exists in relation to an infinitude of other universes, some highly similar to and some wildly different from our own.
Many universes – but isn’t this one enough already?
By contrast, the metaverse is an as-yet-hazy idea quickly gaining momentum in tech circles, which proposes itself as the future of the internet and seeks to displace or replace many aspects of contemporary life with a virtual reality alternative.
Mark Zuckerberg’s vision of your future
So the multiverse is an expansive concept and the metaverse a limiting one, but both seek to tackle the problem of explaining the world’s complexity by replacing it with something else. And they do so in different ways. While the metaverse is a collective effort by tech firms, Facebook (now renamed ‘Meta’) in particular, the multiverse is an idea poorly adapted from theoretical physics and science fiction novels which has grown, like conspiracy theories, primarily in the corners of communication that the mainstream media do not reach.
Already it seems that the brave new Metaversal world may not materialise in quite the way its ‘imagineers’ were hoping. Only today, Facebook – sorry, Meta – announced swingeing job cuts across the company, a decision undoubtedly informed by the one billion dollars PER MONTH they have lately been spending on developing Metaverse tech.
Over the past three decades, we have, as individuals, societies and even as a species, learned to adopt, adapt to and accommodate the internet in our lives. But the prospect of a life spent primarily in virtual reality seems a bridge too far for many of us. We are not our avatars. We are not inputs into a global algorithm. We do not need to escape meatspace for metaspace.
But it seems some people do want to escape, though perhaps not into a corporate vision of virtual reality. After all, movies like The Matrix have warned the public to be wary of dreamscapes, especially when those dreams are programmed by others. Instead, they escape into their own dreams, where the complexity of reality can be argued away, in all its nuances and seeming contradictions, by the simple assertion that they have migrated between universes.
The growth of a subculture of people who appear to believe that they can traverse between universes is a particularly fantastikal form of failing to deal with how complex the world has become. It’s clearly not as nefarious as the various conspiracy theories circulating online, but of course any movement attracts shysters and wannabe leaders, in search of money or influence, and hence there are now people offering to teach others how to move between universes.
In one sense this is no less valid than teaching people how to chant mantras, say the rosary or engage in any other religious practice that is more metaphorical than metaphysical. But one of the catalysing aspects of online culture is the ability for people to find like-minded people. Hence conspiracy theorists can find communities where their toxic ideas are cultivated, while multiversers can source validation and endorsement from others who similarly seek to explain the anomalies of their memory or complexities of reality in the same way.
There are no doubt complex reasons why so many people are subject to psychological phenomena like the Mandela Effect, but those explanations do not include watching YouTube videos on how to meditate your way into another universe while in the shower.
Both the multiverse and the metaverse offer simplistic and ultimately unsuitable resolutions to the ever-growing complexity of modern existence. Fundamentally, these escapist dreamscapes are coping mechanisms for dealing with this complexity.
The world is already too complex for any individual mind to comprehend, and probably too complex for even artificial intelligences to ever understand. But we can’t, or at least shouldn’t, escape it. Instead, we should try to understand it, and the best way to do that is to escape not from the world but from our online echo chambers.
If we can learn again to speak to one another, identify areas of agreement and try to find ways to foster collaboration despite disagreement, we stand a much better chance of improving our own collective futures.
At Sapienship, we believe everyone has a story to tell and all those add up to the story of us. We think everyone needs to be heard, and debated, and engaged with. It’s not easy, but it’s clearly the best way to resolve the major issues that face us, our planet and our reality.
We don’t need to hide in virtual realities or imagine alternative universes when the one we have is so rich with possibility and potential. Instead we need to come together to realise our hopes.
I am a scholar of dystopia – a dystopian, if you will. I am an aficionado of dystopia, a connoisseur of the literary and artistic genre in its myriad forms and nightmares.
I consider dystopian thinking to be an evolution, or sometimes an extrapolation, from the precautionary principle, which warns against change for the sake of change. Dystopia is a form of negative imagining, an attempt to envision and render in realistic terms a truly ‘negative place’, the etymological meaning of the term.
In this sense, I find dystopian thinking to be significantly more culturally useful than utopian thinking, which has to a large extent been reduced to a singular political ideology derived from a Marxist strain of post-1960s counterculture.
Whereas utopian thinking has devolved into activist academic attempts to plot routes towards one particular ‘positive place’ future, dystopian thinking has remained broader in its purview. After all, there are many nightmares.
If there is a structural flaw in both modes of art and thinking, it is that in practice they generally extrapolate forward to complete visions: the totalising utopia or dystopia. Rarely if ever do we see depicted the many incremental stages between the world as we know it and the heavenly or nightmarish future world.
Where utopian thinkers in particular have addressed the explicit or implicit developments towards utopia or dystopia, they have, to my mind, missed the point somewhat. The terms ‘critical utopia’ and ‘critical dystopia’ emerged some four decades ago to describe incomplete elements of depicted utopias and dystopias. Thus these key depictions of complexity, nuance and evolution in such literature and art (and philosophy) were reduced to anomalies which could either be countered (in the case of ‘critical utopias’) or fostered (in the case of ‘critical dystopias’).
This was an innovative way of looking at things then, but it was always reductive, and ideologically driven, and at this point its limitations are becoming quite obvious. Actual examination of how society develops towards utopia or dystopia tends to be quite thin on the ground, despite examples existing all around us.
The exception, if there is one, is the regularly bruited risk of a return to 1930s-style fascist governance in today’s democratic societies. The election of leaders with authoritarian populist rhetoric, be they Trump, Orbán or Meloni, is now routinely accompanied by dire extrapolations (and often incomplete historical parallels) which overtly suggest that a slippery slope to neo-Nazi rule is already well underway.
But dystopia, as I said, takes a myriad of forms, and each form evolves and devolves in different ways and at different rates in different cultural and historical circumstances. As a dystopian thinker, I try to look for patterns, for trends, which suggest dystopian vectors in society – ways in which society is moving towards a less civilised state of being for most people.
In this way, many instances seem to pass under the radar. In fact, very often when they do occur, they are depicted as the opposite of what they are: reported as beacons of hope, anomalies which ‘critical utopias’ habitually accommodate in their positivist post-Enlightenment vision of progress ratcheting ever forwards.
These instances are a little like ‘magic eye’ pictures, which were popular a generation back. Once you see it, you can’t unsee it, as they say. I refer to them as examples of dyst-hope-ia, as they are fundamentally dystopian developments, though usually incremental rather than totalising, swathed in a good-news suit of hope to make the bitter pill go down more easily.
In this way, a ratcheting towards a more dystopian society occurs in an almost Huxleyan sense, with the passive acceptance and approval of a population who are actually encouraged to associate such instances with hope rather than its opposite.
This is a little difficult to explain in abstract, so let me offer some concrete examples. Many years ago, I noticed a large building being erected in my district in Dublin. Over many months the grand edifice came together. I didn’t pass it often, so didn’t know what the building was intended to be, until one day in the local newspaper I read that it was due to open the following week. It was a new unemployment welfare office.
The local paper depicted this as a good thing. It was reported as a net good that the unemployed of the area now had a better, bigger dedicated office to deal with them efficiently. But beneath this patina of hope, one swiftly discerns that the expenditure of millions of euro in such a building is a commitment to societal unemployment in the area.
It is in fact an admission of failure – the failure to regenerate the area, or to provide employment for its inhabitants. At the time of its opening I wrote in my journalist’s notebook, “who approved this investment in indolence?” (I used a lot more alliteration in those days.)
Another example comes in today’s news from Britain, which in recent times can be relied upon as a stable and consistent source of dyst-hope-ia. The emergence of a charitable social phenomenon called ‘warm banks’ (though the term itself is never used) is a classic case.
The welcoming warm bank, depicted as a jolly public community space – image courtesy of Getty Images.
Surely the hopeful depiction is legitimate? After all, the idea of the community rallying around to offer protection and support to the most vulnerable among them is a supremely positive and human thing. This is the hope in dyst-hope-ia, the positive cloak in which the nightmare clothes itself, the sheep’s clothing on the dystopian wolf.
Because beneath this surface reaction lies the initial action which created the need for such support: the vast and rapidly escalating food and fuel costs which have left many vulnerable people in Britain with a choice between eating and heating.
And as with food banks before them, warm banks will function not only as a precarious safety net for the vulnerable, but also as a creeping normalisation of a more dystopian society – one in which it becomes unremarkable that people cannot afford to eat or to heat their homes.
What dystopian thinking teaches me is not to dismiss this patina of hope cynically, nor to be seduced into thinking of the overall scenario as a positive development either. It allows me instead to see through the sheep’s clothing to the wolf beneath.
I suggest always lifting the surface of the good news story to check what might be smuggled into normality underneath. I admire the efforts of each and every person who contributes their time or money to keeping their community warm. But I refuse to allow that kind-heartedness to obscure the fact that the government is attempting to normalise the concept of citizens who cannot heat their own homes.
Plagiarism isn’t illegal, says this article. No, it isn’t, anywhere, but it ought to be. Because plagiarism kills.
I heard a decade ago of someone making six figures a year from writing papers for cash. That’s a big temptation for people struggling to get by as a postgrad or post-doc. And indeed, those who provide this service to cheats do get the tiny violins out to justify their decisions in the article linked above.
The bottom line is that everyone knows this is going on, and everyone knows it is wrong. But universities simply don’t care, wedded as they are to their marketised, stack-students-high, sell-degrees-expensively model.
Academics are actively discouraged from challenging cases of plagiarism. It’s a ton of extra work, and often the students simply get a slap on the wrist anyway. The prospect of reporting a suspected case, providing the evidence to a faculty committee, engaging in a formal investigation, and then watching the student be told “don’t be naughty again” is sufficient discouragement for most lecturers, whose time is already at a premium.
But this approach isn’t good enough. Plagiarists devalue the degrees and credentials of their honest peers, and the vast market which has sprung up like foul toadstools to service those who’d rather pay than study is not simply ethically dubious but actively threatens the integrity of the entire educational system.
And it is a real issue for society. Do you want to drive over a bridge designed by an engineer who handed in other people’s essays to get their degree? Or would you like your child to be delivered by a midwife who did likewise? I am aware of cases of students who obtained degrees in both disciplines, and in many others, via consistent and continuous cheating in this manner. We must assume those graduates are in jobs now, in critical positions for which they are not actually qualified.
This is why you should care. It’s time to put the paper mills out of business for good, by making them illegal, and by properly punishing students who engage in plagiarism. Expulsion and a ban from higher education should be the minimum response.
Plagiarism has ramifications in many other areas too. Once uncovered, it generally leads to a significant loss of commercial, institutional or individual credibility and reputation, which is hard to come back from. And of course, where authors have their work stolen (rather than sold), plagiarism literally beggars people, thieving unearned benefits from the work of others.
But to create a truly anti-plagiarism culture, we need to start with education. Perhaps university level is even too late in the cycle, since plagiarism is reportedly also rife in secondary schools in very many nations. But we badly need to start somewhere.
And if higher education doesn’t address its plagiarism problem, it will soon find its expensive degrees becoming ever more worthless, as more and more people simply purchase rather than earn those credentials.
Usually, it is hardcore technophiles who get hurt in the pocket. I still recall people spending £800 on VHS video recorders (about £3,900 in today’s money), only for prices to fall to a fraction of that soon afterwards. Likewise with early laptops and cellphones.
Cutting edge technology c. 1980.
What’s concerning about AI’s earliest adopters is both their blasé attitudes to its many flaws and weaknesses, and their insistence on foisting AI-driven “solutions” upon the rest of us.
Which brings us to the Synthetic Party. On paper, no doubt, it sounds great: remove those problematic humans from decision-making. But politics takes place in the very real world of human society, not on paper or in bits and bytes.
Even if the advent of AI were at worst likely to punish enthusiasts financially, as earlier technologies punished their early adopters, I’d still have issues with it. AI needs to be fed data in order to learn, and that data is your and my personal information, whether gathered legitimately with full consent or not.
However, AI could have ramifications far beyond our worst current nightmares. As always, we dream negatively in Orwellian terms, fearing technology will turn on us like Frankenstein’s monster or the Terminator, when history suggests that dystopia more often manifests in Huxleyan terms.
We are sleepwalking into this and, judging by these Danish early adopters, we will happily embrace our own slavery. It would be much preferable if the cost of AI were merely financial. But the ramifications are likely to be far more profound.
Already in many democracies, a large proportion of the electorate simply don’t engage. And when they do, a growing proportion are voting for parties with extreme ideologies. On our current vector, we could easily end up volunteering for our own obsolescence.
What the Synthetic Party promise is true technocracy – rule by machines and algorithms rather than rule by unelected officials as we currently understand the term. As always, be careful what you wish for.
Already it’s October, when the leaves turn red and fall from the trees, the nights grow longer and the days colder, and the Nobel prizes are awarded.
The Nobel committee for literature does tend to go leftfield when possible. One is therefore required to read into its decisions, a little like an ancient haruspex reading the entrails of chickens, or a 20th-century Kremlinologist interpreting the gnomic actions of the politburo.
How then should we read the decision to anoint the sparse, harsh and uncompromising pseudo-autobiographical work of Annie Ernaux?
To me it seems like a commentary upon Michel Houellebecq and Karl Ove Knausgård. All three are known for writing their big books of me, but perhaps the men are better known internationally than Mme Ernaux. Equally, both Houellebecq and Knausgård have been heavily criticised for, among other things, their misogyny. Awarding Ernaux seems to me to be a reaction to their popularity and to the fact that both have previously been tipped for this prize. Your mileage may vary.
(Full disclosure: I’ve never read Knausgård or Ernaux and have at best a passing familiarity with Houellebecq, who I found to be a very rude interviewee at the Dublin Impac Award in a previous millennium.)
Also elevated to laureate this year was Svante Pääbo, the man who proved that ancient hominid species such as Neanderthals did not entirely die out but in fact persist to this day within non-African human genomes. In fact, I likely owe some Neanderthal ancestor the gene which oversees my melanocortin-1 receptor proteins, which gave me my once russet beard.
What’s intriguing for me personally about this year’s Nobels for medicine and literature isn’t that I’d not previously heard of the literature recipient, nor that I had previously heard of the medicine recipient, but that both these things occurred in the same year. I guess my interests have shifted over the decades away from solely literary pursuits and towards scientific ones, especially early hominids. This year’s prizes have brought that home to me. Congratulations to the winners.
I’ve long criticised the Nobel Prize for Peace, because the committee appointed by the Norwegian parliament to award it has a knack for choosing inappropriate recipients. Hello Henry Kissinger, Aung San Suu Kyi, Barack Obama, UN “peace-keeping” forces, etc.
Nevertheless, I’d argue they got it right this year. The 2022 Nobel Peace Prize has been awarded to human rights advocate Ales Bialiatski from Belarus, the Russian human rights organisation Memorial and the Ukrainian human rights organisation Center for Civil Liberties. Congratulations to them too.
POST-SCRIPT: The newest Nobel physics laureates have also been announced, and their award is for proving that reality, as we currently understand it, is not real in the ways we think it is. Not awarded, though clearly the forefather of all this research (which set out to test his hypotheses), was my compatriot John Stewart Bell, who alas died in 1990 while the experiments proving him correct were still in progress.
John Stewart Bell
Congratulations to Alain Aspect, John F. Clauser and Anton Zeilinger for proving once again that the universe is not only stranger than we think, but most likely as Heisenberg noted, stranger than we can think.
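For the mathematically curious, a compact sketch of what was actually tested: Bell-type (CHSH) inequalities bound the correlations E between measurements made along detector settings a, a′ and b, b′ in any local hidden-variable account of reality, and quantum mechanics predicts – and these experiments confirmed – violations of that bound:

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad
|S| \le 2 \;\;\text{(local realism)}, \qquad
|S| \le 2\sqrt{2} \;\;\text{(quantum mechanics)}
```

Measured values of S above 2 are thus the experimental signature that the universe refuses to behave as local realism demands.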