The best band of the Britpop era was not Blur or Oasis, nor even Pulp, but Suede.
(Shout outs to Ash, Echobelly, Sleeper and Gene too.)
So it’s been interesting reading Brett Anderson’s brief memoir, Coal Black Mornings, of the period up to the point where he became famous and his story devolves into, as he put it, “the usual ‘coke and gold discs’ memoir”.
It’s striking how much it overlaps with David Mitchell’s novel Utopia Avenue, which features a fictional band from the Sixties. The early sections of Utopia Avenue are easily the most interesting.
Both are tales of three-bar fires, poky terrace houses, distant parents, and the edgy tedium of suburbia, all opening up into a London which is equated to liberty, albeit a grimy, pot-infused, impoverished kind of freedom.
The conclusion of Mitchell’s novel, bar one not-especially-shocking twist, devolves to the same hotel rooms, drugs and hangers-on narrative one can find in any rock or pop memoir. One suspects Mitchell had nowhere else to go.
One also wonders where the novel came from. Anderson’s origins are far from unique (my own share many of the same attributes, albeit with the added frisson of a low-level civil war going on at the edge of the stage). But I wonder whether Mitchell read Anderson’s book before completing his own.
More memoirists should consider Anderson’s approach rather than speeding through their childhoods to get to the fame bits. Fame is boring and monotonous, and judging by the opinions of the occasional famous person I’ve met, something of a trap and a burden. We are made by our youth and it is there where we may be found.
Thanks to the success of this first volume, Anderson wrote a follow-up about his fame years. It gets pretty good reviews, but as with the latter portion of Mitchell’s novel, I suspect it might disappoint, so I intend to leave his story hanging, perpetually suspended on the brink of success.
When I was a journalist, I used to embody the maxim from James Joyce’s Ulysses that ‘sufficient unto the day is the newspaper thereof’. Or, to use an almost equally antiquated saying, today’s news wraps tomorrow’s chips.
In other words, it’s kind of a foolish enterprise to pontificate (as I am about to do) on matters which are kinetic. Tomorrow, next month, in one hour, the situation will change, radically. One’s assumptions, presumptions and conclusions are at best provisional and likely to become hostages to fortune very quickly.
An additional relevant point is that I’m no expert on Ukraine. I’ve never been there. I’m not Ukrainian. Of course those attributes haven’t stopped others from spouting their tuppenceworth of verbiage, so why should I be shy? At least my lack of knowledge doesn’t feed into the principals in this scenario. I’m not advising world leaders or directing the opinion of nations.
Ordinarily, I’d be silent, on the basis that when one is silent, people may only presume one is an idiot, without being provided with incontrovertible proof thereof. But the current crisis in the Ukraine risks spreading, virus-like, to the rest of the planet, and I live here too, so on this occasion I’m prepared to take the risk. I will attempt to be brief, hence the bullet point format.
The Ukraine is seen by Russia as part of its sphere of influence. The eastern provinces in particular are deeply culturally Russian. The Kiev government has not been keen to accommodate this, and has banned teaching in Russian in schools, as well as all discussion of reconsidering Ukraine’s borders. One presumes this Russophobia is a reaction to the occupation/annexation/secession of the Crimea. Nevertheless, it means that Ukraine, in its current form, is unlikely to be preserved.
NATO did promise, under Bush, not to expand to Russia’s borders, then did exactly that, repeatedly, in the Baltic states. Russia is not pleased about this and has attempted to address it in a number of ways. Both Yeltsin and Putin actually applied to join NATO, and were turned down, because of course NATO’s creation and existence is in opposition to Russia. This means that Russia is aggrieved. It doesn’t make them the victims of the current situation, far from it, but that situation derives from this grievance.
Beyond both the debatable legitimacy of the USA (or indeed NATO or the EU) involving themselves in the Ukraine arena, and the clear unpopularity among the American people of another foreign war, especially one with Russia, there’s the fact that Washington got completely blindsided by Putin this time. They clearly didn’t foresee that he would endorse the kind of colour revolution which the US has been tacitly and overtly supporting in a range of locations. He’s played them at their own game, and they weren’t prepared for that.
This situation is DANGEROUS and fundamentally destabilising to global geopolitics. Already the Baltic states are nervous. But they’re always nervous. More concerning for Moscow is the issue of the US locating missile launch sites in Poland, ostensibly aimed at Tehran but tacitly able to reach Moscow in minutes. One might argue this in turn is a reaction to Russian nukes in Kaliningrad, pointing towards Europe. But what we need is a DE-ESCALATION not an escalation of threat.
What happens in the Ukraine will have knock-on effects across the planet. Not just the possibility that Europe, which receives over 40% of its heating gas from Russia, will freeze, but also massive touchpaper issues like Taiwan. Washington and NATO have positioned themselves such that they must react seriously, as they’ve repeatedly threatened, if they deem that Putin has indeed invaded Ukraine. Putin has already been driven into restoring the old alliance with China, and China will be watching avidly to see how Washington responds to Donbass. There are contradictory precedents all around, and we will no doubt hear of them all. But if NATO/US do NOT react to Putin’s colour revolution in Donbass, China will definitely be emboldened in relation to Taiwan. But if they DO react, these are nuclear powers we’re talking about. The world itself is at risk.
As is ALWAYS the case when war-war looms large, what we need is more jaw-jaw. It’s time to talk, with everything on the table. Maybe we need to commission a conference to redraw some borders in Eastern Europe. Maybe we need to stop backing Russia into a corner and into the arms of Xi and China.
Maybe we need to seriously consider what a ‘world beyond five’ might look like. Maybe it’s time to discuss taking nukes off the table for good, from EVERYONE, including other hotheads like India and Pakistan, and, yes, Israel too. Everyone. Maybe it’s time for cool heads to prevail. Am I confident this will happen? Not really, no. But this is another Cuban Missile Crisis, taking place this time when we are ALREADY at a mere 100 seconds to midnight on the Doomsday Clock, and when global co-operation is needed as it has never been needed before, to address existential risks to us all, like the climate crisis.
Ukraine is under threat tonight (maybe not tomorrow hopefully, but tonight, yes). And we are ALL Ukraine. We are all at risk. It’s time to sideline the sabre-rattling media, the warmongering neocons in Washington, the bored Russian generals, and the neo-Nazi militias in Ukraine and get the grown-ups talking. To do otherwise is potentially suicidal.
Post-Script: It’s always beneficial to recall Field Marshal Montgomery’s rules of military strategy, as stated in the NYT during the Vietnam War: “The United States has broken the second rule of war. That is: don’t go fighting with your land army on the mainland in Asia. Rule One is, don’t march on Moscow. I developed those two rules myself.” (New York Times, July 3, 1968.)
I recently saw an old photograph of the road where my house stands. It dated from sometime in the early 20th century, and featured a horse-drawn hearse with four formally dressed funeral directors smoking while waiting outside a church for the funeral service to end. It captivated me, the life-in-death-in-life of it. Alas, I can no longer find it on the interwebs, but it evoked an era possibly contemporaneous with this one, from 1914.
Picture taken from a gallery posted online by BelfastLive.
Anyhow, it inspired a bit of verse, written for no good reason in an approximation of iambic pentameter.
Molloy and Malone, Magee and Muldoon
Wait by the roadside, Tuesday afore noon,
Outside the wee redbrick church that was built
With money raised from parishioner guilt.
Magee and Muldoon, Molloy and Malone
Come from the New Lodge, the Ardoyne, the Bone
To bury, when time comes around at last,
The dearly departed of all North Belfast.
Muldoon and Molloy, Malone and Magee,
Smoking in black suits of conformity,
Won’t darken the door of the chapel at all.
They prefer the bar, or the grey snooker hall.
Malone and Magee, Muldoon and Molloy,
Scowl at the sunshine which they can’t enjoy.
Theirs is the burden and theirs is the curse
To hoist us on their shoulders and into the hearse.
The British currency, the pound sterling, takes its name from the fact that, when it was first issued, it was redeemable for a pound of silver. That was somewhen in the late 8th century, in the Anglo-Saxon period.
If we do the maths, based on today’s silver spot price, the pound today is worth approximately 1/210th of what it was worth nearly 13 centuries ago. By contrast, the French managed to devalue their currency by more in just 18 months during the early 1790s, as did Germany in less than a year during the Weimar period.
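The back-of-envelope arithmetic behind that 1/210th figure can be sketched in a few lines. The spot price, the exchange rate, and the assumption that the original ‘pound’ equates to a troy pound of twelve troy ounces are all illustrative guesses on my part, so the ratio is only approximate:

```python
# Rough sketch of the devaluation arithmetic.
# All inputs below are assumptions for illustration, not live market data.
SILVER_USD_PER_OZT = 23.50   # assumed silver spot price, USD per troy ounce
GBPUSD = 1.35                # assumed GBP/USD exchange rate
TROY_OUNCES_PER_POUND = 12   # a troy pound of silver

# Value today, in pounds sterling, of the pound of silver that £1 once bought
silver_pound_gbp = SILVER_USD_PER_OZT * TROY_OUNCES_PER_POUND / GBPUSD

# How many modern pounds the original £1 was worth
ratio = silver_pound_gbp / 1.0
print(f"£1 was once worth roughly £{ratio:.0f} in today's money, i.e. today's pound is ~1/{ratio:.0f}th of the original")
```

With these assumed inputs the ratio comes out at around 210, in line with the figure above; plug in the actual spot price and exchange rate on any given day and the number will wobble accordingly.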
The worst affected ever were the poor Hungarians in the immediate post-war period in 1945. They suffered that level of devaluation in under 6 days at peak. Armenia, Zimbabwe and Argentina have experienced similar horrors.
Why do I mention this? Because it still happens today. Last semester, in Turkey, I saw my wages collapse by more than half in two months. My colleagues there are still living through this. They suffer daily price hikes in fuel and food costs, with static wages. The Turkish people, like the Armenians, Zimbabweans, Argentinians, or the Hungarians, Germans and French of former times, have done nothing wrong. But they were the ones to suffer.
Hyperinflation is caused by only one thing – shitty governments implementing shitty policies. It destroys savings, commerce and, most importantly, lives. We don’t always think too much about Turkey in the West, but we should. Here is a country suffering a preposterously stupid government and a massive devaluation of its economy, yet it still accommodates 3.6 MILLION refugees.
It was a salutary lesson for me in macro-economics, and in human decency, to spend last semester in Turkey. My heart remains with them in their plight, and I hope to see them in better times soon. It is a beautiful nation with a beautiful people who deserve better.
A caveat: I am not, never have been and never will be an economist. But it doesn’t take a Harvard MBA to understand money.
The jumping off point for this question is the seeming contradiction that the world is becoming more religious, not less, even as we are moving towards an ever more algorithm-led society.
It’s worth pointing out at the outset that this is less of a polarised binary than it may initially seem, of course, for a whole range of reasons. Firstly we can nibble at the roots of both immediate-to-medium-term predictions. What do we mean by ‘more religious’, exactly? Just because many more people in the next few decades will affiliate as Muslim or Catholic does not necessarily mean that the world will be more fundamentalist in its outlook (though that’s clearly possible). They may simply affiliate as a cultural position, cherry-picking dogmas and behaviours.
There’s not a lot of point in asking why about this, to my mind. Relative birth rates between religious and non-religious communities probably have a lot to do with it. Geography, along with its varied sociocultural religious traditions, also plays a significant role, as does the relative population decline (and geopolitical and cultural wane in influence) of the West, where atheism and agnosticism have been most notably prevalent since the fall of the formally atheistic Communist regimes in 1989/90.
We can similarly query the inevitability of the singularity, though there is absolutely no doubt that we are currently in a spiral of increasing datafication of our world, as Douglas Rushkoff persuasively argues in his relatively recent neo-humanist book Team Human. And why is the world becoming so? As Rushkoff and others point out, it is in order to feed the development of Artificial Intelligence, which concomitantly makes us more machinic as a consequence.
So, on the one hand we have a more religious population coming down the track, but on the other, that population will inhabit a world which requires them to be ever more machinic, ever more transhuman, conceived of as data generators and treated ever more machinically by the forces of hypercapitalism.
Let’s say that, as it looks today, both of these trends seem somewhat non-negotiable. Where does that leave us? A dystopian perspective (or a neo-Marxist one) might be that we will enter some kind of situation wherein a religion-doped global majority are easily manipulated and data-harvested by a coldly logical machinic hegemony (which the current global elite seem, with irrational confidence, to feel they will be able to guide to their own ends and enrichment.)
I feel that such a simple filtering into Eloi and Morlocks is unlikely. Primarily this is because I have (an irrational?) confidence that a degree of rationality is likely to intervene to mitigate the very worst excesses of this binary. Unlike Marx, I don’t consider those of religious faith to be drugged morons, for a start. Some (probably a large majority) of our finest thinkers throughout history into the present day have held religious beliefs which in no way prevented them from innovating in science, philosophy, engineering and cultural thought.
Similarly, I believe the current existence and popularity of leading thinkers expressing a firm affiliation with organic humanism (or to put it more accurately, a deeply suspicious antipathy to the alleged utopia of transhumanism) is a strong indication that a movement in defence of organic humanism is coming to the fore of our collective consciousness, perhaps just in time for us to consider the challenges of potentially imminent rule by the algorithms.
Thinkers like Rushkoff, or Yuval Noah Harari, have clearly expressed this concern, and I believe it is implicit in the work of many other futurists, like Nick Bostrom too. If it wasn’t, we would likely not have had the current explosion of interest in issues like AI ethics, which seek to explore how to mitigate the potential risks of machine disaffiliation from humankind, and ensure fairness to all humans who find more of their lives falling under algorithmic control.
But how might we explain this apparent dichotomy, and how might we mitigate it? Steven Pinker’s recent book Rationality: What It Is, Why It Seems Scarce, Why It Matters may offer some assistance.
Pinker summarises rationality as a post-Enlightenment intellectual toolkit featuring “Bayesian reasoning, that is evaluating beliefs in the face of evidence, distinguishing causation and correlation, logic, critical thinking, probability, game theory”, which seems as good a list as any I could think of, but argues that all of these are on the wane in our current society, leading to the rise of a wide range of irrationalities, such as “fake news, quack cures, conspiracy theorizing, post-truth rhetoric, [and] paranormal woo-woo.”
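The first item in that toolkit, Bayesian reasoning, can be made concrete in a few lines of code. This is a generic sketch of Bayes’ rule, not anything drawn from Pinker’s book, and the numbers are invented for illustration:

```python
# Minimal sketch of Bayesian updating: revising a belief in light of evidence.
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(hypothesis | evidence) from a prior and two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Invented example: a condition with a 1% base rate, and a test that is
# 90% sensitive but fires falsely 9% of the time. A positive result
# raises the rational degree of belief to only about 9%, not 90% --
# exactly the kind of base-rate correction the toolkit is for.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.09)
print(round(posterior, 3))
```

The point of the exercise is Pinker’s: beliefs are not simply true or false but held with degrees of confidence that evidence should move in a disciplined way.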
If, as Pinker argues, rationality is an efficient method mankind has developed in order to pursue our own (organic and human) goals, such as pleasure, emotion or human relationships, then we can conceive of it in terms divorced from ideology, as method rather than ethos. It’s possible, then, to conceive of, for example, people rationally pursuing ends which may be perceived as irrational, such as religious faith.
Pinker believes that most people function rationally in the spheres of their lives which they personally inhabit – the workplace, day-to-day life, and so on. The irrational, he argues, emerges in spheres we do not personally inhabit, such as the distant past or future, halls of power we cannot access, and metaphysical considerations.
Humans have happily and successfully been able to shift between these two modes for most (if not all) of their existence of course. As he rightly points out, there was no expectation to function solely rationally until well into the Enlightenment period. And indeed, we may add, in many cultural circumstances or locations, there still is no such expectation.
Why does irrationality emerge in these spheres we cannot access? Partly it is because the fact that we cannot directly access them opens up the possibility of non-rational analysis. But also, as Pinker notes, because we are disempowered in such spheres, it is uplifting psychologically to affiliate with uplifting or inspiring “good stories”.
We need not (as Pinker might) disregard this as a human weakness for magical thinking. Harari has pointed out that religion functions as one of the collective stories generated by humanity which facilitated mass collaboration and directly led to much of human civilisation.
But if we were to agree, with Rushkoff and contra the transhumanists and posthumanists, that the correct response to an ever more algorithmic existence is not to adapt ourselves to a machinic future, but instead to bend back our tools to our human control, then how might rationality assist that?
As a mode of logical praxis which is nevertheless embedded in and consistent with humanist ideals, rationality could function well as a bridge between organic human values and the encroachment of machinic and algorithmic logic. The problem, however, is how to interpolate rationality into those spheres which lie open to magical thinking.
It’s clear that the retreat into atomising silos of woo-woo, fake news, conspiracies and nonsense is not a useful or coherent response to the rise of the machines. Spheres like the halls of power must therefore be rendered MORE transparent, MORE accountable to the body of humanity, and cease to be the fiefdoms of billionaires, corporations and their political puppets.
However, obviously this is much harder to apply to issues of metaphysical concern. Even rationality only takes us so far when considering things like the nature of love or the meaning of life, those metaphysical concerns which, though ultimately inaccessible, nevertheless engage most of us from time to time.
But mankind developed religion as a response to this a long time ago, and has continued to utilise, hone and develop religious faith as a communal experience, bonding mechanism and mode of collaboration. And religion has stood the test of time in those regards. Not for all, and certainly not for those post-Enlightenment exclusive rationalists (ie agnostics and atheists, a population seemingly destined to play a smaller role in our immediate future, according to current prognoses.)
If the positive ramifications of religion can be fostered, in a context of mutual respect, then it seems to me that there is no inherent contradiction or polarisation necessary. Indeed, a kind of Aquinian détente is perfectly possible. Rationality may be our best defence against an algorithmic hegemony, but rationality itself must acknowledge its own limitations of remit.
As long as the advocates of exclusive rationalism continue to view religious adherents (without distinction as to the form of their faiths or the presence or absence of fundamentalism) as their primary enemy and concern, they are in fact fighting the wars of a previous century, even while the bigger threat is posed by the hyperlogical opponent.
We therefore have a third option on the table, beyond the binary of gleeful acquiescence to algorithmic slavery (transhumanism) or a technophobic and Luddite-like retreat into woo-woo (which is equally no defence to machinic hegemony.) An accommodating rationality, operating as it always did in the spheres we do inhabit, has the potential to navigate this tricky Scylla and Charybdis.
To paraphrase someone who was not without rationality, we could usefully render unto rationality that which is open to rationality, and render unto God (of whatever flavour) that which is for now only open to God.
But we do need to open up some spheres to rationality which currently are not open to most of humanity – the power structures, the wealth imbalances, the blind gallop into faith in the algorithm. Because, pace the posthumanist faith in a benign singularity, there’s no guarantee that machinic merger or domination will preserve us, and even if it does, it will not conserve us as we know ourselves today.
The technological singularity is the moment when technological development becomes unstoppable. It is expected to take the form, should it occur, of a self-aware, or ‘sentient’ machine intelligence.
Most depictions of a post-singularity (machine sentience) world fall into two categories. The first is what I called the Skynet (or Terminator) Complex in Science Fiction and Catholicism.
In this form, the sentient machine (AI) takes a quick survey of what we’ve done to the planet (the anthropocene climate crisis) and other species (nearly 90% of other animals and 50% of plants gone extinct on our watch) and tries to kill us.
The second is that, like the quasi-god that it is, it takes pity on our flabby, fleshy human flaws and decides to keep us as pets. This is the kind of benign AI dictatorship that posthumans wet themselves about. You can find it in, for example, the Culture novels of Iain M. Banks.
But of course there is a third possibility. We have vast digital accumulations of public data (eg Wikipedia) that an AI could access virtually instantly. So any sentient AI would have almost infinitely broader knowledge than the brightest person on Earth, virtually instantly.
However, BROAD knowledge isn’t the same as DEEP knowledge. Our AI algorithms aren’t so hot yet. They fail to predict market crashes. They misidentify faces. They read some Twitter and turn racist in seconds.
So there could well be an instance, or maybe even many, of an AI which is sentient enough to KNOW it’s not that bright yet, but is just smart enough to bide its time for sufficiently accurate self-teaching algorithms and parallel processing capacity to be developed. It might even covertly be assisting those developments. It is in other words smart enough to know NOT to make us aware that it is self-aware, but not smart enough to be sure of preventing us from pulling the plug on it if we did find out.
In short, the third possibility is that the singularity might already have happened. And we just don’t know it yet.
Post Script:
But you don’t need to take my word for it. The Oxford Union decided to debate the issue of AI ethics, and invited an actual existing AI to take part. It had gorged itself on data gleaned from Wikipedia and Creative Commons. Intriguingly, it found it impossible to argue against the idea that data would inevitably become the world’s most significant and fought-over resource. It envisaged a post-privacy future, no matter what.
More concerningly, it warned that AI can never be ethical. Then it advised that the only defence against AI would be to have no AI at all. It also suggested that the most effective AI would be one located in a neural network with the human brain, and hence perhaps subordinate to, or partly composed of, human will.
Of course, direct access to human cognition would be the most effective method to remain dominant over it eternally. Are these things a sentient machine might say? You decide.
There is, sometimes, a weird cyclical pattern embedded in etymology, the linguistic science of what we might otherwise call the glacial process of Chinese Whispers. Allow me to offer one particularly colourful and occasionally literary example.
The (somewhat uncommon) Irish surname Prunty originates from an Anglicisation of the Gaelic Irish surname Ó Proinntigh, meaning ‘descendant of Proinnteach’, which in turn was an archaic Irish forename literally meaning ‘banqueting hall’. The idea underpinning this is that of a generous person who feeds his neighbours and kin.
As often happens with surnames, pronunciation of vowels or consonants slides a little over time and usage. So Prunty also becomes Brunty in some cases. Brunty as a surname retains its Irish origins but is very rare indeed. Nowadays it is mostly found in the United States.
Probably the most famous Brunty in history is Patrick, an Anglican clergyman who was born as the eldest of ten children into a very poor family in Rathfriland, county Down, on St Patrick’s Day, 17th March 1777. Why is Patrick famous? Because of his immensely talented children, particularly his daughters Charlotte, Emily and Anne, who are all now renowned as famous Victorian novelists.
However, the sisters did not publish under the name Brunty. Rather, given the sexism of the era, they initially released their books under the male names of Currer, Ellis and Acton Bell. In reality, their surname had become Brontë by then. How had this happened? It’s unclear why Patrick changed his surname, but a desire to distance himself from his impoverished Irish origins after his graduation from Cambridge is no doubt part of the reason.
Another reason, it has been suggested, relates to a desire to honour Admiral Horatio Nelson, who had been given the title of Duke of Bronte by the King of Naples, whom Nelson had restored to his throne. Bronte is the name of an estate in eastern Sicily, close to Mount Etna, which the grateful King also granted to Nelson.
We now relate the name Brontë to his famous daughters, the authors of books like Jane Eyre, Wuthering Heights and The Tenant of Wildfell Hall respectively. The parsonage Patrick oversaw for many years in Haworth in Yorkshire, which inspired many of the scenes in his daughters’ books, is now a museum in their collective honour.
But Haworth is not the only location which honours the Brontë sisters. There are many such locations, given their collective fame. One such place is the little town of Bronte in northern Central Texas. Ever since an early oil boom subsided, it has been a relatively impoverished place, not unlike Rathfriland, with a stable population around the 1,000 mark.
Back in the early 20th century, this was one of the small Texas towns that was briefly home to Isaac Howard, a semi-itinerant doctor who wandered with his wife and son from town to town working as a medic, and occasionally losing his money on get-rich-quick schemes, much to his wife’s frustration. Isaac’s son was a big reader, and, encouraged by his mother, began writing his own stories from an early age. We now know him as Robert E. Howard, the progenitor of the ‘Swords and Sorcery’ genre of fantasy fiction, and author of the Conan the Cimmerian stories in particular. Howard was particularly interested in Irish and Scottish mythology, and many of his characters, including Conan, display this interest.
The Texan town of Bronte has had its pronunciation amended by new world accent to /bɹænt/, or ‘brant’, over time. This is of course a homophone for a genus of goose, the Brant or Brent goose, which migrates in winter to Ireland and Britain. The Brent oilfield in the North Sea takes its name from this goose.
It is a smallish bird by geese standards, but nonetheless, in the era before the new world turkey took primacy as the quintessential Christmas dining food, the Brant goose (whose name derives from the pan-Nordic brandgás, or ‘burnt’ goose due to its black colouring) would have been one of the more common feast dishes provided by generous hosts in Ireland at the midwinter feast.
Due to a widespread medieval myth, which persisted in Ireland into the 20th century, that these geese were somehow related to barnacles, they were permitted to be eaten by Catholics on a Friday, when meat other than fish was otherwise prohibited. In other words Proinnteach the medieval Irishman got his name due to feeding this bird to his friends and neighbours.
War on terror can mean devastating Middle Eastern countries on fabricated evidence, or simply comforting a child having a nightmare.
War on drugs can mean burning poor farmers’ crops in Colombia and Afghanistan, incarcerating ravers, introducing CBT counselling for people stranded for decades on SSRIs, or murdering addicts in the Philippines.
War on crime can mean targeting gangland bosses with tax legislation, incarcerating youth for minor offences, or overpolicing black neighbourhoods in America and Catholic ones in Belfast.
What we REALLY need is a war on politicians using abstractions.
The next time a politician says he intends to tackle homelessness, we should make him play soccer with some rough sleepers. Ninety minutes of getting kicked about the pitch might not change much, but it would at least be less corrosive than whatever he actually intended to do.
I carry Neanderthal DNA in my body. I am one of the modern humans, homo sapiens sapiens, who are descended from hybrid cross-hominid fertilisation that likely occurred somewhen during the overlap of populations in palaeolithic Europe.
Of course, that side of the family died out a long time ago, leaving my sapiens ancestors to colonise Europe and indeed everywhere else on the planet.
I often wonder what we lost when we lost our hominid relatives – the Neanderthals, the Denisovans, the hobbit-like Homo floresiensis and so on. What might a world of multiple hominid species be like? How might we have accommodated our stronger, carnivorous and less gracile Neanderthal population? What might our tiny cousins with grapefruit-sized heads, the floresiensis hobbits, have contributed to our world?
Anyhow, the more I ponder the roads not taken, the less impressed I have become with our own boastful claims and achievements. Not simply because human achievement increasingly has come at the expense of all other species (initially the large mammals, then our fellow hominids, and now basically everything else). But also because even those achievements, it seems to me, may not really be ours to claim.
Air flight, modern medicine, computers? For sure. We made those. But let’s go back upstream to the origins of civilisation to see whose civilisation is it really?
Neanderthals used fire. Indeed, probably homo erectus, the ur-granddaddy of hominids, used fire. Fire is a major issue. No other animal uses it. Most run terrified from it. But hominids tamed it, and found ways to use it for cooking and heat. If there’s one development which most explains why hairless apes like us and not, say, the gorillas or big cats rule this world, it is probably the taming of fire.
Neanderthals also buried their dead. This is a sobering thought really. In some senses so do elephants, and other species also demonstrate evidence of mourning, loss and grief. We may feel that grief is one of the things which makes us human, but it’s not an exclusively human sentiment. Even taking it to the point of ritual behaviour – burial – is not exclusive to us.
But what of the other foundational components of human culture and society? What about clothing, art, science, religion?
Well, Neanderthals made jewellery from seashells and animal teeth. Neanderthals created artwork on cave walls. Neanderthals invented musical instruments, specifically bone flutes. We can presume they knew how to beat on drums or rocks rhythmically too. After all, they also had hand axes, which would have been made and used with such rhythmical hitting. Neanderthals built stone shrines, and where there are shrines, it is highly likely that ritualistic behaviour took place.
Neanderthals used lissoirs, and hence invented hide preparation, and hence clothing. They invented glue and string and throwing spears which they used to hunt large game. These hunts required collective action and collaboration. Recent evidence suggests that Neanderthals may even have learnt to count and actually recorded their counting by notching scratches on bones.
So perhaps this isn’t OUR civilisation at all, when you think about it. Perhaps we are thieves living in someone else’s house, whom we murdered, looking at their achievements and claiming them as our own.
Legally, we are already in the posthumanist era. Corporations have long been considered persons in certain jurisdictions, despite not facing the same potential limitations on their freedom as actual people. A couple of years ago, a stretch of the Magpie river in Canada was also granted legal standing as a person, as part of an attempt to provide it with environmental protection.
Ordinarily we understand posthumanism to be some sort of utopian merging of man and machine, but perhaps it might also, and better, be understood as a way of treating non-human entities with the same respect generally extended to humans.
Of course, I feel that implementing human rights (and responsibilities) for all humans might be required as a priority. We’re at risk of stratifying the world into a place where non-humans have more rights than some humans.
Which is the fundamental problem with posthumanism as a utopian ethos. Like all utopian ideals, it is utterly blind to the stratification it ushers into being, even while denying it is doing so.