We see lots of “You can use AI for that” boosterism coming from the corporate right, desperate to convince investors in this stuff that the money they’ve thrown at generative AI will pay huge dividends. So I was bemused to see one Oliver Markus Malloy, apparently a progressive Democrat bigging it up (link to his piece here).
His core argument is that reactionaries (MAGA, its global network of far-right allies and weird techbros) use this stuff, so progressives also need to use this weapon or get destroyed when the right brings its AI gun to a political knife fight. It's classic arms-race thinking - just as the rival imperial powers threw steel and sweat into the pre-World War One dreadnought-building race, he argues, progressives should respond to their rivals' AI by throwing scraped content into producing bigger, badder memes than the enemy. (He also argues that content created by AI can be art, not just derivative "slop", but it's his thoughts on political comms that concern me most.)
I’m sceptical about the “this is an arms race we must win” argument. OK, we’re agreed that the corporate right/far right alliance has an advantage when it comes to flooding the zone with clickbaity content quickly and at scale. The argument goes that by not adopting the same techniques & tools, progressives are ceding the information space to their enemies.
I'm not convinced that progressives should mirror their opponents in this reactive way. I'm not arguing that the far right doesn't use this stuff, or that the sheer volume of content it allows them to pump out doesn't work. After all, "Quantity has a quality all its own" as Josef Vissarionovich probably never said.
But before we decide that "we need some of that", maybe we need to think about what lessons we can learn from the last time the right used IT and big data to make historic gains. In the run-up to the United Kingdom's vote on whether or not to leave the European Union and Trump's first battle for the White House, we saw how bad actors like Cambridge Analytica were able to use big data to flood social media with vast quantities of targeted disinformation and rage bait.
The likes of Facebook and Twitter (as it then was) were overrun with trolls, bots and a blizzard of vaguely plausible but disingenuous messages. Many of the messages contradicted one another but, because they were targeted at specific groups, members of other demographics who saw different messages didn't see the contradictions.
This IT-enabled flim-flam certainly played a role in selling different groups of unsuspecting voters a bill of goods (the mis-sold product being Brexit in the UK and Trump in the US).
So, what would've happened if progressives had a Cambridge Analytica equivalent of their own in the run up to the 2016 EU referendum and US presidential votes? Could they have swung the results in different directions? I'm not so sure, because what we were looking at then wasn't just a gap in capabilities but a gap in content and ethics.
Remember what I said about micro-targeted and often contradictory messages? Yes, delivering those messages was a tech problem, but the content of the messages was as important as the medium. Tech aside, the innovation was to efficiently send contradictory messages tailored to appeal to different sets of voters and to amplify existing misinformation which was already out there in the wild, courtesy of years of mendacious campaigning in the right wing press. Think about that for a moment.
Using that tech wouldn't have worked (at least not in the same way) if the people using it had been held back by the moral scruples needed to make an honest case. To use the example I'm most familiar with, the EU Referendum and the effectiveness of the Leave campaigns:
...But, as I discovered while knocking on doors during the campaign, many Britons believe all sort of bizarre things about the EU that have no basis in fact, and the source of which is ultimately newspapers – for example, that most immigrants to Britain come from the EU, that 20 per cent of the population are EU migrants or that 75 per cent of Britain’s laws are made in Brussels.
Before and during the campaign, the eurosceptic newspapers carried a strong message that EU migrants were causing enormous problems in Britain. They ran front-page after front-page of scare stories about how migrants and refugees were trying to get into the country – often conflating the two groups. Many of these articles were factually incorrect. Even on the day after the Orlando shootings in Florida, the Daily Mail – uniquely among UK papers – led its front page with ‘Fury over plot to let 1.5 million Turks into Britain’. The written press did a great job in reinforcing Vote Leave’s twisted message that thousands of foreigners – whether asylum-seekers, Romanians, Syrians, terrorists or Turks – were all hell-bent on entering the country.
From a piece by Charles Grant of the Centre for European Reform.
Ultimately, the Leave campaigns' ability to leverage their big data advantage rested on their willingness to lie shamelessly without fear of retribution. As Charles Grant put it: "They exploited the fact that in political advertising, unlike commercial advertising, there are no penalties for untruths".
So I'm not sure that, even with the same resources, the other side would've been able to win big by upping its big data/social media game. Unless, of course, it was willing to lie as shamelessly itself. The tech was just the delivery system. The weapon itself was made of good old-fashioned lies and bullshit, and its effectiveness relied on our old friend the bullshit asymmetry principle, AKA Brandolini's Law ("The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it").
There are two main reasons why I'm against using the latest tech tools to "flood the zone with (our) shit" in an effort to emulate how the bad guys have been winning so far. The first reason is that it's lying, and that's just straight wrong.* Secondly, it moves the fight onto the enemy's territory. If the opposing party has no qualms about lying on an industrial scale and you decide to get into a lying contest with them, you're going to lose.
You'll lose because blurring the lines between lies and truth is more helpful to liars than it is to members of the evidence-based community. So long as there is still some shred of evidence or objective truth out there by which people can judge them, liars and propagandists are vulnerable. When both sides are unreliable and there's no objective standard of truth, you're into a world of "he said she said" and "they're all as bad as each other".** And, aside from direct disinformation, getting there is a win for bad actors from Cambridge Analytica to MAGA influencers to Russian troll farms. Yes, we're circling back to that famous Garry Kasparov quote:
The point of modern propaganda isn't only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth.
In other words, creating a space where there are no objective standards to which the powerful can be held, or by which they can be judged. Believe me, you don't want to go there.
So how does this map on to the latest wave of AI-driven political comms? Well, the next image seems to have been a viral anti-MAGA hit. It allegedly upset the MAGA regime so much it led to a Norwegian tourist being refused entry to the Land Of The Free™ for having it on his phone (I think the US side eventually denied that this image was the reason but, honestly, I can wholly believe that the MAGA crowd are thin-skinned and control-freakish enough to have barred someone for making fun of them):
![Meme of J D Vance as a bald, bearded baby]()
And you'd need a heart of stone not to appreciate those pompous assholes being mocked, or the Streisand effect that kicks in when they try to suppress the mockery.
The problem is ... as far as I know, the viral J D Vance memes were produced using Photoshop, not AI.
As far as I know. But even if they were the products of AI, would reaching for AI be worth it? Clearly you could do this in Photoshop or similar without colluding with the damage AI is doing: harm to the environment, wholesale copyright theft, the disruption of creative jobs (no, techbros, "disrupting" isn't automatically a good thing - while it may be good to disrupt, say, a terrorist cell, disrupting the lives of people who are just trying to make a living without harming anybody else is a bad thing), and the further widening of the already obscene inequalities of wealth and power, all in the service of what may well be yet another very expensive financial bubble.
And that's without considering that AI is asymmetrically attractive to bad actors who want to generate big volumes of content quickly without being concerned by troublesome details like truth or ethics. Deepfakes, amplifying existing prejudices (even when there isn't a man like Elon "Roman salute my arse" Musk behind the curtain obviously tweaking his pet AI to let it express its inner MechaHitler)...
... the list of opportunities for plausibly deniable deception & generally messing with people's heads is long and depressing. What I'm seeing is another tech-enabled opportunity for people with no principles to flood the zone with shit in much the same way they did a decade ago. The opportunities for progressives and people who value straightforward messages anchored in reality and something approaching objective truth seem a lot more constrained.

OK, beyond dumping on AI, do I have any thoughts about what would work? Well, the ten years since the last tech attack seem to have been largely years of lessons not learned. But not everywhere. In Finland, they got the memo about going after the harmful content, actively pushing back against fake information, fake images and clickbait being used to attack the principle of informed consent, with a programme of education and public information aimed at children and adults alike.

Another lesson not learned in too many places by too many politicians is that there are plenty of progressive policies which are popular and can cut through, even against the screeching coordinated right-wing media claque, if they're presented clearly and with conviction. Just ask Mr Mamdani.

And remember the lesson of the J D Vance memes: these militant far right characters are freakishly weird. Stop normalising their batshittery and mock them - it's easy when you try, and the best part is that we're quite smart enough to do it right now without the help of AI.
*Yes, I know that technically there are cases where it may be right to lie (e.g. lying to a knife-wielding maniac about the whereabouts of his intended victim) but that's not the sort of scenario we're considering here.
**This both-sides-ism already happened during the Brexit debate - even though the vast majority of points made by the Remain campaign were broadly correct and borne out by subsequent events, the reaction of the Leave campaigners to their lies being called out was to find a dodgy statement from one of the millions of Remainers (or to make one up) and shout "See! Both sides!", blithely ignoring the fact that they'd been caught red-handed themselves.