I like a bit of AI slop, sometimes. I enjoy generating brutalist illustrations inspired by 1960s Chinese propaganda posters for my blog, because... well, I like them. Making slop is cheap, borderline free actually, and extremely quick and simple. But it will alienate some readers. Some people just plain hate that slop. So what do I do?
The same thing I would do for cookies, if I wanted to use them: I'd ask for user consent, right?
Why people hate slop
There are perfectly legitimate reasons for not enjoying AI generated images, apart from the glitzy-yet-trashy vibe they give off. There is solidarity with the collapsing art professions and the illustrators going unemployed. There is the environmental impact, and a reasonable objection to all the tech bros who are running politics now. Not to mention the whole AI bet/bubble in general: the idea that we should let big tech commoditise cognition.
Those are good reasons, but I think there is more to it. I think slop triggers anxiety about the future. Just seeing identifiable AI images is a reminder that the way of life people spent their youth and their savings preparing for might not continue to exist. AI generated images trigger that uncertainty, even when presented as cartoony cuteness.
The slop is just the trigger
The anxiety is about seeing the bottom drop out of market after market: all the aspiring developers who will never get to write code for money. Musicians, YouTubers, designers... traditionally safe jobs are reportedly next: accountants, legal, HR, most sorts of work done at a desk. Probably not all the positions, but enough to create a significant workforce surplus and a very high barrier to entry. It means a whole lot of people will have to plan a different kind of career, and some think they might have no career at all. Slop photos of cats in papal robes are less amusing when that is where they take your thoughts.
So people are not really turning away from slop because they hate cheap trash per se. They don't fear hands with extra digits, stairs that lead nowhere, or even ghiblification. What they fear is a future that has no use for them. It is not the image, it is what it represents. It is the whole tool chain, the world order, that they reject. Perhaps it is not so much anxiety that is triggered as anger and resentment: not so much towards the subject depicted as towards the institutions that find it perfectly acceptable to use that technology. That gives the content category a negative brand proposition for a large and growing segment.
Pattern recognition
If it is indeed the case that slop triggers anxiety (and rage, and other strong negative emotions), AI slop joins a category of objectionable imagery that we have already been blurring out for decades: NSFW content, phobia and anxiety triggers, graphic injuries and violence, assault, self-harm, animal cruelty, smoking, substance abuse and many more.
Twitter, in the era when it was still called Twitter, had a sensitive-content toggle. That blurring makes sense. Some sites do a similar thing to spare their users' data plans: showing scaled-down versions until the user decides to load them, saving energy and bandwidth. Many do cooler effects than a gentle Gaussian blur.
So there is already a grouped feature set for visual content.
Opt-in for slop and spiders
So I thought: For my blog, AI slop should be opt-in only. An opt-in should not only obscure, but minimize bandwidth and computing cost. That would come with a slight inconvenience to those who don't mind slop, and a much better experience for those who do object to the content. If I can make the user interaction appealing in itself, it becomes a positive feature, an engaging reader experience for all users.
Brutalist Monet
On vwwwv.org, every content image renders as a pixelated halftone by default. The image is progressively loaded: first as a roughly one-kilobyte version (80 pixels wide, 20% quality), then scaled back up and pixelated with the CSS property image-rendering: pixelated, and pushed toward two-tone with a grayscale and contrast filter. It looks a bit like 1-bit graphics, and if you squint, you can almost make out what the image depicts. I call the style "Brutalist Monet".
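The placeholder style described above can be sketched in a few lines of CSS. The class name is my own invention for illustration; the actual stylesheet on vwwwv.org may differ.

```css
/* Hypothetical class; applied to every content image by default. */
figure.halftone img {
  /* The browser upscales the tiny 80px-wide preview without
     smoothing, giving the blocky pixel look. */
  image-rendering: pixelated;
  width: 100%;
  /* Push the preview toward two-tone. */
  filter: grayscale(100%) contrast(300%);
}
```

Removing the class (or the filter) once the full-resolution bytes arrive, ideally with a CSS transition, gives the fade-through effect.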
Each <figure> carries a data-triggers attribute listing the objectionable categories that the image trips. AI generated images trigger slop. A photo with a spider in it would trigger spider. A graphic injury photo would trigger gore. An image can trigger several at once, although I don't plan to post anything other than slop, and maybe a cute spider to prove the point. An image with no triggers gets the same halftone placeholder, but it fades through to the full-colour image as soon as the full-image bytes arrive. The halftone is the loading state. For non-triggering images, it lasts a fraction of a second.
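The markup for a triggering figure might look like the sketch below. Only the data-triggers attribute comes from this post; the file names, the data-full attribute and the caption are illustrative.

```html
<!-- Illustrative markup: one image tripping two categories. -->
<figure data-triggers="slop spider">
  <!-- Tiny low-quality preview, upscaled and pixelated by CSS;
       the full-resolution URL is held back until the reader opts in. -->
  <img src="/img/poster-80w.jpg"
       data-full="/img/poster.jpg"
       alt="Brutalist propaganda-style illustration of a spider" />
  <figcaption>A cute spider, rendered as slop.</figcaption>
</figure>
```

Keeping the full-resolution URL in a data attribute rather than in src is what ensures the browser never fetches the objectionable bytes before consent.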

How to opt in
For triggered images, a small label and checkbox sit above the image: Show slop, Show slop and spiders, Show gore, et cetera. Tick it and the full image is loaded; the halftone fades into the real image, appropriately sized. Untick it and the halftone returns. A settings panel (always show: slop / spiders / gore) persists selections in localStorage (not a cookie: it never leaves the browser, so the server never sees it and nothing is tracked). A returning reader who has decided "I'm fine with slop" doesn't have to keep saying so.
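The consent logic itself is small. This is a sketch, not the blog's actual code; the function and key names are made up. The reveal rule is the one described above: an image shows in full only when every category it trips is on the reader's allow list.

```javascript
// Decide whether an image may load in full: every trigger it
// carries must be on the reader's allow list.
function shouldReveal(triggers, allowed) {
  return triggers.every((t) => allowed.includes(t));
}

// Persist the allow list. In the browser, pass window.localStorage;
// any object with getItem/setItem works, which also keeps the
// logic testable outside a browser.
function savePrefs(storage, allowed) {
  storage.setItem("revealTriggers", JSON.stringify(allowed));
}

function loadPrefs(storage) {
  return JSON.parse(storage.getItem("revealTriggers") || "[]");
}
```

In the page, ticking a checkbox would add its category to the list, call savePrefs(localStorage, list), and swap each qualifying image's src to its full-resolution URL; unticking reverses the swap.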
I'm not going to pretend this is cutting-edge innovation. These are all well established techniques. The only novelty is in categorically applying them to AI generated visuals: grouping AI slop with the porn and the gore and the rest of that content group. Because slapping a badge on it was never enough.
We don't need no stinking badges
The current approach to dealing with AI generated assets is to either do nothing at all, or to slap a badge on them, informing the reader that they are, indeed, AI generated.
But that doesn't cut the mustard.
It is not enough to show a banner, as if to say "I just showed you something you despise, in case you missed it". An opponent of AI slop should not have to waste bandwidth, generate a carbon footprint in the form of server load, fill up their device memory with bytes they don't want to see, or keep those bytes in the browser cache for the next 12 months. The thing they despise should not reach them at all.
Fixelation
So does that get me in the clear, then? I did some pixelation, and now the issue is fixed?
Well, no. My stubborn, continued use of AI slop, which I know is bad for me, does indeed make me part of that tool chain, of the market process that is unemploying graphic designers and illustrators and will soon, if the tech lords are to be trusted, come for the rest of the workforce. It would indeed be easier to just say no to slop.
Instead I chose the opt-in, a position somewhere between yes and no. It's just a statement feature on a tiny blog, and it won't change the world, but it could contribute to a conversation, and that conversation could lead somewhere. How great would it be if LinkedIn, for instance, processed all uploaded portraits mechanically and made fake profile pictures opt-in?
I know what I would choose, if I had that option.