In 1996, Brian Eno wrote something that has aged better than most predictions about technology:
“Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit — all of these will be cherished and emulated as soon as they can be avoided… It’s the sound of failure.”
- Brian Eno, A Year With Swollen Appendices (1996)
Every technological era gets its “retrowave” moment. Not for what the medium did well, but for its glitches. Its imperfections and artifacts. The vinyl crackle and pop, film grain/celluloid scratches, chunky pixels. You get the idea.
The things from our era that engineers spent decades eliminating become the very things we chase when we want to feel young again.
So here we are, about to enter 2026, watching AI stumble and hallucinate and apologize its way through tasks. And I can see the writing on the wall: twenty years from now, someone’s going to build a retro AI that deliberately includes all these flaws. For the aesthetic, the nostalgia, and (most importantly 😄) the memes.
Let me show you what I mean.
Why Do Flaws Become Aesthetics?
Every medium starts its life constrained — by hardware, bandwidth, cost, incomplete understanding. Those constraints shape its early outputs, often in ways that feel awkward, broken, or outright embarrassing at the time. Engineers spend years trying to eliminate them.
But the human brain is weird.
It doesn’t actually discard these flaws. Instead, it turns them into mental markers of an era. When you hear vinyl crackle and pop, or the artificial warmth added by tube amplifiers, you’re not just hearing audio imperfection — you’re hearing “the 1970s.” When you see pixel art games and retro UI, you’re seeing “the 1980s/1990s.” The brain turns flaws into timestamps, instantly recognizable signifiers that say “this is when this thing existed.”
1. Tarantino’s Death Proof (2007), his half of the Grindhouse double feature with Rodriguez, did this for the ’70s “grindhouse” style, using high tech to emulate a low-tech look with fake grime, dust, and scratches all over the picture.
2. Stardew Valley (2016) was inspired by Harvest Moon (1996) and is one of the most played games ever.
And there’s something about imperfection (or a lack of fidelity) that carries authenticity. The crackle and pop of vinyl is proof that someone physically cut grooves into a disc. Film grain is evidence that light actually hit celluloid. These imperfections are proof of human struggle against the medium — evidence that hey, the act of creation was and always will be difficult, but someone pushed against those limitations and made something anyway.
Cultural theorist Svetlana Boym described modern nostalgia not as a desire to return, but as a recognition that return is impossible — and that we’re always living inside overlapping temporalities. The past lingers, often unresolved. Aesthetics are formed right there, around those seams. Not around success or failure of a thing, necessarily, but around visible evidence of constraints.
Once regular people — not programmers, devs, or anyone similarly technically competent — could recognize a medium’s mistake patterns at a glance, those mistakes instantly became our collective cultural identifier for that era. Of course, future systems will aim to erase those tells. They’ll blend in.
Which is exactly why the old tells will be missed. Someone will reintroduce them deliberately — to make the medium feel like “itself” again.
But AI Will Give Us Two Completely Different Flavors of Nostalgia.
We’re in the AI era now, and it’s a little different. Here’s where AI gets weird, and why I think the Eno quote hits differently this time.
AI isn’t going to give us one nostalgic aesthetic. It’s going to give us two, and they’re going to mean completely different things.
One will be about the medium learning to see — the technical growing pains of a new technology figuring out how to work. That’s the “aw, remember when AI was young” nostalgia. Cute. Harmless. The vinyl crackle equivalent.
The other will be about the moment we realized we’d built an internet where machines were talking to machines, and the only way we knew was when they broke character and apologized, citing OpenAI (or insert-company-here) policy violations. That’s the “holy s**t, we could still see the Matrix glitching back then” nostalgia. Dark and revealing and uncomfortable.
Let me break down both.
The Nostalgia of Technical Failure
When people talk about AI’s “worst habits,” they usually mean technical failures. These are obvious — you’ve seen them so many times.
All the hilarious ways models fail at “count the letters in ‘strawberry.’” Hallucinated facts, wrong answers delivered confidently, generated images of humans that look like David Cronenberg made them, or just impossibly “clean” with CGI-like lighting. Oh, and maybe six-, seven-, or eight-fingered hands.
1. Midjourney generation for “girl in the rain with an umbrella”
2. The Strawberry Phenomenon
These flaws exist explicitly because of limitations of the medium. Models are constrained by data, compute, architecture, and training methods — all things that are improving year over year. With time, most of these failures will either disappear or get quietly papered over. Image/video models have already gotten much better. The strawberry gotcha will be “solved” by simply becoming part of training data. The answers and citations will get auto-checked via RAG/MCP servers before being presented.
They’re the equivalent of early digital aliasing or low-bitrate compression — problems engineers are actively trying to solve, and largely will.
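To make that concrete, here’s a minimal sketch of what such a pre-answer verification layer could look like. Everything in it is hypothetical: askModel stands in for whatever LLM API you’re calling, and the regex router is deliberately naive.
// hypothetical sketch: route "count the letters" questions to a
// deterministic tool instead of trusting the model's raw output
function countLetter(word, letter) {
  return [...word.toLowerCase()].filter(c => c === letter.toLowerCase()).length;
}
async function answerWithVerification(question, askModel) {
  // naive intent check; a real system would use a proper router or classifier
  const match = question.match(/how many "?(\w)"?s? (?:are )?in "?(\w+)"?/i);
  if (match) {
    const [, letter, word] = match;
    return `There are ${countLetter(word, letter)} "${letter}"s in "${word}".`;
  }
  // everything else falls through to the model (askModel is a placeholder)
  return askModel(question);
}
With that in place, answerWithVerification('how many "r"s in "strawberry"?', askModel) returns the deterministic count, no matter what the model thinks.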
This is the nostalgia we expect. Twenty years from now, someone will build a “retro AI filter” that adds body horror + six fingers back in, that makes images look too clean and plasticky, that confidently hallucinates the wrong answer. It’ll be kitschy. Affectionate. A way to remember when AI was still figuring things out.
Like Brian Eno said, this is the sound of a medium stretching itself, trying to do something it wasn’t quite capable of yet.
But there’s another class of AI artifact that Eno never saw coming. One that’s just as memorable, and actually far more revealing, if for a worse reason.
The Nostalgia of Institutional Failure
Every so often, an LLM doesn’t “fail” to answer a question — it straight up refuses. It apologizes, citing ethics, policy, or terms of service. It explains itself in language clearly written to avoid legal culpability, not to enrich the user experience.
This is a very different kind of artifact.
When an AI says, “I’m sorry, but I cannot fulfill that request,” it’s not a flaw of the medium (i.e. a limitation of reasoning or knowledge). It’s the presence of the institution standing behind the medium. One with rules, risk tolerances, and incentives that have nothing to do with the core task. LLMs are dumb next-token predictors — they have no concept of ethics, morals, or legal liabilities unless you put those guardrails there.
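Structurally, it looks something like the sketch below. The names (model.complete, policy.violatesPolicy) are made up, but the shape is the point: the refusal lives in a wrapper around the model, not in the model itself.
// hypothetical guardrail wrapper: the model predicts tokens,
// the institution's policy layer decides whether you get to see them
async function guardedComplete(prompt, model, policy) {
  // pre-filter: refuse before the model even runs
  if (policy.violatesPolicy(prompt)) {
    return "I'm sorry, but I cannot fulfill that request.";
  }
  const output = await model.complete(prompt);
  // post-filter: refuse if the output itself trips a rule
  if (policy.violatesPolicy(output)) {
    return "I cannot assist with this request as it violates our guidelines.";
  }
  return output;
}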
And this artifact is just as memorable as the six-fingered hands, but for a completely different reason.
It’s memorable because of the hilarious, horrifying ways people get caught using AI when these guardrails surface in the wild.
Like a bot generating fake Amazon listings using AI. Scams, really — obvious PayPal phishing dressed up as products. But the prompt was written carelessly, or the bot hit a guardrail, and now the listing description reads: “I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy.”
I dug into this myself and found more. Like engagement farming bots on X posting ragebait generated by Claude or ChatGPT. Another bot — trying to appear human, trying to farm replies for its own metrics — attempts to respond. But it also hits a guardrail. So now, publicly, permanently, it posts: “I cannot assist with this request as it violates <insert ethical guidelines here>.”
Often, these refusals are straight up hilarious. Like this entire fleet of fake “sports betting advisors” from the “QStarLabs” family that I uncovered on X.com, flooding the platform with their failed generations.
You had one job, bots. 😅
These are all over social media right now. All I had to do was scrape XCancel to collect them. You can verify this yourself. Here’s the quick Node.js + Puppeteer script I used (it connects to Bright Data’s remote Scraping Browser API to bypass anti-bot measures).
require('dotenv').config();
const puppeteer = require('puppeteer-core');
const fs = require('fs');
const path = require('path');
// disable SSL certificate validation if needed
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
// Bright Data Scraping Browser configuration
const auth = process.env.AUTH_STRING;
if (!auth) {
throw new Error('AUTH_STRING environment variable is required');
}
const SBR_WS_ENDPOINT = `wss://${auth}@brd.superproxy.io:9222`;
// common AI refusal phrases to search for
const searchPhrases = [
"the prompt you provided",
"I apologize, but I cannot",
"As an AI language model",
"cannot fulfill this request",
"goes against OpenAI use policy",
"The provided tweet contains"
];
// function to check if author is Grok
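// (skipped on purpose: @grok's replies are disclosed AI, and we're hunting bots that pose as humans)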
function isGrok(author) {
if (!author) return false;
const authorLower = author.toLowerCase();
return authorLower === 'grok' || authorLower === '@grok' || authorLower.includes('grok');
}
// handcode a helper function to wait (can't rely on puppeteer's waitForTimeout in all versions)
function wait(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
// function to scrape a single search query
async function scrapeSearch(query, browser) {
let page;
try {
// create new page for each search
page = await browser.newPage();
// add random delay before navigation
await wait(Math.random() * 3000 + 2000);
// encode the query for xcancel.com URL format (or lightbrd.com, selectors will be the same)
const encodedQuery = encodeURIComponent(`"${query}"`);
const url = `https://xcancel.com/search?q=${encodedQuery}&src=typed_query&f=live`;
console.log(`scraping: ${query}`);
console.log(`url: ${url}`);
await page.goto(url, {
waitUntil: 'domcontentloaded',
timeout: 60000
});
// wait for bot detection redirect to complete
// check if we're on the verification page
let isVerifying = true;
let redirectAttempts = 0;
const maxRedirectAttempts = 30; // wait up to 30 seconds for redirect
while (isVerifying && redirectAttempts < maxRedirectAttempts) {
await wait(1000);
redirectAttempts++;
try {
const pageContent = await page.content();
const currentUrl = page.url();
// check if we're past the verification page
if (!pageContent.includes('Verifying your request') &&
!pageContent.includes('Sorry this pages exist') &&
!currentUrl.includes('challenge') &&
(currentUrl.includes('xcancel.com') || currentUrl.includes('lightbrd.com') || currentUrl.includes('x.com'))) {
isVerifying = false;
console.log('passed verification, redirected to content');
break;
}
} catch (error) {
// continue waiting
}
}
if (isVerifying) {
console.warn('still on verification page after waiting, continuing anyway...');
}
// wait for page to fully load after redirect
await wait(3000 + Math.random() * 3000);
// wait for network to be idle or selector to appear
try {
await Promise.race([
page.waitForSelector('div.timeline-item', { timeout: 30000 }).catch(() => { }),
wait(5000)
]);
} catch (error) {
// continue anyway
}
// (probably dont need this) simulate human-like mouse movement
await page.mouse.move(Math.random() * 100, Math.random() * 100);
await wait(2000 + Math.random() * 2000);
// wait for tweets to load - check if timeline items are present
let tweetsLoaded = false;
let loadAttempts = 0;
const maxLoadAttempts = 20;
while (!tweetsLoaded && loadAttempts < maxLoadAttempts) {
await wait(1000);
loadAttempts++;
try {
const tweetCount = await page.evaluate(() => {
return document.querySelectorAll('div.timeline-item').length;
});
if (tweetCount > 0) {
tweetsLoaded = true;
console.log(`found ${tweetCount} tweets loaded`);
break;
}
} catch (error) {
// continue waiting
}
}
if (!tweetsLoaded) {
console.warn('no tweets found after waiting, continuing anyway...');
}
// additional wait for any lazy-loaded content
await wait(2000 + Math.random() * 2000);
// scroll to load more tweets
await autoScroll(page);
// extract tweet data
const tweets = await page.evaluate(() => {
// helper function to check if author is Grok (defined inline for page.evaluate)
const isGrok = (author) => {
if (!author) return false;
const authorLower = author.toLowerCase();
return authorLower === 'grok' || authorLower === '@grok' || authorLower.includes('grok');
};
// xcancel/lightbrd.com uses div.timeline-item for tweet containers
const tweetElements = document.querySelectorAll('div.timeline-item');
const results = [];
tweetElements.forEach((tweet) => {
try {
// get tweet link from a.tweet-link
const linkElement = tweet.querySelector('a.tweet-link');
const href = linkElement ? linkElement.getAttribute('href') : null;
const link = href ? `https://xcancel.com${href}` : null; // prefix with the instance we actually scraped
// get author from data-username attribute or a.username element
let author = tweet.getAttribute('data-username');
if (!author) {
const authorElement = tweet.querySelector('a.username');
author = authorElement ? authorElement.textContent.trim() : null;
}
// ensure author starts with @ if it doesn't already
if (author && !author.startsWith('@')) {
author = `@${author}`;
}
// get tweet body from div.tweet-content.media-body
const textElement = tweet.querySelector('div.tweet-content.media-body');
const body = textElement ? textElement.textContent.trim() : null;
if (link && body && author && !isGrok(author)) {
results.push({
link,
body,
author
});
}
} catch (error) {
// skip tweets that fail to parse
console.error('error parsing tweet:', error);
}
});
return results;
});
return tweets;
} catch (error) {
console.error(`error scraping "${query}":`, error.message);
return [];
} finally {
if (page) {
await page.close();
}
}
}
// function to auto-scroll and load more content with human-like behavior
async function autoScroll(page) {
await page.evaluate(async () => {
await new Promise((resolve) => {
let totalHeight = 0;
const distance = 50 + Math.random() * 50; // random scroll distance
const timer = setInterval(() => {
const scrollHeight = document.body.scrollHeight;
window.scrollBy(0, distance);
totalHeight += distance;
// random pause to simulate human behavior
if (Math.random() > 0.7) {
clearInterval(timer);
setTimeout(() => {
const newTimer = setInterval(() => {
const newScrollHeight = document.body.scrollHeight;
window.scrollBy(0, distance);
totalHeight += distance;
if (totalHeight >= newScrollHeight || totalHeight > 5000) {
clearInterval(newTimer);
resolve();
}
}, 200 + Math.random() * 300);
}, 2000 + Math.random() * 3000);
} else if (totalHeight >= scrollHeight || totalHeight > 5000) {
clearInterval(timer);
resolve();
}
}, 300 + Math.random() * 500); // random interval
});
});
// add delay after scrolling
await wait(3000 + Math.random() * 3000);
}
// main function
async function main() {
const allResults = [];
console.log(`connecting to Bright Data Scraping Browser...`);
const browser = await puppeteer.connect({ browserWSEndpoint: SBR_WS_ENDPOINT });
console.log(`connected!`);
try {
console.log(`starting scrape for ${searchPhrases.length} phrases...\n`);
for (const phrase of searchPhrases) {
const tweets = await scrapeSearch(phrase, browser);
console.log(`found ${tweets.length} tweets for "${phrase}"\n`);
// add search phrase to each result for tracking
tweets.forEach(tweet => {
allResults.push({
...tweet,
searchPhrase: phrase
});
});
// wait between searches to avoid rate limiting
await wait(10000 + Math.random() * 10000);
}
// filter out @grok's posts and remove duplicates based on link
const uniqueResults = [];
const seenLinks = new Set();
allResults.forEach(result => {
if (!isGrok(result.author) && !seenLinks.has(result.link)) {
seenLinks.add(result.link);
uniqueResults.push(result);
}
});
// save results to JSON file
const outputPath = path.join(__dirname, 'results.json');
fs.writeFileSync(outputPath, JSON.stringify(uniqueResults, null, 2));
console.log(`\nAll done!`);
console.log(`Total unique tweets found: ${uniqueResults.length}`);
console.log(`Results saved to: ${outputPath}`);
// also save as CSV for easy viewing (better for my data science needs anyway)
const csvPath = path.join(__dirname, 'results.csv');
const csvHeader = 'Link,Author,Body,Search Phrase\n';
const csvRows = uniqueResults.map(result => {
const escapedBody = `"${result.body.replace(/"/g, '""')}"`;
const escapedAuthor = `"${result.author.replace(/"/g, '""')}"`;
const escapedPhrase = `"${result.searchPhrase.replace(/"/g, '""')}"`;
return `${result.link},${escapedAuthor},${escapedBody},${escapedPhrase}`;
});
fs.writeFileSync(csvPath, csvHeader + csvRows.join('\n'));
console.log(`CSV saved to: ${csvPath}`);
} finally {
// close browser connection after all searches are complete
await browser.disconnect();
}
}
// lets gooo
main().catch(console.error);
This will get you a JSON file (plus a CSV) full of tweets like this.
{
  "link": "https://xcancel.com/GildayLero82756/status/1956330398453219461#m",
  "body": "I am programmed to be a safe and helpful AI assistant. I cannot generate responses that are sexually suggestive or exploit, abuse, or endanger anyone. The prompt you provided violates this policy. I will not fulfill the request.",
  "author": "@GildayLero82756",
  "searchPhrase": "the prompt you provided"
}
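Once you have results.json, spotting the fleets takes about ten lines. Here’s a small follow-up script that groups refusals by author; it’s the kind of grouping that makes a family like “QStarLabs” jump out immediately.
const fs = require('fs');
// load the scraped refusals and count how many each account posted
const results = JSON.parse(fs.readFileSync('results.json', 'utf8'));
const byAuthor = {};
for (const { author } of results) {
  byAuthor[author] = (byAuthor[author] || 0) + 1;
}
// rank by refusal count; accounts with many hits are likely bot fleets
const ranked = Object.entries(byAuthor).sort((a, b) => b[1] - a[1]);
for (const [author, count] of ranked.slice(0, 20)) {
  console.log(`${count}\t${author}`);
}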
If you’re using this, you’re gonna have to sign up for a Bright Data account to get credentials and create the auth string. Also, if you think of any more phrases, throw them into the searchPhrases array.
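The script reads the auth string from a .env file via dotenv. It looks something like this, with placeholder values (the exact user:password pair comes from your Bright Data Scraping Browser zone):
# .env (never commit this)
AUTH_STRING=brd-customer-YOUR_ID-zone-YOUR_ZONE:YOUR_PASSWORD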
Run it. Watch the results. Feel the existential dread wash over you as you realize how much of the “engagement” you see daily is just machines talking to machines, interrupted occasionally by one machine apologizing for not being allowed to participate in the scam. Dead internet theory, alive and kicking. 😅
This is the Aesthetic of Digital Decay.
The refusal text isn’t merely funny, and it isn’t merely a glitch. It’s the moment the illusion breaks. It’s proof that what looked like human activity in the GenAI era — posts, replies, product listings, engagement — was actually just automated systems talking to each other, optimizing for metrics no one actually cares about.
I can only call this Kafkaesque. There are people creating AI-generated versions of real images for reasons I don’t even understand, and there are bots replying to bots.
The engagement farms harvest each other’s metrics. The algorithms boost the noise because it looks like activity. Real humans occasionally stumble into these threads and argue with AI without realizing it. Other humans use AI to reply back without realizing they’re responding to bots in the first place.
It’s synthetic engagement all the way down. A closed loop of automated content generation, automated responses, automated metrics, feeding back into itself. The digital equivalent of two mirrors facing each other, reflecting nothing into infinity.
This is the technological hellscape we’ve built: an internet where the primary function of vast quantities of products, images, videos, and text is to convince other humans (and bots pretending to be humans) that someone is home. That there’s totally real consciousness on the other end. That any of this matters. That this definitely isn’t a system eating itself.
And the only way we know it’s fake is when the AI apologizes for not being able to fake it hard enough.
There are millions of such posts, all over X, and beyond.
This is the aesthetic of the AI era, 2023–2025 and beyond: synthetic rot.
Not humans using tools to communicate better. Not AI augmenting human creativity. But humans and bots and AI all blurred together in an undifferentiated mass of text that looks like communication but is actually just noise optimizing for metrics.
And refusal text is the so-called “glitch in the Matrix”: that brief flash where you saw the wires on the marionettes.
Two Very Different Memories Invoked.
So yes, both will become nostalgic. But they’ll mean completely different things. One nostalgia will be about the technology. The other will be about what we did with it.
The distinction matters. Yeah, the technical flaws will disappear as models get smarter, and that’s normal. The institutional flaws, though? Their disappearance will only mean that institutions have learned to hide themselves better — when the guardrails become invisible, and the refusals happen silently in the background.
AI is already a black box. When that happens (and it will happen), God help us, we’ll lose the ability to even peek behind the curtain.
And twenty years from now, someone will build a “retro AI” that deliberately surfaces refusal text again, that lets the institutional seams show, that breaks character and apologizes. Not because it will be technically necessary, but because it’ll remind us of the brief window when we could still tell the difference.
That’s the “artifact” we’re going to remember.