Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Thomas Claburn writes in The Register:
IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.
That implies something like 60 percent of agentic AI projects would be retained, which is actually remarkable given that the rate of successful task completion for AI agents, as measured by researchers at Carnegie Mellon University (CMU) and at Salesforce, is only about 30 to 35 percent for multi-step tasks.
The field of artificial intelligence has come full circle.
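(To put numbers on that mismatch, a quick back-of-the-envelope sketch; this is my own illustration, not anything from the Register piece, and it assumes a task succeeds only if every one of its k independent steps does, each with per-step success probability p:)

# Hypothetical sketch: independent steps, all must succeed. Illustrative only.
def task_success(p: float, k: int) -> float:
    """Probability that all k independent steps succeed."""
    return p ** k

for p in (0.90, 0.95):
    for k in (5, 10, 20):
        print(f"per-step {p:.0%}, {k:2d} steps -> {task_success(p, k):.0%}")

# Even 95%-reliable steps give only ~36% success over 20 steps,
# which is roughly the 30-35% completion rate quoted above.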
“It’s really hard to think about alignment. Maybe we need to redesign thinking” type shit
(setq alignment 'good)
Fucking rude to drag lisp into this. How dare they.
Minor bit of personal news: Newgrounds got hit with a wave of AI slop games recently.
I caught onto it back on Wednesday, but didn’t get official confirmation until yesterday, when another user investigated the games and discovered the exact slop-generator used to shit them out - VIDEOGAME.ai.
Thanks for the work you do on Newgrounds! This sentence stuck out to me
No more worrying about lack of content or fickle UGC creators
Oh they’re just publicly advertising their company to be anti-union. Bold.
Appreciate it - keeping one of the last bastions of creativity free of slop is a thankless task.
What is AI if not a tool built to abuse the proletariat?
That’s fucking abominable. I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.
Also, fully general bullshit detection question no.142 applies: if this turnkey game studio works as well as you claim, why are you selling it to me instead of doing it yourself? (Hint: it’s because it doesn’t actually work)
Considering that AI bros are
- utterly malicious scumbags who hate anything which doesn’t let them, and them alone, make all the money ever
- exceedingly stupid and shameless dipshits with a complete inability to recognise or learn from mistakes
I can absolutely see them looking at someplace like NG and thinking “hey, this place which stands for everything we want wiped off the Internet will totally accept our fucking slop”.
(Personal sidenote: Part of me says this story would probably make a good Pivot to AI.)
Gotta love how the website isn’t even functional.
Sure, maybe the “Showcase” link doesn’t do anything, the “Watch video” link goes to a password-protected file, and none of the “Learn More” buttons do anything; but at least they tell you who invested in them as the very first thing on the page!
Also haha I clicked on the blog and this was at the top of the first post:
Here’s a Substack post draft that introduces videogame.ai with a compelling and engaging tone suitable for readers interested in games, tech, or the future of creative work:
New use case for AI found: extracting money from venture capital without actually doing any real work.
It’s probably been vibe-coded by the fucks behind the LLM; I’d be shocked if it was genuinely functional.
@shapeofquanta @BlueMonday1984 thanks I hate it
The bullshit engine has convinced my dirtbag sib-in-law that they can claim squatter’s rights on (and take ownership of) the house that they aren’t paying rent to live in.
They’ve been there a year.
They’re gonna be homeless before this is over and I can’t get them to see reason. I feel totally helpless, real big Cassandra vibes. LLMs are sooooo unhealthy for assholes.
I’d tell them to contact local squatters who have exp in this stuff over trusting LLMs myself. But those people will prob not tell them what they want to hear.
depends on jurisdiction of course, but where i live you can pull something like this. it takes something like 30 years of living in the same place at minimum tho
Yeah, it’s nuts. They’d have to be resident, pay land taxes, and make improvements for 7 years here. They don’t even mow the grass; the owner does.
these idiots made me feel sympathy for a landlord. I might never recover.
…
As an aside, it’s fun to imagine the similar sort of brain damage a chatbot would cause Fox Mulder.
It’s like when Scott Aaronson got me to sympathize with a cop. A sneersmas miracle.
There should be a word for this!
The folks over at futurism are continuing to do their damnedest to spotlight the ongoing mental health crisis being spurred by chatbot sycophants.
I think the real problem this poses for OpenAI is that in order to address it they basically need to back out of their entire sales pitch. Like, these are basically people who fully believe the hype and it pretty clearly is part of sending them down a very bad road.
This Thiel interview clip is amazing
Watch Ross Douthat realize for a moment in real time that he’s spent a decade making ideological bedfellows with a techno-futurist, fascist Right that wants to see the birth of a “machine god” & is in no way enthusiastic about the survival of the human race in universal terms.
Ed Zitron summarizes his premium post in the better offline subreddit: Why Did Microsoft Invest In OpenAI?
Summary of the summary: they fully expected OpenAI would’ve gone bust by now and MS would be looting the corpse for all it’s worth.
I also feel like while it’s absolutely true that the whole “we’ll make AGI and get a ton of money” narrative was always bullshit (whether or not anyone relevant believed it) it is also another kind of evil. Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves. Like, if they did believe their own hype and weren’t grifting their hearts out then they’re a whole different class of monster. From an ethical perspective, the grift narrative lets everyone involved be better people.
Big deal, we’ll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.
Although I’d guess human level problem solving needn’t imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.
and next this one that’ll be making waves too
Last Week Tonight’s rant of the week is about AI slop. A Youtube video is available here. Their presentation is sufficiently down-to-earth to be sharable with parents and extended family, focusing on fake viral videos spreading via Facebook, Instagram, and Pinterest; and dissecting several examples of slop in order to help inoculate the audience.
on the topic of bunk wiki articles, what is this lmao https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering
I’m in therapy and much better than I used to be, but from my past before that, I am unfortunately quite experienced over many years in having existential worries and anxieties about extremely unlikely things.
And then I see this…
Cosmic rescue mission […] These missions aim to identify and mitigate suffering among hypothetical extraterrestrial life forms
…and damn, that’s next-level thinking, even for me.
According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios.
Guys I have found a way to phrase my anxiety in a way where every single word is extremely load-bearing
It’s like that Star Wars book where Chewbacca got a moon dropped on him
@UltimateNoob @techtakes This is … actually really neat feedstock for us SF authors, amirite?
Dan McQuillan just dropped the text of a seminar he gave: The role of the University is to resist AI
Starting this off with Baldur Bjarnason sneering at his fellow techies for their “reading” of Dante’s Inferno:
Reading through my feed reader and seeing tech dilettantes “doing” Dante in a week and change, I’m reminded of the time in university when we spent half a semester discussing Dante’s Divine Comedy, followed by tracing its impact and influence over the centuries
I don’t think these assholes even bother to read their footnotes, and their writing all sounds like it comes from ChatGPT. Naturally so, because I believe them when they claim they don’t use it for writing. They’re just genuinely that dull
At least read the footnotes FFS
If they were reading Dante for pleasure, that’d be different—genuinely awesome, even. But all of this is framed as doing the entirety of “humanities” in the space of a few weeks.
They’d have a better chance convincing techbros to do a serious literary analysis of the video game.
There is a reason they picked books to speedrun and not games: speedrunning games takes skill.
Not to cast aspersions, I thought the issue rocked as a teen and re-bought a copy about 20 years ago.
But it’s still “based on a story by Dante Alighieri” rather than being the real stuff.
PZ Myers boosted the pivot-to-ai piece on veo3: https://freethoughtblogs.com/pharyngula/2025/06/23/so-much-effort-spiraling-down-the-drain-of-ai/
New Yorker put out an article on how AI use is homogenizing thought processes and writing ability.
Our friends on the orange site have clambered over each other to all make very similar counterarguments. Kind of proves the article’s point, no?
I love this one:
All connection technology is a force for homogeneity. Television was the death of the regional accent, for example.
Holy shit. Yes, TV has reduced the strength of accents. But “the death”? Tell me again how little you pay attention to the people you inevitably interact with day to day.
I would also like to understand under what definition ChatGPT can be classified as “connection technology”.
ChatGPT connects your brain to a quality '50s-era psychiatrist, who can then lobotomise you non-invasively and turn you into a perfect office worker for our billionaire overlords
Listen to a Geordie for five minutes and say that to me with a straight face. I fucking dare you. (Not you, the orange site member)
Also tell me more about how you don’t have a lower-class or nonwhite-coded accent.
Following up on the thread that spawned from my comment yesterday:
https://awful.systems/comment/7777035
(I’m in vacation mode and forgot it was late on Sunday)
I wonder if Habryka, the LWer who posted both there and on Xhitter that “someone should do something about this troublesome page”, realized that there would be less pushback if he’d simply coordinated in the background and got the edits in place without forewarning others. Was it intentional to try to pick a fight with Wikipedians?
Or was it a consequence of the fact that capital-R Rationalists just don’t shut up?
The wikipedia talk page is some solid sneering material. It’s like Habryka and HandofLixue can’t imagine any legitimate reason why Wikipedia has the norms it does, and they can’t imagine how a neutral Wikipedian could come to write that article about lesswrong.
Eigenbra accurately calling them out…
“I also didn’t call for any particular edits”. You literally pointed to two sentences that you wanted edited.
Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can’t speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.
As to your question:
Was it intentional to try to pick a fight with Wikipedians?
It seems to be ignorance on Habryka’s part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia’s reasonable policies, they seem to be doubling down.
Following up because the talk page keeps providing good material…
Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven’t seen people try to weaponize the rules to push their views many times before.
Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can’t win with some people…
Looking back on the original lesswrong brigade organizing discussion of how to improve the wikipedia article, someone tried explaining the rules to Habryka then, and they were dismissive:

I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.
Yes Habryka, because you clearly have such a good understanding of the Wikipedia rules and norms…
Also, heavily downvoted on the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for “access to ground truth”. I guess even lesswrong knows that is bullshit.
Adding onto this chain of thought, does anyone else think the talk page’s second top-level comment from non-existent user “habryka” is a bit odd? Especially since after Eigenbra gives it a standard Wikipedian (i.e. unbearably jargon-ridden and a bit pedantic but entirely accurate and reasonable in its substance) reply, new user HandofLixue comes in with:
You seem to have me confused with Habryka - I did not make any Twitter post about this. Nonetheless, you have reverted MY edits…
Kinda reads like they’re the same person? I mean Habryka is also active further down the thread so this is almost certainly just my tinfoil hat being too tight and cutting off circulation and/or reading this unfold in bits and pieces rather than putting it all together.
I think they’re different people but may be in communication out of band.
edit: a search for “HandofLixue” on Google only gives one hit, an old profile on LessWrong now renamed to “The Dao of Bayes”:
Because of course.
Amazing how both accounts refuse to directly answer the ‘are you involved in LW/SSC’ question, but work around that question so much (and get so defensive) that they are very suspicious.
Habryka runs the fucking site
Lol ow haha, jesus, admitting he is the lw sysadmin might have been nice.
Wow, this is shit: https://en.wikipedia.org/wiki/Inner_alignment
Edit: I have been informed that the correct statement in line with Wikipedia’s policies is WP:WOWTHISISSHIT
Rather than trying to participate in the “article for deletion” dispute with the most pedantic nerds on Earth (complimentary) and the most pedantic nerds on Earth (derogatory), I will content myself with pointing and laughing at the citation to Scientific Reports, aka “we have Nature at home”
The whole list of “improved” sources is a fascinating catalogue of preprints, pop sci(-fi) schlock, and credible-sounding vanity publishers. And even most of those appear to reference “inner alignment” as a small part of some larger things, which I would expect to merit something like a couple sentences in other articles. Ideally ones that start with “so there’s this one weird cult that believes…”
I’m still allowed to dream, right?
I poked around the search results being pointed to, saw a Ray Kurzweil book and realized that none of these people are worth taking seriously. My condolences to anyone who tries to explain the problems with the “improved” sources on offer.
Habryka doesn’t really know how not to start fights
Maybe instead of worrying about obscure wiki pages, Habryka should reflect why a linkpost titled Racial Dating Preferences and Sexual Racism is on the front page of his precious community now, with 48 karma and 22 comments.
You know, just this once, I am willing to see the “Dead Dove: Do Not Eat” label and be content to leave the bag closed.
Is it praxis when you put theory into inaction?