Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
In case you needed more evidence that the Atlantic is a shitty rag.
The phrase “adorned with academic ornamentation” sounds like damning with faint praise, but apparently they just mean it as actual praise, because the rot has reached their brains.
The implication that Soares / MIRI were doing serious research before is frankly journalist malpractice. Matteo Wong can go pound sand.
It immediately made me wonder about his background. He’s quite young and looks to be just out of college. If I had to guess, I’d say he was probably a member of the EA club at Harvard.
Just earlier this month, he was brushing off all the problems with GPT-5 and saying that “OpenAI is learning from its greatest success.” He wrapped up a whole story with the following:
At this stage of the AI boom, when every major chatbot is legitimately helpful in numerous ways, benchmarks, science, and rigor feel almost insignificant. What matters is how the chatbot feels—and, in the case of the Google integrations, that it can span your entire digital life. Before OpenAI builds artificial general intelligence—a model that can do basically any knowledge work as well as a human, and the first step, in the company’s narrative, toward overhauling the economy and curing all disease—it is aiming to build an artificial general assistant. This is a model that aims to do everything, fit for a company that wants to be everywhere.
Weaselly little promptfucker.
His group chats with Kevin Roose must be epic.
also, they misspelled “Eliezer”, lol
I’ve created a new godlike AI model. It’s the Eliziest yet.
My copy of “the singularity is near” also does that btw.
(E: Still looking to confirm that this isn’t just my copy, or if it is common, but when I’m in a library I never think to look for the book, and I don’t think I have ever seen the book anywhere anyway. It is the ‘our sole responsibility…’ quote, no idea which page, but it was early on in the book. ‘Yudnowsky’.)
Image and transcript
Transcript: Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve…[T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards and all of them will become obvious.
—ELIEZER S. YUDNOWSKY, STARING INTO THE SINGULARITY, 1996
Transcript end.
How little has changed; he has always believed intelligence is magic. Also lol on the ‘smallest bit’. Not totally fair to sneer at this as he wrote this when he was 17, but oof, being quoted in a book like this will not have been good for Yudkowsky’s ego.
The Atlantic puts the “shit” in “shitlib”
gemini isn’t even trying now
Oh, looks like gemini is a fan of the hacky anti-comedy bits from some of my favorite podcasts
The usual suspects are mad about college hill’s expose of the yud/kelsey piper eugenics sex rp. Or something, I’m in bed and can’t be bothered to link at the moment.
Is the scoop that besides being an EA mouthpiece KP is also into the weird stuff?
Weird rp wouldn’t be sneer worthy on its own (although it would still be at least a little cringe), it’s the contributing factors like…

- the constant IQ fetishism (Int is superior to Charisma but tied with Wis and obviously a true IQ score would be both Int and Wis)
- the fact that Eliezer cites it like serious academic writing (he’s literally mentioned it to Yann LeCun in twitter arguments)
- the fact that in-character lectures are the only place Eliezer has written up many of the decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)
- the fact that Eliezer thinks it’s another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable; even authors and fans of glowfic usually acknowledge the format can be awkward to read, and most glowfics require huge amounts of context to follow)
- the fact that the story doubles down on the HPMOR flaw of confusion about which characters are supposed to be author mouthpieces (putting your polemics into the mouths of characters working for literal Hell… is certainly an authorial choice)
- and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)
…At least lintamande has the commonsense understanding of why you avoid actively linking your bdsm dnd roleplay to your irl name and work.
And it shouldn’t be news to people that KP supports eugenics, given her defense of Scott Alexander or comments about super babies, but possibly it is, and a headline about weird roleplay will draw attention to it.
obligatory reminder that “dath ilan” is “thailand” misspelled, and I still don’t know why. Working theory is Yud wants to recolonise thailand
That’s about what I was thinking, I’m completely ok with the weird rpg aspect.
Regarding the second and third point though I’ll admit I thought the whole thing was just yud indulging, I missed that it’s also explicitly meant as rationalist esoterica.
also explicitly meant as rationalist esoterica.
Always a bad sign when people can’t just let a thing be a thing just for enjoyment, but see everything as the ‘hustle’ (for lack of a better word). I’m reminded of that dating profile we looked at, which showed that 99% of what he did was related to AI and AI doomerism, even the parties.
I actually think “Project Lawful” started as Eliezer having fun with glowfic (he has a few other attempts at glowfics that aren’t nearly as wordy… one of them actually almost kind of pokes fun at himself and lesswrong), and then as it took off and the plot took the direction of “his author insert gives lectures to an audience of adoring slaves” he realized he could use it as an opportunity to squeeze out all the Sequence content he hadn’t bothered writing up in the past decade^ . And that’s why his next attempt at a HPMOR-level masterpiece is an awkward to read rp featuring tons of adult content in a DnD spinoff, and not more fanfiction suitable for optimal reception to the masses.
^(I think Eliezer’s writing output dropped a lot in the 2010s compared to when he was writing the sequences and the stuff he has written over the past decade is a lot worse. Like the sequences are all in bite-size chunks, and readable in chunks in sequence, and often rephrase legitimate science in a popular way, and have a transhumanist optimism to them. Whereas his recent writings are tiny little hot takes on twitter and long, winding, rants about why we are all doomed on lesswrong.)
I missed that it’s also explicitly meant as rationalist esoterica.
It turns in that direction about 20ish pages in… and spends hundreds of pages on it, greatly inflating the length from what could be a much more readable length. It then gets back to actual plot events after that.
I’m sorry, we finally, officially need to cancel fantasy TTRPGs. If it’s not the implicit racialization of everything, it’s the use of the stat systems as a framework for literally masturbatory eugenics fetishization.
You all can keep a stripped-down version of Starfinder as a treat. But if I see any more of this, we’re going all the way back to Star Wars d6 and that’s final.
To be fair to DnD, it is actually more sophisticated than the IQ fetishists, it has 3 stats for mental traits instead of 1!
also: The int-maxxing and overinflated ego of it all reminds me of the red mage from 8-bit theater, a webcomic based on final fantasy about the LW (light warriors) that ran from 2001-2010
E: thinking back on it, reading this webcomic and seeing this character probably in some part inoculated me against people like yud without me knowing
I never read 8bit. I read A Modest Destiny. Wonder how that guy is doing; he always was a bit weird and combative, but when he deleted his blog he was showing very early signs of right-wing culture-warrior bits (which was ironic considering he burned a US flag).
Never read AMD (and shan’t). The author’s site appears to be live.
8BF’s site has been taken over by bots, and I can’t be bothered to find an alternate source. Dead internet go brrrrr. Otherwise, the creator, Brian Clevinger, appears to have had a long career in comics, and has written many things for Marvel.
8BF’s site has been taken over by bots, and I can’t be bothered to find an alternate source.
You can find it directly on Brian Clevinger’s blog, Nuklear Power. Here’s a direct link to the archive.
Ah thanks! On mobile the main page gets redirected to spam, but the site is navigable from the archive.
Yeah, but he used to have forums, and then a blog, and then no blog and then a blog again, and then a hidden blog etc. Think Howard has only a few minor credits on some games, he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example. One of his comics was also called ‘the atheist, the agnostic and the asshole’ so yeah. The ’00s online comic world was something.
has only a few minor credits[…], he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example.
gosh i’m sure glad that these kinds of people disappeared from the internet /s
Anyone found with a non-cube platonic solid will be lockerized indefinitely
I would simply learn how to keep “games” and “reality” separate. I actually already know. It helps a lot.
Racists are gonna racist no matter what. They didn’t need TTRPGs around to give them the idea of breaking out the calipers.
Yes, but basic DnD does have a lot of racism built in, esp with Gygax not being great on that end (‘nits make lice’, he said, about how it was lawful for paladins to kill orc babies). They did drop the sexism pretty quickly, but no big surprise his daughters were not into it. It certainly helps with the whole hierarchical mindset, the ‘my int/level is higher than yours so I’m better than you’ stuff. And sadly a lot of people do have trouble keeping both separate (and even that isn’t always ideal, esp in larps).

But yes, this; considering the context it’s def a bit of a case of some of their ideologies, or ideological fantasies, bleeding through. (Esp considering Yud has been corrected on his faulty understanding of genetics before.)
We’ve definitely sneered at this before; I do not recall if it was known that KP was the cowriter in this weird forum RP fic.

E: googling “lintamande kelsey piper” and looking at a reddit post digs up the AO3 account, inactive since 2018. A total just shy of 130k words: a little Marvel stuff, most of it LOTR-based, and some of it tagged “Vladmir Putin/Sauron”. How fun!
No judgement from me, tbh. Fanfic be fanficking. I ain’t gonna read that shit tho.
Previous thread
E: we didn’t fucking know
Not sure if anybody noticed the last time, but so they get isekai’d into a DnD world, which famously runs on some weird form of fantasy feudalism, and they expect a random high-Int person to rule the country somehow? What in the primogenitor is this stuff, you can’t just think yourself into being a king, that is one of the issues with monarchies.
E: ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit based upwards mobility.
ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit based upwards mobility.
Hey, write what you know
An encounter of this sort is what drove Lord Vetinari to make a scorpion pit for mimes, probably.
For all of the 2.2 seconds I have spent wondering who Yud’s coauthor on that was, I vaguely thought that it was Aella. I don’t know where I might have gotten that impression from. A student paper about fanfiction identified “lintamande” as Kelsey Piper in 2013.
I tried reading the forum roleplay thing when it came up here, and I caromed off within a page. I made it through this:
The soap-bubble forcefield thing looks deliberate.
And I got to about here:
Mad Investor Chaos heads off, at a brisk heat-generating stride, in the direction of the smoke. It preserves optionality between targeting the possible building and targeting the force-bubble nearby.
… before the “what the fuck is this fucking shit?” intensified beyond my ability to care.
Yeah I couldn’t find the strength to even get to the naughty stuff, I gave up after one or two chapters. And I’ve read through all of HPMOR. 😐
I’m hard-pressed to think of anything else I have tried to read that was comparably impenetrable. At least when we played “exquisite corpse” parlor games on the high-school literary magazine staff, we didn’t pretend that anything we improvised had lasting value.
got sent this image
wonder how many more of these things we’ll see before people start having a real bileful response to this (over and above the fact that a number of people have been warning about exactly this outcome for a while now)
(transcript below)
transcript
title: I gave my mom’s company an AI automation and now she and her coworkers are unemployed
body: So this is eating me alive and I don’t really know where else to put it. I run this little agency that builds these AI agents for staffing firms. Basically the agent pre-screens candidates, pulls the info into a neat report, and sends it back so recruiters don’t waste hours on screening calls. It’s supposed to be a tool, not a replacement.
My mom works at this mid-sized recruiting company. She’s always complained about how long it takes to qualify candidates, so I set them up with one of my agents just to test it. It crushed it. Way faster, way cheaper, and honestly more consistent than most of their team.
Fast forward two months and they’ve quietly laid off almost her whole department. Including my mom. I feel sick. Like I built something that was supposed to help people, and instead it wiped out my mom’s job and her team. I keep replaying it in my head like I basically automated my own family out of work.
Pressing F for doubt, looks like a marketing scam to me.
It’s pretty screwed up that humblebragging about putting their own mother out of a job is a useful opening for selling a scam service. At least the people that buy into it will get what they have coming?
that or some kind of bait
I didn’t dig into the post/username at all so I can’t guesstimate likelihood of this! get where you’re coming from
(…I really need to finish my blog relaunch (this thought brought to you by the explication I was about to embark on in this context))
(((it’s soon.gif tho!)))
Gonna have to agree with zogwarg here. I checked out the Reddit profile and they’re a self-proclaimed entrepreneur whose one-man “agency” has zero clients and has yet to even have an idea, attempting to crowdsource the latter on r/entrepreneur.
dude has a post named “from 0 to 1 clients in 48h” where someone calls him out for already claiming to have 17 customers, so it’s reasonable to assume that this guy is full of shit either way
then again, there are plenty of clueless people, so it could be real, because welcome to current year, where everything is fake, satire is dead, and reuters puts the onion out of business
‘set them up with’
Anybody want to bet if they did it for free?
could go either way tbh
Gary asks the doomers, are you “feeling the agi” now kids?
To which Daniel K, our favorite guru, lets us know that he has officially ~~moved his goal posts~~ updated his timeline, so now the robogod doesn’t wipe us out until the year of our lorde 2029. It takes a big-brain superforecaster to have to admit your four-month-old rapture prophecy was already off by at least 2 years omegalul
Also, love: updating towards my teammate (lmaou) who cowrote the manifesto but is now saying he never believed it. “The forecasts that don’t come true were just pranks bro, check my manifold score bro, im def capable of future sight, trust”
look at me, the thinking man, i update myself just like a computer beep boop beep boop
Clown world.
How many times will he need to revise his silly timeline before media figures like Kevin Roose stop treating him like some kind of respectable authority? Actually, I know the answer to that question. They’ll keep swallowing his garbage until the bubble finally bursts.
And once it does they’ll quietly stop talking about it for a while to “focus on the human stories of those affected” or whatever until the nostalgic retrospectives can start along with the next thing.
“Kevin Roose”? More like Kevin Rube, am I right? Holy shit, I actually am right.
So, as I have been on a cult-comparison kick lately: how did it work out for those doomsday cults when the world didn’t end and they picked a new date? Did they become more radicalized or less? (I’m not sure myself; I’d assume the people who are disappointed leave, and the rest get worse.)
… prophecies, per se, almost never fail. They are instead component parts of a complex and interwoven belief system which tends to be very resilient to challenge from outsiders. While the rest of us might focus on the accuracy of an isolated claim as a test of a group’s legitimacy, those who are part of that group—and already accept its whole theology—may not be troubled by what seems to them like a minor mismatch. A few people might abandon the group, typically the newest or least-committed adherents, but the vast majority experience little cognitive dissonance and so make only minor adjustments to their beliefs. They carry on, often feeling more spiritually enriched as a result.
When Prophecy Fails is worth the read just for the narrative, he literally had his grad students join a UFO / Dianetics cult and take notes in the bathroom and kept it going for months. Really impressive amount of shoe leather compared to most modern psych research.
New piece from the Financial Times: Tech utterly dominates markets. Should we worry?
Pulling out a specific point, the article notes that market concentration is higher now than it was in the dot-com bubble back in 2000.
You want my overall take, I’m with Zitron - this is quite a narrative shift.
Meanwhile, on /r/programmingcirclejerk, sneering at hn:
transcription
OP: We keep talking about “AI replacing coders,” but the real shift might be that coding itself stops looking like coding. If prompts become the de facto way to create applications/developing systems in the future, maybe programming languages will just be baggage we’ll need to unlearn.
Comment: The future of coding is jerking off while waiting for AI managers to do your project for you, then retrying the prompt when they get it wrong. If gooning becomes the de facto way to program, maybe expecting to cum will be baggage we’ll need to unlearn.
Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What’s going to happen when they retire and there’s nobody with that knowledge and experience to take their place? I’ll have sold off my shares by then, I’m sure.
New Atlantic article regarding AI, titled “AI Is a Mass-Delusion Event”. It’s primarily about the author’s feelings of confusion and anxiety about the general clusterfuck that is the bubble.
better, or equivalent to, a mass defecation event?
Here’s a blog post I found via HN:
Physics Grifters: Eric Weinstein, Sabine Hossenfelder, and a Crisis of Credibility
Author works on ML for DeepMind but doesn’t seem to be an out and out promptfondler.
Oh, man, I have opinions about the people in this story. But for now I’ll just comment on this bit:
Note that before this incident, the Malaney-Weinstein work received little attention due to its limited significance and impact. Despite this, Weinstein has suggested that it is worthy of a Nobel prize and claimed (with the support of Brian Keating) that it is “the most deep insight in mathematical economics of the last 25-50 years”. In that same podcast episode, Weinstein also makes the incendiary claim that Juan Maldacena stole such ideas from him and his wife.
The thing is, you can go and look up what Maldacena said about gauge theory and economics. He very obviously saw an article in the widely-read American Journal of Physics, which points back to prior work by K. N. Ilinski and others. And this thread goes back at least to a 1994 paper by Lane Hughston, i.e., years before Pia Malaney’s PhD thesis. I’ve read both; Hughston’s is more detailed and more clear.
DRAMATIS PERSONAE
- Michael Shermer: dry and limp writer, horribly dull public speaker, sex pest
- Sabine Hossenfelder: transphobe, endorser of sex pest Lawrence Krauss, on the subject of physics either incompetent or maliciously deceptive
- Eric Weinstein: Thielboy, he totally invented a Theory of Everything, for realsies, honest, but the dog ate his equations
- Curt Jaimungal: podcast bro who doesn’t even rate a Wikipedia article, but in searching for one we learn that he has platformed a Bell Curve stan
- Scott Aaronson: author of a blog named for a sex fantasy, he has the superpower of making people sympathize with a cop
- Chris Langan: racist, egomaniacal kook
has anyone worked out who Hossenfelder’s new backer is yet
he has the superpower of making people sympathize with a cop
He’s second only to the average sovereign citizen in that field.
I once randomly found Hossenfelder’s YT channel; it had a video about climate change and someone linked it somewhere, and I didn’t know who she was. That video seemed fine, it correctly pointed out the urgency of the matter, and while I don’t know enough climate science to say much about the veracity of all its content, nothing stuck out as particularly weird to me. So I looked at some other videos from the channel… and boooooy did I quickly discover some serious conspiracy-style nonsense stuff. Real “the cabal of physicists are suppressing the truth” vibes, including “I got this email which I will read to you but I can’t tell you who it’s from, but it’s the ultimate proof” (neither of those are direct quotes, just how I’d summarize the content…)
Longtime friends of the pod will recognize the trick of turning molehills into mountains. Creationists take a legitimate debate over a detail, like how many millions of years ago did species A and species B diverge, and they blow it up into “evolution is wrong”. Hossenfelder and her ilk do the same thing. They start with “pre-publication peer review has limited effectiveness” or “the allocation of funding is sometimes susceptible to fads”, and they blow it up into “physicists are a cabal out to suppress The Truth”.
One nugget of fact that Hossenfelder in particular exploits is that the specific way we have been investigating the corner of physics we like to call “fundamental” is, possibly, arguably, maybe tapped out. The same poster of sub-sub-atomic particles that you’d have put on your wall 30 or 40 years ago is still good today, with an edit or two in the corner. We found the top quark, we found the Higgs, and so, possibly, arguably, maybe, building an even bigger CERN machine isn’t a worthwhile priority right now. Does this spell doom for physics? No, having to reorganize how we do things in one corner of our subject after decades of astonishing success is not “doom”.
DRAMATIS PERSONAE
Belligerents
Author works on ML for DeepMind but doesn’t seem to be an out and out promptfondler.
Quote from this post:
I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.
Based on this I’d say the author is LLM-pilled at least.
However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.
Best case scenario is that the author comes around to the stochastic parrot model of LLMs.
E: also from that post, rearranged slightly for readability here. (the […]* parts are swapped in the original)
My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Jane Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. [[I was] the lone voice from a large tech company.]* I was interrupted by Yanis in my opening remarks, with claps from the audience raining down to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and their ability to influence our decision-making. […]* I was perpetually in defense mode and received none of the applause that the others did.
So also author is tech-brained and not “tech-fearful”.
Our Very Good Friends are often likened to Scientology, but have we considered Happy Science and Aum Shinrikyo?
https://en.wikipedia.org/wiki/Happy_Science https://en.wikipedia.org/wiki/Aum_Shinrikyo
Aum is very apt imo given how it recruited STEM types.
aum recruited a lot of people, and also failed at some things that would be presumably easier to do safely than what they did
Meanwhile, Aum had also attempted to manufacture 1,000 assault rifles, but only completed one.[37]
otoh they were also straight up delusional about what they could achieve, including toying with the idea of manufacturing nukes, military gas lasers, and getting and launching a Proton rocket (not exactly grounded for a group of people who couldn’t make AK-74s)
they were also more media savvy in that they didn’t pollute the info space with their ideas only using blog posts, they ~~had an entire radio station~~ rented time from a major radio station within russia, broadcasting both within the freshly former soviet union and into japan from vladivostok (which was a much bigger deal in the 90s than today)
It’s pretty telling about Our Good Friends’ media savviness that it took an all-consuming AI bubble and plenty of help from friends in high places to break into the mainstream.
With all that money sloshing around, it’s only a matter of time before they start cribbing from their neighbors and we get an anime adaptation of HPMoR.
radio transmissions in russia were the money shot for aum, and idk if it was a fluke or deliberate strategy. people had long had the expectation that radio and tv are authoritative, reliable sources (due to censorship that doubled as fact-checking, and almost all of it was state-owned), and in the 90s every bit of that broke down because of privatization; now you could get on the air and say anything, with many taking it at face value, as long as you paid up. at the same time there was a major economic crisis, and cults prey on the desperate. result?
Following the sarin gas attack on the Tokyo subway, two Russian Duma committees began investigations of the Aum – the Committee on Religious Matters and the Committee on Security Matters. A report from the Security Committee states that the Aum’s followers numbered 35,000, with up to 55,000 laymen visiting the sect’s seminars sporadically. This contrasts sharply with the numbers in Japan which are 18,000 and 35,000 respectively. The Security Committee report also states that the Russian sect had 5,500 full-time monks who lived in Aum accommodations, usually housing donated by Aum followers. Russian Aum officials, themselves, claim that over 300 people a day attended services in Moscow. The official Russian Duma investigation into the Aum described the cult as a closed, centralized organization.
And how it fused Buddhism with more Christian religions, considering how often you heard of old hackers being interested in the former.
aum:
Advertising and recruitment activities, dubbed the “Aum Salvation plan”, included claims of […] realizing life goals by improving intelligence and positive thinking, and concentrating on what was important at the expense of leisure.
this is in common with both our very good friends and scientology, but i think happy science is much stupider and more in line with srinivasan’s network states, in that it has/is an explicitly far-right political organization built in from day one
Yeah, good point.
Network State def has that store-brand Team Rocket vibe.
Ran across a viral post on Bluesky:
Unsurprisingly, the replies and quotes are universally outraged at the news.
Every task you outsource to a machine is a task that you don’t learn how to do.
And school is THE PLACE WHERE YOU ARE SUPPOSED TO LEARN THINGS, JESUS H. FUCK
A story in two Skeets - one from a TV writer, one from a software dev:
On a personal sidenote, part of me suspects the AI bubble is gonna turn tech as a whole into a pop-culture punchline - the bubble’s all-consuming nature and wide-ranging harms, plus the industry’s relentless hype campaign, have already built a heavy amount of resentment against the industry, and the general public is gonna experience a colossal amount of schadenfreude once it bursts.
Looking at the replies and quotes of a Bluesky post that shared some anti-AI headlines, one definitely gets the sense that a segment of the population will greet the bubble popping with joy not seen since Kissinger died.
I looked through the quotes, and found someone openly hoping human-made work will be more highly valued in the bubble’s wake:
You want my suspicion, I suspect she’s gonna get her wish - with the slop-nami flooding the Internet, human-made work in general is gonna be valued all the more.
Ultra-rare NIMBY W
Ed Zitron’s planning to hold AI boosters to account:
Well if the bubble pops he will have to pivot to people who pivot. (That is what is going to suck when the bubble pops: so many people are going to lose their jobs, and I fear a lot of the people holding the bags are not the ones who really should be punished the most (really hope not a lot of pension funds bought in). The stock market was a mistake.)
I imagine it’ll be a pretty lucrative pivot - the public’s ravenous to see AI bros and hypesters get humiliated, and Zitron can provide that in spades.
Plus, he’ll have a major headstart on whatever bubble the hucksters attempt to inflate next.
the hucksters attempt to inflate next.
Quantum, it has already started: https://www.schneier.com/blog/archives/2025/07/cheating-on-quantum-computing-benchmarks.html
Y’know, I was predicting at least a few years without a tech bubble, but I guess I was dead wrong on that. Part of me suspects the hucksters are gonna fail to inflate a quantum bubble this time around, though.
Quantum computing is still too far out from having even a niche industrial application, let alone something you can sell to middle managers the world over. Anybody who day-traded could get into Bitcoin; millions of people can type questions at a chatbot. Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.
Quantum computing isn’t just hard, it’s hadamard
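(Aside for anyone who missed the pun: the Hadamard gate is the standard single-qubit gate that puts a basis state into an equal superposition. A minimal statement of it, for reference:)

```latex
% The Hadamard gate the pun is riffing on: it maps each basis state
% to an equal superposition, with a sign flip on |1>.
H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
H\lvert 0\rangle = \frac{\lvert 0\rangle + \lvert 1\rangle}{\sqrt{2}},
\qquad
H\lvert 1\rangle = \frac{\lvert 0\rangle - \lvert 1\rangle}{\sqrt{2}}
```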
Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.
By my guess, no. AI earned its investor/VC dollars by providing bosses and CEOs alike a cudgel to use against labour, either by deskilling workers, degrading their work conditions, or killing their jobs outright.
Quantum doesn’t really have that - the only Big Claim™ I know it has going for it is its supposed ability to break pre-existing encryption clean in half, but that’s near-certainly gonna be useless for hypebuilding.
I think they will just start to make up capabilities; also, with the added capabilities of quantum as a computing paradigm, AGI is back on the menu. Now, thanks to quantum, without all the expensive datacenters and problems. We are gonna put quantum in glasses! VR/augmented-reality quantum AI glasses!
“AI is obviously gonna one-shot the human limbic system,” referring to the part of the brain responsible for human emotions. “That said, I predict — counter-intuitively — that it will increase the birth rate!” he continued without explanation. “Mark my words. Also, we’re gonna program it that way.”
Here’s my idea to increase the birth rate:
Make the world less of an all-consuming dystopian hellscape, so people can actually start and raise a family without ruining themselves, and can feel confident their children won’t have horrible lives.
groks gunna make u fuck
So why did you have your first child?
Optimus found a gun.
…I’m sure someone has already found and indulged that fetish.
Also, common autobot L; Megatron is a gun.