

Translation is a good fit because generally the input is “bounded” and stays on the path of the original input. I’d much rather trust an ML system that translates a sentence or a paragraph than something that tries to summarize a longer text.
I enjoy the work of the 3 Macs from the British Isles:
In general I prefer UK English SF, because it’s a bit less infected by the pernicious frontier mentality of US mainstream SF. There are very good American authors who push back on that too, but my impression was formed when Christopher Priest and Jerry Pournelle were active and could be contrasted.
Hey, no kink-shaming.
Ken MacLeod’s The Cassini Division tells of the fate of all the uploaded superhumans - blasted to plasma by bombardment with comet nuclei.
This is classic labor busting. If the relatively expensive, hard-to-train and hard-to-recruit software engineers can be replaced by cheaper labor, of course employers will do so.
A hackernews doesn’t think that LLMs will replace software engineers, but they will replace structural engineers:
https://news.ycombinator.com/item?id=43317725
The irony is that most structural engineers are actually de jure professionals, and an easy way for them to both protect their jobs and ensure future buildings don’t crumble to dust or get built without sprinkler systems is to simply ban LLMs from being used. No such protection exists for software engineers.
Edit: the LW post under discussion makes a ton of good points, to the level of being worthy of posting to this forum, and then nails its colors to the mast with this idiocy:
At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.
Gotta reaffirm the dogma!
Why put in the work when you can ask Claude to summarize them for you and reap those sweet sweet internet points?
A classic example of the “AI can’t be dumb because humans are dumb too” trope, Pokemon Red edition:
Interesting to see if he gets one.
I believe Trump is A-OK with pay-for-play for pardons, but what’s the actual price? Something flew by where people were buying a one-on-one with him for $5M, but that’s basically “private”. A pardon of someone as high-profile as SBF has to be worth the reputation hit. Can SBF and/or his family swing it? Would SBF be a good ally/toady of Trump?
Somehow I don’t see it. Unlike Ulbricht, a lot of people lost real money when FTX imploded. There wasn’t that much sympathy for him from crapto huggers. And let’s not forget he’s an autistic Jew, not a clear hero for the people who have Trump’s ear.
Worrying about a woke nanny AGI, and not the woke wirehead AGI (wireheading being a lot scarier).
This is very much the right-wing mainstream fear: not being able to generate Nazi memes with OpenAI.
Wait until they find out it’s not all iambic pentameter and Doric columns…