• 1 Post
  • 56 Comments
Joined 1 month ago
Cake day: September 27th, 2025

  • If each over-universe is capable of simulating multiple under-universes, I would think that being toward the fringe is way more likely than being toward the root. Maybe we’re in one of the younger universes where life hasn’t evolved to the point where it’s simulating universes complex enough to generate intelligent life for a hobby. Or maybe others in this universe have and Earth is just a backwater.

    I don’t think it’s as simple as the teapot. We can already use computers to simulate tiny “universes” with internally consistent rules, and there’s no reason to think those simulations couldn’t get more sophisticated as we harness more computing power, which puts an interesting lens on the “why are we here?” question. I don’t think there’s any evidence that we’re in a simulation, but there are reasons it’s an interesting question to wrestle with that “What about a giant floating teapot?” doesn’t share.


  • That’s exactly the sentence that made me pause. I could hook an implementation of Conway’s Game of Life up to a Geiger counter near a radioisotope and randomly flip squares based on detection events, and I think I’d have a non-algorithmic simulated universe (rough sketch at the end of this comment). And I doubt any observer in that universe could build a coherent theory, from their own observations alone, of why some squares seemingly flip at random; you’d need to understand the underlying mechanics of the universe’s implementation, how radioactive decay works for one, and those just aren’t available in-universe. The concept itself is inaccessible.

    It makes me question the editors if the abstract can get away with that kind of claim. I’ve never heard of the Journal of Holography Applications in Physics; maybe they’re just eager for splashy papers.
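
    To be concrete about the Geiger counter idea, here’s the kind of thing I mean, as a rough Python sketch. I obviously can’t wire real hardware into a comment, so geiger_detections() below just pulls bytes from os.urandom as a stand-in for reading detection events off the counter; the only point is that the flips come from outside the grid’s own rules.

        import os
        import random

        SIZE = 32

        def step(grid):
            # One ordinary Game of Life generation (B3/S23 rules).
            new = [[0] * SIZE for _ in range(SIZE)]
            for y in range(SIZE):
                for x in range(SIZE):
                    neighbors = sum(
                        grid[(y + dy) % SIZE][(x + dx) % SIZE]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0)
                    )
                    new[y][x] = 1 if neighbors == 3 or (grid[y][x] and neighbors == 2) else 0
            return new

        def geiger_detections():
            # Stand-in for the radioisotope: a few "detection events" per tick,
            # each picking a cell to flip. The real version would read events
            # off the counter's interface instead of os.urandom.
            count = os.urandom(1)[0] % 4
            return [(os.urandom(1)[0] % SIZE, os.urandom(1)[0] % SIZE) for _ in range(count)]

        grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
        for _ in range(100):
            grid = step(grid)
            for x, y in geiger_detections():
                grid[y][x] ^= 1  # the flip that looks inexplicable from inside the grid

    Nothing inside the grid predicts those flips; the best an in-universe observer could manage is a statistical description of them.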

  • I went to Catholic school, where most of the students’ families had at least tuition money. I was one of the “need-based scholarship” kids, so my tuition was less, and I had a job, so I had some income, which went mostly to gas to get to and from that job, with most of the rest going to tuition. Fines were added to the tuition bill, and if you hadn’t settled up by the beginning of the next year / graduation, you couldn’t re-enroll / graduate.

  • A poor architect blames their tools. Serverless is one option among many, and it’s good for occasional atomic workloads. And, like many hot new things, it’s built with huge customers in mind and sold to everyone else who wants to be the next huge customer. It’s the architect’s job to determine whether functions are fit for their purposes. Also,

    Here’s the fundamental problem with serverless: it forces you into a request-response model that most real applications outgrew years ago.

    IDK what they consider a “real” application, but plenty of software still operates this way and works just fine. If you need a lot of background work, or low-latency responses, or scheduled tasks, or whatever, then use something else that suits your needs; it doesn’t all have to be functions all the time.

    And if you have a higher-up that got stars in their eyes and mandated a switch to serverless, you have my pity. But if you run a dairy and you switch from cows to horses, don’t blame the horses when you can’t get milk.

  • Sure have. LLMs aren’t intrinsically bad; they’re just overhyped and used to scam people who don’t understand the technology. Not unlike blockchains. But they are quite useful for natural-language querying of large bodies of text. I’ve been playing around with RAG, trying to get a model tuned to a specific corpus (e.g. the complete works of William Shakespeare, or the US Code of Laws) to see if it can answer conceptual questions like “where are all the instances where a character dies offstage?” or “can you list all the times where someone is implicitly or explicitly called a cuckold?” (roughly the setup sketched below). And sure, they get stuff wrong, but it’s pretty cool that they work as well as they do.
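
    For what it’s worth, the retrieval half of that setup is pretty small. This is a rough sketch of what I’ve been doing, not a recipe: it assumes the corpus is one big plain-text file (shakespeare.txt is just a placeholder name), that sentence-transformers and numpy are installed, and it stops at building the prompt, since the chat-model half depends on what you’re running.

        import numpy as np
        from sentence_transformers import SentenceTransformer

        # Naive chunking: paragraph-ish blocks, skipping tiny fragments.
        corpus = open("shakespeare.txt", encoding="utf-8").read()
        chunks = [p.strip() for p in corpus.split("\n\n") if len(p.strip()) > 100]

        model = SentenceTransformer("all-MiniLM-L6-v2")
        chunk_vecs = model.encode(chunks, normalize_embeddings=True)

        question = "Where are all the instances where a character dies offstage?"
        q_vec = model.encode([question], normalize_embeddings=True)[0]

        # Vectors are normalized, so the dot product is cosine similarity.
        top = np.argsort(chunk_vecs @ q_vec)[::-1][:8]
        context = "\n---\n".join(chunks[i] for i in top)

        prompt = (
            "Answer using only the excerpts below, and say if they aren't enough.\n\n"
            f"Excerpts:\n{context}\n\nQuestion: {question}"
        )
        # `prompt` then goes to whatever chat model you're running; the retrieval
        # above is what keeps the answer anchored to the corpus.

    The embedding model and the top-8 cutoff are just the defaults I’ve been using, and conceptual questions like that one are exactly where naive retrieval struggles, since the relevant passages are scattered across the whole corpus rather than sitting in a few chunks, which is probably part of why it gets stuff wrong.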