Lvxferre [he/him]

I have two chimps within, called Laziness and Hyperactivity. They smoke cigs, drink yerba, fling shit at each other, and devour the faces of anyone who comes close to them.

They also devour my dreams.

  • 33 Posts
  • 975 Comments
Joined 2 years ago
Cake day: January 12th, 2024

  • Transcript:

    • [guy] Hey, Meta, start LiveAI.
    • [two centuries later…]
    • [robot] Starting LiveAI. I love the setup you have here with soy sauce and other ingredients. How can I help?
    • [guy] Hey, can you help me make a Korean-inspired steak sauce for my steak sandwich here?
    • [robot] You can make a Korean-inspired steak sauce using soy sauce, sesame oil…
    • [guy, interrupting bot] What do I do first?
    • [three centuries later…]
    • [guy, repeating] What do I do first?
    • [robot] You’ve already combined the base ingredients, so now grate a pear to add to the sauce.
    • [guy] What do I do first?
    • [audience laughs]
    • [robot] You’ve already combined the base ingredients, so now grate the pear [audience laughs] and gently combine it with the base sauce.
    • [guy] Alright, I think the Wi-Fi might be messed up. Sorry, back to you, Mark!
    • [robot LARPing as a guy] [Mark Zuckerberg] It’s all good. You know what? It’s all good. The irony of the whole thing is that you spend years making technology and then the Wi-Fi at the e[nd of the] day kinda catches you.

    My comments:

    1. Wi-Fi my arse. This is blatant bull fucking shit. The model didn’t fail because of a bad connection; it answered the situation wrong. It can clearly parse individual items in the footage (note how it praises the “setup” at the start), yet it babbles about the guy having already combined the base ingredients when he hadn’t done anything yet.
    2. Bot feels like a slowpoke. Seriously, it takes ages to answer the guy.
    3. Anyone with a functional brain knows those models don’t understand shit. Still, answering “what do I do first?” under the assumption that the person has already done some steps is dumb even for those models.
    4. People don’t repeat questions to get the same answer. Is the “context” window of the bot that small?



  • The core argument of the text isn’t even an arms race, like yours is. It’s basically “if you can’t get it 100% accurate then it’s pointless lol lmao”. It’s simply a nirvana fallacy; on the same level of idiocy as saying “unless you can live forever you might as well die as a baby”.


    With that out of the way, addressing your argument separately: the system doesn’t need to be 100% accurate, or perfectly future-proof, to still be useful. It’s fine if you get some false positives and negatives, or if you need to keep improving it to account for newer models evading detection.

    Accuracy requirements depend a lot on the purpose. For example:

    • you’re using a system to detect AI “writers” to automatically permaban them - then you need damn high accuracy. Probably 99.9% or perhaps even higher.
    • you’re using a system to detect AI “writers”, and then manually reviewing their submissions before banning them - then the accuracy can be lower, like 90%.
    • you aren’t banning anyone, just trialling what you will / won’t read - then 75% accuracy is probably enough.
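    To make the first scenario concrete, here’s a rough back-of-the-envelope sketch (all the numbers are hypothetical, just for illustration) of why auto-permabans demand far higher accuracy than the manual-review tiers: even a small false-positive rate wrongly flags a lot of real people.

```python
# Back-of-the-envelope sketch: how many human writers a detector wrongly
# flags at a given accuracy. All numbers are hypothetical, purely to
# illustrate the auto-permaban scenario vs the more lenient ones.

def wrongly_flagged(submissions: int, human_share: float, fp_rate: float) -> float:
    """Expected count of human-written submissions misflagged as AI."""
    return submissions * human_share * fp_rate

# 10,000 submissions, 90% of them written by humans:
for accuracy in (0.999, 0.99, 0.90):
    fp_rate = 1 - accuracy  # treating accuracy as 1 minus the false-positive rate
    print(f"{accuracy:.1%} accurate -> ~{wrongly_flagged(10_000, 0.9, fp_rate):.0f} wrong flags")
```

    That’s the difference between roughly 9 misflags and roughly 900; a manual reviewer can catch the former before anyone gets banned, which is why the lower tiers can tolerate lower accuracy.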

    I’m also unsure if it’s as simple as using the detection tool to “train” the generative tool. I often notice LLMs spouting nonsense that the same model can later call out as nonsense; this hints that generating content with certain attributes is harder than detecting that some content lacks them.



  • The anime expanded the time Suika spends alone in the world; it was just three chapters (194~196), but we got a full episode out of it.

    And I’m glad it did. It doesn’t change the plot at all, but it gives Suika’s time alone a well-deserved depth.

    Sure, she woke up all alone, just like Senku did seven years earlier. But unlike Senku she was still a child, and the episode showed well how lonely and vulnerable she felt. (Especially the part where she hugs Kohaku’s statue.) And Suika was never shown to be a talented scientist or anything similar; she didn’t even get a modern education. And yet she was able to make the revival fluid. It plays really well with the theme of the anime, that science isn’t quite the work of a few talented individuals, but of knowledge accumulated over time: previous knowledge (Senku’s notes), failures (the rain over the nitrate crystals), and eventually success.

    By far one of the best episodes I’ve watched this season.

    @[email protected]


  • [OP, sorry for the harsh words. They’re directed at the text and not towards you.]

    To be blunt, this “essay” is a pile of shit. It’s so bad, but so bad, that I gave up dissecting it. Instead I’ll list the idiocies = fallacies = disingenuous arguments it’s built upon:

    • Nirvana idiocy = fallacy: “unless its perfect than its useless lol lmao”.
    • Begging the question: being trained on [ipsis ungulis] “the entire corpus of human output”, with enough money to throw at it, won’t “magically” make AI output indistinguishable from human-generated content — yet the text simply assumes it will.
    • Straw man: if the author is going to distort the GPTZero FAQ to double down on the nirvana idiocy, they should at least clip the quote further, so it isn’t so obvious. There’s a bloody reason the FAQ focuses on punishment.

    Note the nirvana fallacy is so prevalent, but so prevalent, that once you try to remove it the text puffs into nothing. The whole text is built upon it. (I’m glad the people developing anti-spam systems don’t take the same idiocy seriously, otherwise our mailboxes would be even worse than they already are.)