• ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

    I mean, LLMs are just inference engines. I’d argue they’re not fundamentally different from other types of neural networks in that regard. If you train a language model on a particular domain, it’ll make predictions about likely future states given the current state of the system. In this scenario, the LLM could encode whatever sensor data the system has for monitoring the fermentation process, and take actions to maintain a desired state when the readings drift out of range.
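
    As a rough sketch of what I mean (the sensor fields, thresholds, and the stand-in predict_action function are all hypothetical, not a real system):

    ```python
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        """Snapshot of whatever probes the fermentation rig exposes."""
        temperature_c: float
        ph: float
        gravity: float

    def read_sensors() -> SensorReading:
        # Placeholder: in practice this would poll the actual probes.
        return SensorReading(temperature_c=21.2, ph=4.6, gravity=1.012)

    def predict_action(reading: SensorReading, target_temp_c: float) -> str:
        # Stand-in for the trained model: given the current state, pick the
        # corrective action most likely to keep fermentation on target.
        if reading.temperature_c > target_temp_c + 0.5:
            return "cooling_on"
        if reading.temperature_c < target_temp_c - 0.5:
            return "heating_on"
        return "hold"

    def control_step(target_temp_c: float = 19.0) -> None:
        reading = read_sensors()
        action = predict_action(reading, target_temp_c)
        print(f"state={reading} -> action={action}")

    if __name__ == "__main__":
        control_step()
    ```

    The model just slots into the same kind of loop a thermostat or PID controller would sit in; the only part that changes is how the action gets predicted.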

    • RedSailsFan [none/use name]@hexbear.net · 4 days ago

      Fundamentally? No, but I thought LLMs were specifically “deep” neural networks, right? Also, saying the LLM could take actions on its own is maybe one of the scariest things I’ve seen you say lol

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 3 days ago

        “Deep” neural network simply means it has a lot of layers and a big parameter space. And I don’t see what’s scary about an automated system taking action in this particular context. Do you find thermostats scary too?
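
        To make “deep” concrete, here’s a toy illustration (the layer count and width are arbitrary, and a real LLM stacks transformer blocks rather than plain Linear layers):

        ```python
        import torch.nn as nn

        # "Deep" just means many stacked layers and a large parameter count.
        depth, width = 12, 512
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(width, width), nn.ReLU()]
        model = nn.Sequential(*layers)

        n_params = sum(p.numel() for p in model.parameters())
        print(f"{depth}-layer MLP with {n_params:,} parameters")
        ```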