• RedSailsFan [none/use name]@hexbear.net
    7 days ago

    could you edit the title to make it clear they’re using machine learning and other much older branches of “AI” research to do this, not GenAI, like many people will assume nowadays when they see “AI”?

      • RedSailsFan [none/use name]@hexbear.net
        6 days ago

        in the abstract of the paper (i cannot read mandarin unfortunately):

        The integration of machine learning technologies (e.g., artificial neural networks and support vector machines) and genetic algorithms significantly enhances the regulation efficiency of feeding strategies and process parameters.

        i suppose they could be referring to LLMs with the “neural networks” part, but 1) if it were an LLM, i feel like they would at bare minimum specify the kind of neural network that points towards using one, and 2) i am very skeptical of how an LLM could be used in this scenario, so unless you have hard evidence of an LLM being used*, you should assume one is not being used in a scenario like this.

        (sorry for getting to this so late, i don’t get on hb on my pc, where i’m logged in, much these days)

        *

        frankly i am so distrustful of the usefulness of LLMs in scientific research specifically that if any paper came out claiming to use one to increase efficiency like this, i would flat out believe the scientists are either trying to commit fraud, or so mistaken it appears they are committing fraud, à la lk-99
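
The ANN/SVM-plus-genetic-algorithm pipeline the quoted abstract describes can be sketched roughly like this. This is a toy illustration only, not the paper's actual model: the surrogate "yield" function, the parameters (feed rate, temperature), and all ranges below are made up, and the quadratic stands in for whatever trained neural network or SVM the authors actually use.

```python
import random

# Toy stand-in for a trained surrogate model (the abstract's ANN/SVM).
# The quadratic is purely illustrative: yield peaks at feed_rate=2.5, temp=30.
def predicted_yield(feed_rate, temperature):
    return -(feed_rate - 2.5) ** 2 - 0.5 * (temperature - 30.0) ** 2

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Each individual is a candidate (feed_rate, temperature) setting.
    pop = [(rng.uniform(0, 5), rng.uniform(20, 40)) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the top half by predicted yield.
        pop.sort(key=lambda ind: predicted_yield(*ind), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Crossover: average the parents; mutation: small Gaussian noise.
            child = tuple((x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: predicted_yield(*ind))

best = evolve()  # converges near the surrogate's optimum, (2.5, 30.0)
```

The point of the coupling is that the GA only ever queries the cheap trained surrogate, not the real fermenter, so thousands of candidate parameter settings can be scored before anything is tried in the tank.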

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP)
          6 days ago

          I mean LLMs are just inference engines. I’d argue they’re not fundamentally different from other types of neural networks in that regard. If you train a language model on a particular domain, it’ll make predictions about likely future states given a particular state of the system. In this scenario, the LLM could encode whatever sensor data the system has for monitoring the fermentation process, and take actions to maintain a desired state when sensor data gets out of line.
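
That kind of predict-and-correct loop could look something like this sketch. It is entirely hypothetical and not from the paper: the predictor is stubbed with a naive linear extrapolation (where a trained model of any kind would plug in), and the pH setpoint and action names are invented for illustration.

```python
# Hypothetical supervisory loop: predict the next sensor reading, then act
# if the prediction drifts outside a tolerance band around the setpoint.
TARGET_PH = 5.0

def predict_next_ph(history):
    # Stand-in for any trained predictor (LLM or otherwise): a naive
    # linear extrapolation from the last two readings.
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def control_action(history, tolerance=0.2):
    predicted = predict_next_ph(history)
    if predicted > TARGET_PH + tolerance:
        return "add acid"
    if predicted < TARGET_PH - tolerance:
        return "add base"
    return "hold"
```

For example, readings trending upward ([5.0, 5.2]) extrapolate to 5.4, which exceeds the band, so the loop returns "add acid" before the drift is realized; this anticipatory step is the only thing distinguishing it from a plain thermostat.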

          • RedSailsFan [none/use name]@hexbear.net
            6 days ago

            fundamentally? no, but i thought LLMs were specifically “deep” neural networks, right? also, saying the LLM could take actions on its own is maybe one of the scariest things i’ve seen you say lol

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP)
              5 days ago

              Deep neural network simply means it has a lot of layers and a big parameter space. And I don’t see what’s scary about an automated system taking action I’m this particular context. Do you find thermostats scary too?