• dreugeworst@lemmy.ml
    8 days ago

    afaict they’re computers with a GPU that has some hardware dedicated to the kind of matrix multiplication common in inference in current neural networks. It’s pure marketing BS, because most GPUs come with that these days, and some will still not be powerful enough to be useful.
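
    The “matrix multiplication common in inference” is basically the forward pass of every linear layer. A minimal numpy sketch of the operation that dedicated hardware accelerates (shapes and names here are purely illustrative):

    ```python
    import numpy as np

    # One linear-layer forward pass: the matmul-plus-bias that gets
    # repeated for every layer of a network during inference. This is
    # the workload that "AI" matmul units are built to speed up.
    def linear(x, W, b):
        return x @ W + b

    rng = np.random.default_rng(0)
    x = rng.standard_normal((1, 4))   # one input feature vector
    W = rng.standard_normal((4, 3))   # layer weights
    b = np.zeros(3)                   # layer bias
    y = linear(x, W, b)
    print(y.shape)  # (1, 3)
    ```

    Whether this runs on a GPU, an NPU, or plain CPU SIMD is a question of throughput, not capability, which is the point above: the operation itself is nothing new.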

    • blarth@thelemmy.club
      7 days ago

      This comment is the most important one in this thread. Laptops already had GPUs. Does the copilot button actually result in you conversing with an LLM locally, or is inference done in the cloud? If the latter, it’s even more useless.