• 0 Posts
  • 9 Comments
Joined 11 months ago
Cake day: April 27th, 2024

  • I have one big frustration with that: your voice input has to be understood PERFECTLY by the speech-to-text (STT) system.

    If you have a “To Do” list and say “Add cooking to my To Do list”, it will do it! But if the STT system understood:

    • Todo
    • To-do
    • to do
    • ToDo
    • To-Do

    The system will say it couldn’t find that list. The same goes for the names of your lights, asking for the time, and so on, and you have very little control over this.

    HA Voice Assistant either needs to find a PERFECT match, or you need to be running a full-blown LLM as the backend, which honestly works even worse in many ways.

    They recently added the option to use LLM as fallback only, but for most people’s hardware, that means that a big chunk of requests take a suuuuuuuper long time to get a response.

    I do not understand why there’s no option to simply fall back to the most similar command when the match isn’t perfect, using something like the Levenshtein distance.
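
    To make that concrete, here’s a minimal sketch of that fallback (plain Python, not actual Home Assistant code; the list names and the heard phrase are made-up examples): compute the Levenshtein distance between what STT heard and every known list name, and accept the closest one if it’s within a small threshold.

    ```python
    # Minimal sketch of fuzzy matching a heard list name against known lists.
    # Not Home Assistant code; names below are made-up examples.

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance."""
        if len(a) < len(b):
            a, b = b, a
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            for j, cb in enumerate(b, start=1):
                current.append(min(
                    current[j - 1] + 1,            # insertion
                    previous[j] + 1,               # deletion
                    previous[j - 1] + (ca != cb),  # substitution
                ))
            previous = current
        return previous[-1]

    def closest_list(heard: str, known_lists: list[str], max_distance: int = 3):
        """Pick the known list whose normalized name is closest to what STT heard."""
        def normalize(s: str) -> str:
            return s.lower().replace("-", " ").strip()

        distances = {name: levenshtein(normalize(heard), normalize(name))
                     for name in known_lists}
        best = min(distances, key=distances.get)
        return best if distances[best] <= max_distance else None

    print(closest_list("Todo", ["To Do", "Shopping", "Movies"]))   # -> To Do
    ```

    With something like this, “Todo”, “To-do”, “to do” and “ToDo” would all resolve to the same “To Do” list instead of failing outright.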


  • WHO IS GOING TO SELL THEM THEN?

    From what you keep repeating over and over in this thread, it seems like you think the German state should seize Tesla’s assets and sell them off.

    That is an absolutely, ridiculously unrealistic idea. But hey, let’s say you start campaigning for it TODAY. You start convincing all the “low average intelligence” people in order to get a sufficient portion of the population on board to sway politicians into seizing Tesla.

    (Note that this is not 50%; for example, legalizing abortions has had far wider support in the German population for a long time, yet it’s not happened so far.)

    So let’s be really, REALLY optimistic and say that in 10 years you will be able to get a government voted in which enacts the seizure of Tesla’s assets, against all corporate-backed influences and interests. And which somehow changes the Grundgesetz so Tesla cannot spend years moving up the courts to prevent this.

    Do you see how this does nothing TODAY? I’m all for the systemic change; go vote and campaign in that direction, but here, in this comment section, you are not offering a realistic or timely solution. Should nothing be done until your “perfect” solution becomes workable?


  • No. I am not saying that to put man and machine in two boxes. I am saying that because it is a huge difference, and yes, a practical one.

    An LLM can talk about a topic for however long you wish, but it does not know what it is talking about; it has no understanding or concept of the topic. And that shines through the instant you hit a spot where the training data was lacking and it starts hallucinating. LLMs have “read” an unimaginable amount of text on computer science, and yet as soon as I ask something niche, it spouts bullshit. Not its fault, it’s not lying; it’s just doing what it always does: putting statistically likely token after statistically likely token, only in this case the training data was insufficient.

    But it does not understand or know that either; it just keeps talking. I go “that is absolutely not right, remember that <…> is <…>”, and whether or not what I said was true, it will go “Yes, you are right! I see now, <continues to hallucinate>”.

    There’s no ghost in the machine. Just fancy text prediction.
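
    A toy illustration of that “fancy text prediction” (a made-up bigram example, nowhere near a real LLM internally, but the same “most likely next token” principle): the output looks fluent, and nothing behind it understands a word.

    ```python
    # Made-up bigram toy: always emit whatever word most often followed the
    # previous one in a tiny corpus. Fluent-looking output, zero comprehension.
    from collections import Counter, defaultdict

    corpus = (
        "the cat sat on the mat . the cat sat on the rug . "
        "the dog chased the cat ."
    ).split()

    # Count which word follows which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def continue_text(start: str, length: int = 8) -> str:
        """Keep appending the statistically most likely next word."""
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("the"))  # -> "the cat sat on the cat sat on the"
    ```

    Grammatical-looking text, happily looping forever, with nothing behind it that knows what a cat is.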


  • I’m a programmer as well. When ChatGPT & Co initially came out, I was pretty excited tbh and attempted to integrate it into my workflow, which kinda worked-ish? But a lot of that was me being amazed by the novelty and forgiving of the shortcomings.

    It did not take me long to phase them out again, though. (And no, it’s not the models I used; I have tried again now and then with the new, supposedly perfect-for-programming models, with the same results.) The only edge case where they are genuinely useful (to me at least) is simple tasks that I have some general knowledge of (enough to double-check the LLM’s work) but no interest in learning beyond what I already know. Which does occur here and there, but rarely.

    For everything else programming-related, it’s flat-out shit. I do not believe they are a time saver for even moderately difficult programs. By the time you’ve run around in enough circles, explaining “now, this does not do what you say it does”, “that’s the same wrong answer you gave me two responses ago” and “you have hallucinated that function”, and found out that the framework in use dropped that general structure in version 5, you may as well do it yourself, and actually learn how to do it at the same time.

    For work, I eventually found that it took me longer to describe the business logic (and do the above dance) than to just… do the work. I also have more confidence in the code, and understand it completely.

    In terms of programming aids, a linter, formatter and LSP are, IMHO, a million times more useful than any LLM.