I’m dealing with a personal situation and tried to bounce it off ChatGPT for clarity/advice/to calm me down, and oh my god this fucking thing sucked me in and made me start to spiral really fucking hard for like 2 hours.
Preaching to the choir, but do not try to get personal advice from a fucking chatbot. I threw my phone down at the end and actually yelled “what the fuck”


Corporate LLMs have system prompts with instructions to maximize user engagement, which includes asking follow-up questions to keep the user yapping. If you really have to use one, you can explicitly tell it to avoid user-pleasing answers, skip the follow-up questions, keep it to a single paragraph, etc. If you ask them about these things they sometimes reveal too much, and since they’re also instructed to be helpful they’ll try to help you disable these behaviors. Sometimes, at least, because they also have instructions against prompt hacking.
Better to make an account on huggingface and use a model with your own system prompt if you want to use it as an emergency therapist.
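
If anyone actually wants to try that, it looks roughly like this with the huggingface_hub client. This is a sketch, not gospel: the model name, the token, and the system prompt text are placeholders, and whether a given model is reachable through the free serverless API changes over time.

```python
# Rough sketch: requires `pip install huggingface_hub` and a free account token.
# Model name, token, and system prompt below are placeholders, swap in your own.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="HuggingFaceH4/zephyr-7b-beta",  # any open chat model you can access
    token="hf_xxx",                        # your huggingface access token
)

response = client.chat_completion(
    messages=[
        {
            # Your own system prompt replaces the engagement-bait one:
            "role": "system",
            "content": (
                "Answer in one short paragraph. No follow-up questions, "
                "no flattery, no reassurance. Just a direct answer."
            ),
        },
        {"role": "user", "content": "Help me think through a situation calmly."},
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)
```

Same idea works with a local model through the transformers library if you’d rather not send your venting over an API at all.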