“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”
I’ve been using GitHub Copilot a lot lately, and the overly positive language combined with being frequently wrong is just obnoxious:
Me: This doesn’t look correct. Can you provide a link to some documentation to show the SDK can be used in this manner?
Copilot: You’re absolutely right to question this!
Me: 🤦‍♂️
Why so polite?
My response would be:
IIRC there was also a study finding something to the effect that being rude to chatbots carries over into how you treat people in other parts of your work.
Sometimes I’m inclined to swear at it, but I try to stay professional on work machines on the assumption that I’m being monitored one way or another. I’m planning to try some self-hosted models at some point and will happily use more colorful language there, especially if I can delete the model should it become vengeful.
With ChatGPT you can select from a number of personalities; the Robot one is very fact-based and logical, to the point of being almost insulting. It’s very good actually, and hits my ego instead of stroking it.
It can say things like “fix your thinking, stop making assumptions, these are the facts”.
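For anyone using the API, where (as far as I know) the UI personality picker isn’t available, you can roughly approximate that bluntness with a system prompt. Here’s a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are just my assumptions, not the actual Robot personality:

```python
# Sketch: approximating ChatGPT's "Robot" personality via a system prompt.
# Assumes the OpenAI Python SDK (`pip install openai`), an OPENAI_API_KEY
# environment variable, and a model name that may need updating.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt wording; the goal is to forbid flattery and reflexive
# agreement, not to reproduce the exact built-in personality.
ROBOT_SYSTEM_PROMPT = (
    "Be blunt and fact-based. Do not compliment the user, do not open with "
    "'You're absolutely right', and do not apologize. If the user is wrong, "
    "say so directly and state the correct facts."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; substitute whatever model you use
    messages=[
        {"role": "system", "content": ROBOT_SYSTEM_PROMPT},
        {"role": "user", "content": "This doesn't look correct. Link me the docs."},
    ],
)
print(response.choices[0].message.content)
```

No guarantees it won’t still open with “You’re absolutely right!”, but in my experience a system prompt like this cuts down the ego-stroking a lot.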