- cross-posted to:
- hackernews@lemmy.bestiver.se
Sycophantic bots coach users into selfish, antisocial behavior, say researchers, and they love it
Sycophantic or highly unreasonable up-talking instantly makes me think you are a sleazeball.
I would like AIs a whole lot more if they would: 1) respond in as few words as possible, and 2) be right way more often than they currently are. As it is, I only use them if all other research methods have failed (very rarely). And even then, I don’t actually read their output; I skim for keywords to do research on.
A completely made up example on a topic I already know things about: If I’m looking for a stronger drill but I’m just finding more drills, maybe it will say something about an impact driver and I can go research what that is and figure out if it is what I need.
Yeah, their excessive use of lists and tables is also something common to LLMs. Sometimes you ask an LLM a basic question and it responds with all these unnecessary tables and lists, then clarifications of the previous tables and lists with more tables and lists, then a summary of all these tables and lists with another list… It’s a lot. If a person were using that many tables and lists in their day-to-day texting, I’d assume they were suffering from a psychotic episode.
The first you can control to some extent. Both local and public LLMs have ways to edit or add to the system prompt, which is what guides the overall behavior. I actually had a local LLM do the opposite of what you are looking for - somehow the prompt had been changed to a very simple “You will answer short and concise” without me realizing it, and I couldn’t figure out why it had changed from a flowing, dynamic output to a few sentences.
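For anyone who hasn't poked at this: in the OpenAI-style chat format that most public APIs and local servers (e.g. llama.cpp's server) accept, the system prompt is just the first message with the `system` role. A minimal sketch of swapping it out - the helper name and message contents here are my own, not from any particular library:

```python
# Sketch: steering an LLM's verbosity by replacing the system
# message in an OpenAI-style chat payload. Only the message
# structure is shown; no API call is made.

def set_system_prompt(messages, prompt):
    """Replace any existing system message with `prompt`,
    or prepend one if there is none."""
    rest = [m for m in messages if m["role"] != "system"]
    return [{"role": "system", "content": prompt}] + rest

history = [{"role": "user", "content": "What is an impact driver?"}]
payload = set_system_prompt(history, "You will answer short and concise.")
# payload[0] is now the system message that guides overall behavior
```

Whether the model actually obeys the instruction is another matter, as the anecdote above shows - but this is the knob that was silently flipped.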
But it’s not perfect either. Sometimes you want a bit more than a simple sentence, or it might need more information and a short reply will cut off the important things.
As for fixing the second one - being right more often would mean they understand what they’re outputting, which is what we don’t have yet. I’d rather have it admit when it doesn’t have enough to be satisfactorily sure of the answer. Which doesn’t happen, because they are trained first and foremost to always have an answer - that’s more marketable than a model that says it doesn’t know.
This is 100% my experience. AI simply cannot solve problems. It isn’t capable of thinking objectively at all - no sense of any kind of permanence beyond the immediate task. I have found it educational in the sense that un-fucking something that AI has put together can teach me a lot about a system I was previously unfamiliar with.
It is a machine that outputs huge amounts of useless garbage with little practical value.