

I realized a while back that one of the primary goals of these LLMs is to get people to continue using them. While that’s not especially notable - the same could be said of many consumer products and services - the way in which this manifests in LLMs is pretty heinous.
This need for continued use is why, for example, Google’s AI was returning absolute nonsense when asked about the origins of fictitious idioms. These models are designed to return something, and to make that something pleasing to the reader, truth and utility be damned. As long as the user thinks that they’re getting what they wanted, it’s mission accomplished.
You can, but in my experience it is resistant to custom instructions.
I spent an evening messing around with ChatGPT once, and fairly early on I gave it special instructions via the options menu to stop being sycophantic, among other things. It ignored those instructions for the next dozen or so prompts, even though I followed up every response with a reminder. It finally came around after a few more prompts, by which point I was bored of it, and feeling a bit guilty over the acres of rainforest I had already burned down.
I don’t discount user error on my part, particularly that I may have asked too much at once, since I wanted it to dramatically alter its output with my customizations. But it’s still a computer, and I don’t think it was unreasonable to expect it to follow instructions the first time. Isn’t that what computers are supposed to be known for, unfailingly following instructions?