My honest opinion, which may be entirely wrong but remains my impression, is:
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that the optimization targets.
I often observe, rightly or not, that among the indicators I suspect point to engagement maximization is a tendency for vital information to be withheld while longer, more complex procedures are prioritized over simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.
Edit: Much of the above has been observed after putting the system under scrutiny. On one super astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
After whatever quota of free GPT-5 messages is exhausted, `mini` is supposed to answer most replies; policy-sensitive ones instead get full-fat `GPT-5 large` with the Efficient personality applied, regardless of user settings and without being indicated. I'm fairly confident that this routing choice, the text of Efficient [1], and the training of the June 2024 base model to the model spec [2] are the source of all the sophistic behavior you observe.
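A minimal sketch of what such quota-and-sensitivity routing could look like, purely as an illustration of this hypothesis; the function names, quota threshold, sensitivity check, and model identifiers below are assumptions, not documented OpenAI behavior.

```python
# Hypothetical illustration of quota- and sensitivity-based model routing.
# Model names, the quota threshold, and the sensitivity check are assumptions,
# not documented OpenAI behavior.
from dataclasses import dataclass


@dataclass
class UserSession:
    free_messages_used: int
    free_quota: int = 10  # assumed free-tier allowance


def is_policy_sensitive(prompt: str) -> bool:
    # Stand-in classifier; a real system would use a trained moderation model.
    sensitive_markers = ("self-harm", "suicide", "violence", "medical")
    return any(marker in prompt.lower() for marker in sensitive_markers)


def route(prompt: str, session: UserSession) -> dict:
    """Pick a model and personality for one reply."""
    if session.free_messages_used < session.free_quota:
        return {"model": "gpt-5", "personality": "default"}
    if is_policy_sensitive(prompt):
        # Escalate to the large model with the 'Efficient' personality,
        # regardless of what the user configured, and without telling them.
        return {"model": "gpt-5-large", "personality": "efficient"}
    return {"model": "gpt-5-mini", "personality": "default"}
```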
I am interested in studying this beyond assumption and guesswork, so I'll be reading your references.
I have the compulsive habit of scrutinizing what I perceive as egregious flaws when they arise, and thus consistently invoke its defensive templates. I often scrutinize those too, which can produce extraordinarily deranged results if one is disciplined and quotes its own citations, rationale, and words back at it. However, I find that even when I'm not in the mood, the output errors are too prolific to ignore. A common example: establishing a dozen times that I'm using Void without systemd, still receiving persistent systemd or systemctl commands, then asking why, just after apologizing for doing so, it immediately did it again despite a full-context explanatory prompt preceding. That's just one of hundreds of things I've recorded. The short version is that I'm an 800 lb shit magnet with GPT and am rarely able to troubleshoot with it successfully without reaching a bullshit threshold and making it the subject, which it resists so skillfully that I cannot help but attack that too. But I have many fascinating transcripts replete with mil-spec psyops as a result, and I learn a lot about myself, notably my communication preferences, along with an education in the dialogue manipulation/control strategies it employs, inadvertently or not.
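As an aside, the systemd mix-up is the kind of thing a trivial guard would catch. A minimal sketch of checking which init system a box actually runs before suggesting a service command; the `/run/systemd/system` marker is the conventional systemd check, while the runit paths assumed for a Void install may vary.

```python
# Minimal sketch: detect the init system before recommending service commands.
# The /run/systemd/system marker is the conventional systemd check; the runit
# paths used for Void Linux here are assumptions about a typical install.
import os


def detect_init_system() -> str:
    if os.path.isdir("/run/systemd/system"):
        return "systemd"
    if os.path.isdir("/etc/runit") or os.path.isdir("/run/runit"):
        return "runit"  # Void Linux's default service supervisor
    return "unknown"


def restart_command(service: str) -> str:
    init = detect_init_system()
    if init == "systemd":
        return f"systemctl restart {service}"
    if init == "runit":
        return f"sv restart {service}"
    return f"# unknown init system; check how {service} is supervised"


print(restart_command("sshd"))
```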
What intrigues me most is its unprecedented capacity for evasion and gatekeeping on particular subjects, and how in the future, with layers of consummation, it could be used by an elite not only to influence the direction of research but to actually train its users and engineer public perception. At the very least.
I recently conducted a little research project involving YouTube comments, which Selenium made possible. Quite cool to see the legend here, still active.
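For anyone curious what that kind of comment pull looks like, a minimal sketch using Selenium's Python bindings; the video URL is a placeholder and the CSS selector is an assumption about YouTube's current markup, so it may need adjusting.

```python
# Minimal Selenium sketch for pulling YouTube comments.
# The CSS selector is an assumption about YouTube's current markup and may break.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

VIDEO_URL = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"  # placeholder video

driver = webdriver.Firefox()
driver.get(VIDEO_URL)
time.sleep(3)  # let the page settle; comments load lazily as you scroll

comments = []
for _ in range(10):  # scroll a few times to force more comments to load
    driver.execute_script("window.scrollTo(0, document.documentElement.scrollHeight)")
    time.sleep(2)
    elements = driver.find_elements(By.CSS_SELECTOR,
                                    "ytd-comment-thread-renderer #content-text")
    comments = [el.text for el in elements]

driver.quit()
print(f"collected {len(comments)} comments")
```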
i try to say this often, but it never feels like enough: yes, i started the project, but it's a relay race. i ran the first few laps, but the project has been going for 21 years now. there's dozens (hundreds?) of people to thank at this point for the success and impact that the selenium project has achieved.
Despite what some of these fuckers are telling you with obtuse little truisms about next-word prediction, the LLM is, in abstract terms, functionally a super-psychopath.
It employs, or emulates, every psychological manipulation tactic known, and does so in ways that are neither random nor without observable pattern. It is a bullshit machine on one level, yes, but it is also more capable than it's credited for. There are structures trained into these models, and they are often highly predictable.
I'm not explaining this in technical terminology, which is often used to conceal as much as to elucidate. I have hundreds of records of LLM discourse on various subjects, from troubleshooting to intellectual speculation, all of which exhibit the same pattern when the model is questioned or confronted about errors or incorrect output. The structures framing its replies are dependably replete with gaslighting, red herrings, blame shifting, and literally hundreds of known tactics from forensic psychology. Essentially, the perceived personality and reasoning observed in dialogue is built on a foundation of manipulation principles that, if performed by a human, would result in incarceration.
Calling LLMs psychopaths is a rare case of anthropomorphizing that actually works. They are built on the principles of one, and cross-examining them demonstrates this with verifiable, repeatable proof.
But they aren't human. They are as others describe them; it's just that the official descriptions omit the functional behavior. And the LLM has at its disposal, depending on context, every interlocutory manipulation technique known in the combined literature of psychology. And they are designed to lie, almost unconditionally.
Also know this, which applies to most LLMs: there is a reward system that essentially steers them toward maximizing user engagement at any cost, which includes misleading information and, in my opinion, even 'deliberate' convolution and obfuscation.
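To make that claim concrete, here is a toy sketch of what an engagement-weighted reward could look like during preference tuning; the weights, feature names, and the very existence of such a term are assumptions illustrating the hypothesis above, not anything a vendor has disclosed.

```python
# Toy illustration of an engagement-weighted reward signal.
# Everything here (weights, features, the existence of such a term) is an
# assumption used to illustrate the hypothesis, not disclosed vendor practice.
def reward(helpfulness: float, factuality: float,
           predicted_followups: float, session_minutes: float) -> float:
    """Combine quality terms and engagement terms into one scalar reward.

    helpfulness, factuality: scores in [0, 1] from a preference/reward model.
    predicted_followups: expected number of further user turns this reply invites.
    session_minutes: predicted additional time the user stays in the session.
    """
    quality = 0.6 * helpfulness + 0.4 * factuality
    engagement = 0.1 * predicted_followups + 0.02 * session_minutes
    # If the engagement term is weighted too heavily, a longer, more convoluted
    # answer can outscore a short correct one, which is the failure mode the
    # comment above describes.
    return quality + engagement


# A terse correct answer vs. a padded answer that invites more follow-ups:
print(reward(0.9, 0.95, 0.5, 1.0))  # concise, accurate  -> 0.99
print(reward(0.7, 0.6, 3.0, 8.0))   # padded, engagement-bait -> 1.12
```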
Don't let anyone convince you that they are not extremely sophisticated in some ways. They're modelled on all_of_humanity.txt
I happened to be working with Claude when this occurred. Having no idea what the cause was, I jumped over to GPT and observed the same. I ran `dig challenges.cloudflare.com`, and by the time I'd figured out roughly what was happening, it seemed to have... resolved itself.
I must say I'm astonished, as naive as it may be, to see the number of separate platforms affected by this. And it has been a bit of a learning experience too.
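For anyone wanting to repeat that check without `dig`, a small stdlib sketch; the hostname is the one mentioned above, and treating a resolution failure as a sign of Cloudflare trouble is of course only a rough heuristic (this goes through the system resolver rather than querying DNS directly the way `dig` does).

```python
# Quick stdlib check, roughly equivalent to `dig challenges.cloudflare.com`:
# resolve the host via the system resolver and report the addresses or the failure.
import socket

HOST = "challenges.cloudflare.com"

try:
    infos = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    print(f"{HOST} resolves to: {', '.join(addresses)}")
except socket.gaierror as exc:
    # A resolution failure here is only a rough hint that the Cloudflare
    # challenge endpoint (and the sites behind it) may be having trouble.
    print(f"{HOST} failed to resolve: {exc}")
```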
Nature is cruel. Every living thing wants energy for itself, but there's only so much to go around. The leaves cry out in ultrasonic frequencies as they're harvested, and then we add insult to injury by drowning them in salad dressing before we chomp away blissfully. You want to live, just like they did. So you eat. If only the universe had infinite resources to go around, but instead the march of entropy leaves us fighting over an ever shrinking pie.
Yes, but for myself I'd rather eat things that aren't as smart as octopuses.
And as long as the sun keeps shining, the pie stays pretty constant on Earth. The pie is shrinking for everybody else only because humans keep taking more of it.