One additional bit of context: they provided guidelines and instructions specifically to send emails and verify their successful delivery, so that the "random act of kindness" could be properly reported and measured at the end of the experiment.
Copying and pasting doesn't work unless your PDF viewer does OCR. And if the redaction is just a black rectangle overlaid on top of the text, it can still be removed.
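To make that concrete, here's a minimal sketch (the stream contents and the extraction regex are my own illustration, not from any real leaked document): in a PDF content stream, the `Tj` operator paints the text and a later `re`/`f` pair draws a filled rectangle over it. The rectangle only covers the pixels; the text bytes are still in the file, so a trivial string extractor recovers them.

```python
import re

# Hypothetical, uncompressed PDF content stream fragment:
# line 1 paints the "sensitive" text; line 2 draws a black
# filled rectangle ("re f") on top of it. The redaction is
# purely visual -- the string literal remains in the stream.
fake_stream = (
    b"BT /F1 12 Tf 72 700 Td (SECRET-ACCOUNT-12345) Tj ET\n"
    b"0 0 0 rg 70 690 170 20 re f\n"
)

def extract_strings(stream: bytes) -> list[bytes]:
    """Pull literal strings shown by Tj out of a content stream --
    roughly what copy/paste or a text extractor ends up doing."""
    return re.findall(rb"\(([^)]*)\)\s*Tj", stream)

print(extract_strings(fake_stream))  # the "redacted" text is still there
```

Real PDFs compress their streams and use more text operators than this, but the principle is the same: unless the redaction tool actually deletes the text objects, the content survives under the rectangle.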
Of course it is. It's not capable of actually forgetting or suppressing its training data; the prompt just makes it double-check rather than assume. Roleplaying is exactly what it's doing, and at any point it may stop doing that and spit out an answer based solely on training data.
It's a big part of why search overview summaries are so awful. Many times the answers are not grounded in the material.
It may actually have the opposite effect: the instruction not to use prior knowledge may be what caused Gemini 3 to assume incorrect details about how certain puzzles worked and get itself stuck for hours. It knew the right answer (from some game walkthrough in its training data) but intentionally went in a different direction to pretend that it didn't. So, paradoxically, the test results end up worse than if the model truly didn't know.
That initial percentage is a little misleading. It includes everything that caniuse isn't sure about. Really it should be something like 97.5±2.5, but the issue has been stalled for years.
Even the absolute most basic features that have been well supported for 30 years, like the HTML "div" element, cap out at 96%. Change the drop-down from "all users" to "all tracked" and you'll get a more representative answer.
As opposed to the DisplayPort cable, DisplayPort standard, or DisplayPort encoding that's sent over the wire, yes. This isn't a PIN number situation despite the stutter.
"Use AI to fix AI" is not my interpretation of the technique. I may be overlooking it, but I don't see any hint that this soul doc is AI generated, AI tuned, or AI influenced.
Separately, I'm not sure Sam's word should be treated as prophetic and unbreakable. It didn't work for his own company, at some previous time, with their approaches. Sam's also been known to tell quite a few tall tales, usually about GPT's capabilities, but tall tales regardless.
That's true for UI; it's not true when you're arbitrarily injecting user feedback into a dynamic system where you don't know how the dominoes will be affected as they fall.