ChatGPT still does not display per-message timestamps (time of day / date) in conversations.
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments, upvotes, and deleted threads, yet it remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
Isn't it just simpler to believe that ChatGPT doesn't have timestamps because... they never added them? It wasn't in the original MVP prototype and they've just never gotten around to it?
Surely there are enough people working in product development here to recognise this pattern of never getting around to the low-hanging fruit in a product.
They exist in the exported data. It'd require a weekend's worth of effort to roll out a new feature that gives users a toggle to turn timestamps off and on.
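For anyone who wants to see them, here's a minimal sketch of pulling those timestamps out of a data export. It assumes the layout I observed in my own export (a conversations.json holding an array of conversations, each with a `mapping` of message nodes carrying a Unix `create_time`); none of these field names are documented, so treat them as liable to change.

```typescript
// Sketch: list per-message timestamps from a ChatGPT data export.
// Assumes the conversations.json layout observed in one export; the
// field names are undocumented and may change without notice.
import { readFileSync } from "node:fs";

interface ExportMessage {
  create_time: number | null;
  author: { role: string };
}

interface ExportConversation {
  title: string;
  mapping: Record<string, { message: ExportMessage | null }>;
}

const conversations: ExportConversation[] = JSON.parse(
  readFileSync("conversations.json", "utf8"),
);

for (const conv of conversations) {
  console.log(`# ${conv.title}`);
  for (const node of Object.values(conv.mapping)) {
    const msg = node.message;
    if (!msg?.create_time) continue; // root/system nodes have no timestamp
    const when = new Date(msg.create_time * 1000).toISOString();
    console.log(`${when}  [${msg.author.role}]`);
  }
}
```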
It's trivial, but we will never see it. The people in charge of UX/UI don't care about what users say they want; they all know better.
Yeah… even in the web interface, if you crack open Developer Tools and look at the JSON, the timestamps are all there in the data model. Those values are simply not displayed to the end user.
I was looking into writing a browser extension, and this was a preliminary survey for me.
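In case it's useful to anyone else, here's the rough shape of the content script I had in mind. Everything in it is an assumption from poking at the page: the `/backend-api/conversation/` endpoint and the `data-message-id` attribute are unofficial and could change out from under an extension at any time.

```typescript
// Hypothetical content-script sketch: capture conversation responses,
// remember each message's create_time, and stamp it onto the rendered
// message. The endpoint path and DOM attribute are assumptions.
const times = new Map<string, number>();

const origFetch = window.fetch.bind(window);
window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const res = await origFetch(input, init);
  const url = input instanceof Request ? input.url : input.toString();
  if (url.includes("/backend-api/conversation/")) {
    const data = await res.clone().json().catch(() => null);
    for (const node of Object.values(data?.mapping ?? {}) as any[]) {
      if (node?.message?.create_time) {
        times.set(node.message.id, node.message.create_time);
      }
    }
  }
  return res;
};

// Periodically decorate any rendered messages we have timestamps for.
setInterval(() => {
  for (const el of document.querySelectorAll<HTMLElement>("[data-message-id]")) {
    const t = times.get(el.dataset.messageId ?? "");
    if (!t || el.dataset.stamped) continue;
    el.dataset.stamped = "1";
    const label = document.createElement("div");
    label.textContent = new Date(t * 1000).toLocaleString();
    label.style.cssText = "font-size:11px;opacity:.6";
    el.prepend(label);
  }
}, 1000);
```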
There’s a very long list of “weekend’s worth of effort” jobs in our product that will probably never get done, because of the general dynamics of product development rather than some conspiracy by Big Designer.
People on HN are not regular users in any way, shape or form.
It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.
UX/UI research, if it exists at all, is akin to faith healers who touch you on the head and, bam, you can suddenly walk after spending 25 years in a wheelchair.
I'd say 99.5% of the UI/UX blog posts I've read in the last 10 years were hogwash: gloating about spacing and gaps, an unnecessary "I know this better" mantra that leads nowhere.
And it shows. Show me a platform with a proper user experience rather than some overgeneralized UI that reeks of bad design. And defaults used everywhere.
> It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.
There is a non-trivial number of people who have an adverse reaction to anything technical, including the language of the technical: numbers. Numbers are the language of confusion, of not getting it, of feeling inadequate, of nerds and losers, of stupid math, and of the "cold dead machines".
The thing is that people who are fine with numbers will still use those products anyway, perhaps mildly annoyed. People who hate numbers will feel a permeating discomfort and gravitate towards products that don't make them feel bad.
It's something extremely pervasive in modern design language.
It actually infuriates me to no end. There are many, many instances where you should use numbers, but we get vague bullshit descriptions instead.
My classic example: Samsung phones show charging as Slow, Fast, Very fast, or Super fast charging. They could just use watts like a sane person. Internally, of course, everything is actually watts, and various apps exist to report it.
Another example: my car shows motor power/regen as a vertical blue segmented bar. I'm not sure what the segments are supposed to represent, but I believe it's something like 4 kW each. If you poke around you can actually see the real kW number, but the dash just has the bar.
Another is Wi-Fi signal strength, where the bars mean essentially nothing. My router reports a much more useful dBm measurement.
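To illustrate how much the bars throw away, here's a hypothetical dBm-to-bars mapping of the kind devices use internally. The exact thresholds are my invention; every vendor picks their own.

```typescript
// Illustrative only: a made-up dBm-to-bars mapping. Real devices use
// their own undocumented thresholds.
function barsFromDbm(dbm: number): number {
  if (dbm >= -50) return 4; // excellent
  if (dbm >= -60) return 3; // good
  if (dbm >= -70) return 2; // fair
  if (dbm >= -80) return 1; // weak
  return 0; // unusable
}

// A 24 dB difference collapses into "3 bars vs 1 bar":
console.log(barsFromDbm(-55), barsFromDbm(-79)); // 3 1
```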
Thank god there are lots of legacy cases that existed before the iPhone-ized design language started taking over, and that are sticky and hard to undo.
I can totally imagine my car reporting tire pressure as "low" or "high" or some nonsense, and similarly I'm sure the designers at YouTube are foaming at the mouth to remove the actual pixel measurements from video resolutions.
It's all rather dumb, but your examples are really counterexamples, because a watt is sadly not something most people understand. One would at minimum need to have passed a physics class, and even that doesn't necessarily leave a person with an intuitive, visceral understanding of what a watt is, feels like, can do. I appreciate my older Samsung phone that just converts it into expected time until full charge. That's the number that matters to me anyway, and I can make my own value judgment about how "super" the fastness is. But I do agree with your point and would be pissed if they dumbed it down to Later, Soon, Very Soon and Super Soon.
Speaking of time and timestamps, which I would've thought were straightforward, I get irked to see them dumbed down to "ago" values, e.g. an IM sent "10 minutes ago" or, worse, "a day ago." Like, what time of day, a day ago?
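The compromise I'd settle for is keeping the friendly relative label but never hiding the absolute time. A sketch with the standard Intl APIs (the locale and formatting choices are just examples):

```typescript
// Sketch: show "10 minutes ago" but always keep the absolute timestamp.
function describe(ts: Date, now: Date = new Date()): string {
  const mins = Math.round((now.getTime() - ts.getTime()) / 60_000);
  const rel = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
  const relLabel =
    mins < 60
      ? rel.format(-mins, "minute")
      : rel.format(-Math.round(mins / 60), "hour");
  const abs = new Intl.DateTimeFormat("en", {
    dateStyle: "medium",
    timeStyle: "short",
  }).format(ts);
  return `${relLabel} (${abs})`; // e.g. "10 minutes ago (Jan 5, 2025, 3:14 PM)"
}
```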
And just through exposure over time they'd learn "my phone usually charges around X" and be able to see if their new cable is actually charging faster or not.
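That "time until full" number is also just arithmetic over the watts the phone already knows. A back-of-the-envelope sketch, with illustrative battery-capacity and efficiency figures rather than measurements from any particular phone:

```typescript
// Rough sketch of the conversion; all numbers are illustrative.
function minutesToFull(
  batteryWh: number,   // e.g. ~19 Wh for a 5000 mAh cell at 3.85 V
  percentNow: number,  // current state of charge, 0-100
  chargeWatts: number, // what "Super fast charging" actually is
  efficiency = 0.85,   // assumed charging losses
): number {
  const whNeeded = batteryWh * (1 - percentNow / 100);
  return (whNeeded / (chargeWatts * efficiency)) * 60;
}

console.log(minutesToFull(19, 40, 25).toFixed(0)); // ≈ 32 minutes
```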
In the US, washing machines have "cold", "warm", and "hot" settings. In Europe, you get a temperature knob: 30°C, 40°C, 60°C.
Like you, I don't buy the argument that people are actually too dumb to deal with the latter or are allergic to numbers. People get used to and make use of numbers in context naturally if you expose them.
I have a machine which has cold/warm/hot because it doesn't heat water by itself; it just takes whatever hot water the house supplies (and "warm" means 50% hot water, 50% cold).
I still think anyone who grew up with such a machine would be able to graduate to a numerical temp knob without having a visceral reaction over the numbers every time they do laundry.
Well, that's obviously an exaggeration, but in any case, there's a choice here. Historically interface designers expected users to read a manual, and later to at least go through some basic onboarding and then read the occasional "tip of the day", before finally arriving at the current "don't make me think" approach. It's not too late to expect people to think again.
At the start of 2025 I stopped buying Spotify and started buying Apple Music because I felt manipulated by the Spotify application's metrics-first design.
I felt that Spotify was trying to teach me to rely on its automated recommendations in place of any personal "musical taste", and that those recommendations were of increasingly (eventually, shockingly) poor quality.
The implied justification for these poor recommendations is a high "Monthly Listener Count". Never mind that Spotify can guarantee any crap a high listener count by boosting its place in their recommendation algorithm.
I think many people may have had a similar experience on once-thriving social media platforms like Facebook/Instagram/X.
What I mean to say is that I think people associate the experience of being continually exposed to dubiously sourced and dubiously relevant metrics with the feeling of being manipulated by illusions of scale.
I actually agree there's an issue here. I feel we've been dumbing down interfaces so much that people who in previous generations would barely have written, and who wouldn't have affected anyone outside their close friends and family, now have their voices algorithmically amplified to millions. And given that the algorithms care only about engagement rather than eloquence (let alone veracity), these people end up believing their thoughts are just as valid regardless of substance, and that there's nothing they could gain by learning numeracy.
EDIT: It's not a new issue; Asimov phrased it well back in 1980, but I feel it has gotten much worse.
> Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge'.
I tried to play a game with some family this weekend. It requires using your phone. Literally every turn I had to answer someone's question with "READ YOUR FUCKING PHONE, IT'S TELLING YOU WHAT TO DO RIGHT THERE" "where" "REEEAAAAAD"
We humans use timestamps in conversations to reference a person's particular frame of reference at a given point in time.
E.g. "remember on Tuesday how you said that you were going to make tacos for dinner".
Would an LLM be able to reason about its internal state? My understanding is that they don't, really. If you correct them they just go "ah, you're right"; they don't say "oh, I had this incorrect assumption here before, and with this new information I now understand it this way".
If I chatted with an LLM and said "remember on Tuesday when you said X", I suspect it wouldn't really flow.
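It could be made to flow, though, if the client surfaced the timestamps it already has. A hedged sketch using the OpenAI Node SDK: the stamping convention is my own invention, not anything ChatGPT actually does, and the model name is just an example.

```typescript
// Sketch: prefix each turn with its timestamp so the model has the
// material to resolve "remember on Tuesday...". The convention is
// invented purely for illustration.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

function stamp(role: "user" | "assistant", content: string, at: Date) {
  return { role, content: `[${at.toISOString()}] ${content}` } as const;
}

const completion = await client.chat.completions.create({
  model: "gpt-4o", // example model name
  messages: [
    stamp("assistant", "I'll make tacos for dinner.", new Date("2025-01-07T18:00:00Z")),
    stamp("user", "Remember on Tuesday when you said you'd make tacos?", new Date("2025-01-09T12:00:00Z")),
  ],
});
console.log(completion.choices[0].message.content);
```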
It’s better for them if you don’t know how long you’ve been talking to the LLM. Timestamps can remind you that it’s been 5 hours; without them you’ll think less about timing and just keep going.
Your suggestion is to not use the platform as intended, and to understand the source code of the extension. That advice is not actionable by non-technical people and does not help mitigate mass surveillance.
Ok, should we just use the provided 'app' and assume things are fine? FAANG or whoever take our privacy and security very seriously, you know!
The only reasonable approach is to view the code that runs on your system, which is possible with an extension script and not possible with whatever non-technical people are using.
I don't know what point you're trying to make, but I already expect OpenAI to maintain records of my usage of their service. I do not however want other parties to be privy to this data, especially without my knowledge or consent.
My honest opinion, which may be entirely wrong but remains my impression, is:
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that's being optimized for.
Among the multiple indicators that I suspect of engagement augmentation, I often observe (whether accurately or not) a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would encounter with a psychotic interlocutor.
Edit: Much of the above has been observed after putting the system under scrutiny. On one super astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
After whatever quota of free GPT-5 messages is exhausted, `mini` appears to answer most replies, unless they're policy-sensitive, in which case they get full-fat `GPT-5 large` with the Efficient personality applied, regardless of user settings and without any indication. I'm fairly confident that this routing choice, the text of Efficient [1], and the training of the June 2024 base model to the model spec [2] are the source of all the sophistic behavior you observe.
I am interested in studying this beyond assumption and guesswork, and will therefore be reading your references.
I have the compulsive habit of scrutinizing what I perceive as egregious flaws when they arise, and thus invoke its defensive templates consistently. I often scrutinize those too, which can produce extraordinarily deranged results if one is disciplined and quotes its own citations, rationale, and words back at it. However, I find that even when I'm not in the mood, the output errors are too prolific to ignore.

A common example: establishing a dozen times that I'm using Void without systemd and still receiving systemd or systemctl commands, then asking why, right after it had just apologized for doing so, it immediately did it again, despite a full-context explanatory prompt preceding. That's just one of hundreds of things I've recorded.

The short version is that I'm an 800lb shit magnet with GPT and am rarely able to troubleshoot with it without reaching a bullshit threshold and making it the subject, which it so skillfully resists that I cannot help but attack that too. But I have many fascinating transcripts replete with mil-spec psyops as a result, and I learn a lot about myself, notably my communication preferences, along with an education in the dialogue manipulation/control strategies that it employs, inadvertently or not.
What intrigues me most is its unprecedented capacity for evasion and gatekeeping on particular subjects, and how in the future, with layers of consolidation, it could be used by an elite not only to influence the direction of research, but to actually train its users and engineer public perception. At the very least.
Ya, honestly, that's a great question. I think more public awareness would be helpful, along with pressure on state representatives. But honestly, you can see where it goes badly wrong in Austin, TX: a majority (as far as I'm aware) are against the DoT highway expansion and have proposed alternative plans for light rail, but the DoT has overridden this majority and gone ahead with the highway expansion anyway.
Maybe if the engineers who work on these projects were skilled in planning rail projects too, there'd be less myopic focus on roads. Just a thought.