Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I'm assuming his point is that the cost-benefit-ratio of how LLMs are often used is out of whack.
Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Data center power usage was fairly flat for the last decade (until 2022 or so). While new capacity kept coming online, efficiency improvements kept pace, holding total usage mostly flat.
The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.
It's a completely different order of magnitude than the pre AI-boom data center usage.
The first chart in your link doesn't show "flat" usage until 2022? It is clearly rising at an increasing rate, and it more than doubles over 2014-2022.
It might help to look at global power usage, not just the US, see the first figure here:
I think you're referring to Figure ES-1 in that paper, but that's kind of a summary of different estimates.
Figure 1.1 is the chart I was referring to, which are the data points from the original sources that it uses.
Between 2010 and 2020, it shows a very slow linear growth. Yes, there is growth, but it's quite slow and mostly linear.
Then the slope increases sharply. And the estimates after that point follow the new, sharper growth.
Sorry, when I wrote my original comment I didn't have the paper in front of me, I linked it afterwards. But you can see that distinct change in rate at around 2020.
ES-1 is the most important figure, though? As you say, it is a summary, and the authors consider it their best estimate, hence they put it first, and in the executive summary.
Figure 1.1 does show a single source from 2018 (Shehabi et al) that estimates almost flat growth up to 2017, that's true, but the same graph shows other sources with overlap on the same time frame as well, and their estimates differ (though they don't span enough years to really tell one way or another).
I still wouldn't say your assertion that data center energy use was fairly flat until 2022 is true. Even Figure 1.2, for global data center usage, tracks more in line with the estimates in the executive summary. It just looks like a run-of-the-mill exponential increase at the same rate since at least 2014, a good amount of time before genAI was used heavily.
Based on Yahoo historical price data, Bitcoin prices first started being tracked in late 2014. So my guess would be that the increase from then to 2022 could largely be attributed to crypto mining.
The energy impact of crypto is rather exaggerated. Most estimates on this front aim to demonstrate as high a value as possible, and so should be taken as a high upper bound, and yet even that upper bound is 'only' around 200 TWh a year. Annual electricity consumption is in the 24,000 TWh range, with growth averaging around 2% per year.
So if you looked at a graph of electricity consumption, you wouldn't even notice crypto. In fact, even LLM stuff will just look like a blip unless it scales up substantially more than it's currently trending. We use vastly more energy than most appreciate. And this is only electrical energy consumption. All energy consumption is something like 185,000 TWh. [1]
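A quick back-of-envelope check on those figures (using the 200 TWh crypto upper bound and the rough global totals quoted above, so the percentages are only as good as those inputs):

```python
# Rough figures quoted in the comment above (all in TWh per year).
crypto_upper_bound = 200       # high-end estimate for crypto mining
electricity_total = 24_000     # global annual electricity consumption
energy_total = 185_000         # global annual energy consumption, all forms

# Crypto's share of each total, as a percentage.
share_of_electricity = crypto_upper_bound / electricity_total * 100
share_of_energy = crypto_upper_bound / energy_total * 100

print(f"Crypto as share of electricity: {share_of_electricity:.2f}%")  # ~0.83%
print(f"Crypto as share of all energy:  {share_of_energy:.2f}%")       # ~0.11%
```

Under 1% of electricity even at the upper bound, which is why it barely registers on a consumption graph.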
This is where the debate gets interesting, but I think both sides are cherrypicking data a bit. The energy consumption trend depends a lot on what baseline you're measuring from and which metrics you prioritize.
Yes, data center efficiency improved dramatically between 2010 and 2020, but the absolute scale kept growing. So you're technically both right: efficiency gains kept per-unit costs down while total infrastructure expanded. The 2022+ inflection is real though, and it's not just about AI training. Inference at scale is the quiet energy hog nobody talks about enough.
What bugs me about this whole thread is that it's turning into "AI bad" vs "AI defenders," when the real question should be: which AI use cases actually justify this resource spike? Running an LLM to summarize a Slack thread probably doesn't. Using it to accelerate drug discovery or materials science probably does. But we're deploying this stuff everywhere without any kind of cost/benefit filter, and that's the part that feels reckless.
Have you dived into the destructive brainrot that YouTube serves to millions of kids who (sadly) use it unattended each day? Even much of Google's non-ad software is a cancer on humanity.
Only if you believe in water memory or homeopathy.
To stretch the analogy, all the "babies" in the "bathwater" of youtube that I follow are busy throwing themselves out by creating or joining alternative platforms, having to publicly decry the actions Google takes that make their lives worse and their jobs harder, and ensuring they have very diversified income streams and productions to ensure that WHEN, not IF youtube fucks them, they won't be homeless.
They mostly use Youtube as an advertising platform for driving people to patreon, nebula, whatever the new guntube is called, twitch, literal conventions now, tours, etc.
They've been expecting youtube to go away for decades. Many of them have already survived multiple service deaths, like former Vine creator Drew Gooden, or have had their business radically changed by google product decisions already.
Will you be responding similarly to Pike? I think the parent comment is illustrating the same sort of logic that we're all downwind of, if you think it's flawed, I think you've perhaps discovered the point they were making.
Yes I agree although I still believe that there is some tangential truth in parent comment when you think about it.
I'm not sure about Google, but Facebook definitely has some of the most dystopian tracking I have heard of. I might read the Facebook Files some day, but the dystopian fact that Facebook tracks young girls, infers that if they delete their photos they must be feeling insecure, and then serves them beauty ads is beyond predatory.
Honestly, my opinion is that something should be done about both of these issues.
But also, it's not a gotcha moment for Rob Pike; it's not as if he himself was plotting the ads or something.
Regarding the "iPhone kids", I feel as if the best thing is probably a parental-level intervention rather than waiting for a regulatory crackdown, since, let's be honest, some kids would just download another app that might not be covered by the regulation.
Australia is basically implementing a social media ban for kids, but I don't think it's gonna work out. Everyone's watching it to see what happens, though.
Personally, I don't think a social media ban can work while VPNs exist, but maybe it can create immense friction. Then again, I assume that friction might just become the norm. I assume many of you have been using the internet since the terminal days, when the friction was definitely there but the allure still beat the friction.
How does the compute required for that compare to the compute required to serve LLM requests? There's a lot of goal-post moving going on here, to justify the whataboutism.
You could at least argue while there is plenty of negatives, at least we got to use many services with ad-supported model.
There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's literally fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently", all while rampantly stealing content to do it. Because apparently pirating something as a person is a terrible crime the government will chase you for, unless you do it to resell in an AI model, in which case it's propping up the US economy.
I feel you. Back at the beginning of the mp3 era, the record industry was pursuing people for pirating music. And then when an AI company does it for books, it's somehow not piracy?
If there is any example of hypocrisy, and that we don't have a justice system that applies the law equally, that would be it.
Agreed, but I'm speaking more in aggregate. And even individually, it's not hard to find people who will say that e.g. an Instagram ad gave them a noticeable benefit (I've experienced it myself), just as you can find people who feel it was a waste of money.
It isn't that simple. Each company paying for ads would have preferred that their competitors had not advertised; then it could spend a lot less on ads for the same value.
It is like an arms race. Everyone would have been better off if people just never went to war, but....
There's only a tiny slice of companies that deal with advertising like this. Say, Coke vs Pepsi, where everyone already knows both brands and they push a highly similar product.
A lot of advertising is telling people about some product or service they didn't even know existed though. There may not even be a competitor to blame for an advertising arms race.
It can't function without advertising, money, or oxygen, if we're just adding random things to obscure our complete lack of an argument for advertising. We can't go back to an anaerobic economy, silly wabbit.
Btw, how do you calculate the toll that ads take on society?
I mean, buying another pair of sneakers you don't need just because ads made you want them doesn't sound like the best investment from a societal perspective. And I am sure sneakers are not the only product that is being bought, even though nobody really needs them.
> “this other thing is also bad” is not an exoneration
No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.
Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.
This is a purity test that cannot be passed. Give me your career history and I’ll tell you why you aren’t allowed to make any moral judgments on anything as well.
My take on the above (and I might be taking it out of context) is that what is being said here is that the exploitation and grift need to stop. And if you are working for a company that does this, you are part of the problem. I know that pretty much every modern company does this, but it has to stop somewhere.
We need to find a way to stop contributing to the destruction of the planet soon.
I don't work for any of these companies, but I do purchase things from Amazon and I have an Apple phone. I think the best we can do is minimize our contribution to it. I try to limit what services I use from these companies, and I know it doesn't make much of a difference, but I am doing what I can.
I'm hoping more of the people who need to be employed by tech companies can find a way to be more selective about whom they work for.
The point is that he is criticizing Google but still collecting checks from them. That's hypocritical. He would get a little sympathy from me if he had never worked for them. He had decades to resign. He didn't. He stayed there until retirement. He's even using Gmail in that post.
I still don't see the problem. You can criticize things you're part of. Probably being part of something is what informs a person enough, and makes it matter enough to them, to criticize in the first place.
> I still don't see the problem. You can criticize things you're part of.
Certainly. But this, IMO, is not the reason for the criticism in the comments. If Rob ranted about AI, about spam, slop, whatever, most of those criticizing his take would nod instead.
However, the one and only thing that Rob says in his post is "fuck you people who build datacenters, you rape the planet". And this coming from someone who worked at Google from 2004 to 2021 and instead could have picked any job anywhere. He knew full well what Google was doing; those youtube videos and ad machines were not hosted in a parallel universe.
I have no problem with someone working at Google on whatever with full knowledge that Google is pushing ads, hosting videos, working on next gen compute, LLM, AGI, whatever. I also have no problem with someone who rails against cloud compute, AI, etc. and fights it as a colossal waste or misallocation of resources or whatever. But not when one person does both. Just my 2c, not pushing my worldview on anyone else.
If Rob Pike were asked about these issues of systemic addiction, and the other areas where Google has done badly, I am sure he wouldn't defend Google on those things.
Maybe someone can mail Rob Pike a real message genuinely asking (without any of the snarkiness I feel from some comments here) about some questionable Google things, and I am almost certain that if those questions are reasonable, Rob Pike will agree that some of Google's actions were wrong.
I think it's just that Rob Pike got pissed off because an AI messaged him, so he took the opportunity to talk about these issues. I doubt he has had the opportunity to talk about, or been asked about, other flaws of Google or the systemic issues related to it.
It's like: okay, I feel there is an issue in the world, so I talk about it. Does that mean I have to talk about every issue in the world? No, not really. I can have priorities in which issues I wish to talk about.
But that being said, if someone then asks me respectfully about issues which are reasonable, being moral, I can agree that yes, those are issues as well which need work.
And some people like Rob Pike, who left Google (for ideological reasons, perhaps? not sure), wouldn't really care about the fallout, and like you say, it's okay to collect checks from an organization even if you criticize it.
Honestly, from my limited knowledge, Google is lucky they got Rob Pike rather than the other way around.
Golang is such a brilliant language, and Ken Thompson and Rob Pike are consistently some of the best coders; their contributions to Golang and so many other projects are unparalleled.
I don't know as much about Rob Pike as about Ken Thompson, but I assume he is really great too! Mostly I am just a huge Golang fan.
I know this will probably not come off very well in this community, but there is something to be said about criticizing the very thing you are supporting. I know that in this day and age, it's not easy to survive without contributing to the problem to some degree.
I'm not saying nobody has the right to criticize something they are supporting, but it does say something about our choices and how far we let this problem go before it became too much to solve. And I'm not saying the problem isn't solvable. Just that it's become astronomically more difficult now than ever before.
I think at the very least, there is a little bit of cringe in me every time I criticize the very thing I support in some way.
The problem is that everyone on HN treats "You are criticizing something you benefit from" as somehow invalidating the arguments themselves rather than impeaching the person making the arguments.
Being a hypocrite makes you a bad person sometimes. It doesn't actually change anything factual or logical about your arguments. Hypocrisy affects the pathos of your argument, but not the logos or ethos! A person who built every single datacenter would still be well qualified to speak about how bad datacenters are for the environment. Maybe their argument is less convincing because you question their motives, but that doesn't make it wrong or invalid.
Unless HNers believe he is making this argument to help Google in some way, it doesn't fucking matter that Google was also bad and he worked for them. Yes, he worked for Google while they built out datacenters and now he says AI datacenters are eating up resources, but is he wrong? If he's not wrong, then talk about hypocrisy is a distraction.
HNers love arguing to distract.
"Don't hate the player, hate the game" is also wrong. You hate both.
Well said. Thank you. I just wanted to point out that there is some truth behind the negative effects of criticizing what you helped create. IMHO not everything is about facts and logic, but also about the spirit that's behind our choices. I know that kind of perspective is not very welcome here, but wanted to say it anyway.
Sometimes facts and logic can only get you so far.
> But that being said, if someone then asks me respectfully about issues which are reasonable, being moral, I can agree that yes, those are issues as well which need work.
With all due respect, being moral isn't an opinion or agreement with an opinion; it's the logic that directs your actions. Being moral isn't saying "I believe eating meat is bad for the planet", it's the behaviour that abstains from eating meat. Your morality is the set of statements that explains your behaviour. That is why you cannot say "I agree that domestic violence is bad" while at the same time you are beating up your spouse.
If your actions contradict your stated views, you are being a hypocrite. This is the point that people here are making. Rob Pike was happy working at Google while Google was environmentally wasteful (e-waste, carbon footprint, and data-center-related nastiness), tracking users and mining their personal and private data for profit. He didn't resign then, nor did he seem to make a fuss about it. He likely wasn't interested in "pointless politics" and just wanted to "do some engineering" (a reference to techies dismissing or criticizing folks discussing social justice issues in relation to big tech).

I am shocked I am having to explain this here. I understand this guy is an idol to many, but I would expect people to be more rational on this website.
I think everyone, including myself, should be extremely hesitant to respond to marketing emails with profanity-laden moralism. It’s not about purity testing, it’s about having the level of introspection to understand that people do lots of things for lots of reasons. “Just fuck you. Fuck you all.” is not an appropriate response to presumptively good people trying to do cool things, even if the cool things are harmful and you desperately want to stop them.
Yes, I'm trying to marginalize the author's view. I think that “Just fuck you. Fuck you all.” is a bad view which does not help us see problems for what they are nor analyze negative impacts on society.
For example, Rob seems not to realize that the people who instructed an AI agent to send this email are a handful of random folks (https://theaidigest.org/about) not affiliated with any AI lab. They aren't themselves "spending trillions" nor "training your monster". And I suspect the AI labs would agree with both Rob and me that this was a bad email they should not have sent.
It's a smarmy sycophantic email addressing him personally and co-opting his personal achievements written by something he dislikes. This would feel really fucked up. It's true that anger is not always a great response but this is one of those occasions where it fits exactly.
That's frankly just pure whataboutism. The scale of the situation with the explosion of "AI" data centres is far, far higher, and so is the immediacy of the spike.
It’s not really whataboutism. Would you take an environmentalist seriously if you found out that they drive a Hummer?
When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.
I would argue that Google has actually had a comparatively good track record on the environment. I mean, if you say (pre-AI) Google has a bad track record on the environment, then I wonder which companies don't, in your opinion. And while we can argue about the societal cost/benefit of other Google services and their use of ads to finance them, I would say they were very different from e.g. Facebook, with its documented effort to make its feed more addictive.
Honestly, it seems like Rob Pike may have left Google around the same time I did (2021, 2022), which was about when it became clear the place was 100% down in the gutter without coming back.
But you left because you felt Google was going down the gutter and wanted to make an ethical choice, perhaps, about what you felt was right.
Honestly, I believe Google might be one of the few winners of the AI industry, perhaps because they own the whole stack top to bottom with their TPUs, but I would still stay away from their stock because their P/E ratio might be insanely high or something.
So we might be viewing the peak of the bubble. You might still hold the stock and continue holding it, but who knows what happens if the stock loses value due to AI-bubble dynamics; then you might regret not selling, but if you do sell and Google's stock rises, you might regret that too.
I feel as if the grass is always greener. I'm not sure about your situation, but if you ask me, you made the best of it with the parameters you had, so logically I wouldn't consider it "unfortunate", though I get what you mean.
That's one of the reasons I left. It also became intolerable to work there because it had gotten so massive. When I started there was an engineering staff of about 18,000 and when I left it was well over 100,000 and climbing constantly. It was a weird place to work.
But with remote work it also became possible to get paid decently around here without working there. Prior I was bound to local area employers of which Google was the only really good one.
I never loved Google; I came there through acquisition, and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.
After 2016 or so the place just started to go downhill faster and faster though. People who worked there in the decade prior to me had a much better place to work.
Interesting, so if I understand you properly, you would prefer working remote nowadays with google but that option didn't exist when you left google.
I am super curious, as I don't often get to chat with people who have worked at Google, so pardon me, but I've got so many questions for you haha
> It was a weird place to work
What was the weirdness according to you, can you elaborate more about it?
> I never loved Google, I came there through acquisition and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing because they exterminated my prior employer and and made me move cities.
For context, can you please talk more about it :p
> After 2016 or so the place just started to go downhill faster and faster though
What were the reasons that made them go downhill in your opinion and in what ways?
Naturally, I feel like as organizations grow and have too many people, things can become intolerable. But I have heard it described as depending on where you are and on which project, and also on how hard it can be to leave a bad team or join a team of like-minded people, which can be hard if the institution gets micro-managed at every level due to the sheer size of its workforce.
> you would prefer working remote nowadays with google but that option didn't exist when you left google.
Not at all. I actually prefer in-office. And left when Google was mostly remote. But remote opened up possibilities to work places other than Google for me. None of them have paid as well as Google, but have given more agency and creativity. Though they've had their own frustrations.
> What was the weirdness according to you, can you elaborate more about it?
I had a 10-15 year career before going there. Much of what is accepted as "orthodoxy" at Google rubbed me the wrong way. It is in large part a product of having an infinite money tree. It's not an agile place. Deadlines don't matter. Everything is paid for by ads.
And as time went on, it became less of an engineering-driven place and more of a product-manager-driven place, with classic big-company turf wars and shipping the org chart all over the place.
I'd love to get paid Google money again, and get the free food and the creature comforts, etc. But that Google doesn't exist anymore. And they wouldn't take me back anyway :-)
It was still a wildly wasteful company doing morally ambiguous things prior to that timeframe. I mean, its entire business model is tracking and ads— and it runs massive, high energy datacenters to make that happen.
I wouldn't argue with this necessarily except that again the scale is completely different.
"AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.
And so it's again, a kind of whataboutism that pushes the scale of the issue out of the way in order to make some sort of moral argument which misses the whole point.
BTW in my first year at Google I worked on a change where we made some optimizations that cut the # of CPUs used for RTB ad serving by half. There were bonuses and/or recognition for doing that kind of thing. Wasteful is a matter of degrees.
> "AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.
It wasn't only about serving those ads though, traditional machine-learning (just not LLMs) has always been computationally expensive and was and is used extensively to optimize ads for higher margins, not for some greater good.
Obviously, back then and still today, nobody is being wasteful because they want to. If you go to OpenAI today and offer them a way to cut their compute usage in half, they'll praise you and give you a very large bonus for the same reason it was recognized & incentivized at Google: it also cuts the costs.
It's dumb, but energy wise, isn't this similar to leaving the TV on for a few minutes even though nobody is watching it?
Like, the per-query ratio is not too crazy; it's rather the large resource usage that comes from the aggregate of millions of people choosing to use it.
If you assume all of those queries provide no value then obviously that's bad. But presumably there's some net positive value that people get out of that such that they're choosing to use it. And yes, many times the value of those queries to society as a whole is negative... I would hope that it's positive enough though.
But mining all the tracking data in order to show profitable targeted ads is extremely intensive. That’s what kicked off the era of “big data” 15-20 years ago.
Mining tracking data is a megaFLOP and gigaFLOP scale problem while just a simple LLM response is a teraFLOP scale problem. It also tends towards embarrassingly parallel because tracks of multiple users aren't usually interdependent. The tracking data processing also doesn't need to be calculated fresh for every single user with every interaction.
LLMs need to burn significant amounts of power for every inference. They're orders of magnitude more power hungry than searches, database lookups, or even loads from disk.
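To put rough numbers on that gap (these are illustrative orders of magnitude of my own choosing, not measured values for any real system): scoring a user against a targeting model might be a gigaFLOP-scale job, while a single LLM response from a ~70B-parameter model, at roughly 2 FLOPs per parameter per token, lands around 10^13 to 10^14 FLOPs:

```python
# Illustrative orders of magnitude only -- not measured values.
ad_scoring_flops = 1e9            # one gigaFLOP-scale targeting computation
llm_flops_per_token = 2 * 70e9    # ~2 FLOPs/parameter for a ~70B-param model
response_tokens = 500             # a typical multi-paragraph reply

llm_response_flops = llm_flops_per_token * response_tokens
ratio = llm_response_flops / ad_scoring_flops

print(f"LLM response: ~{llm_response_flops:.0e} FLOPs")  # ~7e+13
print(f"Ratio vs ad scoring: ~{ratio:,.0f}x")            # ~70,000x
```

Even if the per-request numbers are off by an order of magnitude either way, the conclusion that inference dominates per-interaction compute holds.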
The generation of the content was done intentionally though. If they saved the output and you visited their site it wasn’t really generated for you (rather just static content served to you).
I.e., they are proud to have never intentionally used AI and now they feel like they have to maintain that reputation in order to remain respected among their close peers.
Asking about the value of ads is like asking what value I derive from buying gasoline at the petrol station. None. I derive no value from it, I just spend money there. If given the option between having to buy gas and not having to buy gas, all else being equal, I would never take the first option.
But I do derive value from owning a car. (Whether a better world exists where my and everyone else's life would be better if I didn't is a definitely a valid conversation to have.)
The user doesn't derive value from ads, the user derives value from the content on which the ads are served next to.
If people actually wanted LLMs, you probably wouldn't have to advertise them as much.
No, the reality of the matter is that LLMs are being shoved at people. They become the talk of the town, and algorithms amplify any development related to LLMs or similar.
The ads are shoved at users too. Trust me, the average person isn't that enthusiastic about LLMs, and for good reasons: people with billions of dollars say yes, it's a bubble, but it's all worth it, while the workforce itself is being replaced by AI, or its replacement is being actively discussed.
We sometimes live in a Hacker News bubble of like-minded people and communities, but even on Hacker News we see disagreements (I am usually anti-AI, mostly because of the negative financial impact the bubble is gonna have on the whole world).
So your point becomes a bit moot in the end. That being said, Google (not sure how it was in the past) and big tech can sometimes actively promote scammy ad sponsors, or close their eyes to them, so ad blockers are generally really good in that sense.
That's just not true... When a mother nurses her child and then looks into their eyes and smiles, it takes the utmost in cynical nihilism to claim that is harmful.
I could be misinterpreting the parent myself, but I didn't bat an eye at the comment because I interpreted it similarly to "everything humans (or anything, really) do increases net entropy, which is harmful to some degree for Earth". I wasn't considering the moral good vs harm that you bring up, so I had been reading the discussion from the priorities of minimizing unnecessary computing scope creep, where LLMs are being pointed to as a major aggressor. While I don't disagree with you and those who feel that statement is anti-human (another responder said this), this is what I think the parent was conveying, not that all human action is immoral to some degree.
> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
Well, the people who burnt compute paid for it with money, so they did burn money.
But they don't care about burning money if they can get more money via investors or other inputs faster than they can burn it (fun fact: sometimes they even outspend that input).
So in a way the investors are burning their money, and they burn it because the market is becoming irrational. Remember Devin? Yes, Cognition Labs is still around, etc., but I remember people investing in these because of the hype, even though the product fell well short of it.
But people and the market were so irrational that, since most of these private-equity firms were unable to invest in something like OpenAI, they are investing in anything AI-related.
And when you think more deeply about all the bubble activity, it becomes apparent that in the end bailouts feel more likely than not, which would be a tax on average taxpayers, and they are already paying an AI tax in multiple forms, whether it's the inflation of RAM prices due to AI or increases in electricity or water rates.
So repeat it with me: who's gonna pay for all this? We all will. But the biggest disservice, which is the core of the argument, is that if we are paying for these things, why don't we have a say in it? Why do we not have a say in AI-related companies and the issues relating to them, when people know it might take their jobs, etc.? The average public in fact hates AI (shocking, I know /satire), but the fact that it's still being pushed shows how little influence the public can sometimes have.
Basically, "the public can have any opinions it wants, but we won't stop" is what's happening in the AI space, imo, completely disregarding any thoughts of the general public, while the CFO of OpenAI floats the idea that the public could bail out ChatGPT or something tangential.
> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.
I don't think it does, unless you ignore the context of the conversation. It's very clear that the reference to "letters" wasn't about "all mail."
When the thought is "I'd like this person to know how grateful I am", the medium doesn't really matter.
When the thought is "I owe this person a 'Thank You'", the handwritten letter gives an illusion of deeper thought. That's why there are fonts designed to look handwritten. To the receiver, they're just junk mail. I'd rather not get them at all, in any form. I was happy just having done the thing, and the thoughtless response slightly lessens that joy.
We’re well past that. Social media killed that first. Some people have a hard time articulating their thoughts. If AI is a tool to help, why is that bad?
Imagine the process of solving a problem as a sequence of hundreds of little decisions that branch between just two options. There is some probability that your human brain would choose one versus the other.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least on average it will be interpreted that way because of a wide variety of human cognitive biases even if it hedges. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
> And then at some point they start charging for the service. That's the part I'm concerned about, if it's on-device and free to use I still think it makes your thought process less interesting and likely to have original ideas, but having to subscribe to a service to trust your decision making is deeply concerning.
This, and also the environmental impact. I wish more models focused on parameter density and compactness so they can run locally, but that isn't something big tech really wants, so we are probably going to be left with models like the recent MiniMax model, the GLM Air models, or the Qwen and Mistral models.
These AI services only work as long as they are free and burn money. As an example, my brother and I were discussing something LLM-related yesterday, and my mother tried to follow along and join the conversation. She wanted a Ghibli-style photo, since someone she knew had a Ghibli-generated photo as their profile picture, and she wanted to try it too.
She then generated the pictures, and my brother did a quick calculation: each image cost around 4 cents, which converted to my currency is about 3 rupees.
When my brother asked if she would pay for it, she said no, she's only using it for free, but she also said that if she were forced to, she might pay up to 50 rupees.
I jumped into the conversation and said nobody's going to force her to make Ghibli images.
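For what it's worth, the back-of-the-envelope conversion above is trivial to check. A minimal sketch, where both the 4-cent per-image figure and the exchange rate are assumptions for illustration, not official pricing:

```python
# Rough per-image cost conversion, as described in the anecdote above.
# Both figures below are assumptions, not published prices.
cost_usd = 0.04       # assumed cost per generated image, in US dollars
usd_to_inr = 75.0     # assumed conversion rate, rupees per dollar

cost_inr = cost_usd * usd_to_inr
print(f"~{cost_inr:.0f} rupees per image")  # ~3 rupees per image
```

At 50 rupees willingness-to-pay, that would cover roughly 16 images before the "subscription" stops making sense to her.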
Articulating thoughts is the backbone of communication. Replacing that with some kind of emotionless groupthink does actually destroy human-to-human communication.
I would wager that a good number of the "very significant things that have happened over the history of humanity" come down to a few emotional responses.
I shouldn't have to explain this, but a letter is a medium of communication, that could just as easily be written by a LLM (and transcribed by a human onto paper).
Communication happens between two parties. I wouldn't consider an LLM a party, considering it's just autosuggestion on steroids at the end of the day (let's face it).
Also, if you need communication like this, just share the prompt with the other person in the letter; people might well value that more.
Someone taking the time and effort to write and send a letter and pay for postage might actually be appreciated by the receiver. It’s a bit different from LLM agents being ordered to burn resources to send summaries of someone’s work life and congratulating them. It feels like ”hey look what can be done, can we get some more funding now”. Just because it can be done doesn’t mean it adds any good value to this world
> I don’t know anyone who doesn’t immediately throw said enveloppe, postage, and letter in the trash
If you're being accurate, the people you know are terrible.
If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.
Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened. I don't know if I'm typical, but the number of personal cards/letters I received in 2025 I could count on one hand.
> Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened. I don't know if I'm typical, but the number of personal cards/letters I received in 2025 I could count on one hand.
Yes, and this is exactly why personal cards/letters really matter: most people seldom get any, and if there is a person in your life, or in any community/project, whom you deeply admire, sending them a handwritten letter can be one of the highest gestures. It shows you took time out of your day and really cared about them.
Perhaps because disrupting things was the actual goal, rather than saving money. DOGE was highly effective in harming the entities meant to oversee Musk's companies, stealing information about union organizing and labor complaints, reducing the government's ability to collect taxes, and destroying its regulatory capacity.
Well, for Twitter it's fine. It's a private company, and the shareholders can only blame themselves for the management they put in charge.
(From a broader society point of view, I'm a bit sad that they didn't actually manage to run Twitter into the ground. I think Twitter's a net-negative for humanity. But that's a different topic. People obviously like using it.)
The things that make social media net-negative--advertising, infinite scroll, global scale--aren't part of HN. Facebook wasn't net-negative when it was just a website that a few million people used to post semi-publicly with their community.
Once content begins being served by algorithm social networks start taking a nose dive in terms of quality and user experience and they slowly spiral into lowest common denominator smut. It juices engagement and therefore advertising dollars for a time, but slowly half of users start to recognize the vapidness of it all and disengage for good.
Hacker News is paginated, but effectively infinite, too. Though I guess that's enough of a UI friction to make a difference?
How is it not global scale? Or do you mean it only targets a specific slice of your life (even if it makes little difference where on the globe you are)?
It was staffed with walking examples of the Dunning-Kruger effect. People who knew very little about the departments or the work that they were cutting but enough to assume they knew more than people who had spent their life working there. That requires a special level of arrogance. They went in with the idea that all of these people at this organization are lazy and stupid and so everything they didn’t understand must be a result of one of those things or the other.
Musk is uniquely stupid and arrogant for refusing to understand very complex systems before making radical changes to them. This behavior directly led to outages at Twitter after he bought it.
I do think Musk correctly identified excess staff and irresponsible spending, but where he screwed up was being his toxic self which drove away even more of the audience and almost every big advertiser.
Why wouldn't Peter Principle apply just because the magical financial threshold is crossed? This is Peter Principle in a textbook way, a promotion from managing companies to managing the government.
My original thesis is wrong - while Musk may have Petered up to the top, that doesn't imply his actions must also be attributed to stupidity. The error in the thesis is the conflation of stupidity with the raw brutal strength of cancer.
> Unlike the Peter principle, the promoted individuals were not particularly good at any job they previously had, so awarding them a supervisory position is a way to remove them from the productive workflow.
> An earlier formulation of this effect was known as Putt's Law (1981), credited to the pseudonymous author Archibald Putt ("Technology is dominated by two types of people, those who understand what they do not manage and those who manage what they do not understand.").
They said they only sold around 5,000 of the trucks in the quarter. I was only responding to the stuff about the Cybertruck. It seems like a material portion of its sales are to his own other company.
I like how in today’s world and especially when it comes to Musk things cannot be as simple as incompetence. It has to be some 4D chess move. Like a reverse Hanlon’s razor: Never attribute to stupidity that which might be/maybe/perhaps explained by 4D chess move. It’s like 4chan leaking all over the Internet. And Musk can keep his genius legacy alive.
Is it really 4D chess to imagine that a man under investigation by the federal government would want to benefit from being given express permission to reduce the force and efficacy of the agencies directly threatening him?
I don't think Musk having bad-faith intent shows him to be intelligent, more just greedy and selfish, but I think it's actually more irresponsible to believe that he had absolutely no idea what he was doing.
That falls under "dishonest stuff companies do all the time". Unless there are major political points to be scored by nailing him (which there may be now, don't get me wrong), this would get a slap on the wrist. The cars do drive themselves, and they have for a while. Tesla never claimed it was perfect; they only claimed it would be perfect in the near future, and Musk plausibly could have (delusionally) believed this, so there is no case (not saying he isn't dishonest, though; that's just not how the legal system works).
So I don’t think being looked at for the kind of stuff many companies do all the time explains <checks notes> infiltrating the government and personally disrupting the people investigating him, in public. If he’s worried about a financial hit, souring Tesla’s reputation as he has is obviously not worth it. If he’s worried about prosecution, surely he would be better off being nice to everyone in politics, not pissing anyone off and strongly supporting choice causes off the mainstream radar that happen to be in the interest of politicians.
So if he is doing it on principle, he just needs to be hubristic and reckless and possibly very autistic. If he is doing it to mess with the people investigating him, he needs to be outright stupid.
Hubristic and reckless (and autistic) are much, much more realistic adjectives for Musk than "outright stupid". I know a lot of people will just assert that he is stupid, but if you yourself are sufficiently intelligent and you listen to the guy talk for a long time, you can at least tell he isn't stupid. You can tell because he doesn't do the rhetorical things stupid people need to do in order to mask contradictions or logical holes in what they are saying. They always do it. Even smart people sometimes do it quite a bit, like Steven Pinker for example. Musk very rarely does it, and when he does, it's so completely obvious that you can tell he's bad at it and didn't get where he is by being good at it.
It's not 4D chess to hurt the agencies that regulate and investigate you. It's the opposite of 4D chess. There is no secret plan, no conspiracy theory, no clever chess move.
I don't think that's right - although of course we are speculating about what's happening inside the head of Musk.
Musk strikes me as a juvenile and naive man, precisely the kind of man who would take a hatchet to a complex system while believing he is competently reforming it. His experience taking over Twitter probably reinforced his belief that you can move fast and break organisations and, despite all the moaning from liberals, nothing bad will happen in the end.
So Musk is exactly the man to honestly believe in what he was doing, and he was immersed in a right wing echo chamber, which for 50 years has been talking about government waste.
Don't ascribe to malevolence what can be explained by incompetence.
The idea that he is “stupid” or “naive” while also being the world’s wealthiest man by far needs to die
What he really is is a sociopath who uses the idea of “doing good” to infiltrate systems and setup laws and legal structures that benefit him and his companies
I don’t buy any of the goody-two-shoes “for the sake of humanity” persona and neither should you. But the worst thing you can do is dismiss his sociopathy as naivete or stupidity
He did try to get out of buying it which everyone seems to have memoryholed. I doubt anything other than way too much ketamine is behind a lot of the chaotic decision making.
Again, though, do you think that there’s some concrete goal he was aiming for which he could have achieved if only he hadn’t fired and insulted them? Or do you just think that it was terribly rude and they didn’t deserve to be treated that way? I wouldn’t call the latter stupidity, especially since he was working against contemporaneous predictions that the site wouldn’t be able to function without those people.
This was years in the making. He basically made a $200 million bet on the USG, one that translated into hundreds of billions. This was all calculated, and the veneer of government inefficiency was good enough to mask his actual objectives.
I can say this confidently because that's what I would have done too, and I'm not half as smart as him (given that I haven't built a Paypal or a SpaceX myself). That's what anyone in such a privileged position would have done. The upside to doing it that way was just that much massive.
Smart doesn't work like that. I have little doubt that you are as "smart" as Elon.
Usually what people mean when they say "smart" is actually more like meaning of the word "canny," which helps explain the distinction. A canny decision is one that makes you look smart in retrospect.
To put it another way, I might climb to the top of a hill. Climbing the hill doesn't make me taller, but it does get me the benefits of being able to see everything for miles around.
Perhaps after climbing a hill/Ent I see Saruman's army marching off to war, and realize that even though I may be a halfling, right now I could say a particular thing that would be "as the falling of small stones that starts an avalanche in the mountains." This is a canny moment, and like any canny moment it is filled with surreal possibility. But it isn't because Meriadoc is a tall hobbit, and not because only a tall person could do this thing that involves seeing a great distance.
> That's what anyone in such a privileged position would have done.
That’s what anyone who’s self-centered and morally bankrupt enough would do perhaps, but no, not “anyone”. Some people are committed to being good (or at least striving towards it).
Your take strikes me as sociopathic at worst, and misguided at best. Much like musk, to your point.
There is a certain class of American that rides the knife edge between credulity and contempt in supporting and accepting the activities and intent of bad actors who pledge to get rid of the things they don't like and they people they detest. They're ever-ready to believe the barest of excuses and to hand-wave the worst excesses in this regard. Today's anti-woke are yesterday's McCarthyists, and history will note the echo.
The selfish kind. Unfortunately that seems to be the end goal of the American dream: "I got mine, fuck you." I can't tell you how many times I heard the "protect my family" argument from people I never thought would vote for that clown.
But people do come here specifically to be selfish. They like that they can be selfish here in ways that are socialized away in other countries. They like that they can even socialize their selfishness, forcing poor people to subsidize the rich.
They are typically uneducated victims of the largest and most well funded mass propaganda brainwashing campaigns in the history of mankind, to be fair. Forgive them, for they know not what they do. The perpetrators of the misinformation, however, know exactly what they’re doing.
I think this misrepresents the situation. Many of these people are well-educated and affluent. In fact, such efforts wouldn't be possible without the support of the wealthy and academic elite, including on the left. Stooge-of-the-month Ezra Klein is decried as a woke liberal by certain segments of the political sphere, and yet he's running interference against those who support forcing the affluent to give back some of their recent outsize gains (through his "abundance" tripe). It's not poor, rural red-staters listening to his message.
> Many of these people are well-educated and affluent.
That does not preclude them from being uneducated and gullible to brainwashing. In fact, there is a strong case to be made that being well-educated and affluent primes one to become more likely to be uneducated/brainwashed. When you are well-educated and affluent, the "yes men" show up and start to make you feel like you know everything, and it becomes really easy to lose the skepticism and awareness that one normally has.
> It's not poor, rural red-staters listening to his message.
Was there something to suggest that it was? I see no mention of this group anywhere.
What do you know about “racism?” “Racism” is how I’ve been treated by (some) Saudis, who see a Bangladeshi and assume negative things about me as an individual. That has nothing to do with immigration, which involves cultural transplantation caused by the mass movement of people. You can gaslight and browbeat sheltered Americans into thinking that Bangladeshi moms raise their kids the same way American moms do, but that won’t work on me lol.
Immigration policy is about culture, not “race.” I suspect the Danes would be just as upset if hundreds of thousands of West Virginians were immigrating to their cities and making the culture of Denmark more like that of West Virginia.
I don't understand how people don't get this. There's a list of such agencies being gutted, but because it's compiled by democrats, the maggats just claim it's "biased".
A strong claim is severely weakened by lack of evidence. In this case, all evidence points to the claim being untrue.
> but ultimately the impact is good or bad doesn't matter at all.
That's essentially a rewording of the above claim and again without evidence.
In fact, it's detrimental for the perpetrators of disruptive actions to attract attention to them/selves when these actions don't achieve their purported benefits.
If they wanted only to simulate activity, they'd have used ways less damaging to themselves, without inflicting damage on the system. That damage is significant enough to rule out accidental or purely PR-related motives.
I feel like a lot of the criticism the GPT-5.x models receive only applies to specific use cases. I prefer these models over Anthropic's because they are less creative and less likely to take freedoms interpreting my prompts.
Sonnet 4.5 is great for vibe coding. You can give it a relatively vague prompt and it will take the initiative to interpret it in a reasonable way. This is good for non-programmers who just want to give the model a vague idea and end up with a working, sensible product.
But I usually do not want that. I do not want the model to take liberties and be creative; I want the model to do precisely what I tell it and nothing more. In my experience, the GPT-5.x models are a better fit for that way of working.
People all over the world are already building new bridges to places like China, so even if the old ones are rebuilt, they might get substantially less use.
>Are there any good robo-vacuum cleaners that will still clean your floor if the internet is down?
It depends on what exactly you want. My Roborock can't connect to my Wi-Fi anymore for some unfathomable reason. It no longer runs automatically, and I can't edit its map or tell it where exactly to clean. I just hit the power button once a day to start it manually, and it cleans everything it can access.