avhception's comments

Maybe it's just that you're mostly viewing this through the LLM lens?

I remember having to fight with fglrx, AMD's proprietary Linux driver, for hours on end, just to get hardware acceleration for my desktop going! That driver was so unbearable that I bought Nvidia just because I wanted their proprietary driver. Cut the fiddling time from many hours to maybe one or two!

Nowadays, I run AMD because their open-source amdgpu driver means I just plonk the card into the system, and that's it. I've had to fiddle with the driver exactly zero times. The last time I used Nvidia is the distant past for me. So, for me, their drivers are indeed "so much better". But my use case is sysadmin work and occasional gaming through Steam / Proton. I've run LMStudio through ROCm, too, a few times. Worked fine, but I guess that's very much not representative of whatever people do with MI300 / H100.


> and occasional gaming through Steam / Proton

And how does that work on AMD? I know the Steam Deck is AMD but Valve could have tweaked the driver or proton for that particular GPU.


I've been playing lots of games on an AMD GPU (RX 7600) for about a year, and I can't remember a game that had graphical issues (e.g. driver bugs).

Something probably hasn't run at some point, but I can't remember what; it's more likely to be a Proton "issue" anyway. Your main problem will be anti-cheat in some games.

My experience has been basically fantastic and stress-free. Just check that games aren't installing some native Linux build, which are inevitably extremely out of date and probably won't run. E.g. Human Fall Flat (very old, won't run), Deus Ex: Mankind Divided (can't recall why, but I elected to install the Proton version; I think performance was poor or mouse control was funky).

I guess I don't play super-new games, so YMMV there. Quick stuff I can recall: NMS, Dark Souls 1, 2 & 3, Sekiro, Deep Rock Galactic, Halo MCC, SnowRunner & Expeditions, Euro Truck Simulator, RDR1 (afaik 2 runs fine, just not got it yet), Hardspace: Shipbreaker, V Rising, Tomb Raider remaster (the first one and the new one), Pacific Drive, Factorio, Blue Prince, Ball x Pit, Dishonored, uhhh, basically any kind of "small game" you could think of: Exapunks, Balatro, Slay the Spire, Gwent: Rogue Mage, whatever. I know there were a bunch more I've forgotten that I played this year.

I actually can't think of a game that didn't work... Oh, this is on Arch Linux; I imagine Debian etc. could have issues with an older Mesa.


Works very well for me! YMMV maybe depending on the titles you play, but that would probably be more of a Proton issue than an AMD issue, I'd guess. I'm not a huge gamer, so take my experience with a grain of salt. But I've racked up almost 300 hours of Witcher3 with the HQ patch on a 4k TV display using my self-compiled Gentoo kernel, and it worked totally fine. A few other games, too. So there's that!

Don’t know what LLM lens is. I had an ATI card. Miserable. Fglrx awful. I’ve tried various AMDs over the last 15 years. All total garbage compared to nvidia. Throughout this period was consistently informed of new OSS drivers blah blah. Linus says “fuck nvidia”. AMD still rubbish.

Finally, now I have 6x4090 on one machine. Just works. 1x5090 on other. Just works. And everyone I know prefers N to A. Drivers proprietary. Result great. GPU responds well.


Well, I don't know why it didn't work out for you. But my AMD experience has improved fundamentally since the fglrx days, to the point where I prefer AMD over Nvidia. You said you don't know why people say that AMD has improved so much, but it definitely rings true for me.

I said "LLM lens" because you were talking about hardware typically used for number crunching, not graphics displays, like the MI300. So I was suggesting that the difference between what you hear online about the driver and your own experience might come from people like me mostly talking about the 2D / 3D acceleration side of things, while the experience with ROCm and the like is probably another story altogether.


I see. I see. I got tripped up by 'LLM' since I got the GPUs for diffusion models. Anyway, the whole thing sounds like the old days when I had Ubuntu Dapper Drake running flawlessly on my laptop and everyone was telling me Linux wasn't ready: it's an artifact of the hardware and some people have great support and others don't. Glad you do.

Sometimes it's also possible to simply disconnect the hotel's SIP phone from the Ethernet jack and use that :)

> Because what this AI-generated SEO slop formed from an extremely vulnerable and honest place shows is that women’s pain is still not taken seriously.

Companies putting words in people's mouths on social media using "AI" is horrible and shouldn't be allowed.

But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.


Obviously I am putting words in the author's mouth here, so take with a grain of salt, but I think the reasoning is something like: such LLM-generated content disproportionately negatively affects women, and the fact that this got pushed through shows that they didn't take those consequences into account, e.g. by not testing what it would look like in situations like these.

> such LLM-generated content disproportionately negatively affects women,

Major citation needed


> Ahead of the International Women's Day, a UNESCO study revealed worrying tendencies in Large Language models (LLM) to produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men – four times as often by one model – and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary”, and “career”.

https://www.unesco.org/en/articles/generative-ai-unesco-stud...

> Our analysis proves that bias in LLMs is not an unintended flaw but a systematic result of their rational processing, which tends to preserve and amplify existing societal biases encoded in training data. Drawing on existentialist theory, we argue that LLM-generated bias reflects entrenched societal structures and highlights the limitations of purely technical debiasing methods.

https://arxiv.org/html/2410.19775v1

> We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.

https://aclanthology.org/2023.acl-long.84.pdf


The question is whether these LLM summaries disproportionately "impact" women, not whether LLMs describe women as more often working in domestic roles.

Then you have to do your research on whether domestic roles have an equal status to non-domestic roles, and not rest on your preconceptions.

Unfortunately I can't provide that, since I'm merely trying to come up with the reasoning of the author. If they have sources, though, that could lead to this reasoning.

> Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.

I'm actually sympathetic to your confusion. Perhaps this is semantics, but I agree with the author's (and your) assessment that this trivializes the human experience; I just don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.

--

However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can somehow absolve them of cultural awareness. I wonder how this sort of thing influences its output (and wish we didn't have to puzzle over such things).


When all one has is a hammer, everything looks like a nail.

I tried local models for general-purpose LLM tasks on my Radeon 7800 XT (20GB VRAM), and was disappointed.

But I keep thinking: It should be possible to run some kind of supercharged tab completion on there, no? I'm spending most of my time writing Ansible or in the shell, and I have a feeling that even a small local model should give me vastly more useful completion options...
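For what it's worth, llama.cpp's server exposes an /infill endpoint for exactly this kind of fill-in-the-middle completion, which is roughly the building block such a setup would need. A minimal sketch, assuming a llama.cpp server with a FIM-capable model is already listening on localhost:8080 (the URL and defaults here are assumptions, not tested against any particular model):

```python
import json
import urllib.request

LLAMA_URL = "http://127.0.0.1:8080/infill"  # assumed local llama.cpp server


def build_payload(prefix: str, suffix: str, n_predict: int = 64) -> dict:
    """Build the fill-in-the-middle request body for llama.cpp's /infill."""
    return {
        "input_prefix": prefix,   # text before the cursor
        "input_suffix": suffix,   # text after the cursor
        "n_predict": n_predict,   # cap on generated tokens
    }


def infill(prefix: str, suffix: str, n_predict: int = 64) -> str:
    """Ask the local server to complete the text between prefix and suffix."""
    data = json.dumps(build_payload(prefix, suffix, n_predict)).encode()
    req = urllib.request.Request(
        LLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]


# Hypothetical editor integration: complete the body of an Ansible task.
# print(infill("- name: Install nginx\n  ansible.builtin.apt:\n    ", "\n"))
```

Hooking that into an editor is the fiddly part, but the round trip itself is one HTTP POST, so even a small coder model should keep latency tolerable for tab completion.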


Your comment comes across as disingenuous to me. Writing it in, for example, Java would have limited it to situations where the JVM is available, which is a minuscule subset of the situations curl is used in today, especially since we're not just talking about curl the CLI tool but libcurl. I have a feeling you know that already and mostly want to troll people. And Golang is only 16 years old according to Wikipedia, by the way.


Java might not be the most popular VM on Linux, but let's talk Perl or Python. They're installed by default almost everywhere; it's probably impossible to find a useful Linux installation without these runtimes. So writing curl in Python makes perfect sense, right? It's a memory-safe language, good for handling inherently unsafe Internet data. Its startup time is minuscule compared to a typical network response. Lots of advantages. Yet curl is still written in C.

I've never used libcurl and I don't know why it's useful, so let's focus on curl. Of course, if you want a C library, you've got to write it in C; that's kind of a weird argument.

My point is, there were plenty of better options to replace C, yet people chose C for their projects and continue to do so. Rust isn't even a good option for most projects, as it's too low-level. It's a good option for the Linux kernel, but for user-space software? I'm not sure.


"[...] it's probably impossible to find a useful Linux installation without [Perl or Python]. [...]"

Oof. We seem to have very, very different definitions of both "Linux" and "useful". If all Linux installs w/o Perl or Python ceased to exist tomorrow, we'd probably enter a global crisis: industrial processes failing left and right, collapse of wide swaths of internet and telecom infrastructure, and god knows what else, from ships to cars and smartphones.

Regarding libcurl: libcurl probably represents the vast majority of curl installations; curl the CLI tool is mostly porcelain on top of libcurl. libcurl is used in _a lot_ of places, for example inside the PHP runtime, and god knows where else; there must be billions of installations as parts of other projects. It's not a weird argument: libcurl is 95% of the raison d'être for curl. If you want a curl-like tool in Python or Perl, you've got to write it in Python or Perl. Somebody probably already did, so maybe just use one of those? Instead of demanding that curl be transformed into something incompatible with its mission statement.
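(As an aside, a bare-bones curl-alike in Python really is only a few lines; the sketch below is stdlib-only and nowhere near curl's protocol coverage, which is rather the point. The helper names are mine, not from any real tool.)

```python
import sys
import urllib.request


def parse_header(arg: str) -> tuple:
    """Parse a curl-style '-H' argument like 'Accept: text/plain'."""
    name, _, value = arg.partition(":")
    return name.strip(), value.strip()


def fetch(url: str, headers: dict = None) -> bytes:
    """GET a URL and return the raw response body, like 'curl <url>'."""
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g.: python minicurl.py https://example.com
    sys.stdout.buffer.write(fetch(sys.argv[1]))
```

Twenty lines for the happy path; the other few hundred thousand lines of curl are redirects, proxies, TLS knobs, retries, and a dozen protocols, which is exactly why people embed libcurl instead.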


Basically the Sysadmin's dilemma.

Everything working fine: "What are we paying you for?"

Something broken: "What are we paying you for?"


Yeah, a couple of years ago I built a system that undergirded what was at the time a new product but which now generates significant revenue for the company. That system is so shockingly reliable that few at the company know it exists, and those who do take its reliability for granted. It's not involved in any cost or reliability fires, so people never really have to think about how impressive this little piece of software is: the things they don't need to worry about because it's chugging along, doing its job, silently recovering from connectivity issues, database maintenance, etc., without any real issue or maintenance.

It's a little bit of a tragic irony that the better a job you do, the less likely it is to be noticed. (:


Note the projects that use that software, also note metrics like API calls received, failure recoveries, uptime, etc and put that in a promo packet


Thanks, I genuinely appreciate the advice!


Maybe you need to have "scheduled downtime" when your undergirding system is down for "maintenance", and they will notice! [Half joking... probably not possible, but better to have scheduled maintenance than to do firefighting under extreme time pressure.]


I had a coworker legitimately put wait statements in his code so that later he could remove them and report the optimizations. I approved a few of them.


Gather metrics and regularly report them.


Are you implying that the "neo-KGB" never mounted a concerted effort to manipulate western public opinion through comment spam? We can debate whether that should be called a "troll army", but we're fairly certain that such efforts are made, no?


I, too, was under the impression that Kea is now mostly out and they're going the dnsmasq route. There were open issues about some basic features with Kea, too: https://github.com/opnsense/core/issues/7475


We really shouldn't let perfect be the enemy of good here. Of course they have their faults, but I'll take Valve over any of the other players in their market all day every day without even thinking twice. EDIT: You're absolutely right, is what I'm trying to say.


Can confirm: that's exactly what we do.

