A new Wayland protocol is in the works that should support screen cutout information out of the box: https://phosh.mobi/posts/xdg-cutouts/ Hopefully this will be extended to include color information whenever applicable, so that "hiding" the screen cutout (by coloring the surrounding area deep black) can also be a standard feature and maybe even be active by default.
You can't be serious. Wayland is the opposite of modular, and the concept of an extensible protocol only creates fragmentation.
Every compositor needs to implement the giant core spec, or, realistically, rely on a shared library to implement it for them. Then every compositor can propose and implement arbitrary protocols of its own, which all client applications are then also expected to support.
It's insanity. This thing is nearly two decades old, and I still have basic clipboard issues[1]. This esoteric cutouts feature has no chance of seeing stable real-world use for at least another decade.
Shh... you're not supposed to mention these things, lest you be downvoted to death.
I also have tremendous issues with Plasma. Things such as graphics glitching in the alt+tab task switcher or Firefox choking the whole system when opening a single 4k PNG image. This is pre-alpha software... So back to X11 it is. Try again in another decade or two.
The thing is that I'm not experiencing this clipboard issue on Plasma, but on a fresh installation of Void Linux with niri. There are reports of this issue all over[1][2][3], so it's clearly not an isolated problem. The frustrating thing is that I wouldn't even know which project to report it to. What a clusterfuck.
I can't go back to X11 since the community is deliberately killing it. And relying on a fork maintained by a single person is insane to me.
Far from it. The recent XLibre release[1] has a long list of bugfixes and new features.
Besides, isn't the main complaint from the Wayland folks that X11 is insecure and broken? That means there's still a lot of work to be done. They just refuse to do it.
To be fair, X11 has worked great for me for the past ~20 years, but there are obvious improvements that can be made.
No one is killing it. No one being willing to work on it is a very, very different thing, and it's very bad faith and needlessly emotional to attribute malice to a lack of support.
What's in very bad faith is twisting the words of the people who work on these projects[1], and blaming me for echoing them.
It's very clear from their actions[2][3] that they have been actively working to "kill" X11.
There are still people willing to work on it, hence the XLibre fork. The fact that most mainstream distros refuse to carry it is another sign that X11 is in fact being actively "killed".
[1] is correct, though. You don't own Xorg, nor are you entitled to make distro maintainers support it. The Steam Deck doesn't support Xorg officially, but I don't see anyone rioting in the streets. If X11 has died, then it was a Darwinian process.
Nobody is claiming ownership over Xorg. That's ridiculous. If anything, it's the people who are deliberately trying to "kill" it. Some people simply want to keep working on it, many people still want to keep using it, yet they're being forced not to by egomaniacal children.
The job of distro maintainers is to make software accessible to their users. It's not to provide support for the software, nor to fix its bugs. Choosing not to package a specific piece of software is user-hostile.
YMMV and all, but my experience is that Wayland smoothness varies considerably depending on hardware. On modernish Intel and AMD iGPUs for example I’ve not had much trouble with Wayland whereas my tower with an Nvidia 3000 series card was considerably more troublesome with it.
Generally true, though this particular case is due to a single company deciding to not play ball and generally act in a manner that's hostile to the FOSS world for self-serving reasons (Nvidia).
I don't think it's even that. These seem like bog-standard bugs related to correctly sharing graphics resources between processes and accessing them with proper mutual exclusion. Blaming NV is likely just a convenient excuse.
> my tower with an Nvidia 3000 series card was considerably more troublesome with it.
I think you're describing a driver error from before Nvidia really supported Wayland. My 3070 exhibited similar behavior but was fixed with the 555-series drivers.
The Vulkan drivers are still so-so in terms of performance, but the smoothness is now on par with my MacBook and Intel GNOME machine.
You can see the same problem in the XMPP world, with a lot of the extensions implemented only by a few applications. But at least most XMPP extensions are designed to be backwards-compatible with clients that don't support them.
Because one property doesn't guarantee the other. A modular system may imply that it can be extended. An extensible system is not necessarily modular.
Wayland, the protocol, may be extensible, but the implementations of it are monolithic. E.g. I can't use the xdg-shell implementation from KWin on Mutter, and so on. I'm stuck with whatever my compositor and applications support. This is the opposite of modularity.
So all this protocol extensibility creates in practice is fragmentation. When a compositor proposes a new protocol, it's only implemented by itself. Implementations by other compositors can take years, and implementations by client applications decades. This is why it's taken 18 years to get close to anything we can refer to as "stable".
> E.g. I can't use the xdg-shell implementation from KWin on Mutter, and so on.
Why not? It's open-source software. Depending on your architecture you may be able to reuse parts of it.
But as a more flexible choice, there is wlroots.
> and implementations by client applications decades.
Toolkits implement this stuff, so most of the time "support by client application" is a gtk/qt version bump away.
> This is why it's taken 18 years to get close to anything we can refer to as "stable".
Is it really fair to compare the first 10 years of a couple of hobby developers with the current "widespread" state of the platform? If it had been like today for 18 years and failed to improve, sure, something must be truly problematic. But there were absolutely different phases and levels of uptake of the project, so it moved at widely different speeds.
> Why not? It's open-source software. Depending on your architecture you may be able to reuse parts of it.
"The system is not modular, but you can make it so."
What a ridiculous statement.
> But as a more flexible choice, there is wlroots.
Great! How do I use wlroots as a user?
> Toolkits implement this stuff, so most of the time "support by client application" is a gtk/qt version bump away.
Ah, right. Is this why Xwayland exists, because it's so easy to do? So we can tell users that all their applications will continue to work when they switch to Wayland?
> Is it really fair to compare the first 10 years of a couple of hobby developers with the current "widespread" state of the platform?
It's not fair, you're right. I'll wait another decade before I voice my concerns again.
Why would you want to use it as a user? That makes zero sense.
> Is this why Xwayland exists, because it's so easy to do
I don't get your point. The reason it exists is backwards compatibility. There are binaries as well where changing a library is not so easy, and not every version change is equal within a toolkit.
But it's much different to go from X to Wayland than from Wayland to Wayland with one more protocol.
1440p and 2160p are a total waste of pixels when 1080p is already at the level of human visual acuity. You can argue that 1440p is a genuine (slight) improvement for super-crisp text, but not for a game. HDR and more ray tracing/path tracing, etc. are more sensible ways of pushing quality higher.
> 1440p and 2160p are a total waste of pixels when 1080p is already at the level of human visual acuity.
Wow, what a load of bullshit. I bet you also think the human eye can't see more than 30 fps?
If you're sitting 15+ feet away from your screen, yeah, you can't tell the difference. But for most people, with their eyes only being 2-3 feet away from their monitor, the difference is absolutely noticeable.
> HDR and more ray tracing/path tracing, etc. are more sensible ways of pushing quality higher.
HDR is an absolute game-changer, for sure. Ray-tracing is as well, especially once you learn to notice the artifacts created by shortcuts required to get reflections in raster-based rendering. It's like bad kerning. Something you never noticed before will suddenly stick out like a sore thumb and will bother the hell out of you.
Text rendering alone makes it worthwhile. 1080p densities are not high enough to render text accurately without artefacts. If you double the pixel density, it becomes (mostly) possible to render text weight accurately, and things like "rhythm" and "density", which real typographers have always concerned themselves with, start to become apparent.
You're probably looking up close at a small portion of the screen - you'll always be able to "see the pixels" in that situation. If you sit far back enough to keep the whole of the screen comfortably in your visual field, the argument applies.
You are absolutely wrong on this subject. Importantly, what matters is PPI, not resolution. 1080P would look like crap in a movie theater or on a 55" TV, for example, while it'll look amazing on a 7" monitor.
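To put numbers on that: PPI depends only on pixel count and physical size, so the same 1080p grid covers a huge density range. A quick sketch:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch of a panel with the given pixel dimensions and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Same 1920x1080 resolution, wildly different pixel densities:
print(f'55" TV at 1080p:      {ppi(1920, 1080, 55):.0f} PPI')  # ~40 PPI
print(f'27" monitor at 1080p: {ppi(1920, 1080, 27):.0f} PPI')  # ~82 PPI
print(f'7" screen at 1080p:   {ppi(1920, 1080, 7):.0f} PPI')   # ~315 PPI
print(f'27" monitor at 2160p: {ppi(3840, 2160, 27):.0f} PPI')  # ~163 PPI
```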
High margins are exactly what should create a strong incentive to build more capacity. But that dynamic has been tamped down so far because we're all scared of a possible AI bubble that might pop at any moment.
Strictly speaking, you don't need that much VRAM or even plain old RAM - just enough to store your context and model activations. It's just that as you run with less and less (V)RAM you'll start to bottleneck on things like SSD transfer bandwidth and your inference speed goes down to a crawl. But even that may or may not be an issue depending on your exact requirements: perhaps you don't need your answer instantly and can wait while it gets computed in the background. Or maybe you're running with the latest PCIe 5 storage which overall gives you comparable bandwidth to something like DDR3/DDR4 memory.
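To put rough numbers on that trade-off, here's a back-of-the-envelope sketch; the model size, quantization, and bandwidth figures below are illustrative assumptions, not benchmarks:

```python
def decode_tokens_per_sec(weight_bytes, link_bytes_per_sec):
    """Upper bound on autoregressive decode speed when every generated token
    must stream the full set of weights over the bottleneck link.
    Ignores compute time, KV-cache traffic, and caching effects."""
    return link_bytes_per_sec / weight_bytes

GiB = 1024**3
weights = 40 * GiB  # e.g. a ~70B-parameter model at ~4-bit quantization (assumed)

for name, bw in [
    ("PCIe 5.0 x4 NVMe SSD (~14 GB/s)", 14e9),
    ("dual-channel DDR4   (~50 GB/s)",  50e9),
    ("GPU VRAM            (~1 TB/s)",   1e12),
]:
    print(f"{name}: ~{decode_tokens_per_sec(weights, bw):.1f} tokens/s ceiling")
```

So the model still runs with barely any (V)RAM; the ceiling just drops from tens of tokens per second to well under one.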
NPU/Tensor cores are actually very useful for prompt pre-processing, or really any ML inference task that isn't strictly bandwidth limited (because you end up wasting a lot of bandwidth on padding/dequantizing data to a format that the NPU can natively work with, whereas a GPU can just do that in registers/local memory). Main issue is the limited support in current ML/AI inference frameworks.
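Roughly speaking, that's because prefill amortizes each weight read over every token in the prompt, while decode has to stream the whole model per generated token. A small roofline-style sketch (the accelerator specs are illustrative assumptions):

```python
def arithmetic_intensity(tokens, bytes_per_weight=1.0):
    """FLOPs per byte of weight traffic for a matmul over `tokens` rows,
    assuming ~2 FLOPs per weight per token and (quantized) 1-byte weights."""
    return 2 * tokens / bytes_per_weight

# Hypothetical NPU: 40 TOPS of compute, 100 GB/s of memory bandwidth.
machine_balance = 40e12 / 100e9  # 400 FLOPs/byte needed to become compute-bound

print("prefill, 2048-token prompt:", arithmetic_intensity(2048), "FLOPs/byte")  # 4096 -> compute-bound
print("decode, 1 token at a time: ", arithmetic_intensity(1), "FLOPs/byte")     # 2 -> bandwidth-bound
print("machine balance point:     ", machine_balance, "FLOPs/byte")
```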
GPU compute units are not that simple; the main difference from a CPU is that they generally use a combination of wide SIMD and wide SMT to hide latency, as opposed to the power-intensive out-of-order processing used by CPUs. Performing tasks that can't take advantage of either SIMD or SMT on GPU compute units might be a bit wasteful.
Also, you'd need to add extra hardware for various OS support functions (privilege levels, address space translation/MMU) that are currently missing from the GPU. But the idea is otherwise sound; you can think of the proposed 'Mill' CPU architecture as one variety of it.
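To make the latency-hiding point concrete, here's a Little's-law style estimate; the latency and issue-rate figures are made up for illustration:

```python
def warps_needed(memory_latency_cycles, cycles_between_issues):
    """How many independent warps a SIMT core needs in flight so it always
    has ready work while each warp waits on a pending memory load."""
    return memory_latency_cycles / cycles_between_issues

# e.g. ~400-cycle DRAM latency, one instruction issued per warp every 4 cycles:
print(warps_needed(400, 4), "warps in flight")  # ~100

# A serial task with a single thread and no SIMD-friendly data parallelism
# can't supply that concurrency, so the GPU core mostly sits stalled;
# that's exactly the case the CPU's out-of-order machinery is built for.
```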
Perhaps I should have phrased it differently. CPU and GPU cores are designed for different types of loads. The rest of your comment seems similar to what I was imagining.
Still, I don't think that enhancing the GPU cores with CPU capabilities (OOE, rings, MMU, etc from your examples) is the best idea. You may end up with the advantages of neither and the disadvantages of both. I was suggesting that you could instead have a few dedicated CPU cores distributed among the numerous GPU cores. Finding the right balance of GPU to CPU cores may be the key to achieving the best performance on such a system.
> Looking for a way to break up tasks for LLMs so that there will be multiple tasks to run concurrently would be interesting, maybe like creating one "manager" and few "delegated engineers" personalities.
This is pretty much what "agents" are for. The manager model constructs prompts and contexts that the delegated models can work on in parallel, returning results when they're done.
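A minimal sketch of that manager/worker split; `call_model` and the prompts here are placeholders for whatever model API you actually use:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call (hosted API, llama.cpp, etc.)."""
    raise NotImplementedError

def run_task(task: str) -> str:
    # "Manager" pass: split the task into independent sub-tasks, one per line.
    plan = call_model("manager", f"Split this task into independent sub-tasks:\n{task}")
    subtasks = [line for line in plan.splitlines() if line.strip()]

    # "Delegated engineer" passes: work on the sub-tasks in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(
            lambda sub: call_model("engineer", f"Complete this sub-task:\n{sub}"),
            subtasks,
        ))

    # Manager pass again: merge the partial results into a single answer.
    return call_model("manager", "Combine these results:\n" + "\n---\n".join(results))
```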
> Columnar layout is FUNDAMENTALLY BROKEN on media that doesn't have two fixed-size axes.
You can use plain old CSS columns (which don't have the automated "masonry" packing behavior of this new Grid layout, they just display content sequentially) and scroll them horizontally. But horizontal scrolling is cumbersome with most input devices, so this new "packed" columnar layout is a good way of coping with the awkwardness of vertical scrolled fixed-width lanes.
If you could predict a stock market correction before it happens, you'd be very, very rich. The fact that corrections sometimes happen does not negate the existence of market-wide expectations for any given stock.
> If you could predict a stock market correction before it happens, you'd be very, very rich.
Same goes for if you can predict the price of a stock... but analysts do it anyway and set targets for stocks.
> The fact that corrections sometimes happen does not negate the existence of market-wide expectations for any given stock.
Whether it crashes or not is part of the expectation. Regardless of what you read in articles, those fund managers often sit out situations they don't deem worth investing in.