My username on here is named after my (now older) game engine, Reactor 3D.
I taught myself this stuff back when Quake 3 took over my high school. Doom got me into computers but Quake 3 got me into 3D. I didn’t quite understand the math in the books I bought but copied the code anyway.
Fast forward to my career, and it’s been a pleasant blend of web and graphics, now that WebGL/WebGPU is widely available. At my day job I’ve taught PhDs how to pack and align vertices and how to send structs to the GPU. I regret not continuing my studies and getting a PhD, but I ended up writing Reactor 3D part time for XNA on Xbox 360 and then rewriting it half a decade later to be pure OpenGL. I still struggle with the advanced concepts, but luckily there are others out there.
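To make the "pack and align" part concrete, here is a minimal sketch of what that lesson usually covers. The layout below is my own illustrative example (not from the post): under WGSL/std140-style rules a `vec3<f32>` is 16-byte aligned but only 12 bytes in size, so a trailing `f32` can ride in its padding slot.

```typescript
// Sketch: packing { color: vec3<f32>, intensity: f32 } for a GPU
// uniform buffer. The vec3 is 16-byte aligned, 12 bytes big, so the
// trailing f32 lands at byte offset 12 and the struct stays 16 bytes.
function packColorIntensity(
  color: [number, number, number],
  intensity: number
): ArrayBuffer {
  const buffer = new ArrayBuffer(16); // one 16-byte slot total
  const f32 = new Float32Array(buffer);
  f32.set(color, 0);   // vec3 occupies bytes 0..11
  f32[3] = intensity;  // f32 rides in the vec3's padding at offset 12
  return buffer;
}
```

The resulting buffer is what you would hand to something like `device.queue.writeBuffer(...)`; get the offsets wrong and the shader silently reads garbage, which is exactly why this is worth teaching.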
Fun fact: I worked with the guy who wrote XNA for Silverlight, which would eventually be used as the basis for MonoGame, so I’m like MonoGame’s great-grand-uncle half removed or something.
However, now that we have different ways of doing things, it demands a different kind of engine. The Vulkan/DX12/Metal way is the new jam.
If you could figure out a way to collect and parse the information needed to make executive decisions via LLM + tool calls, you would be a billionaire overnight. There’s a reason these roles take a human, and people with zero organizational/executive experience fail to understand just how complex they are.
Their level of expertise, access, relationships, etc. all scale with the business. If it’s big, you need someone well connected who can manage an organization of that size. IANAE, but I would imagine having access to top schools would be a big factor as well.
While it’s easy to get in and make something (it’s got all the bells and whistles), it also suffers from the monolith problem (too many features, old code, tech debt).
The asset store is gold, but their tech feels less refined. It’s leaps and bounds beyond where it was when it started, but it still has this empty feel to it without heavy script modifications.
There is the problem. The scripting side was designed around Mono and doesn’t translate as well to CoreCLR, and using the same Behavior interface gets a little more complicated.
There are times (even with my own engine) when one must let go of the old and begin anew: DX7->DX9, DX9->OpenGL, OpenGL->Vulkan, Vulkan->WebGPU.
EDIT
I was just thinking, returning to this a couple of minutes later, that if Unity wanted to prove they really care about their core, they would introduce a complete revamp of the editor like Blender did for 3.x. Give more thought to the level editors and prefab makers. Give different workflow views: Editing / Animation / Scripting / Rendering / Post.
As it stands now, it’s all just a menu item that spawns a thing in a single view, with a 1999-style property window on the side, like Visual Studio was ever cool.
That last step is nonsensical: WebGPU is a Vulkan-like shim layer (in the sense that WebGL is GLES-like) that allows you to use the native GPGPU-era APIs of your OS.
On a "proper OS", your WebGPU is translating all calls 1:1 to Vulkan, and doing so pretty cheaply. On Windows, what your browser does depends on the GPU vendor: Nvidia continues to have not-amazing Vulkan performance, even in cases where the performance should be identical to DX12; AMD does not suffer from this bug.
If you care about performance, you will call Vulkan directly and not pay for the overhead. If you care about portability and/or are compiling to a WASM target, you're pretty much restricted to WebGPU and you have to pay that penalty.
Side note: Nothing stops Windows drivers or Mesa on Linux from providing a WebGPU impl, thus browsers would not need their own shim impl on such drivers and there would be no inherent translation overhead. They just don't.
WebGPU is far from cheap and has to do a substantial amount of extra work to translate to the underlying API in a safe manner. It's not 1:1 with Vulkan and diverges in a few places. WebGPU uses automatic synchronization and must spend a decent amount of CPU time resolving barriers.
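The automatic synchronization described above amounts to per-resource state tracking: the implementation remembers each resource's last usage and must emit a barrier whenever the new usage differs. A toy sketch of the bookkeeping (all names are illustrative, not wgpu or Dawn internals):

```typescript
// Toy model of the per-resource usage tracking a WebGPU implementation
// performs to resolve barriers automatically.
type Usage = "copy-src" | "copy-dst" | "sampled" | "storage" | "render-target";

class BarrierTracker {
  private last = new Map<string, Usage>();
  barriers: string[] = [];

  // Record a use of `resource`; emit a barrier only when usage changes.
  use(resource: string, usage: Usage): void {
    const prev = this.last.get(resource);
    if (prev !== undefined && prev !== usage) {
      this.barriers.push(`${resource}: ${prev} -> ${usage}`);
    }
    this.last.set(resource, usage);
  }
}
```

Note that every `use` costs CPU work even when no barrier ends up being emitted, which is where a chunk of that per-call overhead comes from; a native Vulkan app skips all of this by placing barriers by hand.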
You can't just ship a WebGPU implementation in the driver, because the last mile of getting the <canvas> on screen is handled by the browser in entirely browser-specific ways. You'd need very tight coordination between the driver and browsers, and you still wouldn't save much, because the overhead of WebGPU isn't API translation; it's the cost of making the API safe to expose in a browser.
I wouldn't call it nonsensical to target WebGPU. If you aren't on the bleeding edge for features, its overhead is pretty low and there's value in having one perfectly-consistent API that works pretty well everywhere. (Similar to OpenGL)
I'm not knocking it, but there is no C API written verbatim. WebGL was fucky because it was a specific version of GLES that never changed, and you couldn't actually do GL extensions; it was a hybrid of 2.0 and 3.0 plus some extra non-core/ARB extensions.
WebGPU is trying not to repeat this mistake, but it isn't a 1:1 translation of Vulkan, so everyone is going to need to agree on how the C API looks, and you know damned well Google is going to fuck this up for everyone and any attempt is going to die.
The problem is the same as it was 20 years ago: there are two proprietary APIs and then there’s the “open” one.
I’m sick of having to write code that needs to know the difference. There’s only a need for a Render Pass, a Texture handle, a Shader program, and Buffer memory. The rest is implementation spaghetti.
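The four concepts listed could be the entire backend surface. A hypothetical sketch of that interface, with names invented purely for illustration (this is not any real engine's API):

```typescript
// Hypothetical minimal backend: render pass, texture handle,
// shader program, buffer memory. Everything else lives behind it.
type Handle = number;

interface RenderBackend {
  createTexture(width: number, height: number): Handle;
  createBuffer(byteLength: number): Handle;
  createShader(source: string): Handle;
  beginRenderPass(target: Handle): void;
  endRenderPass(): void;
}

// A do-nothing backend, just to show how small the surface is.
class NullBackend implements RenderBackend {
  private next = 1;
  createTexture(_w: number, _h: number): Handle { return this.next++; }
  createBuffer(_n: number): Handle { return this.next++; }
  createShader(_src: string): Handle { return this.next++; }
  beginRenderPass(_t: Handle): void {}
  endRenderPass(): void {}
}
```

Each real backend (Vulkan, DX12, Metal) would then keep its "implementation spaghetti" behind the same five calls.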
I know the point you’re making but you’re talking to the wrong person about it. I know all the history. I wish for a simpler world where a WebGPU like API exists for all platforms. I’m working on making that happen. Don’t distract.
I think the major problem with Unity is that they’re just rudderless. They continue to buy plugins and slap in random features, but it’s really just in service of more stickers on the box, not a holistic plan.
They've tried and failed to make their own games and they just can't do it. That means they don't have the internal drive to push a new design forward. They don't know what it takes to make a game. They just listen to what people ask for in a vacuum and ship that.
A lot of talented people at Unity but I don't expect a big change any time soon.
The talent jumped ship years ago. The core engine’s graphics team is all that’s really left.
They also hired Jim Whitehurst as CEO after the previous CEO crapped the bed. Then Jim left as he just didn’t understand the business (he’s probably the one responsible for the “just grab it from the store” attitude). Now they have this stinking pile of legacy they can’t get rid of.
Yeah, I started a project in Unity a while ago, and tried out Godot in the meantime.
Unity really feels like there should be a single correct way to do any specific thing you want, but in practice it’s missing <thing> for your use case, so you have to work around it (and repeat this for basically every Unity feature).
Godot on the other hand, really feels like you are being handed meaningful simple building blocks to make whatever you want.
Bingo. They don’t actually understand their users. Instead they’re the Roblox of game making: just provide the capability and let devs figure it out (and then sell it as a script).
Interesting that the "NTSC" look you describe is essentially rounded versions of the coefficients quoted in the comment mentioning ppm2pgm. I don't know the lineage of the values you used, of course, but I found it interesting nonetheless. I imagine we'll never know, but it would be cool to be able to trace the path that led to their formula, as well as the path to you arriving at yours.
The NTSC color coefficients are the grandfather of all luminance coefficients.
It had to be precisely defined because of the requirements of backwards-compatible color transmission (YIQ is the common abbreviation for the NTSC color space, I being ~reddish and Q being ~blueish). Basically, they treated B&W (technically monochrome) pictures the way B&W film and video tubes treated them: great in green, average in red, and poor in blue.
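For reference, the NTSC / Rec. 601 luma weights in question are Y = 0.299 R + 0.587 G + 0.114 B, and a one-liner makes the green bias visible:

```typescript
// NTSC (Rec. 601) luma: heavy on green, light on blue, mirroring how
// monochrome film and tubes responded to those colors.
function luma601(r: number, g: number, b: number): number {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}
```

Pure green (0, 255, 0) maps to roughly 150 out of 255, while pure blue (0, 0, 255) maps to roughly 29, which is exactly the "great in green, poor in blue" behavior described above.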
A bit unrelated: pre-color-transition, the makeup used was actually slightly greenish too (which reads nicely in monochrome).
Cool. I could have been clearer in my post; as I understand it, actual NTSC circuitry used different coefficients for RGBx and RGBy values, and I didn't take time to look up the official standard. My specific pondering was based on an assumption that neither the ppm2pgm formula nor the parent's "NTSC" formula was an exact equivalent of NTSC, and my "ADHD" thoughts wondered about the provenance of how each poster came to use their respective approximation. As I write this, I realize my actual ponderings are less interesting than the responses they generated, so thanks everyone for your insightful replies.
I was actually researching why PAL YUV has the same(-ish) coefficients, while forgetting that PAL is essentially a refinement of the NTSC color standard (PAL stands for phase-alternating line, which solved much of the color drift NTSC suffered early in its life).
Awwww, my teenage years reborn. I was just down the road from Westwood Village Drive, Tysons Corner in Reston, learning kung fu, math, hacking, and how to talk to girls on AIM. I mastered two of those things.
Keen to give this a shot as I had stacks and stacks of AOL CDs and floppies in my youth.
I suspect you don’t. I suspect a couple of beelinks could run your whole business (minus the GPU needs).