No, the subreddit has applied custom css to do that. It's the mildly infuriating subreddit. There's also an image of a hair visible on widescreen monitors, to make you think there's a hair on your display.
Wait, does urllib3 not use semver? Don't remove APIs in minor releases, people. A major release doesn't have to be a problem or a major redesign; you can be on major release 400 for all I care, just don't break things in minor releases.
Lots of things aren't using semver that I always just assumed did.
As an example, I always knew urllib3 as one of the foundational packages that Requests uses. And I was curious, what versions of urllib3 does Requests pull in?
That is exactly the kind of dependency specification I would expect to see for a package that is using semver: The current version of urllib3 is 2.x, so with semver, you set up your dependencies to avoid the next major-version number (in this case, 3).
So, it seems to me that even the Requests folks assumed urllib3 was using semver.
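For illustration, here's how a semver-style cap like the one described above works mechanically. The bounds below are assumptions for the sketch, not Requests' actual metadata; real resolvers follow PEP 440, but the core idea is just tuple comparison:

```python
# Sketch: evaluating a semver-style range like ">=1.21.1,<3".
# The specific bounds are illustrative assumptions, not Requests' real pins.

def parse(version: str) -> tuple[int, ...]:
    """Turn '2.5.0' into (2, 5, 0) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def in_range(version: str, lower: str, upper_major: int) -> bool:
    """True if lower <= version < upper_major (an exclusive major-version cap)."""
    return parse(lower) <= parse(version) < (upper_major,)

# Any current or future 2.x release satisfies the cap...
assert in_range("2.5.0", "1.21.1", 3)
# ...but a hypothetical breaking 3.0.0 is excluded.
assert not in_range("3.0.0", "1.21.1", 3)
```

That exclusive upper bound is the whole point: the dependency range stays valid for 2.x releases that don't exist yet, as long as the dependency honors semver.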
I would almost expect the 3 in urllib3 to be the major version and if something needed to break it would become urllib4. Which, I know, is terribly naive of me. But that is how psycopg does it.
That was how psycopg2 did it, but now the package is psycopg (again) version 3, as it should be. Python package management has come a long way since psycopg 1 was created.
urllib2/3’s etymology is different: urllib2’s name comes from urllib in the standard library.
Because everyone is afraid of a v4, after the 2-3 debacle. And there are things which need to be culled every once in a while to keep the stdlib fresh.
Python is culling stuff all the time, but that doesn't warrant a major version jump.
You are probably right about Python's careful approach to when to ship v4, but for the wrong reasons. Python 3 was necessary not because of removed functions, but because of syntax changes, e.g. turning print from a statement into a function.
Semver works fine for SDL and has worked fine since the start of the century, despite the library's complexity and scale. A few simple rules can go a long way if you're disciplined about enforcing them.
Making you distrust updates is absolutely the correct versioning method. Pin your versions in software you care about and establish a maintenance schedule. Trusting that people don't break things unintentionally all the time is extremely naive.
It was dumb and user-hostile to remove an interface for no good reason that just makes it more work for people to update, but everyone not pinning versions needs to acknowledge that they're choosing to live dangerously.
In practice, semver is very helpful. Its major benefit is allowing packages to declare compatibility with versions of their own dependencies that don’t exist yet. (Distrusting updates and pinning versions is important and correct, but it’s not a “versioning method” that stands in contrast to semver or anything. That’s what lockfiles are for.) The pre-semver Python package ecosystem is a good example of what happens without it: fresh installs of packages break all the time because they have open-ended or overly permissive upper bounds on their dependencies. If they were to specify exact preexisting upper bounds, they’d slow down bugfixes (and in Python, where you can only have one version of a package in a given environment, new features) and add maintenance busywork; I’m not aware of any packages that choose this option in practice.
> You have released version 1.0.0 of something. Then you add a feature and fix a bug unrelated to that feature. Are you at version 1.1.0 or 1.1.1? Well, it depends on the order you added your changes, doesn't it? If you fixed the bug first you'll go from 1.0.0 to 1.0.1 to 1.1.0, and if you add the feature first you'll go from 1.0.0 to 1.1.0 to 1.1.1. And if that difference doesn't matter, then the last digit doesn't matter.
It depends on the order you released your changes, yes. If you have the option, the most useful order is to release 1.0.1 with the bugfix and 1.1.0 with both changes, but you can also choose to release 1.1.0 without the bugfix (why intentionally release a buggy version?) and then 1.1.1 (with or without 1.0.1), or just 1.1.0 with both changes. You’re correct that the starting point of the patch version within a particular minor version doesn’t matter – you could pick 0, 1, or 31415. You can also increment it by whatever you want in practice. All this flexibility is a total non-problem (let alone a problem with the versioning scheme, considering it’s flexibility that comes from which releases you even choose to cut – semver just makes the relationship between them clear), and doesn’t indicate that the patch field is meaningless in general. (Obviously, you should start at 0 and increment by 1, since that’s boring and normal.)
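The ordering described above falls straight out of component-wise numeric comparison (a toy sketch that ignores pre-release tags and build metadata):

```python
# Semver precedence compares major, minor, patch numerically, left to right.
versions = ["1.1.0", "1.0.0", "1.1.1", "1.0.1"]
parsed = sorted(tuple(int(x) for x in v.split(".")) for v in versions)
assert parsed == [(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]

# Either release order the parent mentions is internally consistent:
# 1.0.0 -> 1.0.1 -> 1.1.0, or 1.0.0 -> 1.1.0 -> 1.1.1.
# The precedence rules only constrain the relationship between releases,
# not which releases you choose to cut.
```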
Sure, it’s impossible to classify breaking changes and new features with perfect precision, and maintainers can make mistakes, but semver is pretty clearly a net positive. (It takes almost no effort and has no superior competitors, so it would be hard for it not to be.)
The article does validly point out that deprecation warnings don't work. It turns out that, these days, the only channel that reliably communicates changes is the package manager and its dependency solver, and pip requires semver or something like it for that.
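Python itself illustrates why deprecation warnings are easy to miss: since 3.7, CPython's default filters only display DeprecationWarning when it's triggered directly from __main__, so warnings raised inside an imported library are silenced unless the user opts in with -W or PYTHONWARNINGS. A minimal sketch:

```python
import warnings

def old_api():
    # A library author deprecates a function like this...
    warnings.warn("old_api is deprecated, use new_api instead",
                  DeprecationWarning, stacklevel=2)

# ...but by default the warning is invisible to most downstream users,
# because DeprecationWarning from library code is filtered out.
# Here we opt in explicitly just to show the warning exists at all.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_api()

assert caught and issubclass(caught[0].category, DeprecationWarning)
```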
I'd be more worried about the implicit power imbalance. It's not what humans can provide for each other, it's what humans can provide for a handful of ultra-wealthy oligarchs.
Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.
But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.
I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.
Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.
I first saw this about 15 years ago and it had a profound impact on me. It's stuck with me ever since.
"Don't give yourselves to these unnatural men, machine men, with machine minds and machine hearts. You are not machines, you are not cattle, you are men. You have the love of humanity in your hearts."
Yeah I know it's an unrealistic ideal but it's fun to think about.
That said, my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason gaining more money/power/status never lets up is that there's no amount of money/power/status that can satiate that fear, but somehow, naively, there's a belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.
Going off your earlier comment, what if instead of a revolution, the oligarchs just get hooked up to a simulation where they can pretend to rule over the rest of humanity forever? Or what if this already happened and we're just the peasants in the simulation?
This would make a good Black Mirror episode. The character lives in a totally dystopian world making f'd-up moral choices. Their choices make the world worse. It seems nightmarish to us, the viewers. Then towards the end they pull back: they unplug and are living in a utopia. They grab a snack, are greeted by people who love and care about them, then they plug back in and go back to being their dystopian tech-bro ideal self in their dream/ideal world.
> It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs.
You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you be independent.
LLMs are compression and prediction. The most efficient way to (lossfully) compress most things is by actually understanding them. Not saying LLMs are doing a good job of that, but that is the fundamental mechanism here.
It's the other way around. Human learning would appear to amount to very efficient compression. A world model would appear to be a particular sort of highly compressed data set that has particular properties.
This is a case where it's going to be next to impossible to provide proof that no counterexamples exist. Conversely, if what I've written there is wrong then a single counterexample will likely suffice to blow the entire thing out of the water.
No answer I give will be satisfying to you until I can come up with a rigorous mathematical definition of understanding, which is de facto solving the hard AI problem. So there's not really any point in talking about it, is there?
If you're interested in why compression is like understanding in many ways, I'd suggest reading through the wikipedia article on Kolmogorov complexity.
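The Kolmogorov-complexity intuition shows up even with an ordinary compressor: data with discoverable structure has a short description, data without structure doesn't. A toy stdlib demo (not a claim about LLMs, just the compression side of the analogy):

```python
import os
import zlib

structured = b"0123456789" * 1000   # 10,000 bytes with an obvious pattern
random_ish = os.urandom(10_000)     # 10,000 bytes with (almost) no pattern

c_structured = len(zlib.compress(structured, 9))
c_random = len(zlib.compress(random_ish, 9))

# A "model" of the structured data ("repeat this 10-byte block 1000 times")
# is tiny, so the compressed form is tiny. Random data has no description
# shorter than itself, so compression buys essentially nothing.
assert c_structured < 100
assert c_random > 9_000
```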
Yeah this is like going out and getting a new cat every day and then announcing that you’ll be spending unprecedented amounts of money on cat food this year.
Like okay, that’s probably true, but nobody — literally nobody — told you that you need to keep 400 cats in your house
ImHex will tell you if it's compressed. Do you understand data structures? Floats, all those data types?
I'd suggest looking at a format like msgpack to see what a binary data format could look like: https://msgpack.org/
Then be aware that proprietary formats are going to be a lot more complicated. Or maybe it's just zipped up json data, only way to tell is to start poking around at it.
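As a concrete starting point for that poking around, the stdlib struct module shows exactly what primitive types look like as raw bytes, and peeking at magic bytes is a cheap first test for compression. This is generic binary-format spelunking, not specific to any one proprietary format; the record layout below is an invented example:

```python
import struct
import zlib

# A 32-bit little-endian float: 1.0 is the IEEE 754 bit pattern 0x3F800000.
assert struct.pack("<f", 1.0) == b"\x00\x00\x80\x3f"

# Round-tripping a guessed record layout: uint16 id followed by a float64.
record = struct.pack("<Hd", 42, 3.5)
ident, value = struct.unpack("<Hd", record)
assert (ident, value) == (42, 3.5)

# And the "maybe it's just zipped-up JSON" check: zlib streams start
# with 0x78, so the first byte of a blob already narrows things down.
blob = zlib.compress(b'{"key": "value"}')
assert blob[0] == 0x78
```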
Python async may make certain types of IO-blocked tasks simpler, but it is not going to scale a web app. Now maybe this isn't a web app, I can't really tell. But this is not going to scale to a cluster of machines.
You need to use a distributed task queue like celery.
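To make the single-machine point concrete: asyncio overlaps IO waits within one process, which is exactly what it's good at, but nothing in it hands work to other hosts. A minimal sketch (fake_io stands in for any network or database call):

```python
import asyncio
import time

async def fake_io(n: int) -> int:
    await asyncio.sleep(0.1)   # stands in for a network/database call
    return n * 2

async def main() -> list[int]:
    # Three 0.1 s waits overlap, so the batch takes ~0.1 s, not 0.3 s.
    return list(await asyncio.gather(*(fake_io(i) for i in range(3))))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start

assert results == [0, 2, 4]
assert elapsed < 0.3   # concurrent, not sequential -- but still one process
```

Distributing that same work across a cluster is a different problem: something like Celery serializes the task, pushes it through a broker, and lets workers on other machines pull it. asyncio never leaves the process.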
Like Uber drivers using their girlfriends' ID verification because they have a criminal record, you can also just cut in some random guy and borrow his ID for another chance. There should be plenty of dudes willing to sell an ID verification for cheap in poorer countries, but there are also plenty in wealthy countries, because very few people anywhere were ever going to have a Google developer account in the first place.
Some of us started long long ago, Android 1.0 time, when Google seemed like a different company. Their first blogs didn't mention splitting your personal google account from your developer account. I never heard of anyone getting banned. Oh boy, things have changed!
Heh, I have been wondering about this for a very long time. The walled garden toll booth is too strict.
For example, the old Uber with the crazy thing they did. What if in the alternate universe they straight up got banned? That’s it. All investments would go to zero.
Isn't it simple? You do it because it makes money.
Lots of businesses can fail at any time. People still run them and work for them as long as it makes money, and WHEN it stops working, they stop that and do something else to make money. All business is ephemeral.
It doesn't matter. As long as you can spam people with crap like popups and notifications easier than on the web, we will still see all those unnecessary 'apps' that could just be a web page.
No one is going to walk away from that kind of alliance tomorrow, sure. Stuff like "we're going to remotely disable military equipment we've sold you" is going to have consequences though. It's not walking away from alliances, it's just focusing on more stable countries.
It's "block everything that depends on US clouds", which is a considerable downgrade (because you can't upload all mission parameters to an airplane without going through the cloud, and you can't use self-diagnosis features), but not entirely a kill switch. Close enough, though.