Flipping through the source code is like a time machine tour of tech's evolution over the past 50 years. It made me wonder: will our 2025 code look as ancient by 2075?
That's interesting to consider. Some of the GNU code is getting quite old and looking through it is a blast from the past. I'm frankly amazed that it continues to work so well. I suspect there is a relatively small group of GNU hackers out there rocking gray and white beards who are silently powering the foundations of our modern world, and I worry what's going to happen when they start retiring. Maybe we'll get Rust rewrites of everything and a new generation will take over, but frankly I'm pretty worried about it.
I think to most (90+%?) software developers out there in the world, Assembler might as well be hieroglyphics. They/we can guess at the concepts involved, of course, but actually being able to read the code end to end and have a mental model of what is happening is not really going to happen. Not without some sort of Rosetta Stone. (Comments :) )
I think 2075 developers will feel the same way about modern Java, C#, TypeScript, etc.
They will think of themselves as software developers but they won't be writing code the same way, they'll be giving guided instructions to much higher level tools (perhaps AIs that themselves have a provenance back to modern LLMs)
Just as today, there will still be those who need to write low-level critical code. There are still lots of people today who have to write Assembler, though they end up expressing it via C or Rust. And there will still be people working on AI technology. But even those will be built off other AIs.
In all their cities they could see buildings that they did not know how to build. And before that, public services would have broken down. It would have become impossible to find people who knew how to repair your heated floor (if you were rich), etc. The city of Rome declined from 1 million people to something like 20,000. In the late 500s, Pope Gregory the Great thought that the world was ending because of all the trouble (including vicious barbarian invasions). Monks (and presumably anyone educated) had access to a lot of ancient texts; only some were lost in the West. I think they would have had a distinct sense that the past was more advanced.
I wrote this post in December 2023, and rediscovered it today. Seems like it hasn't dated yet, even though the AI landscape evolved a lot in the dev tooling space.
It's great to hear that Zig covered both cases. However, I'd still prefer the opposite behavior: a safe (without truncation) default `bcrypt()` and the unsafe function with the explicit name `bcryptWithTruncation()`.
My opinion is based on the assumption that the majority of the users will go with the `bcrypt()` option. Having AI "helpers" might make this statistic even worse.
Do you happen to know the Zig team's reasoning behind this design choice? I'm really curious.
Note that the "safe" version makes very bespoke choices: it prehashes only overlong password, and does so with hmac-sha512 (which it b64-encodes). So it would very much be incompatible with other bcrypt implementations when outside of the "correct space".
These choices are documented in the function's docstring, but they are not obvious, nor do they seem to be encoded in a custom version identifier.
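The prehashing idea described above can be sketched in Python using only the standard library. To be clear, the exact HMAC key/message split below is an assumption made for illustration (the HMAC is keyed with the password itself); it is not a compatible reimplementation of Zig's stdlib behavior:

```python
import base64
import hashlib
import hmac

BCRYPT_MAX_LEN = 72  # bcrypt silently ignores bytes past this limit


def prehash_if_overlong(password: bytes) -> bytes:
    """Pass short passwords through unchanged; compress overlong ones
    with HMAC-SHA512 and base64-encode the digest.

    ASSUMPTION: keying the HMAC with the password itself is an
    illustrative choice; Zig's actual key/message split may differ.
    """
    if len(password) <= BCRYPT_MAX_LEN:
        return password
    digest = hmac.new(password, password, hashlib.sha512).digest()
    # base64 of a 64-byte digest is 88 chars, so the bcrypt core may
    # still truncate it at 72 -- but that prefix keeps high entropy
    # and contains no NUL bytes, unlike the raw password tail.
    return base64.b64encode(digest)


# Without prehashing, these two passwords collide under plain 72-byte
# truncation; with it, the bytes past position 72 still matter.
a = prehash_if_overlong(b"x" * 72 + b"tail-A")
b = prehash_if_overlong(b"x" * 72 + b"tail-B")
assert a[:BCRYPT_MAX_LEN] != b[:BCRYPT_MAX_LEN]
```

This is exactly why such a scheme breaks interoperability: another bcrypt implementation fed the raw password would compute a different hash for any input over 72 bytes.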
Sounds like it would then make sense to hide crypto primitives and their footguns under a "hazmat" or "danger" namespace, sorta like webcrypto or libsodium.
So something like `crypto.danger.bcrypt` and `crypto.bcryptWithTruncation`
`bcrypt()` is bcrypt as implemented everywhere else, and is required for interoperability with other implementations. If you don't truncate, this is not `bcrypt` any more.
`bcryptWithTruncation()` is great for applications entirely written in Zig, but can create hashes that would not verify with other implementations.
The documentation of these functions is very explicit about the difference.
The verification function includes a `silently_truncate_password` option that is also pretty explicit.
To be fair, I can't think of a single context where I would want to truncate a password before hashing, so interoperability with other systems isn't worth letting a dangerous edge case slip by, in my opinion. I'd rather have the system break for the handful of users with a 72+ char password than overlook a potential critical security issue.
You might need it if you’re porting / reimplementing a system and have to be compatible with an existing base of hashed truncated passwords.
I would agree that it should not just be called “bcrypt” though, likely no function of this module should be, they should either explain their risks or clarify their safety.
Or possibly only a version which fails if passed more than 72 bytes or any NUL.
I like following the OpenAI vs. NYT case, as it's a great example of the controversial situation:
- OpenAI created their models by scraping the internet while disregarding copyrights, licenses, etc., or looking for legal loopholes
- by doing that, OpenAI (alongside others) developed a new progressive tool that is shaping the world, and seems to be the next “internet”-like (impact-wise) thing
- NYT is not happy about that, as their content is their main asset
- less democratic countries can apply even less ethical practices for data mining, as copyright laws don't work there, so one might claim it's a question of national defense, considering that AI is actively used in miltech these days
- while the ethical part is less controversial (imho, as I'm with NYT there), the legal one is more complicated: the laws might simply say nothing about this use case (think GPL vs. AGPL license), so the world might need new ones.
Since those are mostly old movies, my immediate thought was: "maybe it's a creative way to open a new income stream for hard-to-sell-otherwise assets?". If a decent enough number of users watch them, it could bring some cash to the publisher, couldn't it?
That's my thinking too. In fact, I was in a mood to watch OG film noirs, but between 3 streaming services, the only movie available was Sunset Blvd. Also failed to find any of the 1950s sci-fi movies I went looking for.
I've always found this approach of reducing the number of employees unwise from the company perspective (though pretty good for the employees).
While the unsatisfied employees are the target, my observations indicate that a high percentage of active, skilled people are willing to take this offer, as they are sure they will find a new place within a reasonable time, so it's basically free money. And those are the people the company should try hardest to keep.
While the "give me a task with the perfect description, and I will do it" folks will stay until they are kicked out, as, usually, they are not up to taking the initiative.
That's why I've seen companies changing the rules mid-process: "well, it's an offer, but your manager needs to approve it first", and other tricks to be able to reject it for top performers. Needless to say, it leads to bad morale.
However, the companies I'm mentioning had way fewer employees than the federal workforce, so the chances are that with that size it's impossible to do it the "right" way.
Yes, but I believe this is the intent. It's not a matter of it being good for the business when the CEO of said business intentionally wants it to fail.
On a larger level, if the active & skilled people are taking the deal, doesn’t it mean they’re likely moving on to something that is a better fit and thus good for society anyway?
Parts of two groups will leave: the actually skilled, hardworking people who know they can get another job, and the people who are retiring anyway.
Part of me thinks that just ends up with a higher percentage of worse workers.
Obviously many hard-working people will stay, but you can be pretty certain that the people with little skill and value, ironically, are going to make up the highest percentage of those who stay, as they know it's not going to be easy to get another job.
Good, we want the competent and skilled people within government to go to private enterprise in big numbers, so they can amplify their skills and effect in private enterprise. Then we can reduce the number of government agencies, because they are always less effective than private enterprises.
The problem is when the lazy and incompetent bureaucrats stick around, and we taxpayers are paying for their pensions while getting very little in return.
Honestly, I'm not sure what all the screaming and crying on Hacker News is about; hackers are all about creating efficiency with unorthodox insights and skills. Pretty sure no hackers have said "why yes, I'd love to pay more in taxes for the bureaucrats to retire in style after doing very little".
> because they are always less effective than private enterprises.
I think this is one of those ideas that has been spouted for so long that we don't really stop to think about whether it is true. Private and public entities (and employees) certainly have different incentives, but they also have different mandates, and I've certainly known plenty of inefficient private enterprises and efficient public ones. Do you have any way of verifying or proving this idea that you can share?
> I think this is one of those ideas that has been spouted for so long that we don't really stop to think about whether it is true.
A lot of people with considerable expertise in the area have looked into whether it's true. They almost always come to the same conclusion: overall it's a wash.
If you look closer, some industries (like, say, your corner coffee shop in a big city) are clearly better off private, and others, like roads and firefighting, don't work when privately held.
In general, a competitive market will outdo public ownership; free markets that aren't competitive are worse than public ownership.
But there are always exceptions. A fine example is the health system. I don't know why, but as the US demonstrates, even with a competitive health market, public ownership outperforms a purely private system by a fairly large margin.
- I'm glad to see this, as it might be an easily accessible alternative to Facebook events, which I tend to miss as I check my FB only once in a while
- on the other hand, each new Apple release adds apps that might kill some small start-ups offering similar services for a small fee. Having a free alternative on your phone out of the box, with most of your contacts using it, will lead to a decent number of subscription cancellations. A good lesson to build something that is harder to reproduce, though...
Not necessarily. I used to use Apple Reminders, initially because it was free; I found the value of a good TODOs app by using it extensively.
Then, once I loved using todo apps, I migrated to Todoist and started paying for one.
Apple Reminders is the reason I pay for the Todoist app; it helped me learn the habit. It's the same reason I moved from the free iMovie app to a paid video editing app.
Same reason I moved from the free Apple Notes to Bear Notes + Muse App (both paid subscriptions).
In a way, provided other apps keep accelerating and moving ahead of Apple, Apple's free apps kind of end up working as free trial sessions showcasing the utility of a good app.
Also, it's a good thing Apple ends up commoditizing free entry-level apps; there are a billion pieces of software out there to build for different industries, and it's the only way the price of software will fall, which means more money in our pockets to spend on other things. So it's fine, just as long as people don't forget to innovate; that's the way a free market should be.
What is anti-competitive, though, is stuff like Apple Music vs. Spotify, where Spotify has to pay a 30% cut to Apple while Apple Music doesn't have to pay anything. But as long as Apple is commoditizing entry-level apps, that can be a good thing for other fast-moving startups, as they can show better value to customers who have already tried out free Apple apps and see the value of that software.
I would never have paid for all those apps each month, were it not for Apple's free apps that helped me see the value in them.
That's a great way to view it, I haven't looked at it from this angle before, thanks.
> What is anti-competitive, though, is stuff like Apple Music vs. Spotify, where Spotify has to pay a 30% cut to Apple while Apple Music doesn't have to pay anything.
Yeah, that's why a monopoly is rarely a good idea for customers, in my opinion.
To point 2: who cares? As a consumer I prefer (1) a team that will not run out of money and sell my data, (2) a free service, built-in and well integrated, and (3) not yet another subscription to forget to cancel
I see your point.
There are a few objective points, imho, that I'd consider, though:
> a team that will not run out of money and sell my data
While "not run out of money" is true, the "sell my data" part is not given. For example, in 2023 Google sold its domains business (https://domains.google/) to Squarespace.
Also, while not directly selling your data, they might sell the outcomes of your data in the form of ads or AI models, for example. I believe that can objectively bother some people.
Another point: this is the way to build a monopoly, or global dominance of the market, and then dictate the rules. I see that stories about Big Tech monopolies' controversial moves are often quite popular on HN, as those situations resonate with many tech enthusiasts.
As for the rest of the point, I agree with you that a free, high-quality, decent service is a benefit for us consumers over another subscription. I still feel sorry for small bootstrapped services, but that's my subjective feeling, I'm aware of that.
I'd say not as of today's state of AI tools, but it's difficult to predict the future.
So far, I can see many excited non-tech people who can build simple things or demos. But the real complexity starts behind that, once the solution needs to be deployed, maintained, extended with new features, bugs have to be fixed, etc. That's when it gets tricky.
I did a short experiment by trying to build an app with Cursor in a stack and domain I know nothing about. I got it to the first stage, and it was cool. But the app kept crashing once in a while with memory issues, and my AI friend kept coming up with solutions that didn't help but made the code more and more overengineered and harder to navigate. I'd feel sorry for those who'd need to maintain tools like this at stages like the one I described. Maybe that's the state of future start-ups out there?
And, btw, great infographics in the post.