Almonds are a pretty cherry-picked example here, being notorious for their high water use. Notably, we're not betting an entire economy and pouring increasing resources into almond production, either. Your example would be even more extreme if you chose crops like the corn used to feed cattle: feeding cows alone requires 21.2 trillion gallons per year in the US.
The people advocating for sustainable usage of natural resources have already been comparing the utility of different types of agriculture for years.
Comparatively, tofu is efficient to produce in terms of land use, greenhouse gas emissions, and water use, and can be made shelf-stable.
>Almonds are pretty cherry picked here as notorious for their high water use.
If water use were such a dire issue that we needed to start cutting down on heavy uses of it, then we should absolutely cherry-pick the biggest uses and start there. (Or we should just apply a Pigouvian tax across all water use, which would naturally hit the biggest consumers.)
Yes, that's roughly what I said in my post. If we're doing a controlled economy and triaging for the health of the ecosystem, we'd start with feed for cattle, and almonds wouldn't be much further down on the list.
The contention with AI water use is that something like this is currently happening as local water supplies are being diverted for data-centers.
People have been sounding the alarm about excessive water diverted to almond farming for many years though, so that doesn't really help the counter-argument.
Aren't Californian almonds like 80% of the world's production?
Are US AI data centers producing 80% of the world's IT?
I ask in earnest; I think that would already make it more apples-to-apples.
Also, if you ask me personally, I'd rather have almonds than cloud AI compute. Imagine a future 100 years from now: we killed the almonds, never to be enjoyed again by future generations... or people don't have cloud AI compute. It's personal, but I'd be more sad that I'd never get to experience the taste of an almond and all the cuisine that comes with it.
> Is the US AI data-centers producing 80% of the world's IT
You've misread it. It's not compared to AI datacenters, it's every type of datacenter, for all types of computing.
In the future scenario you've laid out it wouldn't be cloud AI compute. You wouldn't be able to use HN or send email or pay with a credit card or play video games or stream video.
AI has way more utility than you are claiming and less utility than Sam Altman and the market would like us to believe. It’s okay to have a nuanced take.
I'm more unhappy than happy, as there are plenty of points about the very real bad side of AI that are hurt by such delusional and/or disingenuous arguments.
That is, the topic is not one where I have already picked a side that I'd like to win by any means necessary. It's one where I think there are legitimate tradeoffs, and I want the strongest arguments on both sides to be heard so we get the best possible policies in the end.
If AI is so useful, we should have a ton of data showing an increase in productivity across many fields. GDP should be skyrocketing. We haven’t seen any of this, and every time a study is conducted, it’s determined that AI is modestly useful at best.
Well, I don't like marzipan, so both are useless? Or maybe different people find uses/utility in different things; what is trash for one person can be life-saving for another, or just "better than not having it" (like you and marzipan, it seems).
OK, in that case you don't need to pick on water in particular. If it has no utility at all, then literally any resource use is too much, so why bother insisting that water specifically is the problem? It's pretty energy-intensive, for example.
AI has no utility _for you_ because you live in this bubble where you are so rabidly against it you will never allow yourself to acknowledge it has non-zero utility.
What does it mean to “use” water? In agriculture and in data centers my understanding is that water will go back to the sky and then rain down again. It’s not gone, so at most we’re losing the energy cost to process that water.
The problem is that you take the water from the ground and let it evaporate, and then it returns to... well, to various places, including the ground. But the deeper you take the water from (drinking water can't be taken from the surface, and for technical reasons it's drinking-quality water that gets used, too), the more time it takes to replenish the aquifer - up to thousands of years!
Of course surface water availability can also be a serious problem.
No, it's largely the same situation, I think. I was drawing a distinction between agricultural use and some of the heavier industrial uses where the water is polluted or otherwise rendered permanently unfit for other uses.
Other people might have other preferences. Maybe we could have a price system where people can express their preferences by paying for things with money, providing more money to the product which is in greater demand?
Sure... except some people / companies have so much more money that they can demand impractical things and pay above-market rates for them, leaving everyone else scrambling to live day-to-day with the distorted market.
Note that <html> and <body> auto-close and don't need to be terminated.
Also, wrapping the head content (meta, title, etc.) in an actual <head></head> is optional.
You also don't need the quotes as long as the attribute value doesn't contain spaces or the like; <html lang=en> is OK.
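To illustrate (the title and text are just placeholders), the following complete document is valid HTML5 - the parser supplies the head and body elements, and none of the omitted end tags are required:

    <!DOCTYPE html>
    <html lang=en>
    <!-- no <head>: the parser opens one for the metadata below -->
    <meta charset=utf-8>
    <title>Tiny page</title>
    <!-- no <body> either: it's implied by the first piece of content -->
    <p>Hello there.
    <p>Neither paragraph needs a closing tag, and neither do body or html.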
(kind of pointless as the average website fetches a bazillion bytes of javascript for every page load nowadays, but sometimes slimming things down as much as possible can be fun and satisfying)
This kind of thing will always just feel shoddy to me. It is not much work to properly close a tag. The number of bytes saved is negligible compared to basically any other aspect of a website. Avoiding unneeded div spam alone would save more. Or, for example, making sure the CSS is not bloated. And of course not downloading 3 MB of JS.
What this achieves is making the syntax more irregular and harder to parse. I wish all these tolerances didn't exist in HTML5 and browsers simply showed an error instead of being lenient. It would greatly simplify browser code and the HTML spec.
Implicit elements and end tags have been a part of HTML since the very beginning. They introduce zero ambiguity to the language, they’re very widely used, and any parser incapable of handling them violates the spec and would be incapable of handling piles of real‐world strict, standards‐compliant HTML.
> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.
Well, for machines parsing it, yes, but for humans writing and reading it they are helpful. For example, if you have
<p> foo
<p> bar
and change it to
<div> foo
<div> bar
suddenly you've got a syntax error (or some quirks mode rendering with nested divs).
The "redundancy" of closing the tags acts basically like a checksum protecting against the "background radiation" of human editing.
And if you're writing raw HTML without an editor that can autocomplete the closing tags then you're doing it wrong anyway.
Yes, that used to be common, and yes, it's a useful backwards-compatibility / newbie-friendly feature of the language, but that doesn't mean you should use it if you know what you're doing.
It sounds like you're headed towards XHTML. The rise and fall of XHTML is well documented and you can binge the whole thing if you're so inclined.
But my summary is that the reason it didn't work is that strict document specs are too strict for humans. And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
The merits and drawbacks of XHTML have already been discussed elsewhere in the thread, and I am well aware of them.
> And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
Yes, my point is that there is no reason to still write "invalid" code just because it's supported for backwards compatibility reasons. It sounds like you ignored 90% of my comment, or perhaps you replied to the wrong guy?
I'm a pedantic stickler for HTML validity, but close tags on <p> and <li> are optional by spec. Close tags for <br>, <img>, and <hr> are prohibited. XML-like self-closing trailing slashes explicitly have no meaning in HTML.
Close tags for <script> are required. But when people start treating it like XML, they write <script src="…" />. That fails, because the script element requires closure, and that slash has no meaning in HTML.
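For illustration (app.js is a made-up filename), compare these two fragments. In the first, the stray slash is simply ignored, so the script element stays open and whatever markup follows it gets consumed as script text; the second is the correct form:

    <!-- broken: the "/" does nothing, the element never closes here,
         and the markup after it is swallowed as script content -->
    <script src="app.js" />
    <p>This paragraph never renders.</p>

    <!-- correct: script always needs an explicit end tag -->
    <script src="app.js"></script>
    <p>This paragraph renders normally.</p>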
I think validity matters, but you have to measure validity according to the actual spec, not what you wish it was, or should have been. There's no substitute for actually knowing the real rules.
Are you misunderstanding on purpose? I am aware they are optional. I am arguing that there is no reason to omit them from your HTML. Whitespace is (mostly) optional in C; does that mean it's a good idea to omit it from your programs? Of course a br tag needs no closing tag, because there is no content inside it. How exactly is that an argument for omitting the closing p tag? The XML standard has no relevance to the current discussion, because I'm not arguing for "starting to treat it like XML".
I'm beginning to think I'm misunderstanding, but it's not on purpose.
Including closing tags as a general rule might make readers think that they can rely on their presence. Also, in some cases they are prohibited. So you can't achieve a simple evenly applied rule anyway.
Well, just because something is allowed by the syntax does not mean it's a good idea, that's why pretty much every language has linters.
And I do think there's an evenly applied rule, namely: always explicitly close all non-void elements. There are only 14 void elements anyway, so it's not too much to expect readers to know them. In your own words "there's no substitute for actually knowing the real rules".
I mean, your approach requires memorizing for which 15 elements the closing tag can be omitted anyway - otherwise you'll mentally parse the document wrong (thinking a br tag needs to be closed is just as likely as thinking p tags can be nested).
The risk that somebody might be expecting a closing tag for an hr element seems minuscule and is a small price to pay for conveniences such as (as I explained above) being able to find and replace a p tag or a li tag to a div tag.
I don't believe there are any contexts where <li> is valid that <div> would also be valid.
I'm not opposed to closing <li> tags as a general practice. But I don't think it provides as much benefit as you're implying. Valid HTML has a number of special rules like this. Like different content parsing rules for <textarea> and <script>. Like "foreign content".
If you try to write lint-passing HTML in the hopes that you could change <li> to <div> easily, you still have to contend with the fact that such a change cannot be valid, except possibly as a direct descendant of <template>.
Again, you're focusing on a pointless detail. Sure, I made a mistake in offhandedly using li as an example. Why do you choose to ignore the actually valid p example though? Seems like you're more interested in demonstrating your knowledge of HTML parsing (great job, proud of ya) than anything else. Either way, you've given zero examples of benefits of not doing things the sensible way that most people would expect.
IMO, all of those make logical sense. If you’re inserting a line break or literal line, it can be thought of as a 1-dimensional object, which cannot enclose anything. If you want another one, insert another one.
In contrast, paragraphs and lists do enclose content, so IMO they should have clear delineations - if nothing else, to make visually understanding the code more clear.
I'm also sure that someone will now reference another HTML element I didn't think about that breaks my analogy.
I didn't have a problem with XHTML back in the day; it took a while to unlearn it; I would instinctively close those tags: <br/>, etc.
It was actually the XHTML 2.0 specification [1], which discarded backwards compatibility with HTML 4, that was the straw that broke the camel's back. No more forms as we knew them, for example; we were supposed to use XForms.
That's when WHATWG was formed and broke with the W3C and created HTML5.
XHTML 2.0 had a bunch of good ideas and a lot of them got "backported" into HTML 5 over the years.
XHTML 2.0 didn't even really discard backwards-compatibility that much: it had its compatibility story baked in with XML Namespaces. You could embed XHTML 1.0 in an XHTML 2.0 document just as you can still embed SVG or MathML in HTML 5. XForms was expected to take a few more years and people were expecting to still embed XHTML 1.0 forms for a while into XHTML 2.0's life.
At least from my outside observer perspective, the formation of WHATWG was more a proxy war between the view of the web as a document platform versus the view of the web as an app platform. XHTML 2.0 wanted a stronger document-oriented web.
(Also, XForms had some good ideas, too. Some of what people want in "forms helpers" when they ask for something like HTMX to be standardized in browsers was part of XForms, such as JS-less fetch/XHR with in-place refresh for form submits. Some of what HTML 5 slowly added in terms of INPUT tag validation is also sort of a "backport" from XForms, albeit with no dependency on XSD.)
XHTML in practice was too strict and tended to break a few other things (by design) for better or worse, so nobody used it...
That said, actually writing HTML that can be parsed via an XML parser is generally a good, neighborly thing to do, as it allows for easier scraping and parsing through browsers and non-browser applications alike. For that matter, I will also add additional data-* attributes to elements just to make testing (and scraping) easier.
Yeah, I remember when I was at school and first learning HTML and this kind of stuff. When I stumbled upon XHTML, I right away adapted my approach to validate my pages as valid XHTML. Guess I was always on this side of things. Maybe machine empathy? Or also human empathy, because someone needs to write those parsers and the logic to process this stuff.
oh man, I wish XHTML had won the war. But so many people (and CMSes) were creating dodgy markup that simply rendered yellow screens of doom, that no-one wanted it :(
i'm glad it never caught on. the case sensitivity (especially for css), having to remember the xmlns namespace URI in the root element, CDATA sections for inline scripts, and insane ideas from companies about extending it further with more xml namespaced elements... it was madness.
The fact XHTML didn't gain traction is a mistake we've been paying off for decades.
Browser engines could've been simpler; web development tools could've been more robust and powerful much earlier; we would be able to rely on XSLT and invent other ways of processing and consuming web content; we would have proper XHTML modules, instead of the half-baked Web Components we have today. Etc.
Instead, we got standards built on poorly specified conventions, and we still have to rely on 3rd-party frameworks to build anything beyond a toy web site.
Stricter web documents wouldn't have fixed all our problems, but they would have certainly made a big impact for the better.
And to add:
Yes, there were some initial usability quirks, but those could've been ironed out over time. Trading the potential of a strict markup standard for what we have today was a colossal mistake.
There's no way it could have gained traction. Consider two browsers. One follows the spec explicitly, and one goes into "best-effort" mode on encountering invalid markup. End users aren't going to care about the philosophical reasoning for why Browser A doesn't show them their school dance recital schedule.
Consider JSON and CSV. Both have formal specs. But in the wild, most parsers are more lenient than the spec.
Which is also largely what happened: HTML 5 is in some ways that "best-effort" mode, standardized by a different standards body to route around XHTML's philosophies.
Yeah, this is it. We can debate what would theoretically be nicer until the cows come home, but there's a kind of real-world game theory that leads to browsers doing their best to parse all kinds of slop as well as they can, which then removes the incentive for developers and tooling to produce byte-perfect output.
It had too much unnecessary metadata, yes, but case insensitivity is always the wrong way to do things in programming (e.g. case-insensitive file system paths). The only reason you'd want it is for real-world data like people's names and addresses. There's no reason you'd mix the case of your CSS classes anyway, and if you want that, why not also automatically match camelCase with snake_case with kebab-case?
> It would greatly simplify browser code and HTML spec.
I doubt it would make a dent - e.g. in the "skipping <head>" case, you'd be replacing the error recovery mechanism of "jump to the next insertion mode" with "display an error", but a) you'd still need the code path to handle it, b) now you're in the business of producing good error messages, which is notoriously difficult.

Something that would actually make the parser a lot simpler is removing document.write, which has been obsolete ever since the introduction of the DOM and whose main remaining real-world use case seems to be ad delivery. (If it's not clear why this would help, consider that document.write can write scripts that call document.write, etc.)
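As a toy illustration of the problem (contrived, but it's the shape of the thing): the parser has to pause mid-document, run the script, splice whatever it writes back into the very input stream it is still tokenizing, and then potentially repeat that for the script it just wrote:

    <script>
      // The parser stops here, executes this, and feeds the written markup
      // back into its own input stream...
      document.write('<script>document.write("<p>hello</p>")<\/script>');
      // ...and the script it wrote does the same thing again.
    </script>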
I mean, I am obviously talking about a hypothetical scenario, a somewhat better timeline/universe. In such a scenario, the shoddy practice of not properly closing tags and leaning on lenient browser parsing and sophisticated fallbacks would never have taken hold, and most of the currently "valid" websites that rely on it would not have been created, because as soon as someone tried to create them, the browsers would have said no. Those people would then have revised their code and ended up with clean, easier-to-parse code/documents, and we wouldn't have all these edge and special cases in our standards.
Also obviously that's unfortunately not the case today in our real world. Doesn't mean I cannot wish things were different.
I agree for sure, but that's a problem with the spec, not the website. If there are multiple ways of doing something, you might as well do the minimal one. The parser will always have to be able to handle all the edge cases no matter what anyway.
You might want to always consistently terminate all tags and such for aesthetic or human-centered reasons (reduced cognitive load, easier scanning) though; I'd accept that.
<html>, <head> and <body> start and end tags are all optional. In practice, you shouldn’t omit the <html> start tag because of the lang attribute, but the others never need any attributes. (If you’re putting attributes or classes on the body element, consider whether the html element is more appropriate.) It’s a long time since I wrote <head>, </head>, <body>, </body> or </html>.
`<thead>` and `<tfoot>`, too, if they're needed. I try to use all the free stuff that HTML gives you without needing to reach for JS. It's a surprising amount. Coupled with CSS, you can get pretty far without needing anything else. Even just having `<template>` with minimal JS enables a ton of 'interactivity', as sketched below.
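As a rough illustration (the ids and strings here are made up for the example), a <template> plus a few lines of script is enough to stamp out new list items without any framework:

    <ul id=log></ul>

    <template id=log-row>
      <li class=entry></li>
    </template>

    <script>
      // Clone the template's inert content, fill in the text, and append it.
      const tpl = document.getElementById('log-row');
      const list = document.getElementById('log');
      function addEntry(text) {
        const row = tpl.content.cloneNode(true); // deep copy of the fragment
        row.querySelector('li').textContent = text;
        list.append(row);
      }
      addEntry('first entry');
      addEntry('second entry');
    </script>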
If you're at that point, the logical step would be to just support replying to the HTTP request with a "content-type: application/wasm" and skip the initial html step entirely.
The 4-paragraph business case was useful for creating friction, which meant that if you couldn't be bothered to write 4 paragraphs you very likely didn't need the computer upgrade in the first place.
This might have been a genuinely useful system, something which broke down with the existence of LLMs.
The only definitively non-renewable resource is time. Time is often spent like a currency, whose monetary instrument is some tangible proxy of how much time elapsed. Verbosity was an excellent proxy, at least prior to the advent of generative AI. As you said, the reason Bob needs to write 4 paragraphs to get a new PC is to prove that he spent the requisite time for that computer, and is thus serious about the request. It’s the same reason management consultants and investment bankers spend 80+ hours a week working on enormous slide decks that only ever get skimmed by their clients: it proves to the clients that the firm spent time on them, and is thus serious about the case/deal. It’s also the same reason a concise thank-you note “thanks for the invite! we had a blast!” or a concise condolence note “very sorry for your loss” get a lot less well-received than a couple verbose paragraphs on how great the event was or how much the deceased will be missed, even if all that extra verbiage confers absolutely nothing beyond the core sentiment. (The very best notes, of course, use their extra words to convey something personally meaningful beyond “thanks” or “sorry.”)
Gen-AI completely negates meaningless verbosity as a proxy of time spent. It will be interesting to see what emerges as a new proxy, since time-as-currency is extremely engrained into the fabric of human social interactions.
There's an important asymmetry here: it takes a lot of time to weave an intricate pattern, but much less time to assess and even appreciate it. The sender / suitor pays significantly more than the receiver / decider.
There are some parallels to that in compression and cryptography, but they are rather far-fetched.
This is the sort of workplace philosophising that I hate the most. Employees aren't children. They don't need to have artificial bullshit put up in between them and what they need, the person approving just needs to actually pay attention.
If someone wants a new computer they should just have to say why. And if it's a good reason, give it to them. If not, don't. Managers have to manage. They have to do their jobs. I'm a manager and I do my job by listening to the people I manage. I don't put them through humiliation rituals to get new equipment.
People want a new computer because new hire Peter got a new one.
People want a new computer because they just had lunch with a friend who works at a different company and got a new one, and they need to one-up him next time they go to lunch.
That is why I am not going to just give people computers because they ask. The worst crybabies come back because they „spilled coffee” on a perfectly fine 2-year-old laptop.
Yes and your job as a manager is to determine whether or not the reason is valid. And it'll be easier to do that if you ask for a 1 sentence explanation instead of 4 paragraphs.
Wooo, I used to think this was how managers work and that it was just... inevitable. I'm so glad to actually be a manager now, because no, it's not. You don't call people who spill coffee crybabies (have you never spilled anything?). This is a bad manager.
I can say that when I wanted to move my desk from one place to another, I had to write up the "business justification" (I was already working from both offices on different days and still am; it was a change on paper),
so I'm sure there are large corps that do this for everything - probably ones where you're not asking your manager, but asking finance or IT for everything.
The problem is, I'm a verbose writer and can trivially churn out 4 paragraphs - another person is going to struggle. The friction is targeting the wrong area: this is a 15 minute break for me, and an hour long nightmare for my dyslexic co-worker.
Social media will give you a good idea what sort of person enjoys writing 4 paragraphs when something goes wrong; do you really want to incentivize that?
Perhaps ideally we'd change English to count the "first" entry in a sequence as the "zeroth" item, but the path dependency and the effort required to do that is rather large to say the least.
At least we're not stuck with the Roman "inclusive counting" system that included one extra number in ranges* so that e.g. weeks have "8" days and Sunday is two days before Monday since Monday is itself included in the count.
Hmm, "en 8" makes sense to me in that you're using it to reference the next Whateverday that is at least 8 days apart from now.
If we're on a Tuesday, and I say we're meeting Wednesday in eight, that Wednesday is indeed 8 days away.
Now I'm fascinated by this explanation, which covers the use of 15 as well. I'd always thought of it as an approximation for a half month, which is roughly 15 days, but also two weeks.
To partially answer the other Latin languages, Portuguese also uses "quinze dias" (fifteen days) to mean two weeks. But I don't think there is an equivalent of the "en huit". We'd use "na quarta-feira seguinte" which is equivalent to "le mercredi suivant".
Music only settled on 12 equal tones after a lot of music theory and a lot of compromise. Early instruments often picked a scale and stuck with it, and even if they could produce different scales, early music stuck to a single scale without accidentals for long stretches. Many of these only had 5 or 6 notes, but at the time and place these names were settling down, 7-note scales were common, so we have the 8th note being the doubling of the 1st.
Most beginners still start out thinking in one scale at a time (except perhaps Guitar, which sorta has its own system that's more efficient for playing basic rock). So thinking about music as having 7 notes over a base "tonic" note, plus some allowed modifications to those notes, is still a very useful model.
The problem is that these names percolated down to the intervals. It is silly that a "second" is an interval of length 1. One octave is an 8th, but two octaves is a 15th. Very annoying. However, it still makes sense to number them based on the scale, rather than half-steps: every scale contains one of every interval over the tonic, and you have a few choices, like "minor thirds vs. major thirds" (or what should be "minor seconds vs. major seconds"). It's a lot less obvious that you should* only include either a "fourth" (minor 3rd) or a "fifth" (major 3rd), but not both. I think we got here because we started by referring to notes by where they appear in the scale ("the third note"), and only later started thinking more in terms of intervals, and we wanted "a third over the tonic" to be the same as the third note in the scale. In this case it would have been nice if both started at zero, but that would have been amazing foresight from early music theorists.
* Of course you can do whatever you want -- if it sounds good, do it. But most of the point of these terms (and music theory in general) is communicating with other musicians. Musicians think in scales because not doing so generally just does not sound good. If your song uses a scale that includes both the minor and major third, that's an unusual choice, and unusual choices requiring unusual syntax is a good thing, as it highlights it to other musicians.
> At least we're not stuck with the Roman "inclusive counting" system that included one extra number in ranges* so that e.g. weeks have "8" days and Sunday is two days before Monday since Monday is itself included in the count.
Yes, we are. C gives pointers one past the end of an array meaningful semantics.
That's in the standard. You can compare them and operate on them but not de-reference them.
Amusingly, you're not allowed to go one off the other end, before the beginning, of a C or C++ array. (Although Numerical Recipes in C did it to mimic FORTRAN indices.) So reverse iterators in C++ are not like forward iterators: they're off by 1.
Note that 'first' and 'second' are not etymologically related to one or two, but to 'foremost'. Therefore, it would make sense to use this sequence of ordinals:
In terms of another thread the item is the "rail" between the "fence posts". The address of the 'first' item starts at 0, but it isn't complete until you've reached the 1.
Where is the first item? Slot 0. How much space does one item take up* (ignoring administrative overheads)? The first and only item takes up 1 space.
This article has a strong ChatGPT smell. Things like "in the world of", "let's dive into", the bullet points, "conclusion" section, etc. Anyone else have the same feeling?
Is there a list of phrases and words to avoid, to not be accused of using AI? It's getting kind of ridiculous what people identify an "AI smell". I understand if the word "delve" shows up five times in as many paragraphs, I guess, but just having a "conclusion" section is a smell now? Using the word "innovative" is a smell?
I feel awfully sorry for kids in school these days. Teachers must think everything they write is AI, considering they're still learning to write effectively and probably like to use bullet points, popular phrases like "dive into", and structured layouts that include introduction and conclusion sections.
I don't think it's any particular word or phrase that makes it seem like AI but instead the overall feel of the writing. I can't quite describe it, but it feels like it's been sandpapered of any emotion or author's voice and just feels banal. Compare the wording and voice in this post versus one of the author's earlier ones (https://experience.prfalken.dev/english/exercism/) and I think you'll see what I mean. Some of this, especially the 2nd sentence, just reads with the standard "wall of text without substance voice" that I've personally come to associate with AI.
As another comment (https://news.ycombinator.com/item?id=43105143) notes, some of the author's earlier blog posts use a different style of punctuation so I'm willing to bet that they might be using AI to help them write or reformat some of their ideas. I don't think there's anything wrong with that but without some re-edits to the AI text it will take on that distinctly AI tone.
Kids who are still learning how to write still have a tone/voice/style that comes across in their writing and I think that's the particular distinction being made here.
Sure, but the structure seems quite ChatGPT-ish as well, e.g. the bullet points and the choice of sections. You wouldn't get that just by faithfully translating a French source text into English.
You might also note that in their first blog post they use the French-language convention of putting a space before exclamation marks, while in this latest post they use the English convention of no space.
>Now I can update this blog and push to github, instant deploy !
The sad thing is ChatGPT didn't invent that style. That was just the way clear, concise writing by skilled writers was at the time they trained the model. So now if you are actually a skilled writer who tries to convey ideas in a clear, concise way, you will appear robotic :(
Certainly I've seen elements of this style in pre-ChatGPT writing. (Of course, it didn't just invent the style.) But I disagree strongly that it's clear, concise or skillful. The way that bullet-point lists are used here is highly distracting and frankly counterproductive, and a "conclusion" hardly seems necessary for a piece that presents the rules for a simple game. Entire sentences like "Let’s dive into the rules and strategies of this captivating game." are pure fluff here, too.
Yep. Read the first few paragraphs and immediately noticed the stink. Using it as an assistant is fine, but you really shouldn't leave what it outputs untouched, especially when it's this obvious!
I wrote the article in a much simpler, "French to English" way, and asked the LLM to format it in a better English style. Looking at these comments, I don't think I'll do it again. But it's also the only article that got this many reads and comments. What should I conclude?
I got that feeling because of adjectives like “innovative”. Thought that it might be impolite to imply that in the comments, and was relieved that someone already did.
The premise is straight out of Simon Levene's short film from 2005, "Tube Poker".
The point allocations, as described in the article, appear at around 03:45.
FFS. I read it and learned something. Stop jousting with windmills just because they look like propellers on warplanes coming for you. Sometimes they are just windmills.
"Learned" what, exactly? Anyone could make up a game like this. And it really does read to me (as it apparently did to GP) like something even an LLM could make up nowadays.
Agree -- especially if this was a post about something a real person thought of, but edited with AI (the author is in France, so it seems reasonable to use AI for editing; also it's a small blog that has been around for a while and not a content farm).
For some reason I think I would find it less valuable if the idea itself came from an AI, too.
> We try to hide some of the differences between arches when reasonably feasible. e.g. Newer architectures no longer provide an open syscall, but do provide openat. We will still expose a sys_open helper by default that calls into openat instead.
Sometimes you actually want to make sure that the exact syscall is called; e.g. you're writing a little program protected by strict seccomp rules. If the layer can magically call some other syscall under the hood this won't work anymore.
Glibc definitely does this transparent mapping as well. Calling
int fd = open(<path>, O_RDONLY)
yields
openat(AT_FDCWD, <path>, O_RDONLY)
when running through strace.
This really surprised me when I was digging into Linux tracing technology and noticed no `open` syscalls on my running system. It was all `openat`. I don't know when this transition happened, but I totally missed it.
This is more than 4 times the water used by all data centers in the US combined, counting both cooling and the water used to generate their electricity.
What has more utility: Californian almonds, or all IT infrastructure in the US times 4?