
That study only says that most Americans think they interact with AI at least a few times a week (it doesn't say how, or whether it's intentional). It also says the vast majority feel they have little or no control over whether AI is used in their lives.

For example, someone who gets a Google search result containing an AI response is technically interacting with AI, but isn't necessarily making use of that response or even wanting to see it in the first place. Or perhaps someone suspects their insurance premiums were decided by AI (whether or not that's true). Or customer service that requires you to go through a chatbot before you get real service.


Windows also has UUIDs. E.g.:

    \\.\Volume{3558506b-6ae4-11eb-8698-806e6f6e6963}\

Which can be trivially mapped to directories for aliasing. Just like Linux.
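As a sketch of what that looks like in practice: a volume GUID path is usable anywhere a drive-letter path would be. The GUID below is the one from the example above; every real volume has its own, and `Windows\notepad.exe` is just an illustrative file.

```rust
use std::path::Path;

fn main() {
    // Hypothetical volume GUID path (taken from the example above);
    // on a real machine you'd enumerate volumes to find the right one.
    let vol = Path::new(r"\\?\Volume{3558506b-6ae4-11eb-8698-806e6f6e6963}\");

    // It composes like any other path root:
    let file = vol.join("Windows").join("notepad.exe");
    println!("{}", file.display());
}
```

On Windows you could pass `file` straight to `std::fs::File::open`, exactly as you would a `C:\...` path.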

Windows NT and UNIX are much more similar than many people realize; Windows NT just has a giant pile of DOS/Win9x compatibility baked on top, hiding how great the core kernel design actually is.

I think this article demonstrates that very well.


In the end, if you think about it, the Win32 subsystem running on top of the NT OSes is pretty much the same concept as Wine running on Unix. That's why Wine is not an emulator. And neither is XP emulating old Win32 stuff to run Win9x binaries.

Yeah, NTFS is quite capable. I mostly blame the Windows UI for being a bit too dumbed down and not advertising the capabilities well.

They're using slide rule users as a stand-in for serious mathematician as opposed to people who incidentally use mathematics. It makes some sense in historical context but becomes a bit anachronistic after the invention of electronic calculators.


^_^ sucks when you actually need to talk about emoji though :/


Stating the Unicode code points as U+1F4A9 or (D syntax) \U0001F4A9 is a reasonable workaround.
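For comparison, Rust's escape for the same code point is `\u{1F4A9}` (braced, variable length) rather than D's fixed-width `\U0001F4A9`. A minimal sketch:

```rust
fn main() {
    // U+1F4A9 (PILE OF POO) written as a Rust character escape;
    // D's \U0001F4A9 names the same code point.
    let c = '\u{1F4A9}';
    assert_eq!(c as u32, 0x1F4A9);

    // As a str, this single code point occupies four bytes of UTF-8.
    assert_eq!("\u{1F4A9}".len(), 4);

    println!("U+{:04X}", c as u32); // prints "U+1F4A9"
}
```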


We discourage posts that aren't relevant in some way to D programming.

One of the reasons I enjoy HackerNews is dang's enlightened and sensible moderation policy.


I think OP meant cases like, "I need to process a string with this emoji in D" etc


Would you ever need to talk about a specific emoji?


¯\_(ツ)_/¯


Tbh, the rights and wrongs aside, I suspect "everyone" is complaining about it because it's the easiest thing to talk about. Much like how feature discussions tend towards bikeshedding.


That's an entirely different issue. The few KB of overhead for backtrace printing and the format machinery is fixed and does not grow with the binary size. All combined, it wouldn't account for anywhere close to 1 MB, let alone hundreds of MB.
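For what it's worth, most of that fixed overhead is optional. A sketch of the usual Cargo profile knobs (settings are real, but the savings vary by project):

```toml
# Release-profile tweaks that shrink a Rust binary's fixed overhead.
[profile.release]
panic = "abort"   # drop the unwinding/backtrace machinery
strip = true      # strip symbols and debug info
lto = true        # allow more dead code to be eliminated
```

None of this changes the point above: the machinery is a constant cost, not something that scales with the amount of code.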


Tbh I really don't think it matters what the letters stand for.


You made me realize it's like NASA: a good chunk of the world knows it, but I bet most don't know what it stands for (at least outside the US I bet 99.9% don't know -- me included, haha)


North American Space Astronauts!


That sounds like they were mandated to shoehorn "AI" into their description in some way. Because it is indeed a non sequitur.


To be clear, tier 2 targets are still expected to be well supported. They just don't require CI to pass after every PR (meaning PRs aren't blocked on fixing tier 2-specific issues). However, the target is officially distributed, and target maintainers are still expected to fix any blocking issues before a release. If they can't, then it's likely the target will be demoted to tier 3.

Windows on ARM couldn't be tier 1 until recently because there were no Windows ARM GitHub runners. Now that there are, I think it's likely that it'll be promoted to tier 1.


Does that end up meaning, in practice, that stable releases are never broken, but nightly might be, for a tier 2 target?


More or less, yes. It is guaranteed that tier 2 targets at least build, so there'll be a nightly available every day, though it's possible it might have a serious bug. To be honest, that's always a risk for nightlies, even with tier 1 targets. Tests don't catch every potential problem (even if they do catch a lot), which is why there is a beta period before a release.


In my experience every developer, company, team, sub-team, etc has their own "library" of random functions, utilities, classes, etc that just end up being included into new projects sooner or later (and everyone and their dog has their own bespoke string handling libraries). Copy/pasting large chunks of code from elsewhere is also rampant.

I'm not so sure C/C++ solves the actual problem. It only sweeps it under the carpet so it's much less visible.


It definitely does solve one problem. Like it or not, you can't be hit by supply chain attacks if you don't have a supply chain.


I mirror all deps locally and only build from the mirror. It isn’t an issue. C/C++ is my dayjob


At some point you could end up mirroring a supply chain attack... xz was a pretty long game and was only found by accident, for example.


I’m sure I will.


This runs the risk of shipping C/C++ libraries with known vulnerabilities. How do you keep track of that? At least with npm / cargo / etc, updating dependencies is a single command away.


Pull, update, build?


How do you even know a dependency has an open vulnerability?


Conversely, how do you know when a dependency doesn’t have a vulnerability?


> every developer, company, team, sub-team, etc has their own "library" of random functions, utilities, classes, etc

You are right. But my conclusion is different.

If it's a stable team and people have been there for a while, then the developers know that code as well as the rest of the codebase. So, when something fails, they know how to fix it.

Bringing in generic libraries may create long call stacks of very generic code (usually templates) that is very difficult to debug, while adding a lot of functionality that is never used.

Bringing a new library into the code base needs to be a carefully thought-out decision.


> In my experience every developer, company, team, sub-team, etc has their own "library" of random functions, utilities, classes, etc that just end up being included into new projects sooner or later

Same here. And a lot of those homegrown functions, utilities and classes are actually already available, and better implemented, in the C++ Standard Library. Every C++ place I've worked had its own homegrown String class, and it was always, ALWAYS worse in all ways than std::string. Maddening. And you could never make a good business case to switch over to sanity. The homegrown functions had tendrils everywhere and many homegrown classes relied on each other, so your refactor would end up touching every file in the source tree. Nobody is going to approve that risky project. Once you start down the path of rolling your own standard library stuff, the cancer spreads through your whole codebase and becomes permanent.


Although I like std::string, for some things it becomes a little tricky with cross-platform work that involves both Linux and Windows. It can also be tricky with Unicode and lengths.

