kstrauser's comments | Hacker News

Among the portion of Internet users who regularly pay for and depend on hosted services, I bet that percentage is much, much higher.

Ah, source of the famous guru meditation error!

While I sympathize with the intent of the law, this is a great example of why it's dumb. There's no possible way you could make that ring, in a reasonably ring-sized form factor, with today's manufacturing processes in such a way that an end user could replace it.

If this law pushes back against the idea that it's ok to make endless tech products which are essentially future rubbish as soon as you buy them, then I think that's a good thing. Perhaps products like this just shouldn't exist until we have better ways of dealing with the remains.

The problem is that it makes it impossible to have a version 0 to iterate on until a whole lot of other industries have advanced. Imagine the situation of in-ear hearing aids: they shouldn't be allowed to exist until they're perfect, unless we're happy telling deaf people they have to wear much larger than necessary devices and advertise their disability.

I'm glad we're reducing e-waste. I'm not thrilled about the idea of saying you can't make a thing until 100% of the bugs are worked out, meaning you can't have a beta version for research and fundraising, meaning you can't just conjure the perfect version into existence.


"Invisible In-the-Canal" hearing aids are battery replaceable. That argument just won't fly.

1: https://assets-ae.rt.demant.com/-/media/project/retail/audik...


That's hyperbole and I think you know it. I'm pretty sure they explicitly exclude medical devices.

It's not hyperbole at all.

Fortunately, your link basically says it doesn't apply to something you wear on your hands or arms:

> By way of derogation from paragraph 1, the following products incorporating portable batteries may be designed in such a way as to make the battery removable and replaceable only by independent professionals:

> (a) appliances specifically designed to operate primarily in an environment that is regularly subject to splashing water, water streams or water immersion, and that are intended to be washable or rinseable;

But the only mention of "medical" comes right after it, and doesn't include hearing aids, future smart glasses, etc.:

> (b) professional medical imaging and radiotherapy devices, as defined in Article 2, point (1), of Regulation (EU) 2017/745, and in vitro diagnostic medical devices, as defined in Article 2, point (2), of Regulation (EU) 2017/746.

So ironically, the law allows disposable "junk devices" people are complaining about here, but doesn't allow factory-only serviceable hearing aids. How 'bout that? We can buy our smart rings and throw them away, but hearing aids will have to remain giant hunks of heavy plastic, or at least the models purchasable by average people who can't fly out of the EU to buy the good ones.

Edit: It's easy to downvote. I cited the relevant law. If I'm wrong, cite other law that explains why.


Hearing aids have had replaceable batteries since they were invented basically. I still remember my grandma 20 years ago fiddling with the small batteries, so that really is not a problem.

> Edit: It's easy to downvote. I cited the relevant law. If I'm wrong, cite other law that explains why.

> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

https://news.ycombinator.com/newsguidelines.html


If you want more freedom to design medical devices for people where there is an actual need, it would easily be done by expanding the exception for medical devices that already exists in the law.

If you think people should be able to sell unsustainable and mostly superfluous electronics because any improvements there might eventually trickle down to hearing aids, your argument is basically "we should accept the millions of tonnes of unnecessary e-waste in order to get slightly smaller hearing aids", which I think many reasonable people would disagree with.


If the battery lasts for two years, it's exceeding the useful life of many other products already, some of which have a higher environmental cost for manufacturing and disposal.

The law has chosen poor proxies for lifespan and impact.


Yes, other things cause e-waste. Sometimes worse.

That's not a good justification for more e-waste.


It's a ring. It's a tiny amount of waste.

So are plastic straws, and we know what happened with those.

The problem with plastic straws was disposing of them properly. For a piece of jewelry, I doubt many people would throw it away on the side of the road. A ring that lasts for years is different from a disposable product that people may use for a couple of minutes.

Not when millions of people buy it

It's still a tiny amount of waste for those millions of people amortized over its lifespan.

Products are supposed to last two years at the very least in the EU (local laws may be stricter, but not less so). If your product dies before that time, the customer will cite the warranty, and there you go. This device is likely one of the many 'designed to last a little bit more than two years', with the emphasis on 'little bit'. It appears to be a perfect example of planned obsolescence.

Who gets to choose what products are future rubbish?

Even if you think this product is a waste of resources, why is THIS waste of resources something we should stop, but not other, bigger wastes? Should we outlaw flying somewhere when you could take a train? The fuel spent on a short flight wastes way more resources and damages the environment much more than this smart ring does. If we are willing to ban this piece of tech because it is a waste, couldn't the same arguments be made about a short range flight?


There are already several existing and proposed bans on short haul flights when train routes exist. [1, 2]

[1] https://en.wikipedia.org/wiki/Short-haul_flight_ban

[2] https://www.forbes.com/sites/alexledsom/2024/03/18/spain-sho...



Sure, thanks for bringing it up. Short-range flights should have a higher threshold for permitted use in service of the environment.

Please, ask more questions.


Yep. There's some strong "How dare they interfere with Thneed production!" energy.

If their official sizing kit 3D models are accurate (they'd better be!), there's exactly enough space for an SR721W watch/hearing-aid silver oxide coin cell. And those aren't even the tiniest of standardized replaceable batteries.

1: https://imgur.com/a/yupC9lN


You're too generous. I feel like the entire production run of this ring could be equivalent to a single discarded washing machine. This law is hamfisted.

Perhaps the ring need not exist.

Maybe it's the ring that is dumb?

> In the 2000's, politics interfered and browser vendors removed plug-in support, instead preferring their own walled gardens and restricted sandboxes

That's one way to say it. The more common way was that users got tired of crappy plugins crashing their browsers, and browser devs got tired of endless complaints from their users.

It wasn't "politics" of any sort that made browsers sandbox everything. It was the insane number of crashes, out-of-memories, pegged CPUs, and security vulnerabilities that pushed things over the edge. You can only sit through so many dozens of Adobe 0-days before it starts to grate.


The "walled gardens" he referred to are in fact based on open standards and open source, while the Applet runtime is not.

Not all of Java is open source. The TCK, the testing suite for standard compliance, for instance, is proprietary, and only organizations with Oracle's blessing can gain access. AdoptOpenJDK was only granted access after they stopped distributing another Java runtime, OpenJ9.


Correct me if I'm wrong, but during this timeframe (circa 2005), Java was not open source at all. OpenJDK was announced in 2006 and its first release was in 2008, by which time the days of Java in the browser were more or less over.

ActiveX was hell for security.

ActiveX was its own special kind of terrible for many reasons, but so were Java, Flash, and Silverlight. At least ActiveX didn't hide the fact you were about to grant arbitrary code execution to a website, because you might as well have assumed that the second these plugins were loaded.

The only advantage to Java applets I can think of is that they froze the browser so it could no longer be hacked.

The Java applet system was designed better than ActiveX but in practice I've always found it to be so much worse of an end user experience. This probably had to do with the fact most ActiveX components were rather small integrations rather than (badly fitted) full-page UIs.


Flash is still a big loss imho; the ecosystem of games, movies, and demonstration thingies was amazing, and they were accessible for many people to create. Unlike Java applets, which slowed the main browser UI thread to a crawl while loading (if they loaded at all, which they usually didn't), Flash didn't have such slowdowns.

One exception is early 2000s Runescape: that was Java in browser but always loaded, no gray screen and hanging browser. They knew what they were doing.


Many of the old games and movies still play back well with Ruffle installed (https://ruffle.rs/). Newgrounds embeds it by default for old interactive flash media that they couldn't convert directly to video.

It's not a perfect fit, but it works. The speed of Ruffle loading on a page is similar to that of Flash initializing, so you can arguably still make flash websites and animations to get the old look and feel if you stick to the Ruffle compatibility range. The half-to-one-second page freeze that was the norm now feels wrong, though, so maybe it's not the best idea to put Flash components everywhere like we used to do.

Runescape proved that Java could be a pretty decent system, but so many inexperienced/bad Java developers killed the ecosystem. The same is true on the backend, where Java still suffers from the reputation the Java 7 monolithic mega projects left behind.


It's good that we have the runtime to run old Flash games. What we lost is an extremely easy environment for authoring/creating them. Nothing has come even close since Flash. Not just games, but any kind of interaction and animation on the web.

Where did the tools go?

Can a person not run Flash authoring tools with an era-appropriate operating system in a VM or something?


Having to run it in a VM already makes it less approachable

That's the only way, I guess. What I meant was that with the death of Flash, nothing appeared to offer the same tools for any other web technology.

I think perhaps what was lost is mostly this: Macromedia. They had a knack for making content creation simple. Flash was just one of the results of this: It let people create seemingly-performant, potentially-interactive content that ran almost universally on the end-user computers of the time -- and do it with relative ease because the creation tools existed and were approachable.

Macromedia also provided direction, focus, and marketing; more of the things that allowed Flash to reach saturation.

Someone could certainly come up with an open JS stack that accomplishes some of the same things in a browser on a modern pocket supercomputer. And countless people certainly have.

But without forces like marketing to drive cohesion, simplicity, and adoption then none of them can reach similar saturation.


In my experience, most of the more important Java-on-the-web stuff was Java Web Start as opposed to applets. And Java Web Start was all kinds of bad. About the only remotely good thing I could say is that it has a sandbox. Which protected no one from anything, by design, because it was the app’s choice whether to use it. And Web Start apps also often included native code, too, so they weren’t even portable.

So much JWS PTSD. Industrial automation got a giant dose of it and those things somehow weren’t even portable to different versions of internet explorer.

Exactly.

Java was so buggy and had so many security issues about 20 years ago that my local authorities issued a security advisory to not install it at all on end-user/home computers. That finally forced the hand of some banks to stop using it for online banking apps.

Flash also had a long run of security issues.


In the 2000s, my bank was acquired by some bigger bank from another country. Their long-standing, well-working, and fast banking application was replaced with a very dysfunctional Java applet thing. I was using Linux at the time, and IIRC it either barely worked or didn't work at all. I phoned the bank, and they told me about a secret alternate 'mobile' url that had a proper working service. I used that for a while before ultimately switching to another bank. The bank sent apology letters to customers and waived some fees as they saw many of them leave. It really made me wake up to the fact that if a company can make blunders this visible, what else is going on there, and also how vulnerable a position the customer is in.

On the other hand, NASA in the past had some really great Java applets to play with technical concepts and get updated diagrams, animations, graphs, etc.


I worked for a large financial institution in the early 2010s.

They ran Windows XP, IE 8, and they stuck with a 3-4 year old JRE to support one piece of shit line of business app that was used only by about 100 (out of 50,000) users internally.

That institution had endpoints popped by drive-by exploit kits dropping banking trojans like Zeus daily.


> banks to stop using it for online banking apps

I never understood why so many banks flocked to building their online banking in applets when it wasn't like you needed anything more advanced than HTML to view balances and make transactions.


Java did many things very right. It's a really fast language. It's memory safe. It could run anywhere. It had well-thought-out namespacing at a time when namespacing was a concept most people barely knew they needed. It had an advanced security model.

It was a very reasonable bet at the time imo.


Because they’ve hired a bunch of Java devs that don’t know anything outside of Java?

Amusingly, we see the repeat of this in "desktop" apps that are just web technologies in a browser, wasting CPU time and RAM for "ease" of development. (I don't think it's easier at all - a mess of JS callbacks makes it difficult to see the initiator of anything).

> Amusingly, we see the repeat of this in "desktop" apps that are just web technologies in a browser, wasting CPU time and RAM for "ease" of development.

Web is chosen because it is the fastest way to hit all platforms, not because it's a skill issue.

> a mess of JS callbacks makes it difficult to see the initiator of anything

Async/await is available in most browsers since 2017, what year are you from?


I think being the fastest way to hit all platforms was the initial idea, but it has meant a lack of skill in developers who don't know each platform, so you end up with a mess of apps that attempt to (badly) replicate native elements like menus.

You are right about async/await. I am old! (But I still find the development of software in a large JS system unmanageable versus old C++ development approaches).


Java Servlets and JSPs output HTML. I've been building a blog with them for over 15 years.

I'm getting the impression you're conveniently ignoring how piss poor HTML/AJAX/JS capabilities were back then, or even how slow internet speeds were.

Applets could do things that JS could not. Some bank applets did client-side crypto with keys that were on the device. Good luck doing that in JS back then. My bank's applet could cope with connection losses, so I could queue a payment while dialup did its thing.


Yeah, a totally mind-boggling statement, almost completely devoid of reality. I wasn't even tired of the crashes; using them was just a totally awful experience in every way. They took forever to load, were clunky to use, and were downright ugly, because the UI had nothing to do with what you usually got to use, and was a lot worse. The idea was good on paper, but the implementation sucked.

Everyone, well almost everyone apparently, was relieved we didn't have to deal with any of that anymore.


Even Flash wasn't as bad as Java applets, and that's saying something.

Flash was great when it came to experiences. Ignoring melting CPU, crashes, loading times.

For the Flash vs iPhone case, it was indeed mostly politics. People were using Flash and other plugins in websites because there was no other alternative, say, to add a video player or an animation. The iPhone was released in 2007 and the App Store in 2008. The iPhone and iPad did not support the then-popular Flash in their browsers. The web experience was limited and broken. HTML5 was first announced in 2008 but would be under development for many years; it was not standardized yet, and browser support was limited. Web apps were not a thing without Flash. The only alternative for users was the App Store, the ultimate walled garden. There were native apps for everything, even for the simplest things. The Flash ecosystem was the biggest competitor and threat to the App Store at that moment. Finally, in 2010, Steve Jobs addressed the Flash issue and openly declared they would never support it. iPhone users stopped complaining, and in 2011 Adobe stopped the development of mobile plugins.

Adobe was in a unique position to dominate the apps era, but they failed spectacularly. They could have implemented payment/monetization options for their ecosystem, to build their own walled garden. Plugins were slow but this was mostly due to hardware at the time. This changed rapidly in the following years, but without control of the hardware, they had already lost the market.


That is almost entirely backwards.

> For Flash vs iPhone case, it was indeed mostly politics.

It was politics in the sense that Flash was one of the worst causes of instability in Safari on OS X, was terrible at managing performance, and was a big drain on battery life, all of which were deal breakers on the iPhone. This is fairly well documented.

> iPhone was released in 2007 and app store in 2008. iPhone and iPad did not support then popular Flash in their browsers.

There were very good reasons for that.

> Web apps were not a thing without Flash.

That is entirely, demonstrably false. There were plenty of web apps, and they were actually the recommended (and indeed the only one) way of getting apps onto iPhones before they scrambled to release the App Store.

> Flash ecosystem was the biggest competitor and threat for the App Store at that moment.

How could it be a competitor if it was not supported?

> iPhone users stopped complaining

It was not iPhone users who were complaining. It was Android users explaining to us how prehistoric iPhones were for not supporting Flash. We were perfectly happy with our apps.

> and in 2011 Adobe stopped the development of mobile plugins.

Yeah. Without ever leaving beta status. Because it was unstable, had terrible performance, and drained batteries. Just what Jobs claimed as reasons not to support it.

> Adobe was in a unique position to dominate the apps era, but they failed spectacularly.

That much is true.

> Plugins were slow but this was mostly due to hardware at the time.

Then, how could native apps have much better performance on the same hardware, on both Android and iOS?


> Then, how could native apps have much better performance on the same hardware, on both Android and iOS?

Web engines were honestly not great back then. WebKit was ok but JavaScriptCore was very slow, and of course that’s what iOS, Android, and BB10 were all running on that slow hardware. I have distinct (bad) memories that even “GPU-accelerated” CSS animations were barely 15fps, while native apps reliably got 60fps unless they really messed up. That’s on top of the infamous 300ms issue, where every tap took 300ms to fire off because it was waiting to see if you were trying to double-tap.

So I really think some of the blame is still shared with Apple, although it’s hard to say if that’s because of any malicious intent to prop up the App Store, or just because they were under pressure to build out the iOS platform that there wasn’t enough time to optimise. I suspect it was both.


I will never forget the hubbub around the discovery that everything you typed on Android went to a root shell. "What should I do?" ... "Reboot." (phone reboots)

The best thing Jobs ever did for tech was forcing the whole industry to advance HTML to where it could replace Flash, and killing the market for proprietary browser content plugins. I don't want to imagine what the web would be like today if Flash had won, and the whole web was a loader for one closed-source, junky plugin.

Applets also had no view-source.

Spiritually, the web ought to be more than an application development platform. We haven't been doing great about that (with heavily compiled JS bundles), but there are still a lot of extensions that many users take for granted. I'm using a continual wordcount extension (50 words so far), and Dark Reader right now.

Applets are the native-app paradigm, where what the app-maker writes is what you get, never a drop more. It's not great. The internet, the land of protocols, deserved better. It is so interesting because it is better.


Don’t worry, they’re trying to sneak back in with WASM and drawing everything to canvas.

At least with WASM I'm not stuck using Javascript whether I like it or not. Yes, transpiling to Javascript is a thing, but it's not too much better, since transpiled code isn't much more readable than WASM (see also ClojureScript; CoffeeScript isn't too bad though, but it's almost equivalent to JS).

With v-dom and bundles, we have lost a lot of the societal wins of view-source already. Extensions still work but user agency is more limited.

The view you're expressing here is one that seems resoundingly common. It's what I heard from Flutter and flutter-web: my experience as a dev is what matters. What I can make is what matters.

That's true, sure! But there's a balance. Society matters too. What agency users have, what their user agency allows them to do, how they can circuit bend your website for themselves also matters. How we judge the scales here is a factor.

I am so, so excited for wasm. Particularly wasm-components, where there is more than a huge mega-bundle of compiled code, where there are intelligible code boundaries and calls across modules. That, I want to hope, will be a semi-legible WASM. But a lot of wasm today feels like it really slides back down the ladder into this rebuffing form of development that doesn't recognize any of how the web was better because it was more than the synthetic product it created, how the web was better for its protocols, legibility, and user agency. I hope we don't slide back into a new Flash or Applet era, where the user & legibility & agency are given no regard, where developer egocentricity is the only guiding principle.


I wish there was a setting in major browsers to disable WASM or at least ask to enable per site

I would attribute this much more to Mobile Safari saying "no", which killed off plugins, especially Flash. Java Applets were essentially slow Flash from a user's perspective.

I don't think Safari mattered much. Java was still used for things that wouldn't work on phones without massive redesigns anyway.

I doubt you'd have been able to bootstrap Runescape in any form, even rewritten in native code, on the first iPhone to support apps. Applets worked fine on desktops and tablets which was what they were designed for.

Browser vendors killed the API because when they looked at crashes, freezes, and performance opportunities, the Flash/Java/etc. API kept standing out. Multithreaded rendering became practical only after the old extension model was refactored, and even then browsers were held down by the terrible plugin implementations they needed to work around.


> I don't think Safari mattered much.

Apple was the first to publicly call out native plugins (Jobs did so on stage) and outright refused to support them on iOS, then everyone else followed suit.


>then everyone else followed suit

NPAPI's death in non-IE-browsers started around 2015. Jobs announcing mobile Safari without Flash was 2010. Unfortunately, ActiveX still works to this very day.

Chrome built up a whole new PPAPI to support reasonably fast Flash after the Jobs announcement. Microsoft launched a major release of Silverlight long after Jobs' speech, but Silverlight (rightfully) died with Windows Phone, for which it was the main UI platform around the time of its practical death. Had Microsoft managed to launch a decent mobile operating system, we'd probably still be running Silverlight in some fashion today. Even so, Silverlight lasted until 2021 before it actually fell out of support.

Jobs may have had a hand in the death of Flash websites, but when it came to Java Applets/Silverlight, the decision had little impact. That plugin model was slowly dying on its own already.


> then everyone else followed suit

There was a Flash runtime on Android. It was terrible. Java applets were already dead anyway, outside of professional contexts, which are not relevant on phones anyway.


A coworker of mine who worked at Adobe through the death of Flash said a big reason for that death was Apple deciding that the iPod touch's Safari would not support plugins.

Adobe had big plans for the iPod supporting Flash, and that announcement all but killed their Flash division.

Yes, Adobe supported Flash for years after that, but it was more of a life-support thing than active development. They saw the writing on the wall and knew that for Flash to survive, it had to survive in a mobile world.

With the decreased support of Flash, the other browser devs simply followed suit and killed off a route for something like Flash running in a browser.


Makes sense, Flash was eating battery like crazy. And it was widely used in ads, so it would appear on all pages of the internet.

One of the first things I used to install on all my computers (laptops and desktops alike) was "block Flash until clicked" add-on.


Now no web browser can reliably block autoplaying video. :(

It mostly was politics. Browser crashes and slowness were almost always traced down to Microsoft's own Java plugin, which strong-armed the proper Java plugin install out of the way with every update (and every now and then besides, just to be sure), with a semi-compatible runtime and a classloader that insisted on fronting the download of all resources.

It created so much uncertainty across the ecosystem that even today people repeat the "applets crash browsers, good riddance" line.

But it was deliberate action by Microsoft.

So yeah, 100% politics, because without a court document, in modern society we cannot call this anything else.


Yes, I'm sure Jobs went on stage calling out Flash as the main source of Safari crashes because of Microsoft's Java plugin.

No? Microsoft Java was discontinued in 2004; the crashes were still infamous way later, in 2010. Flash was also notorious for crashing Firefox on YouTube. Not to mention the bad security of these plugins.

While I understand the point you’re making, the idea that Excel is deterministic is not commonly shared among Excel experts. It’s all fun and games until it guesses that your 10th separator value, “SEP-10”, is a date.

Meanwhile, earlier this week we had a big conversation about the skyrocketing costs of RAM. While it's technically true that GC doesn't mean a program has to hold allocations longer than the equivalent non-GC code, I've personally never seen a GC'ed program not using multiple times as much RAM.

And then you have non-GC languages like Rust, where you're hypothetically managing memory yourself in the form of keeping the borrow checker happy, but you never see free()/malloc() in a developer's own code. It might as well be GC from the programmer's POV.


> And then you have non-GC languages like Rust

Rust is a GC language: https://doc.rust-lang.org/std/rc/struct.Rc.html, https://doc.rust-lang.org/std/sync/struct.Arc.html


By that token so is C.

GC is a very fuzzy topic, but the overall trend is that a language counts as GC if the GC is opt-out, not opt-in. And more strictly, it has to be a tracing GC.


By your definition, is Nim a GC language?

So by that definition yes, as of Nim 2.0 ORC is the default. You need to opt-out of it.

I'm not sure this opt-in/out "philosophical razor" is as sharp as one would like it to be. I think "optionality" alone oversimplifies and a person trying to adopt that rule for taxonomy would just have a really hard time and that might be telling us something.

For example, in Nim, at the compiler CLI tool level, there is opt-in/opt-out via the `--mm=whatever` flag, but, at the syntax level, Nim has both `ref T` and `ptr T` on equal syntactic footing. But then in the stdlib, `ref` types (really things derived from `seq[T]`) are used much more (since it's so convenient). Meanwhile, runtimes are often deployment properties. If every Linux distro had their libc link against -lgc for Boehm, people might say "C is a GC'd language on Linux". Minimal CRTs vary across userspaces and OS kernel/userspace deployments. "What you can rely on/assume", which I suspect is the thrust behind "optionality", just varies with context.

Similar binding vagueness between properties (good, bad, ugly) of a language's '"main" compiler' and a 'language itself' and 'its "std"lib' and "common" runtimes/usage happen all the time (e.g. "object-oriented", often diluted by the vagueness of "oriented"). That doesn't even bring in "use by common dependencies" which is an almost independent axis/dimension and starts to relate to coordination problems of "What should even be in a 'std'-lib or any lib, anyway?".

I suspect this rule is trying to make the adjective "GC'd" do more work in an absolute sense than it realistically can given the diversity of PLangs (sometimes not so visible considering only workaday corporate PLangs). It's not always easy to define things!


> I think "optionality" alone oversimplifies and a person trying to adopt that rule for taxonomy would just have a really hard time and that might be telling us something.

I think optionality is what gives that definition weight.

Think of it this way. You come to a project like a game engine, find it's written in some language, and discover that for your usage you need no or minimal GC. How hard is it to minimize or remove the GC? Assume that changing build flags will also cause problems elsewhere due to behavior changes.

> Similar binding vagueness between properties (good, bad, ugly) of a language's '"main" compiler' and a 'language itself' and 'its "std"lib' and "common" runtimes/usage happen all the time (e.g. "object-oriented", often diluted by the vagueness of "oriented")

Vagueness is the intrinsic quality of human language. You can't escape it.

The logic is fuzzy, but going around saying stuff like "Rust is a GC language" because it has optional, rarely used Arc/Rc is just an off-the-charts level of wrong.


Do you consider ORC a tracing GC, even though it only uses tracing for cyclic types?

You can add your own custom GC in C — you can add your own custom anything to any language; it's all just 1s and 0s at the end of the day — but it is not a feature provided by the language out of the box like in Rust. Not the same token at all. This is very different.

Well, that's mostly what Arc<T> and Rc<T> in Rust are. Optional add-ons.

...as part of the language. Hence it being a GC language.

Is this another case of "Rustaceans" randomly renaming things? There was that whole debacle where sum types bizarrely became enums, even though enums already had an established, different meaning, with all the sad talking past everyone else that followed. This is starting to look like that again.


> ...as part of the language.

Which part? It's not available in no-std without the alloc crate. You can write your own Arc.

Most crates don't have to use Arc/Rc.

> Is this another case of "Rustacians" randomly renaming things?

No. This is a case of someone not having enough experience with Rust. Saying Rust is a GC language is like claiming Pascal is an object-oriented language because they share some surface similarities.


> Which part?

The part that is detailed on rust-lang.org.

> It's not available in no-std without alloc crate.

no-std disables features. It does not remove them from existence. Rust's worldly presence continues to have GC even if you choose to disable it for your particular project.

> This is a case of someone not having enough experience with Rust.

Nah. Even rust-lang.org still confuses sum types and enums to this very day. How much more experienced with Rust can you get than someone versed enough in the language to write comprehensive, official documentation? This thought of yours doesn't work.

> Saying Rust is a GC language is like claiming Pascal is object oriented language because they share some surface similarities.

What surface similarity does Pascal have to OO? It only has static dispatch. You've clearly not thought that one through.

Turbo Pascal has dynamic dispatch. Perhaps you've confused different languages because they happen to share similar names? That is at least starting to gain some surface similarity to OO. But message passing, of course, runs even deeper than just dynamic dispatch.

Your idea is not well conceived. Turbo Pascal having something that starts to show some very surface-level similarity to OO, but still a long way from being the real deal, isn't the same as Rust actually having GC. It is not a case of Rust having something that sort of looks kind of like GC. It literally has GC.


> no-std disables features. It does not remove them from existence.

It's the other way around; the standard library adds features, because Rust features are designed to be additive.

Look into it. The `std` library is nothing more than the Rust-lang-provided `core`, `alloc`, and `os` crates.

> Nah.

You don't seem to know how features work, how std is made, or how often Rc is encountered in the wild. It's hard to argue when you don't know the language you are discussing.

> Even rust-lang.org still confuses sum types and enums to this very day.

Rust lang is the starting point for new Rust programmers; why in the heck would they start philosophizing about a bikesheddy naming edge case?

That's like opening your car manual to see history and debates on what types of motors preceded your own, while you're trying to get the damn thing running again.

> What surface similarity does Pascal have to OO?

The dot operator (as in `struct.method`). The guy I was arguing with unironically told me that any language using the dot operator is OO, because the dot operator is a sign of accessing an object or a struct.

Much like you, he had very inflexible thoughts on what makes or does not make something OO; it reminds me so much of you saying C++ is a GC-language.

> Your idea is not well conceived.

My idea is to capture the colloquial meaning of "GC language". The original connotation is to capture languages like C#, Java, JS, etc. that come with a (more or less) non-removable tracing garbage collector. In practice, what this term means is:

- How hard is it to remove and/or not rely on GC? Defaults matter a lot.

- How heavy is the garbage collection? Is it just RC or ARC?

- How much of the ecosystem depends on GC?

And finally, how many people are likely to agree with it? I don't care if my name for a color is closest to the frequency of red if no one else agrees.


I can't say I've heard of a commonly used definition of "GC language" that includes C++ and excludes C. If anything, my impression is that both C and C++ are usually held up as exemplars of non-GC languages.

C++ didn't add GC until relatively recently, to be fair. When people from 30 years ago get an idea stuck in their head they don't usually ever change their understanding even as life continues to march forward. This isn't limited to software. If you look around you'll regularly find people repeating all kinds of things that were true in the past even though things have changed. And fair enough. There is only so much time in the day. You can't possibly keep up to date on everything.

The thing is that the usual comparisons I'm thinking of generally focused on how much the languages in question rely on GC for practical use. C++11 didn't really move the needle much, if at all, in that respect compared to the typical languages on the other side of said comparisons.

Perhaps I happen to have been around different discussions than you?


> focused on how much the languages in question rely on GC for practical use.

That's quite nebulous. It should be quantified. But, while we wait for that, if we assume by that metric C++ is not a GC language today, but tomorrow C++ developers all collectively decide that all heap allocations are to depend on std::shared_ptr, then it must become a GC language.

But the language hasn't changed in any way. How can an attribute of the language change without any changes?


> That's quite nebulous. It should be quantified.

Perhaps, but I'm reluctant to speak more definitively since I don't consider myself an authority/expert in the field.

> But the language hasn't changed in any way. How can an attribute of the language change without any changes?

The reason I put in "for practical use" is that, pedantically speaking, no language actually requires GC - you "just" need to provision enough hardware (see: HFT firms' (ab)use of Java by disabling the GC and resetting programs/machines at the end of the day). That's not relevant for basically everyone, though, since practically speaking you usually want to bound resource use, and some languages rely on a GC to do that.

I guess "general" or "normal" might have been a better word than "practical" in that case. I didn't intend to claim that how programmers use a language affects whether it should be considered a GC language or not.


If Rust is a GC language because of Rc/Arc, then C++ is a GC language because of std::shared_ptr, right?

Absolutely.

Interesting... To poke at that definition a bit more:

Would that also mean that C++ only became a GC language with the release of C++11? Or would C++98 also be considered a GC language due to auto_ptr?

Are no_std Rust and freestanding C++ GC languages?

Does the existence of Fil-C change anything about whether C is a GC language?


> Or would C++98 also be considered a GC language due to auto_ptr?

auto_ptr does not exhibit qualities of GC.

> Are no_std Rust and freestanding C++ GC languages?

These are not different languages. A developer opting out of certain features as enabled by the language does not change the language. If one was specific and said "no_std Rust", then you could fairly say that GC is not available, but that isn't applicable here. We are talking about "Rust" as written alone.

> Does the existence of Fil-C change anything about whether C is a GC language?

No. While Fil-C is heavily inspired by C, to the point that it is hard to distinguish between them, it is its own language. You can easily tell they are different languages as not all valid C is valid Fil-C.


> auto_ptr does not exhibit qualities of GC.

OK, so by the definition you're using C++ became a GC language with C++11?

> If one was specific and said "no_std Rust", then you could fairly say that GC is not available, but that isn't applicable here.

I'd imagine whether or not GC capabilities are available in the stdlib is pretty uncontroversial. Is that the criteria you're using for a GC language?

> No. While Fil-C is heavily inspired by C, to the point that it is hard to distinguish between them, it is its own language. You can easily tell they are different languages as not all valid C is valid Fil-C.

OK, fair; perhaps I should have been more abstract, since the precise implementation isn't particularly important to the point I'm trying to get to. I should have asked whether the existence of a fully compatible garbage-collecting implementation of C changes anything about whether C is a GC language. Maybe Boehm with -DREDIRECT_MALLOC might have been a better example?


> I should have asked whether the existence of a fully-compatible garbage collecting implementation of C change anything about whether C is a GC language.

In a similar vein, tinygo allows compilation without GC[1]. That is despite the Go spec explicitly defining it as having GC. Is Go a GC language or not?

As you can see, if it were up to implementation, a GC/non-GC divide could not exist. But it does — we're talking about it. The answer then, of course, is that specification is what is significant. Go is a GC language even if there isn't a GC in implementation. C is not a GC language even if there is one in implementation. If someone creates a new Rust implementation that leaves out Rc and Arc, it would still be a GC language as the specification indicates the presence of it.

[1] It doesn't yet quite have full feature parity so you could argue it is more like the Fil-C situation, but let's imagine that it is fully compatible in the same way you are suggesting here.


You make some good points, and after thinking some more I think I agree that the existence of GC/non-GC implementations is not determinative of whether a language is typically called a GC language.

After putting some more thought into this, I want to say where I diverge from your line of thinking is that I think whether a language spec offers GC capabilities is not sufficient on its own to classify a language as a "GC language"; it's the language's dependence on said GC capabilities (especially for "normal" use) that matters.

For example, while you can compile Go without a GC, the language generally depends on the presence of one for resource management to the point that a GC-less Go is going to be relatively restricted in what it can run. Same for Java, JavaScript, Python, etc. - GC-less implementations are possible, but not really reasonable for most usage.

C/C++/Rust, on the other hand, are quite different; it's quite reasonable, if not downright common, to write programs that don't use GC capabilities at all in those languages. Furthermore, removing std::shared_ptr/Rc/Arc from the corresponding stdlibs wouldn't pose a significant issue, since writing/importing a replacement is something those languages are pretty much designed to be capable of.


That last paragraph got me off Perl to Python. The first time I wrote Python like hash1[hash2]["key"] and it worked, then tried hash1[hash2]["array_name"][3] and it worked because that's the obvious way to write something, I fell in love and never looked back.

I never wanted to have to reason my way through chasing pointers through nested hashrefs again.
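
To put a concrete (made-up) example on it, this is the kind of thing I mean: plain nested dicts and lists, no hashrefs or arrows in sight. All the names here are hypothetical, just for illustration:

    # A nested structure built the obvious way (everything below is invented for the example):
    config = {
        "servers": {
            "web": {"host": "example.com", "ports": [80, 443, 8080]},
        },
    }

    # Nested lookups read exactly the way you'd guess:
    host = config["servers"]["web"]["host"]            # "example.com"
    third_port = config["servers"]["web"]["ports"][2]  # 8080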


Yesterday I used `pwgen` to roll a random password, and at first glance I legit thought it might've been working Perl code. I'm not even slightly kidding.

That split between bytes and unicode made better code. Bytes are what you get from the network. Is it a PNG? A paragraph of text? Who knows! But in Python 2, you treated them both as the same thing: a series of bytes.

Being more or less forced to decode that series into a string of text where appropriate made a huge number of bugs vanish. Oops, forgot to run `value=incoming_data.decode()` before passing incoming data to a function that expects a string, not a series of bytes? Boom! Thing is, it was always broken, but now it's visibly broken. And there was no more having to remember if you'd already .decode()d a value or whether you still needed to, because the end result isn't the same datatype anymore. It was so annoying to have an internal function in a webserver where the old sloppiness meant that sometimes you were calling it with decoded strings and sometimes with the raw bytes coming in over the wire, so sometimes it processed non-ASCII characters incorrectly, and if you tried to fix it by making it decode passed-in values, it started breaking previously-working callers. Ugh, what a mess!

I hated the schism for about the first month because it broke a lot of my old, crappy code. Well, it didn't actually. It just forced me to be aware of my old, crappy code, and do the hard, non-automatable work of actually fixing it. The end result was far better than what I'd started with.
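
Here's a minimal sketch of what I mean; the function and the value are made up for illustration, not from any real codebase:

    # Python 3: bytes off the wire vs. decoded text.
    incoming_data = "Büro".encode("utf-8")   # what a socket hands you: bytes

    def greet(name):
        # Expects text, not bytes.
        return "Hello, " + name

    # greet(incoming_data) raises TypeError in Python 3, because you can't
    # concatenate str and bytes. In Python 2 the same call would have quietly
    # gone through, which is exactly how the non-ASCII bugs stayed hidden.
    print(greet(incoming_data.decode("utf-8")))   # Hello, Büro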


That distinction is indeed critical, and I'm not suggesting removing that distinction. My point is that you could give all those types names, and manage the transition by having Python 3 change the defaults (e.g. that a string is unicode).

I’m a little confused. That’s basically what Python 3 did, right? In py2, “foo” is a string of bytes and u”foo” is Unicode. In py3, both are Unicode, and bytes() is a string of bytes.

The difference is that the two don't interoperate. You can't import a Python 3 module from Python 2 or vice versa; you have to use completely separate interpreters to run them.

I'm suggesting a model in which one interpreter runs both Python 2 and Python 3, and the underlying types are the same, so you can pass them between the two. You'd have to know that "foo" created in Python 2 is the equivalent of b"foo" created in Python 3, but that's easy enough to deal with.
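
To spell out that equivalence in Python 3 terms (just a sketch of the type mapping, not the proposed dual interpreter):

    # Rough mapping between the two type systems:
    #   Python 2 str     ("foo")   <->  Python 3 bytes (b"foo")
    #   Python 2 unicode (u"foo")  <->  Python 3 str   ("foo")
    assert b"foo".decode("ascii") == "foo"
    assert "foo".encode("ascii") == b"foo"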


Ok, who would suggest this when the community could take a modicum of responsibility?

I need to hear more about this Software Local 2142.
