DLoupe's comments | Hacker News

"Another difference in Rust is that values cannot be used after a move, while they simply "should not be used, mostly" in C++"

That's one of my biggest issues with C++ today. Objects that can be moved must support a "my value was moved out" state. So every access to the object usually starts with "if (have-a-value())". It also means that the destructor is called for an object that won't be used anymore.
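A rough sketch of what that tends to look like in practice (the class, member names, and the -1 sentinel are just illustrative):

    #include <cstdio>
    #include <utility>

    // Illustrative movable RAII wrapper: every member has to account for the
    // "my value was moved out" state, here represented by fd_ == -1.
    class Handle {
        int fd_ = -1;                                  // -1 = empty / moved-from
    public:
        explicit Handle(int fd) : fd_(fd) {}
        Handle(Handle&& other) noexcept : fd_(std::exchange(other.fd_, -1)) {}
        Handle& operator=(Handle&& other) noexcept {
            if (this != &other) { reset(); fd_ = std::exchange(other.fd_, -1); }
            return *this;
        }
        void write(const char* msg) {
            if (fd_ == -1) return;                     // the "if (have-a-value())" check
            std::printf("fd %d: %s\n", fd_, msg);
        }
        void reset() { if (fd_ != -1) { std::printf("closing %d\n", fd_); fd_ = -1; } }
        ~Handle() { reset(); }                         // still runs for moved-from objects
    };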


clang-tidy has a check for this. https://clang.llvm.org/extra/clang-tidy/checks/bugprone/use-...

MSVC and the Clang static analyzer have analysis checks for this too. Not sure about GCC.

It's worth remembering, though, that values can be reinitialized in C++ after a move.
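For example (illustrative names), the kind of code that check flags, alongside the reinitialization that is allowed:

    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<std::string> names;
        std::string s = "hello";

        names.push_back(std::move(s));   // s is left in a valid but unspecified state
        // auto n = s.size();            // use-after-move: clang-tidy's check flags this

        s = "world";                     // reinitializing the moved-from object is fine
        names.push_back(std::move(s));
    }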


I think you missed my point. The problem is not lack of guarding against programmer mistakes. It's that the compiler generates unnecessary code.


and of Even Toned Screens, the halftoning algorithm used by many Linux print drivers.


Since I use move semantics all the time, this is for me the most frustrating thing about C++ full stop. I really wish they'd fix this instead of adding all those compile-time features.


> Since I use move semantics all the time (...)

Everyone who ever uses C++ uses move semantics all the time, including move elision. It's not an obscure feature.

> (...) this is for me the most frustrating thing about C++ full stop.

I've been using C++ for years and I have no idea what you could possibly be referring to. The hardest aspect of move semantics is basically the rule of 5. From that point on, when you write a class you have the responsibility to specify how you want your class to be moved and what you want your moved-from class to look like, provided that you ensure you leave it in a valid state.

That's it.

What exactly do you believe needs fixing?


How would you fix this in C++?


By adding syntax and semantics for destructive moves, meaning the moved object is removed from its scope (without calling its destructor).


I've worked with C++ for a number of years, with a few codebases that were >1M LoC. Never did I stumble upon a situation where an object was moved and an existing symbol became a problem. I wonder what you are doing to get yourself in that situation.


> I wonder what you are doing to get yourself in that situation.

The problem with the current move semantics is that, compared to e.g. Rust: 1) the compiler generates unnecessary code and 2) instead of just implementing class T you must implement a kind of optional<T>.

Which means that, after all those years of using smart pointers, I find myself ditching them in favor of plain pointers like we did in the '90s.


When you say you must, do you mean that it’s best practice, or that this is UB or similar?


> When you say you must, do you mean that it's best practice, or that this is UB or similar?

I'm not OP, but the only requirement that C++ imposes on moved-from objects is that they remain valid objects. Meaning, they can be safely destroyed or reused by reassigning or even moving other objects into them. I have no idea what OP could possibly be referring to.


> The problem with the current move semantics is that, compared to e.g. Rust: 1) the compiler generates unnecessary code and 2) instead of just implementing class T you must implement a kind of optional<T>.

I don't know what you mean by "compiler generates unnecessary code" or why you see that as a problem. I also have no idea what you mean by "a kind of optional". The only requirement on moved-from objects is that they must be left in a valid state. Why do you see that as a problem?


The compiler generates code for calling the destructor after the object was moved. This was problem #1.

Regarding #2, take Resource Acquisition Is Initialization (RAII) as an example - in RAII, the existence of an object implies the existence of a resource. Now, if you want to be able to move, the object becomes "either the resource exists or it was moved out". As someone else noted in the comments, this affects not only the destructor: methods cannot assume the existence of the resource; they have to check it first. Kind of like optional<MyResource>.
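A small sketch of problem #1, using std::unique_ptr as a stand-in for any RAII type:

    #include <memory>
    #include <utility>

    void sink(std::unique_ptr<int>) {}   // takes ownership; body irrelevant here

    int main() {
        auto p = std::make_unique<int>(42);
        sink(std::move(p));
        // p is moved-from but still in scope, so the compiler still emits the
        // ~unique_ptr() call for it at the end of main(). At runtime it just sees
        // a null pointer and does nothing, but the destructor call (or its inlined
        // null check) is generated anyway; a compiler with destructive moves could
        // drop it entirely.
    }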


For example, the Boost library's "describe" and similar macro-based solutions. I've been using this for many years.


If you use Google to back up your WhatsApp chats (most people do), Google can already read your messages, because the backup is not encrypted.


I think this hasn't been true for a couple of years now

https://faq.whatsapp.com/490592613091019


Encrypted backups are "off" by default and need to be explicitly turned on.


I haven't installed WhatsApp from scratch in a long time; aren't backups off by default, overall?


They're off by default, but there are regular prompts to enable them and I guess most people eventually do it.


It's called "dynamic pricing" and it's everywhere. https://de.wikipedia.org/wiki/Dynamic_Pricing


For me, D failed to replace C++ because of a lack of overall design. It is more of a mix of great features. But once you start learning the details, simple things can get very complicated.

For example, function arguments can be "in", "out", "inout", "ref", "scope", "return ref" - and combinations.

Another example is conditional compilation. Great when used sparingly, but it can otherwise make it very difficult to understand how the code flows.

In the end, reading the source code of the standard library convinced me against it.

(The source code for the C++ standard library is much worse, of course).


I think it's a great mix of features. I agree it would make sense to separate the wheat from the chaff, but if you're a mature adult, you should be able to decide which features are useful to you and figure out a dialect that works.

In general, the design of the standard library is much less alien and baroque than the STL, and is more batteries-included, so you spend much less time puzzling over incantations and more time writing code. The code you have at the end is also much more concise and readable.

Likewise, because D is in a lot of ways "C++ with fewer problems and papercuts", I spend way less time figuring out totally inscrutable C++ compilation errors.

Consequently, I can spend more of my time writing code and thinking about how to use all D's nice features to better effect. Plus, given how fungible and malleable the language is, it doesn't take a lot of effort to rework things if I want to change them in the future.

Personally, I think this is the main reason D hasn't caught on. Its selling point is that it's pragmatic and doesn't shove a lot of dogma or ideology down your throat. This isn't sexy and there's nothing to latch onto. There are many styles you can write D code in... MANY more than C++: Python-style, C#-style, C++-style, C-style... hell, bash-style, MATLAB-style, R-style, whatever you want. But for some of these styles, you have to build the tools! The fact that all of this is possible is the result of combining one very practical and ergonomic programming language with a thousand different QOL improvements and handy tools... plus top-tier metaprogramming.

IMO, the major thing holding D back right now is also along the same lines. It offers pragmatism and practicality, but the tooling is still weak. Languages like C++, Rust, and Python totally outclass D when it comes to tooling... but you have to sacrifice flexibility and ergonomics for baroque madness (C++) or BDSM (Rust) or slow and impossible to maintain code (Python). The choice is yours, I guess!


> For example, function arguments can be "in", "out", "inout", "ref", "scope", "return ref" - and combinations.

Reminds me of "In case you forgot, Swift has 217 keywords now" https://x.com/jacobtechtavern/status/1841251621004538183


> function arguments can be "in", "out", "inout", "ref", "scope", "return ref" - and combinations.

None of them are required. What they do is provide enforcement of semantics that otherwise would have to be put in the documentation. And, as we all know, function documentation is always either missing, outdated or just plain wrong.

For a (trivial) example:

    void foo(int* p) { *p = 3; }
vs:

    void foo(out int i) { i = 3; }
In the latter, you know that `i` is being initialized by the function. In the former, you'll have to rely on the non-existent documentation, or will have to read and understand foo()'s internals.

`out` definitely is a win. Let's look at `scope`:

    void foo(int* p) { static int* pg; pg = p; } // stashes p where it outlives the caller
    void bar() {
        int x;
        foo(&x); // oops: the address of a local escapes bar()
    }
vs:

    void foo(scope int* p) { static int* pg; pg = p; } // compiler flags an error

This is most definitely a memory safety feature.


In general, you shouldn’t judge a language by the complexity of its standard library implementation unless you are planning to implement a comparable library.

Application code often only needs a subset of the language and can be much more readable.

The standard library needs to use every trick available to make itself expressive, reusable, backwards-compatible, and easy-to-use for callers. But for your application code you certainly want to make different tradeoffs.


> The source code for the C++ standard library is much worse, of course

But you prefer C++?

(The D standard library is in the process of being re-engineered for clarity, but still, it is far more comprehensible than C++'s.)


Thanks so much for replying, Walter. I'll give it another try.


It's Evil (Evil = Incompetence + Power)


Evil = Incompetence + Power - Empathy

I think there needs to be one more element, like agency? Because under this definition of evil, a sponge elected as president is evil although it wouldn't do anything at all in any situation. Maybe that's still evil?


Why $6,299 when the article says he paid around $900?


> Why $6,299 when the article says he paid around $900?

In the fifth paragraph:

“Admittedly, it did not cost me the $6300 from the article's title, much closer to $900. Nonetheless, everything I'm describing translates to every other Canon camera model!”


He discovered this information about his $900 camera but has found that it also applies to other models, likely the Canon EOS-1D X Mark III.


Because it gets more clicks.


He likely bought an old model second-hand :)

Edit: The camera he uses is a 2019 pocket camera. The 6299 must be another model that has the same restrictions.


[flagged]


What the hell


> The safety checks have uncovered over 1,000 bugs

In most implementations of the standard library, safety checks can be enabled with a simple #define. In some, it's the default behavior in DEBUG mode. I wonder what this library improves over that and why these bugs have not been discovered before.
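For example (the exact macro names vary by standard library and version; libstdc++ has _GLIBCXX_ASSERTIONS, and newer libc++ releases have hardening modes):

    // Build with standard-library bounds checks enabled, e.g.:
    //   g++     -D_GLIBCXX_ASSERTIONS demo.cpp
    //   clang++ -stdlib=libc++ -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_EXTENSIVE demo.cpp
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        return v[3];   // out of bounds: with checks enabled this aborts with a
                       // diagnostic instead of silently reading past the buffer
    }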


Being actually enforced, even in release.

Most folks don't use those #defines, and many still haven't learned about them.


It's a great question (_LIBCPP_DEBUG was already a thing in libc++), and AFAIK the answer is supposedly "it used to be too costly to enable these in production with libc++, and it no longer is." I have no first-hand insight as to how accurate this perception is.


That's exactly right. We've had extra hardening enabled in tests, and that does catch many issues. But tests can't exercise every potential out-of-bounds issue, which is why enabling it in prod allowed us to find & fix additional issues.


They turned those on and 1) checked that the software using them didn't break and 2) made sure they didn't tank performance.

Source: I worked on this apparently

