> Arguably the Rust language is a replacement candidate for 'C' (and C++).
Or is it C and 'C++'?
As it happens, I disagree, because I think a lot of programmers who can otherwise handle pointers and manual memory allocation are going to revolt if we make them use the explicit ownership notation Rust demands. It's too different from anything done in any other language, and the push-back that it's "too high-level" and "inefficient" and "bloated" will kill it for sure.
I'm studying Rust, though wouldn't claim I know it yet.
I believe Rust is an effort to create a language in which you can write code that's as fast and portable as C; and yet, for which, the compiler can help you coordinate a "single owner permitted to write" strategy for thread safety -- at the expense of increased bother and fuss coaxing your program into that pattern. I wouldn't call it a C replacement; while I'm glad to have Rust available, I prefer concurrency (when I need it at all) via processes, rather than threads, with no shared memory.
I've heard about at least two languages that adore threads and have issues with multi-processing. Perl 6 has threads in its runtime helping its GC, and calling fork kills its programs. I don't know yet if Rust has threads _required_ in a fashion that makes it crash if you try to fork().
Perl 6 cannot fork; and golang behaves badly if you fork in a program that uses goroutines. It ends up unhappy, since shared-library manipulation is kinda dependent on the process loader. Oops.
I'm tempted to hope that the Rust lang devs have kept use of threads out of its required runtime. To answer your question, I didn't mean to imply that rust cannot call fork(2), and I hope it isn't true. I meant to say that I wouldn't call Rust a C replacement; it's an effort to solve a problem (threads with shared heap or stack) that C flat doesn't attempt to consider. I love C, and feel obliged to learn Rust.
Rust has no more runtime than C. Threads are implemented in the standard library, but they'll only be used if you explicitly create one (or a library you're using does). Fork works fine :)
Rust has a very different design philosophy from C, and in that respect you are right that it isn't a complete replacement. That said, Rust is good for far more than just multithreading. It solves memory safety as well as thread safety.
It can work but it’s quite unsafe. The standard library doesn’t make any guarantees about fork safety. The language itself doesn’t understand what fork is, and so can’t guarantee anything.
FWIW, Perl 6 does have `Proc` (to run an external process) and `Proc::Async` (to run an external process asynchronously). But `fork`, no: it was decided that `fork` was a Unixism that was not supportable on OSes like Windows. And it was also decided not to repeat the mistake Perl 5 made in trying to mimic `fork` on Windows, and then using that idea to try to mimic threads on non-Windows OSes.
Thank you! That's great to hear. I'm sorry that they chose that direction; I'd much, much rather not support Windows than not support Unix's fundamental primitive for multiprocessing (and its coordinated IPC). But I do realize that not everyone can act on such preferences. Fortunately, Perl 5 is still alive and well.
Amen. IMO the root of all evil is compiler "optimization" of code. If you really, really need the wee bit of extra performance through optimization, hire an expert at ass'y language and optimize the bit that matters. Rarely is this the case, especially with business apps.
I've seen my fair share of bugs from the compiler incorrectly hoisting variables from loops, mis-using registers, etc. I prefer to turn off compiler optimization as one of the first steps in a new project, but that's me.
> If you really, really need the wee bit of extra performance through optimization, hire an expert at ass'y language and optimize the bit that matters. Rarely is this the case, especially with business apps.
If execution efficiency is not a concern and you're using C, then you're most likely using the wrong language. Especially for "business apps".
> If execution efficiency is not a concern and you're using C
Two things.
Thing one: Execution efficiency on superscalar processors is very unlike what you assume it to be. For instance, you assume alignment makes your program fast, when in fact it can make it slower due to cache pressure.
Thing two: On lower end processors what's important is not speed but code size.
Thing three: (off by one error natch) The amount of critical C code is tiny tiny tiny compared to the amount of C code where execution speed just does not matter because the code is almost never executed. The bleeding hot sections are often rewritten in assembly (encryption cough decryption)
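To make "thing one" concrete, here's a minimal C sketch (the struct and field names are mine, purely for illustration): compilers may not reorder struct members, so an alignment-unfriendly field order inflates the struct with padding, and bigger objects mean fewer of them per cache line.

```c
#include <assert.h>
#include <stddef.h>

/* Same three fields, two orderings. On a typical LP64 ABI the first
   layout pads each char up to the long's 8-byte alignment (24 bytes
   total), while the second packs the chars together (16 bytes).
   More bytes per object means fewer objects per cache line. */
struct padded  { char a; long b; char c; };
struct compact { long b; char a; char c; };
```

On any ABI, `sizeof(struct padded) >= sizeof(struct compact)` holds; the exact sizes vary by platform, which is part of the point: "aligned" does not automatically mean "fast".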
> Execution efficiency on superscalar processors is very unlike what you assume it to be.
It's not. What do you think I "assume execution efficiency to be"?
> On lower end processors what's important is not speed but code size.
That's why I said "most likely", not "always". Besides, those tiny microprocessors typically don't run the "business apps" which bokglobule was talking about.
> The amount of critical C code is tiny tiny tiny compared to the amount of C code where execution speed just does not matter because the code is almost never executed.
Not true for most software. Sure, most applications spend most of their execution time in a small part of the code, but it's typically not so small that you could easily rewrite it in assembly.
> The bleeding hot sections are often rewritten in assembly (encryption cough decryption)
Cryptography kernels are written in assembly to ensure that they have a constant execution time to prevent timing attacks. Not so much for performance.
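For illustration, here's what the constant-time property looks like in C. This is a generic sketch (`ct_compare` is my name, not any particular library's API): it touches every byte regardless of where a mismatch occurs, so execution time doesn't leak the position of the first difference the way an early-exit memcmp can.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison: OR together the XOR of every byte pair.
   No data-dependent branches or early exits, so the running time does
   not depend on where (or whether) the buffers differ.
   Returns 0 iff the buffers are equal. */
static int ct_compare(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff;
}
```

Contrast with memcmp, which may return as soon as the first differing byte is found; that early exit is exactly the timing signal an attacker measures.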
You assume that modern high-performance processors execute instructions. They do not: they analyze, optimize, and emit their own internal instruction stream, and then execute that. That extra step is what makes fiddly UB-type 'optimizations' worthless. And since you don't know what processor is going to execute the code or how it's optimized, the speed gains are basically 'noise'.
> Besides, those tiny microprocessors typically don't run the "business apps" which bokglobule was talking about.
"Business apps" is usually network and io bound more than anything.
> Not true for most software.
Dan Bernstein says you're wrong.
> Cryptography kernels are written in assembly to ensure that they have a constant execution time to prevent timing attacks. Not so much for performance.
Dan Bernstein says you're wrong here too. Performance is everything with encryption. Consider video streaming over an encrypted connection. Yeah, that.
> You assume that modern high performance processors execute instructions.
Yeah, stop making stuff up about me.
> they analyze, optimize, and emit their own internal instruction stream, and then execute that.
I know how an out-of-order, superscalar processor works. I also know that they can't do magic, because their optimizations are severely limited by time and scope constraints (although they do have access to some information that isn't available at compile time).
> That extra step is what makes fiddly UB type 'optimizations' worthless.
I'm not arguing for optimizations based on UB. This subthread is about compiler optimizations in general, which bokglobule claimed to be nearly always unnecessary.
> Dan Bernstein says you're wrong.
Do you have a study you can cite? Do you think just dropping a name will convince anyone?
> Performance is everything with encryption.
I never claimed it wasn't. I claimed that performance isn't the reason why cryptography kernels are rewritten in assembly, and that's because C + optimizing compiler is already fast enough and the small performance gain alone doesn't justify the switch to assembly.
You can care enough about efficiency to use C, and still not care about the last 0.1% of efficiency. You can especially not care about the last 0.1% of efficiency in 99% of your code.
If you're compiling C with -O0 (as OP implies)... it's not just the last 0.1% of efficiency you're missing out on. Modern C compilers generate really, really crappy code at -O0, and you're looking at a 10x, 20x slowdown by forgoing optimizations. At those slowdowns, using an interpreted language that lacks a JIT starts to look competitive in performance.
Which is why it's so important to have ultra-clear guidelines (have you looked at the clang documentation or the GCC manpages?) on how to disable optimizations that can be dangerous or unpredictable (strict aliasing comes to mind) within -O1 or -O2 or -Os.
One shouldn't need to throw the baby out with the bathwater (-O0) in order to get some semblance of semantics that won't pull the rug out from under you when you're not looking.
In practice, what these requests tend to boil down to is requests for the compiler to read the programmer's mind (strict aliasing is pretty much the biggest exception). What optimizations do you think are "dangerous/unpredictable"? Demanding that things like traps happen predictably means that you heavily constrain the ability to do dead-code elimination (can't eliminate code that could trap!) or loop-invariant code motion, two of the biggest performance wins, especially for things that could trap such as memory loads and stores, which are the code you most want to avoid whenever possible for performance.
Undefined behavior essentially says that compilers don't have to care about what happens in the cases that would constitute undefined behavior. This doesn't manifest in the compiler as `if (undefined_behavior()) { destroy_users_code(); }`, contrary to popular opinion. It instead tends to manifest as logic like "along this control-flow path, this condition is true, so we can now thread the jump from block A to block C since you're redundantly checking a known-true condition", and only after unwrapping several layers of computed assumptions do you find the "we assumed overflow cannot occur" at the bottom.
There seems to be some idea that there is some "evil pass" in the compilers which looks for undefined behavior and then maliciously optimizes surrounding code to do something the programmer didn't expect.
Not at all. Even all the examples of UB leading to unexpected optimizations usually involve a chain of events, with very straightforward and necessary optimizations being involved, like value propagation, inlining, dead-code elimination, etc - and these aren't inherently related to "exploiting UB". You could (in some compilers) disable one of the things in the chain and perhaps avoid the problem for that example, but you'd also hurt a lot of other code that relied on the optimization.
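A minimal sketch of such a chain, with a hypothetical function (not from any real codebase): the unconditional dereference lets ordinary value propagation conclude the pointer is non-null on that path, and ordinary dead-code elimination then removes the later check. No "exploit UB" pass is involved.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical example. Dereferencing a null pointer is UB, so after
   the load the compiler may record "p != NULL on this path". Value
   propagation then proves the `p == NULL` test false, and dead-code
   elimination deletes the early return: two ordinary, individually
   desirable optimizations combining into a surprise. */
static int first_element(const int *p)
{
    int v = p[0];        /* UB if p is NULL, so the compiler assumes it isn't */
    if (p == NULL)       /* now provably false; likely removed at -O2 */
        return -1;
    return v;
}
```

For any well-defined call (non-null `p`), the optimized and unoptimized versions behave identically, which is exactly the contract's point.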
That's one of the problems with people asking for specific small snippets of code where the UB-related transformation produced a big gain: you can certainly find these (but they'll often be picked apart) - but the bigger problem is not with the specific examples: it's that whatever optimization you disable to make the small example work as you'd expect might then produce worse code across your application.
So people who want to disable the optimization that does "that" are often incorrectly assuming there is a small simple optimization which leads to "that" in the first place.
Still, I definitely agree that the situation regarding UB is depressing in many respects. Many of the decisions made by the C committee in the past haven't aged well. If you take a look at the low-level optimizations afforded by the largely-deterministic Java specification, they are largely at the same level as C - but Java had the benefit of coming along a couple of decades later where many open questions in C's time, such as integer overflow behavior, integer sizes, shift behavior, pointer models, etc, had largely been resolved. Platforms that don't conform to the JVM's model of an ideal machine will just have to generate slow code in some cases.
I'm not requesting the compiler to read my mind, I'm just asking for dead obvious and simple guidelines that allow me to perform a cost-benefit analysis and tune the compiler's behavior to what I consider acceptable.
Examples:
I don't care at all about optimizations that are a result of treating signed integer overflow as undefined behavior. I'll go for predictable, deterministic behavior every single time.
Same for strict aliasing rules. I find it absolutely insane and mind boggling that -O2 enables strict aliasing amongst who knows what else (50+ other flags). Why can't the impact of these optimization flags be easier to deduce? Why do I feel like I need to be a compiler developer just to get some measure of confidence in what the optimizers are doing? It's insane that there are people who treat this sort of unwarranted, dangerous complexity as a rite of passage and don't push for something that's better. Most importantly, it's terrifying that a significant chunk of C programmers _are not even aware of these issues_.
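For what it's worth, both points have well-defined escape hatches in standard C itself, beyond compiler flags like -fwrapv and -fno-strict-aliasing. A sketch of each: unsigned arithmetic wraps by definition, so an overflow check on unsigned values cannot be folded away; and memcpy-based type punning side-steps strict aliasing entirely.

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>
#include <string.h>

/* Signed overflow is UB, so for signed x a compiler may fold
   `x + 1 > x` to true. Unsigned arithmetic wraps modulo 2^N by
   definition, so this check is well defined and must be honored. */
static int wraps_to_zero(unsigned x)
{
    return x + 1u == 0u;   /* true only for x == UINT_MAX */
}

/* Under strict aliasing, *(uint32_t *)&f is UB. memcpy between
   same-sized objects is always well defined, and compilers typically
   lower this fixed-size copy to a single register move anyway. */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}
```

The memcpy idiom assumes `float` and `uint32_t` have the same size, which holds on every mainstream platform; the defined alternatives cost nothing at runtime once the optimizer is on.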
C will stick around for decades and will continue to be picked up by newcomers. It doesn't have to stay as dangerous and uncompromising as it is now.
EDIT: There is some progress with things like ubsan and asan but alas they're fairly limited platform-wise. What's worrying is that there hasn't been a clear shift in the mentality of those who control the language as the OP indicates in his report.
> I'm not requesting the compiler to read my mind, I'm just asking for dead obvious and simple guidelines that allow me to perform a cost-benefit analysis and tune the compiler's behavior to what I consider acceptable.
And how can I, as a compiler writer, know what you, the compiler user, consider acceptable without reading your mind?
It would be nice if we had some sort of contract. I, the compiler writer, will say that, so long as you write code that conforms to this contract, I will faithfully execute that code. And you, the programmer, will say that you will only write code that conforms to this contract, and that I don't need to worry about you writing code that fails to conform to this contract. Well, somebody already wrote this contract: it's the C specification.
What you're really saying is that you don't like the contract you have, which is a totally valid position to take (I myself would love to see strict aliasing go die in a fire). But almost no one who objects to the C contract is willing to actually negotiate a new contract. Instead, it's almost invariably a complaint that amounts to "you fucking compiler writers are ruining my code with your stupid optimizations." Well, no, your code was already broken per the contract; we were just too dumb to realize it earlier. And for the big undefined behaviors where well-behaved semantics are feasible (such as strict aliasing or signed integer overflow), we provide options to let you say "I don't agree to the C contract, I want to agree to a not-quite-C contract instead."
Many people right now don't realize that undefined behavior is merely a contract in which programmers agree not to write such code, true. But that's why we, as compiler writers, try to educate people about why undefined behavior exists and what it means for programmers, and provide tools to make it much easier for programmers to find where it exists (hence things like asan and ubsan).
> C will stick around for decades and will continue to be picked up by newcomers. It doesn't have to stay as dangerous and uncompromising as it is now.
Exactly. Even if most of us don't touch it directly, we need to use systems that make heavy use of C (including Objective-C and C++), so it would already be an improvement to our computing stack if C could be improved in this way.
Every once in a while I have to compile a kernel without optimizations. The performance penalty you pay is far from 0.1%; it is definitely noticeable at first glance.
There might be projects where even that does not matter, but I would not think those are the majority.
An order of magnitude slower on non-trivial, computation-heavy code (i.e., not just doing a lot of IO or calls into libraries/the kernel) is a pretty good rule of thumb. Sometimes better, sometimes much worse.
The more layers of abstraction, the worse the penalty for not optimizing, so C++ is usually more heavily affected than C, for example: many of the so called "zero-cost" abstractions in C++ rely on a good optimizer.
C is nowadays foremost a systems programming language. Try compiling your operating system with -O0 and see if you still don't need those optimizations. You'd be surprised.
There's tenuous and difficult-to-reason-about optimizations, and then there's removing all of the completely useless loads and stores of intermediate results that occur in the default simplistic translation.
I might be with you on the former, but that first pass really cuts down on executable (icache) size, nets a huge performance win, and actually makes the generated code a lot easier to read.
Some of those analyses can get pretty involved, but I don't think we should be discouraging people from investigating them, or from providing nice safe defaults and flags to turn them on.
One other speed-independent factor here is that, like you, I usually start without any optimizations. And usually when I turn them on I find a few bugs in my own code right away. That's pretty helpful.
Agree. Some of the complaints come back to not really understanding the ES6/ES7 feature set that was created to allow JS to become more compact yet expressive in doing common tasks (like creating an object from an existing object while changing a few parts).
For the Redux issue, the light bulb went on for me at one point about how best to use it to solve specific issues. React is focused on component-based development of the UX; everything in the small library is there to help with this, including "state" (that is, the internal logic state of the component). Since React went with the simpler one-way flow of data updates (vs. Angular's two-way binding), Redux came along to help bridge the divide between clusters of components that share some data.
So when designing React apps, I almost always use React state. 80-90% of the time it's fine, works well and is easy to understand since everything about the component is there (along with the "props" or arguments to the component).
I use Redux when I occasionally need to do the following:
1) Share data between otherwise unrelated components
2) Set up "global" state that persists across page transitions, etc.
3) Notify a parent component of some event from a child. (This can also be done with a simple callback provided as a prop, to avoid Redux in this situation.)
So think of this in two levels - local component state within a nest of related components, and shared state within a set of unrelated components; the latter is where Redux can help.
I much prefer (and find simpler to understand) that state is local and changes one-way. It's similar to the issue whether a language supports reference types as parameters or sticks to value types. Reference types make it easy to pass along side effects to the caller, but it can also introduce hard to understand side effects.
The tech industry keeps repeating the same business mistakes. This period strikes me more and more of the 1998-99 era when there was so much euphoria that there was no way but up. Then March 2000 came, the first Internet high flyers started to fail, and very quickly the whole thing unraveled. We're on the same road again...
The latest stats I've seen for us here in the DC metro area are that we have 7 of the 10 richest counties in the U.S. That tells you a lot about the density of opportunity here. The downtown areas have gentrified amazingly in the last several years. Nice place to be if you want to be close to things.
I was in Seattle recently and it's a truly beautiful city with a stunning landscape. But seemed kinda sleepy compared to DC. I was in Silicon Valley also recently and came away scratching my head why anyone would wish to live there now - miles upon miles of industrial parks, seemed kinda run down even in Cupertino. But just my 2 cents.
What people really mean here is that when you consume real fruit, you get fiber, vitamins, other micro nutrients and, yes, fructose. But the quantity of fructose you get is relatively small. When people drink fruit juice, basically all they are getting is the fructose (with water). And you're getting many apples worth at one time. This overloads the liver which starts turning the fructose into fat.
So eating an apple or an orange is fine. Drinking fruit juice is not. Hope this helps
Frankly I think people are insane to use any of these password manager products, whether SaaS or local. You're trusting a 3rd party to exercise control over your most sensitive digital information. Since the majority of people on HN are developer-types, you'd think "we" would write a little code, if necessary, for ourselves to make it easier to remember passwords. Basically a little DIY.
By responding to this comment, I increase my chances of being victimized by some percent. By disagreeing with you within my reply, I increase it further. By drawing attention to my comment 'almost deliberately', I probably raise the 'rate' of increase. Using a paragraph much longer than this one would draw further scrutiny.
A password manager is good-to-great the majority of the time. By drawing attention to yourself in a manner as small as this, or as large as describing my exact setup and process, I should start to worry about my digital security. By stating that locks are meant for honest people, I should be able to draw some agreement from readers of this comment. Any and all of these points raise me out of the 'crowd' of password manager users and paint me as some shade of a target for malicious activity.
However, I believe that, notwithstanding the above, the average user is 99-100% safe using a password manager with best-practice settings.
It's precisely because coconut oil is 100% plant-based saturated fat that it IS healthy. The notion that saturated fat causes heart disease has been thoroughly debunked as sham science. It's the inflammation in the arteries, stimulated by excess sugar consumption (refined carbs, table sugar, soda, etc.), that causes the problem. The body sends cholesterol to the scene to patch the damage.
Best analogy I've heard is "Gee whiz, every time I see a fire, I see firemen. I guess firemen cause fires." Cholesterol does not cause heart disease all by itself.
You'd think scientists and the media would have stopped relying on Dr. Ancel Keys by now for their rationale for what causes heart disease. His 1950s "Seven Countries" study was an example of cherry-picked data used to support a pre-determined, desired outcome.
Finally, I can point to cultures, such as the Inuit, whose diet is almost exclusively saturated fat and protein and who are healthy (as long as they stick to their native diet). I challenge anyone to show me a culture whose diet is primarily sugar-based with comparable health.
So why does the AHA still say that saturated fat raises LDL, and cites multiple trials? This goes beyond the media and Ancel Keys. Are these studies not able to control for confounding factors like replacing the sat. fat with sugars and refined carbs?
For what it's worth I don't think saturated fat is necessarily bad. I don't have links but I've read a couple of good studies that showed saturated fat raised LDL in a cohort of sedentary participants, but had no impact on LDL in a cohort of regular exercisers. This suggests it may be more important to focus on exercise if you're worried about HDL/LDL.
Keep in mind there's also increasing evidence that HDL/LDL are symptoms, not causes of heart disease. Maybe the strongest is the failure of drugs that lower LDL and increase HDL - they don't make people better.
E.g. Eli Lilly's Evacetrapib: tested on 20k patients over 24 months, LDL down 30%, HDL up 125% - but zero difference in cardiovascular health outcomes. Needless to say it failed the trial and has been scrapped.
>Finally, I can point to cultures, such as the Inuit, whose diet is almost exclusively saturated fat and protein and who are healthy (as long as they stick to their native diet). I challenge anyone to show me a culture whose diet is primarily sugar-based with comparable health.
Eating fresh raw meat, you actually do consume copious sugars, in the form of the raw glycogen stores of the now-dead animal.
Glycogen isn't made of fructose. Fructose is the molecule that dysregulates appetite, and it is now considered by many to be at the root of the obesity epidemic.
We've been happily using coconut oil for centuries. It's the staple. Let those who disagree use their "refined" canola oil ;). Check out which oils are good for high-heat cooking.
Actually I think this self-learning approach is a great example of what it takes to be a great developer regardless of background (or formal training).
She built one project each day for 180 days learning a bit more each day. She chronicled her mistakes and successes. HN had a post on it at the time and the majority of developers that responded were very supportive.