Hacker News | zevets's comments

I think this is a major mistake for Zig's target adoption market: low-level programmers trying to use a better C.

Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.


For what it's worth, that hasn't been my experience with Julia – I've found it easier to debug than Python, Scala, or Clojure (the other languages I've used at jobs).

The tooling makes it easy to tell which version of a method you're using, though that's rarely an issue in practice. And the fact that methods are open to extension makes it really easy to fix occasional upstream bugs where the equivalent has to wait for a library maintainer in Python.

500kloc Julia over 4 years, so not a huge codebase, but not trivial either.


Ada has them, and I guess we all agree on its systems programming nature.


NOOOO!

What Ada (and Rust) calls generics is very different -- it is like template functions in C++.

In those languages, the version of the function selected is based on the declared (compile-time) type of the arguments.

In CLOS, Dylan, and Julia, the version selected is based on the runtime type of the actual arguments.

Here's an example in Dylan that you can't do in Ada / Rust / C++ / Java.

    define method fib(n) fib(n-1) + fib(n-2) end;
    define method fib(n == 0) 0 end;
    define method fib(n == 1) 1 end;
The `n == 1` is actually syntactic sugar for the type declaration `n :: singleton(1)`.

The Julia version is slightly more complex.

    fib(n) = fib(Val(n))
    fib(::Val{n}) where {n} = fib(n-1) + fib(n-2)
    fib(::Val{0}) = 0
    fib(::Val{1}) = 1
    
    println(fib(30))
This is perhaps a crazy way to write `fib()` instead of a conventional `if/then/else` or `?:` or switch with a default case, but kinda fun :-)

This of course is just a function with a single argument, but you can do the same thing across multiple arguments.

    define method ack(m, n) ack(m-1, ack(m, n-1)) end;
    define method ack(m == 0, n) n+1 end;
    define method ack(m, n == 0) ack(m-1, 1) end;


You missed the way Ada does OOP, and went completely overboard talking about generics.

As you can see from my comment history, I am quite aware of CLOS, Lisp variants and Dylan.


Last I checked, Ada does not have multimethods/generic functions in the sense of CLOS, Dylan, and Julia. It has static function overloading and single-argument dynamic dispatch (on tagged types), just like C++.


For better or worse, Fortran is still a popular language to write clever PDE schemes in, as it minimizes "time to first, fast-enough-running code".

But for anything with a userbase of more than ~15 people, C/C++ are widely preferred.


Julia is starting to pick up steam here. It's a lot easier to write mixed-precision algorithms in, since the type system is pretty much designed for efficiently writing generic algorithms (and it doesn't hurt that Julia's ODE solvers are SOTA).


> Julia is starting to pick up steam here

First time I saw this claim was over 9 years ago.


We must disclose that @adgjlsfhk1 works for JuliaComputing. Sometimes they forget to do so on their own.


Julia's choice to encourage people to name their variables with Greek letters is bad, though. There's a whole group of students who struggle with the symbols but understand the concepts (a residual). Julia, when used to its full capabilities, gains an enormous amount of its power from a huge amount of clever abstractions. But in the first-course-in-numerical-methods context, this can be more off-putting than the "why np?" stuff this article mentions.

For teaching linear algebra, MATLAB is unironically the best choice - as the language was originally designed for that exact purpose. The problem is that outside of a numerical methods class, MATLAB is a profound step backwards.


If students struggle with the use of Greek letters as symbols, it'll be difficult for them to deal with the math and physics where that's the standard notation. Intuitively, it feels like the best thing the language can do here is enable notation that is as close as possible to the underlying math.


> If students struggle with the use of Greek letters as symbols

After using Character Map or other "user friendly" methods to enter Greek letters as symbols on a computer, I would say, yes, people struggle with the use of Greek letters. Unless, of course, one has a Greek keyboard.


I always install WinCompose specifically for this. Then a Greek letter is as simple as pressing Compose with *, followed by the related Roman letter. Though I still struggle to remember which key maps to which for the letters with no real 1-to-1 mapping (like theta).


Whatever language you choose, you are only going to be teaching a subset of it anyway, so just ignore the Unicode identifier support. I code in Julia professionally, and never use it.


Using Greek symbols as public-facing function arguments, etc., is definitely not recommended, and not that common (at least in my experience).

They're best used for internal calculations, where the symbols better match the actual math and make it easier to compare with the original reference.


Alternatively, any implementation of operator+ should have a notional identity element, an inverse element and be commutative.


C++ would be a very different language if you couldn't use floats:

    (NaN + 0.0) != 0.0 + NaN
    Inf + -Inf != Inf

I suspect the algebraists would also be pissed if you took away their overloads for hypercomplex numbers and other exotic objects.
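For the skeptical, these identities are easy to check in any IEEE-754 language; here's a quick sketch in Rust:

```rust
fn main() {
    // NaN compares unequal to everything, including itself,
    // so either side of a NaN-producing addition compares unequal.
    let a = f64::NAN + 0.0;
    let b = 0.0 + f64::NAN;
    assert!(a != b); // both are NaN, and NaN != NaN

    // Inf + -Inf is NaN, not Inf: infinity has no inverse element.
    let c = f64::INFINITY + f64::NEG_INFINITY;
    assert!(c.is_nan());

    // Float addition is not associative either.
    assert!((1e308 + 1e308) - 1e308 != 1e308 + (1e308 - 1e308));

    println!("IEEE-754 addition is not a group operation");
}
```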


Please do this.

But first we need to take step zero and introduce a type "r64": an "f64" that is never NaN/inf.

Rust has its uint-that's-not-zero, so why not the same for floating-point numbers?


You can write your "r64" type today. You would need a perma-unstable, compiler-only feature to give your type a huge niche where the missing bit patterns would go, but otherwise there's no problem that I can see. If you don't care about the niche, it's just another crate; there is something similar called noisy_float.
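A minimal sketch of such a wrapper, using only std (the name R64 and its methods are invented here for illustration, not noisy_float's actual API):

```rust
/// A finite-only f64 newtype: construction rejects NaN and +/-inf.
/// Without the unstable niche feature, Option<R64> is larger than f64.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct R64(f64);

impl R64 {
    /// Returns None for NaN and infinities, Some otherwise.
    fn new(x: f64) -> Option<R64> {
        if x.is_finite() { Some(R64(x)) } else { None }
    }

    fn get(self) -> f64 {
        self.0
    }
}

fn main() {
    assert!(R64::new(1.5).is_some());
    assert!(R64::new(f64::NAN).is_none());
    assert!(R64::new(f64::INFINITY).is_none());
    assert_eq!(R64::new(2.0).unwrap().get(), 2.0);
    println!("finite-only wrapper works");
}
```

Because NaN is excluded by construction, such a type could also soundly implement a total ordering, which plain f64 cannot.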


I can do it, and I do similar such things in C++ - but the biggest benefit of "safe defaults" is the standardization of such behaviors, and the resultant expectations/ecosystem.


> Rust has its uint-thats-not-zero

Why do we need to single out a specific value? It would be way better if we could also have uint-without-5-and-42. What I would wish for is type attributes that really belong to the type.

    typedef unsigned int __attribute__ ((constraint (X != 5 && X != 42))) my_type;


Proper union types would get you there. If you have them, then each specific integer constant is basically its own type, and e.g. uint8 is just (0|1|2|...|255). So long as your type algebra has an operator that excludes one variant from the union to produce a new one, it's trivial to exclude whatever you like, and it's still easy for the compiler to reason about such types and to provide syntactic sugar for them, like 0..255.


Those are the unstable attributes that your sibling is talking about.


Yeah, of course I can put whatever I want in my toy compiler. My statement was about standard C. I think that's what Contracts really are, and I hope this will be included in C.


Oh sure, I wouldn’t call rustc a “toy compiler” but yeah, they’d be cool in C as well.


It's slightly less cynical than that. It takes about eight years to design a space mission and rocket, but doing the detailed design is expensive as hell, so in order to meet a budget they then change the mission, so they can go back to the vastly more affordable task of talking about doing work vs. doing work.


I take four pills a day, and the primary side effect is weight gain. The earlier 1950s-era treatment made me exhausted 24/7. There's a new trial with a new target that looks to solve the remaining symptoms of the disease, with effectively no side effects.

The big problem is that it's a chronic blood cancer, so the pills have a list price of $180k/yr. Who knows if my insurance will cough up for a second big-money prescription.


My father is 15y out from a trial (at MD Anderson) that put his CLL into remission. You may already know about The Leukemia and Lymphoma Society[1] but they can help with the cost of prescriptions (including negotiating the prices down with the pharmaceutical companies!)

1: https://www.lls.org/


At that cost, it's worth looking into moving to a country that actually has reasonable medical costs instead of laws protecting those milking the system. A Plan B, perhaps?


Some countries with publicly funded healthcare make immigration much more difficult if you have an expensive health condition. This is the case for Canada for example [1].

[1] https://www.canada.ca/en/immigration-refugees-citizenship/se...


This is why people need to think about these things when they are in full health.


My actual co-pay is $10/mo for the good stuff, plus warfarin (Eliquis/Xarelto were too weak for me :/), which is ~$12 for a 90-day supply from the mail-order PBM pharmacy. I average about $1500/yr in out-of-pocket medical expenses. My company self-insures and has an extremely generous insurance plan.

Plan B is to wait until 2028, when it goes off patent. I think I can keep my job till then. I've learned from the HR folks that they just signed another 3-year contract with the insurance company, so I'm not foreseeing any major changes to coverage. This drug is super pricey, as it was originally targeted at people with acute cancers; now the largest market is chronic-disease patients, but they never lowered the price.

I suspect the insurer/PBM are making a small fortune off of my care. They are also being sued by the pharmaceutical industry for using a "co-pay maximizer", which caps patients' out-of-pocket co-pays and goes after the pharmaceutical companies' "charities" that help patients purchase their products, which the insurer then takes a cut from.

And the weight gain isn't fluid; it's definitely body fat. I think the weight gain is from the "baseline" treatment being a mutagenic chemotherapy, the likely fact that my (previously) enlarged spleen was impinging on my stomach and limiting my appetite, and the lived fact that it massively slows your metabolism, as I'm always a bit cold.


I'm not sure how that would work, do countries accept this kind of behaviour?

It's like you've been paying your (lower) taxes in country X and now come over to enjoy the saner system. I guess you should have chosen your priorities earlier?


Out of curiosity, when some medication causes weight gain, how does it work? Does it increase appetite? Or does it slow metabolic rate?


Prednisone is a pretty common drug with weight gain as a side effect, so that might be a good place to look further.

It increases water retention (obviously not permanent or unbounded), increases appetite, and redistributes fat (giving the appearance of weight gain).


My grandmother took it most of her late life for management of COPD. She had a hard time getting off it completely. She had immense self-control and managed to control the weight gain side-effects, but she had some of the moon face appearance.


Prednisone is usually not a treatment for cancer, but rather a treatment for the cancer treatment.

A potential side effect of immunotherapy is it can cause the immune system to go haywire and start attacking non-cancer cells.


Right, but it's a well researched drug with a weight gain side effect, so it's probably a reasonable entry point for them to learn about the thing they asked (unless they happened to care about that cancer drug in particular, but that's not what it sounded like to me).


I take mirtazapine for crushing depression and now I have clinical obesity and borderline diabetes. Medicare won't cover obesity treatments other than some lifestyle habits lecturing because they consider it "my fault" with zero nuance. [0]

0. https://www.medicare.gov/coverage/obesity-behavioral-therapy


Usually, aside from water retention, it’s the appetite, I would assume. Lower metabolic rate by itself would lower the appetite because the person would feel less hungry.


Metabolic rate and appetite are loosely correlated at best. Most stimulants simultaneously reduce appetite, and increase metabolic rate. (in fact, that's where a significant portion of their negative side effects come from. Habitual meth users tend to become malnourished, mostly because of the appetite suppression, which combined with teeth grinding jitters, causes the iconic "meth mouth")


It's metabolic rate. I'm always cold.


Not OP but I'd guess fluid retention.


Usually it's just fluid.


Are you talking about acalabrutinib and zanubrutinib? If so, have you looked into the Chinese version, orelabrutinib? Chinese pharma has gone from backwater to competitive with the US in about 5 years.


This is bad science. Patients schedule when they go to immunotherapy appointments. People who go in the morning are still working/doing things, whereas once you get _really_ sick you end up scheduling mid-day, because it's such a hassle to do anything at all.


From the article -

> this paper was not a retrospective study of electronic health records, it was a randomized clinical trial, which is the gold standard. This means that we’ll be forced to immediately throw away our list of other obvious complaints against this paper. Yes, healthier patients may come in the morning more often, but randomization fixes that. Yes, patients with better support systems may come in the morning more often, but randomization fixes that. Yes, maybe morning nurses are fresher and more alert, but, again, randomization fixes that.


> Yes, maybe morning nurses are fresher and more alert, but, again, randomization fixes that

How does randomization fix that?


Exactly. That one clause casts doubt on all the other reasoning; randomization controls for patient selection bias but not diurnal clinic performance.


It would if the clinic is a controlled setting and they can control when the nursing shift begins.


"Forced to throw away" biases is strong. If run well, RCTs surely help manage potential biases, but they do not eliminate them. The slides I saw available on X-itter didn't show a CONSORT diagram (an accounting of patient counts between screening and endpoint) or the balance of patient characteristics between the arms. This seems to be a single-site study, which is a significant caveat IMO. The lack of a substantial mechanistic explanation and the alleged study redesign mid-stream are also caveats. All that said, the reported effect is very large, and I'd like to see more detailed reporting and analysis. If an effect that size is real, it should be findable in some relatively quick retrospective studies (yes, many caveats there, but those could probably provide very large numbers rapidly in support of the RCT).


What does randomization mean in this context, and why does it fix those problems?


https://en.wikipedia.org/wiki/Randomized_controlled_trial

The same thing it means in every context: that (with enough samples) you can control for confounders.


Supposing that patients did better in the morning because, say, the nurses were more alert, no matter how many samples you take you'll find the patients do better in the morning. How does "more samples" help control for confounders rather than just confirm a bias?


> How does "more samples" help control for confounders rather than just confirm a bias?

I think you're correct that randomising patient assignments doesn't control for provider-side confounders. Curious if the study also randomised nursing assignments.


"more samples" is not what controls for confounders. Controlling for confounders is what controls for confounders, which you can only do with enough samples that you can randomize out the effect of the confounder.

Whether or not they controlled for nurse-alertness is something you'd have to read the paper (or assume the researchers are intelligent) for.


I guess I'm asking: how do you randomize out the confounder in this case?


I imagine that that particular confounder is not possible to eliminate via randomization. Perhaps you collect a bunch of data on nurse awakeness--day shift vs night-shift, measuring alertness somehow, or measuring them on other activities known to be influenced by alertness--and then ensure your results don't correlate with that.

There is also the mechanistic side: if you have a plausible mechanism for what's going on, and you can detect indicators for it that don't seem to correlate with nurse alertness, that's a vote against it mattering. Same if you have lots of expertise on the ground and people can attest that nurse alertness doesn't seem to have an effect. There are lots of ways, basically, to reach pretty good confidence about that, but they might not be as rigorous as randomized assignments can be.


Have every dose be observed by another doctor?


Patients in the study are randomly assigned to the early group or the late group. They don't get to schedule their own appointments for whatever time of day they want.


How does this control for the "alert nurses" variable? In that case, patients would do better in the morning, regardless of the patient.


Based on these graphs and the differences in outcomes they show, you are not talking about "alert vs less alert" nurses but about "nurses doing their job vs nurses basically slowly killing dozens of patients".


Why would you assume nurses are scheduled on a 9-5 basis?


Why do you think you're going to poke holes in a research article when you've clearly only just heard of the concept and haven't even read the article?


If I thought I could poke holes in the research, I wouldn't be posting on HN. I'm asking questions to learn because obviously I don't understand :)


Patients are assigned the time for their visits. The time itself is randomized


How many doses does this treatment have? How much time between them?

How many patients dropped out (or requested a schedule change)? Do they count as alive or dead?


Writer of the article here: randomization fixes most of this, but the other commenters are correct that it doesn't fully account for clinic performance (e.g. nurse performance, which does dip during the night according to the literature). I previously thought this wasn't a major issue for clinical trials, since a separate team, independent from the main ward, gives the drugs, but there isn't super strong evidence to support that. I will update the article to admit this!

This said, I am inclined to believe that this isn't a major concern for chronotherapy studies, since I haven't yet seen it raised as a concern in any paper, and the results seem far too strong to blame entirely on 'night nurses make more mistakes'. Fully possible that that is the case! I'm just on the other side of it.


I've always seen mid-day appointments as a luxury for those doing well (at least professionally/financially). If you have to go first thing in the morning, it's often because your boss wants you in relatively early and won't let you take time mid-day. If you're in a position where you can go in at 2 PM and not have to sacrifice sleep to do so, that feels healthier.

Given the highly evident, strongly circadian nature of the body, a hypothesis that it has something to do with that seems highly likely, and certainly worth following up on.


Surely your boss legally has to let you attend a health appointment? Though they might not have to pay you. That seems like a very basic workers' right, the sort of thing you'd have a general strike over if it didn't exist.


The most vulnerable, at least among those who have a job, often have the most draconian restrictions on when and what they can do.

I believe they are treated like robots. Maybe even literally like gears rented by the hour, not even robots.


> mid-day appointments as also a luxury for those doing well

Irrelevant to this study given randomization.


I can schedule appointments whenever I want. I'm an early riser and prefer my appointments first thing in the morning.


The appointment schedule was randomized, so your objection is incorrect.


It's honestly surprising that so many programming languages ignore the needs of "floating point" users. Rust has ints that can't be 0, but no std type for floats that can't be NaN? In some sense, IEEE 754 floats are better than ints here, since NaNs are essentially a hardware-supported, error-tagged enum.

I think it's from a CS education which treats the "naturals" as fundamental, vs. an engineering background where the "reals" are fundamental and matrix math is _essential_, and people live on one side of this fence.


That was true in the past, for a few reasons.

- Floating point operations used to be slow. On early PCs, you didn't even have a floating point unit. AutoCAD on DOS required an FPU, and this was controversial at the time.

- Using the FPU inside system code was a no-no for a long time. Floating point usage inside the Linux kernel is still strongly discouraged.[1] System programmers tended not to think in terms of floating point.

- Attempts to put multidimensional arrays in modern languages tend to result in bikeshedding. If a language has array slices, some people want multidimensional slices. That requires "stride" fields on slices, which slows down slice indexing. Now there are two factions arguing. Rust and Go both churned on this in the early days, and neither came out with a good language-level solution. It's embarrassing that FORTRAN has better multidimensional arrays.

Now that the AI world, the GPU world, and the graphics world all run on floating point arrays, it's time to get past that.

[1] https://www.kernel.org/doc/html/next/core-api/floating-point...
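For context on the third point, the "stride" cost can be sketched in a few lines of Rust (View2D and its field names are made up for illustration; this is roughly what slice-with-stride proposals add to a language):

```rust
/// A strided 2-D view over a flat buffer. A contiguous slice needs
/// only (pointer, length); the extra row_stride field, and the
/// multiply it forces on every index, is what the bikeshedding is about.
struct View2D<'a> {
    data: &'a [f64],
    rows: usize,
    cols: usize,
    row_stride: usize, // elements to skip per row; == cols when contiguous
}

impl<'a> View2D<'a> {
    fn get(&self, i: usize, j: usize) -> f64 {
        assert!(i < self.rows && j < self.cols);
        self.data[i * self.row_stride + j]
    }
}

fn main() {
    let buf: Vec<f64> = (0..12).map(|x| x as f64).collect();
    // A full 3x4 view...
    let full = View2D { data: &buf, rows: 3, cols: 4, row_stride: 4 };
    // ...and a 3x2 slice of its first two columns, sharing the buffer.
    let cols01 = View2D { data: &buf, rows: 3, cols: 2, row_stride: 4 };
    assert_eq!(full.get(1, 2), 6.0);
    assert_eq!(cols01.get(2, 1), 9.0);
    println!("strided views work");
}
```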


> This enables some memory layout optimization. For example, Option<NonZero<u32>> is the same size as u32

NaN doesn't have this optimization because the optimization isn't generic across all possible representations. Trying to make it generic gets quite complex, and floats might have many such representations (e.g. you want NaN to be optimized, someone else needs NaN and thinks infinity works better, etc.). In other words:

Nonzero is primarily for size optimization of Option<number>. If you want sentinels, then write your own wrapper, it’s not hard.
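The layout claim is easy to verify on stable Rust:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // NonZeroU32 reserves the all-zeros bit pattern, so Option can
    // use it as the None encoding: no extra discriminant needed.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());
    // Plain u32 has no spare bit pattern, so Option needs more space.
    assert!(size_of::<Option<u32>>() > size_of::<u32>());
    // f64 has billions of NaN bit patterns, but std defines no
    // NaN-free float type, so Option<f64> also pays for a discriminant.
    assert!(size_of::<Option<f64>>() > size_of::<f64>());
    println!("niche optimization confirmed");
}
```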


I think it's operating on the assumption that the VC donor money and the valley's pivot to defense spending are the VCs getting "ahead" of the problem that "AI" isn't going to produce trillions in _profits_ for investors.

