Maxatar's comments | Hacker News

I only see three fairly superficial paragraphs. Is there more to the article behind a paywall?


I count 8 fairly brief paragraphs in the article. The last sentence is "Altman’s “code red” declaration is a reminder that, despite OpenAI’s unprecedented rise, it remains very much a start-up."

I use a Firefox add-on to Bypass Paywalls.



There are several published methods for reading Atlantic articles. I don't think anyone can make the judgement call for what you should and shouldn't read. Why not give it a shot?

I didn't ask about what I should or shouldn't do and don't really care about your opinion on what I should read. I was surprised by how short and superficial the article was as originally linked and wanted to know if that was due to a paywall blocking the more substantive portion of the article or whether the article is just three brief paragraphs.

The trade-off is intended to make it easier for people to write software. Garbage-collected languages make it easier for people to write memory-safe code at the expense of performance, significantly greater memory usage, and heavy dependencies/runtimes.

These trade-offs are wholly unnecessary if the LLM writes the software in Rust, assuming that in principle the LLM is able to do so.


I also prefer that and in fact I enable "visible whitespace" in every text editor that supports it, such as VS Code.

If you're someone who owns Microsoft, what option would you prefer?

1. Stock price remains the same but revenue doubles.

2. Revenue stays the same, but stock price doubles.

Assuming all else is equal, and recognizing that this is absolutely a simplification: if these were the two choices, it seems a no-brainer that you'd go with option 2. Revenue is a means of increasing the stock price.


That’s like asking whether you’d rather have a pizza with the same diameter but twice the area, or a pizza with twice the diameter but the same area. It makes no sense.

Stock prices were, are, and will always be rough approximations of the NPV of future profits. It’s not perfect, of course, but it’s roughly true.
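As a rough sketch, the textbook discounted-cash-flow version of that claim (nothing here is specific to any company, just the standard formula):

    P \approx \sum_{t=1}^{\infty} \frac{E[\mathrm{CF}_t]}{(1 + r)^t}

where CF_t is the expected cash flow in year t and r is the discount rate.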

Doubling revenue in any remotely sustainable way will have far more than a 2x impact on the stock price because of exponential growth. So yeah, as a stockholder, you'd rather double revenue with a flat stock price, because you'd buy the crap out of the stock when you realize the market has not factored revenue growth into pricing.

Imagining that public companies care about stock price more than revenue is literally like saying a hungry customer cares more about pizza radius than area.


I guess it depends on what kind of investor you are.

If you're holding MS stock long-term, and you plan to gradually shift away from equities as you near retirement and then gradually liquidate your holdings to fund your retirement, juicing the stock in the short term does nothing for you.

If you're holding short-term, then you also need to sell the stock after it gets juiced, so that you can move your capital to not-yet-juiced stocks.


Missed a point above - that for said short-term investor... that strategy doesn't actually work, since a "sell high buy low" strategy on individual stocks is outperformed by just holding ETFs long-term.

So really, which investors does short-term stock juicing benefit? Insider traders, I guess.


A long-term investor would prefer option 1. Most likely, option 2 would shortly be followed by a halving of the stock price. Option 1 lets someone buy shares in a company that is greatly undervalued, which will lead to long-term gains.

I just think you and the other person who replied to me are making the same mistake, which is mixing up the means of accomplishing a goal with the goal itself. This is a mistake I see a lot (especially in software development), where people get so attached to a specific means or method that they end up confusing the method itself with the actual thing that needs to be accomplished.

The goal for an investor is captured in the stock price itself. In programming terms, a corporation is a function whose output is its stock price/market cap, and revenue is but one of a host of inputs that determine that output. Other inputs include operating expenses, whether dividends are issued, future prospects for the company such as entering new markets, etc. You can have beliefs about how those various inputs affect the output, or how they change the output over time (short term vs. long term); that's perfectly fine. But when push comes to shove, the goal is not the revenue, it's not entering a new market, it's not reducing operating expenses... the goal is increasing the stock price (well, technically the market cap).
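To make the analogy concrete, here's a toy sketch; every name and coefficient is invented purely for illustration, not a real valuation model:

    #include <iostream>

    // Hypothetical toy model: the corporation as a function from many
    // inputs to a single output, the market cap.
    struct CompanyInputs {
        double revenue;         // one input among many
        double operatingCosts;
        double growthOutlook;   // new markets, future prospects, etc.
    };

    double marketCap(const CompanyInputs& in) {
        double earnings = in.revenue - in.operatingCosts;
        return earnings * (10.0 + 20.0 * in.growthOutlook);  // toy multiple
    }

    int main() {
        // Doubling revenue matters only insofar as it moves the output:
        std::cout << marketCap({100.0, 80.0, 0.5}) << "\n";   // 400
        std::cout << marketCap({200.0, 160.0, 0.5}) << "\n";  // 800
    }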


You should look up the concept of value investing. If you can buy shares in an undervalued company, do so.

That's precisely the argument I'm making. A company's stock price can be undervalued, which exactly means that the stock price can increase without any change in the company's revenue or profitability. The stock price can increase strictly because the actual value of the company has not yet been fully realized without any material change necessary on the part of the company.

As an investor, that's the ultimate goal of your investment.


Hyperliquid and similar exchanges aren't decentralized. That is their long-term goal, but they are very far from achieving it.

The few actual decentralized exchanges are too slow and expensive.


There are some exchanges that are more decentralised (and older) than Hyperliquid. Hyperliquid, while the most popular one, is not the only horse in town.

E.g. GMX on Arbitrum chain is no longer prohibitively expensive.

Left some comments here https://news.ycombinator.com/item?id=46172450


I mean, as soon as synchronisation is required in any system (blockchain, distributed SaaS, even peer-to-peer sharing), decentralisation fails hard.

That's one of the sticking points I have with the /idea/ of the technology.


Ethereum and similar chains run arbitrary computation on-chain. You can make a futures exchange on Ethereum (or Solana, etc). However, the fees for doing so are very large, and confirmation times are very long, like any other on-chain transaction.

What I am loving about this comment, and the downvotes, is the idea that blockchains can escape things like basic laws of the universe.

> confirmation times are very long, like any other on-chain transaction

Yes. Synchronisation is where everything breaks down, because you have to get everyone to agree on the new state.

edit: Sorry, not everyone, but a consensus, and that consensus is then what everyone agrees is the state.


> HyperCore includes fully onchain perpetual futures and spot order books. Every order, cancel, trade, and liquidation happens transparently with one-block finality inherited from HyperBFT. HyperCore currently supports 200k orders / second, with throughput constantly improving as the node software is further optimized.

Key part:

> fully onchain perpetual futures and spot order books


Being on a blockchain and being decentralized are two different things. The HyperCore client isn't even open source.

That's just patently false.

> Importantly, HyperCore does not rely on the crutch of off-chain order books. A core design principle is full decentralization with one consistent order of transactions achieved through HyperBFT consensus.


The basis of decentralized software is open-source. Otherwise a centralized authority can just push an update to, for instance, blacklist addresses.

https://github.com/hyperliquid-dex/node

"For lowest latency, run the node in Tokyo, Japan."

Decentralization means running all of the closed-source nodes in the same AWS datacenter!


And in fact they did just this when their vaults started bleeding money on an unfavourable position (JellyJelly). They handed out a closed source binary and the validators ran it immediately, closing out the market at an arbitrary price.

The basis of decentralized software is an open protocol. Then it doesn't matter that somebody runs closed source while somebody else runs open source.

As an operator you don't even get the real validator/node binary directly, nor can you control which version to run.

All you can do is run their visor, and they push out whatever proprietary blob they produce and restart "your" nodes at their command.


We use back-testing at my firm for two primary reasons: one, as a way to verify correctness, and two, as a way to assess risk.

We do not use it as a way to determine profitability.


This is interesting because I'm not immediately sure how you verify correctness and assess risk without also addressing profitability.

By assessing risk, is that just checking that it doesn't dump all your money and that you can at least maintain a stable investment cache?

Are you willing to say more about correctness? Is it the correctness of the models, of the software, or something else?


Profitability is not in any way considered a property of the correctness of an algorithm. An algorithm can be profitable and incorrect, and an algorithm can be correct but not profitable.

Correctness has to do with whether the algorithm performed the intended actions in response to the inputs/events provided to it, nothing more. For the most part the correctness of an algorithm can be tested the same way most software is tested, i.e. unit tests, but it's also worth testing the algorithm with live data/back-testing it, since it's not feasible to cover every possible scenario in giant unit tests; you can get pretty good coverage of a variety of real-world scenarios by back-testing.
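A minimal sketch of what "correctness, not profitability" looks like in a test; the strategy and its rule are entirely hypothetical:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Hypothetical toy strategy: submit a buy whenever price drops more
    // than 5% tick-over-tick. The test verifies the intended action was
    // taken; it says nothing about whether the rule makes money.
    std::vector<int> ordersFor(const std::vector<double>& prices) {
        std::vector<int> orders;
        for (std::size_t i = 1; i < prices.size(); ++i)
            if (prices[i] < prices[i - 1] * 0.95)
                orders.push_back(1);  // buy one unit
        return orders;
    }

    int main() {
        // Replayed price path with one qualifying 10% drop: one order.
        assert(ordersFor({100.0, 90.0, 91.0}).size() == 1);
        // No qualifying drop: no orders. Correct, profitable or not.
        assert(ordersFor({100.0, 101.0, 102.0}).empty());
    }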


>Your order can legally be “front run” by the lead or designated market maker who receives priority trade matching, bypassing the normal FIFO queue. Not all exchanges do this.

Unless you're thinking of some obscure exchange in a tiny market, this is just untrue in the U.S., Europe, Canada, and APAC. There are no exchanges where market makers get any kind of priority to bypass the FIFO queue.
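For context, price-time priority ("the FIFO queue") at a given price level is literally just arrival order. A minimal sketch, simplified to a single price level with hypothetical participants:

    #include <algorithm>
    #include <deque>
    #include <iostream>
    #include <string>

    // Resting orders fill strictly in arrival order; no participant
    // class (market maker or otherwise) gets to jump the queue.
    struct Order { std::string owner; int qty; };

    int main() {
        std::deque<Order> level;             // FIFO queue at one price
        level.push_back({"first-in", 100});
        level.push_back({"second-in", 50});
        int incoming = 120;                  // aggressive order to match
        while (incoming > 0 && !level.empty()) {
            Order& o = level.front();
            int fill = std::min(o.qty, incoming);
            std::cout << o.owner << " fills " << fill << "\n";
            incoming -= fill;
            o.qty -= fill;
            if (o.qty == 0) level.pop_front();
        }
    }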


Anyone can be a market maker on a trade; just take the other side of an offer. All they really do is make a market with you and then make a market with the other side and pocket the change. It's good for market liquidity.

> There are no exchanges where market makers get any kind of priority to bypass the FIFO queue.

Nope, several large, active, and liquid markets in the US.

Legally it’s not named “bypass the FIFO queue”. That would be dumb.

In practice, it goes by politically correct names such as “designated market maker fill” or “institutional order prioritization” or “leveling round”.


I can tell you, as someone who is a designated market maker on several ETFs in the U.S., that none of this exists as a means of giving market makers priority fills. You're taking existing terms and misusing them. For example, institutional order prioritization is used as a wash-trade prevention mechanism, not as a way for designated market makers to get some kind of fill preference. Leveling rounds also do not involve exchanges; they're an internal tool used by a broker's OMS to rebalance residuals so accounts end up with the intended allocation, or to clean up odd-lot/mixed-lot leftovers.

I am getting the feeling you either are not actually a quant, or you were a quant and just misheard and confused a lot of things together, but one thing is for sure... your claim that market makers get some kind of priority fills is factually incorrect.


++1

thanks


I think he was referring to the language specification, not a specific compiler.

But that is also wrong; as per the article, C++26 got some improvements in a hardening profile.

I see that C++26 has some incredibly obscure changes in the behavior of certain program constructs, but this does not mean that these changes are improvements.

Just reviewing the actual hardening of the standard library: it looks like in C++26 an implementation may be considered hardened, in which case, if certain preconditions don't hold, a contract violation triggers an assertion, which in turn triggers a contract-violation handler, which may or may not result in a predictable outcome depending on which of four possible "evaluation semantics" is in effect.
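To make that concrete, here's a sketch using the C++26 contracts syntax from P2900; compiler support is still experimental and the function is hypothetical. Which of the four behaviors a violation produces is exactly the "evaluation semantic" chosen at build time:

    #include <cstddef>
    #include <span>

    // Hypothetical hardened accessor with a C++26 precondition assertion.
    // On violation, the outcome depends on the evaluation semantic this
    // translation unit was compiled with:
    //   ignore        - the check is never evaluated
    //   observe       - the violation handler runs, execution continues
    //   enforce       - the violation handler runs, then the program terminates
    //   quick_enforce - the program terminates immediately
    double at(std::span<const double> xs, std::size_t i)
        pre(i < xs.size())
    {
        return xs[i];
    }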

Oh, and get this... if two different translation units have different evaluation semantics, a situation known as "mixed mode", then you're shit out of luck with respect to any safety guarantees, as per this document [1], which says that mixed-mode applications shall choose arbitrarily among the set of evaluation semantics; and as it turns out, the standard library treats one of the evaluation semantics (observe) as undefined behavior. So unless you can get all third-party dependencies to use the same evaluation semantic, you have no way to ensure that your application is actually hardened.

So is C++26 adding changes? Yes, it's adding changes. Are these changes actual improvements? It's way too early to tell, but I do know one thing... it's not at all uncommon for C++ to introduce new features that substitute one set of problems for a new set of problems. There's literally a 300-page book that goes over 20 distinct forms of initializing an object [2], and many of these forms exist to plug problems introduced by previous forms of initialization! For all we know the same thing might be happening here, where the classical "naive" undefined behavior is alleviated but in the process C++ introduces an entire new class of incredibly difficult-to-diagnose issues. And lest you think I'm just spreading FUD, consider this quote from a paper titled "C++26 Contracts are not a good fit for standard library hardening" [3], submitted to the C++ committee regarding this upcoming change, arguing that it risks giving nothing more than the illusion of safety:

>This can result in violations of hardened preconditions being undefined behaviour, rather than guaranteed to be diagnosed, which defeats the purpose of using a hardened implementation.

[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p29...

[2] https://www.amazon.ca/dp/B0BW38DDBK?language=en_US&linkCode=...

[3] https://isocpp.org/files/papers/P3878R0.html


I believe there were some changes in the November C++ committee meeting that (ostensibly) alleviate some of the contracts/hardening issues. In particular:

- P3878 [0] was adopted, so the standard now forbids "observe" semantics for hardened precondition violations. To be fair, the paper doesn't explicitly say how this change interacts with mixed mode contract semantics, and I'm not familiar enough with what's going on to fill in the gaps myself.

- It appears there is interest in adopting one of the changes proposed in D3911 [1], which introduces a way to mark contracts non-ignorable (example syntax is `pre!()` for non-ignorable vs. the current `pre()` for ignorable). A more concrete proposal will be discussed in the winter meeting, so this particular bit isn't set in stone yet.

[0]: https://isocpp.org/files/papers/P3878R1.html

[1]: https://isocpp.org/files/papers/D3911R0.html


Mixed mode is about the same function being compiled with different evaluation semantics in different TUs, and it is legit. The only case they are wondering about is how to deal with inlined functions, and they suggest ABI extensions to support it at link time. None of what you said is an issue.

> The possibility to have a well-formed program in which the same function was compiled with different evaluation semantics in different translation units (colloquially called “mixed mode”) raises the question of which evaluation semantic will apply when that function is inline but is not actually inlined by the compiler and is then invoked. The answer is simply that we will get one of the evaluation semantics with which we compiled.

> For use cases where users require strong guarantees about the evaluation semantics that will apply to inline functions, compiler vendors can add the appropriate information about the evaluation semantic as an ABI extension so that link-time scripts can select a preferred inline definition of the function based on the configuration of those definitions.


Not sure what you mean by the term "legit".

The entirety of the STL is inlined so it's always compiled in every single translation unit, including the translation units of third party dependencies.

Also, it's not just me saying this; it's literally the authors of the MSVC standard library and the GCC standard library pointing out these issues [1]:

[1] https://isocpp.org/files/papers/P3878R0.html


Legit as in allowed and not an issue as you're trying to convey, ok? I read the paper if that wasn't already obvious from my comment. What you said is factually incorrect.

Not sure I understand what point you're trying to dispute. It's not obvious at all that you read either my post or the paper I posted, authored by the main contributors to MSVC and GCC, about the issues mixed-mode applications present to the implementation of the standard library, given that you haven't presented any defense of your position that addresses these issues. You seem to think that just declaring something "legit" and retorting "you are incorrect" is sufficient justification.

If this is the extent of your understanding, it's a fairly good indication you do not have sufficient background on this topic and may be expressing a very strong opinion out of ignorance. It's not at all uncommon that those with the most superficial understanding of a subject express the strongest views on it [1].

Doing a cursory review of some of your recent posts, it looks like this is a common habit of yours.

[1] https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


I have literally copy-pasted the fragments from the paper you're referring to which invalidate your points. How is that not obvious? Did you read the paper yourself, or are you just holding strong opinions, as you usually do whenever there is an opportunity for a backlash against C++? I'm glad you're familiar with the Dunning-Kruger effect; this means there is some hope for you.

The implementations of hardening in libc++ and libstdc++ are available now and are straightforward to use.

https://libcxx.llvm.org/Hardening.html

https://gcc.gnu.org/wiki/LibstdcxxDebugMode (the debug mode was already available for longer; the official hardening might take this over or do something else)
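As a usage sketch, with libc++ the hardening mode is selected by a macro at build time (macro names per the LLVM docs linked above):

    // Build with clang + libc++, e.g.:
    //   clang++ -std=c++23 -stdlib=libc++ \
    //     -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_EXTENSIVE demo.cpp
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        // Out-of-bounds access: with hardening enabled, libc++ traps with
        // a diagnostic here instead of silently invoking undefined behavior.
        return v[3];
    }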


Violation of preconditions has always been undefined behavior; this isn't new.

The problem is that violation of preconditions being UB in a hardened implementation sort of defeats the purpose of using the hardened implementation in the first place!

This was acknowledged as a bug [0] and fixed in the draft C++26 standard pretty recently.

[0]: https://isocpp.org/files/papers/P3878R1.html


>A hardened implementation that uses 'observe' is not hardened

The proposal simply included a provision to turn off hardening, nothing else. Traditionally these checks were under #ifndef NDEBUG.


> The proposal simply included a provision to turn off hardening, nothing else.

(Guessing "the proposal" refers to the hardening proposal?)

I don't think that is correct, since the authors of the hardening proposal agreed that allowing UB for hardened precondition violations was a mistake and that P3878 is a bug fix to their proposal. Presumably the intended way to turn off hardening would be to just... not enable the hardened implementation in the first place?


Using #ifndef NDEBUG in templates is one of the leading causes of one-definition rule violations.

At least traditionally it was common not to mix debug builds with optimized builds between dependencies, but now, with contracts introducing yet another set of orthogonal configurations, it will be that much harder to ensure that all dependencies use the same evaluation semantic.
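A minimal sketch of that failure mode (hypothetical header, included from more than one TU):

    // checked.h
    #pragma once
    #include <cassert>
    #include <cstddef>
    #include <vector>

    template <typename T>
    T& checked_at(std::vector<T>& v, std::size_t i) {
        assert(i < v.size());  // compiled out under -DNDEBUG
        return v[i];
    }
    // If one TU is built with -DNDEBUG and another without, the program
    // ends up with two differing definitions of checked_at<int>: an ODR
    // violation, which is itself undefined behavior.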


Just a cursory search on GitHub should put this idea to rest. You can do a code search for std::optional and .value() and see that only about 20% of uses of std::optional make use of .value(). The overwhelming majority of uses of std::optional use * to access the value.
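The difference between the two accessors in a nutshell (the empty case is the interesting one):

    #include <optional>

    int sum(std::optional<int> o) {
        int a = o.value();  // checked: throws std::bad_optional_access if empty
        int b = *o;         // unchecked: undefined behavior if empty, which is
                            // exactly what hardened library modes add a check for
        return a + b;
    }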

Okay, but now you're explaining that correctness is not necessarily the only reasonable goal. It's possible to sacrifice some degree of correctness for enormous gains in performance, because absolute correctness comes at a cost that might simply not be worth it.
