judofyr's comments

Very cool project! Always happy to see more work around static analysis.

However, looking at the recent commits it doesn't quite look like the most solid foundation: https://github.com/shuaimu/rusty-cpp/commit/480491121ef9efec...

    fn is_interior_mutability_type(type_name: &str) -> bool {
        type_name.starts_with("rusty::Cell<") ||
        type_name.starts_with("Cell<") ||
        type_name.starts_with("rusty::RefCell<") ||
        type_name.starts_with("RefCell<") ||
        // Also check for std::atomic which has interior mutability
        type_name.starts_with("std::atomic<") ||
        type_name.starts_with("atomic<")
    }
… which then 30 minutes later is being removed again because it turns out to be completely dead code: https://github.com/shuaimu/rusty-cpp/commit/84aae5eff72bb450...

There's also quite a lot of dead code. All of these warnings are about unused variables, functions, structs, and fields:

    warning: `rusty-cpp` (bin "rusty-cpp-checker") generated 90 warnings (44 duplicates)

    Generated with [Claude Code](https://claude.ai/code)
    via [Happy](https://happy.engineering)

    Co-Authored-By: Claude <[email protected]>
    Co-Authored-By: Happy <[email protected]>
This isn't just vibe code. It's mobile vibe code.

No logic, no coherence, just inconsistency.

---

Note: This is an experimental shitpost. Fork it. Share it. Use it. 🚀


This whole thing feels like clever marketing. Why would the mobile app be credited?

https://github.com/shuaimu/rusty-cpp/blob/3707c09f5ff42bc5f6...

It also looks like it's skipping some lifetime checks in some sketchy way


> …which then 30 minutes later is being removed again because it turns out to be completely dead code

I'm not sure if it's a good or a bad thing that people expect the robots to produce proper code on the first attempt.


Just looking at the code excerpt makes it clear the code must be quite low quality

> made with AI

Yours and other similar comments are disproportionately rude given that the author was very upfront about their methodology.

And I don't think it's constructive to cherrypick commits in this context.

> I even started trying out the fully autonomous coding: instead of examining its every action, I just write a TODO list with many tasks, and ask it to finish the tasks one by one.

> I never had to fully understand the code. What I had to do is: I asked it to give me a plan of changes before implementation, it gave me a few options, and then I chose the option that seemed most reasonable to me. Remember, I’m not an expert on this. I think most of the time, anybody who has taken some undergraduate compiler class would probably make the right choice.

The idea has merits. Take it as a PoC.


I don't see anything that is the slightest bit "rude" in the comment you're replying to. It actually begins with enthusiastic praise of the project and its goals.

I don't understand why you feel it's not "constructive" to review the quality of code of a project. Are people supposed to just blindly believe in the functionality without peeking under the hood?


Let's agree to disagree then.

Initial praise does not preclude rudeness. And complaining about a commit that was undone 30 minutes later is not only pointless in the presented context, it's a cheap attempt at insulting.

> Are people supposed to just blindly believe in the functionality without peeking under the hood

False dichotomy. No one said that. And we both know this is not the way regardless of the codebase.

I think the idea has merits and given the honesty of the post, it's rather more productive to comment on it instead.


> The idea has merits. Take it as a PoC.

Does it? There have been a gazillion such static analyzers. They all do one of two things: ignore the hard parts or tackle the hard parts. If you ignore the hard parts then your tool is useless. If you tackle the hard parts then your tool is orders of magnitude more complex and it still struggles to work well for real-world projects. This one is in the former category.

The article says "And since the static analysis is mostly statically scoped, it doesn’t require heavy cross-file analysis."

Oops. Suddenly you either handle aliasing soundly and your tool is plagued with zillions of false positives or you handle aliasing unsoundly and... you aren't getting what makes rust different. Separate compilation has been a problem for C++ analyzers for ages. Just declaring it to not actually be a big deal is a huge red flag.

Heck, even just approaching this as an AST-level analysis is going to struggle when you encounter basic things like templates.

The article says this: "Everybody tries to fix the language, but nobody tries to just analyze it." This is just flagrantly false. What's bizarre is that there are people at Stony Brook who have done this. Also, introducing new syntax (even if they are annotations) is more-or-less the same thing as "fixing the language" except that there is almost no chance that your dependencies (including the standard library) are annotated in the way you need.


Can you show an actual minimal C program which has this problem? I’m trying to follow along here, but it’s very hard for me to understand the exact scenario you’re talking about.


I think at this point it's reasonable to conclude that quotemstr does not have a legitimate concern until a program demonstrating the issue can be presented.


Is there a specific reason to store the key + value as a `uint64_t` instead of just using a struct like this?

    struct slot {
      uint32_t key;
      uint32_t value;
    };


The alignment constraint is different, which lets them load key and value together as a single 64-bit integer and compare it to 0 (the empty slot).

You could work around that with a union or casts with explicit alignment constraints, but this is the shortest way to express that.


In that case you can use bit fields in a union:

    union slot {
        uint64_t keyvalue;
        struct {
            uint64_t key: 32;
            uint64_t value: 32;
        };
    };
Since both members of the union are effectively the exact same type, there is no issue. C99: "If the member used to access the contents of a union is not the same as the member last used to store a value, the object representation of the value that was stored is reinterpreted as an object representation of the new type". Meaning, you can initialise keyvalue and that will initialise both key and value, so writing `union slot s = {0}` initialises everything to 0.

One issue is that the exact layout of bit fields is implementation-defined, so if you absolutely need to know where key and value are in memory, you will have to read GCC's manual (or just experiment). Another is that you cannot take the address of key or value individually, but if your code was already using uint64_t, you probably don't need to.

Edit: Note also that you can cast a pointer to slot to a pointer to uint64_t and that does not break strict aliasing rules.


You can probably get away with just a union between a 64-bit integer and two 32-bit integers.


C has finally gained `alignas` so you can avoid the union hack, or you could just rely on malloc to always return the maximum alignment anyway.


Maybe trying to avoid struct padding? Although having done a quick test on {arm64, amd64} {gcc, clang}, they all give the same `sizeof` for a struct with 2x`uint32_t`, a struct with a single `uint64_t`, or a bare `uint64_t`.


In any struct where all fields have the same size (and no field type requires higher alignment than its size), it is guaranteed on every (relevant) ABI that there are no padding bytes.


TIL! Thanks!


No real reason. Slightly terser to compare with zero to find an empty slot.


Or better, just store keys and values in separate arrays, so you can have compact cache lines of just keys when probing.


I think this is a bit unfair. The carpenters are (1) living in a world where there’s an extreme focus on delivering as quickly as possible, (2) being presented with a tool which is promised by prominent figures to be amazing, and (3) given the tool at a low cost due to it being subsidized.

And yet, we’re not supposed to criticize the tool or its makers? Clearly there are more problems in this world than «lazy carpenters»?


Yes, that's what it means to be a professional, you take responsibility for the quality of your work.


Well, then what does this say of the LLM engineers at literally any AI company in existence if they are delivering AI that is unreliable? Surely, they must take responsibility for the quality of their work and not blame it on something else.


I feel like what "unreliable" means depends on how well you understand LLMs. I use them in my professional work, and they're reliable in the sense that I'm always getting tokens back from them; I don't think my local models have failed even once at doing just that. And this is the product that is being sold.

Some people take that to mean that responses from LLMs are (by human standards) "always correct" and "based on knowledge", while this is a misunderstanding about how LLMs work. They don't know "correct" nor do they have "knowledge", they have tokens, that come after tokens, and that's about it.


> they're reliable in terms of I'm always getting tokens back from them

This is not what you are being sold though. They are not selling you "tokens". Check their marketing articles and you will not see the word "token" or any synonym in any of their headings or subheadings. You are being sold these abilities:

- “Generate reports, draft emails, summarize meetings, and complete projects.”

- “Automate repetitive tasks, like converting screenshots or dashboards into presentations … rearranging meetings … updating spreadsheets with new financial data while retaining the same formatting.”

- "Support-type automation: e.g. customer support agents that can summarize incoming messages, detect sentiment, route tickets to the right team."

- "For enterprise workflows: via Gemini Enterprise — allowing firms to connect internal data sources (e.g. CRM, BI, SharePoint, Salesforce, SAP) and build custom AI agents that can: answer complex questions, carry out tasks, iterate deliverables — effectively automating internal processes."

These are taken straight from their websites. The idea that you are JUST being sold tokens is as hilariously fictional as any company selling you their app was actually just selling you patterns of pixels on your screen.


it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Lawyers are ruining their careers by citing hallucinated cases. Researchers are writing papers with hallucinated references. Programmers are taking down production by not verifying AI code.

Humans were made to do things, not to verify things. Verifying something is 10x harder than doing it right. AI in the hands of humans is a foot rocket launcher.


> it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Again, true for most things. A lot of people are terrible drivers, terrible judges of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

I much rather push back on shoddy work no matter what source. I don't care if the citations are from a robot or a human, if they suck, then you suck, because you're presenting this as your work. I don't care if your paralegal actually wrote the document, be responsible for the work you supposedly do.

> Humans were made to do things, not to verify things.

I'm glad you seemingly have some grand idea of what humans were meant to do, I certainly wouldn't claim I do so, but I'm also not religious. For me, humans do what humans do, and while we didn't used to mostly sit down and consume so much food and other things, now we do.


>A lot of people are terrible drivers, terrible judge of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

Uhh, yes??? We have completely reshaped our cities so that cars can thrive in them at the expense of people. We have laws and exams and enforcement all to prevent cars from being driven by irresponsible people.

And most drugs are literally illegal! The ones that aren't are highly regulated!

If your argument is that AI is like heroin then I agree, let’s ban it and arrest anyone making it.


People need to be responsible for things they put their name on. End of story. No AI company claims their models are perfect and don’t hallucinate. But paper authors should at least verify every single character they submit.


>No AI company claims their models are perfect and don’t hallucinate

You can't have it both ways. Either AIs are worth billions BECAUSE they can run mostly unsupervised or they are not. This is exactly like the AI driving system in Autopilot, sold as autonomous but reality doesn't live up to it.


Yes, but they don’t. So clearly AI is a foot gun. What are we doing about it?


It's a shame the slop generators don't ever have to take responsibility for the trash they've produced.


That's beside the point. While there may be many reasonable critiques of AI, none of them reduce the responsibilities of the scientist.


Yeah this is a prime example of what I'm talking about. AI's produce trash and it's everyone else's problem to deal with.


Yes, it's the scientist's problem to deal with it - that's the choice they made when they decided to use AI for their work. Again, this is what responsibility means.


This inspires me to make horrible products and shift the blame to the end user for the product being horrible in the first place. I can't take any blame for anything because I didn't force them to use it.


>While there may be many reasonable critiques of AI

But you just said we weren’t supposed to criticize the purveyors of AI or the tools themselves.


No, I merely said that the scientist is the one responsible for the quality of their own work. Any critiques you may have for the tools which they use don't lessen this responsibility.


>No, I merely said that the scientist is the one responsible for the quality of their own work.

No, you expressed unqualified agreement with a comment containing

“And yet, we’re not supposed to criticize the tool or its makers?”

>Any critiques you may have for the tools which they use don't lessen this responsibility.

People don’t exist or act in a vacuum. That a scientist is responsible for the quality of their work doesn’t mean that a spectrometer manufacturer (one that advertises specs its machines can’t match, and induces universities through discounts and/or dubious advertising claims to push their labs to replace their existing spectrometers with new ones that have many bizarre and unexpected behaviors, including sometimes just fabricating spurious readings) has made no contribution to the problem of bad results.


You can criticize the tool or its makers, but not as a means to lessen the responsibility of the professional using it (the rest of the quoted comment). I agree with the GP, it's not a valid excuse for the scientist's poor quality of work.


I just substantially edited the comment you replied to.


The scientist has (at the very least) a basic responsibility to perform due diligence. We can argue back and forth over what constitutes appropriate due diligence, but, with regard to the scientist under discussion, I think we'd be better suited discussing what constitutes negligence.


The entire thread is people missing this simple point.


I use those LLM "deep research" modes every now and then. They can be useful for some use cases. I'd never think to freaking paste it into a paper and submit it or publish it without checking; that boggles the mind.

The problem is that a researcher who does that is almost guaranteed to be careless about other things too. So the problem isn't just the LLM, or even the citations, but the ambient level of acceptable mediocrity.


> And yet, we’re not supposed to criticize the tool or its makers?

Exactly, they're not forcing anyone to use these things, though sometimes others (their managers/bosses) force them to. Yet it's their responsibility to choose the right tool for the right problem, like any other professional.

If a carpenter shows up to put up a roof yet their hammer or nail-gun can't actually put in nails, who'd you blame: the tool, the toolmaker, or the carpenter?


> If a carpenter shows up to put up a roof yet their hammer or nail-gun can't actually put in nails, who'd you blame: the tool, the toolmaker, or the carpenter?

I would be unhappy with the carpenter, yes. But if the toolmaker was constantly over-promising (lying?), lobbying with governments, pushing their tools into the hands of carpenters, never taking responsibility, then I would also criticize the toolmaker. It’s also a toolmaker’s responsibility to be honest about what the tool should be used for.

I think it’s a bit too simplistic to say «AI is not the problem» with the current state of the industry.


If I hired a carpenter, he did a bad job, and he starts to blame the toolmaker because they lobby the government and over-promised what that hammer could do, I'd still put the blame on the carpenter. It's his tools, I couldn't give less of a damn why he got them, I trust him to be a professional, and if he falls for some scam or over-promised hammers, that means he did a bad job.

Just like as a software developer, you cannot blame Amazon because your platform is down if you chose to host all of your platform there. You made that choice, you stand for the consequences; pushing the blame onto the ones who are providing you with the tooling is the action of someone weak who fails to realize their own responsibilities. Professionals take responsibility for every choice they make, not just the good ones.

> I think it’s a bit too simplistic to say «AI is not the problem» with the current state of the industry.

Agree, and I wouldn't say anything like that either, which makes it a bit strange to include a reply to something no one in this comment thread seems to have said.


That’s not what is happening with AI companies, and you damn well know it.


OpenAI and Anthropic at least are both pretty clear about the fact that you need to check the output:

https://openai.com/policies/row-terms-of-use/

https://www.anthropic.com/legal/aup

OpenAI:

> When you use our Services you understand and agree:

Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice. You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services. You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them. Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.

Anthropic:

> When using our products or services to provide advice, recommendations, or in subjective decision-making directly affecting individuals or consumers, a qualified professional in that field must review the content or decision prior to dissemination or finalization. You or your organization are responsible for the accuracy and appropriateness of that information.

So I don't think we can say they are lying.

A poor workman blames his tools. So please take responsibility for what you deliver. And if the result is bad, you can learn from it. That doesn't have to mean not using AI, but it definitely means that you need to fact-check more thoroughly.


I’m sorry, but this is such a terribly unscientific approach. You want to make a case for your hypothesis? Follow a structured approach with real arguments.

Saying «I know that correlation doesn’t imply causation», but then only demonstrating correlation isn’t really bringing this discourse any further.


Would you have any examples of convincing arguments to see if I can improve it?

Appreciate that extending the date range of data would improve the claims, as would adding more sources - but anything else?


I'd say stop trying to sell and just lay out the data correctly. There are lots of factors at work here.

https://en.wikipedia.org/wiki/Loneliness_epidemic#Causes_of_...


It may be unscientific, but it starts a conversation (an important one IMO), that will hopefully lead to real study and corrective measures to get society back on track.


Blocks are fundamentally different from functions due to the control flow: `return` inside a block will return the outer method, not the block. `break` stops the whole method that was invoked.

This adds some complexity to the language, but it means that it’s far more expressive. In Ruby you can, with nothing but Array#each, write idiomatic code which reads very similarly to other traditional languages with loops and statements.


More specifically, blocks (and procs) return from the defining scope. This is just a minor clarification, but it matters, because if you pass a block down from where it is defined and the block calls "return", it will still not just exit from the method where it was called, but from the method where it was defined.

This can sometimes be useful: A calling method can pass down a block or proc to control if/when it wants an early return.

Basically Ruby has two types of closures:

* A return in a lambda returns to the calling scope. So basically, it returns to after where the "call" method is invoked.

* A return in a block or a proc returns from the scope in which it was defined (this is also why you get a LocalJumpError if you return a block or a proc, but not a lambda, to the calling method and invoke it there: the defining scope is gone).

When you name a block, you get a Proc object, same as you get when you take the value of a lambda or proc.

In practice, that blocks in MRI are not Proc objects already is just an implementation detail/optimisation. I have a long-standing hobby project to write a Ruby compiler, and there a "proc" and a bare block are implemented identically in the backend.
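A minimal script illustrating the two closure types described above (the method names here are made up for the example):

```ruby
# A lambda's `return` returns to its caller; a proc's `return` returns
# from the method where the proc was defined.

def try_lambda
  l = lambda { return 10 }
  l.call
  "after lambda"   # reached: the lambda only returned to its caller
end

def try_proc
  p = proc { return 10 }
  p.call
  "after proc"     # never reached: the proc returns from try_proc itself
end

puts try_lambda  # => "after lambda"
puts try_proc    # => 10
```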


You are right on return (use next in a block), but break uses block scope.


Maybe I explained it a bit imprecisely. I was trying to explain the following behavior:

    def foo
      p 1
      yield
      p 2
    end

    foo { break }
This only prints "1" because the break stops the execution of the invoked method (foo).


WAT? I'm a 12+ years Ruby developer and I didn't know this.


> This has massive implications. SEC means low latency, because nodes don't need to coordinate to handle reads and writes. It means incredible fault tolerance - every single node in the system bar one could simultaneously crash, and reads and writes could still happen normally. And it means nodes still function properly if they're offline or split from the network for arbitrary time periods.

Well, this all depends on the definition of «function properly». Convergence ensures that everyone observes the same state, not that it’s a useful state. For instance, the Imploding Hashmap is a very easy CRDT to implement. The rule is that when there are concurrent changes to the same key, the final value becomes null. This gives Strong Eventual Consistency, but it isn’t really a very useful data structure. All the data would just disappear!
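A toy sketch of the Imploding Hashmap, ignoring version vectors and treating any differing value for a key as a concurrent write (all names are illustrative):

```python
# Toy "Imploding Hashmap" merge: concurrent conflicting writes to a key
# collapse to None. The merge is commutative, so replicas converge -- but
# convergence alone doesn't make the result useful.

def merge(a: dict, b: dict) -> dict:
    out = {}
    for key in a.keys() | b.keys():
        if key in a and key in b and a[key] != b[key]:
            out[key] = None  # conflict: the value "implodes"
        else:
            out[key] = a[key] if key in a else b[key]
    return out

r1 = {"name": "alice", "city": "oslo"}
r2 = {"name": "bob", "city": "oslo"}
print(merge(r1, r2) == {"name": None, "city": "oslo"})  # True: converged, data gone
```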

So yes, CRDT is a massively useful property which we should strive for, but it’s not going to magically solve all the end-user problems.


Yeah; this has been a known thing for at least the 15 years I’ve been working in the collaborative editing space. Strong eventual consistency isn’t enough for a system to be any good. We also need systems to “preserve user intent” - whatever that means.

One simple answer to this problem that works almost all the time is to just have a “conflict” state. If two peers concurrently overwrite the same field with different values, they can converge by marking the field as having two conflicting values. The next time a read event happens, that’s what the application gets. And the user can decide how the conflict should be resolved.

In live, realtime collaborative editing situations, I think the system just picking something is often fine. The users will see it and fix it if need be. It’s really just when merging long running branches that you can get in hot water. But again, I think a lot of the time, punting to the user is a fine fallback for most applications.
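One way to sketch the “conflict” state above is a multi-value register, which keeps every write not causally dominated by another (a rough illustration, not a production CRDT; the version-vector encoding is an assumption):

```python
# Hedged sketch of a multi-value (MV) register: each write carries a version
# vector; merge keeps every write not dominated by another, so concurrent
# writes survive side by side as an explicit conflict set for the app.

def dominates(a: dict, b: dict) -> bool:
    """True if version vector a strictly dominates b."""
    keys = set(a) | set(b)
    return all(a.get(k, 0) >= b.get(k, 0) for k in keys) and a != b

def merge(reg_a: list, reg_b: list) -> list:
    # A register is a list of (version_vector, value) pairs.
    combined = reg_a + reg_b
    out = []
    for vv, val in combined:
        if not any(dominates(other, vv) for other, _ in combined):
            if (vv, val) not in out:  # drop exact duplicates
                out.append((vv, val))
    return out

# Two peers concurrently write different values to the same field:
a = [({"p1": 1}, "alice")]
b = [({"p2": 1}, "bob")]
print(sorted(v for _, v in merge(a, b)))  # ['alice', 'bob'] -- a conflict set
```

A later write whose version vector dominates both survivors would then collapse the conflict back to a single value.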


Good point. The reality is that conflicts should often be handled in the business logic, not in the consensus logic, but not universally. For the former, having the conflict state be the consensus state is ideal, but you do risk polluting your upstream application with a bunch of unnecessary conflict handling for trivial state diffs.

With CRDT, you have local consistency and strong convergence, but no guarantee of semantic convergence (i.e. user intent). I would still hire OP, but I would definitely keep him in the backend and away from UX


My point is a good crdt should let you tune that on a per field / per instance basis. Sometimes you want automatic “good enough” merging. Sometimes you want user intervention. When you want each is not obvious at the moment. We haven’t really explored the UX state space yet.

In general the automatic merging works pretty well most of the time. Where things go wrong is - for example - when people think they can put JSON data into a text crdt and have the system behave well. Instead the automatic merging breaks the rules of JSON syntax and the system falls over.


We have LLMs now, couldn't they be used to merge conflicts in a more sensible way? It might get a little expensive I imagine.


So the entire point of the (short) article I wrote was to get people to think outside of the little box people put CRDTs in: JavaScript libraries and collaborative editing.

Yet here we are, circling back to collaborative editing...

At this point I think the term "CRDT" has too much baggage and I should probably stop using it, or at least not put it in blog post titles.


I've prototyped something attempting to solve this problem of preserving user intent and maintaining application semantics. See comment here https://news.ycombinator.com/item?id=45180325


I've replied elsewhere, but on the face of it I can't see how this solves the problem of conflicts in any way. If you disagree, say more about how it solves this?

If two users concurrently edit the same word in a text document, how does your system help?


For a text document a normal CRDT is perfect. They're very good for that specific case. What I tried to solve is eventual consistency that _also_ preserves application semantics. For example a task tracker:

* first update sets task cancelled_at and cancellation_reason

* second update wants the task to be in progress, so sets started_at

CRDTs operate only at the column/field level. In this situation you'd have a task with cancelled_at, cancellation_reason, status in_progress, and started_at. That makes no sense semantically: a task can't be both cancelled and in progress. CRDTs do nothing to solve this. My solution is aimed at exactly this kind of thing. Since it replicates _intentions_ instead of just data, it would work like this:

action1: setCancelled(reason)

action2: setInProgress

When reconciling total order of actions using logical clocks the app logic for setCancelled runs first then setInProgress runs second on every client once they see these actions. The app logic dictates what should happen, which depends on the application. You could have it discard action2. You could also have it remove the cancellation status and set in_progress. It depends on the needs of the application but the application invariants / semantics are preserved and user intentions are preserved maximally in a way that plain CRDTs cannot do.
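A rough sketch of that rollback-and-replay idea, with illustrative names (`apply`, `reconcile`) rather than the actual repo's API:

```python
# Actions carry logical clocks; reconciliation sorts the full action log into
# a total order and replays it through the application logic, so invalid
# transitions (e.g. cancelling a completed task) become no-ops.

def apply(state: dict, action: dict) -> dict:
    kind = action["kind"]
    if kind == "complete" and state["status"] == "in_progress":
        return {**state, "status": "complete"}
    if kind == "cancel" and state["status"] == "in_progress":
        return {**state, "status": "cancelled", "reason": action["reason"]}
    return state  # invalid transition per app semantics: no-op

def reconcile(initial: dict, log: list) -> dict:
    # Total order: logical clock first, client id as a deterministic tiebreak.
    state = initial
    for action in sorted(log, key=lambda a: (a["clock"], a["client"])):
        state = apply(state, action)
    return state

log = [
    {"kind": "cancel", "reason": "dup", "clock": 2, "client": "c2"},
    {"kind": "complete", "clock": 1, "client": "c1"},
]
print(reconcile({"status": "in_progress"}, log)["status"])  # complete
```

Every replica that sees the same log reaches the same state, and the "cancel after complete" action no-ops on all of them, regardless of arrival order.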


Yes; I get all that from the readme. You pick an arbitrary order for operations to happen in. What I don't understand is how that helps when dealing with conflicts.

For example, lets say we have a state machine for a task. The task is currently in the IN_PROGRESS state - and from here it can transition to either CANCELLED or COMPLETE. Either of those states should be terminal. That is to say, once a task has been completed it can't be cancelled and vice versa.

The problem I see with your system is - lets say we have a task in the IN_PROGRESS state. One peer cancels a task and another tries to mark it complete. Lets say a peer sees the COMPLETE message first, so we have this:

    IN_PROGRESS -> COMPLETE
But then a peer sees the CANCEL message, and decides (unambiguously) that it must be applied before the completion event. Now we have this:

    IN_PROGRESS -> CANCELLED (-> COMPLETE ignored)
But this results in the state of the task visibly moving from the COMPLETE to CANCELLED state - which we said above the system should never do. If the task was complete, it can't be cancelled. There are other solutions to this problem, but it seems like the sort of thing your system cannot help with.

In general, CRDTs never had a problem arbitrarily picking a winner. One of the earliest documented CRDTs was a "Last-writer wins (LWW) register" which is a register (ie variable) which stores a value. When concurrent changes happen, the register chooses a winner somewhat arbitrarily. But the criticism is that this is sometimes not the application behaviour what we actually want.

You might be able to model a multi-value (MV) register using your system too. (Actually I'm not sure. Can you?) But I guess I don't understand why I would use it compared to just using an MV register directly. Specifically when it comes to conflicts.


It does not pick an arbitrary order for operations. They happen in total (known at the time, eventually converging) order across all clients thanks to hybrid logical clocks. If events arrive that happened before events a client already has locally it will roll back to that point in time and replay all of the actions forward in total ordering.

As for the specific scenario, if a client sets a task as COMPLETE and another sets it as CANCELLED before seeing the COMPLETE from the other client here's what would happen.

Client1: { id: 1, action: completeTask, taskId: 123, clock: ...}

Client1: SYNC -> No newer events, accepted by server

Client2: { id: 2, action: cancelTask, taskId: 123, clock: ...}

Client2: SYNC -> Newer events detected.

Client2: Fetch latest events

Client2: action id: 1 is older than most recent local action, reconcile

Client2: rollback to action just before id: 1 per total logical clock ordering

Client2: Replay action { id: 1, action: completeTask, taskId: 123, clock: ...}

Client2: Replay action { id: 2, action: cancelTask, taskId: 123, clock: ...} <-- This is running exactly the same application logic as the first cancelTask. It can do whatever you want per app semantics. In this case we'll no-op since transition from completed -> cancelled is not valid.

Client2: SYNC -> no newer actions in remote, accepted

Client1: SYNC -> newer actions in remote, none local, fetch newer actions, apply action { id: 2, action: cancelTask, ...}

At this point client1, client2, and the central DB all have the same consistent state. The task is COMPLETE. Data is consistent and application semantics are preserved.

There's a little more to it than that to handle corner cases and prevent data growth, but that's the gist of it. More details in the repo.

The great thing is that state is reconciled by actually running your business logic functions -- that means that your app always ends up in a valid state. It ends up in the same state it would have ended up in if the app was entirely online and centralized with traditional API calls. Same outcome but works totally offline.
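A miniature sketch of that rollback-and-replay loop, in illustrative Rust. The action names, the single-task state machine, and the integer stand-in for the hybrid logical clock are all assumptions for the sake of the example, not the repo's actual types:

```rust
// Illustrative only: one task's state machine plus an integer
// stand-in for the hybrid logical clock.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Action {
    CompleteTask,
    CancelTask,
}

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Clock(u64); // hybrid logical clock stand-in

#[derive(Clone, Debug)]
struct Event {
    clock: Clock,
    action: Action,
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum TaskState {
    Open,
    Complete,
    Cancelled,
}

// Business logic: invalid transitions simply no-op, as in the
// completed -> cancelled case in the walkthrough above.
fn apply(state: TaskState, action: Action) -> TaskState {
    match (state, action) {
        (TaskState::Open, Action::CompleteTask) => TaskState::Complete,
        (TaskState::Open, Action::CancelTask) => TaskState::Cancelled,
        (s, _) => s,
    }
}

// Reconciliation: sort every known event into total clock order and
// re-run the business logic from the initial state. Any replica that
// has seen the same set of events computes the same final state,
// regardless of arrival order.
fn replay(mut events: Vec<Event>) -> TaskState {
    events.sort_by_key(|e| e.clock);
    events
        .into_iter()
        .fold(TaskState::Open, |s, e| apply(s, e.action))
}
```

The `fold` is the whole trick: reconciliation is just "sort by clock, re-run the app logic", which is why the result matches what a centralized online backend would have produced.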

Does that clarify the idea?

You could argue that this would be confusing for Client2 since they set the task to cancelled but it ended up as complete. This isn't any different than a traditional backend api where two users take incompatible actions. The solution is the same, if necessary show an indicator in the UI that some action was not applied as expected because it was no longer valid.

edit: I think I should improve the readme with a written out example like this since it's a bit hard to explain the advantages of this system (or I'm just not thinking of a better way)


LLMs might be able to use context to resolve conflicts automatically, often matching the user's actual intent


LLMs could be good at this, but the default should be suggestions rather than automatic resolution. Users can turn on YOLO mode if their domain is non-critical or they trust the LLM to get it right.


The issue is that to preserve the CRDT property the LLM has to resolve the conflicts in a deterministic and associative way. We can get the first property (although most popular LLMs do not uphold it) but we can hardly get the second one.


I read the comment you're responding to as suggesting a way to resolve the conflicts layered atop the CRDT, not as a component of the CRDT itself. You're very right that LLMs are the wrong tool for CRDT implementation, but using them to generate conflict resolutions seems worth exploring.


Joseph Hellerstein has a series of posts on CRDTs: https://jhellerstein.github.io/blog/crdt-intro/

He very much leans toward them being hard to use in a sensible way. He has some interesting points about using threshold functions over a CRDT to get deterministic reads (i.e. once you observe the value it doesn't randomly change out from under you). It feels a bit theoretical though, I wish there were examples of using this approach in a practical application.


It's a bit like how a static type system provides useful guarantees, but you can still do:

    fn add(x: num, y: num) = x * y


Why do we even need CRDTs? Why can't we have multi-user editors work like multiplayer video games?

The server has the authoritative state, users submit edits, which are then rejected or applied and the changes pushed to others. The user is always assumed to be online for multiplayer editing. No attempt is made to reconcile independent edits, or long periods of offline behavior.

To prevent data loss, when the user is offline and desyncs, he gets to keep his changes and manually merge them back.

I'm sure this isn't a Google genius worthy implementation and fails in the incredibly realistic scenario where thousands of people are editing the same spreadsheet at the same time, but it's simple and fails in predictable ways.


Once I was using Slack on a bad WiFi and it was an adventure. What I saw as "sent" others never saw.


Yeah it's a common optimization technique I saw from both backend and frontend devs to hide errors and lie about the actual status.


sure, i mean that was how early group editing worked, but generally you want to preserve state from both (if we both start typing in the same spot, we both add stuff). Also it prevents any offline editing or high...lag editing really. unlike gaming which needs to be realtime this is much softer.

but no you dont need it


This needs to be as realtime as WhatsApp. If your internet connection gets bad often enough to have trouble supporting WhatsApp, then my heart goes out to you, but thankfully this is clearly not normal for the most of us most of the time.

And if this happens, your experience is going to be terrible anyway.


> The standardized location is Library.

Except for Zsh (~/.zshrc), SSH (~/.ssh/config), Vim (~/.vimrc), Curl (~/.curlrc), Git (~/.gitconfig). Apple could have chosen to patch these and move the configuration files into ~/Library if they really wanted.


Apple rarely edits how big open source tools work in such a blunt way. I am not saying these tools' behavior is blessed by Apple. I'm just saying that TFA's assertion that 'these tools use .config so it's already standard' is not true - they don't. Unless you are saying that Apple is endorsing dotfile vomit outside of any particular folder.


Apple is quite happy to patch & extend stuff. Their ssh-add(1) accepts --apple-use-keychain, --apple-load-keychain; vanilla OpenSSH doesn't even know what long flags are.

I think it's entirely OK for long-established programs to adhere to their conventions; it's less surprising for the users. If you're going to change how things work, do so with minimum impact on the UI.

(I wish their GUI teams understood that.)


Your first link (Wikipedia) directly contradicts your examples:

> Although IQ differences between individuals have been shown to have a large hereditary component, it does not follow that disparities in IQ between groups have a genetic basis[18][19][20][21]. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.[22][23][24][25][26][27].


I suspect that this is an instance where “the scientific consensus” is wrong because to suggest contrary to that is wrongthink and enough to have one ostracized not only from science, but also society as a whole. I would love to be wrong, so if someone could explain this to me, I’d be very receptive to an explanation of why this logic is wrong:

First, let’s substitute emotionally charged terms for more neutral terms; e.g. imagine rather than discussing intelligence and race, we are discussing something else highly heritable and some other method of grouping genetically similar individuals, e.g. height and family. The analogous claim would therefore be that “although height differences have a large hereditary component, it does not follow that disparities in height between families have a genetic basis.” This seems very clearly false to me. It is in the realm of “I cannot fathom how an intelligent person could disagree with this” territory for me. If variable A has a causative correlation with variable B and two groups score similarly with respect to variable A, then they are probably similar with respect to variable B. Of course there are other variables, such as nutrition, sleep, and what have you, but that does not eliminate a correlation. In fact, for something which is “highly heritable” it seems to me that genetics would necessarily be the predominant factor.

It’s a really unfortunate conclusion, so again, I’d love to be wrong, but I cannot wrap my head around how it can be.


> Suggest contrary to that is wrongthink and enough to have one ostracized not only from science, but also society as a whole.

There are many scientists who have published the "contrary". They were not ostracized from science or from society as a whole. They saw next to no negative impact to their position while they were alive. Other scientists have published rebuttals, and later some of the original articles were retracted.

J. Philippe Rushton: 250 published articles, 6 books, the most famous university professor in Canada. Retractions of this work came 8 years after his death.

Arthur Jensen: Wrote a controversial paper in 1969. Ended up publishing 400 articles. Remained a professor for his full life.

Hans Eysenck: The most cited living psychologist in peer-reviewed scientific journal literature. It took more than 20 years before any of his papers were retracted.

There's a lot of published articles about the "contrary view" that you can read. You can also read the rebuttals by the current scientific consensus (cited above).

> The analogous claim would therefore be that “although height differences have a large hereditary component, it does not follow that disparities in height between families have a genetic basis.” This seems very clearly false to me.

But this is not an analogous claim since you're talking about disparities between families. The analogous claim would be: "although height differences have a large hereditary component, it does not follow that disparities in height between groups have a genetic basis".

A very simple example for height[1]: The Japanese grew 10 cm taller from mid-20th century to early 2000s. Originally people thought that the shortness of the Japanese was related to their genetics, but this rapid growth (which also correlates with their improved economy) suggests that the group difference between Japanese and other groups was not related to the genetic component of height variance.

[1]: Secular Changes in Relative Height of Children in Japan, South Korea and Taiwan: Is “Genetics” the Key Determinant? https://biomedgrid.com/pdf/AJBSR.MS.ID.000857.pdf


> A very simple example for height[1]: The Japanese grew 10 cm taller from mid-20th century to early 2000s. Originally people thought that the shortness of the Japanese was related to their genetics, but this rapid growth (which also correlates with their improved economy) suggests that the group difference between Japanese and other groups was not related to the genetic component of height variance.

Every group grew taller as they got richer, but Japanese people are still short even today when they are rich. So existence of other factors doesn't rule out the genetic factor.


You're wrong. Some of the smartest kids I know are children of immigrants. It is their background - and society's response to that background - that hinders them, not their genetics. More so if they aren't lily white. Note how anything you say about this subject will be used to generalize to much larger groups (of which you can find some prime examples in this very thread) than the ones that IQ tests themselves target: individuals. And you can't say much about how an individual scores on their IQ test without accounting for their environment because that's a massive factor.

All of your arguments more or less equate to 'I don't understand the subject matter, but I'd like to see my biases confirmed'. And, predictably, you see your biases confirmed. But some of the smartest individuals that ever lived came from backgrounds and populations that - assuming the genetic component is as strong as you make it out to be - would have precluded them from being that smart.

Bluntly: wealth and access to opportunity have as much to do with how well you score on an IQ test as your genetic make-up does. Yes, it is a factor. No, it is not such a massive factor that it dwarfs the other two once you start looking at larger groups. Income disparity and nutrition alone already negate it.

https://en.wikipedia.org/wiki/Effect_of_health_on_intelligen...

And that's just looking at that particular individual, good luck to you if your mom and dad were highly intelligent but you ended up as the child of drugs or alcohol consumers. Nothing you personally can do about that is going to make up for that difference vs growing up as the child of affluent and moderately intelligent people.

IQ tests are a very imprecise yardstick, and drawing far reaching conclusions about the results without appreciating the complexity behind squashing a multi-dimensional question into a single scalar, especially when you are starting out from a very biased position is not going to lead to a happy ending. Before you know it you'll be measuring skull volume.


Specifically, as well stated by [23] there is no such thing as “race.” The premise of racial group differences is not possible; we can’t have racial differences if race is not real. Sadly, a lot of people very much believe in race, especially the ones that shouldn’t!


Geneticists use the word “ancestry” to summarize the historical geographic origins of the genetic variants that we inherit. Ancestry can be reliably estimated by genome analysis.

Race, like gender, is now considered a social construct.

The meanings of words are defined by a community of users who find them useful in communicating. Race and ancestry are both useful words.


People who say there's no such thing as race are complete charlatans playing semantic word games.


There’s more genetic variation within any so-called racial group than between groups, so race obviously has no genetic justification. That's not semantics. Yes it's real, but for social, not genetic reasons.


>has no genetic justification

That is comically retarded. Like do you have any understanding of the words you are using ?

If there's no genetic justification, how would it be possible to trivially determine someone's race just from their DNA ?


> If there's no genetic justification, how would it be possible to trivially determine someone's race just from their DNA ?

Genetic ancestry is determined by correlation with geographic origins and population. In other words, where a set of genetic markers are highly concentrated. It says nothing about race.


It isn't.


So define one. A race.



There’s more genetic variation within any so-called racial group than between groups, so race obviously has no genetic justification. It's real in the way other social constructs are real.


aka "a category that appears self-evident to me".

This was my thinking, also. Good cluster of links.


A group of people that have lived long enough in relative isolation that they became unique, distinguished and separate from other races.


The problem is that human 'races' are not in fact unique, distinguished and separate from other 'races'. Genetically, two sub-Saharan African men could be more distinct from each other than one of them is from a random white man, even though both would be called 'black'.


The problem you describe only concerns people who don't know anything about the subject, but still have no shame to have strong opinions about it.

Nobody in race sciences (anthropology, etc) claim that there are only unique races that are separate from each other and don't mix. This is a clear strawman.

The fact that there is mixing between races does not mean that races don't exist. You can make an emulsion out of water and oil, but water and oil still are their own things.

And the science has all kinds of specific categorization for human groups that go way beyond the rough separation into 3-4 main races. All the mixing, separation, migration, isolation, etc have been taken into account.

It's a pity this kind of topic/science is basically a taboo in the Western World and for real info and honest discussion have to go to other systems/countries/languages.


My example has nothing to do with race mixing lmao. Two Sub-Saharan Africans today are literally descendants of people that never left the continent; there's no amount of 'race mixing' that would cause one of them to be genetically closer to another 'race' than to each other if race were a genetic reality. You're just an idiot with poor reading comprehension.

Between us, you are clearly the one with no clue about what he's talking about.

There’s more genetic variation within any so-called racial group than between groups, race mixing or not. Clearly, 'race' has no genetic justification.


You are only further proving my exact point.

You must be one of those clueless people who think that sex is fluid and a social construct because, for certain characteristics, there is a bigger difference within each sex than between the means of the sexes.


"long enough"? "relative isolation"? "unique" how: genetically? culturally? phenotypically?


You are asking questions that were already answered if you cared to RTFM/LMGTFY.

Shortly:

- Unique how? Optimally genetically, but this has practical problems that the field of paleogenetics is trying to work on. Until then we must use: classical morphological features, odontology, dermatoglyphics, biochemical characteristics.

- How long enough? Depends on the type of group. There are different levels of human group classification, both above the traditionally understood "races" and a lot below that.


I've been told that there's no such thing as a circle because there's no such thing as a perfect circle. The claim that race does not exist seems to be in that category.


You seem to be arguing that talking about races within humans may be useful even if the reality only approximates the definition of race (similarly to the idea of a "circle", which, even though it does not apply in all its precision to any real object, may still be a useful concept as an approximation). However, I don't think that comparison is particularly insightful, and it may even be a bit misleading in my opinion because of the important differences in how those two things (circle and race) are defined.

After all, the reason why no real object is an actual circle is that the definition of circle is, so to say, an "ideal" definition that no real object can fit in all its precision. It's natural to assume that no real object will have all of its "points" perfectly distributed according to a circle's equation (without even getting philosophical as to how these mathematical definitions relate to the real world, or if they do at all). If one rejects any "approximate", non-exact application of the concept, then it will be mostly useless when it comes to describing or understanding the real world (because you won't be able to use it for anything).

On the other hand, the concept of "race" is quite the opposite to ideal: it's not "ideal" as the circle is, in fact it's more of a pragmatic/working definition. It's more like the definition of "chair": many things may or may not be considered a chair, but usually people don't feel that there's "no such thing as a chair" in the real world. On the contrary, it's more common to feel that anything "could" be a chair because it has a malleable definition based on the context, instead of nothing being "precisely" a chair because there are some rigid constraints to the definition that no real object can actually fit.

When the idea of races within the human species is pushed against, it's not because "race" is an ideal concept that no real thing may implement in all its precision (as would be the case with the circle). I won't present the actual reasons (which could get quite political) here, but I will say that I definitely wouldn't consider these two claims to be in the same category:

- Saying that X real object is not a circle, or that no real object can be (exactly) a circle has to do with the fact that the concept of circle is ideal and by definition nothing "real" will fit it perfectly.

- Saying that (in the human species) there are no races is, however, not based on a quality of the definition of the concept of "race" (specifically, that it's not ideal), but on some quantitative judgements about what kind of thing qualifies as a race and what doesn't (pretty much like the concepts of "chair", "food", etc., which are not ideal, leaving some room for discussion based on context as to whether a specific object fits the category or not).


Ok, so saying race doesn't exist is like saying chairs don't exist, since you can't really say what is a chair, what is a shelf and what is a table, correct? Technically you could say that a chair is a table or a shelf, but people still like to call them chairs; you know the difference when you see it.

Race is like that: scientists can't define it but it's still a useful concept, like a chair. Scientists can't exactly define what a chair is either, but it's still a very useful concept and we can discuss chairs and everyone understands what we mean.


> Two sub Sahara African men could be more genetically distinct than one of those men and a random white man even they should be both 'black'.

The thing about race is that it has no biological justification. It's still 'real' of course but in the same way money has 'real' value. It's a powerful social construct.


> The thing about race is that it has no biological justification.

> It's a powerful social construct.

This is 100% correct, and yet progressive academics have yet to figure out how to slot this fact into their ideology without creating incorrigible inconsistencies.

For instance - if race is a social construct just like gender, why is transracialism frowned upon, while transgenderism is lauded? Quoting Richard Dawkins, famous debunker of Creationist and religious bullshit [0]:

    Why is a white woman vilified and damned if she identifies as black,
    but lauded if she identifies as a man? That's topsy-turvy, because
    race really is a continuum, whereas sex is one of the few genuine binaries
    of biology.
The most coherent (but unsatisfying) answer I have found in the literature is that society has "intersubjectively" agreed to accept transgenderism and not transracialism, where "intersubjectively" ultimately translates to some level of "because we said so and this is society's new fanfiction head canon:" [1]

    What matters, then, is that intersubjectively we have all agreed that
    ancestry is relevant to the determination of one’s race.
It's worth noting that intersubjectivity is basically a religious concept, as defined in the Encyclopedia of Psychology and Religion. [2]

There is no science or biology on the far LGBTQ+ progressive left. Only pseudoscience and apologetics befitting of a Creationist.

[0] https://www.youtube.com/watch?v=cubkdBuvJAQ

[1] https://philpapers.org/archive/TUVIDO.pdf

[2] https://sci-hub.se/10.1007/978-1-4614-6086-2_9182


Is this the consensus because it’s true, or because anybody who suggests otherwise is pilloried and driven out of academia?


I think you'd do well to read this person's thoughts: https://news.ycombinator.com/item?id=44933637


That person didn't address the current climate in academia at all. Their examples of "contrarians" are all long-dead professors whose papers were published many decades ago in a different academic climate. That doesn't refute that academia in America has suffered ideological capture since, and questioning the "scientific consensus" on certain politically-charged topics is career suicide.

Also their Japan example seems poor. Japan remains a short country relative to its prosperity. They're several centimeters shorter than a country with a similar GDP per capita, like the Czech Republic. They're about the same average height as Somalians, despite having significantly better food security and a GDP per capita that's over 50 times higher.


A mutex would be the most trivial example. I don't believe that it is possible to implement, in the general case, with only acquire-release.

Sequential consistency mostly becomes relevant when you have more than two threads interacting with both reads and writes. However, if you only have a single consumer (i.e. only one thread reading) or a single producer (i.e. only one thread writing) then acquire-release semantics end up being sequential, since the single consumer/producer implicitly enforces a sequential ordering. I can potentially see some multi-producer multi-consumer lock-free queues needing sequential atomics.

I think it's rare to see atomics with sequential consistency in practice since you typically either choose (1) a mutex to simplify the code at the expense of locking or (2) acquire-release (or weaker) to minimize the synchronization.
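One classic case where acquire-release really is not enough is Peterson's two-thread lock. A hedged Rust sketch (illustrative only, not production code): every operation here is `SeqCst` because with only `Acquire`/`Release`, the store to `flag[me]` and the subsequent load of `flag[other]` may be reordered, letting both threads enter the critical section at once.

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

// Peterson's two-thread mutual-exclusion lock, sketched in Rust.
pub struct Peterson {
    flag: [AtomicBool; 2], // flag[i]: thread i wants the lock
    turn: AtomicUsize,     // whose turn it is to yield
}

impl Peterson {
    pub const fn new() -> Self {
        Peterson {
            flag: [AtomicBool::new(false), AtomicBool::new(false)],
            turn: AtomicUsize::new(0),
        }
    }

    /// `me` must be 0 or 1, and each id must be used by only one thread.
    pub fn lock(&self, me: usize) {
        let other = 1 - me;
        self.flag[me].store(true, Ordering::SeqCst);
        self.turn.store(other, Ordering::SeqCst);
        // Wait while the other thread is interested and it's their turn.
        while self.flag[other].load(Ordering::SeqCst)
            && self.turn.load(Ordering::SeqCst) == other
        {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self, me: usize) {
        self.flag[me].store(false, Ordering::SeqCst);
    }
}
```

The store-then-load pattern on two different locations is exactly the shape that acquire-release permits to be reordered, which is why this algorithm is the textbook motivation for sequential consistency.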


> A mutex would be the most trivial example. I don't believe that is possible to implement, in the general case, with only acquire-release.

Wait, what? So you're saying this spinlock is buggy? What's the bug?

https://en.cppreference.com/w/cpp/atomic/atomic_flag.html


No, sorry. I was just remembering where I've typically seen sequential consistency being used. For instance, Peterson's algorithm was what I had in mind. Spinlock is indeed a good example (although a terrible algorithm which I hope you haven't seen used in practice) of a mutex algorithm which only requires acquire-release.
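For reference, the test-and-set spinlock pattern from that cppreference page, transposed to Rust as a sketch (not production code: no backoff, no fairness). Only `Acquire` on a successful lock and `Release` on unlock are needed, which is the point conceded above:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A minimal test-and-set spinlock using only acquire/release ordering.
pub struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    pub const fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // Acquire on success orders this critical section after the
        // previous holder's Release.
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        // Release publishes the writes made inside the critical section
        // to the next thread that acquires the lock.
        self.locked.store(false, Ordering::Release);
    }
}
```

Unlike Peterson's algorithm, there is no store-then-load on two separate locations here; the single read-modify-write location is why acquire-release suffices.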

