It is a very common error to look at a specialist piece of software, superficially consider the basic data structure it appears to have and think ‘seems simple enough. Basic CRUD app.’
But it’s rarely the case in practice.
In a sibling comment here, for example, someone bemoaned the difficulty in Canvas of having two TAs simultaneously grade separate parts of the same assignment. That sounds like something that goes beyond CRUD.
But more importantly any workflow system, which an LMS will be full of, has to handle the always tricky problem of how changes to workflows affect the things that are currently in the workflow. Assignments posted in course X need to be approved by person Y; some assignments are submitted for approval; person Y goes on leave and now the approval needs to be person Z. Not a simple CRUD problem.
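To make the approval-handoff problem concrete, here is a minimal sketch in Python. All the names (`Submission`, `reassign_pending`, the status strings) are hypothetical illustrations, not anything from Canvas; the point is only that in-flight items need explicit migration logic, which plain CRUD doesn't give you.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    id: int
    approver: str
    status: str  # "pending" or "approved"

def reassign_pending(submissions, old_approver, new_approver):
    """Route in-flight pending approvals to a new approver,
    leaving already-approved items untouched."""
    for s in submissions:
        if s.approver == old_approver and s.status == "pending":
            s.approver = new_approver
    return submissions

subs = [
    Submission(1, "Y", "pending"),   # must move to Z
    Submission(2, "Y", "approved"),  # already done, stays with Y
    Submission(3, "Z", "pending"),   # already Z's, unaffected
]
reassign_pending(subs, "Y", "Z")
```

Even this toy version has to decide what "in the workflow" means; a real system also has to handle multi-step approvals, audit trails, and items mid-step when the workflow definition itself changes.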
These are things that occur to me with only a moment’s consideration of what an LMS might need to deal with. The actual domain probably has considerably more complexity than I can even imagine.
LMSes have to balance a lot of directly competing needs.
They have to be simple enough for the average person to use (on both the learner side and the instruction side) and complex enough to allow for a lot of flexibility in setup, because every organization is slightly different. They have to support 50 million file formats, everything has to be backwards compatible until the end of time, and everything has to load properly and quickly on 50 million different device/OS/browser combinations. Yes, there's SCORM as a standard, but even that is rickety, and an LMS that doesn't support non-SCORM files is dead in the water anyway.
They're simple(ish) in code, and a nightmare in requirements.
Everything you say is true, and yet it's clear you've never used Canvas.
Canvas is decidedly not fast, fails to display even trivial files (such as source code) as well as more complex files that should just be handled by the browser (such as video), and it has a non-intuitive, verbose, and tiresome interface that would have felt old-fashioned 20 years ago.
Yes, I should have said 'in theory', because there always ends up being compromise and usually that's the thing that's chucked out first.
LMSes frankly run like shit. I don't work with Canvas right now, but every one I've used has run like shit.
However, there are reasons the complex files aren't handled by the browser: tracking and persistence. It isn't enough to make a video file watchable; it then needs to be tracked in the same system, and in the same way, as every other training/educational material. If you don't care whether the students actually watch the video, then yeah, throw them a YouTube link, embed a video on a personal site, or just have the LMS serve a basic embed. But being able to track video, make it mandatory, make it so that it can't be fast-forwarded and people can't skip to the end, etc. all matter when LMSes are used for topics that are required for compliance and regulatory purposes.
I don't disagree on the interface(s). Ours is a farce and I hate it.
It's likely that they're so bad precisely because of the simple tech and complex requirements. Simple tech doesn't mean 'easy' or 'not time-consuming'. It means you're looking for developers who have a decent level of technical proficiency, to handle the numerous edge cases and the flexibility the systems demand: it's not hard, but things like the data structures need to be well thought out, and in most LMSes every piece of the system is integrated with every other, so you can't silo work as easily. You're also looking for developers who want to work on problems that aren't hard but require dealing with a lot of unreasonable people (in the form of their requirements). You have to allow/design for a lot of stupid things because otherwise people will throw tantrums about it.
Then on top of that, you're developing something that doesn't directly generate profit, so nobody is going to pay for it or appreciate the work you put in.
Then on top of THAT, they're fairly insulated from the actual end users.
I think you are confusing the reality of Canvas with a different, theoretical learning management system.
In reality, Canvas does not have workflow and does not prevent race conditions in grading. I can certainly imagine an LMS that does these things, but Canvas does not.
It would probably help if you had actually used Canvas before trying to convince us that it is non-CRUD.
Sorry, I wasn’t trying to defend Canvas, so much as give general advice that ‘I could build this in a weekend’ is rarely a wise claim. The specific ways in which Canvas could not be built in a weekend do not need to be the ones I identified.
"Look at the code" is not a reasonable response to "Is this app CRUD or not?" You've already been asked to provide specifics about which component(s) of Canvas are allegedly non-CRUD, and simply repeating your claim without answering the question does nothing to advance your case.
It's a simple question. Since you claim to be an expert on Canvas, I'm sure that you can point me to the relevant features much faster than I can sort through thousands of lines of code, looking for the one line that says "def not_crud_function()". CRUD or not-CRUD is a judgement about the purpose of a program, not its implementation.
"I don't want to glance through the code, I'd rather you write me a detailed report on all the thousands of places where it deviates from crud" (not a direct quote) is not at all a reasonable ask.
And if you can't be bothered to take 2 minutes to click through some pages on GitHub, I don't believe you'd take the time to even read that report. So no, I'm not doing your research for you.
Edit: I will do this for you though. Here's Gemini's opinion[1]. It's quite accurate as well, and goes into reasonably high-level detail (though it doesn't get into specific modules). I especially loved this quote:
> At its absolute lowest level, almost all web software boils down to pushing state to and from a database. But calling Canvas LMS "just a CRUD app" is a bit like calling a commercial airliner "just a metal tube with wings."
How reductionist, and a straw man as well. No one here asked for a "detailed report." They asked you to name one (1) feature of an application that you claim to have intimate knowledge of; the original question was "What component in particular goes substantially beyond CRUD." It could take you one sentence. After multiple failures to pass that low bar, compounded by your total mischaracterization of the question and your citation of an obsequious LLM that also failed to provide any specifics, it's abundantly clear that you are unable to support your claim with facts and are not arguing in good faith. I won't waste my time further with your childish behavior.
Am also casually looking at FRED. RE 2014 decline - do you know why this was? I speculate something to do with the initial invasion of Ukraine / China cycling out of treasuries possibly to buy Russian war bonds or oil. I am purely speculating.
Food groups seem like such a strange way to quantify this, especially given that production of several of the food groups is a net macronutrient destroyer.
The macronutrient story is far more telling. For instance, my math says (based on 17B bushels of annual production) that the US produces 11,400 calories and 250g of protein per person, per day, just in corn. The vast majority of this is used for animal feed and ethanol.
Whether resorting to eating just corn and multivitamins is a good life could be debated, but it's silly to suggest (as the paper figures do) that the US has a food security issue.
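A back-of-envelope check of those corn numbers. The constants here are my own assumptions (56 lb per bushel, ~15% moisture, roughly 3.65 kcal and 0.08 g protein per gram of dry matter, a US population of about 332M), not figures from the comment, so treat this as a sanity check rather than a recomputation:

```python
# Back-of-envelope check of the corn figures above.
# All constants below are assumptions, not sourced values.
BUSHELS_PER_YEAR = 17e9
KG_PER_BUSHEL = 56 * 0.4536     # 56 lb/bushel -> ~25.4 kg
POPULATION = 332e6
MOISTURE_FRACTION = 0.15        # field corn at harvest, roughly
KCAL_PER_KG_DRY = 3650          # ~3.65 kcal/g dry matter
PROTEIN_FRACTION = 0.08         # ~8% protein by dry weight

kg_per_person_day = BUSHELS_PER_YEAR * KG_PER_BUSHEL / POPULATION / 365
dry_kg = kg_per_person_day * (1 - MOISTURE_FRACTION)

calories = dry_kg * KCAL_PER_KG_DRY      # kcal per person per day
protein_g = dry_kg * 1000 * PROTEIN_FRACTION
```

Under these assumptions the result lands around 11,000 kcal and 240 g of protein per person per day, in the same ballpark as the 11,400/250 figures quoted.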
Couldn't some of the use cases presented for this be accomplished with ZSETs? I get the performance angle, but it seems that this could have been accomplished without the new API surface by selectively optimizing ZSET storage for dense values (in the same way that Arrays selectively use sparse representations).
The RE component is interesting, but as commentary here has noted it seems orthogonal to the array data structure (i.e., usable on others as well). Does this not make more sense to accomplish with Lua scripting? Or if performance of Lua is an issue perhaps abstracting OP to be composable on top of any command that returns a range of values.
I say this with reverence for Antirez as the expert in this space, but some of this new feature set feels like the sort of solution that I tend to see arise from LLM-driven development; namely creation of new functionality instead of enhancement of existing, plus overcomplicating features when composition with others might be more effective.
Unfortunately not; sorted sets are actually a bit on the other side of the spectrum: they are semantically sound, but absolutely wasteful because of the combined skiplist + array. Also, if the underlying representation is not an array, range queries and ring buffers will never be as efficient and compact as they should be. In theory you can do everything with everything, but segmenting what each API can do allows you to exploit the use cases to provide the best underlying implementation.
Oh boy, you are far from what it requires; we are probably talking 3B+. But note that this is just codex; obviously codex is also doing automatic adversarial runs against the regular zoo (gemini-3.1-pro-preview, opus-4.6/4.7, gpt-5.3-codex, minimax-2.7, glm-5.1, mimo-2 (now 2.5) and so on, you get the gist) :)
I don't know their margin so I can't really say, but we do have 8 OpenAI accounts, and I doubt they are making that much from us, seeing that there isn't a single hour where we don't saturate the accounts.
Indeed; sorry for leaving that out, it was a judgement call triggered by HN limits. However, whilst very relevant, it doesn't in fact change the point that much.
Off by an order of magnitude. Average TBO (which airplane engines routinely exceed if they don’t rust out) is 2,000 hours assuming piston, or about 300,000 miles for a Piper Arrow at cruise speed.
Thanks for clarifying, I thought that sounded wrong - otherwise aeroplane engines would have to be "rebuilt", each and every time, after more than half of all international flights in and out of Australia (5000 miles, aka 8000km, is just down the road to grab a sausage roll for us!).
For comparison, latest commercial turbofans approach 6000h (they don't have a strict TBO limit AFAIU, overhauls are decided based on various inspections and measurements). At a typical airliner speed that's something like 3,000,000 miles.
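The mileage figures in this subthread follow directly from the hours and an assumed cruise speed. The speeds below are my assumptions (~150 mph cruise for a Piper Arrow, ~500 mph for an airliner), chosen only to show the arithmetic behind the quoted numbers:

```python
# Rough mileage implied by the TBO figures in this thread.
# Cruise speeds are assumptions, not published specs.
PISTON_TBO_HOURS = 2000
ARROW_CRUISE_MPH = 150          # assumed Piper Arrow cruise

TURBOFAN_OVERHAUL_HOURS = 6000
AIRLINER_CRUISE_MPH = 500       # assumed airliner cruise

piston_miles = PISTON_TBO_HOURS * ARROW_CRUISE_MPH        # 300,000
turbofan_miles = TURBOFAN_OVERHAUL_HOURS * AIRLINER_CRUISE_MPH  # 3,000,000
```

Both products match the ~300,000-mile and ~3,000,000-mile figures quoted above, which is roughly an order of magnitude apart, as the correction says.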
Going to have to disagree on the backup test. Opus's flamingo is actually on the pedals and seat, with functional spokes and beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.
I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the Pelican.
Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version.
But in terms of making something physically plausible, Opus certainly got a lot closer.
The fundamental challenge of AI is preventing unprompted creativity. I can spin up a random initialization and call all of its output avant-garde if we want to get creative.
I recently fell down the rabbit hole of AI-generated videos, and realised that many of the "flaws" that make them distinctive, such as objects morphing and doing unusual things, would've been nearly impossible to create otherwise, or would've required very advanced CGI.
"Artistically interesting" is IMHO both a subjective and a 'solved' problem. These models are trained with an "artistically interesting" reward model that tries to guide the model towards higher-quality images.
I think getting the models to generate realistic and proportional objects is a much harder and important challenge (remember when the models would generate 6 fingers?).
Even the first one: Qwen added extra details in the background, sure. But the pelican itself is a stork with a bent beak, and its feet are cut off from its legs. While impressive for a local model, I don't think it's a winner.