The point about CMSs having value as a more real-time, collaborative UI layer that's less scary for the average Joe is a valid driver, and a critical factor for many use cases. But the other stuff is clearly reasoning with a solution already in mind...
"All blog posts mentioning feature Y published after September...[more examples]...The three most recent case studies in the finance category...[etc]"
Fairly simple queries. If you're willing to build an MCP server (as they did for their solution), you could just as well build one that reads structured front matter.
"You can't. You'd need to parse frontmatter, understand your date format, resolve the category references, handle the sorting, limit the results. At which point you've built a query engine."
Well, that's a scoped problem. It looks like solutions already exist (e.g., https://markdowndb.com/), and they don't require moving away from Markdown files in Git if you don't want to.
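To make the "scoped problem" claim concrete, here's a hypothetical sketch of such a query layer over front matter in a few dozen lines of TypeScript. It only handles the simplest `key: value` front matter; real tools like markdowndb handle far more, and all the names here are made up for illustration:

```typescript
// Hypothetical sketch: a tiny front-matter "query engine" over Markdown files.
// Only supports flat `key: value` front matter between `---` fences.
interface Doc {
  path: string;
  meta: Record<string, string>;
  body: string;
}

function parseFrontMatter(raw: string, path = ""): Doc {
  const m = raw.match(/^---\n([\s\S]*?)\n---\n?/);
  const meta: Record<string, string> = {};
  if (m) {
    for (const line of m[1].split("\n")) {
      const i = line.indexOf(":");
      if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
    }
  }
  return { path, meta, body: m ? raw.slice(m[0].length) : raw };
}

// "The three most recent case studies in the finance category":
// filter on metadata, sort by ISO date descending, limit the results.
function query(docs: Doc[]): Doc[] {
  return docs
    .filter(d => d.meta["type"] === "case-study" && d.meta["category"] === "finance")
    .sort((a, b) => (a.meta["date"] < b.meta["date"] ? 1 : -1))
    .slice(0, 3);
}
```

Parsing, date handling, reference resolution, sorting, and limiting are each a few lines; "you've built a query engine" is true but not the gotcha it's presented as.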
The AI-generated points aren't as compelling as the prompter thinks. An increasingly common problem.
Yes, you don't need flatfile-committed raw text for AI tools to work properly, in part because of things like MCP servers. Yes, semantically linked content with proper metadata enables additional use cases.
The natural next point would be "if you use our thing, you don't need to think about this", but instead the post goes into a highly debatable rant about markdown in Git being unable to fulfil those additional use cases at a practical level.
This distracts from what I imagine is the real intent: "git and markdown files don't come automagically with a snazzy collaborative UI. And yes, you can still use AI, and use it well, out of the box. If someone tells you you need markdown in git to do x, y, z with AI, they are wrong."
Personally, I can get over the "AI writing style", but only if the content still nails the salient point...
The "should we use node for the core business logic for my use case" thing is absolutely valid and depends.
But you'd be surprised about the "React for the front end of the CLI" part. I used this a whole six years ago in a complex interactive CLI and it came out great: pleasant to use, maintainable, and ergonomic. React is just one framework that subscribes to the "UI as declarative/composable trees" pattern, and that UI doesn't have to be web-based. The pattern works for every UI I've come across.
It's the pattern that makes this a good idea, not React or the JS landscape. That part is probably a bonus for many, though.
Odd to see this come up. I used it about 6 years ago to produce a CLI for devs. It worked very well and React concepts map cleanly to pretty much any "UI" target as said targets can also be well represented by declarative & composable trees.
Other UI frameworks could probably map just as well, seeing as the declarative/composable-tree pattern is ubiquitous now. So it's important to note that it's the pattern which enables this, not a specific framework.
But React is one of the ones that is also more removed from the web target than others.
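To illustrate the claim, here's a minimal, framework-free sketch of the "UI as declarative/composable trees" pattern targeting plain terminal text. The names (`Box`, `Text`, `render`) are illustrative, not any real library's API (Ink is the usual real-world choice for React CLIs):

```typescript
// Minimal sketch of "UI as a declarative/composable tree" for a terminal
// target. A web renderer would walk the same kind of tree and emit DOM.
type UINode =
  | { kind: "text"; value: string }
  | { kind: "box"; children: UINode[] };

const Text = (value: string): UINode => ({ kind: "text", value });
const Box = (...children: UINode[]): UINode => ({ kind: "box", children });

// A "component" is just a function from props to a tree, exactly as in React.
function StatusLine(props: { name: string; ok: boolean }): UINode {
  return Box(Text(props.name), Text(props.ok ? "[OK]" : "[FAIL]"));
}

// The renderer walks the tree and produces indented terminal text.
function render(node: UINode, indent = 0): string {
  const pad = "  ".repeat(indent);
  if (node.kind === "text") return pad + node.value;
  return node.children.map(c => render(c, indent + 1)).join("\n");
}
```

Nothing here is web-specific: swap `render` for one that writes DOM nodes, native widgets, or ANSI escape codes and the component layer is unchanged. That's the part React happens to package nicely.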
If you are used to structurally typed systems, though, this actually "feels" expected. Don't get me wrong, I know where you are coming from, but that reaction comes from nominal systems, which are different beasts. As long as the code actually evaluates to what the developers using it expect, that's okay.
However, regularly that's not the case -- especially with people moving from nominal systems.
TS can't really be practically nominal when it has to be constrained by its compile target. So I guess it ultimately boils down to an anomaly/criticism born from the legacy of how web standards came about.
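A quick illustration of the difference, including the usual "branding" workaround that simulates nominal typing in TS at compile time (the type names here are made up for the example):

```typescript
// Structural typing: any value with the right shape is accepted.
interface UserId { value: number }
interface OrderId { value: number }

function loadUser(id: UserId): string {
  return `user-${id.value}`;
}

// An OrderId passes where a UserId is expected, because the shapes match.
// In a nominal system this would be a compile error.
const oid: OrderId = { value: 42 };
const u = loadUser(oid); // compiles: structurally identical

// The common workaround is "branding": a phantom property that simulates
// nominal typing at compile time with zero runtime cost.
type BrandedUserId = number & { readonly __brand: "UserId" };
const asUserId = (n: number) => n as BrandedUserId;

function loadUser2(id: BrandedUserId): string {
  return `user-${id}`;
}
// loadUser2(42);  // would NOT compile: a plain number lacks the brand
const u2 = loadUser2(asUserId(42));
```

The brand exists only in the type system; the emitted JS is just numbers, which is exactly the "constrained by its compile target" point.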
Nominal type systems are pretty much better across the board IMO. Much safer and much less to keep in your head. Modern langs like Rust or F# have very complete and nice to use type systems.
But yeah, unfortunately I'm not sure JS is an appropriate compile target for these type systems. Nim does it, but I'm not sure how safe it is.
Note however that this API is brought by Styled System [1], which Chakra uses (and exposes), and adds the component library and other niceties on top.
My only regret with Chakra is that it's a tad runtime-heavy, and it's not really adapted to static content. I'd love to see some sort of compiler that digests a React tree into rendered HTML + CSS, with minimal JS just for style interactions (is this what Svelte does?)
that's the whole point of a manually updated status page. you don't want automation to update it because that automation can fail. automation likely caused the outage you want to know more about.
you also don't want your automation guessing at what the problem is, or what the effects are. you want real info from a real person even if it isn't given to you the millisecond you look for it.
this is why status pages aren't updated by automation. if they're updated by a person, you know that people know about the problem, you know that people are working on the problem, and so on, which is good, but while they figure out what's going on, you see a "green" status page.
this is normal.
(this is for future readers, more than the person I am replying to.)
IMO the reason status pages are not updated automatically is legal. SLA and other legal contracts might change if every time something is down the status page reflects that accurately, so people try to hide it.
Approached in that way, a status page is almost useless: it is not reliable, and it only gets updated after I've found out via other sources.
I am perfectly happy with a status page that shows the, mm, status of the service. Could be as easy as not reachable, slower than usual or any generic information (a traffic light). I disagree that a status page has to show the why of the error, although of course it would be nice.
> IMO the reason status pages are not updated automatically is legal. SLA and other legal contracts might change if every time something is down the status page reflects that accurately, so people try to hide it.
you are right about legal reasons; some companies count SLA by the time and date stamps on the status page.
people hiding a real outage when users know damned well there is an outage is thankfully not common at all.
if you can design and run a 100% reliable status page which never reports incorrect information, while also reporting useful information, you will be a hero to many.
> people hiding a real outage when users know damned well there is an outage is thankfully not common at all.
Thankfully people are not hiding it as in a conspiracy to pretend nothing is wrong. But, as you see in many comments in this thread, status pages rarely reflect that something is down immediately (because they are updated manually by humans).
This delay, codified in processes, is very convenient, and to me this is purposely hiding that a service is down. People are not hiding it, but the processes that control the status page are, indeed, hiding this information. This makes status pages less useful, IMO.
Yet Discord has an “API Response Time” graph[0] and Reddit has a “5xx error rate” graph[1]. No, it doesn’t automatically create incidents, but it’s nice to confirm an issue is happening site-wide after experiencing it.
Actually, it looks like the metrics part of Reddit's status page broke over 2 weeks ago.
GitHub has such a graph too, but limited to internal employees only. In fact that’s how they detect failures with their deployments, elevated 500 statuses. Some individual teams will have their own dashboard, but some do not (like k8s upgrades) and they only monitor 500s.
Rest assured someone is looking into this problem right now
I'm going to disagree here. The point of a manually updated status page is appearances.
With proper reporting it's trivial to know which subsystem is experiencing problems, if any. It doesn't have to be very granular, just "normal", "experiencing issues", "offline". If reporting doesn't work, you should be alerted it doesn't work, and if alerting doesn't work, there needs to either be out-of-band alerting for that or someone monitoring the status at all times.
Manual overrides for status pages should exist for when the automation doesn't work of course.
At my last job we had a big screen in the office we monitored (Grafana), and we usually saw problems before the alerting kicked in - it had about a minute's delay. Outside the office and work hours, the on-call received alerts. It was neither technically nor organisationally complex.
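As a hypothetical sketch of how little logic the "normal / experiencing issues / offline" scheme takes (all thresholds and names here are made up), derived from reported error rates, with the manual override mentioned above:

```typescript
// Hypothetical sketch: derive a coarse public status from a recent
// error-rate sample. A stale sample means the reporting pipeline itself
// is broken, so we refuse to guess "green"; humans can always override.
type Status = "normal" | "experiencing issues" | "offline";

function deriveStatus(
  errorRate: number,         // fraction of requests failing, 0..1
  lastSampleAgeSec: number,  // how stale the reporting data is
  manualOverride?: Status,   // for when the automation itself is wrong
): Status {
  if (manualOverride) return manualOverride;     // humans win
  if (lastSampleAgeSec > 120) return "offline";  // reporting is dead: don't show green
  if (errorRate >= 0.5) return "offline";
  if (errorRate >= 0.05) return "experiencing issues";
  return "normal";
}
```

The point is that "automation can fail" is an argument for treating a missing signal as bad news, not for leaving the page manual.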
This is so wrong that it makes me wonder if it's satire.
"The whole point" (as you put it) of status pages was to publish high-level monitoring data to users. The monitoring process should occur outside the system that is being monitored, perhaps even on a different cloud.
Eventually, many companies realized this revealed expensive SLA violations and ended that level of transparency.
Your status page can and should report important metrics to users, like elevated error rates. Most status pages used to.
no company will put any amount of monitoring online for anyone to see, no matter how high level. for it to be useful info, it must contain details, and information about infrastructure is usually well guarded for very good reasons.
> no company will put any amount of monitoring online for anyone to see, no matter how high level
Many companies used to do this. I remember the first time someone on HN commented, "Hey, is it possible this status page is just a useless blog now?" And people were trying to figure it out.
Companies arguably have a contractual obligation to be transparent about this data with their customers anyway, so a company like Github (where such a huge percentage of the industry is a customer) is going to leak the data one way or another.
Exactly. Status pages tend to be updated by the humans responding to the incident; they're not automatic (that'd be pretty useless, since you already know it's down; you want to know when they know it's down). Coordinating what to put on the status page when an incident happens can take time: getting the correct scope of impact from responding engineers, and so on.
Sorry, I'm not following you: how do you know it's down when the status page says it's all working? At that point you assume it's not down and start checking your own systems. They're just lying to avoid fallout; it's no better than an automated page.
"Humans responding to the incident" is what Twitter and email communications are for. Status pages are supposed to be realtime status, and they should show downtime as soon as users suspect it.
As a user, you often don't know if the vendor's system is really down or if there's something wrong with your own system.
from the message I'm getting, it seems like the load balancer is not able to spin up servers to handle new connections. Again, the status page needs to reflect that, which means the status page is NOT running on the same infrastructure as the main server group. Stop using AWS (or whatever fill-in-the-blank hosting provider) for both the status and production environments.
TBH I think it's not because JS somehow fundamentally attracts a different group of people; it's probably more like:
- It being web based means you can target this kind of software for maximum "impact"
- NPM dep trees are massive, and you generally have thousands of tiny libs. The chance of something like this happening and being noticed therefore goes up.
- The NPM ecosystem is a bit more wild west, which again increases the chance of something like this occurring in the first place.
The web needs APIs that enable certain blocks of code to run under specific permission constraints. Such constraints might include the ability to read/write the DOM, call window.alert, redirect (well, I guess CSP covers that one), etc. At least let us mitigate it.
This already exists: iframe sandboxes and content security policies. All we need now is a library that allows you to easily load, run, and interact with code in a sandboxed frame, using something like Comlink to make it feel as if it's all running in the same environment.
I know about CSP and iframes, but I don't think they're ergonomic enough to be used as mechanisms to restrict deps, right?
Iframes need a full web context, whilst CSP can't target individual code blocks. For example, I might want my code to be able to show alerts, but I don't want dependency X to be able to.
EDIT: Ah, I think that's what you meant by your "code in a sandboxed iframe" thing. Fair.
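For the "my code may alert, but dependency X may not" case, one lightweight convention is capability passing: the host hands each dependency only the capabilities it's allowed to use. This is a convention, not enforcement (a malicious dep can still reach globals; real isolation needs the iframe sandbox or a Worker), and all names here are hypothetical:

```typescript
// Hypothetical sketch of per-dependency capability gating. Instead of a
// dependency reaching for globals like window.alert, it receives an
// explicit capability object, and the host decides what each dep gets.
type Capabilities = {
  alert?: (msg: string) => void;
};

// The untrusted "dependency" can only use what it is handed.
function untrustedWidget(caps: Capabilities): string {
  if (caps.alert) caps.alert("hello from dep");
  return "rendered";
}

// The host builds a capability set per dependency. Here a log array
// stands in for window.alert so the sketch runs outside a browser.
function makeCaps(allowAlert: boolean, log: string[]): Capabilities {
  return allowAlert
    ? { alert: (msg: string) => { log.push(msg); } }
    : {};
}
```

Combined with something like Comlink across a sandboxed iframe boundary, the same capability object becomes actually enforceable rather than just polite.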
"All blog posts mentioning feature Y published after September...[more examples]...The three most recent case studies in the finance category...[etc]"
Fairly simple queries. If you're willing to build an MCP server (as they did for their solution), you could just as well build one that reads structured front matter.
"You can't. You'd need to parse frontmatter, understand your date format, resolve the category references, handle the sorting, limit the results. At which point you've built a query engine."
Well that's a scoped problem. Looks like it already exists (e.g., https://markdowndb.com/) and doesn't require moving away from Markdown files in GIT if you want.
Or use something like content collection in astro (https://docs.astro.build/en/guides/content-collections/). Hell, looks like that lets you have the MD files somewhere else instead of git if you please.
The AI-generated points aren't as compelling as the prompter thinks. A new common problem.
Yes, you don't need flatfile-committed raw text for AI tools to work properly, in part because of things like MCP servers. Yes, semantically linked content with proper metadata enables additional use cases.
The next point to make would be "if you use our thing, you don't need to think about this", but instead goes into a highly debatable rant about markdown in git not being able to fulfil those additional use cases on a practical level.
This distracts from the what I imagine is the real intent: "git and markdown files don't come automagically with a snazzy collaborative UI. And yes you can still use AI, and use it well out of the box. If someone tells you you need markdown in git to do x,y,z with AI they are wrong."
Personally, I can get over the "AI writing style", but only if the content still nails the salient point...