I think when you are new and have good ideas, you are judged against the average. If you are above average, you are listened to.
As years pass, you are judged against the standard you set, and if you do not keep raising that standard, you start being seen as average, even if you are performing the same as when you joined.
I've seen this play out many, many times.
When an incompetent person is hired, even if the issues are acknowledged, if they somehow stay, expectations will be set to their level. The feedback will stop, because if you complain about the same issues or the same person's work every time, people will start seeing it as a you problem. Everyone quietly avoids this, so the person stays.
When a competent person is hired, it plays out the same way. After 3/5/10 years, you are getting the same recognition and rewards as the incompetent person, as long as you both maintain your level of competency.
However, I've seen (very few) people who consistently raised their own standards and improved their impact and they've climbed quickly.
I've seen people lower their own standards, and they were quickly flagged as under-performers, even if their reduced impact was still above average.
I agree with this summary to a degree. An additional problem arises when you simply cannot raise the standard because you lack the political influence to do so. As the article says, sometimes companies are comfortable with the status quo, regardless of the problems, whether they are technical or not. Another issue arises when product, rather than seeing tech as a partner in pursuit of a common goal, starts to see it as an underling.
While I can't say I've observed that kind of radical shift myself, one place I can see something similar is AI-assisted development.
Basically, the manager asks me something and asks AI the same thing.
I'm not always following so-called "common wisdom". I might decide to use a library or framework that AI won't suggest. I might use a technology that AI considers too old.
For example, I suggested writing a small Windows helper program in C, because it needs access to WinAPI, I know C very well, and we need to support old Windows versions back to Vista at least, preferably back to Windows XP. However, AI suggests using Rust, because Rust is, well, today's hotness. It doesn't really care that I know very little Rust, and it doesn't really care that I would need to jump through certain hoops to build Rust for old Windows (if that's even possible).
So in the end I suggest something that I can build and have confidence in. AI suggests whatever most internet texts written by passionate developers talk about.
But the manager probably has doubts about me, because I'm not a world-famous, trillion-dollar celebrity, I'm just some grumpy old developer, so he might use AI to second-guess my expertise.
You mentioned the tradeoffs of Rust, including the high level of uncertainty and the increased lead time from having to learn the language.
The manager, now having that information, can insist on using Rust, and you get a great opportunity to learn Rust, while being totally off the hook even if the project fails, since you flagged the risks.
“Truly I tell you,” he continued, “no prophet is accepted in his hometown."
- Luke 4:24
It's why people often trust consultants over the people inside the organization. It's why people often want to elect new leaders even if the current leaders are doing a decent job.
The baby almost always gets thrown out with the bath water.
I find this hilarious given that I've experienced it from both viewpoints. 1. A consultant implemented their half-baked solution, which continued to bite us for my whole tenure and was, imo, completely unmaintainable. How were they able to convince leadership of their ideas? Sometimes it's just snake oil. 2. At my new place, I am preaching certain things to people who do listen and seem to want to do them. It makes me a bit uncomfortable, and it's a little scary how easily you can find acolytes. They do validate my suggestions, ask questions, and, most importantly, think, so I am hopeful that I won't turn out to be a false prophet.
I've also played both roles myself at times. I've been the wise consultant. And I've been the Cassandra that nobody would listen to. My wisdom was never as good as presumed when I was the consultant. And my wisdom was far better than assumed when I was the Cassandra.
The prevalent pattern I see is things becoming mundane. The capabilities you are enabling are no longer something only you could do; was your expertise ever there at all? Things running smoothly is taken for granted. Doing your job well becomes unexceptional.
This is fascinating. It sounds like you're building "cloud datastructures" based on S3+CAS. What are the benefits, in your view, of using S3 instead of, say, Dynamo or Postgres? Or reaching for NATS/RabbitMQ/SQS/Kafka? I'd love to hear a bit more about what you're building.
It's just trade-offs. If you have a lot of data, S3 is just about the only option for storing it. You don't want to pay for petabytes of storage in Dynamo or Postgres. I also don't want to manage Postgres, even RDS: dealing with write loads that S3 handles easily is very annoying, dealing with availability, etc. It's all painful. S3 "just works", but you need to build some of the protocol yourself.
If you want consistently low latency / can't tolerate a 50ms spike, don't retain tons of data, have <10K/s writes, and need complex indexing that might change over time, Postgres is probably what you want (or something like it). If you know ahead of time how your data should be indexed, need to store a massive amount, and care more about throughput than the occasional latency spike (or really a bunch of other use cases), S3 is just an insanely powerful primitive.
Insane storage also unlocks new capabilities. Immutable logs unlock "time travel" where you can ask questions like "what did the system look like at this point?" since no information is lost (unless you want to lose it, up to you).
Everything about a system like this comes down to reducing the cost of a GET. Bloom filters are your best friend, metadata is your best friend, prefetching is a reluctant friend, etc.
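To illustrate the Bloom-filter part: before paying for a GET, check a small in-memory filter; a "no" is definitive (skip the GET), while a "yes" may occasionally be a false positive. This is a minimal sketch with arbitrary sizing and a made-up hashing scheme, not anything tuned for production:

```python
import hashlib

class BloomFilter:
    """Answers 'might this key exist?' without touching object storage."""
    def __init__(self, size_bits=1024, hashes=3):
        self.size, self.hashes = size_bits, hashes
        self.bits = 0  # bit array packed into one int

    def _positions(self, key):
        # Derive `hashes` bit positions from independent salted digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # All bits set -> maybe present (issue the GET).
        # Any bit clear -> definitely absent (skip the GET).
        return all(self.bits >> p & 1 for p in self._positions(key))

bf = BloomFilter()
for key in ("logs/2024/01.gz", "logs/2024/02.gz"):
    bf.add(key)

assert bf.might_contain("logs/2024/01.gz")  # a present key always passes
# absent keys are usually rejected without a GET (false positives possible)
```

Real systems would size the filter from the expected key count and target false-positive rate, but the shape of the check is the same.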
I'm not sure what I'm building. I had this idea years ago before S3 CAS was a thing and I was building a graph database on S3 with the fundamental primitive being an immutable event log (at the time using CRDTs for merge semantics, but I've abandoned that for now) and then maintaining an external index in Scylla with S3 Select for projections. Years later, I have fun poking at it sometimes and redesigning it. S3 CAS unlocked a lot of ways to completely move the system to S3.
> You can imagine us continuing to iterate here to Step 5, Step 6, ... Step N over time. The tradeoff of each step is complexity, and complexity has to be deserved. This is working exceptionally well currently.
> Failover happens by missing a compare-and-set so there's probably a second of latency to become leader?
Conceptually that makes sense. How complicated is it to implement this failover logic in a safe way? If there are two processes, competing for CAS wins, is there not a risk that both will think they're non-leaders and terminate themselves?
7. On failure due to CAS, fail active requests and terminate
The client should have a retry mechanism against the broker (which may include looking up the address again).
From the broker's PoV, it will never fail a CAS until another broker wins one, at which point that other broker is the leader. If it does fail a CAS, the client will retry with another broker, which will probably be the leader. The key insight is that the broker reads the file once; it doesn't compete to become leader by re-reading the data, and this is OK because of the nature of the data. You could also say that brokers are set up to consider themselves "maybe the leader" until they find out they are not, and losing leadership doesn't lose data.
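The "maybe the leader until a CAS fails" behavior can be simulated in a few lines. This sketch uses a hypothetical in-memory stand-in for a bucket with conditional writes (compare-and-set on the object's ETag); the class and method names are made up for illustration:

```python
import itertools
import threading

class FakeCASStore:
    """In-memory stand-in for an object store with ETag compare-and-set."""
    def __init__(self):
        self._lock = threading.Lock()
        self._etags = itertools.count(1)
        self._objects = {}  # key -> (etag, body)

    def put_if_match(self, key, body, expected_etag):
        """Write only if the stored ETag matches; return new ETag or None."""
        with self._lock:
            current = self._objects.get(key)
            current_etag = current[0] if current else None
            if current_etag != expected_etag:
                return None  # CAS failed: someone else wrote first
            new_etag = str(next(self._etags))
            self._objects[key] = (new_etag, body)
            return new_etag

class Broker:
    """Assumes it is 'maybe the leader' until a CAS failure proves otherwise."""
    def __init__(self, name, store, key="log"):
        self.name, self.store, self.key = name, store, key
        self.etag = None   # ETag of our last successful write
        self.alive = True

    def append(self, record):
        new_etag = self.store.put_if_match(self.key, record, self.etag)
        if new_etag is None:
            self.alive = False  # lost leadership: fail requests, terminate
            return False
        self.etag = new_etag
        return True

store = FakeCASStore()
a, b = Broker("a", store), Broker("b", store)
assert a.append("r1")        # a wins the first CAS, so a is the leader
assert not b.append("r1")    # b's CAS fails: it learns it is not the leader
assert not b.alive           # b terminates; clients retry against a
assert a.append("r2")        # a keeps appending along its ETag chain
```

Note that neither broker ever thinks it is a non-leader until a write actually fails, which is why the "both terminate" scenario doesn't arise: a CAS failure implies someone else succeeded.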
The mechanism for starting brokers is only vaguely discussed, but if a host-unreachable error also triggers a new broker, there is a neat from-zero scaling property.
This is the hardest part because you can easily end up in a situation like you're describing, or having large portions of clients talking to a server just to have their writes rejected.
Further, this system (as described) scales best when writes are colocated (since it maximizes throughput via buffering). So even just by having a second writer you cut your throughput in ~half if one of them is basically dead.
If you split things up you can just do "merge manifests on conflict", since different writers would be writing to different files and the manifest is just an index, or you can do multiple manifests + compaction. DeltaLake does the latter, so you end up with a bunch of `0000.json`, `0001.json`, and to reconstruct the full index you read all of them. You still have conflicts on allocating the JSON file, but that's it: no wasted flushing. And then you can merge as you please. This all gets very complex at this stage, I think; compaction becomes the "one writer only" bit, but you can serve reads and writes without compaction.
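The "read all numbered manifests to reconstruct the index" step is simple to sketch. The file names follow the DeltaLake-style pattern mentioned above, but the index shape here is made up for illustration:

```python
# Each numbered manifest maps logical keys to the data files holding them.
# Later manifests win on conflict, so replaying them in log order
# reconstructs the current index.
manifests = {
    "0000.json": {"users/1": "data/a.parquet", "users/2": "data/b.parquet"},
    "0001.json": {"users/2": "data/c.parquet"},  # newer write for users/2
}

def reconstruct_index(manifests):
    index = {}
    for name in sorted(manifests):       # apply manifests in log order
        index.update(manifests[name])    # later entries override earlier ones
    return index

index = reconstruct_index(manifests)
assert index["users/2"] == "data/c.parquet"  # newest manifest wins
assert index["users/1"] == "data/a.parquet"  # untouched entries survive
```

Compaction would then just be rewriting this merged dict as a single manifest and deleting the old numbered files, which is why it can remain the only "one writer" step.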
Love this writeup. There's so much interesting stuff you can build on top of Object Storage + compare-and-swap. You learn a lot about distributed systems this way.
I'd love to see a full sample implementation based on s3 + ecs - just to study how it works.
If you like Lee Pace, check out The Fall (2006). It's my favorite film, incredibly ambitious and funny and yet virtually unknown to the public. Lee's performance is incredible, as is his young co-star's.
Yeah, it's somewhat splintered in that you're unsure what movie you're watching between different parts, but I have a strong love for movies that dare, and that one certainly does.
I'll also second your comment about the kid, which is one of the best child performances I've seen.
Does anyone have experience transforming a TypeScript codebase this way? TypeScript's language server isn't powerful enough and doesn't support basic refactors like removing a positional argument from a function (and updating all call sites).
Would jscodeshift work for this? Maybe in conjunction with claude?
jscodeshift supports ts as a parser, so it should work.
If you also want to remove the argument from call sites, you'll likely need to build your own tool that combines TS Language Service data with jscodeshift.
LLMs definitely help quite a bit with these codemods -- you don't need to manually figure out the details of manipulating the AST. But make sure to write tests -- a lot of them -- and come up with a way to quickly fix bugs, revert your change, and iterate. Once you have that workflow set up, you may be able to just let the LLM run in a loop until all issues are fixed.
We've been using tsgo for a few months in our team and it just works (10x faster). We bumped into some small differences in behavior with tsc but it was always because we were doing something weird in our code, and it was easy to fix.
Mad props to the team, it's an amazing achievement
We're using tsgo only on the command line btw, we're not using the vscode plugin yet
I've learned more from Charity about telemetry than from anyone else. Her book is great, as are her talks and blog posts. And Honeycomb, as a tool, is frankly pretty amazing.