sure, but now it's giving me days-old crappy posts with 3 votes from those subs as it leans wholeheartedly into the Dark Pattern of always feeding me something so I keep reflexively coming back for more.
I have stub notes for people I've met, and link to them in the journal section of my daily note when I've met them; I can then check the backlinks on that person's note when I want to check where I've seen them before.
Not the OP, but I've got a "Names to remember" evergreen note in my Reference folder. Within it, I have a few headings (e.g. neighbours, or locations), and a bullet point for each person, with context that will trigger the memory. That might sound like there's a lot of structure, but it's really the act of writing it down in the first place that helps me remember.
I use Obsidian for the same purpose as the sister comment. I have a long old note where I add the name and a bit of info for new acquaintances. Mine is characterised by the context of the acquaintance, i.e. Work/Town/Rave/Hobby/Online. I rarely need to refer back once I've written it down.
I make a new card in a folder called People whenever I meet someone. Then I add information like email address, phone number, connections to other people when relevant.
I find engaging like this helps my memory already on its own, but if I'm ever really stuck with a name I just take a quick look at my phone. The person is usually linked to the event where I first met them or similar.
I don’t work at a dating company, but I do work in machine learning applications.
My best guess is this: they are not optimizing for good vs. great matches, and they are probably not even building a model of what that would mean, let alone trying to represent the concept in their algorithms.
Most likely they are optimizing for one or more metrics that are easy to measure and hence optimize, and these metrics have the side effect of producing an excitement for the user without actually pairing them up.
But the end result is effectively the same. If you throw in the constraints of what GP mentioned about customer retention, at the Pareto frontier it boils down to the same optimization, just that instead of manually optimizing the specific variables they become latent variables. There is no difference in the resultant enshittification.
If they're optimizing for engagement (the same as Netflix and everyone else), that comes from the number of active conversations you wind up having. If you're swiping a ton but matches never message you, you'll give up and try another app -- that's not engagement. The more engagement you have from real back-and-forth messages (not spam), the more real-life dates you go on (if you're doing it right). And the more people you meet, the more likely you are to leave.
It's not "enshittified", dating is just hard. People are picky and it's difficult to get a good read on people from just their online profiles. The dating apps just want to keep you engaged and spending money. They don't need to make finding matches harder because they're worried you'll leave -- finding matches is hard enough to begin with. Trying to make it harder is the least of their concerns. They're trying to give you as good of a service as possible, while getting you to pay. So they limit numbers of matches to get you to pay. They're not limiting quality of matches.
I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results; "multi-day deep thinking" sounds like an outrageous luxury.
Even 30 years ago when I started in the industry, most jobs required very little deep thinking. All of mine has been done on personal projects. That's just the reality of the typical software engineering job.
My feeling is that AI-generated code is disposable code.
It’s great if you can quickly stand up a tool that scratches an itch for you, but there is minimal value in it for other people, and it probably doesn’t make sense to share it in a repo.
Other people could just quickly vibe-code something of equal quality.
That's how I've been using and treating it, though I'm not primarily a developer. I work in ops, and LLMs write all sorts of disposable code for me. Primarily one-off scripts or little personal utilities. These don't get shared with anyone else, or put on github, etc. but have been incredibly helpful. SQL queries, some python to clean up or dig through some data sets, log files, etc. to spit out a quick result when something more robust or permanent isn't needed.
Plus, so far, LLMs seem better at writing code to do a thing than at doing the thing directly, where they're more likely to hallucinate, especially when it comes to working with large CSV or JSON files. "Re-order this CSV file to be in alphabetical order by the Name field" will make up fake data, but "Write a Python script to sort this CSV alphabetically by the Name field" will succeed.
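The kind of throwaway script I mean is genuinely tiny. Here's a minimal sketch of the CSV-sorting case (file names and the "Name" column are just for illustration):

```python
import csv

def sort_csv_by_name(in_path, out_path):
    # Read all rows, sort case-insensitively by the "Name" column,
    # and write the result to a new file with the same header.
    with open(in_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = sorted(reader, key=lambda r: r["Name"].lower())
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

The point is that the LLM never touches the data itself; the deterministic script does, so there's nothing to hallucinate.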
That's exactly my experience as well. AI will read only the first 100 lines of a file, decide that's good enough, and spit out a garbage result. But ask it to write a bash one-liner and it will work perfectly.
My growing (cynical) feeling is that AI-generated code is legacy-code-as-a-service. It is by nature trained on other people's and companies' legacy code. (The training-set window is always in the past, and there's the economic question of which companies would ever volunteer to opt their best proprietary production code into training sets. Sure, there are a few entirely open-source companies, but those are still the exception and not the rule.) "Vibe code" is essentially delivered as Day Zero "Legacy Code" in the sense that the person who wrote that code is essentially no longer at the company. Even if context windows get extended to incredibly huge sizes and you have great prompt-preservation tools, eventually you no longer have the original context; not to mention that the models themselves retrain and get upgraded every so many months and are essentially "different people" each time. But most importantly, the models themselves can't tell you the motivating "how" or "why" of anything. At best, maybe good spec documents and prompts do, but even that is a gamble.
The article starts with a lot of words about how the meaning and nature of "tech debt" are going to change a lot as AI adoption increases and more vibe coding happens, but I think I disagree on what that change means. I don't think AI reduces "tech debt". I don't think it is "deflationary" in any way. I think AI is going to gift us a world of tech debt "hyperinflation". When every application in a company is "legacy code", all you have is tech debt.
Having worked in companies with lots of legacy code, the thing you learn is that those apps are never as disposable as you want to believe. The sunk cost fallacy kicks in. (Generative AI Tokens are currently cheap, but cheap isn't free. Budgets still exist.) Various status quo fallacies kick in: "that's how the system has always worked", "we have to ensure every new version is backwards compatible with the old version", "we can't break anyone's existing process/workflow", "we can't require retraining", "we need 1:1 all the same features", and so forth.
You can't just "vibe code" something of equal quality if you can't even figure out what "equal quality" means. That's often the death of a legacy code "rewrite project". By the time you've figured out how every user uses it (including how many bugs are load-bearing features in someone's process) you have too many requirements to consider, not enough time or budget left, and eventually a mandate to quit and "not fix what isn't broken". (Except it was broken enough to start up a discovery process at least once, and may do so again when the next team thinks they can dream up a budget for it.)
Tech debt isn't going away and tech debt isn't getting eliminated. Tech Debt is getting baked into Day Zero of production operations. (Projects may be starting already "in hock to creditors". The article says "Dark Software Factory" but I read "Dark Software Pawn Shop".) Tech debt is potentially increasing at a faster than human scale of understanding it. I feel like Legacy Code skills are going to be in higher demand than ever. It is maybe going to be "deflationary" in cost for those jobs but only because the supply of Legacy Code projects will be so high and software developers will have a buffet to choose from.
I don't see why AI would be able to help you solve all your legacy code problems.
It still struggles to make changes to large code bases, but it has no problem explaining those code bases to you, helping you research or troubleshoot functionality 10x faster, especially if you're knowledgeable enough not to take its responses as gospel but are willing to have the conversation. A simple layman prompt of "are you sure X does Y for Z reason? Then what about Q?" will quickly get to the bottom of any functionality. A one-million-token context window is very capable if you manage that context window properly with high-level information and not just your raw code base.
And once you understand the problem and the required solution, AI won't have any problem producing high-quality working code for you, be it in Rust or COBOL.
In my experience with Legacy Code projects the problem is very rarely "what is this code doing?" Some languages like VB6 (or even COBOL) are just full of very simple "what" answers. Obfuscation is rare and the language itself is easy to read. Reading the code with my own eyes gives me plenty of easy enough answers for the "what". LLMs can help with that, sure, but that's almost never the real skill in working with "legacy code".
The problem with working with legacy code, and where most of the hardest won skills are, is investigating the "how" and the "why" over the "what". I haven't seen LLMs be very successful at that. I haven't seen very many people including myself always be very successful at that. A lot of the "how" and the "why" becomes a mystery of the catacombs of ancient commit messages and mind reading seance with developers no longer around to question directly. "Why is this code doing what it is doing?" and "How did this code come to use this particular algorithm or data structure?" are frighteningly, deeply existential questions in almost any codebase, but especially as code falls into "legacy" modes of existence.
Some of that becomes actual physical archeology that LLMs can't even think to automate: the document you need is trapped in a binder in a closet in a hallway that the company sealed up and forgot about for 30 years.
Usually the answers, especially these days, were never written down on anything truly permanent. There was a Trello board that no one bothered to archive when the project switched to Jira. Some of the # references seem to be to BitBucket Issues and Pull Requests numbers, was the project ever hosted on Bitbucket? No one archived that either. (This is an old CVS ID. I didn't even realize this project pre-dated git.) The original specs at the time of the MVP were a whiteboard and a pizza party. One of the former PMs preferred "hands on" micro-management and only ever communicated requirements changes in person to the lead dev in a one hour "coffee" meeting every Wednesday and sometimes the third Thursday of a month. The team believed in a physical Kanban board at the time and it was all Post-It Notes on the glass window in the conference room named "Cactus Joe". I heard from Paul who was on a different project at the time that Cathy's cube was right next to that window and though she was only an Executive Assistant at the time she moved a lot of those Post-It Notes around and might be able to tell you stories about what some of them said if you treat her to a nice lunch.
Software code is poetry written by people. The "what" is sometimes just the boring stuff like does every other line rhyme and are the right syllables stressed. The "how" and "why" are the stories that poetry was meant to tell, the reasons for it to exist, and the lessons it was meant to impart. Sometimes you can still even read some of that story in the names of variables and the allegories in its abstractions, when a person or two last shaped it, as you start to pick up their cultural references and build up an empathy for their thought processes ("mind reading", frighteningly literally).
That's also why I fear for LLMs only accelerating that process: a hallway with closets getting bricked up takes time and creates certain kinds of civic paperwork. (You'll discover it eventually, if only because the company will renovate again, eventually.) Whereas, a prompt file for a requirements change never getting saved anywhere is easy to do (and generally the default). That prompt file probably wasn't kicked up and down a change management process nor debated by an entire team in a conference room for days, human memory of it will be just as nonexistent as the file no one saved. LLMs aren't even always given the "how" or "why" as they are from top to bottom "what machines", that stuff likely isn't even in the lost prompts. If a team is smaller or using a "Dark Software Factory" is there even reason to document the "how" or "why" of a spec or a requirement?
In further generalization, with no human writing the poetry the allegories and cultural references disappear, the abstractions become just abstractions and not illuminating metaphors. LLMs are a blender of the poetry of many other people, there's no single mind to try to "read" meaning from. There's no clear thought process. There's no hope that a ranty monologue in a commit message unlocks the debate that explains why a thing was chosen despite the developer thinking it a bad idea. LLMs don't write ranty monologues about how the PM is an idiot and the users are fools and the regulatory agency is going to miss the obvious loophole until the inevitable class action suit. Most of those are concepts outside of the scope of an LLM "thought process" altogether.
The "what is this code doing" is the "easy" part, it is everything else that is hard, and it is everything else that matters more. But I know I'm cynical and you don't have to take my word for it that LLMs with "legacy code" mostly just speed up the already easy parts.
It’s a broad subject. You have the existence of the Philippines itself, which is the only majority Christian nation in Asia, and given it passed from Spain to the USA, this influences everything from US force projection in the Pacific to the prevalence of BPO outsourcing to the sectarian conflicts in Mindanao.
More directly, the trade itself and the silver it brought permanently altered the demographics, flora, and fauna of the region. Southern Chinese traders migrated in greater numbers throughout the region, and New World fruits and vegetables such as sweet potato and pineapple were introduced. Chinese merchants in the region produced Christian religious art for export back to the Old World, giving rise to a unique hybrid artistic style.[0] The region as a whole became more attractive to the Dutch and the British, which had its own set of downstream consequences …
I’m sorry I don’t have a single source that pulls this all together, you would probably need to pick an area and start reading!
- Is the work easier to do? I feel like the work is harder.
- Is the work faster? It sounds like it’s not faster.
- Is the resulting code more reliable? This seems plausible given the extensive testing, but it’s unclear if that testing is actually making the code more reliable than human-written code, or simply ruling out bugs an LLM makes but a human would never make.
I feel like this does not look like a viable path forward. I’m not saying LLMs can’t be used for coding, but I suspect that either they will get better, to the point that this extensive harness is unnecessary, or they will not be commonly used in this way.
I have a feeling that the real work of writing a complex application is in fully understanding the domain logic in all its gory details and creating a complete description of your domain logic in code. This process that OP is going through seems to be "what if I materialize the domain logic in tests instead of in code." Well, at first blush, it seems like maybe this is better because writing tests is "easier" than writing code. However, I imagine the biggest problem is that sometimes it takes the unyielding concreteness of code to expose the faults in your description of the domain problem. You'd end up interacting with an intermediary, using the tests as a sort of interpreter as you indirectly collaborate with the agent on defining your application. The cost of this indirection may be the price to pay for specifying your application in a simpler, abstracted form. All this being said, I would expect the answers to "is it easier? is it faster?" would be: well, it depends. If it can be better, it's certainly not always better.
It's not mutually exclusive. We write tests precisely because expressing a complex application is hard without them. But to your point, we should not wave away applications that cannot be understood with extra tests. I agree.
> - Is the work faster? It sounds like it’s not faster.
The author didn't discuss the speed of the work very much. It is certainly true that LLMs can write code faster than humans, and sometimes that works well. What would be nice is an analysis of the productivity gains from LLM-assisted coding in terms of how long it took to do an entire project, start to finish.
Don’t automation technologies improve the productivity of labor?
If I can make one widget per hour, and some new tool lets me make 10 widgets per hour?
Conventional economic theory suggests the gain will be split between the widget-maker and the widget-consumer, in proportions determined by the relative slopes of the supply-demand curves, but definitely the product will become somewhat cheaper.
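A toy linear supply-demand model (all numbers invented for illustration) makes the slope-splitting claim concrete:

```python
def equilibrium(a, b, c, d):
    # Linear demand P = a - b*Q, linear supply P = c + d*Q.
    # Setting them equal gives the market-clearing quantity and price.
    q = (a - c) / (b + d)
    return q, a - b * q

# Baseline: demand P = 100 - 1*Q, supply P = 20 + 1*Q.
q0, p0 = equilibrium(100, 1, 20, 1)   # Q = 40, P = 60

# Suppose a new tool cuts the widget-maker's marginal cost by 10.
q1, p1 = equilibrium(100, 1, 10, 1)   # Q = 45, P = 55
```

With equal slopes on both curves, the price falls by exactly half the cost saving (from 60 to 55 here), so maker and consumer split the gain evenly; steepen the demand curve relative to supply and a larger share of the saving shows up as a lower price.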
It depends on what you mean by "high quality", I suspect. Above a relatively low floor, price of furniture seems unrelated to (e.g.) sturdiness or expected lifespan. It's more like fashion, in that you are paying for names or decoration.
Probably a difficult question to answer. My sense is that high quality new hardwood furniture hasn't gotten especially more expensive over recent time (adjusted for inflation) but you'd probably have to ask someone in the industry who is more plugged into various cost inputs.
>seems unrelated to (e.g.) sturdiness or expected lifespan
I'll disagree somewhat. At least in New England, there are smallish manufacturers who make high quality products that you're not going to find in most, if any, of the large retailers.
This is exactly Baumol: If by “high quality” you mean “difficult to mass produce” then yes, the lack of productivity gains in hand-made furniture makes real costs go up.
Of course, it’s easy to mass produce sturdy furniture, such as office furniture. But it’s not what consumers think of as “high quality”.