Hacker News | islandert's comments

Would the JVM ecosystem almost be a working example of this? Since there are a variety of languages with editor integration that all compile down to the same byte code, it feels pretty close to what you’re describing.


I believe that JVM bytecode is too low level for a source format. At the very least you would need to preserve some form of comments. Also I think JVM locals lose their names during compilation.


The JVM languages are pretty damn close!


Unison would fit here


I made a CLI tool called kilojoule that is similar to jq. In addition to the normal suite of JSON manipulations, it also has support for a couple of other file formats and can call other shell commands.

https://github.com/stevenlandis/kilojoule

I’ve found it to be a pleasant multi tool for interacting with the shell when I need something a little more than bash.

It took a couple of weekends and evenings to get working but was a really fun way to learn about parsers, interpreters and Rust.


TikTok?


If you don't have access to COPY because the Postgres instance is managed, I've had a lot of luck with encoding a batch of rows as a JSON string, sending the string as a single query parameter, and using `json_to_recordset` to turn the JSON back into a set of rows in the db.

I haven't benchmarked this against a low-level SQL library, but it outperforms everything else I've tried in SQLAlchemy.
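A minimal sketch of the trick described above, assuming a psycopg2-style cursor; the `events` table and its columns are hypothetical, chosen just to illustrate the shape of the query:

```python
import json

def batched_insert(cur, rows):
    """Insert a batch of dicts in one round trip: ship them as a single
    JSON string parameter and unpack server-side with json_to_recordset.
    Table and column names here are illustrative only."""
    payload = json.dumps(rows)  # one parameter, no matter how many rows
    cur.execute(
        """
        INSERT INTO events (id, name)
        SELECT id, name
        FROM json_to_recordset(%s::json) AS r(id int, name text)
        """,
        (payload,),
    )

# Usage sketch (requires a live connection, e.g. psycopg2):
# with conn.cursor() as cur:
#     batched_insert(cur, [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}])
```

The column list in the `AS r(...)` clause tells Postgres how to type each JSON field, so the whole batch arrives as one statement with one bind parameter.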


When I see suggestions like this (totally reasonable hack!), I do have to wonder what happened to JDBC’s “addBatch()/executeBatch()”, introduced over 25 years ago.

https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedS...

Did modern APIs & protocols simply fail to carry this evolutionary strand forward?


I love that writing LLM-friendly docs is just... writing good docs. There's a ton of overlap between accessibility work and preparing things to be used by LLMs.

I wonder if an unintended side effect of this AI hype cycle is a huge investment in more accessible applications.


Exactly! It's very much garbage-in-garbage-out.

One surprising (to me at least) benefit of hooking up an LLM to your docs is that it is actually a really useful way to find gaps in your docs. For example, when an LLM cannot answer a user question, there's a good chance it's because the answer is not documented anywhere.


What about when it answers with confabulation?


Confabulation should not be stable. I.e., you can generate the answer, say, 3 times with different seeds / nonzero temperature, and if it arrives at different answers you can categorize it as confabulation - which likely means, again, that the answer is not present, and humans would likely also arrive at different interpretations.
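The sampling idea above can be sketched as a small consistency check. `ask` is a hypothetical callable wrapping whatever LLM you use, and the string normalization is deliberately crude (a real version might compare embeddings or use a judge model):

```python
import collections

def normalize(answer: str) -> str:
    # Crude canonicalization; real use would want something fuzzier.
    return " ".join(answer.lower().split())

def stable_answer(ask, question: str, n: int = 3):
    """Sample the model n times (different seeds / nonzero temperature).
    If the samples disagree, treat the answer as likely confabulated
    and return None -- a signal that the docs may have a gap."""
    counts = collections.Counter(normalize(ask(question)) for _ in range(n))
    answer, votes = counts.most_common(1)[0]
    return answer if votes == n else None
```

Returning `None` on disagreement is what lets you route the question into a "not documented anywhere" bucket rather than surfacing a made-up answer.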


I guess then it becomes a threat to Google's business model.


Just like how (it used to be) that writing good content would rank high in search results. But writing good content is hard. So we'll kid ourselves into writing good content for LLMs. Which is equally hard, but we'll feel like we're getting a leg up over everyone else -- who are all also doing the same thing.

It's unfortunate that people are more motivated to write for LLMs that are then used by humans than write for humans to begin with. Especially when the reason to use LLMs is because, on average, content is subpar making it difficult to find the good content.

Another case of Tragedy of the Commons Ruins Everything Around Me.


Fair point. Although good writing for humans = good writing for LLMs and vice versa. So I'm hopeful this new excitement around AI for docs if anything will just encourage folks to put even more effort into writing great docs.


No, it's not a fair point IMHO. LLMs are arguably the best way we've found to organize and represent textual information.

A document you can hold a meaningful conversation with is a big freaking deal, far superior to any conventional resource when you're trying to learn how to do something new.


My current working hypothesis is that the way to get the best out of an LLM (and any AI which uses them as the human interface layer) is the same way to get the best out of a human — because it's trained on humans interacting with other humans.

If you yell and swear at the chatbot, you'll get the response most similar to how a human would respond to yelling and swearing. I know the stereotype about drill instructors, but does that even work for marines, or is it just an exercise in learning to cope with stress?


There’s a straightforward way to reach this testing state for optimization problems. Write 2 implementations of the code, one that is simple/slow and one that is optimized. Generate random inputs and assert outputs match correctly.

I’ve used this for leetcode-style problems and have never failed on correctness.

It is liberating to code in systems that test like this for the exact reasons mentioned in the article.
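A concrete sketch of the two-implementation pattern described above, using maximum-subarray-sum as a stand-in leetcode-style problem (the problem choice is mine, not the commenter's): a brute-force reference, an optimized version, and random inputs asserting the outputs match.

```python
import random

def max_subarray_slow(xs):
    # Obviously-correct O(n^2) reference: try every contiguous subarray.
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def max_subarray_fast(xs):
    # Kadane's algorithm, O(n): the optimized implementation under test.
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# Differential test: random inputs, outputs must agree exactly.
rng = random.Random(0)
for _ in range(200):
    xs = [rng.randint(-10, 10) for _ in range(rng.randint(1, 30))]
    assert max_subarray_fast(xs) == max_subarray_slow(xs), xs
```

The slow version only needs to be obviously correct; the random generator then does the work of finding edge cases you would never think to write by hand.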


Non-overlapping problem spaces.

Leet-code ends in unit-testing land, this product begins in system-testing land.


Not high level, but an RTO policy seems like an easy way to downsize without having to fire people now that interest rates are higher.


I'm finishing up reading Cadillac Desert, a book describing the history of dam building and irrigation projects in the US. It goes deep into all aspects of irrigation projects, from environmental to economic to political.

Very cool to hear that this dam is getting torn down just 30 years after the book was published.


That book’s a classic for a reason! It’s a fascinating read, whether you’re a water nerd like I am or not - very accessible. I tell people to check it out if they enjoyed Chinatown, since the historical episode it drew inspiration from is so much wilder.


Great book. If you like fiction, also check out The Monkey Wrench Gang.


Seconding that recommendation!


My take is that dicts are fine as long as your code is well tested. Yeah, dataclasses and frozen classes have much better typing support, but if your code is mostly reading and writing JSON like many modern cloud apps, it can be easier to use plain dicts combined with decent tests to make sure you don't break downstream services.


I’m currently in the middle of refactoring a well-tested but dict-ly typed Python codebase into dataclasses.

Once your app involves a certain amount of business logic, those dicts just beg for you to shove in just that one more temporary field that you’ll be needing later in the calculation.

Of course you can abuse classes just the same. But I feel it happens less often. There’s more social control. The type has a name now. And you have to visit it at home whenever you’re trying to add a field to it. My impression is that some people are underestimating the psychological power of those nudging factors.
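The "frozen type with a name" nudge described above can be sketched in a few lines; the `Order` payload and its fields are purely illustrative, not anyone's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    """A named, frozen type: adding a field means editing this definition."""
    order_id: int
    total_cents: int

def parse_order(raw: dict) -> Order:
    # The boundary still speaks JSON/dicts; everything past it gets a type.
    return Order(order_id=raw["order_id"], total_cents=raw["total_cents"])

order = parse_order({"order_id": 7, "total_cents": 1250})
# order.temp_field = 1  -> raises FrozenInstanceError: no sneaking in
# "just one more temporary field" mid-calculation.
```

With a plain dict, that temporary field is one `d["temp"] = ...` away; with a frozen dataclass, you have to "visit the type at home" and justify the new field in its definition.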


Refactoring those services sucks though if the data model changes. :(


I remember when that popped up on my screen in college. Fun problems. There was one where you derived the page rank algorithm, but they framed it as radioactive decay.

Definitely a little questionable though for them to use their browser monopoly and search history to poach developers...

