Hacker News | lubujackson's comments

As a writer and engineer, I don't see it.

Can AI kludge together a ripping story? Sure. But there is a reason people still write new books and buy new books - we crave the human connection and reflection of our current times and mores.

This isn't just a high art thing. My kids read completely different YA novels than I did, with just a few older canon titles persisting. I can hand them a book I loved as a kid and it just doesn't connect with them anymore.

How I think AI CAN produce art that people want is through careful human curation and guided generation. This is structurally the same as "human-in-the-loop" programming. We can connect to the artistry of the construction, in other words the human behind the LLM that influenced how the plot was structured, the characters developed and all the rest.

This is akin to a bad writer with a really good editor, or maybe the reverse. Either way, I think we will see a bunch of this and wring our hands because AI art is here, but I don't think we can ever take the human out of that equation. There needs to be a seed of "new" for us to give a shit.


Again, this article is not discussing the quality of generative AI. Sanderson clearly believes that AI is already able to produce things that are, to his eyes, indistinguishable from art.

What this article is trying to get across is that art is a transformative process for the human who creates it, and that using LLMs to quickly generate results robs the would-be artist of the chance for that transformation to take place. Here's a quote from Sanderson:

"Why did I write White Sand Prime? It wasn’t to produce a book to sell. I knew at the time that I couldn’t write a book that was going to sell. It was for the satisfaction of having written a novel, feeling the accomplishment, and learning how to do it. I tell you right now, if you’ve never finished a project on this level, it’s one of the most sweet, beautiful, and transcendent moments. I was holding that manuscript, thinking to myself, “I did it. I did it.”"


This is such a sad comment.

This is great, I will definitely make use of this!

Love the idea, but tried "japantown" which is mentioned in the README but doesn't exist in the app? https://microclimates.solofounders.com/sf-weather/japantown

thanks for catching this. just fixed.

note that I also have a system where, if a temperature looks like an outlier compared to its direct neighbors, it averages the 3 nearest neighbors instead. this usually occurs in neighborhoods with a single sensor, which can skew the results heavily at certain times of the day, etc.
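A minimal sketch of how that smoothing could work. The function name, the `threshold`, and the `(distance, temp)` input shape are all assumptions for illustration, not the app's actual implementation:

```python
def smooth_reading(temp, neighbors, threshold=5.0):
    """Return `temp`, unless it deviates sharply from nearby sensors.

    neighbors: list of (distance, temp) pairs for surrounding
    neighborhoods. If this reading differs from the average of the
    3 geographically nearest neighbors by more than `threshold`
    degrees, return that average instead.
    """
    nearest3 = [t for _, t in sorted(neighbors)[:3]]
    avg = sum(nearest3) / len(nearest3)
    if abs(temp - avg) > threshold:
        return avg
    return temp
```

With a single skewed sensor reading 75° surrounded by neighbors near 60°, this would report 60°; a reading of 61° would pass through untouched.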


I understand the hand-wringing in the humanities, but even if LLMs can output competent stories, they can't really ever get past pulp-fiction levels.

Question: why do people write and buy new books? There are millennia of amazing works, but people prefer to read new things. Some gems persist, but even my kids read entirely different YA novels than I did as a kid.

Art is communication. A big part of why we like new art is because it reflects the culture and mores of the now. LLMs can only give you the most likely next-token prediction - it is antithetical to what we want when we interact with art.

Not to say it can't spit out reasonable plots for shows or be a valuable aid to writers, but I think it will continue to serve best as an aid to humans, even more so than in coding. Maybe artists will turn into LLM wranglers like software engineers are starting to, but a human-in-the-loop is going to remain valuable.


I agree whole-heartedly with the source article as well as this comment. The point is that the work of estimation is most of the work. We can have better estimates if we break things down to bite-sized chunks, but "when will this be done" is largely impossible and comes down to many external factors. Laypeople understand this implicitly in other contexts.

My favorite metaphor is building something like a new shopping mall. If you ask for an estimate you first need to architect the entire thing. This is equivalent to breaking down the task into sprints. In most companies the entire architecture phase is given very little value, which is insane to me.

Once we have our blueprints, we have other stakeholders, which is where things really go off the rails. For the mall, maybe there is an issue with a falcon that lives on the land and now we need to move the building site, or the fixtures we ordered will take 3 extra months to be delivered. This is the political part of estimating software and depends a lot on the org itself.

Then, finally, building. This is the easy part if we cleared the precursor work. Things can still go wrong: oops, we hit bedrock; oops, a fire broke out; oops, the design wasn't quite right; oops, we actually want to change the plan.

Yes, estimates are important to businesses. But businesses have a responsibility to account for the difference. Get me to a fully ticketed and approved epic and most engineers can give you a pretty good estimate. That is what businesses want, but they don't consider the necessary precursor work when they Slack you "how long to build a mall?"


I've also seen it argued that real world estimates for things like construction projects are so good because 99% of it is do-overs from similar projects in the past; everyone knows what it takes to pour a column or frame a floor or hang a beam.

Whereas with software, most of what was done previously is now an import statement, so up to 80-100% of the project is the novel stuff. Skilled leaders/teams know to direct upfront effort toward exploring the least understood parts of the plan to help reduce downstream risk, but to really benefit from that instinct the project plan has to regularly incorporate its findings.


Real world estimates for construction projects are often way off. Especially for remodeling or renovation of older buildings, where the most serious problems can remain hidden until you get into the demolition phase.

Indeed yes. Union Station in Toronto has been like this; twenty years in and no end in sight because every wall they open reveals more problems to solve.

Having done a similar "rendition" to a book of poetry, I agree it is not the same as translating directly. It does open up a question about the fuzziness of "what is even translation?"

Especially when we talk about translating historic writing. Yes, not knowing the source language is a huge barrier. But so is not knowing specific cultural touchstones or references in the text. In-depth translations usually transliterate as a part of the process. Many words and language patterns are untranslatable, which is why perfect translations are impossible.

When translating poetry, issues of meter and rhythm are even more important. It comes down to what the purpose of a translation is meant to achieve. Yes, there are ideas and themes but there is no hiding the fact that translators always imprint their own perspective on a work - it's unavoidable and personally shouldn't even be the goal.

Most translators of popular texts look closely at other translations to "triangulate" on meaning and authorial intent. Older translations may use archaic writing but have historical understanding, well-researched translations may be more precise about tricky words or concepts. More "writerly" translations tend to rebuild the work from the building blocks and produce a more cohesive whole. None of these are wrong approaches.

I like the term "rendition" because it throws away the concept of the "authoritative translation". I like to think of translations the same way as cover songs. The best covers may be wildly different from the original but they share the same roots.

As a reader, even if you can't ever "hear" the original because you don't know the language, you can still appreciate someone's "cover version", or triangulate the original by reading multiple translations.


Beautifully, this reads like it came right out of Le Guin's rendition of the Tao Te Ching:

Most translators of popular texts look closely at other translations to "triangulate" on meaning and authorial intent. Older translations may use archaic writing but have historical understanding, well-researched translations may be more precise about tricky words or concepts. More "writerly" translations tend to rebuild the work from the building blocks and produce a more cohesive whole. None of these are wrong approaches.


For those with a passing interest in this topic and quite some patience, "Le Ton beau de Marot" by Douglas Hofstadter is a whole book of musings about translation, particularly of poetry.

It's a fun book full of interesting linguistic trivia.

The patience would be needed to get through the 50 or so translations of the same poem, all different and "wrong" in some way.


also recommended: After Babel by George Steiner.

https://en.wikipedia.org/wiki/After_Babel


See also the preface[1] to Hofstadter's translation[2] of Eugene Onegin.

[1] https://jasomill.at/HofstadterOneginPreface.pdf

[2] https://en.wikipedia.org/wiki/Special:BookSources/0-465-0209...


I guess this is Anthropic's "don't be evil" moment, but it carries about as much (actually much less) weight than when it was Google's motto. There is always an implicit "...for now".

No business is ever going to maintain any "goodness" for long, especially once shareholders get involved. This is a role for regulation, no matter how Anthropic tries to delay it.


At least when Google used the phrase, it had relatively few major controversies. Anthropic, by contrast, works with Palantir:

https://www.axios.com/2024/11/08/anthropic-palantir-amazon-c...


> Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which enables directors to balance stockholders' financial interests with its public benefit purpose.

> Anthropic's "Long-Term Benefit Trust" is a purpose trust for "the responsible development and maintenance of advanced AI for the long-term benefit of humanity". It holds Class T shares in the PBC, which allow it to elect directors to Anthropic's board.

https://en.wikipedia.org/wiki/Anthropic

Google didn't have that.


It says: This constitution is written for our mainline, general-access Claude models. We have some models built for specialized uses that don’t fully fit this constitution; as we continue to develop products for specialized use cases, we will continue to evaluate how to best ensure our models meet the core objectives outlined in this constitution.

I wonder what those specialized use cases are and why they need a different set of values. I guess the simplest answer is they mean small FIM (fill-in-the-middle) and tool models, but who knows?



> This is a role for regulation, no matter how Anthropic tries to delay it.

Regulation like SB 53 that Anthropic supported?

https://www.anthropic.com/news/anthropic-is-endorsing-sb-53


Yes, just like that. Supporting regulation at one point in time does not undermine the point that we should not trust corporations to do the right thing without regulation.

I might trust the Anthropic of January 2026 20% more than I trust OpenAI, but I have no reason to trust the Anthropic of 2027 or 2030.


There's no reason to think it'll be led by the same people, so I agree wholeheartedly.

I said the same thing when Mozilla started collecting data. I kinda trust them, today. But my data will live with their company through who knows what--leadership changes, buyouts, law enforcement actions, hacks, etc.


I don’t think the “for now” is the issue so much as the fact that nobody thinks they are doing evil.

This is fun math to play with, but it completely misses the point of how and why options are priced the way they are. Think of horse racing: when a horse is posted at 1000-to-1, the true odds of that horse winning are much, much lower than the price implies. The probability is non-zero, but the oddsmakers are considering who is on the other side of the bet and why they are buying that ticket.

Most options are actually used to hedge large positions and are rolled over well before the "due date". YOLOing calls and puts is a Robinhood phenomenon, and the odds of "fair pricing" are heavily affected by these big players, so using that data as some sort of price discovery is flawed from the get-go.
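To make the horse-racing point concrete, here is a small sketch of the implied-probability arithmetic. The function name and the shading factor are illustrative assumptions, not real market data:

```python
def implied_prob(odds_against):
    """Win probability implied by fractional odds-against.

    "1000 to 1 against" pays 1000 per unit staked on a win, so a
    fair price would imply a win probability of 1 / (odds + 1).
    """
    return 1.0 / (odds_against + 1.0)

# Price-implied probability at 1000-to-1: about 0.0999%.
p_implied = implied_prob(1000)

# Oddsmakers shade longshot prices (the favorite-longshot bias),
# so the true probability is typically well below the implied one.
# The factor below is an illustrative assumption, not data.
shade = 0.5
p_true_estimate = shade * p_implied
```

The gap between `p_implied` and the true probability is the oddsmaker's answer to "who is buying this ticket, and why?" - the same asymmetry the comment above describes for deep out-of-the-money options.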


Are you saying options are not priced at the cost of hedging them? That implies a lot of money could be made by arbitraging between the hedge and the option.

That sounds like an egregious statement. Markets don't have simple persistent arbitrage opportunities like that, do they?


Based on the trajectory of LLMs I bet a good tech writer will soon be a more valuable engineer than a "leetcode-hard" engineer for most teams.

Obviously we still need people to oil the machine, but... a person who deeply understands the product, can communicate shortcomings in process or user flows, can quickly and effectively organize their thoughts and communicate them, can navigate up and down abstraction levels and dive into details when necessary - these are the skills that working with LLMs requires.

