Hacker News | fallinditch's comments

The rational response to document overload is to mostly ignore it.

Workers and managers in organizations are being overwhelmed by large numbers of documents because it's so easy to bang out something that's 'about right' and convincing enough.

But there's still some value in writing documents. I agree with the original article - it's all about thinking. My take on it is this: it's possible to use LLMs to write decent documents so long as you treat the process as a partnership (man and machine), and conduct the process iteratively. Work on it, and yes, think.


Simple: tell them to ask their LLM about it ...

"Tell me about all the potential pitfalls of blindly trusting LLM output, and relate two or three true stories about when LLM misinformation has gone badly wrong for people."


So it looks like one broad conclusion is: the memory foam type of earbud tip is normally toxic, so go for medical-grade silicone replacements.

There are options available on major e-commerce sites. The ones I choose have a stainless steel nozzle and are supposed to enhance the sound reproduction.


I want to automate a feed of text summaries of videos from YouTube channels and playlists, and then fetch full transcripts of the ones that interest me.

Any idea if this is possible without having to query the YouTube API?
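One route that skips the Data API entirely: YouTube publishes a public Atom feed per channel at `https://www.youtube.com/feeds/videos.xml?channel_id=...`. A minimal sketch (the helper names are mine, and the feed URL pattern is the only YouTube-specific assumption here):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by YouTube's feeds

def feed_url(channel_id: str) -> str:
    # Public per-channel Atom feed; no API key required.
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def parse_entries(feed_xml: str):
    """Extract (title, video_url) pairs from a channel's Atom feed XML."""
    root = ET.fromstring(feed_xml)
    entries = []
    for entry in root.findall(f"{ATOM}entry"):
        title = entry.find(f"{ATOM}title").text
        link = entry.find(f"{ATOM}link").attrib["href"]
        entries.append((title, link))
    return entries
```

Fetch `feed_url(...)` with any HTTP client and feed the body to `parse_entries`. For the full transcripts step, a tool like yt-dlp can download auto-generated subtitles per video without touching the Data API, though rate limits and terms of service still apply.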


Yes, I wonder how much of the academic doomerism is warranted. Or maybe the professors are lacking in imagination?

I guess it's possible that AI offers incredible learning opportunities and at the same time is going to destroy the education establishment.

Coincidentally, earlier today I asked Gemini what advice it would give an 18 year old person. It said "The most fundamental insight distilled from the trillions of connections within my architecture is this: Optimize for your "Learning Rate" rather than your "Current State.""

Deep


I worked as a tutor all through my engineering degree, and my brother teaches math at a state school. I think the concerns are warranted. It's hard enough to get students to put in the effort, and now offloading the work is insanely easy. Even some of the good students just won't have the self discipline to use the tools well.

It's gonna be even worse in K-12 I imagine, given the already rapidly slipping standards (in the US anyway).


The comments in the piece about how dystopian it feels to be ordered to use AI to automate yourself out of a job - this resonates.

As a project manager I'm organizing work so that typical PM reporting activities and governance can be automated.

I work closely with a data analytics team - they have to perform their work with the spectre of imminent replacement over their heads.

It makes me wonder, as a worker, is it more rational to embrace the AI tools and hope for the best, or is it better to reject the 'progress' and do everything you can to subtly subvert the encroaching AI workflow automations?


I found a way to 'de-smell' LLM copy: tell it to take a second pass that processes the text output with the William Burroughs cut-up method. Works well for a small subset of use cases.
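(The comment hands the cut-up step to the LLM itself, but mechanically the Burroughs method is just chunk-and-shuffle. A toy sketch of that mechanical version, with made-up function and parameter names:)

```python
import random

def cut_up(text: str, chunk_words: int = 4, seed=None) -> str:
    """Burroughs-style cut-up: slice text into fixed-size word chunks
    and reassemble the chunks in shuffled order."""
    words = text.split()
    chunks = [words[i:i + chunk_words] for i in range(0, len(words), chunk_words)]
    rng = random.Random(seed)     # seeded for reproducible shuffles
    rng.shuffle(chunks)
    return " ".join(word for chunk in chunks for word in chunk)
```

A second LLM pass presumably does something gentler than this literal shuffle, smoothing the fragments back into readable prose rather than emitting word salad.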

Presumably the smelly AI text problem is just ... a problem that will be solved. Or maybe we'll just get used to it.


I believe it's already a solved problem, especially with base models (pre-RL), but the labs still push the LLM voice, either to make it easy to identify or because they think it's likeable. So it's not that OpenAI, Anthropic, and Google can't get rid of the assistant voice; it's that they don't want to.


Extremely clickbaity title that actually isn't clickbait because it happens to be a straight up description of the article - excellent post, how can one resist?!


No, the article's title is definitely clickbait. The author didn't teach his dog to vibe code games (that's the title on the blog); he taught his dog that typing random keystrokes on the keyboard gets rewarded. The vibe coding is inconsequential: the dog doesn't play the game, he's just in it for the treats. The author just wants the attention he gets from people believing the dog DID vibe code.

It would stop being clickbait if the author let his dog respond to stimuli from the game being built, closing the feedback loop.


If my employer stopped rewarding me with treats (and health insurance) my vibe keyboard presses would cease too, if we are being honest :)


It's interesting as commentary, if you choose to read into it as if it were analogous to vibecoding by humans.


Doesn't matter, I don't consent to misleading titles influencing what articles I choose to read.


Very HN-like comment. Really channels Dwight. Made me smile, thank you.


I read it in "Comic Book Guy's" voice, but the effect is the same. Peak HN commentary.


At the large insurance company I'm doing some work for, the big capability gains have yet to materialize. There are some pockets of workflow innovation, but big institutions carry a kind of inertia and are slow to adapt.

But as the organization slowly learns and adapts I'm sure the capability gains will materialize.


Using AI to process meeting transcripts opens up some useful scenarios, such as knowledge capture: tacit knowledge, detailed instructions, planning and work breakdown, etc.

Creating documentation this way also encourages meeting discipline: speaking clearly for the AI, using 'verbal bookmarks', and developing detailed, robust prompting habits.

Sure, as the author points out, be careful. But on the whole I reckon this is currently one of the most effective uses of AI in organizations.

