> copy pasting some configuration I might not really understand
Uh, yeah... why would you? Do you do that for configurations you found that weren't from LLMs? I didn't think so.
I see takes like this all the time and I'm really just mind-boggled by it.
There's more to LLMs than the "prompt it and use what it gives me" use case. You don't have to be that rigid. They're incredible learning and teaching tools. I'd argue that the single best use case for these things is as a research and learning tool for those who are curious.
Quite often I will query Claude about things I don't know and it will tell me things. Then I will dig deeper into those things myself. Then I will query further. Then I will ask it details where I'm curious. I won't blindly follow or trust it, just like I wouldn't a professor or anyone or anything else, for that matter. Just like I would when querying a human or the internet in general for information, I'll verify.
You don't have to trust its code or its configurations. But you can sure learn a lot from them, particularly when you know how to ask the right questions. Which, hold onto your chairs, only takes some experience and language skills.
My comment is mainly in opposition to the "five minutes" part from parent.
If you only have 5 minutes, then you can't, as you say:
> Then I will dig deeper into those things myself ...
So my point is I don't care if it's coming from an LLM or a random blog, you won't have time to know if it's really working (ideally you would want to benchmark the change).
If you can't invest the time, it's better to stay with the defaults, which in most projects the maintainers have spent quite a bit of time making sensible.
Original commenter here. I don't disagree with your larger point. However, it turns out that the default settings for PostgreSQL have been super conservative for years; as a stable piece of infrastructure they seem to prefer defaulting to a constrained environment rather than making assumptions about resources. To their credit, PostgreSQL does ship with sample configs for "medium" and "large" deployments which are well-documented with comments and can be simply copied over the original default config.
I happen to have a good bit of experience with PostgreSQL, so that colored the "5 minutes" part of it. Still, most of the time, you "have" more than 5 minutes to create the orchestrator's deployment config for the service (which never exists by default on any k8s-based orchestrator). I'm simply saying to not be negligent of the service's own config, even though a default exists.
It's crazy how wildly inaccurate "top-of-the-list" LLMs are for straightforward yet slightly nuanced inquiries.
I've asked ChatGPT to summarize Go build constraints, especially in the context of CPU microarchitectures (e.g. mapping "amd64.v2" to GOARCH=amd64 GOAMD64=v2). It repeatedly smashed its head on GORISCV64, claiming all sorts of nonsense such as v1, v2; then G, IMAFD, Zicsr; only arriving at rva20u64 et al under hand-holding. Similar nonsense for GOARM64 and GOWASM. It was all right there in e.g. the docs for [cmd/go].
This is the future of computer engineering. Brace yourselves.
Isn't that the whole point, to ask it specific tidbits of information? Are we to ask it large, generic pontifications and claim success when we get large, generic pontifications back?
ChatGPT is exceptionally good at using search now, but that's new this year, as of o3 and then GPT-5. I didn't trust GPT-4o and earlier to use the search tool well enough to be useful.
You can see in the interface whether it used search, which helps evaluate how likely it is to get the right answer.
I use it as a tool that understands natural language and the context of the environments I work in well enough to get by, while guiding it to use search, or just facts I know, if I want more one-shot accuracy. Just like I would if I were communicating with a newbie who has their own preconceived notions.
I mean, like most tools, they work when they work and don't when they fail. Sometimes I can use an LLM to find a specific datum, and sometimes I use Google, and sometimes I use Bing.
You might think of it as a cache, worth checking first for speed reasons.
The big downside is not that they sometimes fail, it's that they give zero indication when they do.
How was the LLM accessing the docs? I’m not sure what the best pattern is for this.
You can put the relevant docs in your prompt, add them to a workspace/project, deploy a docs-focused MCP server, or even fine-tune a model for a specific tool or ecosystem.
I've done a lot of experimenting with these various options for how to get the LLM to reference docs. IMO it's almost always best to include them in the prompt where appropriate.
For a UI lib that I use that's rather new (specifically, there's a new version the LLMs aren't aware of yet), I had the LLM write me a quick Python script that crawls the docs site for the lib and feeds each page's content back into itself with a prompt describing what it's supposed to do: basically, generate a .md document with the specifics about that thing, whether it's a component or whatever (i.e. properties, variants, etc.), in an extremely brief manner, and also build an 'index.md' that includes a short paragraph about what the library is and a list of each component/page document that is generated. So in about 60 seconds it spits out a directory full of .md files. I then tell my project-specific LLM (i.e. Claude Code or Opencode within the project) to review those files and update the project's CLAUDE.md to instruct that any time we're building UI elements we should refer to the library's index.md to understand what components are available, and when it's appropriate to use one of them we _must_ review the correlating document first. A rough sketch of that crawler is below.
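Something like this minimal sketch, assuming a docs site whose page URLs you can enumerate and an OpenAI-style API for the condensing step (the URLs, model name, and prompt are all illustrative placeholders, not the original setup):

```python
# Minimal sketch of the docs-to-markdown crawler described above.
# Assumptions (mine, not the commenter's): page URLs are known up
# front, and an OpenAI-compatible API does the condensing step.
import pathlib

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

# Hypothetical docs pages; a real script might walk the site's nav/sitemap.
DOCS_PAGES = [
    "https://example.com/docs/components/button",
    "https://example.com/docs/components/dialog",
]
OUT_DIR = pathlib.Path("library-docs-md")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def condense(page_text: str) -> str:
    """Ask the model for an extremely brief .md doc: props, variants, usage."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[
            {"role": "system", "content": (
                "Summarize this UI component doc as brief markdown: "
                "properties, variants, and when to use it.")},
            {"role": "user", "content": page_text},
        ],
    )
    return resp.choices[0].message.content

OUT_DIR.mkdir(exist_ok=True)
index = ["# ExampleUI component reference", "", "One doc per component:", ""]
for url in DOCS_PAGES:
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    name = url.rstrip("/").rsplit("/", 1)[-1]
    (OUT_DIR / f"{name}.md").write_text(condense(text))
    index.append(f"- [{name}]({name}.md)")
(OUT_DIR / "index.md").write_text("\n".join(index))
```

The CLAUDE.md instruction then just points the coding agent at library-docs-md/index.md.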
Works very very very well. Much better than an MCP server specifically built for that same lib. (Huge waste of tokens, LLM doesn't always use it, etc) Well enough that I just copy/paste this directory of docs into my active projects using that library - if I wasn't lazy I'd package it up but too busy building stuff.
Don't ask an LLM how to do things with a tool without providing the docs for the version you're working with. These models are trained on a whole bunch of different versions of things, with different flags and options and parameters, plus Stack Overflow questions asked and answered by people who had no idea what they were doing, likely out of date or wrong in the first place. _Especially_ if it's the newest version: even if the model's cutoff date was after that version was released, you have no way to know it was _included_. (Especially for something related to a programming language with ~2% market share.)
The contexts are so big now - feed it the docs. Just copy paste the whole damn thing into it when you prompt it.
So run the LLM in an agent loop: give it a benchmarking tool, let it edit the configuration, and tell it to tweak the settings, measure, and see how much of a performance improvement it can get.
That's what you'd do by hand if you were optimizing, so save some time and point Claude Code or Codex CLI or GitHub Copilot at it and see what happens.
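For illustration, here's a minimal sketch of the measure-tweak-measure loop such an agent would automate, assuming pgbench is set up against a scratch database called bench (the settings, values, and data directory below are illustrative placeholders, not recommendations):

```python
# Rough sketch of the benchmark loop an agent would drive; every
# setting/value below is an illustrative placeholder, not advice.
import re
import subprocess

CANDIDATES = {  # hypothetical knobs for the loop to explore
    "shared_buffers": ["128MB", "1GB", "2GB"],
    "work_mem": ["4MB", "16MB", "64MB"],
}
DATA_DIR = "/var/lib/postgresql/data"  # adjust for your install

def run_pgbench() -> float:
    """Run a 30-second pgbench pass and parse out the tps figure."""
    out = subprocess.run(["pgbench", "-T", "30", "bench"],
                         capture_output=True, text=True, check=True).stdout
    return float(re.search(r"tps = ([\d.]+)", out).group(1))

def apply(setting: str, value: str) -> None:
    """Persist one setting and restart so it takes effect."""
    subprocess.run(["psql", "-d", "bench", "-c",
                    f"ALTER SYSTEM SET {setting} = '{value}';"], check=True)
    subprocess.run(["pg_ctl", "restart", "-D", DATA_DIR], check=True)

baseline = run_pgbench()
print(f"baseline: {baseline:.0f} tps")
for setting, values in CANDIDATES.items():
    for value in values:
        apply(setting, value)
        print(f"{setting}={value}: {run_pgbench():.0f} tps")
```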
Probably about 10 cents, if you're even paying for tokens. Plenty of these tools have generous free tiers or allowances included in your subscription.
I run a pricing calculator here - for 50,000 input tokens, 5,000 output tokens (which I estimate would be about right for a PostgreSQL optimization loop) GPT-5 would cost 11.25 cents: https://www.llm-prices.com/#it=50000&ot=5000&ic=1.25&oc=10
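The arithmetic behind that figure, using the per-million-token rates encoded in that URL ($1.25/M input, $10/M output):

```python
# Token-cost arithmetic for the estimate above.
input_cost = 50_000 / 1_000_000 * 1.25     # $0.0625
output_cost = 5_000 / 1_000_000 * 10.00    # $0.0500
print(f"${input_cost + output_cost:.4f}")  # $0.1125, i.e. 11.25 cents
```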
I use Codex CLI with my $20/month ChatGPT account and so far I've not hit the limit with it despite running things like this multiple times a day.
Anyone can learn to unblock a sink by watching YouTube videos these days, and yet most people still hire a professional to do it for them.
I don't think end users want to "optimize their PostgreSQL servers" even if they DID know that's a thing they can do. They want to hire experts who know how to make "that tech stuff" work.
My analogy holds up. Anyone could type "optimize my PostgreSQL database by editing the configuration file" into an LLM, but most people won't - same as most people won't watch YouTube to figure out how to unblock a sink.
If you don't like the sink analogy, what analogy would you use instead for this? I'm confident there's a "people could learn X from YouTube but choose to pay someone else instead" analogy that's more effective than the sink one.
You're exactly right (original commenter here). I began my career in professional software engineering in 1998. I've despaired that trained monkeys could probably wreck this part of the economy for over 25 years. But we're still here. :D
Personally I'd like to hire a DB expert who also knows how to drive an agentic coding system to help them accelerate their work. AI tools, used correctly, act as an amplifier of existing knowledge and experience.
This is like some years ago, when everybody here gave their anecdotal evidence about how Bitcoin and blockchain were the future and how they used them every day. You were a fool if you did not jump on the bandwagon.
If the personal opinions on this site were true, half of the code in the world would be functional, Lisp would be one of the most-used languages, and Microsoft would not have bought Dropbox.
I really think the HN hive mind's opinions mean nothing. Too much money here to be real.
You can become a DB expert by reading books and forums and by practicing hard.
These days you can replace those books and forums with a top tier LLM, but you still need to put in the practice yourself. Even with AI assistance that's still a lot of work.
I don't appreciate how you accuse me of "making statements that are just not true" without providing a solid argument (as opposed to your own opinion) as to why what I'm saying isn't true.
I think I have more trust in the PG defaults than in the output of an LLM or copy pasting some configuration I might not really understand ...