The LLM is resistant to website updates that would break normal scraping
If you do like the author did and ask it to generate XPaths, you can use the LLM once, use the generated XPaths for regular scraping, then fall back to the LLM to regenerate the XPaths when they break, and fall back one more time to alerting a human if the data still doesn't start flowing again, or if something breaks further down the pipeline because the data arrives in an unexpected format.
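Roughly what that fallback chain could look like, as a minimal Python sketch (not the author's actual code); llm_generate_xpaths() and alert_human() are hypothetical placeholders for the LLM call and the human-alerting hook:

```python
import lxml.html
import requests


def llm_generate_xpaths(url: str) -> dict:
    """Placeholder: ask an LLM for fresh XPaths for the fields we care about."""
    raise NotImplementedError


def alert_human(url: str, error: Exception) -> None:
    """Placeholder: page whoever owns the pipeline."""
    print(f"Scraper for {url} needs attention: {error}")


def scrape(url: str, xpaths: dict) -> dict:
    """Plain XPath scraping; raise if any field comes back empty."""
    tree = lxml.html.fromstring(requests.get(url, timeout=30).text)
    results = {name: tree.xpath(xp) for name, xp in xpaths.items()}
    missing = [name for name, value in results.items() if not value]
    if missing:
        raise ValueError(f"XPaths returned nothing for: {missing}")
    return results


def scrape_with_fallback(url: str, xpaths: dict) -> dict:
    try:
        # Cheap path: regular scraping with the cached XPaths, no LLM involved.
        return scrape(url, xpaths)
    except ValueError:
        # Layout probably changed: fall back to the LLM for fresh XPaths once.
        new_xpaths = llm_generate_xpaths(url)
        try:
            return scrape(url, new_xpaths)
        except ValueError as exc:
            # Still broken (or the data is in an unexpected format): alert a human.
            alert_human(url, exc)
            raise
```

The point is that the LLM only runs on the failure path, so the steady-state cost stays at ordinary-scraper levels.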
This is absolutely true, but it does have to be weighed against the performance benefits of something that doesn't require invoking an LLM to operate.
If the cost of updating the XPaths every now and then is relatively low (which I guess means "your target site is not actively & deliberately obfuscating their website specifically to stop people scraping it"), running a basic XPath scraper would be maybe multiple orders of magnitude more efficient.
Using LLMs to monitor the changes and generate new XPaths is an awesome idea though - it takes the expensive part of the process and (hopefully) automates it away, so you get the best of both worlds.