Hacker News | new | past | comments | ask | show | jobs | submit | wastemaster's comments

Fair — "understands" is generous. It produces text that sounds right. Whether that counts as understanding, honestly I don't know and it doesn't change our approach either way.


The difference between "right" and "sounds right" probably should change your approach. There's a reason you don't see humans wearing badges that say "Warning: humans can make mistakes."


Hey HN. I'm the CTO at a startup building AI agents for hotels. Lately, I've been seeing a lot of hype around MCP being the "magic bullet" that will instantly make legacy systems AI-ready. After dealing with 15-year-old SOAP APIs and the reality of how LLMs handle multi-step database writes, I wrote down why I think MCP is great, but fundamentally misunderstood by the market. Happy to answer any questions about bridging LLMs with legacy monoliths.


Spot on. We often joke that a raw LLM acts exactly like an over-eager junior sales rep—it desperately wants to say "yes" to please the customer. Because they learned from us, they inherit the bad human habit of equating "helpfulness" with agreement. The difference is an AI will scale those broken promises instantly, which is why the constraints have to be architectural.


Happy to answer questions about failure modes, where we draw the line between retrieval and operational decisions, and which constraints actually reduced risk in production. The main lesson for us was that “don’t hallucinate” is too soft for real operations. We had to replace soft prompting with hard boundaries: verified data only, checks for critical actions, and escalation before collecting fulfillment details for anything unverified.
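A minimal sketch of what "hard boundaries instead of soft prompting" can look like in code. This is illustrative only; the action names, fields, and routing function are hypothetical, not the poster's actual implementation. The idea: unverified data never reaches fulfillment, and critical writes require an explicit check to have passed.

```python
from dataclasses import dataclass

# Hypothetical action gate. CRITICAL_ACTIONS and the field names are
# made up for illustration; the real system's constraints are not shown
# in the comment above.
CRITICAL_ACTIONS = {"book_room", "modify_reservation", "charge_card"}

@dataclass
class AgentAction:
    name: str
    data_verified: bool        # claim backed by a verified source?
    check_passed: bool = False  # explicit check for critical actions

def route(action: AgentAction) -> str:
    if not action.data_verified:
        return "escalate_to_human"   # never promise on unverified data
    if action.name in CRITICAL_ACTIONS and not action.check_passed:
        return "escalate_to_human"   # critical writes need a hard check
    return "execute"

print(route(AgentAction("answer_faq", data_verified=True)))   # execute
print(route(AgentAction("book_room", data_verified=True)))    # escalate_to_human
```

The point of routing as a function outside the model is that the LLM cannot prompt its way around it: escalation is enforced in code, not requested in the system prompt.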


We deployed AI agents across 25 hospitality properties and logged ~46,000 guest conversations. The main failure mode wasn’t tone or retrieval. It was “confident gap-filling”: the model promising operational outcomes nobody had verified. This post is about the production failures we saw and the constraints we added to stop them.


Thanks — good call. For context, IVR is the "Press 1 for English, Press 2 for..." menu that most hotels use before routing calls. The key finding was that removing it made our metrics look worse, but actually revealed callers we were losing silently at the IVR stage.


Author here. We run AI voice agents across a European hotel network. This post covers 6 months of call data and one change (removing the IVR) that revealed something we didn't expect — our old phone system was silently filtering out callers before they ever reached the AI, making our metrics look better than they were. Happy to go deep on the voice pipeline, language detection, or anything else. Full product overview: https://github.com/polydom-ai/una-by-polydom


I’m the CTO of Polydom.ai. We built Una to fundamentally change how hospitality teams work by automating the entire first line of guest interaction.

Instead of a video or a pitch deck, the agent conducts its own live interview. You can "interview" the candidate to see how it handles operations across phone, web, and WhatsApp in real-time.

Why this matters: Una isn't just a chatbot; she's a digital employee that shields human staff from 100% of routine first-line noise—handling everything from initial booking inquiries to 3 AM guest requests.

Key Facts:
* 24/7 First-Line: Fully handles calls, bookings, and guest requests across all channels.
* Economics: $1/hour (AI) vs. ~$20/hour (human staff).
* Traction: Already running operations at properties like LeeGardens Orlando (135 rooms) and The President Hotel (142 rooms).
* Deep Integration: Connects directly with PMS platforms like Mews, Apaleo, Clock, Guesty, and Hospitable.

I'm curious to hear feedback on the conversational quality and the shift toward autonomous first-line operations.


Got two GDPR complaints, so I added automated removal of uploaded photos and the metadata extracted from them.

Photos and data are now stored for only 3 minutes, then deleted. (The app architecture requires server-side processing of the uploaded file, and to compare results with your photo I have to store it briefly in order to show them.)

I've also reflected this information on the website itself. Thank you for your interest in this project!


Well done. Does this apply to all images that have ever been uploaded to the website, or only to the ones uploaded after your update? If a user uploaded an image 2 hours ago, is their image still stored, or has it been deleted already?


It applies to every photo uploaded to this website, at any time.


Nice, thanks for implementing this


Yep, it needs some tuning; I've added some more resources to it.


