Hacker News: JimsonYang's comments

Being someone who spent a month or two looking into MCP servers, I'm genuinely surprised the monetization of MCP servers hasn't come yet. I was pretty positive that as AI agents grow bigger, the need for paid quality servers would surely increase. That said, I do understand it's still very early and 99% of people have never heard of MCP servers.

My personal take is that MCP servers have very little value outside of info gathering, like connecting Claude Code to Supabase to save on token usage. But we'll see as the ecosystem develops and people build new, unique servers.


Counterpoint: as a renter, I get faster replies from AI and I don't really mind it. I get the business rationale, and most of my questions are basic anyway: "what's the pet policy, am I responsible for water and electricity, what's the cost of parking and how many spots are available?"

While the article hints at AI-guided tours, I highly doubt that would happen, purely due to liability or squatting concerns.

Overall, the article feels more like a rage piece channeling anti-AI sentiment than fair, constructive criticism of the prevailing use of AI in the real estate industry.


AI makes technical debt so much worse. Speaking from the frontend perspective: reducing technical debt is all about writing changeable, standardized code. However, models change structure and variable names, which means I'm building a lot of frontend on a shaky foundation. I routinely spend hours a week just fixing weird frontend builds. And since I'm in the frontend role and my cofounder doesn't understand design, I often inherit messy code.


I love how modern phones have a bunch of UX and design to solve extremely mundane problems that we would've never considered.


I think that's probably what I'll end up doing, since the data is text based. I'll combine that with the approach of pre-analyzing the quantitative data to feed to the LLM.

I figured I'd ask this question because there might be a technique I'm not aware of.


ty


You got it exactly right: taking scraped results and presenting the data to customers. Based on these results, SEO/AEO experts then make suggestions like "write about X" to help you "improve" in the results. Is it part dark magic, part truth? Probably, but that's the SEO industry.

I think the biggest issue in the industry is that there's no way to definitively tell how useful these converted users are, or whether performance is drastically better from showing up in ChatGPT. Then again, attribution itself is a difficult problem across the online marketing industry.


How third parties query your product: for ChatGPT specifically, they open a headless browser, ask a question, and capture the results, i.e. the response and any citations. From there, they extract entities from the response. During onboarding, I'm asked who my competitors are, and those competitors are recognized via the entities in the response. For example, if the query is "what are the best running shoes" and the response is something like "Nike is good, Adidas is okay, and On is expensive," and my company is On, entity recognition is run against my list of competitors to see which ones appear in the response and in what order.

If this weren’t automated, the process would look like this: someone manually reviews each response, pulls out the companies mentioned and their order, and then presents that information.
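To make the mention-ordering step concrete, here's a toy Python sketch. The response string and brand list are invented, and the naive substring matching is only a stand-in; a real tool would use proper entity recognition (substring matching would confuse "On" with "Only", for instance):

```python
# Invented sample response and competitor list for illustration only.
response = "Nike is good, Adidas is okay, and On is expensive."
competitors = ["Nike", "Adidas", "On", "Brooks"]

# Keep each competitor that appears, ordered by position of first mention.
mentions = sorted(
    (response.find(name), name) for name in competitors if name in response
)
ranking = [name for _, name in mentions]
print(ranking)  # ['Nike', 'Adidas', 'On']
```

This reproduces what a manual reviewer would do: note which companies are mentioned and in what order.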

2) Distribution of queries: this is a bit of a dirty secret in the industry (intentional or not). Ideally you want to take snapshots and measure them over time to get a distribution. However, a lot of tools will run a query once across different AI systems, take the results, and call it done.

Obviously, that isn't very representative. If you search "best running shoes," there are many possible answers, and different companies behave differently. What better tools like Profound do is run the same prompt multiple times; from my estimates, Profound runs it up to 8 times. This gives a broader snapshot of what tends to show up each day. You then aggregate those snapshots over time to approximate a distribution.

As a side note: you might argue that running a prompt 8 times isn't statistically significant, and that's partially true. However, LLMs tend to regress toward the mean and surface common answers over repeated runs, and we found 8 runs to be a good indicator. The level of completeness depends on the prompt (e.g. "what should I have for dinner" vs. "what's good accounting software for startups"); I can touch on that more if you want.
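The repeated-run aggregation described above can be sketched like this; the brand lists are invented sample data, and the actual LLM calls are elided:

```python
from collections import Counter

# Invented data: brands extracted from 8 runs of the same prompt.
runs = [
    ["Nike", "Adidas", "On"],
    ["Nike", "On"],
    ["Adidas", "Nike", "Brooks"],
    ["Nike", "On", "Adidas"],
    ["On", "Nike"],
    ["Nike", "Adidas"],
    ["Brooks", "Nike", "On"],
    ["Nike", "On", "Adidas"],
]

# Visibility = share of runs in which a brand appears at all.
appearances = Counter(brand for run in runs for brand in set(run))
visibility = {brand: count / len(runs) for brand, count in appearances.items()}
print(visibility)  # Nike 1.0, On 0.75, Adidas 0.625, Brooks 0.25
```

Aggregating daily snapshots like this over weeks is what turns a handful of noisy runs into something resembling a distribution.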


As I understand it, in normal SEO the number of unique queries that could be relevant to your product is quite large, but you might focus on a small subset of them ("running shoes", "best running shoes", "running shoes for 5k", etc.) because you assume those top queries capture a significant portion of the distribution (e.g. perhaps those 3 queries capture >40% of all queries related to running shoe purchases).

Here the distribution is all queries relevant to your product made by someone who would be a potential customer. Short, directly relevant queries like "running shoes" will presumably appear more often than much longer queries. In short, you can't possibly hope to cover the entire distribution, so you sample a smaller portion of it.

But in LLM SEO it seems that assumption doesn't hold. People will write much longer queries as full sentences: "I'm training for my first 5k, I have flat feet and tore my ACL four years ago. I mostly run on wet and snowy pavement, what shoe should I get?" That probably makes the number of queries you need to sample to capture a large portion of the distribution (the 40% from above) much higher.

I would even guess it's the opposite: the number of short queries like "running shoes" fed into an LLM without any further back and forth is much lower than that of longer full-sentence queries or even conversational ones. Additionally, because the context of the entire conversation is fed into the LLM, the "query" you need to sample might end up being even longer.

For example:

User: "I'm hoping to exercise more to gain more cardiovascular fitness and improve the strength of my joints, what activities could I do?"

LLM: "You're absolutely right that exercise would help improve fitness. Here are some options with pros and cons..."

User: "Let's go with running. What equipment do I need to start running?"

LLM: "You're absolutely right to wonder about the equipment required. You'll need shoes and ..."

User: "What shoes should I buy?"

All of that is to say, this seems to make AI SEO much more difficult than regular SEO. Do you have any approaches to tackle that problem? Off the top of my head, I would try generating conversations and queries that could be relevant, then estimating their relevance with some embedding model plus heuristics about whether keywords or links to you/competitors are mentioned. It's difficult to know how large a sample is required, though, without access to all conversations, which OpenAI etc. are unlikely to give you.
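As a rough illustration of that embedding-and-heuristics idea, here's a sketch that scores candidate queries against a topic description. A crude bag-of-words cosine similarity stands in for a real embedding model, and all of the texts are made up:

```python
import math
from collections import Counter

def bow(text):
    # Crude bag-of-words vector; a real system would use an embedding model.
    return Counter(text.lower().replace("?", "").replace(",", "").split())

def cosine(a, b):
    def norm(v):
        return math.sqrt(sum(x * x for x in v.values()))
    dot = sum(a[t] * b[t] for t in a)
    return dot / (norm(a) * norm(b)) if a and b else 0.0

topic = "running shoes for training and races"
candidates = [
    "I'm training for my first 5k, what running shoes should I get?",
    "What's a good recipe for lasagna?",
]
# Rank generated candidate queries by similarity to the topic.
ranked = sorted(candidates, key=lambda c: cosine(bow(topic), bow(c)), reverse=True)
print(ranked[0])  # the running-shoe query ranks first
```

The hard part, as noted above, is knowing how many such candidates you'd need to generate to cover a meaningful slice of the real query distribution.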


Short answer: it depends, and I don't know. When I was doing some testing with prompts like "what should I have for dinner," adding variations ("hey AI," "please," etc.) doesn't shift the intention much, since AI is really good at pulling out intent. But obviously if you say "I'm on keto, what should I have for dinner," it's going to drop things like garlic pesto and pasta noodles, although it pulls a similar response to "what's a good keto dinner." Beyond that, we really assume the user knows their customers and what type of prompts led them to ChatGPT. You might've noticed sites asking whether you came from ChatGPT; I would take that a step further and ask users to type the prompt they used.

But you do bring up a good perspective, because not all prompts are equal, especially with personalization. So how do we solve that problem? I'm not sure; I have yet to see anything in the industry. The only thing that came close was when a security-focused browser extension started selling data to AEO companies; that's how some companies get "prompt volume data."


I see what you are saying: perhaps no matter the preceding conversation, as long as it doesn't filter out some products via personalized filters (e.g. dietary restrictions), the model will always give the same answers. But I do feel the value prop of these AI chatbots is that they allow personalization. And then it's tough to know whether 50% of the users who would previously have googled "best running shoes" now instead ask detailed questions about running shoes given their injury history etc., and whether that changes what answers the chatbot gives.

I feel like without knowing the full distribution, it's really tough to know how many, and which, variations of the query/conversation you need to sample. This seems like something where OpenAI etc. could offer their own version to advertisers and have much better data, because they see it all.

Interesting problem though! I always love probability in the real world. Best of luck, I played around with your product and it seems cool.


Oh wow, I didn't even notice that. Good catch, and sorry that happened to you. I'll be removing the tours.


My bad, I use Reddit a lot more than HN, so that's a complete miss on my end.

