pierre's comments

Yes, it evaluates parsing using frontier models from all 3 major providers (Google, Anthropic, and OpenAI). It is also easy to extend to evaluate new models (code/dataset is available).

Built a benchmark to evaluate how well document parsers work on a dataset of 2000 manually annotated PDFs, trying to evaluate across multiple dimensions: charts, tables, text styling, text correctness, and attribution.

The benchmark evaluates performance on full pages (not selected parts of pages), and evaluates different OSS / frontier-model / commercial approaches.

For transparency it is available as a HF leaderboard.

Paper: https://arxiv.org/abs/2604.08538


A fast and open source spatial text parser


A new document (including PDF) parser that outperforms traditional tools such as PyPDF or mutool.

Link to open source repo: https://github.com/run-llama/liteparse


Contributor here, happy to answer any questions!



The main issue is that tokens are not equivalent across providers/models, with huge disparities within a single provider that go beyond the tokenizer itself:

- An image will take 10x the tokens on gpt-4o-mini vs gpt-4.

- On gemini 2.5 pro, output tokens are billed as tokens, except if you are using structured output: then every character is counted as a token for billing.

- ...

Having the price per token is nice, but what you really need to know is how much a given query/answer will cost you, as not all tokens are equal.
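The point above can be sketched in a few lines: the per-token price alone is meaningless without knowing how many tokens a given model *bills* for the same content. All model names, prices, and token counts below are made-up placeholders, not real pricing.

```python
# Sketch: per-token prices alone don't determine query cost, because
# different models bill different token counts for the same content.
# Prices and token counts below are hypothetical placeholders.

PRICING = {  # (input $/1M tokens, output $/1M tokens) -- made up
    "model-a": (0.15, 0.60),
    "model-b": (1.25, 10.00),
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, given the *billed* token counts."""
    in_price, out_price = PRICING[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# The same prompt + image can be billed as wildly different token
# counts on two models, so the cheaper per-token model is not
# automatically the cheaper model per query.
cost_a = query_cost("model-a", input_tokens=2833, output_tokens=500)
cost_b = query_cost("model-b", input_tokens=255, output_tokens=500)
```

Comparing `cost_a` and `cost_b` rather than the raw per-token prices is what a "cost per query" column would capture.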


Yeah, I am going to add an experiment that runs every day, and its cost will be a column in the table. It will be something like "summarize this article in 200 words", and every model gets the same prompt + article.


For me, and I suspect a lot of other HN readers, a comparison/benchmark on a coding task would be more useful. Something small enough that you can affordably run it every day across a reasonable range of coding focused models, but non trivial enough to be representative of day to day AI assisted coding.

One other idea - for people spending $20 or $200/month for AI coding tools, a monitoring service that tracks and alerts on detected pricing changes could be something worth paying for. I'd definitely subscribe at $5/month for something like that, and I'd consider paying more, possibly even talking work into paying $20 or $30 per month.


> On gemini 2.5 pro output token are token except if you are using structure output, then all character are count as a token each for billing.

Can you elaborate on this? I don't quite understand the difference.


I hadn't heard of this before either and can't find anything to support it on the pricing page.

https://ai.google.dev/gemini-api/docs/tokens


LlamaIndex | Senior/Staff Software Engineer (LlamaParse) | San Francisco, CA | Remote | Full-time | $100K – $300K + Equity | https://www.llamaindex.ai/careers

LlamaIndex is building a platform for AI agents that can find information, synthesize insights, generate reports, and take actions over the most complex enterprise data.

We are seeking an exceptional engineer to join our growing LlamaParse team. You will work at the intersection of document processing, machine learning, and software engineering to push the boundaries of what's possible in document understanding. As a key member of a focused team, you will have significant impact on our product's direction and technical architecture.

We are also hiring for a range of other roles, see our career page:

- Backend Software Engineer

- Forward Deploy Engineer

- Founding AI Engineer

- Open Source Engineer (Python)

- Founding Lead Product Manager

- Platform Engineer

- Senior Developer Relations Engineer

- Senior / Staff Backend Engineer

- Product Marketing Manager


Hi Pierre, I see that the Platform Engineer position (which probably matches me most) says it's Hybrid. I'm very interested, but I live in Ohio. I understand sometimes things get clicked by accident, and just wanted to know if there might be an issue with this listing, or if it's truly hybrid and the one you posted is remote, etc. Don't want to gum up the works :)


You mention Product Manager but the role isn't mentioned on the career page.


If you want to try agentic parsing, we added support for sonnet-3.7 and gemini 2.0 agentic parse in LlamaParse. cloud.llamaindex.ai/parse (select advanced options / parse with agent, then a model)

However, this comes at a high cost in tokens and latency, but results in much better parse quality. Hopefully with newer models this can be improved.


This is a nice UI for end users, however it seems to be a thin wrapper on top of mutool, which is distributed under the AGPL. If you want to process PDFs locally, legally, and safely, you should use their CLI instead.


How did you figure that out? Couldn't it be Poppler as well?


I read the output headers and saw the Artifex (mutool / Ghostscript team) headers.
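For anyone wanting to reproduce this kind of check: many PDF tools stamp a `/Producer` entry into the document info dictionary, and when that dictionary is stored uncompressed you can spot it with a plain byte scan. This is a best-effort sketch, not a full PDF parser, and the Artifex-style producer string below is an illustrative example.

```python
import re

def sniff_pdf_producer(data: bytes):
    """Best-effort scan of raw PDF bytes for an uncompressed
    /Producer (...) entry. Returns None when the metadata is
    compressed, absent, or stored in another form."""
    m = re.search(rb"/Producer\s*\((.*?)\)", data, re.DOTALL)
    return m.group(1).decode("latin-1") if m else None

# Minimal illustrative fragment of a PDF info dictionary; a file
# processed by mutool/Ghostscript typically carries an Artifex
# producer string of roughly this shape.
sample = b"%PDF-1.7\n1 0 obj\n<< /Producer (GPL Ghostscript 10.02.1) >>\nendobj\n"
```

A real check should fall back to a proper PDF library when the info dictionary is inside a compressed object stream, since the regex only sees plain bytes.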


Alrighty, that's a smoking gun.

