Hi HN - I built this agent to make it easier to get feature distribution metrics into a Prometheus-Grafana observability stack when deploying with TensorFlow Serving.
The pain point this addresses is that TF Serving's default /metrics endpoint only exposes performance metrics (request sizes, request counts) but not metrics on the features or predictions, which are often interesting and valuable to monitor.
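Roughly, the idea looks like this (a minimal sketch using the standard prometheus_client library, not the exact code in the repo; record_features(), the port, and the bucket boundaries are illustrative): a small process sits next to TF Serving, taps the prediction requests, and records feature values into Prometheus histograms.

    # Sketch: expose feature value distributions as Prometheus histograms
    # from a small exporter running alongside TF Serving.
    # record_features(), the port, and the buckets are illustrative choices.
    import time
    from prometheus_client import Histogram, start_http_server

    # One histogram family, labelled by model and feature name,
    # so the distributions can be faceted and overlaid in Grafana.
    FEATURE_VALUES = Histogram(
        "model_feature_value",
        "Distribution of input feature values seen at serving time",
        labelnames=["model", "feature"],
        buckets=(0.1, 0.5, 1, 2, 5, 10, 50, 100),
    )

    def record_features(model_name, instances):
        """Observe every feature of every instance in a prediction request."""
        for instance in instances:  # instances: list of {feature_name: value}
            for feature, value in instance.items():
                FEATURE_VALUES.labels(model=model_name, feature=feature).observe(value)

    if __name__ == "__main__":
        # Serve /metrics on :9100 next to TF Serving's own endpoint;
        # Prometheus scrapes both, Grafana overlays them.
        start_http_server(9100)
        while True:
            time.sleep(60)  # in practice, wire record_features() into the request path

From there it's one extra scrape target in prometheus.yml and a histogram_quantile() panel in Grafana.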
Well, if you click through and look at the OP's work, they have previously written an article about the company, which they themselves describe as having "basically accused the school and its chief officer of fraud".
As for the comment itself, it is looking for former employees and stressing the anonymous-source angle. If I'm promising not to reveal identities right off the bat, I'm clearly fishing for negative views on something. Have you ever read an article where an anonymous source said something glowingly positive about its subject?
I sort of agree that Lambda School is probably overselling itself, but if I were looking to learn more about it in a balanced manner, I would probably use more neutral language when soliciting information.
I'd argue that this is a resume overindexed on getting past recruiting filters that look for specific JavaScript libraries or UNIX commands. Even the recruiter's action may not necessarily be bad - startups with a settled tech stack might decide they can't afford the time for a new hire to ramp up.
Effectively, you penalise candidates for not tailoring their resumes to what you are looking for or deem important.
One will filter you out for having too many keywords.
One will filter you out for not having enough keywords.
One will filter you out for including a picture.
One will filter you out for not including a picture.
One will ... ad infinitum
Your resume is being evaluated by different filters constantly. Different filters filter different things. There is no "right" answer. There are only answers you will or won't be filtered out for depending on which filter is being applied.
Oh sure, I know they exist, but on the resume side there's no common knowledge of anything besides keyword stuffing. I'm wondering if that's actually the best way given how the filters actually work (which I don't know).