I’m building JobOps because I got tired of the "black box" nature of job hunting. I’m currently a final-year CS student, and I wanted a tool that didn't just scrape jobs but actually closed the loop on whether recruiters are even looking at my resume.
The Two Big Changes:
1. Hiring Cafe Integration: I added an extractor for Hiring Cafe roles. They now flow into the same scoring/tailoring pipeline as all other roles. So it's the same UI with more job sources. Immediate win.
2. Tracer Links: This is the part I’m most excited (and nervous) about. I added tracked redirects for outbound links in the generated resumes.
How it works: When enabled, JobOps rewrites outbound links to point at your host (/cv/...), logs each click, and then redirects to the original destination.
The Goal: Moving from "fire and forget" to "this specific company actually clicked my portfolio link."
The Trade-off: I know this is a double-edged sword. Some email scanners might flag these redirects as suspicious, which is why it's an explicit opt-in feature. I’ve added rough human-vs-bot filtering, but I’d love to hear from anyone who has dealt with PDF-based link deliverability before.
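To make the rewrite concrete, here's a minimal sketch of how an opaque /cv/... redirect token could work. The function names and token format are my assumptions for illustration, not JobOps' actual implementation:

```typescript
// Sketch of one way the /cv/... rewrite could work. The token format and
// function names are assumptions, not the actual JobOps code.

// Encode the job id and real destination into an opaque path segment.
function toTrackedUrl(host: string, jobId: string, dest: string): string {
  const token = Buffer.from(JSON.stringify({ jobId, dest })).toString("base64url");
  return `${host}/cv/${token}`;
}

// On the server, decode the token, log the click (plus a crude bot check on
// the User-Agent, say), then issue a 302 to `dest`.
function fromTrackedUrl(url: string): { jobId: string; dest: string } {
  const token = url.split("/cv/")[1];
  return JSON.parse(Buffer.from(token, "base64url").toString("utf8"));
}
```

Keeping the destination inside the token means the redirect endpoint is stateless: no lookup table, just decode, log, and redirect.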
I’d love your feedback on:
The Redirect Logic: Is there a more "stealthy" way to track clicks that won't annoy IT security?
Source Expansion: Which niche job boards are actually worth the effort of writing an extractor for next?
Repo (star if you find it interesting!): https://github.com/DaKheera47/job-ops (247 stars and counting—thank you to the community for the feedback so far!)
I built JobOps to solve a problem I kept running into: job links expire, descriptions get edited, and by the time an interview invitation arrives weeks later, I have no idea what I actually applied for or which resume version I sent.
What it does:
Snapshots job descriptions at application time (so dead links don't kill your context)
Generates tailored resumes and stores the exact version you sent with each application
Supports local LLMs (Ollama, LM Studio) alongside cloud providers for resume tailoring
Command bar (Cmd+K) and keyboard shortcuts for bulk operations
Webhook integration for automation (n8n, Zapier, etc.)
Dashboard for tracking application velocity and success rates
Technical details:
TypeScript monorepo. React 18 + Vite 6 frontend, Node/Express backend, SQLite + Drizzle ORM. Self-hosted via Docker Compose with persistent ./data volume.
The LLM integration is provider-agnostic—swap between OpenAI, Gemini, OpenRouter, or local models without code changes. Built an adapter pattern that made adding Ollama support trivial.
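The adapter idea can be sketched roughly like this; the interface and registry names below are illustrative, not the actual JobOps code:

```typescript
// Illustrative sketch of a provider-agnostic LLM adapter; the interface and
// registry names are assumptions, not the actual JobOps code.
interface LlmProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

const providers = new Map<string, LlmProvider>();

function registerProvider(p: LlmProvider): void {
  providers.set(p.name, p);
}

// Callers pick a provider by name; adding a new backend (e.g. Ollama) is
// just one more registerProvider call, with no changes to callers.
async function tailorResume(providerName: string, prompt: string): Promise<string> {
  const provider = providers.get(providerName);
  if (!provider) throw new Error(`unknown provider: ${providerName}`);
  return provider.complete(prompt);
}
```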
Job extraction uses a pluggable system: JobSpy (Python wrapper), Gradcracker, UKVisaJobs, manual imports, Glassdoor. Everything normalizes to a unified schema with deduplication.
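A minimal sketch of the normalize-then-dedupe step; the schema fields and the choice of dedup key here are my guesses, not the real schema:

```typescript
// Hypothetical unified job schema; field names are illustrative.
interface Job {
  title: string;
  company: string;
  location: string;
  url: string;
  source: string;
}

// Dedupe on a normalized (company, title) key so the same role scraped from
// two boards collapses into one entry, keeping the first-seen copy.
function dedupe(jobs: Job[]): Job[] {
  const seen = new Map<string, Job>();
  for (const job of jobs) {
    const key = `${job.company}::${job.title}`.toLowerCase().replace(/\s+/g, " ").trim();
    if (!seen.has(key)) seen.set(key, job);
  }
  return [...seen.values()];
}
```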
Why local-first:
Your job hunt data is sensitive. This runs entirely on your machine. No SaaS, no tracking, no data leaving your infrastructure unless you explicitly configure webhooks.
Current state:
Been shipping daily for 3 weeks. 200+ stars, 570+ Docker pulls, 5 contributors I've never met, users answering each other's questions in issues. It's gone from "tool for myself" to something people are actually using.
It would also be quite cool if it could read your emails and notify you when you get an email about an OA or an interview or something. I've had a couple of those emails slip through the cracks previously. It would also make manually entering statuses into the tool a lot easier.
These all sound like really useful features. Since it tracks which resume you sent, does it give you some insight into which variations are the best performers?
i built JobOps, basically a devops-style pipeline for my job hunt. im in the UK on a student visa, trying to find a graduate job that'll sponsor a skilled worker visa.
it’s basically my job application process turned into a pipeline, because i got sick of the same two things happening every time:
- i apply, then the link dies and i lose the job description i actually applied to
- i apply, then two weeks later i can’t remember what version of my CV i sent
so the whole app is built around one idea: for every job, i want to make a tailored resume, and i want to be able to see it and the job description in the future if i get an interview.
the steps in the pipeline are:
- step 1; find the jobs (extractors)
it pulls jobs from a few sources (some off-the-shelf, some custom), then maps everything into one schema and dedupes it.
- step 2; make the artifacts
for a job i actually care about, it generates an ATS-friendly CV pdf for that role by changing the top-level summary, the keywords, and the projects shown. when i get an interview, i can look back at the job description (to see what the company wants) and my tailored resume (to see what i actually sent them).
- step 3; track + automate the boring bits
the UI has the obvious stuff (stages, extraction, sources), but the fun part is when you mark “applied” it emits a webhook. i use that to push the job into my notion db via n8n, so i’m not copy pasting titles/companies/locations like a caveman. i treat notion as my "source of truth", not because it's good, just because it's what i've been using since the start.
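a rough sketch of what that "applied" webhook payload could look like (field names here are just my guess, check the repo for the real schema):

```typescript
// rough guess at an "applied" webhook payload; not the real JobOps schema.
interface AppliedEvent {
  event: "applied";
  title: string;
  company: string;
  location: string;
  appliedAt: string; // ISO timestamp
}

function buildAppliedEvent(job: { title: string; company: string; location: string }): AppliedEvent {
  return { event: "applied", ...job, appliedAt: new Date().toISOString() };
}

// sending it is one POST; n8n/zapier just parse the JSON body, e.g.
// await fetch(webhookUrl, {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(evt),
// });
```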
it’s open source, local-first, and self-hosted (docker). i’m not selling anything and i’m not trying to turn this into a SaaS, i just wanted something i could run myself and figured it might help someone else too.