A recruiter once told me: "The candidates who do their research always stand out. Know the company, know the people, know the product."
Good advice. Terrible in practice.
When you're applying to 10+ roles, you can't spend two hours researching each company. You skim the website, glance at LinkedIn, maybe check Glassdoor. You walk into the interview with surface-level knowledge and hope the questions stay generic.
I got caught out. Went into an interview missing context that was freely available online. The interviewer mentioned a recent product launch. I had no idea. That was the moment I decided to automate the research.
What It Does
You paste a job listing. Optionally paste your CV. Hit generate. In about 30 seconds, you get a six-section intelligence brief that covers everything you'd spend two hours gathering manually.
Section 1: Company Intel. What they do, their tech stack, key people, company size, founding date, recent news, competitors, and culture signals decoded from the job posting language. "Fast-paced environment" means understaffed. "Wear many hats" means no clear role boundaries. The tool flags these.
Section 2: Role Decoder. Takes the job description and breaks it into must-have vs nice-to-have vs hidden requirements. Identifies the real seniority level (sometimes the title says "Senior" but the duties say "Junior"). Explains the reporting structure, team context, and where this role goes in 2-3 years.
Section 3: Your Fit Map. If you gave it your CV, this section maps every requirement to your experience. Strong match, partial match, or gap. For each one, it gives you a talking point: "When they ask about X, mention your work at Y where you did Z." Overall match percentage so you know where you stand.
Section 4: Battle Cards. Prepared answers for the 5-7 questions they're most likely to ask. Not word-for-word scripts, but structured frameworks. "Tell me about yourself" tailored to this specific role. "Why this company?" using real facts about the company. Each card includes what to say, why it works, and what to avoid.
Section 5: Day in the Life. An hour-by-hour walkthrough of what a typical day looks like in this role. Who you work with, what tools you use, what meetings you attend, what challenges come up. Uses fictitious names but real job functions. Helps you visualize whether you actually want this job.
Section 6: Red Flags and Questions. Things to watch for during the interview. Salary signals. And five smart questions to ask the interviewer that are specific to this company, not generic "what's the culture like" filler.
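Each section comes back from the model as structured JSON that the frontend renders directly. As an illustration, a single battle card might look like this (a hypothetical shape with illustrative field names; the post doesn't show the real schema):

```typescript
// Hypothetical shape for one battle card; field names are illustrative.
interface BattleCard {
  question: string;    // the likely interview question
  framework: string;   // structured approach, not a word-for-word script
  whyItWorks: string;  // reasoning behind the framework
  avoidSaying: string; // the common mistake to steer clear of
}

const example: BattleCard = {
  question: "Why this company?",
  framework: "Tie a real product fact from the research to your own experience.",
  whyItWorks: "Specifics signal genuine interest; generic praise doesn't.",
  avoidSaying: "Compliments that could apply to any company.",
};
```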
How It Works Under the Hood
The system runs in two phases, each powered by a different model.
Phase 1: Web research with Groq Compound. Before any brief generation happens, the tool does real research using Groq's compound-beta model, which ships with built-in web search tooling: it can actually browse the internet, not just recall training data. I run five research tasks in parallel:
- Scrape the company's website directly (homepage + about page, using Cheerio)
- Groq compound-beta search for company overview (products, services, size)
- Groq compound-beta search for LinkedIn intel (leadership, employee count)
- Groq compound-beta search for recent news specifically about the company
- Groq compound-beta search for Glassdoor reviews (culture, salary, pros/cons)
The direct website scraping uses plain HTTP + Cheerio DOM parsing. The four Groq compound searches use the model's built-in web search tools to find real-time information. All five run simultaneously with Promise.all. The results get compiled into a research context document. Real data, not hallucinated.
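A minimal sketch of that fan-out, assuming hypothetical task helpers (the real ones wrap the Cheerio scrape and the four compound-beta searches):

```typescript
// Phase 1 fan-out: fire every research task at once, then compile the
// results into one context document. The task functions passed in are
// hypothetical stand-ins for the real scrape/search implementations.
interface ResearchResult {
  source: string;  // e.g. "website", "news", "glassdoor"
  content: string; // text the task found
}

async function compileResearchContext(
  tasks: Array<() => Promise<ResearchResult>>
): Promise<string> {
  // Promise.all runs all tasks simultaneously instead of one by one
  const results = await Promise.all(tasks.map((run) => run()));
  // Label each result by source so the generation prompts can cite it
  return results.map((r) => `## ${r.source}\n${r.content}`).join("\n\n");
}
```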
Phase 2: Section generation with Llama 3.3 70B. All six sections generate in parallel using Groq's llama-3.3-70b-versatile model, each with its own specialized system prompt. Every prompt receives the job listing, the full research context from Phase 1, and your CV (if provided). The model runs in JSON mode (response_format: json_object) so every section returns structured data that the frontend can render cleanly.
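As a sketch, the per-section request might be assembled like this. The model name and `response_format` setting come from the setup above; the prompt layout and helper name are my assumptions:

```typescript
// Build one section's JSON-mode request. The exact prompt wording is
// hypothetical; the model and response_format match the real setup.
interface SectionRequest {
  model: string;
  response_format: { type: "json_object" };
  messages: Array<{ role: "system" | "user"; content: string }>;
}

function buildSectionRequest(
  systemPrompt: string,
  jobListing: string,
  researchContext: string,
  cv?: string
): SectionRequest {
  // Every prompt gets the listing, the Phase 1 research, and the CV if given
  const user = [
    `JOB LISTING:\n${jobListing}`,
    `RESEARCH CONTEXT:\n${researchContext}`,
    cv ? `CANDIDATE CV:\n${cv}` : null,
  ]
    .filter((part): part is string => part !== null)
    .join("\n\n");
  return {
    model: "llama-3.3-70b-versatile",
    response_format: { type: "json_object" },
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: user },
    ],
  };
}
```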
Results stream back to the browser via Server-Sent Events as each section completes. If a section fails (API timeout, parsing error), the brief still delivers the other five. Partial briefs are first-class. Getting 5 out of 6 sections is still more prep than most candidates do.
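That delivery loop can be sketched like this, decoupled from Express (the `write` callback stands in for `res.write`; the frame shape is an assumption):

```typescript
// Stream sections over SSE as each one settles. A failed section sends
// an error frame instead of killing the stream, so partial briefs work.
type SectionResult = { section: string; data?: unknown; error?: string };

function sseFrame(result: SectionResult): string {
  // Standard SSE wire format: named event, JSON payload, blank-line terminator
  return `event: section\ndata: ${JSON.stringify(result)}\n\n`;
}

async function streamSections(
  jobs: Array<{ section: string; run: () => Promise<unknown> }>,
  write: (frame: string) => void
): Promise<void> {
  await Promise.all(
    jobs.map(async ({ section, run }) => {
      try {
        write(sseFrame({ section, data: await run() }));
      } catch (err) {
        write(sseFrame({ section, error: String(err) }));
      }
    })
  );
}
```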
The Culture Decoder
This is my favorite part. Job postings are full of coded language. The Company Intel section reads the posting and decodes it:
- "Competitive salary" = might not be top of market
- "Fast-paced" = probably understaffed
- "Self-starter" = minimal onboarding
- "Collaborative" = team-oriented, probably good
- "Autonomous" = could mean empowered or could mean unsupported
Each signal gets tagged as positive, warning, or neutral with an explanation. You learn to read between the lines before you even talk to anyone.
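In the tool itself the LLM does this decoding, but the idea can be sketched as a lookup table (phrases and readings taken from the list above; the keyword matching is only a toy stand-in for the model):

```typescript
// Toy culture decoder: match coded phrases in a posting against a table
// of known signals. The real tool lets the LLM do this contextually.
type Tag = "positive" | "warning" | "neutral";
interface Signal { phrase: string; tag: Tag; reading: string }

const SIGNALS: Signal[] = [
  { phrase: "competitive salary", tag: "warning", reading: "might not be top of market" },
  { phrase: "fast-paced", tag: "warning", reading: "probably understaffed" },
  { phrase: "self-starter", tag: "warning", reading: "minimal onboarding" },
  { phrase: "collaborative", tag: "positive", reading: "team-oriented, probably good" },
  { phrase: "autonomous", tag: "neutral", reading: "could mean empowered or unsupported" },
];

function decodeCulture(posting: string): Signal[] {
  const text = posting.toLowerCase();
  return SIGNALS.filter((s) => text.includes(s.phrase));
}
```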
Battle Cards Changed How I Prep
Before this tool, my interview prep was: re-read the job description, think of some stories, hope for the best.
Now I walk in with prepared frameworks for the exact questions they'll ask. Not scripts. Frameworks. The "Why this company?" card uses real facts from the web research. The technical question card targets the top requirement from the role decoder. The behavioral card suggests which of my experiences maps best.
The "avoid saying" field is surprisingly useful. For every question, there's a common mistake candidates make. The battle cards flag it.
The Stack
- Frontend: Vue 3 + Vite + TypeScript
- Backend: Express + TypeScript
- Database: PostgreSQL (Docker) + Prisma ORM
- Web research: Groq compound-beta (built-in web search tooling) + direct website scraping with Cheerio
- Brief generation: Groq Llama 3.3 70B (JSON mode, 6 specialized prompts in parallel)
- Streaming: Server-Sent Events for real-time section delivery
It also integrates with a separate interview simulator tool I built. The "Launch Simulator" button deep-links to the simulator with the brief data pre-loaded, so you can practice answering the battle card questions in a mock interview.
What I Learned
Groq compound-beta is the key ingredient. Without real web research, the LLM gives you plausible but vague company descriptions based on training data. With compound-beta's web search tools pulling actual website content, search results, and recent news, you get real product names, real leadership names, actual recent developments. The difference between a generic brief and a useful one is real data.
Parallel generation is fast. Six sections generating simultaneously takes about 30 seconds total. If they ran sequentially, it would be 3 minutes. SSE streaming means the first section appears in 5-10 seconds while others are still generating.
The prep compounds. Reading the full brief takes 10 minutes. But you walk into the interview knowing the company's products, their tech stack, their leadership, their culture signals, your exact fit percentage, prepared answers for likely questions, and smart questions to ask. That's the kind of prep that takes two hours manually, if you even know where to look.