Founder-Market-Execution: A Structured Framework for First-Pass Screening
Timur here — founder of Grizzz.ai.
Earlier in this series I wrote about evidence-linked outputs: every claim must be traceable to a source. This post is about the structure those claims should live inside.
Most funds already use some version of Founder-Market-Execution. The labels are familiar. The problem is that familiarity often hides inconsistency.
When “FME” is only a naming convention, analysts apply different standards under the same headings. That weakens comparability across deals.
An informal framework looks aligned from a distance. In practice, interpretation drifts quickly.
Two analysts can review the same startup, use the same three labels, and still produce non-comparable conclusions because they asked different questions and weighted different evidence.
For a VC workflow, that is not a cosmetic issue. First-pass screening controls where partner attention goes next.
FME became genuinely useful for us only after we treated it as a schema, not a checklist.
That meant three things (a code sketch follows the list):
Explicit fields for each dimension
Defined evidence expectations per field
Versioned configuration tied to current fund thesis
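Here is roughly what "schema, not checklist" looks like in code. This is a minimal sketch, not our production schema; the class names, field names, evidence types, and rating scale below are illustrative assumptions.

```python
# Illustrative only: names, evidence types, and the rating scale are assumptions.
from dataclasses import dataclass, field

@dataclass
class FieldSpec:
    name: str                      # e.g. "founder_domain_experience"
    question: str                  # the exact question the analyst answers
    evidence_required: list[str]   # accepted evidence types, e.g. ["reference call", "prior role"]
    rating_scale: tuple[int, int] = (1, 5)

@dataclass
class FMEFramework:
    version: str                   # tied to the fund thesis in force, e.g. "2024-q3-b2b-infra"
    founder: list[FieldSpec] = field(default_factory=list)
    market: list[FieldSpec] = field(default_factory=list)
    execution: list[FieldSpec] = field(default_factory=list)
```

The particular fields do not matter. What matters is that each dimension decomposes into named fields with stated evidence expectations, and the whole configuration carries a version.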
The shift from “What do you think of this founder?” to “Which evidence supports founder-market fit under our current thesis criteria?” changed the work at every layer.
Analysts looked for different signals. Reviews became faster because disagreements were easier to localize. Historical comparisons became meaningful because the framework version was explicit.
For partners, this meant less time reconstructing analyst reasoning before IC. A structured schema reduces screening chaos: instead of each analyst applying their own interpretation of “strong founder,” the framework defines what evidence is required and what threshold moves a deal forward. That compresses the pre-IC review from judgment calls to verifiable outputs.
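The same structure is what makes the first pass mechanical rather than interpretive. The sketch below assumes the FieldSpec and FMEFramework classes above; the minimum rating and the advance rule are placeholders, not our actual thresholds.

```python
from dataclasses import dataclass

@dataclass
class FieldAnswer:
    field_name: str
    rating: int
    evidence: list[str]   # links or quotes backing the rating

def screen(framework: FMEFramework, answers: dict[str, FieldAnswer], min_rating: int = 3) -> dict:
    """Return a verifiable result: which fields pass and which lack evidence."""
    specs = framework.founder + framework.market + framework.execution
    gaps, passes = [], []
    for spec in specs:
        ans = answers.get(spec.name)
        if ans is None or not ans.evidence:
            gaps.append(spec.name)       # a rating without evidence does not count
        elif ans.rating >= min_rating:
            passes.append(spec.name)
    return {
        "framework_version": framework.version,  # every output names the version it was scored under
        "passes": passes,
        "evidence_gaps": gaps,
        "advance": not gaps and len(passes) == len(specs),  # placeholder rule
    }
```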
This is what converts a familiar concept into operational infrastructure.
Framework quality comes from constraint and versioning.
If definitions are loose, application drifts. If thesis changes are not versioned, historical outputs become ambiguous. Precision is what keeps first-pass decisions consistent over time.
Audit your current FME workflow with one practical question per dimension: “What specific evidence would change this rating?”
If the answer is vague, that dimension is still subjective narrative, not a reliable filter.
Then add version tagging to your framework so the team can tell which thesis assumptions were active for each decision.
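Continuing the hedged Python sketch, version tagging can be as simple as stamping every screening decision with the framework version that produced it. The record fields here are illustrative.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ScreeningDecision:
    company: str
    decision: str            # "advance" or "pass"
    framework_version: str   # the thesis assumptions in force, e.g. "2024-q3-b2b-infra"
    decided_on: date
    analyst: str

record = ScreeningDecision("ExampleCo", "advance", "2024-q3-b2b-infra", date.today(), "analyst@fund.example")
print(json.dumps(asdict(record), default=str, indent=2))
```

With that tag in place, a decision from two quarters ago can be read against the thesis that was active when it was made, not the one in force today.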
A defined framework is necessary but not sufficient. Coming up: what it took to make this hold up in production, where the real failures appeared, and what those failures taught us.

