Build · April 15, 2026 · 9 min read · by Sav Banerjee

The Gore Lens: encoding expert knowledge as toggleable rules.

A scientist will never trust a black-box relevance score. Here's how we built a 9-rule expert lens that scientists could reason about — and turn off — one rule at a time.

The Gore M2 Intelligence Hub had one non-negotiable acceptance criterion: the lead scientist had to trust the relevance ranking. That meant no opaque embeddings-as-relevance, no LLM-as-judge with hidden criteria, no statistical magic.

What worked: encoding the scientist's actual decision criteria as nine explicit, MCP-compatible rules. Temperature floor. Material class. Chemistry scope. PFAS sensitivity. Market size. Liability exposure. Recency. Novelty. The 200°C gap.

Each rule is independently toggleable. The dashboard shows the score with the rule on and off. The scientist can A/B their own expertise against the system. That's where trust comes from — not from the score, but from the ability to interrogate it.
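The toggle-and-rescore pattern can be sketched as a small scoring function over named rules. This is an illustrative sketch, not the Hub's implementation: the rule names come from the article, but the weights, thresholds, and scoring logic here are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch: each expert heuristic is a named, weighted, independently
# toggleable rule. Weights and thresholds below are illustrative.

@dataclass
class Rule:
    name: str
    weight: float
    score: Callable[[dict], float]  # returns a 0..1 contribution
    enabled: bool = True

def relevance(item: dict, rules: list[Rule]) -> float:
    """Weighted average over the currently enabled rules only."""
    active = [r for r in rules if r.enabled]
    if not active:
        return 0.0
    total = sum(r.weight * r.score(item) for r in active)
    return total / sum(r.weight for r in active)

# Two of the nine rules, with made-up logic:
rules = [
    Rule("temperature_floor", 2.0,
         lambda it: 1.0 if it.get("max_temp_c", 0) >= 200 else 0.0),
    Rule("recency", 1.0,
         lambda it: 1.0 if it.get("year", 0) >= 2022 else 0.3),
]

item = {"max_temp_c": 260, "year": 2020}
with_rule = relevance(item, rules)

# A/B the expert's heuristic: flip one rule off and re-score.
rules[0].enabled = False
without_rule = relevance(item, rules)
```

Because each rule is a plain named object, the on/off comparison the dashboard shows is just two calls to the same scoring function with one flag flipped.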

The pattern generalizes: wherever you need expert trust, encode the expert's heuristics explicitly, and make every one of them inspectable.


Want to scope an engagement around this?

Send a brief