AI MATCH RULE ASSISTANT
AI-guided matching-rule suggestions for financial reconciliation. Time-to-valid-ruleset cut from 2–3 days to ~5 minutes; 95% adoption.
/// An explainable matching-rules assistant suggests confidence-ranked rules (with a clear “why”), shows an instant impact preview, and lets users safely tweak tolerances/windows before applying. It’s fully editable with industry templates, an audit trail, and built-in instrumentation—so teams decide faster, lower cognitive load, and keep precision.
Role: Senior Product Designer — owner of the full lifecycle, from problem to post-launch.
Owned the brief & outcomes. Set success metrics (time-to-valid-ruleset, adoption, precision guardrails) and the initial problem frame.
Hands-on research. Session replays, implementer shadowing, 5 client interviews (bank, delivery, retail, orchestrator, ecommerce), support tickets, product analytics → consolidated insight map and root causes.
Product direction with PM. Synthesized findings, agreed bets and guardrails, and maintained a living decision log.
IA & core flows. Designed the view → preview → apply funnel, explainability surfaces (“why this”), bounded relaxation, and templates per reconciliation type.
Wireframes & alignment. Iterated low/mid-fi, resolved naming/consistency across teams, and converged on the solution.
High-fidelity & quality bars. Final UI with full states, AA accessibility, interactive prototypes, and per-block p95 latency ≤ 400 ms.
Instrumentation & contracts. Authored event schema (view/preview/apply, backtracks, deltas), minimal UI/API contracts, and partnered with Eng/DS on flags, canary, latency, and observability.
Executable handoff & follow-through. Delivered a build-ready package (flows/states, glossary, can-vary/never-changes, anti-patterns), managed rollout & dashboards, and led post-launch iterations; the feature was ultimately monetized as an add-on.
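As an illustration of the instrumentation contract described above (view/preview/apply events, backtracks, deltas), here is a minimal sketch of what such an event schema could look like in TypeScript. All names and fields here (`RuleEvent`, `deltaReconciled`, `funnelSummary`) are hypothetical, for illustration only, not the production schema.

```typescript
// Hypothetical event schema for the view → preview → apply funnel.
// Field names are illustrative, not the production contract.

type RuleEventType =
  | "rule_viewed"
  | "rule_previewed"
  | "rule_applied"
  | "rule_backtracked";

interface RuleEvent {
  type: RuleEventType;
  ruleId: string;
  sessionId: string;
  timestamp: number;        // epoch milliseconds
  deltaReconciled?: number; // % change in reconciled volume shown in the preview
}

// Derive a simple funnel summary (event counts by type) from a raw event stream.
function funnelSummary(events: RuleEvent[]): Record<RuleEventType, number> {
  const counts: Record<RuleEventType, number> = {
    rule_viewed: 0,
    rule_previewed: 0,
    rule_applied: 0,
    rule_backtracked: 0,
  };
  for (const e of events) {
    counts[e.type] += 1;
  }
  return counts;
}
```

A typed schema like this lets preview → apply conversion and backtrack rates be computed directly from the event stream, which is what the rollout dashboards would consume.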
/// Signals we saw
• Session replays: up to 1 hour on the rules screen; repeated add/remove/edit loops.
• Even experts ended with basic rules and failed to discover stricter or more relaxed rule compositions.
• Excel loop: run reconciliation → export → compare externally → tweak → repeat. 2–3 days to settle.
/// Fieldwork
• 5 clients (banking, delivery, retail, payment orchestrator, ecommerce).
• Cross-industry pattern: uncertainty about what to match, which tolerances, and which conditions; a tendency to maximize “% reconciled” without anchoring to business intent (a missing guardrail).
/// Root causes
1. Fuzzy mental model: no visible repertoire of rules by reconciliation type.
2. Low discoverability: the UI didn’t adapt to uploaded columns or domain.
3. Late feedback: impact visible only after a full run.
4. Knowledge asymmetry: internal implementers solve it in <10 min; customers need days.
/// What to show
• 2 anonymized replay screenshots with “remove/add loop” annotations.
• A simple diagram of the Excel loop (Run → Export → Compare → Adjust → Repeat).
• Small table “Novice vs Expert”: both get stuck, for different reasons.
/// When configuring reconciliations, users don’t know which matching rules to apply or how to calibrate them. They fall into trial-and-error loops (up to 2–3 days of Excel comparisons), and even experts end up with basic rules and decisions that aren’t traceable to business intent.