AI MATCH RULE ASSISTANT

AI-guided matching rule suggestions for financial reconciliation. From 2–3 days to ~5 minutes; 95% adoption.

Client

SIMETRIK

Year

2024

Category

RECONCILIATION


Overview


An explainable matching-rules assistant suggests confidence-ranked rules (with a clear “why”), shows an instant impact preview, and lets users safely tweak tolerances and windows before applying. It’s fully editable, with industry templates, an audit trail, and built-in instrumentation, so teams decide faster, lower cognitive load, and keep precision.

Responsibilities

Role: Senior Product Designer — owner of the full lifecycle, from problem to post-launch.


  • Owned the brief & outcomes. Set success metrics (time-to-valid-ruleset, adoption, precision guardrails) and the initial problem frame.

  • Hands-on research. Session replays, implementer shadowing, 5 client interviews (bank, delivery, retail, orchestrator, ecommerce), support tickets, product analytics → consolidated insight map and root causes.

  • Product direction with PM. Synthesized findings, agreed bets and guardrails, and maintained a living decision log.

  • IA & core flows. Designed the view → preview → apply funnel, explainability surfaces (“why this”), bounded relaxation, and templates per reconciliation type.

  • Wireframes & alignment. Iterated low/mid-fi, resolved naming/consistency across teams, and converged on the solution.

  • High-fidelity & quality bars. Final UI with full states, AA accessibility, interactive prototypes, and per-block p95 ≤ 400 ms.

  • Instrumentation & contracts. Authored event schema (view/preview/apply, backtracks, deltas), minimal UI/API contracts, and partnered with Eng/DS on flags, canary, latency, and observability.

  • Executable handoff & follow-through. Delivered a build-ready package (flows/states, glossary, can-vary/never-changes, anti-patterns), managed rollout & dashboards, led iterations—feature ultimately monetized as an add-on.
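The view → preview → apply instrumentation described above can be sketched as typed events. This is an illustrative sketch only; the names (`RuleEvent`, `applyRate`) and fields are hypothetical, not the production schema:

```typescript
// Hypothetical event schema for the assistant's funnel instrumentation.
// Each step emits one typed event so dashboards can compute
// view → preview → apply conversion and count backtracks.

type FunnelStep = "view" | "preview" | "apply" | "backtrack";

interface RuleEvent {
  step: FunnelStep;
  ruleId: string;
  tenantId: string;
  timestamp: string;   // ISO-8601
  // Only meaningful on "preview"/"apply": change in matched
  // records versus the current ruleset (the "delta").
  matchedDelta?: number;
}

// Preview → apply conversion rate over a batch of events.
function applyRate(events: RuleEvent[]): number {
  const previews = events.filter((e) => e.step === "preview").length;
  const applies = events.filter((e) => e.step === "apply").length;
  return previews === 0 ? 0 : applies / previews;
}
```

A metric like `applyRate` is what made “more first-try applies” measurable rather than anecdotal.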


Context


Signals we saw

  • Session replays: up to an hour on the rules screen, stuck in add/remove/edit loops.
  • Experts too: even experienced users ended with basic rules and never discovered stricter or relaxed compositions.
  • The Excel loop: run reconciliation → export → compare externally → tweak → repeat. 2–3 days to settle.


Fieldwork

  • 5 clients (banking, delivery, retail, payment orchestrator, ecommerce).
  • Cross-industry pattern: uncertainty about what to match, which tolerances, and which conditions; a tendency to maximize “% reconciled” without anchoring to business intent (a missing guardrail).


Root causes

  1. Fuzzy mental model: no visible repertoire of rules by reconciliation type.
  2. Low discoverability: the UI didn’t adapt to uploaded columns or domain.
  3. Late feedback: impact visible only after a full run.
  4. Knowledge asymmetry: internal implementers solve it in <10 min; customers need days.


What the visuals show

  • Two anonymized replay screenshots annotated with the remove/add loop.
  • A simple diagram of the Excel loop (Run → Export → Compare → Adjust → Repeat).
  • A small “Novice vs Expert” table: both get stuck, for different reasons.


Problem



When configuring reconciliations, users don’t know which matching rules to apply or how to calibrate them. They fall into trial-and-error loops (up to 2–3 days of Excel comparisons), and even experts end up with basic rules and decisions that aren’t traceable to business intent.


Solution

An explainable matching-rules assistant suggests confidence-ranked rules (with a clear “why”), shows an instant impact preview, and lets users safely tweak tolerances and windows before applying. It’s fully editable, with industry templates, an audit trail, and built-in instrumentation, so teams decide faster, lower cognitive load, and keep precision.
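The “safely tweak tolerances” idea, i.e. bounded relaxation, can be sketched in a few lines. All names here (`AmountRule`, `relaxTolerance`, `previewMatches`) are hypothetical, a sketch of the concept rather than the product’s actual rule engine:

```typescript
// Illustrative sketch of "bounded relaxation": a user may widen an
// amount tolerance, but only within a guardrail band set by the
// template, so precision cannot be silently traded away.

interface AmountRule {
  tolerance: number;     // max allowed |a - b| for two amounts to match
  guardrailMax: number;  // hard ceiling imposed by the template
}

// Clamp a requested tolerance into the allowed [0, guardrailMax] band.
function relaxTolerance(rule: AmountRule, requested: number): number {
  return Math.min(Math.max(requested, 0), rule.guardrailMax);
}

// Instant impact preview: how many pairs would match at a tolerance.
function previewMatches(pairs: [number, number][], tolerance: number): number {
  return pairs.filter(([a, b]) => Math.abs(a - b) <= tolerance).length;
}
```

The preview runs before anything is applied, which is what replaces the run → export → compare Excel loop with immediate feedback.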


COLLABORATION & GOVERNANCE


  • Shift-left with Engineering. Feature flags per tenant, 10% canary, rollback plan; preview latency budget p95 ≤ 400 ms; observability hooks from day one.

  • Cadence & alignment. Weekly decision log (bet, trade-off, guardrails), shareable one-pager for stakeholders; DS/ML reviews on inputs, scoring, and confidence semantics; implementer reviews for template quality.

  • Ethics & compliance. Explainability built-in (“why this rule”), audit trail (who/what/when), no PII exposure, human-in-control (always editable; manual build path).

  • Executable handoff. Flows + states, event schema, UI/API contracts (preview endpoints, error/fallback rules), glossary for naming, “what can vary / what never changes.”

  • Release governance. Region-by-region rollout, dashboards for adoption, TTV, false positives, tickets, and threshold alerts; A11y AA checklist baked into PR reviews.
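The per-tenant flags and 10% canary mentioned above can be sketched as deterministic bucketing. This is a generic illustration of the technique; the real rollout logic lived in the platform’s flag service, and every name here (`RolloutConfig`, `inCanary`) is hypothetical:

```typescript
// Hypothetical sketch of per-tenant feature flags plus a percentage canary.

interface RolloutConfig {
  enabledTenants: Set<string>;  // explicit per-tenant opt-ins
  canaryPercent: number;        // e.g. 10 for a 10% canary
}

// Deterministic bucketing: hash the tenant id into 0–99 and compare
// against the canary percentage, so a tenant's cohort is stable
// across sessions and rollback simply lowers canaryPercent.
function inCanary(tenantId: string, cfg: RolloutConfig): boolean {
  if (cfg.enabledTenants.has(tenantId)) return true;
  let hash = 0;
  for (const ch of tenantId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 100 < cfg.canaryPercent;
}
```

Stable bucketing matters for observability: the same tenants stay in the canary cohort, so latency and false-positive dashboards compare like with like.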


RESULTS


  • Adoption: 95% of target users engaged with the assistant post-launch.

  • Time-to-value: configuration dropped from 2–3 days to ~5 minutes (median) to reach a valid ruleset.

  • Operations: clear reduction in configuration-related tickets and a visible drop in Excel loop behaviors (from replays + events).

  • Business: packaged as an add-on → a new revenue line.

  • Quality: false-positive rate stayed within threshold; no material regressions on precision.

  • Behavior change: high usage of preview and “why this,” fewer backtracks, and more first-try applies; experts retained full control (edit-before-apply stayed healthy).



nicolasgarcia
