Section E · Reference

Compliance Context

Two AI engineers will interview you, but they work inside the Compliance org. You don't need depth — you need not to flinch when these terms come up.

The big acronyms

| Acronym | Meaning | What it means for AI |
| --- | --- | --- |
| AML | Anti-Money Laundering | Detect/prevent illicit money flows. Big AI surface: alert triage, transaction monitoring, narrative drafting. |
| CFT | Counter Financing of Terrorism | Adjacent to AML. Same tooling, different threat. |
| KYC | Know Your Customer | Onboarding identity verification. Doc analysis, sanctions checks, EDD. |
| CDD / EDD | Customer Due Diligence / Enhanced Due Diligence | Tiered scrutiny. EDD = high-risk customers. AI helps draft EDD reports. |
| SAR | Suspicious Activity Report | Filed with regulators when activity looks suspicious. Highest-stakes AI artifact. |
| STR | Suspicious Transaction Report | Some jurisdictions' equivalent of a SAR. |
| PEP | Politically Exposed Person | Higher-risk class. EDD required. |
| OFAC | Office of Foreign Assets Control | US Treasury office; maintains the US sanctions lists. |
| BSA | Bank Secrecy Act | Foundational US AML law. |
| FinCEN | Financial Crimes Enforcement Network | US Treasury bureau; receives SAR filings, issues advisories. |
| FATF | Financial Action Task Force | International standard-setter for AML/CFT. |
| FCA | Financial Conduct Authority | UK financial regulator. |
| MAS | Monetary Authority of Singapore | Singapore regulator. |
| BaFin | Bundesanstalt für Finanzdienstleistungsaufsicht | German federal financial regulator. |
| FINTRAC | Financial Transactions and Reports Analysis Centre of Canada | Canadian AML regulator. |
| MiCA | Markets in Crypto-Assets | EU crypto-specific regulation. Highly relevant to crypto exchanges. |
| MiFID II | Markets in Financial Instruments Directive II | EU markets regulation; trading-side reporting, capital markets compliance. |
| GDPR | General Data Protection Regulation | EU data protection law; constraint on what data can flow where. |
| TM | Transaction Monitoring | The system that generates alerts. |

How Compliance work actually flows

A simplified mental model:

  1. Onboarding (KYC): customer signs up. Identity verified, screened against sanctions/PEP lists, risk-rated. Tier 1 / 2 / 3 customer.
  2. Transaction monitoring: every transaction is scored by rules + ML for anomalies. Suspicious ones generate alerts.
  3. Alert triage (L1): an analyst reviews each alert. ~80%+ are false positives — dismissed. The rest escalate.
  4. Investigation (L2): an investigator opens a case, gathers evidence, writes a case narrative, decides outcome.
  5. SAR filing: if the investigator decides activity is suspicious, a SAR is drafted (to specific regulator format), reviewed, filed with FinCEN/equivalent.
  6. Periodic review: high-risk customers get scheduled re-review (EDD refresh).
  7. Regulatory updates: rules change. Compliance leads read, summarize, update controls and policies.
  8. Audit / examination: regulators show up periodically. Compliance produces evidence of what was done and why.
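The triage handoff in steps 3–4 can be sketched as a tiny state machine. Everything here — the stage names, the `Alert` fields, the 0.2 dismiss threshold — is illustrative, not from any real system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    ONBOARDING = auto()
    MONITORING = auto()
    TRIAGE = auto()
    INVESTIGATION = auto()
    SAR_FILING = auto()
    CLOSED = auto()

@dataclass
class Alert:
    alert_id: str
    score: float                 # anomaly score from rules + ML monitoring
    stage: Stage = Stage.TRIAGE
    outcome: str = ""

def triage(alert: Alert, dismiss_threshold: float = 0.2) -> Alert:
    """Step 3 (L1 triage): dismiss likely false positives, escalate the rest."""
    if alert.score < dismiss_threshold:
        alert.stage = Stage.CLOSED
        alert.outcome = "false_positive"
    else:
        alert.stage = Stage.INVESTIGATION
    return alert
```

The point of modeling it this way: every stage transition is an explicit, loggable event, which is exactly what the audit step at the end of the flow needs.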

Each step has AI leverage opportunities (and risks). The JD specifically calls out: alert pre-screening, case narrative generation, regulatory change summaries, EDD drafting, audit-ready documentation. Map those onto the flow above.

Key concepts and where AI fits

Alerts and false positives

Modern transaction monitoring produces a lot of alerts, and most are false positives. AI's wedge: pre-screen alerts so humans only see ones with real signal. Risk: you suppress a true positive. So pre-screening must have very high recall on must-catch cases; precision is secondary. (See 07-evals.)
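One way to make "recall first, precision second" concrete: pick the auto-dismiss threshold from labeled historical alerts so the share of true positives dismissed never exceeds a recall floor. A sketch — the function name and interface are invented for illustration:

```python
def pick_prescreen_threshold(scores, labels, recall_floor=0.99):
    """Choose the highest dismiss-threshold that keeps recall on true
    positives at or above recall_floor. Alerts scored below the threshold
    are auto-dismissed; everything at or above it goes to a human."""
    positives = sorted(s for s, y in zip(scores, labels) if y == 1)
    if not positives:
        return 0.0
    # We may dismiss at most this many true positives.
    allowed_misses = int(len(positives) * (1 - recall_floor))
    # Threshold sits at the lowest positive score we must keep.
    return positives[allowed_misses]
```

Note the asymmetry: precision never appears. Whatever false-positive load survives the threshold is the humans' problem, by design.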

Case narrative

A free-text document an investigator writes describing a case: what was alerted, what evidence gathered, what conclusion reached. AI's wedge: draft the narrative from structured case data. Investigator edits and signs. Saves time, preserves judgment.
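"Draft from structured case data" can be enforced mechanically: the drafter only interpolates fields that exist, and blocks rather than fills gaps. A minimal sketch with invented field names:

```python
def draft_narrative(case: dict) -> str:
    """Assemble a case-narrative draft strictly from structured case
    fields; missing fields block the draft instead of being invented."""
    required = ["case_id", "alert_reason", "evidence", "proposed_outcome"]
    missing = [k for k in required if not case.get(k)]
    if missing:
        return f"DRAFT BLOCKED: missing fields {missing}"
    evidence = "; ".join(case["evidence"])
    return (
        f"Case {case['case_id']} was opened on alert: {case['alert_reason']}. "
        f"Evidence reviewed: {evidence}. "
        f"Proposed outcome (pending investigator sign-off): "
        f"{case['proposed_outcome']}."
    )
```

A real system would hand this skeleton to an LLM for fluency, but the same rule holds: the model rewords, it does not add facts.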

EDD report

Detailed risk write-up on a high-risk customer. Pulls together: KYC info, transaction patterns, sanctions/PEP findings, news (adverse media), source of wealth. AI's wedge: assemble + draft. Risk: hallucinate wealth source from training data. Defense: heavy RAG with citations to authoritative records, no recall-from-training.

Sanctions screening

Match a name/entity against sanctions lists (OFAC SDN, EU consolidated list, UN, others). Hard problem because of: aliases, transliteration (Cyrillic→Latin), partial matches, ambiguous matches. ML and string-matching tools are well established here. AI's wedge: explain matches in natural language, summarize hit packages for review.
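The fuzzy-matching core is decades-old string similarity, not LLMs. A toy version using the standard library (a real screener adds transliteration, token reordering, and per-list tuning):

```python
import difflib

def screen_name(name: str, sanctions_list: list[str], cutoff: float = 0.85):
    """Return candidate sanctions-list matches with a similarity score,
    so an analyst (or an LLM explainer) can review near-misses caused
    by aliases and transliteration."""
    norm = name.lower().strip()
    hits = []
    for entry in sanctions_list:
        ratio = difflib.SequenceMatcher(None, norm, entry.lower()).ratio()
        if ratio >= cutoff:
            hits.append((entry, round(ratio, 2)))
    return sorted(hits, key=lambda h: -h[1])
```

The AI wedge described above sits *downstream* of this: given `hits`, explain in plain language why "Jon Smith" matched "John Smith" and what the reviewer should check.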

Adverse media

News searches for negative coverage of customers ("X arrested," "X under investigation"). LLMs are great at this — read articles, classify relevance, summarize. Risk: hallucinated articles or misattributed claims. Defense: cite source URL, retrieve article text fresh.

Regulatory change management

New regulation is published. Compliance has to: read it, identify what changes for the firm, propose updates, train teams. AI's wedge: ingest regulation, draft impact analysis. Compliance lead reviews. High value, low risk (it's a draft, not an action).

Audit response

Regulator asks "show me how you handled X cases over the last 6 months." Today that's painful manual work. AI's wedge: produce summary + supporting documentation from logs. Risk: misrepresent the actual evidence. Defense: cite specific log/case IDs, never paraphrase.

Risk-tiering compliance AI workloads

A useful framework for the interview. Walk through tiers when discussing design:

| Tier | Examples | Gates |
| --- | --- | --- |
| Low | Drafting an internal regulatory summary, pre-formatting an alert for a human, RAG over policies | LLM alone, lightweight review, eval-monitored |
| Medium | Drafting a case narrative, drafting an EDD section, classifying alert severity | Human review pre-action, structured citations, full audit trail |
| High | Recommending SAR filing, freezing accounts, communicating with customers about KYC | Multiple human approvals, scoped tools, kill switch, mandatory eval, MRM sign-off |
| Forbidden | Autonomous decisions on filings, autonomous account actions, autonomous customer comms | Don't build it. Period. |

The shortcut

If asked "would you let the agent do X?" — risk-tier the X first, then answer.
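That shortcut can live in code: tier the action first, then gate it. The action names and approval counts below are illustrative; a real deployment derives the mapping from written policy, and unknown actions fail closed:

```python
from enum import Enum

class Tier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    FORBIDDEN = "forbidden"

# Illustrative mapping only — in practice this comes from policy, not code.
ACTION_TIERS = {
    "summarize_regulation": Tier.LOW,
    "draft_case_narrative": Tier.MEDIUM,
    "recommend_sar_filing": Tier.HIGH,
    "file_sar": Tier.FORBIDDEN,        # autonomous filing: never
    "freeze_account": Tier.FORBIDDEN,  # autonomous account action: never
}

def allowed(action: str, human_approvals: int = 0) -> bool:
    """Gate an agent action by risk tier. Unknown actions are forbidden."""
    tier = ACTION_TIERS.get(action, Tier.FORBIDDEN)
    if tier is Tier.FORBIDDEN:
        return False
    if tier is Tier.HIGH:
        return human_approvals >= 2   # multiple approvals, per the table
    if tier is Tier.MEDIUM:
        return human_approvals >= 1   # human review pre-action
    return True
```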

Know-your-jurisdictions (lightweight)

You don't need to know each rule. You need to know: jurisdictions differ, and the architecture must respect that.

  • US: BSA/FinCEN, OFAC sanctions, NYDFS for NY-licensed firms, state-level money transmission.
  • EU: AMLD (now AMLR — the new EU regulation), MiCA for crypto, GDPR for data.
  • UK: FCA, separate from EU post-Brexit, broadly aligned but distinct.
  • APAC: MAS (Singapore), JFSA (Japan), each with its own framework.

What this means for AI: data residency, model deployment region, data classification. EU customer data going to a US-hosted model is a flag. Architectures that route by jurisdiction are common.
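Jurisdiction routing is usually a small, boring function — which is the point. A sketch with invented endpoint URLs; note it fails closed rather than silently defaulting to another region:

```python
# Illustrative endpoint names; real deployments map these from infra config.
MODEL_ENDPOINTS = {
    "EU": "https://eu.models.internal/v1",  # EU-resident deployment
    "US": "https://us.models.internal/v1",
    "UK": "https://eu.models.internal/v1",  # example choice: UK data to EU region
}

def route_request(customer_jurisdiction: str) -> str:
    """Pick a model endpoint so customer data never leaves its approved
    region. Raising on an unknown jurisdiction beats a silent default."""
    try:
        return MODEL_ENDPOINTS[customer_jurisdiction]
    except KeyError:
        raise ValueError(f"no approved model region for {customer_jurisdiction}")
```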

The "thresholds and lists" reality

Compliance leans heavily on lists and thresholds:

  • Sanctions lists (updated frequently — your retrieval index must keep up).
  • PEP lists (commercial vendors maintain these).
  • Adverse media corpora.
  • Internal blocked lists.
  • Risk-rating thresholds ("transactions above $X to country Y").
  • Periodic review schedules (Tier 3 customer = annual EDD).

Strong design pattern

When AI is involved, the lists and thresholds are inputs the agent must consult — not things it should infer. The agent calls a tool that answers "is X on list Y as of today?" and never relies on its training-time knowledge.
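The tool itself can be trivial — what matters is the contract: the answer is dated, binary, and comes from a current snapshot, not the model's memory. A minimal sketch (the snapshot shape is invented for illustration):

```python
from datetime import date

def check_list(name: str, list_snapshot: dict) -> dict:
    """Tool the agent calls instead of 'remembering' list contents.

    list_snapshot: {"as_of": date, "entries": set of normalized names}.
    The agent gets a dated yes/no it can cite; it never answers from
    training-time knowledge."""
    hit = name.lower().strip() in list_snapshot["entries"]
    return {
        "name": name,
        "listed": hit,
        "as_of": list_snapshot["as_of"].isoformat(),
    }
```

The `as_of` field is not decoration: since sanctions lists update frequently, a list answer without a date is unusable as audit evidence.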

Concepts that signal "AI-aware compliance maturity"

Drop these in conversation if relevant:

  • Risk-based approach: regulators expect AI controls scaled to risk. Not all controls equal.
  • Explainability requirement: in many jurisdictions (notably under the EU AI Act), high-risk uses must be explainable to affected individuals.
  • Defense in depth: multiple overlapping controls. AI is one layer, not the only layer.
  • Four-eyes principle: significant decisions require two reviewers. AI doesn't replace either pair of eyes — but the second eyes can review the AI's draft.
  • Segregation of duties: the analyst who triages can't be the same human who approves the SAR. AI agents need similar role-based access controls.
  • Materiality / proportionality: don't deploy heavyweight controls on low-stakes flows.

Crypto-exchange-specific wrinkles

If the role is at a crypto exchange (or other firm with on-chain exposure), additional patterns apply:

  • Wallet screening: chain-analytics tools (Chainalysis, TRM, Elliptic) score addresses for risk (mixer exposure, sanctioned address proximity, hack proceeds). Often called via API; perfect for MCP-fronted tools.
  • Travel Rule: cross-VASP transfers above thresholds must carry originator/beneficiary information, transmitted over rails like TRP or Notabene.
  • Mixers / privacy tools: certain interactions raise risk scores automatically.
  • DeFi exposure: harder to attribute. New ground for compliance and AI.
  • MiCA compliance: EU's specific crypto regime — comes online progressively through 2025-2026.

If asked about crypto compliance, the AI angle is: ingest chain-analytics data + transaction graph + KYC + adverse media, draft a risk write-up. The hard part isn't the AI — it's the data fusion and ground-truth labeling for evals.
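That fusion step can be sketched as a single record the LLM drafts from. The weights and the 0.7 EDD trigger below are placeholders, not a calibrated risk model — the real work is sourcing and labeling these inputs:

```python
def fuse_wallet_risk(chain_score: float, kyc_tier: int, adverse_hits: int) -> dict:
    """Fuse chain-analytics score (0-1), KYC risk tier (1-3), and adverse
    media hit count into one record an LLM can draft a write-up from.
    Weights are illustrative placeholders."""
    composite = (
        0.6 * chain_score
        + 0.2 * (kyc_tier / 3)
        + 0.2 * min(adverse_hits, 5) / 5
    )
    return {
        "chain_score": chain_score,
        "kyc_tier": kyc_tier,
        "adverse_hits": adverse_hits,
        "composite": round(composite, 3),
        "needs_edd": composite >= 0.7,   # placeholder trigger
    }
```

Notice what the record does *not* contain: prose. The draft is generated from these fields, so every claim in the write-up traces back to a named input — which is what makes the eval-labeling problem tractable.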

Communicating with non-technical stakeholders

The JD calls this out. Practice this exact translation:

| Technical term | Plain English |
| --- | --- |
| "Eval" | "How we measure whether the AI is doing a good job before and during production." |
| "Hallucination" | "When the AI states something confidently that isn't actually true — like making up a case citation. We design specifically against that." |
| "Tool use" | "The AI is given a small set of approved actions — like 'look up this customer' or 'fetch the latest regulation' — and can only do those, with a record of every call." |
| "Human-in-the-loop" | "Every consequential output is reviewed and approved by a human before any external action." |
| "Audit trail" | "Every input the AI saw, every step it took, every output it produced, every human who reviewed — all of it logged immutably and retrievable." |
| "Drift" | "Quality of AI output can degrade over time as conditions change. We monitor specifically for that and alert." |

Practice saying these out loud. They'll come up.