Method / How We Build
Sprout builds AI systems that operate in regulated industries, handle personal data, and make decisions that affect real people. That makes AI ethics an engineering discipline, not a PR statement. OJK's April 2025 AI Governance Guidance is the Indonesian regulatory baseline. ISO/IEC 42001 is the international AI management system standard. BSSN's emerging AI governance posture, the EU AI Act, and NIST AI RMF set the broader frame. This page sets out what we publicly commit to, how we build to those commitments, and what “responsible AI” means in practice, not in slogans.
Every AI engagement at Sprout operates under a defined governance structure: model validation, bias testing, drift monitoring, human oversight on high-risk decisions, and documentation that audit teams can defend. The OJK April 2025 AI Governance Guidance defines six role profiles (AI Owner, Model Owner, Data Steward, Model Validator, Auditor, Compliance Lead) across the AI lifecycle. We staff those roles on regulated engagements. ISO/IEC 42001 is the international standard for AI management systems. Our practice is aligned and our certification path is documented. The specific commitments below are the ones we'll defend in writing.
Signature Visual
A horizontal lifecycle flow through six phases (design, build, validate, deploy, monitor, audit) with governance gates at each and regulator chips (OJK, UU PDP, ISO 42001, BSSN) attached to the appropriate sign-offs. A footer strip lists Sprout's published AI principles: transparency, fairness, human oversight, data stewardship, continuous improvement, audit readiness. Governance-documentation aesthetic. Coming soon.
Four principles: specific, published, testable.
Every regulated-sector AI engagement operates under OJK's six-role governance structure. Roles are staffed by named holders and their responsibilities documented. “Responsible AI” without named role holders is theater. We staff accordingly.
Every production AI deployment passes an evaluation harness: accuracy against ground truth, bias testing across customer segments, adversarial testing where applicable, human-oversight trigger testing. The validation report is part of deployment paperwork.
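A harness like the one described above can be sketched in a few lines. The following is illustrative only, not Sprout's actual tooling: the field names, accuracy floor, and bias-gap threshold are assumptions; real thresholds come from the engagement's risk assessment.

```python
from collections import defaultdict

def run_validation_harness(records, acc_floor=0.90, max_gap=0.05):
    """Check overall accuracy and the accuracy gap across customer segments.

    records: iterable of dicts with keys 'segment', 'label', 'pred'.
    acc_floor and max_gap are illustrative thresholds, not real policy.
    """
    per_segment = defaultdict(lambda: [0, 0])  # segment -> [correct, total]
    for r in records:
        correct, total = per_segment[r["segment"]]
        per_segment[r["segment"]] = [correct + (r["pred"] == r["label"]), total + 1]

    seg_acc = {s: c / t for s, (c, t) in per_segment.items()}
    overall = (sum(c for c, _ in per_segment.values())
               / sum(t for _, t in per_segment.values()))
    bias_gap = max(seg_acc.values()) - min(seg_acc.values())

    # The returned report becomes part of the deployment paperwork.
    return {
        "overall_accuracy": overall,
        "segment_accuracy": seg_acc,
        "bias_gap": bias_gap,
        "passed": overall >= acc_floor and bias_gap <= max_gap,
    }
```

A deployment gate then blocks release whenever `passed` is false, and the full report is archived with the validation documentation.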
Drift, bias, and accuracy decay are ongoing realities in production AI. We wire continuous monitoring from first deploy, not after the first incident. Drift reviews are scheduled, not opportunistic.
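One common way to wire the drift check described above is the population stability index (PSI), comparing the live feature distribution against the validation baseline. A minimal sketch; the bin counts and the 0.2 alert threshold are conventional rules of thumb, not Sprout-specific values:

```python
import math

def psi(baseline_counts, live_counts):
    """Population Stability Index between two binned distributions.

    Both inputs are per-bin counts over the same bin edges.
    Rule of thumb: PSI > 0.2 usually signals meaningful drift.
    """
    b_total = sum(baseline_counts)
    l_total = sum(live_counts)
    score = 0.0
    for b, l in zip(baseline_counts, live_counts):
        # Small floor avoids log/zero-division blowups on empty bins.
        p = max(b / b_total, 1e-6)
        q = max(l / l_total, 1e-6)
        score += (q - p) * math.log(q / p)
    return score

def drift_alert(baseline_counts, live_counts, threshold=0.2):
    """True when the live distribution has drifted past the threshold."""
    return psi(baseline_counts, live_counts) > threshold
```

Running this on a schedule against each monitored feature is what turns "drift reviews are scheduled" into an enforceable check rather than a good intention.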
OJK, BSSN, internal audit, client audit, regulator audit: the evidence pack, documentation trail, and governance records are produced once and kept current. An audit request should be a 2-day fulfillment, not a 2-month scramble.
Four specific commitments we make publicly and build to operationally.
Six-role governance structure staffed on OJK-supervised engagements. Model validation documentation, bias testing, drift monitoring, human-oversight trigger design. Evidence packs produced for OJK audit.
International AI Management System standard alignment. Policy, risk assessment, operational controls. Certification path documented; current status published transparently.
Model provenance documentation. Data source and licensing disclosure. Training-data lineage where applicable. Where Sprout uses frontier models from vendors such as Anthropic, OpenAI, or Google, the vendor and model version are documented in engagement artifacts.
High-risk decisions (credit, claims denial, medical advice, fraud flagging, regulatory decisioning) do not operate without human-in-the-loop. Oversight triggers defined, escalation paths documented, and override logging comprehensive.
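The oversight-trigger pattern above can be illustrated with a small routing gate. Everything here is a hypothetical sketch: the decision-type set, the confidence floor, and the log fields are assumptions; real triggers come from the engagement's risk classification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative high-risk decision types; the real list is per-engagement.
HIGH_RISK_DECISIONS = {"credit", "claims_denial", "medical_advice", "fraud_flag"}

@dataclass
class OversightGate:
    """Route high-risk or low-confidence model outputs to a human reviewer,
    logging every routing decision for the audit trail."""
    confidence_floor: float = 0.85  # illustrative threshold
    audit_log: list = field(default_factory=list)

    def route(self, decision_type, model_output, confidence):
        needs_human = (decision_type in HIGH_RISK_DECISIONS
                       or confidence < self.confidence_floor)
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "decision_type": decision_type,
            "model_output": model_output,
            "confidence": confidence,
            "routed_to_human": needs_human,
        })
        return "human_review" if needs_human else "auto"
```

The point of the sketch: high-risk decision types are escalated unconditionally, low-confidence outputs are escalated regardless of type, and the log captures both paths so overrides are reconstructable later.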
Sprout's AI governance operates under a regulatory reality that's still in active definition.
OJK's April 2025 AI Governance Guidance defines six role profiles across the AI lifecycle: AI Owner, Model Owner, Data Steward, Model Validator, Auditor, and Compliance Lead. For Indonesian AI deployments in OJK-supervised contexts, these roles are not optional. Sprout's practice staffs these roles on regulated-sector engagements.
EU AI Act (most provisions applicable from August 2026), NIST AI RMF, and ISO/IEC 42001 are converging on a shared governance vocabulary: risk classification, model validation, continuous monitoring, human oversight. For Indonesian AI deployments serving international audiences or cross-border data flows, alignment to these frameworks complements OJK compliance rather than duplicating it.
The EU AI Act's penalty structure (up to €35M or 7% of global turnover for prohibited AI practices; €15M or 3% for most other violations; €7.5M or 1% for supplying incorrect information) has set the global bar for how seriously AI governance compliance must be taken. Even firms operating exclusively in SEA face indirect exposure through clients or vendors with EU nexus.
What it actually looks like to staff AI Owner, Model Owner, Data Steward, Model Validator, Auditor, and Compliance Lead on a real OJK-supervised AI engagement. Role scopes, handoff patterns, and the documentation each produces.
Why ISO/IEC 42001 is emerging as the international AI governance baseline, what certification requires, and how SEA firms should sequence certification relative to ISO 27001 and SOC 2.
The operational patterns that separate “launched an AI system” from “running an AI system responsibly.” Drift review cadences, bias revalidation, incident-response wiring, and the metrics that tell the truth.
Tell us the engagement: the AI use case, the regulated surface (OJK / BI / BPJPH / SATUSEHAT / cross-border), the risk classification. We'll scope it with the OJK role structure, validation harness, and human-oversight design built in as requirements. Audit-ready is the default, not the upgrade tier.
Start a project