
Engineering Productivity.

Your engineers are already using AI. The question is whether they're using it well. Claude Code, Cursor, and GitHub Copilot are in every IDE at every tech-native company in Indonesia. Adoption is solved; value is not. We work with engineering leaders to operationalize AI across their teams: the tooling, the rituals, the evaluation discipline, the cultural shifts. The goal is for "AI-native engineering" to stop being a recruiting line and start being a measurable velocity and quality uplift.

AI Tooling Rollout · Spec-Driven Development · Eval-Aware Review · Velocity Measurement

From "everyone uses AI" to "our team ships 2x"

There's a gap between "our engineers use Claude Code" and "our engineering org is AI-native." The first is individual habit. The second is a system: specs written with AI in mind, reviews that catch AI-generated failure modes, evaluation harnesses for AI-generated code, pairing rituals that preserve architectural judgment, and metrics that actually distinguish "shipped faster" from "shipped worse, faster." We work with your engineering leaders (CTOs, VPs, Staff+ engineers) to build that system. Not a tool rollout. A practice.

70%+
Of developers globally now use AI coding tools regularly
APAC leadership
Asia Pacific is accelerating from generative to agentic engineering practices
Anthropic · OpenAI · Microsoft · Google
Every major model vendor ships AI coding tools as a first-class product in 2026
Tokopedia · GoTo · BCA Digital · Alodokter
Indonesian tech-native orgs running AI-assisted engineering at scale

How we make your engineering org AI-native

Four phases from "individual AI-tool users" to "AI-native engineering practice." Less a tool rollout, more a practice shift.

01

Discover

We interview engineers, team leads, and the CTO. How is AI actually being used today? Where is it working? Where is it creating quiet technical debt? Which teams are ahead, which are resistant? Which workflows lose the most time to mechanical work? We instrument baseline velocity and quality metrics before we change anything.

02

Pilot

A six-week pilot with one or two engineering squads. We roll out tooling (Claude Code / Cursor / Copilot / organization-specific agents), establish pairing rituals, introduce spec-driven development patterns, and install eval-aware review practices. Pass/fail on squad-level velocity, defect rate, and engineer sentiment.

03

Validate

We measure: PR cycle time, deploys per engineer per week, review-comment density, test coverage, defect rate, and, crucially, engineer-reported sentiment. We identify what's working, what isn't, and which practices warrant org-wide rollout versus squad-level adaptation.
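For concreteness, "PR cycle time" here means elapsed time from a pull request being opened to it being merged. A minimal sketch of the baseline computation, assuming illustrative field names ('opened_at', 'merged_at') rather than any specific platform's API:

```python
from datetime import datetime
from statistics import median

def median_pr_cycle_hours(prs: list[dict]) -> float:
    """Median open-to-merge time in hours for merged PRs.

    Assumes each PR dict carries ISO-8601 'opened_at' and 'merged_at'
    timestamps (illustrative field names, not a real API schema).
    Unmerged PRs (merged_at is None) are excluded.
    """
    durations = [
        (datetime.fromisoformat(pr["merged_at"])
         - datetime.fromisoformat(pr["opened_at"])).total_seconds() / 3600
        for pr in prs
        if pr.get("merged_at")
    ]
    return median(durations)

prs = [
    {"opened_at": "2026-01-05T09:00:00", "merged_at": "2026-01-05T15:00:00"},
    {"opened_at": "2026-01-06T10:00:00", "merged_at": "2026-01-07T10:00:00"},
    {"opened_at": "2026-01-08T08:00:00", "merged_at": None},
]
print(median_pr_cycle_hours(prs))  # -> 15.0
```

Running this weekly per squad, before and after the pilot, is what makes "shipped faster" a claim you can check rather than a feeling.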

04

Scale

Rollout across the engineering organization. Documentation, internal training, office hours, and the cultural scaffolding that keeps the practice alive after the engagement ends. You get the playbook, the metrics dashboard, and a relationship for ongoing evolution as tools mature.

What we build for your engineering org

Four disciplines that together turn "everyone has AI in their IDE" into a compounding engineering practice.

AI Tooling Rollout & Integration

Thoughtful deployment of AI coding tools: Claude Code, Cursor, GitHub Copilot, and internal agents. Policy work (what's safe to generate, what requires human sign-off, what goes in the PR template). IP and security posture for AI-generated code.

Tool Selection · Policy Design · IP + Security · Internal Agents

Spec-Driven Development Rituals

AI amplifies whatever you feed it: vague spec in, vague code out. We install practices that make specs the primary engineering artifact (testable, scoped, and AI-friendly), so your engineers spend more time on judgment and less on translation.

Spec Templates · Acceptance Criteria · Testable Specs · Scoping Rituals
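One way to make "testable spec" concrete: write acceptance criteria as assertions that any implementation, human- or AI-generated, must pass before review. A hypothetical illustration (the spec, function, and criteria are invented for this sketch, not taken from a client engagement):

```python
# Hypothetical spec: "normalize Indonesian phone numbers to +62 form".
# Each acceptance criterion from the spec becomes one assertion, so an
# AI-generated implementation is checked against the spec itself rather
# than against a reviewer's memory of it.

def normalize_phone(raw: str) -> str:
    """Illustrative implementation satisfying the spec below."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if digits.startswith("0"):
        # Criterion: a leading zero is replaced by the 62 country code.
        digits = "62" + digits[1:]
    return "+" + digits

# Acceptance criteria, copied verbatim from the (hypothetical) spec:
assert normalize_phone("0812-3456-7890") == "+6281234567890"
assert normalize_phone("+62 812 3456 7890") == "+6281234567890"
assert normalize_phone("62 812 3456 7890") == "+6281234567890"
```

The spec document and the test file stay in lockstep; when the spec changes, the assertions change first, and the AI tool regenerates against them.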

Eval-Aware Code Review

AI-generated code fails in specific, patterned ways: hallucinated imports, subtle correctness regressions, incomplete error handling, security-pattern drift. We train your reviewers to catch these, and install eval harnesses for critical modules that catch what humans miss.

AI Failure Modes · Review Checklists · Eval Harnesses · Critical-Path Testing
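One of the failure modes above, hallucinated imports, can be caught mechanically before a human ever reviews the diff. A minimal sketch using only Python's standard library (not our production harness, which layers more checks on top):

```python
import ast
import importlib.util

def unresolvable_imports(source: str) -> list[str]:
    """Return module names imported in `source` that do not resolve in
    the current environment -- a cheap pre-review gate for
    AI-hallucinated dependencies."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue  # skip relative imports and non-import nodes
        for name in names:
            top = name.split(".")[0]  # check only the top-level package
            if importlib.util.find_spec(top) is None:
                missing.append(name)
    return missing

print(unresolvable_imports("import os\nimport totally_made_up_pkg"))
# -> ['totally_made_up_pkg']
```

Wired into CI or a pre-commit hook, a check like this turns a whole class of AI failure into a red X instead of a reviewer's job.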

Velocity, Quality & Sentiment Measurement

The metrics that actually tell you whether AI is making your team better: PR cycle time, deploys per engineer, review quality, and defect rate, alongside engineer sentiment, because a team that's fast and unhappy isn't a team that stays.

PR Metrics · Defect Tracking · Sentiment Sampling · Weekly Dashboards

Engineering productivity in action

One engineering-team engagement we support, plus the market shape for AI-native engineering.

Sprout Work · Major Indonesian Digital Health Platform

Coaching an engineering org through AI-native practice adoption

We work with a major Indonesian digital health platform's engineering team to operationalize AI in day-to-day development: tooling, spec-driven development rituals, eval-aware review practices, and the cultural shifts that turn individual tool use into team-level velocity. The engagement focuses on sustainable adoption rather than a one-time tool rollout.

Ongoing
Engineering-org transformation in progress
Market Benchmark · Global engineering · AI tool adoption

AI coding tools are now the default, not the exception

Every major model vendor (Anthropic, OpenAI, Microsoft, Google) has shipped AI coding tools as a first-class product in 2026. Industry surveys consistently report majority-developer adoption across regions. The question has shifted from "should our engineers use AI?" to "is our engineering org extracting value from the tools already on their laptops?"

70%+
Global developer adoption of AI coding tools
Market Benchmark · APAC · Engineering AI

APAC is accelerating from generative to agentic engineering

Asia Pacific technology organizations are moving faster than global peers from "engineers using AI assistants" to "engineering teams operating alongside AI agents": code generation, automated refactoring, agentic test creation. Indonesian tech-native companies are part of that shift.

2025–26
APAC acceleration from generative to agentic engineering practice

The team behind our engineering-productivity work

The engineering leaders who treat AI-native practice as a discipline, not a tool.

Arnold Sebastian

Founder & Head of AI
Muhammad Azki Darmawan

Lead ML Engineer

How much of your engineers' day is spent on work that doesn't need them?

Tell us where your team is: adopting, resistant, overeager, or plateaued. We'll start with a conversation about your engineering org's current practice, baseline metrics, and the cultural reality of your team. A six-week pilot typically covers one or two squads: tooling, rituals, and measurement.

Start a project