AI / Solutions
Your engineers are already using AI. The question is whether they're using it well. Claude Code, Cursor, and GitHub Copilot are in every IDE at every tech-native company in Indonesia. Adoption is solved. Value is not. We work with engineering leaders to operationalize AI across their teams: the tooling, the rituals, the evaluation discipline, and the cultural shifts, so that "AI-native engineering" stops being a recruiting line and starts being a measurable velocity and quality uplift.
There's a gap between "our engineers use Claude Code" and "our engineering org is AI-native." The first is individual habit. The second is a system: specs written with AI in mind, reviews that catch AI-generated failure modes, evaluation harnesses for AI-generated code, pairing rituals that preserve architectural judgment, and metrics that actually distinguish "shipped faster" from "shipped worse, faster." We work with your engineering leaders (CTOs, VPs, Staff+ engineers) to build that system. Not a tool rollout. A practice.
Four phases from "individual AI-tool users" to "AI-native engineering practice." Less a tool rollout, more a practice shift.
We interview engineers, team leads, and the CTO: How is AI actually being used today? Where is it working? Where is it creating quiet technical debt? Which teams are ahead, and which are resistant? Which workflows lose the most time to mechanical work? We instrument baseline velocity and quality metrics before we change anything.
A six-week pilot with one or two engineering squads. We roll out tooling (Claude Code / Cursor / Copilot / organization-specific agents), establish pairing rituals, introduce spec-driven development patterns, and install eval-aware review practices. Pass/fail on squad-level velocity, defect rate, and engineer sentiment.
We measure: PR cycle time, deploys per engineer per week, review-comment density, test coverage, defect rate, and, crucially, engineer-reported sentiment. We identify what's working, what isn't, and what needs org-wide vs. squad-level application.
Rollout across the engineering organization. Documentation, internal training, office hours, and the cultural scaffolding that keeps the practice alive after the engagement ends. You get the playbook, the metrics dashboard, and a relationship for ongoing evolution as tools mature.
Four disciplines that together turn "everyone has AI in their IDE" into a compounding engineering practice.
Thoughtful deployment of AI coding tools: Claude Code, Cursor, GitHub Copilot, and internal agents. Policy work (what's safe to generate, what requires human sign-off, what goes in the PR template). IP and security posture for AI-generated code.
AI amplifies whatever you feed it. Vague spec in, vague code out. We install practices that make specs the primary engineering artifact: testable, scoped, and AI-friendly, so your engineers spend more time on judgment and less on translation.
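One way the "spec as primary artifact" idea looks in practice: acceptance criteria written as executable checks before any implementation is generated, so the contract is testable rather than prose. A minimal sketch; the function, tiers, and rates below are invented for illustration, not a client artifact.

```python
# Sketch of a spec-first workflow: the checks, not the prose, are the contract
# that an AI coding tool (or a human) must satisfy. Everything here is
# illustrative -- a hypothetical discount function with made-up rates.

SPEC = """
apply_discount(price, tier):
  - "gold" tier gets 10% off, "silver" 5%, anything else 0%.
  - price must be >= 0, otherwise raise ValueError.
"""

def apply_discount(price: float, tier: str) -> float:
    # Minimal reference implementation satisfying the checks below.
    if price < 0:
        raise ValueError("price must be >= 0")
    rate = {"gold": 0.10, "silver": 0.05}.get(tier, 0.0)
    return price * (1 - rate)

# Acceptance checks: scoped, unambiguous, and equally legible to a reviewer
# and to an AI tool asked to (re)generate the implementation.
assert apply_discount(100.0, "gold") == 90.0
assert apply_discount(100.0, "silver") == 95.0
assert apply_discount(100.0, "bronze") == 100.0
```

The point is the shape, not the arithmetic: the spec lives next to checks that fail loudly, which is what makes it "AI-friendly."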
AI-generated code fails in specific, patterned ways: hallucinated imports, subtle correctness regressions, incomplete error handling, security-pattern drift. We train your reviewers to catch these, and we install eval harnesses for critical modules that catch what humans miss.
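One checklist item, "hallucinated imports," can even be mechanized. A minimal sketch of the idea, assuming Python code under review: parse the file and flag any imported module that doesn't resolve in the current environment. The function name and snippet are illustrative, not a real client tool.

```python
# Sketch: flag imports in AI-generated Python that don't resolve in the
# current environment -- one mechanizable item from a review checklist
# for hallucinated imports. Illustrative only.
import ast
import importlib.util

def unresolved_imports(source: str) -> list[str]:
    """Return imported module names in `source` whose top-level package
    cannot be found by the import system."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue  # skip relative imports and non-import nodes
        for name in names:
            root = name.split(".")[0]
            if importlib.util.find_spec(root) is None:
                missing.append(name)
    return missing

snippet = "import os\nimport totally_made_up_pkg\n"
print(unresolved_imports(snippet))  # modules the reviewer should double-check
```

A check like this doesn't replace review; it frees the reviewer to focus on the failure modes that can't be linted, like subtle correctness regressions.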
The metrics that actually tell you whether AI is making your team better: PR cycle time, deploys per engineer, review quality, and defect rate, alongside engineer sentiment, because a team that's fast and unhappy isn't a team that stays.
One engineering-team engagement we support, plus the market shape for AI-native engineering.
We work with a major Indonesian digital health platform's engineering team on operationalizing AI in day-to-day development: tooling, spec-driven development rituals, eval-aware review practices, and the cultural shifts that turn individual tool use into team-level velocity. The engagement focuses on sustainable adoption rather than a one-time tool rollout.
Every major model vendor (Anthropic, OpenAI, Microsoft, Google) has shipped AI coding tools as a first-class product in 2026. Industry surveys consistently report majority-developer adoption across regions. The question has shifted from "should our engineers use AI?" to "is our engineering org extracting value from the tools already on their laptops?"
Asia Pacific technology organizations are moving faster than global peers from "engineers using AI assistants" to "engineering teams operating alongside AI agents": code generation, automated refactoring, agentic test creation. Indonesian tech-native companies are part of that shift.

The practice shift from "coding with AI help" to "specifying with AI in mind." Why specs are now the primary engineering artifact, and what good spec hygiene looks like in a Claude Code + Cursor world.

Hallucinated imports, subtle correctness regressions, security-pattern drift, completeness gaps: the patterned ways AI-generated code fails, and the review checklist we install at client engagements to catch them.

PR cycle time, defect rate, sentiment: the measurement stack that distinguishes "faster" from "faster but worse," and why every CTO installing AI tools needs this baseline before the rollout, not after.
The engineering leaders who treat AI-native practice as a discipline, not a tool.


Tell us where your team is: adopting, resistant, overeager, or plateaued. We'll start with a conversation about your engineering org's current practice, baseline metrics, and the cultural reality of your team. A six-week pilot typically focuses on one or two squads: tooling, rituals, and measurement.
Start a project