Statsig enabled Code.org to confidently measure product impact at scale, uncover previously invisible user behavior, accelerate experimentation, and support its mission to bring CS education to every classroom.

How Code.org unlocked data visibility for millions of users

40% of users now visible who previously could not be tracked

7.5 percentage point lift in U.S. teachers associating with schools during signup

Code.org is a nonprofit on a mission to expand access to computer science and artificial intelligence in schools—and increase participation by young women and underrepresented groups. With millions of students and teachers using its platform, every product decision impacts real classrooms around the world. But as the team sought to mature its product development process, it faced a challenge familiar to many: building a culture of data-informed decision-making without access to the right tools.

Life before Statsig: seeking better product visibility

Before adopting Statsig, Code.org's data-driven initiatives faced several significant challenges. Experimentation had stagnated after the team discontinued its Optimizely contract, whose pricing exceeded the nonprofit's budget, leaving it without a robust A/B testing solution for several years.

“We got priced out of Optimizely several years ago. Since then, we got to a point where most people at the company didn’t even remember that we ever used to run A/B tests.”
Dani LaMarca, PM, Code.org

On the analytics side, Code.org relied on Amplitude—but quickly ran into major limitations. Amplitude's pricing restricted the volume of logged events, forcing Code.org to selectively track only smaller, isolated user behaviors, which significantly obscured insight into broader product usage.

Because many of Code.org's student users are anonymous, the team was missing a critical view of how a large portion of users engaged with the platform. Beyond sampling, their Amplitude setup captured only frontend events, leaving backend interactions invisible and disconnecting key workflows from analysis. Most importantly, they couldn't tie product changes to real impact.

“We’d see a graph go up and assume it was because of a launch—but we couldn’t tell if it was the product or just the time of year. There was no way to separate correlation from causation.”
Dani LaMarca, PM, Code.org

Recognizing these limitations, the team began searching for a solution that could unlock full product visibility and allow them to measure the true impact of every change.

Reigniting a data-driven product culture

Code.org adopted Statsig in spring 2023 to bring more rigor and clarity to product development. With millions of students and teachers—many using the platform anonymously—the team needed more than analytics. They needed a way to confidently attribute product changes to outcomes, standardize metric definitions, and unify product data in one trusted system.

The team’s first experiment launched just weeks after integrating Statsig: a revamp of the teacher-school association step during signup. The change resulted in a 7.5 percentage point lift in U.S. teachers successfully associating with schools—an estimated gain of 6,000 fully attributed teacher accounts over the next year.

Another experiment focused on adding a new educator role field to the signup flow. The goal was to collect better data on which users were classroom teachers versus parents or administrators—without hurting conversion. The result: no drop in signup rate, validating that additional questions could be added without creating friction.
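To make the mechanics concrete, here is a minimal sketch of how a signup-flow experiment like these might be wired up with Statsig's statsig-js client SDK. The experiment name, parameter, and render functions are hypothetical illustrations, not Code.org's actual implementation.

```typescript
// Sketch: gating a revamped signup step behind a Statsig experiment,
// using the statsig-js client SDK. Names below are hypothetical.
import statsig from 'statsig-js';

function renderRevampedSchoolAssociationStep(): void {
  console.log('Rendering revamped school association step');
}

function renderOriginalSchoolAssociationStep(): void {
  console.log('Rendering original school association step');
}

async function renderSignupStep(): Promise<void> {
  // Initialize with a client SDK key and the current (possibly anonymous) user.
  await statsig.initialize('client-YOUR_SDK_KEY', { userID: 'teacher-123' });

  // Read the experiment parameter to decide which variant this user sees.
  const experiment = statsig.getExperiment('signup_school_association');
  if (experiment.get('use_revamped_step', false)) {
    renderRevampedSchoolAssociationStep();
  } else {
    renderOriginalSchoolAssociationStep();
  }

  // Log the outcome metric the experiment scorecard measures.
  statsig.logEvent('school_association_completed');
}

renderSignupStep();
```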

Since then, experimentation has become a regular part of product development—enabling the team to validate assumptions, de-risk major changes, and iterate with greater confidence.

Statsig is now widely adopted across the broader product organization. Product managers rely on it for reporting and trend analysis, the global team measures adoption in key regions, and the business analytics team pulls Statsig data into Redshift to support engagement analysis.

Deeper insights on student behavior

Many of Code.org's student users access the learning platform and engage with the curriculum without ever formally creating an account. Historically, however, the organization captured user data only for signed-in users, which meant Code.org had no data on the usage and behaviors of the hundreds of thousands of students who accessed the platform and learned anonymously.

Statsig made it possible not just to count anonymous users, but to match them with signed-in accounts if those students later decided to register. Today, about 40% of Code.org's Statsig user events are associated with anonymous users. This allows the curriculum team to develop a more complete picture of the products and curriculum that students are choosing, and it allows the development and marketing teams to share figures with funders that reflect the organization's full reach and impact, rather than undercounting by relying only on signed-in students.
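As a rough illustration of how this works in practice, the sketch below (assuming the statsig-js client SDK, with illustrative event names and IDs) logs events for an anonymous student and then re-identifies the same device once an account is created, so earlier activity can be tied to the new account.

```typescript
// Sketch: capturing anonymous activity and re-identifying the student
// after sign-up, using the statsig-js client SDK. Names are illustrative.
import statsig from 'statsig-js';

async function trackAnonymousThenSignedIn(): Promise<void> {
  // No userID yet: the SDK keys events to a device-level stable ID.
  await statsig.initialize('client-YOUR_SDK_KEY', {});

  // Anonymous activity is still captured.
  statsig.logEvent('lesson_started', 'intro-to-ai', { unit: 'unit-1' });

  // Later, the student creates an account; update the user so subsequent
  // events carry the new userID alongside the same stable ID.
  await statsig.updateUser({ userID: 'student-42' });

  statsig.logEvent('lesson_completed', 'intro-to-ai', { unit: 'unit-1' });
}

trackAnonymousThenSignedIn();
```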

Making every metric count: analytics as a core workflow

While experimentation gave Code.org more confidence in product decisions, it was Statsig’s analytics capabilities that became the backbone of visibility and continuous improvement across teams. The team now uses Statsig to capture a complete picture of user behavior—including both signed-in and anonymous users—across frontend and backend events, giving every product team a shared view of how the platform is used and where improvements can be made.
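For example, a backend workflow might be logged with Statsig's statsig-node server SDK roughly as follows; the event name, user, and metadata are illustrative.

```typescript
// Sketch: logging a backend event with the statsig-node server SDK so that
// server-side workflows show up alongside frontend events.
import statsig from 'statsig-node';

async function logBackendSignupEvent(): Promise<void> {
  await statsig.initialize('secret-YOUR_SERVER_KEY');

  // Server SDKs take the user explicitly on each call.
  statsig.logEvent(
    { userID: 'teacher-123' },
    'school_association_saved',
    null,
    { source: 'backend', region: 'US' }
  );

  // Flush queued events before the process exits.
  statsig.shutdown();
}

logBackendSignupEvent();
```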

Dashboards are now part of the daily workflow. Product managers monitor key trends, the curriculum team tracks lesson-level drop-off, and the business analytics team exports event data to Redshift for downstream reporting. A common setup pairs funnel conversion graphs with conversion-over-time trends, helping teams track both performance snapshots and directional movement. Dashboards are easy to duplicate, resize, and customize—allowing teams to tailor views to their workflows.

One standout use case: regression detection. When Code.org uploaded its annual batch of government-supplied school data, a backend issue quietly disrupted the signup flow. Statsig helped the team pinpoint when conversions dropped, isolate the affected cohort, and estimate the impact—enabling a fast diagnosis and resolution before the issue scaled.

Other features, like the Group By experiment tool, have also proven essential. In one case, a product manager forgot to tag a key metric in the original setup but was able to segment the data post-hoc—turning a potential miss into a useful learning opportunity.

Compared to Amplitude—where data sampling and frontend-only logging limited what the team could see—Statsig gives Code.org complete, reliable, and unified measurement. With richer data, direct attribution, and flexible self-serve tools, teams across the organization now have a clear view into how their work drives impact.

From guesswork to product confidence

Statsig’s adoption has delivered clear business value for Code.org—both in streamlining tooling and enabling better decision-making. Consolidating experimentation and analytics into a single platform allowed the team to fully replace Amplitude, cutting costs and reducing engineering overhead while increasing data fidelity. With all product data—frontend, backend, signed-in, and anonymous—in one place, teams across the organization can now make decisions based on a single source of truth.

This unified ecosystem has changed how product teams work. Rather than guessing which changes matter, teams can now tie experiments directly to business outcomes and confidently prioritize features that move the needle. That attribution model is especially powerful at scale, where even small changes in conversion or engagement can impact thousands of classrooms.

Beyond product capabilities, the team highlighted Statsig’s hands-on support and rapid iteration. Whether it was making dashboards easier to build, or helping unblock anonymous user tracking, the Statsig team remained a partner—not just a vendor—through every step of Code.org’s journey.

“Most businesses, once you’ve signed the contract, you’re on your own. But Statsig has continued to collaborate, answer questions in Slack, and quickly incorporate our feedback into the product. That’s such a valuable thing for us.”
Dani LaMarca, PM, Code.org

What’s next: a stronger, smarter product culture

Looking ahead, Code.org plans to scale both experimentation and analytics across more teams. While initial A/B testing efforts were led by the Acquisitions team, other groups—like curriculum, global, and those building new AI tools—are beginning to adopt Statsig to guide product decisions.

On the analytics side, continued improvements to data stitching and anonymous user tracking are helping unlock even deeper insights into long-term engagement—enabling the team to better measure learning outcomes and understand how users interact with the platform over time.

About Code.org

Code.org is a nonprofit dedicated to expanding access to computer science and artificial intelligence education in schools and increasing participation by women and underrepresented groups. With a mission to make computer science a foundational part of K-12 education, Code.org supports millions of students and teachers globally through free curriculum, training programs, and tools.
