Statsig x enterprise: Building for scale and complexity

Fri Jan 05 2024

Cooper Reid

Solutions Engineer, Statsig

Statsig has grown tremendously over the past year.

We’ve had the pleasure (and the challenge) of working with more enterprise businesses than ever before, putting our platform to the test and accelerating its capabilities to accommodate these organizations.

Enterprise companies are defined by their organizational structure: they have multiple business units or lines of business, so the success of their experimentation program requires a carefully orchestrated effort across many teams and systems. Their organizational structure and technical requirements are often far more complex than those of SMBs in a variety of areas.

Below are some of the key areas enterprises care about when evaluating SaaS experimentation providers, and areas in which we have invested heavily and will continue to invest, both within the core product and via APIs that promote extensibility:


Data governance

Data is often considered the most important asset in today’s digital world, and enterprises have codified policies on how data is collected, owned, and consumed. These internal policies require technology providers to offer interoperability, ensuring that an organization comprising many business units, each with a myriad of data consumption use cases, can remain in compliance.

Statsig recognizes that teams coalescing around the data warehouse to build products has become the de facto pattern among most enterprises, and our warehouse-native platform allows businesses to leverage this data using a model that promotes both:

Central governance & compliance

Data teams have invested heavily in ACL permissioning, data quality standards, monitoring, and more for their warehouse. Leveraging Statsig’s warehouse-native product allows enterprises to experiment within the confines of their defined governance mechanisms without having to reinvent the wheel for each new SaaS vendor.

The warehouse-native model calls for a single access point to the warehouse, with narrowly-scoped access granted to a service user as detailed in our documentation.

Unified metric definitions for analysis

Historically, enterprises had to redefine all of their metrics in each of their last-mile analysis platforms. This meant data was redundantly copied over the network to third-party platforms, and reporting suffered from discrepancies because it’s impossible to guarantee consistent metric definitions across disparate platforms, especially when they have varying SLAs on latency and computation.

Enterprises have a myriad of use cases for their data and want to consume a consistent set of metrics across their tools (e.g., internal data tools and 3rd party platforms for analytics and experimentation). For that reason, they choose to implement a Semantic Layer to serve as the canonical metrics catalogue.

Today, our enterprise engineering team is working with enterprises to reduce the burden of copying metric definitions from their warehouse by automating the process with our Metrics API, using some of the techniques outlined in our article about semantic layers.

☝️🤖 Related reading: The semantic layer and Statsig: A partnership for better experimentation.
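As a rough illustration of what this kind of automation can look like, the sketch below pushes a single metric definition from a semantic-layer export into Statsig via the Console API. The endpoint path, payload fields, and the `SemanticLayerMetric` shape are illustrative assumptions rather than the exact contract; the Metrics API documentation has the authoritative details.

```typescript
// Minimal sketch: push one metric definition from a semantic-layer export into
// Statsig via the Console API. The endpoint path and payload fields below are
// illustrative assumptions; the Metrics API docs have the authoritative contract.

// Hypothetical shape of a metric exported from your semantic layer.
interface SemanticLayerMetric {
  name: string;
  description: string;
  sqlExpression: string; // e.g., "SUM(order_total)"
}

async function syncMetricToStatsig(metric: SemanticLayerMetric): Promise<void> {
  const response = await fetch("https://statsigapi.net/console/v1/metrics", {
    method: "POST",
    headers: {
      "STATSIG-API-KEY": process.env.STATSIG_CONSOLE_API_KEY ?? "",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: metric.name,
      description: metric.description,
      // Warehouse-native metrics reference warehouse expressions rather than
      // copying the underlying data to a third party.
      expression: metric.sqlExpression,
    }),
  });

  if (!response.ok) {
    throw new Error(`Failed to sync metric "${metric.name}": ${response.status}`);
  }
}
```

Running a job like this on a schedule keeps the semantic layer as the single source of truth, with Statsig consuming (rather than redefining) those definitions.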

Access controls

Enterprises typically have employees with more granular responsibilities than SMBs, where a small number of people carry a wide variety of responsibilities. For this reason, enterprises want access controls in place that promote the principle of least privilege, ensuring users have access to only the capabilities they need in order to perform their work.

Statsig has begun enhancing our Role-Based Access Controls (RBAC), enabling admins to control the rights that users have within a project. This feature, paired with the Approvals Workflow capabilities, gives large teams confidence that the platform has guardrails for minimizing human error and narrowing the scope of access and actions available to users.

In the spirit of minimizing human error and promoting consistency, we’ve also released support for Experiment Policies, where you can configure organization-wide experiment defaults (such as Bayesian vs. frequentist analysis, confidence intervals, stats techniques, and hypothesis blueprints) and decide whether to enforce them.

The controls described above were delivered as Phase 1 and will serve as the building blocks for further enhancing enterprise-grade access controls this year, including:

  • Team management capabilities, whereby various entities within a Statsig project can be owned by a specific “team” and RBAC can be controlled both globally and at a team scope.

  • IdP Role Syncing, whereby an employee’s access can be centrally governed in an Identity Provider (e.g., Azure, Okta) and those roles can be mirrored when the user is created in Statsig.

Monitoring, auditing, alerting

In an organization with dozens of teams and hundreds or thousands of employees, it’s critical to have a bird’s-eye view of the systems being accessed. Enterprises want a paper trail of all systems being accessed and activities being performed in order to remain in compliance.

Statsig offers various APIs and integrations that can be leveraged to promote greater monitoring within an organization.
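As one hedged example, the sketch below polls the Console API for recent audit entries and forwards anything on a watchlist to an internal alerting webhook. The `/audit_logs` path, the response fields, and the `ALERT_WEBHOOK_URL` destination are assumptions used only for illustration; check the Console API reference for the actual audit log contract.

```typescript
// Sketch: poll the Console API for recent audit entries and forward notable
// actions to an internal alerting webhook. The endpoint path, response fields,
// and ALERT_WEBHOOK_URL are assumptions used only for illustration.

const WATCHED_ACTIONS = ["gate_deleted", "experiment_started", "role_changed"];

async function forwardNotableAuditEvents(): Promise<void> {
  const res = await fetch("https://statsigapi.net/console/v1/audit_logs", {
    headers: { "STATSIG-API-KEY": process.env.STATSIG_CONSOLE_API_KEY ?? "" },
  });
  if (!res.ok) throw new Error(`Audit log fetch failed: ${res.status}`);

  const { data } = (await res.json()) as {
    data: { action: string; actorEmail: string; changeLog: string }[];
  };

  for (const entry of data) {
    if (WATCHED_ACTIONS.includes(entry.action)) {
      await fetch(process.env.ALERT_WEBHOOK_URL ?? "", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(entry),
      });
    }
  }
}
```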

Automations

Enterprises have a common need to automate workflows across a number of systems. Many enterprises manage a large number of applications across teams and want these applications to be interoperable with their experimentation and feature management platform, minimizing the room for human error and the need for people to log into multiple systems to perform a single task.

Statsig offers various APIs and Integrations that our enterprise customers are leveraging for automations, including:

  • Companies with support teams build tooling on top of our Console API so that support agents can remove a customer from a holdout, granting them access to a feature that has been publicly announced (a sketch of this kind of tooling follows this list).

  • Companies build operations tools that let their country- and city-level teams toggle features on and off for their city without having direct access to Statsig or the gates.

  • Companies use our Datadog trigger integration to automatically disable a feature if a system monitor is triggered in Datadog.

  • Companies use our Terraform integrations to manage features/experiments in code and have these configurations propagate into downstream systems.

  • Companies use the Statsig GitHub Action plugin to configure features & experiments at build time, add kill switches to tests, and run experiments on performance improvements.
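To make the first bullet above concrete, here’s a minimal sketch of the kind of internal support tooling that can sit on top of the Console API: it adds a customer to a passing override on the holdout’s feature gate so they immediately see the announced feature. The override endpoint path and payload fields are assumptions for illustration; the Console API reference documents the exact shape.

```typescript
// Sketch of internal support tooling: add a customer to a passing override on
// the holdout's feature gate so they see the publicly announced feature.
// The endpoint path and payload fields are illustrative assumptions; see the
// Console API reference for the exact override contract.

async function releaseCustomerFromHoldout(
  holdoutGateId: string,
  customerUserId: string
): Promise<void> {
  const res = await fetch(
    `https://statsigapi.net/console/v1/gates/${holdoutGateId}/overrides`,
    {
      method: "POST",
      headers: {
        "STATSIG-API-KEY": process.env.STATSIG_CONSOLE_API_KEY ?? "",
        "Content-Type": "application/json",
      },
      // Force this user to pass the gate, overriding the holdout assignment.
      body: JSON.stringify({ passingUserIDs: [customerUserId] }),
    }
  );

  if (!res.ok) {
    throw new Error(
      `Failed to release ${customerUserId} from holdout: ${res.status}`
    );
  }
}
```

A thin internal UI on top of a function like this lets support agents unblock a customer without ever granting them direct access to the Statsig console.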


Experimentation program oversight

Enterprises invest heavily in resources and technology to build the best experimentation program for their business, and they want continuous assurance that the program is running efficiently and that they’re seeing ROI from the SaaS platforms they use.

Business units are tasked with capturing and communicating the impact of their experimentation program to key business stakeholders and decision-makers. What we’ve consistently heard from these teams is the desire to effectively communicate metric performance over a fiscal quarter, share experiment summaries and reports, and monitor platform usage.

  • This quarter, we released tools in the Console for generating experiment summaries and reports that can be shared with key stakeholders.

  • We also offer tools for monitoring and exporting billable event volumes, and some customers build automations to get periodic meter readings on usage and billing via the Console API (a sketch of this kind of job follows this list).

  • Businesses use Holdouts to measure the cumulative impact that experimentation has had on revenue over the course of a quarter. This helps them validate the overall net-positive impact experimentation is having, and even enables them to validate the proxy metrics they use that correlate with revenue.

  • Insights can be used to see which features and tests (both active and historical) had the biggest impact on your metrics. For example, you might look at an "add to cart" metric and observe that the V2 cart layout features caused a lift.
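As a sketch of the meter-reading automation mentioned above, a scheduled job might look something like this. The `/usage` path, its date query parameter, and the `billableEvents` response field are hypothetical placeholders used only for illustration.

```typescript
// Sketch: a scheduled "meter reading" of billable event volume via the Console
// API, e.g., run daily by a cron job. The /usage path, the date query
// parameter, and the billableEvents field are hypothetical placeholders.

async function recordDailyEventUsage(date: string): Promise<number> {
  const res = await fetch(
    `https://statsigapi.net/console/v1/usage?date=${encodeURIComponent(date)}`,
    { headers: { "STATSIG-API-KEY": process.env.STATSIG_CONSOLE_API_KEY ?? "" } }
  );
  if (!res.ok) throw new Error(`Usage fetch failed: ${res.status}`);

  const { billableEvents } = (await res.json()) as { billableEvents: number };

  // Forward the reading wherever finance/ops reporting lives
  // (a warehouse table, a spreadsheet, an internal dashboard, etc.).
  console.log(`Billable events on ${date}: ${billableEvents}`);
  return billableEvents;
}
```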

Honestly, we understand that there are tons of specific complexities and needs that vary from enterprise to enterprise, and it’s challenging to describe how we address each of these in a blog post such as this. It would be a very long post.

If you’re curious about how Statsig will cater to your specific needs, don’t hesitate to request a demo below, and we can use the time to chat about the challenges you’re facing.

Request a demo

Statsig's experts are on standby to answer any questions about experimentation at your organization.

