Design an A/B Experimentation Platform
Advanced · System Design · 45 min
Tags: Experimentation, Data, Statistics, Governance
Interview Question
Design an experimentation platform supporting randomization, exposure logging, guardrails, sequential testing, and per-metric analysis at scale.
Key Points to Cover
- Assignment services: consistent bucketing, exposure logging, holdouts
- Metric definitions, event pipelines, and late data handling
- Statistics: sequential testing, variance reduction (e.g., CUPED; see the sketch after this list), and guardrail metrics
- Experiment governance: ethics, kill-switches, mutual exclusion
- Results UI, diagnostics, and reproducibility
- Multi-tenant isolation and audit trails
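
To illustrate the variance-reduction point above, here is a minimal CUPED sketch in Python. It assumes a pre-experiment covariate (e.g., pre-period revenue) is available per unit; the function name, variable names, and simulated data are hypothetical.

```python
import numpy as np

def cuped_adjust(metric: np.ndarray, covariate: np.ndarray) -> np.ndarray:
    """CUPED: adjust the in-experiment metric using a pre-experiment covariate.

    theta minimizes the variance of the adjusted metric; the adjustment leaves
    the treatment effect unbiased because the covariate predates assignment.
    """
    theta = np.cov(metric, covariate, ddof=1)[0, 1] / np.var(covariate, ddof=1)
    return metric - theta * (covariate - covariate.mean())

# Hypothetical usage: pre-period revenue as covariate for in-experiment revenue
rng = np.random.default_rng(0)
pre = rng.gamma(2.0, 10.0, size=10_000)           # pre-experiment revenue per user
post = 0.8 * pre + rng.normal(0, 5, size=10_000)  # correlated in-experiment metric
adjusted = cuped_adjust(post, pre)
print(post.var(), adjusted.var())  # adjusted variance should be noticeably lower
```

The stronger the correlation between the covariate and the metric, the larger the variance reduction, which translates directly into shorter experiments for the same power.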
Evaluation Rubric
Correct bucketing & exposure logging: 25%
Reliable metric computation & latency: 25%
Sound statistical methods & guardrails: 25%
Governance, safety, and UX: 25%
Hints
- 💡Use stable hashing of the experiment ID plus a durable unit ID so bucketing stays consistent across services (see the sketch below).
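
A minimal sketch of deterministic bucketing, assuming SHA-256 over the experiment ID and a stable unit ID; the function name, arguments, and example IDs are illustrative.

```python
import hashlib

def assign_variant(experiment_id: str, unit_id: str,
                   variants: list[str], weights: list[float]) -> str:
    """Deterministically assign a unit (user/device) to a variant."""
    digest = hashlib.sha256(f"{experiment_id}:{unit_id}".encode()).hexdigest()
    # Map the first 8 hex chars to a point in [0, 1).
    point = int(digest[:8], 16) / 0x100000000
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if point < cumulative:
            return variant
    return variants[-1]  # guard against floating-point rounding

# Example: a 50/50 split that is stable across calls, services, and redeploys
print(assign_variant("exp_checkout_v2", "user_12345", ["control", "treatment"], [0.5, 0.5]))
```

Salting the hash with the experiment ID keeps assignments independent across experiments, so a unit's bucket in one experiment does not correlate with its bucket in another.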
Common Pitfalls to Avoid
- ⚠️Insufficiently robust hashing for consistent bucketing across different user contexts (e.g., logged in vs. anonymous).
- ⚠️Race conditions or data loss in the exposure logging pipeline under high traffic.
- ⚠️Inadequate guardrail metric selection or insufficient real-time monitoring leading to unchecked negative impacts.
- ⚠️Ignoring the statistical complexities of sequential testing (peeking), leading to premature stopping and inflated false-positive rates; an always-valid approach is sketched after this list.
- ⚠️Lack of a clear strategy for handling late-arriving data, resulting in inaccurate or outdated experiment results.
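
As referenced above, one way to make peeking safe is an always-valid p-value based on a mixture sequential probability ratio test (mSPRT). The sketch below assumes per-unit treatment-minus-control differences with a known variance sigma2 and a normal mixing prior with variance tau2; both parameters and the streaming setup are illustrative assumptions, not the only valid method.

```python
import math

def msprt_lambda(mean_diff: float, n: int, sigma2: float, tau2: float) -> float:
    """Mixture SPRT statistic for H0: true mean difference = 0.

    mean_diff: running mean of per-unit treatment-minus-control differences
    n: number of units observed so far
    sigma2: (assumed known) variance of a single difference
    tau2: variance of the normal mixing prior over the effect size
    """
    v = sigma2 + n * tau2
    return math.sqrt(sigma2 / v) * math.exp((n * n * tau2 * mean_diff ** 2) / (2 * sigma2 * v))

def always_valid_p(diffs, sigma2=1.0, tau2=0.01):
    """Stream differences and yield an always-valid p-value after each unit."""
    p, total = 1.0, 0.0
    for n, d in enumerate(diffs, start=1):
        total += d
        lam = msprt_lambda(total / n, n, sigma2, tau2)
        p = min(p, 1.0 / lam)  # monotone non-increasing, valid at any stopping time
        yield p
```

Because the p-value is valid at any stopping time, the experiment can be stopped the moment it crosses the significance threshold without inflating the false-positive rate.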
Potential Follow-up Questions
- ❓How do you avoid p-hacking?
- ❓How would you support multi-armed bandits? (A Thompson sampling sketch follows.)
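
For the bandit follow-up, a common approach is Thompson sampling. Below is a minimal sketch for Bernoulli rewards (e.g., conversions); the conversion rates are hypothetical and used only to simulate traffic.

```python
import random

def thompson_pick(successes: list[int], failures: list[int]) -> int:
    """Pick an arm by sampling from each arm's Beta posterior (Bernoulli rewards)."""
    samples = [random.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# Hypothetical simulation: three variants with unknown conversion rates
true_rates = [0.10, 0.12, 0.11]
wins, losses = [0, 0, 0], [0, 0, 0]
for _ in range(10_000):
    arm = thompson_pick(wins, losses)
    if random.random() < true_rates[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1
print(wins, losses)  # traffic should concentrate on the best-performing arm
```

In an interview answer, note the trade-off: bandits shift traffic toward winners faster, but adaptive assignment complicates unbiased effect estimation and interacts with exposure logging and guardrails.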