The term "Bayesian" refers to a statistical approach that is based on Bayes' theorem, named after the 18th-century mathematician Thomas Bayes. This approach differs from classical (frequentist) statistics in its interpretation of probability. Bayesian statistics allows for the incorporation of prior knowledge or belief into the statistical analysis, which is updated with new data to provide a posterior probability.
In the context of Bayesian experiments, the Bayesian approach lets you start with some preconceived notion of what the probability is; since you don't have perfect information about it, you represent that belief as a probability distribution. This distribution is parameterized by a small number of parameters (for example, a beta distribution with parameters α and β). When you collect new data, you gain new information and can update your belief about what the probability is. This updating process follows Bayes' rule.
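To make this concrete, here is a minimal sketch of a conjugate Beta-Binomial update; the prior parameters and observed counts are chosen purely for illustration:

```python
from scipy import stats

# Assumed prior: Beta(2, 8) encodes a belief that the rate is around 0.2.
alpha_prior, beta_prior = 2, 8

# Illustrative new data: 30 conversions out of 120 trials.
conversions, trials = 30, 120

# Bayes' rule with a conjugate prior: the posterior is again a Beta distribution,
# with the counts simply added to the prior parameters.
alpha_post = alpha_prior + conversions
beta_post = beta_prior + (trials - conversions)

posterior = stats.beta(alpha_post, beta_post)
print(f"Posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```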
Bayesian A/B testing applies this statistical framework to hypothesis testing. It is often chosen for its simplicity and intuitiveness: practitioners do not have to deal with the complexities of p-values, null hypotheses, or unintuitive confidence intervals. Instead, you can make statements like: "We expect a lift in our metric of 10% +/- 5%. Overall, there is an 80% chance that the test variant beats the control variant, and if we roll it out, there is a risk that we degrade our metric by an expected amount of 0.5."
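The quantities in such a statement can be estimated by sampling from the two posteriors. The sketch below assumes Beta posteriors and illustrative conversion counts, not results from a real experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posteriors after Beta-Binomial updates with a flat Beta(1, 1) prior:
# control saw 60/1000 conversions, the test variant saw 75/1000.
control = rng.beta(60 + 1, 940 + 1, size=100_000)
variant = rng.beta(75 + 1, 925 + 1, size=100_000)

# Relative lift of the variant over control, per posterior draw.
lift = (variant - control) / control
prob_variant_wins = (variant > control).mean()

# Expected loss ("risk"): how much we expect to degrade the metric
# if we ship the variant and it is actually worse than control.
expected_loss = np.maximum(control - variant, 0).mean()

print(f"Expected relative lift: {lift.mean():.1%} +/- {lift.std():.1%}")
print(f"P(variant beats control): {prob_variant_wins:.0%}")
print(f"Expected loss if we ship the variant: {expected_loss:.4f}")
```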
However, building a useful and robust prior depends on many factors, including how much information you have, the experimental context, and the assumptions you've made. Such a prior is challenging to construct, and the approach is often criticized for introducing bias into experimental results.
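One way to see this sensitivity is to compare posteriors under different priors on the same hypothetical data. With a small sample, a strongly informative prior can pull the posterior well away from the observed rate:

```python
from scipy import stats

# Illustrative data: 12 conversions out of 40 trials (observed rate 0.30).
conversions, trials = 12, 40

# Three assumed priors encoding increasing amounts of prior belief near 0.2.
priors = {"flat": (1, 1), "weak": (2, 8), "strong": (20, 80)}

for name, (a, b) in priors.items():
    post = stats.beta(a + conversions, b + trials - conversions)
    print(f"{name:>6} prior -> posterior mean {post.mean():.3f}")
```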