3 Principles of Experimental Design: Statistical Principles to Consider


In this blog post, we will discuss statistical principles in experimental design and regression experimental design. There are three principles of experimental design: statistical power, statistical significance, and statistical inference. Statistical power is the probability that a statistical test will detect an effect when one really exists; statistical significance is the probability that a test's results were not due to chance; and statistical inference assesses how likely future data are to support or refute the null hypothesis. Together, these give you a way to judge how likely your statistical tests are to give you accurate results. For a simple two-sided z-test of a mean difference, statistical power can be written as:

Power = Φ( E·√n / σ − z )

  1. Here, E represents the expected mean difference under the alternative hypothesis H, σ is the standard deviation, n is the sample size, Φ is the standard normal distribution function, and z is the critical value of the test at the chosen significance level. The formula tells us that statistical power increases with sample size but decreases with variability (standard deviation). By convention, an experimental design should have at least an 80% chance of detecting changes in a response variable if they are true effects of the treatment variables – otherwise, we could end up looking for something that isn't actually there, or miss something important because the statistical power of the study wasn't sufficient. One benefit of this type of statistical analysis is that it quantifies what sample size or statistical power your experiment needs in order to detect an effect if one actually exists, and how much statistical noise (variability) you are willing to put up with given your resources (a worked sketch appears after this list).
  2. A more common method of reporting results is regression experimental design, where treatment effects are estimated as coefficients in a regression on x variables representing the treatment factors. The key assumption behind these designs is that each factor has a purely additive effect on the response variable – no interactions between two or more factors, which would complicate predictions about treatment responses without additional information from experiments specifically designed to test those interactions (see the regression sketch after this list).
  3. Another statistical approach to designing experiments is the factorial design, in which two or more factors are varied simultaneously to study their effects on an output response, and a statistical model is then fit to explain how each factor affects the response variable. The main assumption behind this design type is that all possible combinations of factor levels are run, which may not be feasible when several factors affect the response. When it is feasible, however, it provides valuable information about how the factors influence the response individually and jointly, by estimating both main effects and interaction terms at different settings of the other independent variables (a small enumeration sketch follows this list).
  4. A fourth method commonly employed during experimental planning is the screening design, which lets experimenters test larger numbers of treatments than they would typically be able to test under other designs, with the goal of identifying the few factors that matter most (a fractional-factorial sketch appears below).
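As a rough illustration of the power formula above, here is a minimal sketch in Python, assuming a two-sided one-sample z-test (the function name and the example numbers are my own choices, not from the post):

```python
from scipy.stats import norm

def power_two_sided_z(E, sigma, n, alpha=0.05):
    """Power of a two-sided z-test when the true mean difference is E."""
    z_crit = norm.ppf(1 - alpha / 2)      # critical value for the chosen alpha
    shift = E * n ** 0.5 / sigma          # standardized effect: E * sqrt(n) / sigma
    # Probability the test statistic lands in the rejection region under H
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

# Power rises with n and falls with sigma, as the formula predicts:
print(round(power_two_sided_z(E=0.5, sigma=1.0, n=20), 3))  # smaller sample
print(round(power_two_sided_z(E=0.5, sigma=1.0, n=40), 3))  # larger sample -> higher power
print(round(power_two_sided_z(E=0.5, sigma=2.0, n=40), 3))  # more noise -> lower power
```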
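For the regression formulation in item 2, a minimal sketch using simulated 0/1 treatment indicators (the variable names and effect sizes here are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
# Two treatment factors coded as 0/1 indicator variables
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
# Additive model: each factor shifts the response independently, no interaction
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
fit = smf.ols("y ~ x1 + x2", data=df).fit()  # treatment effects as regression slopes
print(fit.params)                            # estimated additive treatment effects
```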
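For the factorial design in item 3, a small sketch that enumerates every factor-level combination (the factor names and levels are hypothetical):

```python
from itertools import product

# Full factorial: every combination of levels for two factors
temperature = [150, 175, 200]  # factor A, three levels (hypothetical units)
pressure = [1.0, 2.0]          # factor B, two levels

runs = list(product(temperature, pressure))
for i, (temp, pres) in enumerate(runs, 1):
    print(f"run {i}: temperature={temp}, pressure={pres}")
# 3 x 2 = 6 runs cover all combinations, so both main effects and the
# temperature-by-pressure interaction can be estimated.
```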
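For the screening designs in item 4, one common choice is a two-level fractional factorial. Here is a hand-built 2^(3-1) sketch; the generator C = A*B is a standard textbook choice, not something specified in the post:

```python
import numpy as np

# 2^(3-1) fractional factorial: 4 runs screen 3 two-level factors.
# Full factorial in A and B, with C aliased to the A-B interaction (C = A*B).
A = np.array([-1, -1, 1, 1])
B = np.array([-1, 1, -1, 1])
C = A * B  # generator: C is confounded with the A-B interaction

design = np.column_stack([A, B, C])
print(design)
# Each column is balanced (+1/-1), so the main effects of A, B, and C can be
# estimated from only 4 runs -- at the cost of aliasing with interactions.
```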

Statistical testing is then conducted on the data from each factor combination, examining whether the results support the hypotheses about main effects and interaction terms (a two-way ANOVA sketch follows).
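To make that testing step concrete, a minimal sketch using a two-way ANOVA on simulated factorial data (the factor names and effect sizes are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
levels_a, levels_b, reps = ["low", "high"], ["low", "high"], 10
rows = []
for a in levels_a:
    for b in levels_b:
        # Simulated additive effects; change these to explore interactions
        effect = (a == "high") * 1.0 + (b == "high") * 0.5
        for y in rng.normal(effect, 1.0, reps):
            rows.append({"A": a, "B": b, "y": y})
df = pd.DataFrame(rows)

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()  # main effects + interaction
print(sm.stats.anova_lm(model, typ=2))             # F-tests for each term
```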