In studies conducted with too small a sample size, clinically meaningful differences may go undetected, while in studies conducted with too large a sample size, clinically unimportant differences may turn out to be statistically significant. To avoid both problems, scientific studies calculate the minimum required sample size in advance, guided by previously published results. For comparative studies, this calculation is also called power analysis.
There are three important factors that determine the minimum sample size required in a study (a short calculation sketch follows the list):
Effect size: The magnitude of the effect the study aims to detect, usually estimated from the literature. Examples used in different study designs include an expected prevalence, the difference between two means, the difference between two proportions, a correlation coefficient, and the area under the ROC curve.
Type I error (α): The probability of finding a difference that does not actually exist. The P value from each comparison is judged against this threshold; if results are to be declared significant at the 0.05 level, α is set to 0.05.
Power (1-β): The β (Type II) error is the probability of failing to detect a difference that actually exists. If this probability is kept at 20%, the power of the test will be 80%.
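As a minimal sketch of how these three inputs combine, the snippet below uses Python's statsmodels package (an illustrative choice, not the tool used in the Cool Sample Size App) to compute the per-group sample size for comparing two means. The effect size (Cohen's d = 0.5), α = 0.05, and 80% power are assumed example values, not figures taken from this article.

```python
# Minimal sketch: per-group sample size for a two-sample t test.
# Effect size, alpha and power below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # expected standardized difference between the two means
    alpha=0.05,       # Type I error
    power=0.80,       # 1 - beta (Type II error)
)
print(f"Required sample size per group: {round(n_per_group)}")
```

With these example inputs, the calculation gives roughly 64 participants per group; a smaller expected effect size or a higher desired power would increase that number.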
Cool Sample Size App