Monday, January 17, 2022

TYPE I AND TYPE II ERRORS IN RESEARCH

In statistical test theory, the notion of a statistical error is an integral part of hypothesis testing. Hypothesis testing is the process of deciding whether the variation between two sample distributions can be explained by random chance alone. Because the decision is based on probabilities, there is always a chance of reaching an incorrect conclusion. Two types of error are possible: Type I and Type II. Both are essential components of statistical hypothesis testing.
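To make this concrete, here is a minimal sketch in Python of a two-sample test. The group names, sample sizes, means, and significance level are illustrative assumptions, not values from this post; the test used is an independent two-sample t-test via scipy.stats.ttest_ind, whose null hypothesis is that the two population means are equal.

import numpy as np
from scipy import stats

# Hypothetical data: two groups of 30 observations each (illustrative values).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=50.0, scale=10.0, size=30)   # e.g. a control group
group_b = rng.normal(loc=55.0, scale=10.0, size=30)   # e.g. a treatment group

# Independent two-sample t-test: H0 says the two population means are equal.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05  # chosen level of significance (illustrative)
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0; the difference is unlikely to be chance alone")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0; the difference may be due to chance")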
Type I Error 
Also known as a “false positive”: the error of rejecting the null hypothesis when it is actually true. In other words, it is the error of accepting the alternative hypothesis (the real hypothesis of interest) when the results can be attributed to chance. Plainly speaking, it occurs when we observe a difference where in truth there is none (or, more precisely, no statistically significant difference). The probability of making a Type I error in a test with rejection region R is P(R | H0 is true). The Type I error rate is denoted by α (alpha), also known as the α error or the level of significance of the test.
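The statement α = P(R | H0 is true) can be checked by simulation. In the sketch below (same assumed t-test setup as above, with an illustrative α of 0.05), both samples are drawn from the same population, so H0 is true by construction and every rejection is a false positive; the observed rejection rate should land near α.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05        # illustrative level of significance
n_trials = 10_000   # number of simulated experiments

false_positives = 0
for _ in range(n_trials):
    # Both samples come from the SAME population, so H0 is true by construction.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:               # rejecting H0 here is a Type I error
        false_positives += 1

# The estimate should be close to alpha = 0.05.
print("Estimated P(reject H0 | H0 true):", false_positives / n_trials)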
Type II Error 
Also known as a “false negative”: the error of failing to reject the null hypothesis when the alternative hypothesis is the true state of nature. In other words, it is the error of failing to detect a real effect, typically because the test does not have adequate power. Plainly speaking, it occurs when we fail to observe a difference where in truth there is one. The probability of making a Type II error in a test with rejection region R is 1 − P(R | Ha is true), and the power of the test is P(R | Ha is true). The Type II error rate is denoted by β (beta), also known as the β error; power equals 1 − β.
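β and power can likewise be estimated by simulation. In the sketch below the two populations genuinely differ (the mean shift of 0.5 and sample size of 30 are arbitrary, illustrative choices), so every failure to reject H0 is a Type II error; β is the fraction of non-rejections and power is 1 − β.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha = 0.05        # illustrative level of significance
n_trials = 10_000   # number of simulated experiments

misses = 0
for _ in range(n_trials):
    # The populations differ by a true mean shift of 0.5 (arbitrary effect size),
    # so the alternative hypothesis is the true state of nature.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.5, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p >= alpha:              # failing to reject H0 here is a Type II error
        misses += 1

beta = misses / n_trials
print("Estimated Type II error rate (beta):", beta)
print("Estimated power (1 - beta):", 1 - beta)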
