Noun
(statistics) A measure of how likely it is to draw a false conclusion in a statistical test, when the results are really just random variations.
(statistics) The probability, usually expressed as a percentage, of making a decision to reject the null hypothesis when the null hypothesis is actually true; the probability of making a type I error.
Used other than figuratively or idiomatically: see significance, level.
Source: en.wiktionary.org
Conservative test: A test is conservative if, when constructed for a given nominal significance level, the true probability of incorrectly rejecting the null hypothesis is never greater than the nominal level. Source: Internet
By construction, hypothesis testing limits the rate of Type I errors (false positives) to the chosen significance level. Source: Internet
If the p-value is not less than the required significance level (equivalently, if the observed test statistic is outside the critical region), then the null hypothesis is not rejected at that level of significance. Source: Internet
Interpretation If the p-value is less than the required significance level (equivalently, if the observed test statistic is in the critical region), then we say the null hypothesis is rejected at the given level of significance. Source: Internet
One response involves going beyond reporting only the significance level and also including the p-value when reporting whether a hypothesis is rejected or not rejected. Source: Internet
While in principle the acceptable level of statistical significance may be subject to debate, the p-value is the smallest significance level that allows the test to reject the null hypothesis. Source: Internet
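To make the comparison between a p-value and a significance level concrete, here is a minimal sketch of a one-sided exact binomial test in Python. The data (60 heads in 100 coin flips) and the choice of alpha = 0.05 are illustrative assumptions, not taken from the sources above.

```python
from math import comb

def binomial_p_value(n, k, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p) under the null."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

alpha = 0.05        # chosen significance level (bound on the Type I error rate)
n, k = 100, 60      # hypothetical data: 60 heads observed in 100 flips

p_value = binomial_p_value(n, k)

# Reject the null hypothesis (fair coin) only when p_value < alpha;
# otherwise the null hypothesis is not rejected at this level.
reject = p_value < alpha
print(f"p-value = {p_value:.4f}, reject H0 at alpha = {alpha}: {reject}")
```

Note that lowering alpha to 0.01 with the same data would change the decision, illustrating that the p-value is the smallest significance level at which the test rejects.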