Multiple comparisons
In statistics, the multiple comparisons or multiple testing problem occurs when one considers a set of statistical inferences simultaneously. Errors in inference, such as incorrectly rejected null hypotheses, become more likely as the number of simultaneous tests grows.
http://en.wikipedia.org/wiki/Multiple_testing
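To make the problem concrete, a short sketch (values not from the source): if each of m independent tests is run at significance level α, the probability of at least one false positive across the family is 1 − (1 − α)^m, which grows quickly with m.

```python
# Probability of at least one false positive (familywise error rate)
# across m independent tests, each run at significance level alpha.
alpha = 0.05
for m in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** m
    print(f"m = {m:3d}: P(at least one false positive) = {fwer:.3f}")
```

With α = 0.05, twenty independent tests already give roughly a 64% chance of at least one false positive, which is why corrections for multiple comparisons are needed.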
False positives
Type I error, also known as an error of the first kind, an α error, or a false positive, is the error of rejecting a true null hypothesis (H0). An example would be a test that indicates a woman is pregnant (H0: she is not) when in reality she is not, or that tells a patient he has a disease (H0: he is healthy) when in fact he is healthy. Type I error can be viewed as the error of excessive credulity. In terms of folk tales, an investigator may be "crying wolf" (raising a false alarm) when no wolf is in sight (H0: there is no wolf).
http://en.wikipedia.org/wiki/False_positives#Type_I_error
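A small simulation (an illustration, not from the source) shows how Type I errors arise at the chosen significance level even when every null hypothesis is true, using the fact that a well-calibrated p-value is uniform on [0, 1] under the null:

```python
import random

# Simulate many tests in which the null hypothesis is true in every
# case, and count how often it is nonetheless (wrongly) rejected.
random.seed(0)
alpha = 0.05
n_tests = 10_000

# Under a true null, a calibrated p-value is uniform on [0, 1],
# so we can draw the p-values directly.
p_values = [random.random() for _ in range(n_tests)]
false_positives = sum(p < alpha for p in p_values)

print(f"{false_positives} false positives out of {n_tests} true nulls "
      f"({false_positives / n_tests:.1%})")
```

The observed false-positive rate comes out close to α = 5%, as expected: the significance level is precisely the Type I error rate a single test tolerates.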
False discovery rate
False discovery rate (FDR) control is a statistical method used in multiple hypothesis testing to correct for multiple comparisons. Among the rejected hypotheses, FDR control bounds the expected proportion of incorrectly rejected null hypotheses (Type I errors). It is less conservative than familywise error rate (FWER) control and therefore has greater power, at the cost of a higher likelihood of Type I errors.
http://en.wikipedia.org/wiki/False_discovery_rate
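The classic procedure for FDR control is the Benjamini-Hochberg step-up method: sort the m p-values in ascending order, find the largest rank k with p_(k) ≤ (k/m)·q, and reject the k smallest. A minimal sketch (the example p-values are illustrative, not from the source):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of the
    hypotheses rejected while controlling the FDR at level q."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

# Ten p-values: a couple of likely real effects plus larger values.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, q=0.05))  # → [0, 1]
```

Note the "step-up" character: each p-value is compared to its own threshold (k/m)·q rather than to a single cutoff, which is what makes the procedure less conservative than a Bonferroni-style FWER correction.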