First and foremost, I am going to provide you with a brief description of the term validity, because if you are anything like me, the knowledge acquired from first-year statistics packed its bags and left over the summer.
Validity is a criterion for evaluating the quality of any measurement procedure. The validity of a measurement procedure depends on how effectively the measurement process measures the variable it claims to measure. Validity is particularly important when using an operational definition to measure a hypothetical construct. Operational definitions help us convert an abstract variable into a concrete entity. For example, we are unable to measure intelligence directly; we cannot simply measure it with a ruler. Instead, our best attempt at measuring intelligence is to measure intelligent behaviour, which we can do by giving participants IQ tests. IQ tests measure intelligent behaviour by measuring responses to questions. How valid is this method of measuring intelligence? Well, there are always concerns about the quality of operational definitions and the measurements they produce. Hypothetical constructs are not physical entities and cannot be measured directly; therefore, the validity of measurements produced by operational definitions will always be under scrutiny.
Fortunately, we can assess the validity of a measurement procedure. Face, concurrent, predictive, construct, convergent and divergent validity are the six most commonly used forms of validity, and each helps us assess the validity of a measurement (Gravetter & Forzano, 2009).
Your findings may present themselves as valid when the measurement procedure superficially appears to measure what it claims to measure; this is face validity. Face validity is the least scientific form of validity, but it is simple and time-efficient: if there is something drastically wrong with your measurement procedure, you will most probably be able to detect it on face value alone. However, it would be very unscientific to assume that your measurement of a construct was valid just because it appeared to be.

You may be inclined to think that your findings are valid if the scores you obtain using your measurement procedure are concurrent with the scores obtained using an already established procedure; this is concurrent validity. However, obtaining scores concurrent with an established procedure does not prove that your procedure measured the construct you intended to. The two measurement procedures may both have measured the same variable, but it may not have been the variable either procedure wanted to measure.

You may also deduce that your findings are valid when the scores you obtain from your measure accurately predict behaviour according to a theory; this is a demonstration of predictive validity. If the scores from your measurement procedure behave in exactly the way the theory of the construct predicts, your findings have demonstrated construct validity, according to Gravetter and Forzano (2009). Convergent validity is demonstrated by a strong relationship between the scores obtained from two different methods of measuring the same construct. And finally, divergent validity involves demonstrating that you are measuring one specific construct and not combining two different constructs in the same measurement process.
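To make the idea of convergent validity a little more concrete, here is a minimal sketch of how you might quantify it in practice: a strong Pearson correlation between scores from two different measures of the same construct is typically taken as convergent evidence. The participant scores and the two measures below are entirely invented for illustration, not taken from any real study.

```python
# Hypothetical illustration of convergent validity: correlate scores
# from two different (invented) measures of the same construct.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented data: ten participants assessed with two hypothetical
# intelligence measures that use different response formats.
measure_a = [98, 105, 112, 90, 120, 101, 95, 110, 88, 115]
measure_b = [30, 34, 37, 28, 41, 33, 29, 36, 27, 39]

r = pearson_r(measure_a, measure_b)
print(f"Correlation between the two measures: r = {r:.2f}")
```

A value of r close to 1 would be read as convergent evidence that the two procedures are tapping the same construct; conversely, for divergent validity you would hope for a *weak* correlation between your measure and a measure of a different construct.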
I have now presented you with six ways of assessing the validity of your measurement: face validity, concurrent validity, predictive validity, construct validity, convergent validity and divergent validity. Some forms of validity are slightly dubious; for example, as I mentioned earlier, you cannot infer that your results are valid just because they superficially appear valid. In fact, you can never prove that your findings are valid when measuring a hypothetical construct. However, it is fair to assume that your findings are valid if you have demonstrated the six aforementioned forms of validity.