How to Write a Reliability Report

Reliability reports communicate how consistently a test produces the same results across repeated trials. Reliability equates to stability: the fewer the errors a test produces over time, the more reliable it is. Reporting these results means reporting each test of reliability and its outcome. Reliability can be measured using several different tests: through repeated administrations, or "test-retest" reliability; using a "parallel forms" measure that compares similar versions of the test; through inter-rater reliability, or the degree of agreement among the test's raters; and through internal consistency, which measures how consistently the individual test items yield results.

Instructions

    • 1

      Report the test-retest correlation coefficients, that is, the degree to which scores correlated, or did not, when the test was administered repeatedly. For example, correlation coefficients on an intelligence test might range from .84 to .95 on the aptitude scale and from .83 to .97 on the verbal scale when a similar population sample is tested over repeated trials. Coefficients this high would communicate consistency among the test's subscales.
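
      A minimal Python sketch of the underlying calculation, using NumPy and purely hypothetical scores for ten examinees tested twice, might look like this:

      ```python
      import numpy as np

      # Hypothetical scores for ten examinees who took the same test twice.
      time1 = np.array([85, 92, 78, 64, 88, 73, 95, 81, 69, 77])
      time2 = np.array([83, 94, 80, 61, 90, 70, 96, 84, 66, 75])

      # The Pearson correlation between the two administrations is the
      # test-retest reliability coefficient.
      r = np.corrcoef(time1, time2)[0, 1]
      print(f"Test-retest reliability: r = {r:.2f}")
      ```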

    • 2

      Plot the repeated tests and their range of results on a scatter plot or another diagram to show the number of repeated trials and how closely their outcomes cluster around the actual result, or "true value." Reliability tests determine how stable a measurement instrument is or is likely to be, and plotting the repeated trials makes that stability visible at a glance.
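
      A sketch of such a plot, assuming Matplotlib and reusing the hypothetical paired scores from the previous step:

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical paired scores from two administrations of the same test.
      time1 = np.array([85, 92, 78, 64, 88, 73, 95, 81, 69, 77])
      time2 = np.array([83, 94, 80, 61, 90, 70, 96, 84, 66, 75])

      fig, ax = plt.subplots()
      ax.scatter(time1, time2)

      # A dashed identity line marks perfect agreement between trials;
      # points clustered near it indicate a stable instrument.
      lims = [60, 100]
      ax.plot(lims, lims, linestyle="--", color="gray")
      ax.set_xlabel("First administration score")
      ax.set_ylabel("Second administration score")
      ax.set_title("Test-retest agreement")
      fig.savefig("test_retest_scatter.png")
      ```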

    • 3

      Report the inter-rater reliability, or the measurement of agreement among raters. Examine how accurately and consistently different scorers report the same result using the same instrument on the same or a similar population sample. More consistency means less error; less consistency leaves more room for error and lowers reliability. Plot the ratings on a simple matrix that lists each rater by name or number alongside their "before training" and "after training" ratings, so that any change in agreement once the raters were trained, or calibrated, is easy to see.
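
      One common index of this agreement, not named in the steps above but widely reported, is Cohen's kappa, which corrects raw agreement for chance. A sketch with two hypothetical raters scoring fifteen responses as pass (1) or fail (0):

      ```python
      import numpy as np

      # Hypothetical pass/fail ratings from two raters on the same responses.
      rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0])
      rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0])

      # Observed agreement: fraction of responses both raters scored identically.
      p_observed = np.mean(rater_a == rater_b)

      # Agreement expected by chance, from each rater's marginal proportions.
      p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in (0, 1))

      # Cohen's kappa corrects observed agreement for chance agreement.
      kappa = (p_observed - p_chance) / (1 - p_chance)
      print(f"Observed agreement: {p_observed:.2f}, Cohen's kappa: {kappa:.2f}")
      ```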

    • 4

      Report the internal consistency, or the measurement of how consistent the test items are with one another. The less consistent or related the items are, the more room there is for error. If you're dealing with a standardized test, for example, report a coefficient for each subscale, that is, for each group of related test items, to determine the consistency within it. A mathematics test might have numbers, measurement, geometry and statistics as related subscales.
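
      The standard index of internal consistency is Cronbach's alpha. A sketch with hypothetical item scores for one subscale, where rows are examinees and columns are items:

      ```python
      import numpy as np

      # Hypothetical scores of six examinees on four related items.
      items = np.array([
          [4, 5, 4, 3],
          [2, 3, 2, 2],
          [5, 5, 4, 5],
          [3, 3, 3, 4],
          [4, 4, 5, 4],
          [1, 2, 2, 1],
      ])

      k = items.shape[1]                          # number of items
      item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
      total_var = items.sum(axis=1).var(ddof=1)   # variance of examinees' total scores

      # Cronbach's alpha: high values mean the items vary together,
      # i.e. they measure the same underlying construct consistently.
      alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
      print(f"Cronbach's alpha: {alpha:.2f}")
      ```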

    • 5

      Report and graph all errors, inconsistencies and large variances found in any of the reliability tests. This section concludes the reliability report and allows readers to ascertain the reliability of the study or the study instrument being discussed.
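
      One way to express these errors in concrete terms, assuming the hypothetical coefficients below, is the standard error of measurement, SEM = SD * sqrt(1 - r), which converts each reliability coefficient into an expected error in raw score points:

      ```python
      import numpy as np

      # Hypothetical reliability coefficients gathered in the steps above,
      # plus the standard deviation of the observed test scores.
      reliabilities = {
          "test-retest": 0.91,
          "internal consistency (alpha)": 0.88,
      }
      score_sd = 10.0

      # SEM = SD * sqrt(1 - r): lower reliability means a larger expected error.
      for name, r in reliabilities.items():
          sem = score_sd * np.sqrt(1 - r)
          print(f"{name}: r = {r:.2f}, SEM = {sem:.1f} points")
      ```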
