How is validity determined in the informal assessment process?

Validity in informal assessment, unlike in formal standardized testing, is more subjective and is determined through a holistic, contextual approach. There is no single definitive statistical measure. Instead, validity is established by gathering multiple sources of evidence that the assessment accurately measures what it intends to measure. Here's how it's determined:

1. Content Validity: This focuses on whether the assessment adequately covers the relevant content domain. For informal assessments, this means:

* Alignment with learning objectives: Does the assessment accurately reflect the knowledge, skills, and attitudes that were taught? This requires careful consideration of the specific learning goals and how the assessment tasks relate to them.

* Representativeness of the content: Does the assessment sample a sufficient range of the material covered, avoiding overemphasis on certain areas?

* Expert judgment: Feedback from teachers, colleagues, or subject matter experts can validate whether the content of the assessment is appropriate and comprehensive.

2. Criterion-Related Validity: This examines the relationship between the informal assessment and an external criterion. This can be challenging with informal assessments, but possibilities include:

* Comparison to other assessments: If the informal assessment shows a strong correlation with a more formal assessment (though be cautious about over-reliance on a single criterion), it strengthens its validity.

* Performance in subsequent tasks: Does the student's performance on the informal assessment predict their success on related future tasks or projects?

3. Construct Validity: This refers to how well the assessment measures the underlying theoretical construct being assessed (e.g., problem-solving ability, creativity). For informal assessments, this is established by:

* Multiple sources of evidence: Combining observations from different contexts (classwork, projects, discussions) provides a richer understanding of the student's ability.

* Triangulation: Using different assessment methods (e.g., observation, portfolio review, interview) can increase confidence in the findings.

* Logical argument: Clearly articulating the connection between the assessment tasks and the construct being measured is essential.
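One simple way to operationalize the triangulation idea above is to standardize each method's scores and average them per student; agreement across methods strengthens confidence in the underlying construct. The method names and scores below are hypothetical, and z-score averaging is just one possible combining rule.

```python
# Illustrative sketch of triangulation: standardize scores from several
# assessment methods (which use different scales) and average per student.
# All method names and scores are hypothetical example data.
from statistics import mean, stdev

# Scores for the same four students, one list per assessment method.
methods = {
    "observation": [3, 4, 2, 5],    # 1-5 rubric
    "portfolio": [70, 85, 60, 90],  # percentage
    "interview": [6, 8, 5, 9],      # 1-10 rating
}

def zscores(xs):
    """Convert raw scores to z-scores so different scales are comparable."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

# Average each student's z-score across the three methods.
standardised = [zscores(xs) for xs in methods.values()]
composites = [mean(student) for student in zip(*standardised)]
print([round(c, 2) for c in composites])
```

When all methods rank a student similarly, the composite gives a stable picture; large disagreement between methods is itself useful evidence that the construct is not being measured consistently.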

4. Face Validity: While not a strong form of validity on its own, it matters for informal assessment. This concerns whether the assessment *appears* to measure what it intends to. An assignment that strikes students as irrelevant or nonsensical will have poor face validity, even if it is technically sound in other respects. Face validity is typically judged by students and teachers alike.

In summary: Validity in informal assessment is determined through a process of ongoing evaluation and judgment, relying on a variety of evidence to support the claim that the assessment truly measures what it's designed to measure within its specific context. It emphasizes professional judgment, alignment with learning goals, and the triangulation of different data sources. It's less about precise statistical calculations and more about building a coherent picture of student learning.
