Alternate assessment is moving more firmly into a standards-based accountability
world, due in large part to the No Child Left Behind Act of 2001 (NCLB)
and the 2004 reauthorization of IDEA (Quenemoen, Rigney, and Thurlow, 2002).
The NCLB standards and assessment peer review process increased the requirements
for documenting the technical quality of all assessments, but the biggest shift
was for alternate assessments based on alternate achievement standards (AA-AAS).
The type of technical documentation necessary to fulfill the peer review requirements
for regular education assessments had never previously been expected of AA-AAS
developers. Additionally, the alternate assessment systems in many states
are now being reviewed by the states’ technical advisory committees (TAC).
Many of these traditionally trained measurement experts justifiably expect substantial
documentation of the psychometric worth of the AA-AAS before considering them
legitimate assessment activities. Building a convincing case to
support the technical adequacy of any large-scale assessment is a challenging undertaking,
but doing so for AA-AAS has been daunting at both a conceptual
and operational level.
The recently completed New Hampshire Enhanced Assessment Initiative (NHEAI)
and the currently funded National Alternate Assessment Center (NAAC),
particularly Goal 1, were very successful in developing a framework, conceptual papers,
and practical tools to assist states and their organizational partners in documenting
the technical quality of alternate assessments. The Validity GSEG
Consortium will work with five states—at various stages of system “maturity”—to
begin the task of building validity arguments for their AA-AAS. Haertel (1999) reminded
us of the importance of constructing a validity argument rather than simply conducting
a variety of studies. He noted that individual pieces of evidence do not, by themselves,
make an assessment system valid or invalid; it is only by weaving these pieces of evidence
together into a coherent argument that we can judge the validity of the assessment program.
We also intend to borrow from Ryan's (2002) approach for organizing and collecting
validity evidence within the context of high-stakes accountability systems to assist
states in weaving these study results into a defensible validity argument. Together
with the participating states, we will convene a panel of measurement and special
education experts to guide states in developing a validity evaluation plan,
designing specific studies, and constructing a validity argument. Each state's validity
plan will be based on the maturity of its system, the assessment format, and unique
factors related to assessing this diverse population.
Four overarching goals guide an iterative process of stakeholder involvement and
expert review in these validity evaluations. The primary goals of this GSEG
Consortium are to:
- demonstrate high-quality validity evaluation models through our partnerships
with states,
- provide models of validity-based technical documentation for AA-AAS
and eventually for AA-MAS and general assessments,
- add to the growing research base on high-quality, technically sound AA-AAS
to provide technical assistance to states as they conduct their validity
evaluation studies, and
- provide a range of research-to-practice products that explicate the process and
results.