Early Learning Frameworks

Desired Results Developmental Profile (DRDP)

For Head Start and State-Funded Programs


DRDP Overview

The Desired Results Developmental Profile (DRDP) is a developmental continuum from early infancy to kindergarten entry. It is a formative assessment instrument used to inform instruction and program development, and it aligns with a number of early learning frameworks, including the ELOF and the Pre-K Common Core State Standards.

An HSELOF-to-DRDP alignment is available here.

DRDP is accepted by Head Start and is being used in California, Minnesota, Missouri, and other states as the assessment instrument for state-funded programs.

Many teachers in CA are using a learning story pedagogy – creating meaningful learning experiences using stories to focus on a child’s uniqueness, parent collaboration, and next steps – and then linking to DRDP outcomes as the last step to meet state and Head Start requirements. This process is now supported in Educa.

See how to rate using the DRDP in Educa

DRDP Rating 101

Essential vs Fundamental vs Comprehensive View

There are now three versions of the DRDP (2015):

  1. Essential View – 29 measures – Infant/Toddler and Preschool versions
  2. Fundamental View – 43 measures – Preschool only
  3. Comprehensive View – Infant/Toddler and Preschool versions

The most commonly used version is the Essential View, with the Fundamental View often required for children on IEPs. There is also a School Age version. All versions are available in English and in Spanish.

As of February 2020, Educa supports the DRDP Essential View only; support for the DRDP Fundamental View is being added shortly.

Evidence & Rating Rules

Requirements vary by agency. Head Start currently requires an assessment to be filed for every child three times a year, with two items of evidence to support each rating. California state programs typically require two submissions a year with one item of supporting evidence.
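As a hedged illustration of these rules, the sketch below models the per-agency requirements as simple configuration and checks whether each rating carries enough supporting evidence. The agency keys, field names, and helper function are hypothetical and are not part of DRDP, Head Start, or Educa.

```python
# Hypothetical sketch: checking evidence requirements per agency, using the
# figures described above (Head Start: 3 rating periods/year, 2 evidence items
# per rating; California state programs: 2 periods/year, 1 evidence item).
from dataclasses import dataclass

AGENCY_RULES = {
    "head_start": {"periods_per_year": 3, "evidence_per_rating": 2},
    "ca_state": {"periods_per_year": 2, "evidence_per_rating": 1},
}

@dataclass
class MeasureRating:
    measure: str          # e.g. "ATL-REG 4"
    level: int            # developmental level chosen by the teacher
    evidence_count: int   # observations/learning stories linked to this rating

def missing_evidence(ratings: list[MeasureRating], agency: str) -> list[str]:
    """Return the measures whose ratings lack the evidence this agency requires."""
    required = AGENCY_RULES[agency]["evidence_per_rating"]
    return [r.measure for r in ratings if r.evidence_count < required]

# Example: the second rating falls short of Head Start's two-items-of-evidence rule.
ratings = [MeasureRating("ATL-REG 4", 5, 2), MeasureRating("LLD 1", 4, 1)]
print(missing_evidence(ratings, "head_start"))  # ['LLD 1']
```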

Most measures have seven levels, each with examples to help educators make their ratings.

All versions of the DRDP include these domains:

  1. Approaches to Learning – Self-Regulation (ATL-REG)
  2. Social and Emotional Development
  3. Language and Literacy Development
  4. English Language Development
  5. Cognition, including Math and Science
  6. Physical Development – Health

The difference between the views is the number of measures in each domain. For instance, there are seven ATL-REG measures in the Fundamental View but only four in the Essential View (ATL-REG 4 through 7); ATL-REG 1, 2, and 3 are not included in the Essential View.
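As a minimal sketch of that difference, the snippet below encodes only the ATL-REG example described above; it is illustrative and not a full listing of DRDP measures.

```python
# Sketch of the view difference described above, using only the ATL-REG example.
ATL_REG_MEASURES = {
    "fundamental": [f"ATL-REG {n}" for n in range(1, 8)],  # seven measures
    "essential": [f"ATL-REG {n}" for n in range(4, 8)],    # ATL-REG 4-7 only
}

only_in_fundamental = sorted(
    set(ATL_REG_MEASURES["fundamental"]) - set(ATL_REG_MEASURES["essential"])
)
print(only_in_fundamental)  # ['ATL-REG 1', 'ATL-REG 2', 'ATL-REG 3']
```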

DRDP ratings and learning stories in Educa

DRDP Rating Submission & Inspection

The current requirement of all agencies is to rate every measure, with supporting evidence, in every rating period.

Most authorities require ratings for each child to be submitted at the end of each period. The evidence does not need to be submitted with every measure; however, programs must be able to produce this evidence to support ratings for any period during inspections.

DRDP Online Reports

For California, child data has to be uploaded to the DRDP Online site using the DRDP rating upload template, a spreadsheet with one child per row.
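A minimal sketch of producing that one-child-per-row layout is shown below, assuming a plain CSV file; the column names and measure IDs are hypothetical, and the real DRDP Online template defines its own required columns and format.

```python
# Hypothetical sketch only: writing a one-child-per-row spreadsheet as CSV.
import csv

children = [
    {"child_id": "0001", "first_name": "Ana", "classroom": "Room A", "ATL-REG 4": 5, "LLD 1": 4},
    {"child_id": "0002", "first_name": "Leo", "classroom": "Room B", "ATL-REG 4": 6, "LLD 1": 5},
]

with open("drdp_upload.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(children[0].keys()))
    writer.writeheader()
    writer.writerows(children)  # one child per row, as the upload expects
```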

Once this raw child data is uploaded to DRDP Online, programs are able to see analytical data online (see the sketch after this list):

  • Ratings by class
  • Growth from period to period
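Here is a small, hypothetical sketch of those two report views, assuming a flat list of ratings keyed by class, period, child, and measure; the data shapes and names are illustrative only, not the DRDP Online report format.

```python
# Illustrative sketch of the two report views: average rating level by class,
# and growth from one rating period to the next.
from collections import defaultdict
from statistics import fmean

ratings = [  # (class, period, child_id, measure, level) -- made-up data
    ("Room A", 1, "0001", "LLD 1", 3), ("Room A", 2, "0001", "LLD 1", 5),
    ("Room A", 1, "0002", "LLD 1", 4), ("Room A", 2, "0002", "LLD 1", 5),
    ("Room B", 1, "0003", "LLD 1", 2), ("Room B", 2, "0003", "LLD 1", 4),
]

by_class_period = defaultdict(list)
for cls, period, _child, _measure, level in ratings:
    by_class_period[(cls, period)].append(level)

averages = {key: fmean(levels) for key, levels in by_class_period.items()}
print(averages)  # ratings by class and period
for cls in {c for c, _ in averages}:
    print(cls, "growth:", averages[(cls, 2)] - averages[(cls, 1)])  # period-to-period growth
```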


DRDP is Research-Based, Valid & Reliable

The DRDP rating gives teachers the ability to assess children’s learning along a continuum of multiple, critical developmental levels, using learning stories and other observations as evidence.

“The DRDP is a tool that consistently produces valid, reliable, and useful estimates of children’s developmental progress within each domain, using information gathered from individual measures about children’s behaviors, knowledge, and skills associated with that domain. The assessment, which reflects the child development research literature, is readily interpretable by all early childhood teachers. Measures are presented in a simple and straightforward manner that clearly demonstrates how learning and development in each area typically progresses for children from early infancy to kindergarten entry.”

(Page 32, Technical Report)

“Assessment information gained from using the DRDP is intended to support teachers with planning next steps for scaffolding young children’s learning in key areas… Teachers and administrators can use the data to gauge the status and progress of children’s development and learning in an effort to inform instructional and programming decisions in support of individuals and groups within the programs.”

(Page 12, Technical Report)

Research Background

Ten quality indicators were used in the development of the DRDP rating, guided by federal and state reporting requirements and published early childhood guidelines and psychometric standards for assessment (American Educational Research Association [AERA], American Psychological Association [APA], and National Council on Measurement in Education [NCME] 2014; National Association for the Education of Young Children [NAEYC] 2009; NRC 2008).

These 10 quality indicators were intended to ensure that the instrument adheres to the standards and recommended practices for assessment in early childhood settings and is appropriate, as well as developmentally appropriate, for assessing all young children enrolled in ELCD and SED early childhood education programs.

The 10 quality indicators that guided the development of the DRDP (2015) are listed below:

  1. Alignment (to State, Common Core State Standards, HSELOF)
  2. Acceptability (for State and Head Start)
  3. Authenticity
  4. Cultural and Linguistic Appropriateness
  5. Multifactors (evidence from multiple sources)
  6. Sensitivity
  7. Universal Design
  8. Utility
  9. Validity
  10. Reliability

Please refer to the Technical Report for the Desired Results Developmental Profile (2015) by the DRDP Collaborative for details on each item.
The Technical Report states that the DRDP instrument consistently produces valid, reliable, and useful estimates of children’s developmental progress within each domain, and that it has sufficient sensitivity to detect growth between rating periods, as confirmed in a 2013 Sensitivity Study.

DRDP Testing & Validation

Establishing the validity and reliability of an assessment instrument requires a large sample of children who represent the nation’s population. This allows teachers and administrators to be confident that the instrument will be effective in all instructional settings and for children with different backgrounds, races, ethnicities, and special needs.

The DRDP Collaborative took this approach. Here is the validation process it followed (from the Technical Report).

Timeline of DRDP research (from the Technical Report)

The studies varied in size, with the Calibration Study including 1,500 children, as per federal guidelines. All studies, including the pilot and field studies, covered at least 600 children in 50+ facilities across 15 different California counties.

The 142-page Technical Report for the DRDP goes into great detail on the quality factors used in developing the framework domains and measures, and the results of testing – covering distributions by measure:

  • Frequency distribution by measure ratings
  • Symmetry of distribution
  • Fit statistics
  • Item Characteristic Curves
  • Wright Maps

In studies of the DRDP as a school readiness assessment, domain measures were cross-tested against other validated research tools, including various Woodcock-Johnson tests, and subjected to other statistical tests. Reliability ranged from 0.83 for the Self-Regulation Development domain (4 measures) to 0.90 for the 8 measures in the Language and Literacy Development domain.

The DRDP-School Readiness (DRDP-SR) Validation Report states (page 18):
“DRDP-SR provides reliable and valid psychometric measurement of the development of individual children on the 5 key domains of school readiness. The domain scale reliability coefficients are quite good, particularly considering the limited number of measures (items) comprising each domain. (See Figure 4) It is necessary to keep the number of measures to a minimum, to reduce the burden on teachers. The balance between these two factors is well achieved by DRDP-SR.”

Validity and Reliability Quality Measures

On the subject of Validity, the DRDP was tested as follows:

  • Content validity – alignments and research support this.
  • Response validity – cognitive interviews in 2014 provided evidence of the fit between the intent of the measures and the resulting ratings.
  • Internal structure – calibration studies confirmed the expected ordering of item/step difficulty and its relationship to child performance across domains. Older children were consistently rated higher.

The Reliability indicator refers to “the consistency of measurements, gauged by any of several methods, including when the testing procedure is … administered by different raters (inter-rater reliability)” (NRC 2008, 427).

Internal Consistency

In the Calibration Study of 2015, the expected a posteriori/plausible value (EAP/PV) reliability indices ranged from 0.73 to 0.99, indicating that DRDP (2015) domains and sub-domains all had adequate score reliability. EAP/PV reliability indices are an estimate of how reliably the measures can be used to distinguish students’ underlying abilities. Refer to appendix 12 of the Technical Report for domain separation EAP/PV reliability estimates.
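As an illustration only, and not drawn from the Technical Report, one common way such an index is formed in item response theory is as the ratio of the variance of children’s EAP ability estimates to the total estimated variance, approximated here as that variance plus the mean posterior (error) variance. The function below is a toy sketch under that assumption.

```python
# Toy sketch (not the Technical Report's computation): an EAP-style reliability
# index as variance of ability estimates over total estimated variance.
import statistics

def eap_reliability(eap_estimates: list[float], posterior_variances: list[float]) -> float:
    var_eap = statistics.pvariance(eap_estimates)
    mean_error_var = statistics.fmean(posterior_variances)
    return var_eap / (var_eap + mean_error_var)

# Well-separated estimates with small error variances give a value near 1;
# larger error variances pull the index down.
print(round(eap_reliability([-1.2, -0.3, 0.4, 1.1, 1.8], [0.10, 0.12, 0.09, 0.11, 0.10]), 2))
```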

Inter-rater Reliability

(Page 69 of the Technical Report) Inter-rater reliability data was collected at various times in 2015 and 2016 to gather evidence about rating agreement between pairs of teachers and pairs of special education assessors who independently rated the same child on the same DRDP measures within the same time period.

For the SED domain, inter-rater agreement percentages were calculated for both exact agreement (results ranged from 48 to 81 percent) and agreement within one rating level (results ranged from 83 to 98 percent; Desired Results Access Project 2015).

ELCD DRDP (2015) inter-rater reliability data was collected in fall 2015 and spring 2016. The focus of the study was to examine the relationship between rater agreement and the circumstances that influence rater agreement.

Data was collected from 82 pairs of teachers in early childhood settings (42 pairs from infant/toddler settings and 40 pairs from preschool settings) who independently rated the same children on the same DRDP (2015) measures within the same time period. Pairs represented 37 early childhood programs from across California.

Data was reported for a total of 421 children (214 infants/toddlers and 207 preschool-aged children). Inter-rater agreement percentages were calculated using individual measure ratings for both exact agreement (results ranged from 54 to 64 percent for infants/toddlers and from 50 to 75 percent for preschool-aged children) and agreement within one rating level (results ranged from 87 to 98 percent for infants/toddlers and from 84 to 97 percent for preschool-aged children).
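The two statistics described here are straightforward to compute; the sketch below shows exact agreement and within-one-level agreement for a pair of raters over the same measures, using made-up ratings.

```python
# Exact agreement and agreement within one rating level between two raters.
def agreement_percentages(rater_a: list[int], rater_b: list[int]) -> tuple[float, float]:
    pairs = list(zip(rater_a, rater_b))
    exact = sum(a == b for a, b in pairs) / len(pairs) * 100
    within_one = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs) * 100
    return exact, within_one

# Toy example: ratings on five measures from two teachers.
exact, within_one = agreement_percentages([5, 4, 6, 3, 5], [5, 5, 6, 2, 4])
print(f"exact: {exact:.0f}%, within one level: {within_one:.0f}%")  # exact: 40%, within one level: 100%
```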

Inter-rater agreement percentages were calculated using domain-scaled ratings for exact agreement because this is the information that is provided to teachers and administrators through DRDP reports to support planning for individual children and programs (exact agreement for domain-scaled ratings ranged from 95 to 100 percent for infants/toddlers and from 92 to 97 percent for preschool-aged children).


DRDP Rating and Studies

A summary of research results is available for download here.

DRDP Technical Report
https://www.desiredresults.us/sites/default/files/docs/resources/research/DRDP2015_Technical%20Report_20180920_clean508.pdf

DRDP-School Readiness Validation Study
https://drdpk.org/docs/DRDP-SR_ValidationStudiesSummary.pdf