The DRDP Framework

The DRDP for Head Start and state-funded ECE programs

Educa is a licensed DRDP platform vendor

DRDP (2015) Overview

The Desired Results Developmental Profile (DRDP) is a developmental continuum from early infancy to kindergarten entry. It is a formative assessment instrument approved by Head Start and several states.

The DRDP was developed in California and aligns to the Head Start Early Learning Outcomes Framework (ELOF), the Common Core State Standards and the California Preschool Learning Foundations.

Many teachers in North America use learning stories linked to DRDP as the primary evidence source to meet State & Head Start assessment requirements. This process is supported in Educa.

DRDP Views

There are four versions of the DRDP (2015).

  1. Essential View – 29 measures – Infant/Toddler, Preschool and Kindergarten versions
  2. Fundamental View – 43 measures – Preschool only
  3. Comprehensive View – Infant/Toddler and Preschool versions
  4. Modified View – a version to accommodate COVID-related absences

The most commonly used version is the Essential View, with the Fundamental View often required for children on IEPs. There are also School-Age and Kindergarten (DRDP-K) versions. All versions are available in English and in Spanish.

Educa supports all DRDP views, including the Modified Views, which are used when children are not always on site and therefore not available for in-person observation.

Evidence & Rating Rules

Requirements vary by agency. Head Start currently requires an assessment to be filed for every child three times a year, with two items of evidence to support each rating. California state programs typically require two submissions a year, with one item of supporting evidence (a simple illustration of these rules in code follows the domain list below). Most measures have seven levels, each including examples to help educators with their ratings. All versions of the DRDP include these domains:
  1. Approaches to Learning – Self-Regulation (ATL-REG)
  2. Social and Emotional Development (SED)
  3. Language and Literacy Development (LLD)
  4. English Language Development (ELD)
  5. Cognition, Including Math and Science (COG)
  6. Physical Development – Health (PD-HLTH)
The views differ in the number of measures in each domain. For instance, there are seven ATL-REG measures in the Fundamental View but only four (ATL-REG 4–7) in the Essential View; ATL-REG 1, 2 and 3 are not included in the Essential View.
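
As a simple illustration, here is one way a program might encode the evidence rules described above. The numbers come from this article, and the agency labels and function name are invented for the example, so confirm current requirements with your own agency:

  # Illustrative only: the figures below come from this article, not from an
  # official agency specification.
  EVIDENCE_RULES = {
      "Head Start": {"rating_periods_per_year": 3, "evidence_items_per_rating": 2},
      "California state program": {"rating_periods_per_year": 2, "evidence_items_per_rating": 1},
  }

  def rating_is_supported(agency, linked_evidence_count):
      """Check whether a single measure rating has enough linked evidence for the agency."""
      required = EVIDENCE_RULES[agency]["evidence_items_per_rating"]
      return linked_evidence_count >= required

  # A Head Start rating with one linked Learning Story is not yet fully supported.
  print(rating_is_supported("Head Start", 1))   # False
  print(rating_is_supported("Head Start", 2))   # True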

DRDP Rating Submission & Inspection

The current requirement of all agencies is to rate every measure, with supporting evidence, in every rating period.
Most authorities require ratings for each child to be submitted at the end of each period. They do not require the supporting evidence to be submitted for every measure; however, they do require programs to produce this evidence to support ratings from any period during inspections.

DRDP Online Reports

For California, child data is uploaded to DRDP Online using a universal DRDP upload template spreadsheet that has a child on each row.
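
As a rough sketch of what “a child on each row” means in practice, here is a minimal Python example that writes a spreadsheet-style CSV file. The column names are invented for illustration and are not the official template headers; real uploads should use the template published for DRDP Online:

  import csv

  # Illustrative columns only - not the official DRDP Online template headers.
  children = [
      {"child_id": "C-001", "first_name": "Ana", "last_name": "Reyes",
       "date_of_birth": "2019-03-14", "classroom": "Preschool A"},
      {"child_id": "C-002", "first_name": "Liam", "last_name": "Chen",
       "date_of_birth": "2018-11-02", "classroom": "Preschool A"},
  ]

  with open("drdp_upload.csv", "w", newline="") as f:
      writer = csv.DictWriter(f, fieldnames=list(children[0].keys()))
      writer.writeheader()
      writer.writerows(children)   # one child per row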

Using this raw child data, programs can then see analytical data online (a simple sketch of this kind of aggregation follows the list):

  • Ratings by class
  • Growth from period to period
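
As a rough illustration of the kind of aggregation behind these reports (a minimal sketch, not DRDP Online’s actual implementation; the ratings are stand-in integers for developmental levels):

  from statistics import mean

  # (class, period) -> ratings on one measure for each child in the class
  ratings = {
      ("Preschool A", "Fall"):   [3, 4, 4, 5],
      ("Preschool A", "Spring"): [4, 5, 5, 6],
  }

  # Average rating by class and period, and growth between the two periods
  averages = {key: mean(values) for key, values in ratings.items()}
  growth = averages[("Preschool A", "Spring")] - averages[("Preschool A", "Fall")]
  print(averages)
  print(f"Average growth from Fall to Spring: {growth:.2f} levels")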

[Image: DRDP rating in Educa]

The image above shows Educa’s DRDP selector, which helps teachers link their Learning Stories to DRDP measures.

The image below shows how the DRDP rating screen in Educa pre-populates with links from your Learning Stories.

[Image: DRDP rating screen in Educa, pre-populated with Learning Story links]

DRDP is Research-Based, Valid & Reliable

The DRDP rating gives teachers the ability to assess children’s learning along a continuum of multiple, critical developmental levels, using learning stories and other observations as evidence.
“The DRDP is a tool that consistently produces valid, reliable, and useful estimates of children’s developmental progress within each domain, using information gathered from individual measures about children’s behaviors, knowledge, and skills associated with that domain. The assessment, which reflects the child development research literature, is readily interpretable by all early childhood teachers. Measures are presented in a simple and straightforward manner that clearly demonstrates how learning and development in each area typically progresses for children from early infancy to kindergarten entry.” (Page 32, Technical Report)
“Assessment information gained from using the DRDP is intended to support teachers with planning next steps for scaffolding young children’s learning in key areas… Teachers and administrators can use the data to gauge the status and progress of children’s development and learning in an effort to inform instructional and programming decisions in support of individuals and groups within the programs.” (Page 12, Technical Report)

Research Background

Ten quality indicators were used in the development of the DRDP (2015), guided by federal and state reporting requirements and published early childhood guidelines and psychometric standards for assessment (American Educational Research Association [AERA], American Psychological Association [APA], and National Council on Measurement in Education [NCME] 2014; National Association for the Education of Young Children [NAEYC] 2009; NRC 2008).
These 10 quality indicators were intended to ensure that the instrument adheres to the standards and recommended practices for assessment in early childhood settings, and is appropriate, including being developmentally appropriate, for assessing all young children enrolled in ELCD and SED early childhood education programs.

The 10 quality indicators that guided the development of the DRDP (2015) are listed below:

  1. Alignment (to state standards, the Common Core State Standards and the HSELOF)
  2. Acceptability (for State and Head Start)
  3. Authenticity
  4. Cultural and Linguistic Appropriateness
  5. Multifactors (evidence from multiple sources)
  6. Sensitivity
  7. Universal Design
  8. Utility
  9. Validity
  10. Reliability

Please refer to the Technical Report for the Desired Results Developmental Profile (2015) by the DRDP Collaborative for details on each item.

The Technical Report states that the DRDP instrument consistently produces valid, reliable, and useful estimates of children’s developmental progress within each domain, and that it has sufficient sensitivity to detect growth between rating periods, as confirmed in a 2013 Sensitivity Study.

DRDP Testing & Validation

Establishing the validity and reliability of an assessment instrument requires a large sample of children who represent the nation’s population. This allows teachers and administrators to assume that the instrument will be effective in all instructional settings and for children with different backgrounds, races, ethnicities, and special needs.

The DRDP Collaborative followed this approach. The validation process they followed, as described in the Technical Report, is summarized below.

The studies varied in size, with the Calibration Study including 1,500 children, as per federal guidelines. All studies, including the pilot and field studies, covered at least 600 children across 50+ facilities in 15 different California counties.

The 142-page Technical Report for the DRDP goes into great detail on the quality factors used in developing the framework domains and measures, and on the results of testing, covering for each measure:

  • Frequency distribution by measure ratings
  • Symmetry of distribution
  • Fit statistics
  • Item Characteristic Curves
  • Wright Maps
In studies of the DRDP as a school readiness assessment, domain measures were cross-tested against other validated research tools, including various Woodcock-Johnson subtests, and subjected to other statistical tests. Reliability ranged from 0.83 for the Self-Regulation Development domain (4 measures) to 0.90 for the 8 measures in the Language and Literacy Development domain.

The DRDP-School Readiness (DRDP-SR) Validation Report states (Page 18):

“DRDP-SR provides reliable and valid psychometric measurement of the development of individual children on the 5 key domains of school readiness. The domain scale reliability coefficients are quite good, particularly considering the limited number of measures (items) comprising each domain. (See Figure 4) It is necessary to keep the number of measures to a minimum, to reduce the burden on teachers. The balance between these two factors is well achieved by DRDP-SR.”

Validity and Reliability Quality Measures

On the subject of validity, the DRDP was tested as follows:
  • Content validity – the alignment studies and research base support this.
  • Response validity – cognitive interviews in 2014 provided evidence of the fit between the intent of the measures and the resulting ratings.
  • Internal structure – calibration studies confirmed the expected order of, and relationships between, item/step difficulty and child performance across domains; older children were consistently rated higher.

The Reliability indicator refers to “the consistency of measurements, gauged by any of several methods, including when the testing procedure is … administered by different raters (inter-rater reliability)” (NRC 2008, 427).

Internal Consistency

In the 2015 Calibration Study, the expected a posteriori/plausible value (EAP/PV) reliability indices ranged from 0.73 to 0.99, indicating that the DRDP (2015) domains and sub-domains all had adequate score reliability. EAP/PV reliability indices are an estimate of how reliably the measures can be used to distinguish children’s underlying abilities. Refer to Appendix 12 of the Technical Report for domain separation EAP/PV reliability estimates.
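
For readers unfamiliar with the index, a common general form of EAP reliability is shown below. This is a standard convention from item response theory, not quoted from the Technical Report, so treat it as illustrative of the idea rather than the Collaborative’s exact computation:

  \rho_{\mathrm{EAP}} \approx \frac{\mathrm{Var}(\hat{\theta}_{\mathrm{EAP}})}{\mathrm{Var}(\hat{\theta}_{\mathrm{EAP}}) + \bar{\sigma}^2_{\mathrm{posterior}}}

That is, the variance of the estimated abilities divided by that variance plus the average posterior (error) variance; values close to 1 mean the measures separate children’s underlying abilities well.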

Inter-rater Reliability

(Page 69 of Technical Report) Inter-rater reliability data was collected at various times in 2015 and 2016 to gather evidence about rating agreements between pairs of teachers and pairs of special education assessors who independently rated the same child on the same DRDP measures within the same time period.

For the SED domain, inter-rater agreement percentages were calculated for both exact agreement (results ranged from 48 to 81 percent) and agreement within one rating level (results ranged from 83 to 98 percent; Desired Results Access Project 2015).
ELCD DRDP (2015) inter-rater reliability data was collected in fall 2015 and spring 2016. The focus of the study was to examine rater agreement and the circumstances that influence it.
Data was collected from 82 pairs of teachers in early childhood settings (42 pairs from infant/toddler settings and 40 pairs from preschool settings) who independently rated the same children on the same DRDP (2015) measures within the same time period. Pairs represented 37 early childhood programs from across California.
Inter-rater agreement percentages were calculated using domain-scaled ratings for exact agreement because this is the information that is provided to teachers and administrators through DRDP reports to support planning for individual children and programs (exact agreement for domain-scaled ratings ranged from 95 to 100 percent for infants/toddlers and from 92 to 97 percent for preschool-aged children).
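
As a rough illustration of how such agreement percentages can be computed from paired ratings (a minimal sketch, not the DRDP Collaborative’s analysis code; the rating values are invented for the example):

  def agreement_percentages(rater_a, rater_b):
      """Return (exact %, within-one-level %) agreement for two equal-length lists of ratings."""
      if len(rater_a) != len(rater_b) or not rater_a:
          raise ValueError("Expected two non-empty lists of equal length")
      n = len(rater_a)
      exact = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
      within_one = sum(1 for a, b in zip(rater_a, rater_b) if abs(a - b) <= 1)
      return 100 * exact / n, 100 * within_one / n

  # Example: two teachers rate the same five children on one measure (levels 1-7).
  teacher_1 = [3, 4, 5, 2, 6]
  teacher_2 = [3, 5, 5, 2, 7]
  exact_pct, within_one_pct = agreement_percentages(teacher_1, teacher_2)
  print(f"Exact agreement: {exact_pct:.0f}%, within one level: {within_one_pct:.0f}%")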

DRDP Rating and Studies

DRDP-School Readiness Validation Study
https://drdpk.org/docs/DRDP-SR_ValidationStudiesSummary.pdf