
FAQ

Where were the IGDIs originally created?

The Individual Growth and Development Indicators (IGDIs), sometimes referred to as “Get it, Got it, Go!”, were created in a collaborative effort under the Early Childhood Research Institute on Measuring Growth and Development (ECRI-MGD) at the University of Minnesota’s Center for Early Education and Development (CEED). Research efforts were led by Dr. Scott McConnell and his team of early childhood research associates.

How are IGDIs different from “Get It, Got It, Go!”?

“Get It, Got It, Go!” was the name of the website that hosted information about the original edition of the IGDIs assessment, which is why the two names are often used interchangeably. Moving forward, “myIGDIs” is the umbrella under which the IGDIs assessment materials will be available.

How are IGDIs different from DIBELS?

DIBELS and IGDIs are similar in many respects: both are tools for monitoring the progress of beginning reading development. One difference is the intended age range: DIBELS is designed for children in kindergarten and older grades, while IGDIs were designed for preschoolers. Another difference is the specific nature of the tasks, but conceptually and functionally the two are very similar.

How are IGDIs different from other early childhood assessments?

IGDIs are different from other early childhood assessment systems because they: identify children at risk (i.e., tell us when to do something), evaluate the effectiveness of intervention (i.e., tell us whether a child is making progress during an intervention), and can be used repeatedly over short periods of time for progress monitoring.

Will I still be able to use the “Get It, Got It, Go!” website?

No. The University of Minnesota decided to no longer host the ggg.umn.edu website.

Who else is using IGDIs?

The IGDI test materials have been used across the country for several years. Those currently using IGDIs include: Head Start instructors, special education instructors, speech and language pathologists, Title I coordinators, preschool specialists, parents, and early childhood researchers.

As of Jan 1, 2015, IGDIs have been used by over 12,000 schools and have tested over 250,000 preschool children.

Is there a Spanish version?

What payment methods do you accept?

We accept Purchase Orders or Credit Card payments.

What is Item Response Theory (IRT)?

Click the link provided to view a document prepared by our research partners at the University of Minnesota explaining Item Response Theory (IRT) and how the IGDI “+” measures were designed.

Why are there so many “X’s” in the Fall Sound Identification set?

Sound Identification, like all Early Literacy+ measures, is calibrated using a measurement methodology called Rasch modeling. Rasch modeling places the items on a continuum of difficulty, with the easiest items at one end of the continuum and the items requiring the most ability at the other.
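
For readers who want to see the underlying math, here is a minimal sketch of the standard Rasch (one-parameter logistic) model that this kind of calibration is based on. The ability and difficulty values below are invented for illustration; they are not actual myIGDIs item parameters.

    import math

    def rasch_probability(ability, difficulty):
        # Probability that a child at a given ability answers an item of a
        # given difficulty correctly (standard Rasch / 1PL model). Both values
        # sit on the same logit scale, which is what creates the continuum.
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    print(rasch_probability(0.0, -1.0))  # easy item, average child -> about 0.73
    print(rasch_probability(0.0, 0.0))   # item matched to ability  -> 0.50
    print(rasch_probability(0.0, 1.5))   # hard item, average child -> about 0.18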

Along this continuum we set three performance standards: the fall, winter, and spring benchmarks. It’s important to note that these benchmarks are not normative (that is, we don’t set standards based on performance compared to peers); instead, we use criterion benchmarks. A criterion benchmark references a particular ability level that is indicative of each season. These benchmarks were developed and revised over five years of work with a national sample of students and extensive contributions from lead researchers and practitioners. In this way, each season (fall, winter, and spring) has its own characteristic ability level on the continuum.

Now, back to the continuum of difficulty. If you imagine all the items spread out in a line, with the easiest items at the far left and the most difficult at the far right, then somewhere along that line lies the ability level representing each benchmark. We find that location and then select the items that sit around the benchmark. This is how we create the sets.
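
As a rough sketch of that selection step (using made-up item difficulties and a made-up benchmark location, not actual myIGDIs calibration values), the idea is simply to take the items whose calibrated difficulty falls closest to the benchmark:

    # Hypothetical calibrated difficulties (in logits) for letter-sound items.
    item_difficulties = {
        "m": -1.8, "s": -1.2, "a": -0.9, "t": -0.4,
        "i": 0.1, "g": 0.3, "o": 0.6, "h": 1.1, "u": 1.7,
    }

    fall_benchmark = 0.2  # illustrative benchmark location, not a real value
    set_size = 4

    # Assemble the set from the items whose difficulty is nearest the benchmark.
    fall_set = sorted(
        item_difficulties,
        key=lambda item: abs(item_difficulties[item] - fall_benchmark),
    )[:set_size]
    print(fall_set)  # ['i', 'g', 'o', 't'] with these illustrative numbers

Whichever letters happen to sit near the benchmark will naturally show up again and again in that set, which is the kind of coincidental clustering described next.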

So, to your question: we don’t “pick” items for each set; they are simply the items that sit at the required difficulty. Sometimes strange or coincidental patterns occur at these locations, such as the frequency of “i”. It happens that “i” is often featured in items located at this particular benchmark. This could be for a couple of reasons. There may be something specific about knowing the letter “i” sound that helps differentiate students who are doing fine in Tier 1 from students who need more support. Alternatively, the other letters on those items may contribute to a difficulty level that separates Tier 1 performance from performance that calls for additional support. For example, an item whose letters have differing topographies (say G, I, and O) may be easier to respond to than an item with similar topographies (say I, T, and H). When an item’s difficulty matches what the benchmark location requires, its distractors may be what is contributing to the frequency of its occurrence.

Again, it’s not the case that we select items to be in the benchmark sets; instead, we find the location characteristic of the benchmark and assemble the items at that location into the sets. This is an empirically robust methodology that eliminates variability attributable to “expert” decision making.
