Inside IES Research

Notes from NCER & NCSER

The Scoop on Replication Research in Special Education

Replication research may not grab the headlines, but reproducing findings from previous studies is critical for advancing scientific knowledge. Some have raised concerns about whether researchers conduct a sufficient number of replication studies. This concern has drawn increased attention from scholars in a variety of fields, including special education.

Photo array, top left going clockwise: Therrien, Lemons, Cook, and Coyne

Several special education researchers explored this issue in a recent Special Series on Replication Research in Special Education in the journal Remedial and Special Education. The articles describe replication concepts and issues, systematically review the state of replication research in special education, and provide recommendations for the field. One finding is that there may be more replication studies than it seems—but authors don’t call them replications.

Contributors to the special issue include Bryan Cook from the University of Hawaii, Michael Coyne from the University of Connecticut, and Bill Therrien from the University of Virginia, who served as guest editors, and Chris Lemons, from Peabody College of Vanderbilt University. They shared more about the special issue and their collective insights into replications in special education research.

How did you become interested in replication work?

Replication is a core component of the scientific method. Despite this basic fact that we all learned in Research 101, it is pretty apparent that in practice, replication is often ignored. We noticed how much attention the lack of replication was starting to get in other fields and in the press and were particularly alarmed by recent work showing that replications often fail to reproduce original findings. This made us curious about the state and nature of replication in the field of special education.

What is the state of replication research in special education?

It depends on how you define replication and how you search for replication articles. When a narrow definition is used and you require the term “replication” to be in the article, the rate of replication doesn’t look too good. Using this method, Lemons et al. (2016) and Makel et al. (2016) reported that the rate of replication in special education is between 0.4% and 0.5%, meaning that out of all the articles published in our field, less than 1% are replications. We suspected that—for a number of reasons (e.g., perceptions that replications are difficult to publish, are less prestigious than novel studies, and are hostile attempts to disprove a colleague’s work)—researchers might be conducting replication studies but not referring to them as such. And, indeed, it’s a different story when you use a broad definition and you do not require the term replication to be in the article. Cook et al. (2016) found that out of 83 intervention studies published in six non-categorical special education journals from 2013-2014, there were 26 (31%) that could be considered replications, though few authors described their studies that way. Therrien et al. (2016) selected eight intervention studies from 1999-2001 and determined whether subsequently published studies that cited the original investigations had replicated them. They found that six of the eight original studies had been replicated by a total of 39 different studies (though few of the replications identified themselves as such).

What were some other key findings across the review articles?

Additional findings indicated that: (a) most replications conducted in special education are conceptual (i.e., some aspects are the same as the original study, but some are different) as opposed to direct (i.e., as similar to the original study as possible), (b) the findings of the majority of replications in special education agreed with the findings of the original studies, and (c) most replications in the field are conducted by one or more authors involved in the original studies. In three of the four reviews, we found it was more likely for a replication to produce the same outcome if there was author overlap between the original and replication studies. This may be due to the challenges of replicating a study with the somewhat limited information provided in a manuscript. It also emphasizes the importance of having more than one research team independently replicate study findings.  

What are your recommendations for the field around replicating special education interventions?

The article by Coyne et al. (2016) describes initial recommendations for how to conceptualize and carry out replication research in a way that contributes to the evidence about effective practices for students with disabilities and the conditions under which they are more or less effective:

  • Many studies evaluate an approach that has previously been studied under different conditions. In this case, researchers should specify which aspects replicate previous research;
  • Conceptualize and report intervention research within a framework of systematic replications, or a continuum of conceptual replications ranging from those that are more closely aligned to the original study to those that are less aligned;
  • Design and conduct closely aligned replications that duplicate, as faithfully as possible, the features of previous studies;
  • Design and conduct less closely aligned replications that intentionally vary essential components of earlier studies (e.g., participants, setting, intervention features, outcome measures, and analyses); and
  • Interpret findings using a variety of methods, including statistical significance, directions of effects, and effect sizes. We also encourage the use of meta-analytic aggregation of effects across studies.
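
As a rough illustration of the last recommendation, a fixed-effect (inverse-variance) average is one common way to aggregate effect sizes across an original study and its replications. The effect sizes and variances below are hypothetical and are not drawn from any of the studies discussed here:

```python
# Fixed-effect (inverse-variance) meta-analytic aggregation of effect
# sizes across an original study and its replications.
# All numbers are illustrative, not from the special series.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its variance."""
    weights = [1 / v for v in variances]
    est = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    var = 1 / sum(weights)
    return est, var

# Hypothetical original study plus two conceptual replications:
effects = [0.45, 0.30, 0.38]     # standardized mean differences (d)
variances = [0.02, 0.03, 0.025]  # sampling variance of each d
est, var = pooled_effect(effects, variances)
print(round(est, 3), round(var, 4))  # pooled d ≈ 0.387, variance ≈ 0.0081
```

More precise studies (smaller variances) receive more weight, which is why the pooled estimate sits closer to the larger, better-estimated effects.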

One example of a high-quality replication study is by Doabler et al. The authors conducted a closely aligned replication study of a Tier 2 kindergarten math intervention. In the design of their IES-funded project, the authors planned a priori to conduct a replication study that would vary on several dimensions, including geographical location, participant characteristics, and instructional context. We believe this is a nice model of designing, conducting, and reporting a replication study.

Ultimately, we need to conduct more replication studies, we need to call them replications, we need to better describe how they are alike and different from the original study, and we need to strive for replication by researchers not involved in the original study. It is this type of work that may increase the impact research has on practice, because it strengthens our understanding of whether, when, and where an intervention works.

By Katie Taylor, Program Officer, National Center for Special Education Research

The Institute of Education Sciences at AERA

The American Educational Research Association (AERA) will hold its annual meeting April 8 through April 12 in Washington, D.C.—the largest educational research gathering in the nation. This will be a special meeting for AERA, as it is celebrating 100 years of advocating for the development and use of research in education. The program includes hundreds of sessions, including opportunities to learn about cutting-edge education research and opportunities to broaden and deepen the field.

About 30 sessions will feature staff from the Institute of Education Sciences (IES) discussing IES-funded research, evaluation, and statistics, as well as training and funding opportunities.

On Saturday, April 9, at 10:35 a.m., attendees will have a chance to meet the Institute’s leadership and hear about the areas of work that IES will be focusing on in the coming year. Speakers include Ruth Curran Neild, IES’ delegated director, and the leaders of the four centers in IES: Thomas Brock, commissioner of the National Center for Education Research (NCER); Peggy Carr, acting commissioner of the National Center for Education Statistics (NCES); Joy Lesnick, acting commissioner of the National Center for Education Evaluation and Regional Assistance (NCEE); and Joan McLaughlin, commissioner of the National Center for Special Education Research (NCSER).

On Monday, April 11, at 9:45 a.m., attendees can speak to one of several IES staffers who will be available at the Research Funding Opportunities—Meet Your Program Officers session. Program officers from NCER, NCSER, and NCEE will be on hand to answer questions about programs and grant funding opportunities. Several IES representatives will also be on hand Monday afternoon, at 4:15 p.m. for the Federally Funded Data Resources: Opportunities for Research session to discuss the myriad datasets and resources that are available to researchers.

NCES staff will lead sessions and present on a variety of topics, from The Role of School Finance in the Pursuit of Equity (Saturday, 12:25 p.m.) to Understanding Federal Education Policies and Data about English Learners (Sunday, April 10, 8:15 a.m.) and what we can learn from the results of PIAAC, a survey of adult skills (also Sunday, 8:15 a.m.). Dr. Carr will be a part of several sessions, including one on Sunday morning (10:35 a.m.) about future directions for NCES longitudinal studies and another on Monday morning (10 a.m.) entitled Issues and Challenges in the Fair and Valid Assessment of Diverse Populations in the 21st Century.

On Monday, at 11:45 a.m., you can also learn about an IES-supported tool, called RCT-YES, that is designed to reduce barriers to rigorous impact studies by simplifying estimation and reporting of study results (Dr. Lesnick will be among those presenting). And a team from the IES research centers (NCER/NCSER) will present Sunday morning (10:35 a.m.) on communication strategies for disseminating education research (which includes this blog!).

IES staff will also participate in a number of other roundtables and poster sessions. For instance, on Tuesday, April 12, at 8:15 a.m., grab a cup of coffee and attend the structured poster session with the Institute’s 10 Regional Educational Laboratories (RELs). This session will focus on building partnerships to improve data use in education.  REL work will also be featured at several other AERA sessions.  

Did you know that the National Library of Education (NLE) is a component of IES? On Friday and Monday afternoon, attendees will have a unique opportunity to go on a site visit to the library. You’ll learn about the library’s current and historical resources – including its collection of more than 20,000 textbooks dating from the mid-19th century. The Library offers information, statistical, and referral services to the Department of Education and other government agencies and institutions, and to the public.

If you are going to AERA, follow us on Twitter to learn more about our sessions and our work.  And if you are tweeting during one of our sessions, please include @IESResearch in your tweet. 

By Dana Tofig, Communications Director, IES

The PI Meeting in 140 Characters

By Wendy Wei, Program Assistant, National Center for Education Research

How can practitioners and policymakers apply education research to their everyday work if they never hear about it or do not understand it? Communicating and disseminating research findings plays an integral role in promoting the education sciences and advancing the field.

That is why we made communication and dissemination a major theme at the IES Principal Investigators’ Meeting held earlier this month (December 10-11). The two-day meeting in Washington, D.C., featured five sessions that focused on communications – ranging from data visualization techniques to effective dissemination strategies to hearing journalists’ perspectives on how to share scientific results with the general public.

There was a lot of talk about social media during the meeting and plenty of tweeting about the presentations. We used the Twitter hashtag, #IESPIMtg, to foster an ongoing conversation for meeting attendees and to share findings that emerged from sessions.  Any tweet that included #IESPIMtg was automatically pooled together, generating a live Twitter feed that was on display in the lobby throughout the meeting.

You can see all of the #IESPIMtg tweets online, but here are some highlights:

"There is a tremendous sense of urgency to bridge the gap between research and practice..." --John B King #IESPIMtg

— Leah Wisdom (@lifelnglearner) December 10, 2015

.@StanfordEd's Sean Reardon: Good partnership work can lead to new knowledge, change policy+practice, improve data quality #IESPIMtg

— Bill Penuel (@bpenuel) December 11, 2015

#IESPIMtg Practitioner partners play a critical role in making sense of data and analyses in RPPs.

— Jennifer Russell (@Jenn_L_Russell) December 10, 2015

And we can get a little bit meta now…communicating about how to communicate:

Hirsh-Pasek & Golinkoff urges researchers to create "'edible science' that is accessible, digestible and usable." #IESPIMtg

— Tomoko Wakabayashi (@twakabayashi264) December 10, 2015

Awesome presentation on #DataVisualization by @jschwabish: Show the data, reduce the clutter, stop distracting attention. #IESPIMtg

— Rudy Ruiz (@RudyRuiz_BMore) December 10, 2015

.@KavithaCardoza Explaining your research--Don't think of it as "dumbing down." Think of it as simplifying. #IESPIMtg

— Dana Tofig (@dtofig) December 11, 2015

And, of course, what's Twitter without a little fun? When we tweeted this picture...

The poster session is going strong. Principal investigators present findings from #iesfunded research. #IESPIMtg

— IES Research (@IESResearch) December 10, 2015

...Chris Magnuson, Director of Innovation for Live It, Learn It, posted this reply: 

@IESResearch careful...photo looks like it was taken on Death Star! May the force be with all grantees! #SBIR #IES

— Chris Magnuson (@cromagnuson) December 10, 2015

The National Center for Education Research (NCER) and the National Center for Special Education Research (NCSER) have made a commitment to be active contributors in communicating with and engaging the general public in the exciting findings of NCER- and NCSER-funded work. Over the past few years, we have been active on Twitter (you can follow us @IESResearch), and this past year, we launched our blog (the very one you are reading!). These two platforms have provided us with an outlet to share research findings, provide updates about events and deadlines, and connect with audiences we otherwise might not reach.

For those of you who could not make the PI meeting, videos will be posted on the conference website in about a month. So stay tuned!

We hope you’ll continue the conversation started at the PI meeting by following us on Twitter at @IESResearch or sharing your thoughts with us at IESResearch@ed.gov.

 

IES Honors Statistician Nathan VanHoudnos as Outstanding Predoctoral Fellow

By Phill Gagne and Katina Stapleton, NCER Program Officers

Each year, IES recognizes an outstanding fellow from its Predoctoral Interdisciplinary Research Training Programs in the Education Sciences for academic accomplishments and contributions to education research. The 2014 winner, Dr. Nathan VanHoudnos, completed his Ph.D. at Carnegie Mellon University and wrote his dissertation on the efficacy of the Hedges correction for unmodeled clustering. Nathan is currently a postdoctoral fellow at Northwestern University. In this blog, Nathan provides insights on becoming an education researcher and on research study design.

How did you become interested in education research?

I was born into it. Before he retired, my father was the Director of Research for the Illinois Education Association. Additionally, my grandparents on my mother's side were both teachers. 

 

As a statistician, how do you explain the relevance of your research to education practitioners and policy-makers?

I appeal to the crucial role biostatisticians play in the progress of medical research. Doctors and medical researchers are able to devote their entire intellectual capacity towards the development of new treatments, while biostatisticians are able to think deeply about both how to test these treatments empirically and how to combine the results of many such studies into actionable recommendations for practitioners and policy makers.  I aim to be the education sciences analogue of a biostatistician. Specifically, someone whose career success is decided on (i) the technical merits of the new methodology I have developed and (ii) the usefulness of my new methodology to the field. 

Your research on the Hedges correction suggests that many education researchers mis-specify their analyses for clustered designs. What advice would you give researchers on selecting the right analyses for clustered designs? 

My advice is to focus on the design of the study. If the design is wrong, then the analysis that matches the design will fail, and it is likely that no re-analysis of the collected data will be able to recover from the initial mistake. For example, a common design error is randomizing teachers to experimental conditions, but then assuming that how the school registrar assigned students to classes was equivalent to the experimenter randomizing students to classes. This assumption is false. Registrar-based student assignment is a kind of group-based, or clustered, random assignment. If this error is not caught at the design stage, the study will necessarily be underpowered because the sample size calculations will be off. If the error is not caught at the publication stage, the hypothesis test for the treatment effect will be anti-conservative, i.e., even if the treatment effect is truly zero, the test statistic is still likely to be (incorrectly!) statistically significant. The error will, however, be caught if the What Works Clearinghouse decides to review the study. Their application of the Hedges correction, however, will not fix the design problem. The corrected test statistic will, at best, have low power, just like a re-analysis of the data would. At worst, the corrected test statistic can have nearly zero power. There is no escape from a design error.
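
The point about sample size calculations being off can be made concrete with the standard design-effect formula for cluster randomization. The cluster size and intraclass correlation (ICC) below are illustrative values, not figures from Nathan's research:

```python
# Design effect under clustered random assignment: the factor by which
# the variance of a group mean is inflated when intact classrooms,
# rather than individual students, are randomized.
# Values are illustrative only.

def design_effect(cluster_size: int, icc: float) -> float:
    """DEFF = 1 + (m - 1) * ICC for clusters of size m."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_students: int, cluster_size: int, icc: float) -> float:
    """Number of independently randomized students the clustered sample
    is actually worth."""
    return n_students / design_effect(cluster_size, icc)

# 20 classrooms of 25 students each, with a modest ICC of 0.20:
n, m, icc = 500, 25, 0.20
print(round(design_effect(m, icc), 2))             # 5.8
print(round(effective_sample_size(n, m, icc)))     # 86
```

A nominal sample of 500 students behaves like roughly 86 independent observations here, which is why an analysis that ignores the clustering overstates its precision and yields anti-conservative tests.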


To give a bit of further, perhaps self-serving advice, I would also suggest engaging your local statistician as a collaborator. People like me are always looking to get involved in substantively interesting projects, especially if we can get involved at the planning stage of the project. Additionally, this division of labor is often better for everyone: the statistician gets to focus on interesting methodological challenges and the education researcher gets to focus on the substantive portion of the research. 

How has being an IES predoc and now an IES postdoc helped your development as a researcher?

This is a bit like the joke where one fish asks another "How is the water today?" The other fish responds "What's water?" 

I came to Carnegie Mellon for the joint Ph.D. in Statistics and Public Policy, in part, because the IES predoc program there, the Program for Interdisciplinary Education Research (PIER), would both fund and train me to become an education researcher. The PIER program shaped my entire graduate career. David Klahr (PIER Director) gave me grounding in the education sciences. Brian Junker (PIER Steering Committee) taught me how to be both methodologically rigorous and yet still accessible to applied researchers. Sharon Carver (PIER co-Director), who runs the CMU lab school, built in a formal reflection process for the "Field-Based Experience" portion of our PIER training. That essay was, perhaps, the most cathartic thing I have ever written in that it helped to set me on my career path as a statistician who aims to focus on education research. Joel Greenhouse (affiliated PIER faculty), who is himself a biostatistician, chaired my thesis committee. It was his example that refined the direction of my career: I wish to be the education sciences analogue of a biostatistician.

The IES postdoc program at Northwestern University, where I am advised by Larry Hedges, has been very different. Postdoctoral training is necessarily quite different from graduate school. One thread is common, however: the methodology I develop must be useful to applied education researchers. Larry is, as one might suppose, quite good at focusing my attention on where I need to make technical improvements to my work, but also how I might better communicate my technical results and make them accessible to applied researchers. After only a year at Northwestern, I have grown considerably in both my technical and communication skills.

What career advice would you give to young researchers?

Pick good mentors and heed their advice. To the extent that I am successful, I credit the advice and training of my mentors at Carnegie Mellon and Northwestern. 


Comments? Questions? Please write to us at IESResearch@ed.gov.

Experts Discuss the Use of Mixed Methods in Education Research

By Corinne Alfeld and Meredith Larson, NCER Program Officers

Since IES was founded more than a dozen years ago, it has built a reputation for funding rigorous research to measure the causal effects of education policies and programs.  While this commitment remains solid, we also recognize the value of well-designed qualitative research that deepens understanding of program implementation and other educational processes and that generates new questions or hypotheses for study. In this blog post, we highlight the outcomes from a recent meeting we hosted on the use of mixed methods – that is, studies that combine qualitative and quantitative methods – and share some of the ways in which our grantees and other researchers incorporate mixed methods into their research.

On May 29, 2015, 10 researchers with experience designing and conducting mixed methods research met with staff from the two IES research centers in a technical working group (TWG) meeting. The TWG members shared their experiences carrying out mixed methods projects and discussed what types of technical assistance and resources we could provide to support the integration of high-quality mixed methods into education research. There was consensus among the TWG members that qualitative data is valuable, enriches quantitative data, and provides insight that cannot be gained from quantitative research alone.  Participants described how mixed methods are currently used in education research, proposed potential NCER and NCSER guidance and training activities to support the use of high-quality mixed methods, and offered suggestions for researchers and the field. Below are just a few examples that were shared during the meeting:

  • Dr. Carolyn Heinrich and colleagues used a longitudinal mixed methods study design to evaluate the efficacy of supplemental education services provided to low-income students under No Child Left Behind. One of the critical findings of the study was that there was substantial variation across school districts in what activities were included in an hour of supplemental instruction, including (in some cases) many non-instructional activities.  This was revealed as the team examined the interview data describing what activities lay behind the shared metric of an hour of instructional time.  Having that level of information provided the team with critical insights as they examined the site-by-site variation in efficacy of supplemental education services.  Dr. Heinrich emphasized the need for flexibility in research design because the factors affecting the impact of an intervention are not always apparent in the design phase. In addition, she reminded the group that while statistical models provide an average impact score, there is valuable information included in the range of observed impacts, and that this variability is often best understood with information collected using in-depth field research approaches.
  • Dr. Mario Small used mixed methods research to examine social networks in childcare centers in New York City. Using observational methods, he discovered that variations in the level of networking among mothers depended on the individual child care center, not the neighborhood. He hypothesized that child care centers that had the strictest rules around pick-up and drop-off, as well as more opportunities for parent involvement (such as field trips), would have the strongest social networks. In such settings, parents tend to be at the child care center at the same time and, thus, have more interaction with each other. Dr. Small tested the hypotheses using analysis of survey and social network data and found that those who developed a social network through their child care center had higher well-being than those who did not. He concluded from this experience that without the initial observations, he would not have known that something small, like pick-up and drop-off policies, could have a big effect on behavior.
  • Dr. Jill Hamm described a difficult lesson learned about mixed methods “after the fact” in her study, which was funded through our National Research Center on Rural Education Support. In planning to launch an intervention to be delivered to sixth-grade teachers to help adolescents adjust to middle school, she and her colleagues worked with their school partners to plan for possible challenges in implementation. However, because some of the qualitative data collected in these conversations were not part of the original research study – and, thus, not approved by her Institutional Review Board – the important information they gathered could not be officially reported in publications of the study’s findings. Dr. Hamm encouraged researchers to plan to use qualitative methods to complement quantitative findings at the proposal stage to maximize the information that can be collected and integrated during the course of the project.
  • In a study conducted by Dr. Tom Weisner and his colleagues, researchers conducted interviews with families of children with disabilities to determine the level of “hassle” they faced on a daily basis and their perceptions of sustainability of their family’s routines. Findings from these interviews were just as good at predicting family well-being as parental reports of coping or stress on questionnaires. The findings from the analysis of both the qualitative and quantitative data collected for this study enhanced researchers’ understanding of the impact of a child’s disability on family life more than either method could have alone. Dr. Weisner observed that the ultimate rationale of mixed methods research should be to gather information that could not have been revealed without such an approach. Because “the world is not linear, additive, or decontextualized,” he suggested that the default option should always be to use mixed methods and that researchers should be required to provide a rationale for why they had not done so, where feasible.

Curious to learn more about what was discussed? Additional information is available in the meeting summary.

Comments? Questions? Please email us at IESResearch@ed.gov.