[Photo: Heritage Hall and the WMU flag]

Join WMU at the 2024 AEA Conference!

Discover how WMU is making its mark at AEA 2024 through engaging posters, thought-provoking panels, presentations, and more. Explore our contributions and connect with us in Portland. Let's shape the future of evaluation together!

The Exhibit Hall

[Illustration: State of Michigan in brown and gold with EC stickers]

Visit Booth #111, hosted by The Evaluation Center!

Swing by our booth to dive deeper into The Evaluation Center and our initiatives, including:

  • EvaluATE, delivering open-access webinars and resources for evaluation practitioners and consumers.
  • The Evaluation Checklist Project, providing more than 30 checklists on diverse topics to guide evaluation practice.
  • The Journal of MultiDisciplinary Evaluation, publishing insights and research on evaluation through an open-access, peer-reviewed journal.
  • Valeo, offering a suite of self-paced courses on program evaluation, combining evaluation theory, proven practices, and insights from experienced evaluators.
  • Evaluation Café, a speaker series creating space for engagement, dialogue, and knowledge sharing in the evaluation community.

Don't forget to pick up our unique evaluation-themed vinyl stickers, Sharpies, and word games to complete your conference experience! 

WMU Posters

Wednesday, October 23
5:30 - 7 p.m. PST

 

13 - Strategic Integration: Leveraging Artificial Intelligence to Enhance Collaborative Evaluation Practices
Presenter: John Akwetey

36 - Evaluating Diversity, Equity, and Inclusion Practices Across Multiple Sectors
Authors: Ana Laura L. Vasquez Quino, Mahamat Abdoulaye Kerim, Rohullah Wahidi, Laura B. Carroll, Gary Miron, and Aaron Adusei

Thursday, October 24
6 - 7:30 p.m. PST

170 - Cross-Site Variation in the Impact of Education Programs: Empirical Evidence from Multi-Site and Multi-Level Evaluations
Authors: Thuy Dung Pham, Jessaca Spybrook

224 - Findings from a Systematic Review of Evaluating Mindfulness Programming
Authors: Michael A. Harnar, Michael Coplen, Jackie Quan

232 - CGIAR's Experience of Using Evaluability Assessments as a Means Toward Promoting Evaluation Readiness and Inclusivity
Presenter: Amy Jersild

261 - Practice Profiles: A Powerful Implementation Science Tool for Evaluation Practice
Presenters: Valerie Marshall, Jan Fields

287 - These Are A Few of My Favourite Things: A Toolbox of Interpersonal Skills for Evaluators
Presenter: Allison Prieur

Presentation Schedule

Amplifying and Empowering Voices through Evaluation Journals

Time: 4:15 - 5:15 p.m. PST

Location: B117-119

Academic publishing, and social inquiry more broadly, have a problematic history with equitable representation of voices. Articles and manuscripts have suppressed knowledge and harmed systematically underrepresented populations (Gordon et al., 1990; Smith, 2021). Today, many evaluation journals represent one way that the field aims to amplify and empower the voices of the typically unheard, the historically underrepresented, and today's new evaluators and graduate students. Meet editors from the African Evaluation Journal, American Journal of Evaluation, Canadian Journal of Evaluation, Educational Evaluation and Policy Analysis, Evaluation and Program Planning, Journal of MultiDisciplinary Evaluation, New Directions for Evaluation, and The Evaluation Review to learn about each journal, what is unique about each one, and what each is doing to foster greater inclusivity. This is a unique opportunity to hear from and connect with some of the leaders in the field. Questions are encouraged; please bring them!

Presenters: Bianca Montrosse-Moorhead, Florence Etta, Laura Peck, Nicole Bowman, Ayesha Boyce, Thomas Archibald, Daniel Balsalobre-Lorente, Michael A. Harnar, Sarah Mason, and Hanh Cao Yu.

 

Ghosts in the Evaluation Machine: Ethics, Data Protection, Meta-Evaluation, and Evaluation Quality in the Age of Artificial Intelligence

Time: 4:15 - 5:15 p.m. PST

Location: E144

The integration of generative artificial intelligence (GenAI) technologies, exemplified by OpenAI's ChatGPT, into evaluation practice presents both groundbreaking opportunities and significant ethical challenges. This evolution in machine learning and AI is changing transdisciplinary evaluation, pushing practitioners, researchers, and organizations to reassess the frameworks guiding quality evaluations in the face of such disruptive technologies. The rapid advancement and application of these tools in Monitoring, Evaluation, Research, and Learning (MERL) Tech practices outpace existing guidelines on their responsible use, underscoring a critical need for updated meta-evaluative standards that address the ethical dimensions of AI-enabled evaluations. This session delves into the juxtaposition of the AI safety and AI ethics research camps, emphasizing the latter's focus on the societal and ecological risks posed by current AI technologies, including their potential to perpetuate bias and inequality. It probes how the data pools underpinning AI tools, reflecting a multitude of human voices and values, affect the integrity of evaluation processes and outcomes. By examining the influence of algorithmic bias and value representation on evaluative quality, this session aims to uncover the voices and values amplified or sidelined by AI in evaluation. Featuring insights from evaluators, data scientists, AI ethicists, and privacy professionals, this multi-paper session explores the theoretical, empirical, and practical aspects of GenAI-enabled evaluation practice. It strives to address pressing questions surrounding the quality of AI-enabled evaluations and offers practical recommendations for evaluators looking to harness technology for enhancing evaluation quality, all while maintaining a steadfast commitment to ethical principles and inclusivity in evaluation practice.

Chair: Michael A. Harnar

Presenters: Alex Robinson, Michael Osei, Zach Tilton, Shaddrock Roberts

Generative AI: Navigating the Ethical Frontier in Evaluation

Time: 8:30 - 10 a.m. PST

Location: Oregon Ballroom

Panelists: Olivia Deich, Linda Raftree, Aileen M. Reid, Zach Tilton

This session will explore the transformative impact of Generative AI on evaluation practices, examining both the opportunities and challenges it presents. Panelists will discuss the ethical implications of AI integration, sharing personal insights on the balance between its benefits and risks. Attendees will gain a deeper understanding of how AI can be both a powerful tool and a challenge in evaluation, and what it means to integrate this technology responsibly in their work.

 

Beyond the Classroom: Lessons Learned From Four Evaluation Labs

Time: 11:30 a.m. - 12:30 p.m.

Location: D135-136

Presenters: Ayesha Boyce, Audrey Cooper, Aileen M. Reid, Tiana Yom, Brandon W. Youker, Lori A. Wingate

Fieldwork experiences are the cornerstone of evaluation training. This panel brings together evaluation educators from across the United States who run evaluation laboratories (training centers). They reflect upon and share insights and experiential learning from their respective university-housed evaluation labs. Further, panelists will discuss lessons learned about developing, implementing, funding, and organizing their laboratories, and will overview their efforts to train new evaluators and empower them to amplify their voices. Participatory evaluation embedded in communities, service learning, and culturally responsive and social justice-oriented evaluation within these contexts will be discussed, in addition to structural and logistical considerations of evaluation labs. Each panelist will share joys and frustrations with mentoring and learning from new evaluators in both academic and community settings. The following panelists will share their expertise and insights:

STEM Program Evaluation Lab (SPEL), co-located at UNC Greensboro and Arizona State University. Aileen Reid, Ph.D., is an Assistant Professor in the Department of Information, Library, and Research Sciences at the University of North Carolina Greensboro (UNCG). Ayesha Boyce, Ph.D., is an associate professor within the Division of Educational Leadership and Innovation at Arizona State University (ASU). Drs. Reid and Boyce have developed and field-tested the values-engaged, educative (VEE) training framework, fusing the VEE evaluation approach (Greene et al., 2006) and AEA competencies to produce high-quality mentoring and training for novice evaluators (Reid et al., 2023). The three domains of the VEE training model are culturally responsive and anti-deficit pedagogies, social justice-oriented curriculum, and STEM education contexts.

University of New Mexico Evaluation Lab. Audrey Cooper, M.P.H., R.N., is the Associate Director of the University of New Mexico Evaluation Lab. Audrey began her work with the Lab as a graduate fellow in 2017 and continued taking on additional leadership responsibilities until moving into the Associate Director position in 2023. Audrey is a staff member at the University of New Mexico and an example of grow-your-own leadership in evaluation through mentorship and progressive opportunities.

Western Michigan University Evaluation Lab. Brandon Youker, Ph.D., MSSW, is the Evaluation Lab Director at The Evaluation Center at Western Michigan University. Brandon co-founded the Evaluation Lab in January 2024; the lab evaluates local nonprofit organizations with social justice missions and programming. Brandon works with multidisciplinary evaluation teams consisting of student employees, most of whom are undergraduates.

Northeastern University Public Evaluation Lab (NU-PEL). Tiana Yom, Ed.D., MPH, CHES, is an Assistant Research Professor in Public Policy and Health Sciences and Director of NU-PEL. Dr. Yom is committed to building community-academic partnerships via evaluation research and leads a multigenerational team of students, staff, and faculty. Dr. Yom's specialization is in teaching and developing evaluation techniques, driven by Community-Based Participatory Research (CBPR) and Culturally Responsive Evaluation (CRE) frameworks.

How Do We Know What We Don't Know About the Program Evaluation Standards? Critical Review of PES to Understand Constructs and Navigate Pain Points

Time: 2:30 - 3:30 p.m.

Location: B117-119

Presenters: Brad Watts, Art E. Hernandez, Julie Q. Morrison, Paula Egelson

The Program Evaluation Standards (PES) produced by the Joint Committee on Standards for Educational Evaluation (JCSEE) are the official source for evaluation standards in the United States and Canada and a foundational document of the American Evaluation Association. Last updated in 2010, the third edition of the PES is still widely used across many types of program evaluation but is now due for an update. As such, the JCSEE has undertaken a critical review of the PES to identify core constructs, pain points, and navigational strategies. During this demonstration session, JCSEE members will draw on their assessment to (1) identify primary and secondary constructs embedded in each standard; (2) demonstrate how less prominent (e.g., implicit) constructs in the standards can be crucial to everyday practice; (3) articulate pain points in interpretation and use of standards; and (4) demonstrate how practitioners navigate seen and unseen pain points to incorporate the standards in evaluation to meet diverse aims. Throughout the session, all attendees will be engaged in a real-time, interactive exercise to validate the constructs and pain points presented based on their experience. The participation of a diverse group of evaluation practitioners and theorists in this process is critical for providing input and feedback on the revision of this most important set of standards for evaluators in North America.

The Value of 360-degree Evaluation in Higher Education: Lessons Learned from Four Project-Based Courses

Time: 3:15 - 3:30 p.m. PST

Location: Multi-paper session

Presenters: Daniela Schroeter, Heather Jach Turner, Michael Hart

This presentation offers insights from a study of the value of 360-degree evaluation assignments within higher education, focusing on four project-based public administration courses offered in Spring 2024. By employing this performance evaluation approach, our study sought to enrich undergraduate and graduate students' learning experiences within and outside of evaluation, to promote curriculum alignment, and to contribute to program- and unit-level learning outcomes assessment. Diverse student groups at the graduate and undergraduate levels (two courses at each level) collaboratively developed their 360-degree evaluation method within their project groups. Grounded in the logic of evaluation, this method enabled students to assess their individual performance, evaluate their peers, and reflect on the overall group performance. Through an exploratory approach, we analyzed artifacts generated by students, including group-designed evaluation methods and individual 360-degree evaluation reports. The study's key findings illuminate the value of the 360-degree assignment as an equitable and inclusive assessment tool. This method empowered students to leverage their unique capabilities and developmental needs, fostering a more comprehensive understanding of their learning journey. Additionally, our research delved into individual students' contributions to final deliverables and assessed the assignment's impact on accountability, engagement, and collaborative learning within student groups. Attendees of our session can anticipate actionable insights into implementing 360-degree evaluation methodologies within their own academic courses. We will provide practical recommendations for fostering student collaboration, overcoming implementation challenges, and leveraging our findings to enhance curriculum design and delivery. Moreover, we will offer concrete examples to illustrate the real-world application and transformative impact of this method. Attendees will also have access to downloadable templates and step-by-step guides, facilitating seamless integration into their educational practices. Join us as we share our research findings, exchange best practices, and empower educators to revolutionize student learning experiences through the use of innovative evaluation approaches as course assignments. Together, let's embark on a journey towards fostering excellence and inclusivity in higher education.

Current Practices and Perspectives of Open Science Among Evaluators

Time: 3:45 - 4 p.m. PST

Location: C125-126

Presenters: Dana Linnell, Zach Tilton

Open science describes the practice of making research open for all by promoting transparency through reproducibility, replicability, and knowledge sharing (Kathawalla et al., 2021). Given the similarities between research and evaluation (Wanzer, 2021), the relevance of the open science values and processes already being discussed and practiced in research spaces should be explored for evaluation. In particular, when evaluation is viewed as a science (Patton, 2018) that uses systematic inquiry to investigate the evaluand, evaluation may not be immune to some of the problems plaguing other scientific endeavors, and some of the solutions proposed within open science may therefore be relevant for the field of evaluation.

This study examined the extent to which open science is relevant to evaluation work by asking evaluators about their knowledge, understanding, and use of open science practices in their evaluation work, as well as the barriers they face. Participants were AEA members who had practiced evaluation within the last year. Findings will focus on understanding the breadth and depth of open science within evaluation and on identifying who is using open science practices and under what conditions. This can inform future developments in open science within the field of evaluation.

Findings from a Systematic Review of a Half-Century of Transdisciplinary Meta-Evaluation Practice: Meta-Evaluation and the General Logic of Evaluation

Time: 4 - 4:15 p.m. PST

Location: C125-126

Authors: Michael A. Harnar, Amy Jersild, Rachael Kenney, Valerie Marshall, Mustafa Nazari, Michael Osei, Razia Ibrahim Rasheed, Kari Ross Nelson, Zach Tilton, Takara Tsuzaki

Meta-evaluation stands as a method of quality assurance for evaluations and a form of peer review of evaluators. The explicit practice of meta-evaluation dates to Scriven's 1969 naming of the concept. Since then, it has taken on a full life, with some claiming it entered a period of maturity in 2010, when the Joint Committee on Standards for Educational Evaluation published a new edition of the program evaluation standards that included evaluation accountability (meta-evaluation) as a standalone quality criterion. While much has been written to guide practice, no comprehensive study has examined practice reports to assess how well practice aligns with such guidance. To address this absence, we systematically gathered a corpus of meta-evaluation practice reports (n = 174), developed an extensive data extraction and coding protocol, and analyzed this unique set of reports. As a foundational quality check, we asked whether the vitally important general logic of evaluation was reflected in this corpus of meta-evaluation practice reports. We found that 82% explicitly reported criteria of merit against which the meta-evaluand was observed, 58% explicitly reported standards of achievement, and 91% provided judgments on the meta-evaluand as a direct result of the meta-evaluation. In this presentation, we unpack these numbers and discuss the value of the general logic of evaluation in understanding and describing meta-evaluation practice. Our systematic review uniquely illustrates the first half-century of meta-evaluation practice and lays out a path for its improvement in the next. New and emerging evaluators will find this presentation useful for understanding the general logic and how it "rolls up" to meta-evaluation practice, and they will be prepared to participate in important discussions about our discipline's future. Furthermore, the concept for this systematic review grew out of the interests of students enrolled in a graduate meta-evaluation course, and the presentation will also share lessons learned from mentoring emerging evaluation researchers through a research-on-evaluation project and how this collaboration enhanced the research. As evaluation quality has direct consequences for program stakeholders and society at large, understanding the mechanism for improving that quality, meta-evaluation practice, is pivotal for professionalizing the transdiscipline and sub-fields of evaluation.

 

68 - Leveraging Online Self-Paced Learning to Strengthen the Capacity of Emerging Evaluators

Time: 12:45 - 1:15 p.m. PST

Location: Exhibit Hall A

Presenters: Lori Wingate, Kelly Robertson

This roundtable will explore online self-paced learning as a vehicle for strengthening evaluation knowledge and skills among emerging evaluators. After sharing highlights from WMU's online, self-paced evaluation course offerings (Valeo), we will engage roundtable participants in dialogue to deepen our shared understanding of the opportunities and limitations of this learning format for developing evaluation competence. Key discussion questions: (1) What are the most pressing learning needs for emerging evaluators? (2) What are the benefits or limitations of online, self-paced learning? (3) What instructional features of online courses make learning "sticky"? (4) To what extent do online, self-paced learning opportunities complement existing professional development offerings delivered via other mechanisms? Online, self-paced learning opportunities can help emerging evaluators learn what they need to know when they need it. Such courses can be completed on the learner's schedule and offer job aids to support application of the course content on the job (or in the classroom). The asynchronous format allows a global audience to access courses without regard to time zone differences. Through embedded simulations, learners can practice what they learn and receive feedback. But the format also has distinct limitations. In the roundtable, we'll explore how to maximize the opportunities and benefits of online, self-paced learning while mitigating its shortcomings as a vehicle for developing evaluation competence.

106 - Evaluating Health Plan Partnerships

Time: 1:30 - 2 p.m. PST

Location: Exhibit Hall A

Authors: Danielle Gritters, Marie Djeni, Mariah Black-Watson

Healthcare continues to evolve from a fee-for-service industry to a value-based industry. A portion of the healthcare sector has historically focused on addressing health inequities, which are interconnected with value-based models. In the last five years, health delivery and health plan accreditation have increased the focus on, and expectations for, improving health equity. While efforts to improve health equity commonly address social determinants of health and demographic disparities, the role of partnerships is also being recognized. To increase the overall health of communities, healthcare organizations will need to continue partnering with local communities to effectively improve health. A summative evaluation with formative attributes was conducted to assess the effectiveness of health plan partnerships in improving the overall health of the community and to identify areas for improvement. The health plan partnership evaluation included the development of key evaluation deliverables, including an evaluation plan, logic model, data operationalization plan, and data collection instruments. The latter included an electronic survey and semi-structured focus group facilitation guides. The development of these evaluation materials was guided by accreditation standards, relevant literature, and alignment with the organizational vision. The identified goals of the health plan partnerships include increasing sustainability; creating tangible community and health benefits; achieving favorable policy and practice change; and increasing the ability to leverage funds from the partnership. Intermediate outcomes focused on increasing the mutual benefits, bi-directional support, shared vision, and equitable dynamics of the partnership. The findings from this evaluation will be useful for evaluators and organizations engaged in partnerships to deliver health services. This session will focus on the process, tools, and standards used to develop the health plan partnership evaluation.

Doctoral Student Experiences with Research on Evaluation: Insights and Opportunities from a Collaborative Autoethnography

Time: 4:15 - 4:30 p.m. PST

Location: D133-134

Presenters: Amanda Sutter, Valerie Marshall, Rachael Kenney, Allison Prieur, Kari Ross Nelson, Christine Liboon

What is the experience of doctoral students in learning and being prepared to conduct research on evaluation (RoE)? What can be done to improve the experience and the learning? There is little evidence on how doctoral students learn to conduct RoE, in part because formal evaluator education programs prioritize evaluation practice. Therefore, most of the next generation of evaluation researchers is trained through informal processes and applied projects. A diverse group of seven evaluation practitioners who are current doctoral students joined forces to examine the experience of PhD students in learning and preparing to conduct RoE. The group examined their experiences learning about RoE through a collaborative autoethnographic (CAE) methodology and systematic reflection on topics including how academic learning has been applied to projects, research-practice gaps, access and inclusion, the role of interpersonal relationships, and more.

This session will provide a novel perspective on evaluator education through the student experience. The presenters will share findings from their CAE reflection and offer opportunities for session participants to reflect on their own engagement with RoE. They will also share tips for using CAE as a method to conduct RoE or to support reflective practice. Furthermore, they will share lessons learned for university programs, faculty, and mentors on supporting the process of learning RoE. Understanding how the next generation is learning to conduct RoE, identifying strengths in that process, and determining ways to better prepare researchers and advance RoE are crucial for the continued growth of evaluation as a discipline.

If I Must Die, Let It Be A Tale: Fostering Evaluation Voices in Times of Crisis

Time: 3:45 - 4:45 p.m. PST

Location: C123

Authors: Najat Elgeberi, Ali Khatib, Zach Tilton

This multi-paper discussion addresses three key themes centered around evaluation methodologies and the role of evaluators during crisis situations. Firstly, it emphasizes the importance of hearing the voices of affected populations and ensuring accountability even amidst war or natural disasters. Secondly, it examines the challenges of implementing conventional evaluation and data collection methods during crises and explores the adaptation of innovative approaches to address these challenges. Lastly, it delves into strategies for leveraging the impact of evaluators in amplifying the voices of those affected by crises. The discussion highlights the difficulties inherent in traditional evaluation methods in crisis settings, including limited access to affected populations, time constraints, safety concerns, data security, and reliance on biased secondary data sources. To overcome these obstacles, a nuanced approach is essential, involving the adaptation of innovative evaluation methodologies tailored to crisis contexts. Key considerations include global evaluators collaborating with local experts in crisis-affected regions to amplify the voices of affected communities and advocate for their needs. Local Voluntary Organizations for Professional Evaluation (VOPEs) play a vital role in supporting communities by facilitating data collection, conducting evaluations, and disseminating findings. Furthermore, the discussion explores strategies for disseminating evaluation findings effectively and maximizing their impact on international decision-making processes. It also examines the role of global evaluators in the most severe crisis scenarios, emphasizing the importance of ethical conduct and accountability within the evaluation community. The Gaza crisis serves as a poignant example for this discussion, injecting elements of black comedy to engage in constructive criticism grounded in evidence-based analysis. Through this multi-paper discussion, participants gain insights into the complexities of evaluation in crisis situations and identify practical strategies for overcoming challenges and amplifying the voices of those most affected.

 

Promoting Cost-Inclusive Evaluation (CIE) and Expanding the Toolkit of CIE Methodologies in Order to Strengthen Professional Evaluations

Time: 8 - 8:15 a.m. PST

Presenters: Nadini Persaud, Ruqayyah Ab-Obaid

This year's conference theme focuses on new and emerging perspectives in evaluation, a theme that is particularly relevant and exceedingly important at a time when social programs are experiencing severe budget cuts while facing increasing demand for human services. In light of these challenges, evaluators should reflect on lessons learnt and wisdom gleaned over time and reevaluate whether the traditional manner of conducting evaluations is still suited to a vastly changing environment. In short, the time is right for evaluators to start to think CIE. Moving away from the traditional way of doing things and one's comfort zone is, however, never easy. Embracing methodologies that have traditionally been used to maximize profitability is a tough sell, since social programs have vastly different objectives. This has already been observed in negative comments received from some reviewers of scholarly publications that promote this concept. Acceptance of and buy-in to change is no small task. However, those who are passionate about an issue must persevere.

Economic Evaluation of Interventions for Alleviating Boredom: A Systematic Review

Time: 8:30 - 8:45 a.m. PST

Author: John F. Akwetey

Chair: Brian T. Yates

Presenters: Michelle Rincones-Rodriguez, Michael Osei, John F. Akwetey

This study investigates the economic implications of boredom and its potential adverse consequences for individuals and society, particularly its correlation with depression. Despite numerous empirical studies supporting the effectiveness of boredom interventions, evidence regarding their cost-outcomes remains unclear. Therefore, this study systematically reviews the literature on economic evaluations (i.e., cost-outcome analyses) of boredom interventions, including cost-effectiveness, cost-benefit, and cost-utility analyses. The methodology involves a preliminary search in various databases to identify relevant literature published since January 2003. A full search strategy will then be conducted using predefined inclusion/exclusion criteria across multiple databases, including MEDLINE, PsycINFO, Cochrane Central, Scopus, and others. Articles reporting interventions for reducing boredom will be included, encompassing approaches such as diagnosis, screening, treatment, prevention, training, education, and self-management facilitation. Data were extracted utilizing standardized tools for economic evaluations, with information synthesized into themes using narrative synthesis. Critical appraisal was conducted using established quality assessment checklists, including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol guidelines, the 35-item checklist for quality assessment of economic evaluations, and the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. By systematically reviewing economic evaluations of boredom interventions, this study provides insights into the cost-effectiveness and societal impact of addressing boredom-related issues. The findings will be shared with conference participants to contribute to the understanding of how interventions aimed at reducing boredom can be optimized from an economic perspective, ultimately informing decision-making processes in healthcare and policy development.

Practical Considerations on How the Teacher ICCs Vary/Co-Vary with School Characteristics: Implications for Equity in Educational Evaluation

Time: 9:55 - 10:05 a.m. PST

Location: Portland Ballroom 251

Presenter: Dea Mulolli

Authors: Dea Mulolli, Eric Hedberg, Jessaca Spybrook

Impact evaluations using cluster randomized trials (CRTs) or multisite cluster randomized trials (MSCRTs) are commonly used to assess the efficacy of educational interventions. Given the nested structure of the educational system, with students nested in teachers nested in schools, studies sometimes involve teachers, which requires teacher-level design parameters. These parameters are likely not universal across schools. The existing literature lacks a systematic exploration of how teacher intra-class correlation coefficients (ICCs) vary across school characteristics, which is crucial for accurate power analysis and study planning. This study investigates this variability across three states (Michigan, Kentucky, and North Carolina), revealing nuanced patterns in teacher ICCs. Notably, urban and non-urban schools exhibit opposing teacher ICCs across states, while lower teacher-to-student ratios are consistently associated with higher teacher ICCs. The relationship between free or reduced lunch eligibility (FRL) and teacher ICCs also varies across states. These findings inform more accurate estimation of design parameters for CRTs and MSCRTs, emphasizing the need to account for school-specific characteristics in educational impact evaluations. Practical examples will illustrate how researchers and evaluators can utilize these insights to enhance study planning.
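As a rough illustration of why the teacher-level ICC matters for study planning, the sketch below applies the standard minimum detectable effect size (MDES) formula for a two-level cluster randomized trial with treatment assigned at the cluster (e.g., teacher) level. This is not the presenters' model or data; the design values (60 teachers, 25 students per teacher, balanced assignment, no covariates) and the function name are hypothetical, chosen only to show how sensitive power planning is to the assumed ICC.

```python
# Illustrative sketch (not from the presentation): MDES for a two-level CRT
# with treatment assigned at the cluster (e.g., teacher) level, following the
# standard Bloom-style formula used by common power-analysis tools.
import math
from scipy.stats import t


def mdes_two_level_crt(icc, n_clusters, cluster_size, p_treated=0.5,
                       alpha=0.05, power=0.80):
    """Minimum detectable effect size (in SD units), assuming equal cluster
    sizes, balanced or specified treatment allocation, and no covariates."""
    df = n_clusters - 2  # degrees of freedom for the cluster-level treatment test
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    variance = (icc / (p_treated * (1 - p_treated) * n_clusters)
                + (1 - icc) / (p_treated * (1 - p_treated) * n_clusters * cluster_size))
    return multiplier * math.sqrt(variance)


# Hypothetical design: 60 teachers, 25 students each, half assigned to treatment.
for icc in (0.10, 0.25):
    print(f"teacher ICC = {icc:.2f} -> MDES = {mdes_two_level_crt(icc, 60, 25):.2f} SD")
```

Under these assumptions, raising the teacher ICC from 0.10 to 0.25 pushes the MDES from roughly 0.27 to 0.39 standard deviations, which is the kind of sensitivity that makes school-specific ICC estimates valuable for planning CRTs and MSCRTs.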