Abstract:
Definitions help us understand the characteristics of an object or phenomenon and are a necessary precursor to understanding what a good version of it looks like. Evaluation as a field has resisted a common definition (Crane, 1988; Morell & Flaherty, 1978; M. F. Smith, 1999), which has implications for marketing, training, practice, and quality assurance. In this position paper, I describe the benefits and challenges of not having a clear, agreed-upon definition, then propose and explore the implications of two definitions for the evaluation profession based on values and valuation as the core of evaluation practice. The purpose is to describe a possible way forward through definition that would increase our professional profile, power, and contribution to social justice. The paper concludes with implications for evaluator competencies and evaluation education and questions for further research.
Abstract:
Family-and-home-based interventions are an important vehicle for preventing childhood obesity. Systematic process evaluations have not been routinely conducted in assessment of these interventions. The purpose of this study was to plan and conduct a process evaluation of the Enabling Mothers to Prevent Pediatric Obesity Through Web-Based Learning and Reciprocal Determinism (EMPOWER) randomized controlled trial. The trial was composed of two web-based, mother-centered interventions for prevention of obesity in children between 4 and 6 years of age. Process evaluation used the components of program fidelity, dose delivered, dose received, context, reach, and recruitment. Categorical process evaluation data (program fidelity, dose delivered, dose exposure, and context) were assessed using Program Implementation Index (PII) values. Continuous process evaluation variables (dose satisfaction and recruitment) were assessed using ANOVA tests to evaluate mean differences between groups (experimental and control) and sessions (sessions 1 through 5). Process evaluation results found that both groups (experimental and control) were equivalent, and interventions were administered as planned. Analysis of web-based intervention process objectives requires tailoring of process evaluation models for online delivery. Dissemination of process evaluation results can advance best practices for implementing effective online health promotion programs.
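The two analytic steps this abstract describes (an implementation index for categorical components, ANOVA for continuous ones) can be sketched as follows. The PII formula used here, delivered items over planned items, and all numbers are illustrative assumptions, not values or definitions taken from the EMPOWER trial itself:

```python
# Hypothetical sketch of a process-evaluation analysis in the style described
# above. The PII formula and all data are illustrative assumptions.

def program_implementation_index(delivered: int, planned: int) -> float:
    """One plausible Program Implementation Index: the proportion of
    planned program components that were actually delivered."""
    return delivered / planned

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across the given groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares
    ssb = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

pii = program_implementation_index(9, 10)            # a categorical component
f_stat = one_way_anova_f([4.2, 4.5, 4.1, 4.6, 4.3],  # experimental group
                         [4.0, 4.4, 4.2, 4.5, 4.1])  # control group
print(f"PII = {pii:.2f}, F = {f_stat:.3f}")
```

A small F statistic on such hypothetical satisfaction scores would be consistent with the abstract's finding that the two groups were equivalent; in practice the F statistic would be compared against the F distribution to obtain a p-value.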
Abstract:
This article conceptualises the problem of selecting teaching content that supports the practice of programme evaluation. Knowledge for evaluation practice falls within one of three categories of knowledge that are defined by the different roles they play in supporting practice. First, core knowledge relates to the defining activity of evaluation practice, i.e., that it informs the intellectual task of the determination of a programme's value. Second, accessory knowledge informs activities that support and facilitate the concretisation of the previous activity in a delivery context (e.g., stakeholder participation, evaluation use, project management, etc.). Third and finally, supplementary knowledge informs activities that may, on occasion, occur during evaluation practice, but without relating to the determination of value, either inherently or in a support role. The selection of knowledge for the teaching of evaluation must match the knowledge needed for the pursuit of effective evaluation practice: core, accessory, and supplementary knowledge. The specifics of these three needs ultimately depend on the characteristics of a given practice. The selection of content for the teaching of evaluation should ideally address these specific needs with the best knowledge available, regardless of its disciplinary origins.
Abstract:
Internationally, healthcare is undergoing a major reconfiguration in a post-pandemic world. To make sense of this change and deliver an integrated provision of care, which improves both patient outcomes and satisfaction for key stakeholders, healthcare leaders must develop an insight into the context in which healthcare is delivered and leadership is enacted. Formal leadership development programmes (LDPs) are widely used for developing leaders and leadership in healthcare organizations. However, there is a paucity of rigorous evaluations of LDPs. Existing evaluations often focus on individual-level outcomes, with limited attention to long-term outcomes that might emerge across team and organizational levels. Specifically, evaluation models that have been closely associated with or rely heavily on qualitative methods are seldom used in LDP evaluations, despite their relevance for capturing unanticipated outcomes, investigating learning impact over time, and studying collective outcomes at multiple levels. The purpose of this paper is to review the potential of qualitative models and approaches in healthcare leadership development evaluation. This scoping review identifies seventeen evaluation models and approaches. Findings indicate that the incorporation of qualitative and participatory elements in evaluation designs could offer a richer demonstration and context-specific explanations of programme impact in healthcare contexts.
Abstract:
Internal evaluations are numerous but the literature is largely focused on external evaluations. There have been few explorations of the factors affecting the use of findings from internal evaluations that are carried out by program staff in community organizations. This study examined the instrumental use of internal evaluation findings within 19 community mental health organizations in Ontario, Canada. All but one respondent reported instrumental use in their organization, using the evaluation findings to make program-related decisions. For these non-controversial programs, qualities such as the ability of internal evaluators to identify relevant information, their role/expertise within the organization and the consistency of evaluation findings with current understanding appeared to influence use more strongly than evaluator objectivity.
Abstract:
After years in preparation, we are pleased to introduce this special issue of Evaluation and Program Planning that focuses on evaluator education, a topic that we believe is of considerable importance to the field's future. Before describing the issue's content, let us ground the articles in two ways: by examining the larger context within which evaluator education finds itself, and by briefly explaining who we are and describing how the issue came into being.
Abstract:
This paper proposes a framework for Sustainable Development Goal (SDG) evaluation, arguing that attainment of the 17 goals and 169 related targets depends significantly on practice-based monitoring and evaluation. The SDGs' 15-year time frame can helpfully be divided into three 5-year phases: a planning phase driven by proactive evaluation and evaluability assessment, an improvement phase characterized by formative evaluation and monitoring, and a completion phase involving outcome and impact evaluations. Across these phases, in keeping with the SDGs' fundamental philosophy of "no one left behind," local relevance must be considered when evaluating SDG programs, particularly to capture the overarching concepts applicable across the 17 goals, such as educational dynamics and resilience.
Abstract:
Image segmentation is a prerequisite for image processing. There are many methods for image segmentation, and as a result, a great number of methods for evaluating segmentation results have also been proposed. How to effectively evaluate the quality of image segmentation is therefore an important question. In this paper, the existing image segmentation quality evaluation methods are summarized, mainly comprising unsupervised methods and supervised methods. Focusing on topical issues, the application of these metrics to natural, medical, and remote sensing image evaluation is further outlined. In addition, an experimental comparison of some of these methods was carried out and their effectiveness was ranked. At the same time, the effectiveness of classical metrics for remote sensing and medical image evaluation is also verified.
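To make the supervised side of such evaluations concrete, here is a minimal sketch of two widely used region-overlap metrics, IoU (the Jaccard index) and the Dice coefficient; the binary masks and helper names are illustrative, not drawn from the paper's own experiments:

```python
# Minimal sketch of supervised segmentation-quality metrics on binary masks
# represented as nested lists of 0/1 values. All data are illustrative.

def iou(pred, truth):
    """Intersection over union (Jaccard index) of two equal-shape binary masks."""
    inter = sum(p & t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    union = sum(p | t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    return inter / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p & t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    total = sum(p for row in pred for p in row) + sum(t for row in truth for t in row)
    return 2 * inter / total if total else 1.0

pred  = [[1, 1, 0],   # predicted segmentation mask
         [0, 1, 0]]
truth = [[1, 1, 0],   # ground-truth mask
         [0, 0, 1]]
print(f"IoU = {iou(pred, truth):.3f}, Dice = {dice(pred, truth):.3f}")
```

For binary masks the two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report only one of them; unsupervised metrics, by contrast, score a segmentation without any ground-truth mask.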
Abstract:
This article compares and contrasts the evaluation activities described in Practical Participatory Evaluation (Cousins & Whitmore, 1998), Values-engaged Evaluation (Greene, 2005), and Emergent Realist Evaluation (Mark, Henry, & Julnes, 1998). Using the logic models developed to depict each of the three evaluation theories (Hansen, Alkin, & Wallace, 2013) as a starting point, both quantitative and qualitative analysis techniques are employed to discuss the similarities and differences across the practice prescriptions. The approaches are then described according to Miller's (2010) standards for empirical examinations of evaluation theory. Specifically, I offer speculation about their operational specificity and feasibility in practice. I argue that none of the models is completely specific, or wholly unique, and they all present challenges of adaptation into the field. However, the models each offer varying degrees of guidance and unique elements through their prescriptions.
Abstract:
This article presents a study of the effects of stakeholder involvement on perceptions of an evaluation's credibility. Crowdsourced members of the public and a group of educational administrators read a description of a hypothetical program and two evaluations of the program: one conducted by a researcher and one conducted by program staff (i.e. program stakeholders). Study participants were randomly assigned versions of the scenario with different levels of stakeholder credibility and types of findings. Results showed that both samples perceived the researcher's evaluation findings to be more credible than the program staff's, but that this difference was significantly reduced when the program staff were described to be highly credible. The article concludes with implications for theory and research on evaluation dissemination and stakeholder involvement.