Abstract:
In the fine chemicals industry, including pharmaceutical and agricultural chemicals, analytical tests are performed by production departments or contract research organizations at some stage in the research and development of products. These external organizations are required to maintain the capability to perform analytical tests using methods that are equivalent to or better than those specified by analytical method validation. For this reason, transfer of analytical procedures to an alternative site becomes necessary. In this review, the relationship between transfer of analytical procedures and assay validation is introduced, focusing on analytical procedures that include HPLC.
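The review does not prescribe a statistical procedure here, but a common way to demonstrate that a receiving laboratory performs equivalently to the originating laboratory is a two one-sided t-tests (TOST) equivalence comparison of assay results. The minimal sketch below is illustrative only; the equivalence margin `delta` and the assay values are hypothetical assumptions, not figures from the review.

```python
# Illustrative TOST equivalence check for an analytical method transfer.
# All numbers (assay results, equivalence margin) are hypothetical.
import numpy as np
from scipy import stats

sending = np.array([99.8, 100.2, 99.5, 100.1, 99.9, 100.4])    # % label claim, originating lab
receiving = np.array([99.6, 100.0, 99.3, 100.3, 99.7, 100.1])  # % label claim, receiving lab
delta = 2.0  # hypothetical equivalence margin (+/- 2% of label claim)

n1, n2 = len(sending), len(receiving)
diff = sending.mean() - receiving.mean()
sp2 = ((n1 - 1) * sending.var(ddof=1) + (n2 - 1) * receiving.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

# Two one-sided tests: conclude equivalence only if BOTH p-values < alpha.
p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
print(f"diff = {diff:.3f}, p_lower = {p_lower:.4f}, p_upper = {p_upper:.4f}")
print("equivalent at alpha=0.05" if max(p_lower, p_upper) < 0.05 else "not shown equivalent")
```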
Abstract:
Background: The Hungarian National Cancer Registry (HNCR) was legally established as a population-based cancer registry in 1999, and its operation started in 2000, supporting the planning and development of the Hungarian oncology network and informing national cancer control policies. Ensuring comparable, accurate, and complete data on malignant and in situ neoplasms is critical in determining the applicability of the database. The aim of this study was to perform a comprehensive evaluation of data quality at the HNCR.

Methods: Based on qualitative and semiquantitative methods from current international guidelines, we assessed the comparability, completeness, validity, and timeliness of the collected data over the diagnostic period 2000-2019, with a focus on the year 2018.

Results: Coding practices and the classification system used at the HNCR are based on the International Classification of Diseases (ICD-10), which differs from the internationally recommended ICD-O. The annual trends in incidence did not indicate major fluctuations that might have resulted from data collection discrepancies, while comparisons of the mortality-to-incidence ratio (M:I) with 1 minus the 5-year observed survival indicated some systematic differences requiring further exploration. The age-standardized (European standard) incidence rate per 100 000 measured by the HNCR in 2018 was very high: 647.9 for men and 501.6 for women, respectively 11.6% and 14.6% higher than the International Agency for Research on Cancer (IARC) estimates. Most of the overall difference between the two data sources was attributable to ill-defined ICD codes: malignant neoplasm of other and ill-defined sites (C76) and malignant neoplasm without specification of site (C80). Otherwise, there were no major discrepancies by localization. The proportion of morphologically verified cancer cases was 57.8% overall, that of death-certificate cases was 2.3%, and that of unknown primary tumors was 1.4%.

Conclusion: Further implementation efforts and interventions are required to ensure that the operations, coding practices, and classification system used at the national registry are in accordance with international standards, and to increase the completeness and validity of the collected cancer data. In particular, the low proportion of morphologically verified cases calls into question the overall accuracy of the stated diagnoses within the database. Nevertheless, our examination implies that the data of the HNCR are reasonably comparable and without doubt fulfill the requirements to support national oncology services and cancer planning. Most importantly, however, a review of the personnel and resource requirements for running the national population-based cancer registry should be an essential part of Hungary's national cancer strategy.
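Two of the quality checks mentioned above lend themselves to a compact worked example: direct age standardization of incidence rates, and the M:I ratio used as a rough completeness indicator against 1 minus observed survival. The sketch below is illustrative only; the age bands, weights, and counts are hypothetical stand-ins, not the European Standard Population weights or HNCR data.

```python
# Illustrative direct age standardization and M:I completeness check.
# Age bands, standard weights, and counts are hypothetical, not HNCR data.
import numpy as np

cases      = np.array([120, 850, 2400])        # incident cases per age band
population = np.array([3.0e6, 4.0e6, 2.0e6])   # person-years per age band
weights    = np.array([0.45, 0.35, 0.20])      # hypothetical standard-population weights (sum to 1)

age_specific_rates = cases / population * 1e5  # per 100 000
asr = float(np.dot(age_specific_rates, weights))
print(f"age-standardized rate: {asr:.1f} per 100 000")

# M:I ratio vs. 1 - observed 5-year survival: for a registry with complete
# case ascertainment, the two quantities should be broadly similar.
deaths, incident = 1800, 3370
mi_ratio = deaths / incident
one_minus_survival = 1 - 0.48  # hypothetical 5-year observed survival of 48%
print(f"M:I = {mi_ratio:.2f}, 1 - S(5y) = {one_minus_survival:.2f}")
```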
Abstract:
The assessment of linguistic minorities often involves using multiple language versions of assessments. In these assessments, comparability of scores across language groups is central to valid comparative interpretations. Various frameworks and guidelines describe factors that need to be considered when developing comparable assessments, but they provide limited guidance on developing multiple language versions of assessments for linguistic minorities within countries. To address this gap, we make various suggestions for the types of factors that should be considered when assessing linguistic minorities. Our recommendations are tailored to the particular constraints potentially faced by various jurisdictions tasked with developing multiple language versions of assessments for linguistic minorities. These challenges include having limited financial and staffing resources to develop comparable assessments and having insufficient sample sizes to perform psychometric analyses (e.g., item response theory) to examine comparability. Although we contextualize our study by focusing on linguistic minorities within Canada due to its bilingual status, our findings may also apply to other bilingual and multilingual countries with similar minority/majority contexts.
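The abstract notes that small samples can rule out IRT-based comparability analyses. One small-sample-friendly alternative for flagging differential item functioning (DIF), offered here purely as an illustration rather than as the authors' recommendation, is the Mantel-Haenszel procedure, which compares an item's correct/incorrect odds across language groups within matched ability strata. All counts below are hypothetical.

```python
# Illustrative Mantel-Haenszel DIF check for one item across two language groups.
# Counts are hypothetical. Each stratum is a band of matched total scores.
# Stratum table layout: [[ref_correct, ref_wrong], [focal_correct, focal_wrong]]
import numpy as np

strata = [
    np.array([[30, 20], [22, 28]]),
    np.array([[45, 15], [38, 22]]),
    np.array([[50,  5], [47,  8]]),
]

num = sum(t[0, 0] * t[1, 1] / t.sum() for t in strata)
den = sum(t[0, 1] * t[1, 0] / t.sum() for t in strata)
or_mh = num / den  # common odds ratio; values near 1.0 suggest no DIF on this item
print(f"Mantel-Haenszel odds ratio: {or_mh:.2f}")
```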
Abstract:
Building energy simulation analysis plays an important supporting role in the conservation of building energy. Since the early 1980s, researchers have focused on the development and validation of building energy modeling programs (BEMPs) and have largely established a set of systematic validation methods for BEMPs, mainly comprising analytical, comparative, and empirical methods. Based on related papers in this field, this study systematically analyzed the application status of validation methods for BEMPs from three aspects: sources of validation cases, comparison parameters, and evaluation indicators. The applicability and characteristics of the three methods in different validation fields and at different development stages of BEMPs were summarized, and guidance was proposed to help researchers choose more suitable validation methods and evaluation indicators. In addition, the current development trends of BEMPs and the challenges they pose for validation methods were investigated, and the progress of existing validation methods under these trends was analyzed. Finally, directions for the further development of validation methods were identified.
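The abstract does not name specific evaluation indicators, but two widely used in empirical validation of building energy models (for example in ASHRAE Guideline 14) are the normalized mean bias error (NMBE) and the coefficient of variation of the root-mean-square error, CV(RMSE). The sketch below, with hypothetical hourly data, shows how they are typically computed; the exact normalization used in any given study may differ.

```python
# Illustrative NMBE and CV(RMSE) for comparing simulated vs. measured energy use.
# Hourly values are hypothetical.
import numpy as np

measured  = np.array([12.1, 13.4, 15.0, 14.2, 13.8, 12.9])  # e.g., kWh per hour
simulated = np.array([11.8, 13.9, 14.6, 14.8, 13.1, 13.4])

n = len(measured)
mean_meas = measured.mean()
residuals = simulated - measured

nmbe = residuals.sum() / (n * mean_meas) * 100              # % bias; sign shows over/under-prediction
cv_rmse = np.sqrt((residuals**2).mean()) / mean_meas * 100  # % scatter around measurements

print(f"NMBE = {nmbe:+.1f} %, CV(RMSE) = {cv_rmse:.1f} %")
```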
Abstract:
Background: Several different measures have been proposed to solve persistent validity problems, such as high task-sampling variability, in the assessment of students' expertise in 'doing science'. Such measures include working with a priori progression models, using standardised item shells and rating manuals, augmenting the number of tasks per student, and comparing different measurement methods.

Purpose: The impact of these measures on instrument validity is examined here under three different aspects: structural validity, generalisability and external validity.

Sample: Performance assessments were administered to 418 students (187 girls, ages 12-16) in grades 7, 8 and 9 in the two lowest school performance tracks in (lower) secondary school in the Swiss canton of Zurich.

Design and methods: Students worked with printed test sheets on which they were asked to report the outcomes of their investigations. In addition to the written protocols, direct observations and interviews were used as measurement methods. Evidence of the instruments' validity was reported using different reliability and generalisability coefficients and by comparing our results to those found in the literature.

Results: An a priori progression model was successfully used to improve the instrument's structural validity. The use of a standardised item shell and rating manual ensured reliable rating of the written protocols (.79 ≤ p₀ ≤ .98; .56 ≤ κ ≤ .97). Augmenting the number of tasks per student did not solve the challenge of reducing task-sampling variability. The observed performance differed from the performance assessed via the written protocols.

Conclusions: Students' performance in doing science can be reliably assessed with instruments that show good generalisability coefficients (ρ² = 0.72 in this case). Even after implementing the different measures, task-sampling variability remains high. More elaborate studies that focus on the substantive aspect of validity must be conducted to understand why students' expertise as shown in written protocols differs so markedly from their observed performance.
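The rating-consistency figures above (p₀ for raw agreement, κ for chance-corrected agreement) can be reproduced from two raters' codes in a few lines. The sketch below uses hypothetical binary ratings; it is not the study's data or analysis code.

```python
# Illustrative percent agreement (p0) and Cohen's kappa for two raters.
# Ratings are hypothetical binary codes (1 = criterion met) for the same protocols.
import numpy as np

rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 0])

p0 = np.mean(rater_a == rater_b)  # observed agreement

# Expected agreement under independence, from each rater's marginal proportions.
pe = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in (0, 1))
kappa = (p0 - pe) / (1 - pe)
print(f"p0 = {p0:.2f}, kappa = {kappa:.2f}")
```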
Abstract:
To evaluate the reliability and validity of six PROMIS measures (anxiety, depression, fatigue, pain interference, physical function, and sleep disturbance) telephone-administered to a diverse, population-based cohort of localized prostate cancer patients.
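The abstract does not specify which reliability statistics were used. A standard first step when evaluating multi-item patient-reported outcome scales is internal consistency via Cronbach's alpha, sketched below with hypothetical item responses; note that PROMIS short forms are typically scored with IRT-based T-scores, so treat this only as an illustration of the general idea.

```python
# Illustrative Cronbach's alpha for a k-item scale; responses are hypothetical.
import numpy as np

# rows = respondents, columns = items (e.g., a hypothetical 4-item fatigue short form)
X = np.array([
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 4, 5, 5],
])

k = X.shape[1]
item_vars = X.var(axis=0, ddof=1)
total_var = X.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```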
Abstract:
School mathematics examination papers are typically dominated by short, structured items that fail to assess sustained reasoning or problem solving. A contributory factor to this situation is the need for student work to be marked reliably by a large number of markers of varied experience and competence. We report a study that tested an alternative approach to assessment, called comparative judgement, which may represent a superior method for assessing open-ended questions that encourage a range of unpredictable responses. An innovative problem solving examination paper was specially designed by examiners, evaluated by mathematics teachers, and administered to 750 secondary school students of varied mathematical achievement. The students' work was then assessed by mathematics education experts using comparative judgement as well as a specially designed, resource-intensive marking procedure. We report two main findings from the research. First, the examination paper writers, when freed from the traditional constraint of producing a mark scheme, designed questions that were less structured and more problem-based than is typical in current school mathematics examination papers. Second, the comparative judgement approach to assessing the student work proved successful by our measures of inter-rater reliability and validity. These findings open new avenues for how school mathematics, and indeed other areas of the curriculum, might be assessed in the future.
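Comparative judgement produces a scaled rank order by fitting a statistical model, most commonly the Bradley-Terry model, to the judges' pairwise decisions. The minimal sketch below, assuming hypothetical win counts and using Hunter's minorization-maximization updates, shows the core idea; production comparative judgement tools typically use more elaborate fitting and reliability estimation.

```python
# Minimal Bradley-Terry fit for comparative judgement, via Hunter's MM updates.
# wins[i, j] = times script i was judged better than script j (hypothetical counts).
import numpy as np

wins = np.array([
    [0, 7, 9, 8],
    [3, 0, 6, 7],
    [1, 4, 0, 5],
    [2, 3, 5, 0],
], dtype=float)

n = wins.shape[0]
strength = np.ones(n)
for _ in range(200):
    for i in range(n):
        total_wins = wins[i].sum()
        denom = sum((wins[i, j] + wins[j, i]) / (strength[i] + strength[j])
                    for j in range(n) if j != i)
        strength[i] = total_wins / denom
    strength /= strength.sum()  # fix the scale; only strength ratios are identified

scores = np.log(strength)  # log-strengths give an interval-like scale
print("rank order (best first):", np.argsort(scores)[::-1], scores.round(2))
```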
Abstract:
We present a study in which mathematicians and undergraduate students were asked to explain in writing what mathematicians mean by proof. The 175 responses were evaluated using comparative judgement: mathematicians compared pairs of responses and their judgements were used to construct a scaled rank order. We provide evidence establishing the reliability, divergent validity and content validity of this approach to investigating individuals' written conceptions of mathematical proof. In doing so, we compare the quality of student and mathematician responses and identify which features the judges collectively valued. Substantively, our findings reveal that despite the variety of views in the literature, mathematicians broadly agree on what people should say when asked what mathematicians mean by proof. Methodologically, we provide evidence that comparative judgement could have an important role to play in investigating conceptions of mathematical ideas, and conjecture that similar methods could be productive in evaluating individuals' more general (mathematical) beliefs.
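Reliability in comparative judgement studies is often estimated by randomly splitting the judges into two groups, constructing a separate scaled rank order from each half, and correlating the two sets of scores. The sketch below illustrates only that final correlation step with hypothetical score vectors; the split-and-refit machinery (e.g., the Bradley-Terry fit sketched earlier) is assumed.

```python
# Illustrative split-half reliability for comparative judgement scores.
# Each vector holds scaled scores for the same 10 responses, estimated from a
# random half of the judges (hypothetical values; in practice each half would
# come from its own model fit).
import numpy as np

half_a = np.array([-1.2, 0.3, 1.5, -0.4, 0.9, -1.8, 2.1, 0.0, -0.7, 1.1])
half_b = np.array([-1.0, 0.1, 1.7, -0.6, 1.2, -1.5, 1.8, 0.2, -0.9, 0.8])

r = np.corrcoef(half_a, half_b)[0, 1]
print(f"split-half correlation: {r:.2f}")
```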
Abstract:
Although the fields of comparative effectiveness research (CER) [1] and implementation science (IS) [2,3] both have historical precursors, they are relatively new and emerging fields of health research inquiry [4]. CER and IS share similar goals, such as studying interventions under more typical or 'real-world' conditions and improving health outcomes by translating research and evidence-based findings into practice. The intersection of IS and CER with areas such as pragmatic trials, quality improvement and evaluation research further strengthens this connection. Furthermore, functional connections created between IS and CER within clinical translational science awards, publications and training courses suggest an increasing acknowledgement of the role IS can play in informing CER [4-6]. Despite this overlap and acknowledged connection, many strategies, methods and findings from IS are not routinely used in the design and conduct of CER studies.