Abstract:
The purpose of this meta-analysis was to determine sample-weighted mean validity effect sizes for occupational performance assessments, and their generalizability from research to clinical settings. The bare-bones Validity Generalization (VG) guidelines developed by Hunter and Schmidt (2004), augmented by Maximum Likelihood (ML) procedures, were used to complete the meta-analysis. The sample consisted of 27 studies in which convergent, divergent, and predictive validity estimates of occupational performance assessments were investigated. The mean coefficients of the assessments validated in the studies constituting the sample for this meta-analysis ranged from medium to large. Further meta-analysis with complete disattenuation of the observed mean validity coefficients is indicated.
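As a worked illustration of the "bare-bones" procedure referenced above, the following Python sketch computes a sample-weighted mean validity coefficient and subtracts the expected sampling-error variance, following the Hunter and Schmidt (2004) formulas; the coefficients and sample sizes are hypothetical stand-ins, not data from the 27 studies.

# Bare-bones validity generalization (Hunter & Schmidt, 2004) sketch,
# using hypothetical study data -- not the 27 studies from this meta-analysis.
import numpy as np

# Each study contributes an observed validity coefficient r and a sample size N.
r = np.array([0.42, 0.35, 0.51, 0.28, 0.46])   # hypothetical coefficients
N = np.array([120, 85, 150, 60, 110])          # hypothetical sample sizes

# Sample-weighted mean validity coefficient.
r_bar = np.sum(N * r) / np.sum(N)

# Sample-weighted observed variance of the coefficients.
var_obs = np.sum(N * (r - r_bar) ** 2) / np.sum(N)

# Expected sampling-error variance; N.mean() is the average study sample size.
var_err = (1 - r_bar ** 2) ** 2 / (N.mean() - 1)

# Residual variance after removing sampling error ("bare bones": no
# corrections for unreliability or range restriction).
var_res = max(var_obs - var_err, 0.0)

print(f"mean r = {r_bar:.3f}, observed var = {var_obs:.4f}, "
      f"residual var = {var_res:.4f}")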
Abstract:
Objective: Performance validity testing is a necessary practice when conducting research with undergraduate students, especially when participants are minimally incentivized to provide adequate effort. However, the failure rate on performance validity measures in undergraduate samples has been debated, with studies of different measures and cutoffs reporting results ranging from 2.3% to 55.6%. Method: The current study examined multiple studies to investigate failures on performance validity measures in undergraduate students, and how these rates are influenced by liberal and conservative cutoffs. Failure rates were calculated using standalone performance validity tests (PVTs) and embedded validity indices (EVIs) from eight studies conducted at two universities with over one thousand participants. Results: Failure rates on standalone PVTs were up to four times greater when using liberal versus conservative cutoffs. EVI rates varied between conservative and liberal cutoffs, with some measures showing almost no difference and others showing failure rates 10 times greater. Conclusions: Findings provide further descriptive data on the base rate of validity test failure in undergraduate student samples and suggest that EVIs might be more sensitive than standalone PVTs to alterations in cutoff scores. Overall, these results highlight the variability in failure rates across the different measures and cutoffs that researchers might employ in any individual study.
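A minimal sketch of how the cutoff choice drives failure rates, using simulated scores; the score distribution and the two cutoffs below are hypothetical, not the measures or cutoffs examined in the eight studies.

# Sketch of how failure rates shift with cutoff choice, using simulated
# scores -- the distribution and cutoffs here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# Simulated PVT accuracy scores for 1000 undergraduate participants.
scores = rng.normal(loc=45, scale=4, size=1000).clip(0, 50)

conservative_cutoff = 37   # fail if score <= 37 (stricter, fewer failures)
liberal_cutoff = 40        # fail if score <= 40 (looser, more failures)

def failure_rate(scores, cutoff):
    """Proportion of participants scoring at or below the cutoff."""
    return np.mean(scores <= cutoff)

print(f"conservative: {failure_rate(scores, conservative_cutoff):.1%}")
print(f"liberal:      {failure_rate(scores, liberal_cutoff):.1%}")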
Abstract:
BACKGROUND: Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. These injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential both for treating these conditions and for the fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of the reported history and symptoms, as well as clinical presentations.
Abstract:
Objective: To cross-validate the Dot Counting Test in a large neuropsychological sample. Method: Dot Counting Test scores were compared in credible (n = 142) and non-credible (n = 335) neuropsychology referrals. Results: Non-credible patients scored significantly higher than credible patients on all Dot Counting Test scores. While the original E-score cut-off of >= 17 achieved excellent specificity (96.5%), it was associated with mediocre sensitivity (52.8%). However, the cut-off could be substantially lowered to >= 13.80 while still maintaining adequate specificity (>= 90%) and raising sensitivity to 70.0%. Examination of non-credible subgroups revealed that Dot Counting Test sensitivity in feigned mild traumatic brain injury (mTBI) was 55.8%, whereas sensitivity was 90.6% in patients with non-credible cognitive dysfunction in the context of claimed psychosis, and 81.0% in patients with non-credible cognitive performance in depression or severe TBI. Thus, the Dot Counting Test may have a particular role in the detection of non-credible cognitive symptoms in claimed psychiatric disorders. As an alternative to the E-score, failure on one or more cut-offs applied to individual Dot Counting Test scores (>= 6.0" for mean grouped dot counting time, >= 10.0" for mean ungrouped dot counting time, and >= 4 errors) occurred in 11.3% of the credible sample, while nearly two-thirds (63.6%) of the non-credible sample failed one or more of these cut-offs. Conclusions: An E-score cut-off of >= 13.80, or failure on one or more individual score cut-offs, resulted in few false positive identifications in credible patients and achieved high sensitivity (64.0-70.0%), and therefore appears appropriate for use in identifying neurocognitive performance invalidity.
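The decision rules above can be expressed directly in code. The sketch below assumes the conventional E-score formula (mean grouped time plus mean ungrouped time plus number of errors) together with the individual-score cut-offs quoted in the abstract; the patient values are hypothetical.

# Sketch of the Dot Counting Test decision rules described above, assuming
# the conventional E-score formula; the patient values below are hypothetical.
def e_score(mean_grouped_s, mean_ungrouped_s, errors):
    """Conventional DCT effort score: grouped time + ungrouped time + errors."""
    return mean_grouped_s + mean_ungrouped_s + errors

def fails_individual_cutoffs(mean_grouped_s, mean_ungrouped_s, errors):
    """Alternative rule: failure on one or more individual-score cut-offs."""
    return (mean_grouped_s >= 6.0) or (mean_ungrouped_s >= 10.0) or (errors >= 4)

# Hypothetical patient: 5.2 s grouped, 9.1 s ungrouped, 2 errors.
grouped, ungrouped, errs = 5.2, 9.1, 2

score = e_score(grouped, ungrouped, errs)
print(f"E-score = {score:.1f}; fails revised cut-off (>= 13.80): {score >= 13.80}")
print(f"fails >= 1 individual cut-off: {fails_individual_cutoffs(grouped, ungrouped, errs)}")

Note that for this hypothetical patient the two rules disagree (E-score fails, individual cut-offs pass), which is why the abstract reports separate specificity and sensitivity figures for each rule.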
Abstract:
We validate the performance of three P-velocity models, built with different approaches, on regional travel-time prediction for the Tethyan margin, in order to test how well they predict independently observed travel times. The three models are constructed with travel-time tomography, a compilation of a priori geologic and geophysical information, and empirical scaling with adjustment from P-arrival inversion, respectively. We compared the synthetics with reference travel times (ground-truth data) obtained from events or explosions located to within 25 km at 95% confidence. We found that the variance of travel times is not an adequate tool for assessing the performance of velocity models, because predicted travel times with a small variance can have a mean value very different from that of the observed ones. We therefore propose an alternative: estimating the variance about the mean of the observed travel times (zero mean). This technique is more effective at assessing the mismatch between synthetic and observed travel times. Among the three models we investigated, the EAPV11 model, built mainly with empirical scaling, shows the best performance in travel-time prediction. This result is intriguing because this model inherits its crustal velocity structure, Moho depth, Pn velocities, and the upper-mantle structure that affects travel times at regional distances mostly from a scaled 3D S-velocity model for the same region. This may imply that, although the scaling can introduce errors, this approach works better than conventional P-arrival inversion. The difference likely exists because surface waves have better lateral resolution for the crust and uppermost mantle than travel times do.
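The proposed zero-mean statistic is easy to contrast with the conventional variance in code: measuring residuals about their own mean hides a constant bias, while measuring them about zero exposes it. The residuals below are synthetic, not real travel-time data.

# Sketch contrasting the two assessment statistics described above, with
# synthetic residuals standing in for real travel-time data.
import numpy as np

# Hypothetical residuals (predicted minus observed travel time, in seconds)
# for a model that is precise but systematically biased by ~1.5 s.
residuals = np.array([1.4, 1.6, 1.5, 1.3, 1.7, 1.5, 1.6, 1.4])

# Conventional variance: residuals measured about their own mean, so a
# constant bias disappears and the model looks deceptively good.
var_about_own_mean = np.var(residuals)

# Proposed alternative: variance measured about zero (i.e., about the
# observed travel times), so systematic offsets are penalized.
var_about_zero = np.mean(residuals ** 2)

print(f"variance about own mean: {var_about_own_mean:.4f} s^2")
print(f"variance about zero:     {var_about_zero:.4f} s^2")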
Abstract:
Combining these two sets of words would seem to lead one to conclude that a test can be job related based on content validity evidence and predictive of job performance but not valid. Based on what I understand to be the modern understanding of test validation and validity, this conclusion does not follow.
Abstract:
Building energy simulation analysis plays an important supporting role in the conservation of building energy. Since the early 1980s, researchers have focused on the development and validation of building energy modeling programs (BEMPs) and have established a set of systematic validation methods for BEMPs, mainly comprising analytical, comparative, and empirical methods. Based on related papers in this field, this study systematically analyzed the application status of validation methods for BEMPs from three aspects: sources of validation cases, comparison parameters, and evaluation indicators. The applicability and characteristics of the three methods in different validation fields and at different development stages of BEMPs were summarized, and guidance was proposed to help researchers choose more suitable validation methods and evaluation indicators. In addition, the current development trend of BEMPs and the challenges it poses for validation methods were investigated, and the progress of current validation methods under this trend was analyzed. Finally, the future direction for developing validation methods was clarified.
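Because the abstract names "evaluation indicators" only generically, the sketch below illustrates two indicators commonly used in empirical validation of building energy models, NMBE and CV(RMSE) (e.g., as defined in ASHRAE Guideline 14); treating them as the indicators meant here is an assumption, and the monthly data are hypothetical.

# Two common evaluation indicators for comparing simulated against
# measured building energy data; this pairing is an assumption, as the
# abstract does not name specific indicators.
import numpy as np

def nmbe(measured, simulated):
    """Normalized mean bias error, in percent (simplified: n in denominator)."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return 100 * np.sum(measured - simulated) / (len(measured) * measured.mean())

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, in percent."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100 * rmse / measured.mean()

# Hypothetical monthly energy use (kWh): measured vs. simulated.
measured  = [820, 760, 690, 540, 430, 510]
simulated = [800, 790, 650, 560, 450, 495]

print(f"NMBE = {nmbe(measured, simulated):.2f}%")
print(f"CV(RMSE) = {cv_rmse(measured, simulated):.2f}%")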
Abstract:
The validation procedure is considered, together with an example protocol. The protocol gives a brief description of the planned and completed validation studies and states the planned and actual dates of the validation.
Abstract:
Growing recognition of, and concern about, non-credible performance in pediatric populations has led clinicians to investigate the utility of performance and symptom validity tests (PVTs/SVTs) among children and adolescents. Yet current research indicates that only a minority of clinicians routinely administer a free-standing PVT in pediatric neuropsychological evaluations. The current article examines the rationale for using PVTs/SVTs and the impact that failure of such measures has on other neurocognitive tests. A review of common adult PVTs and their appropriateness for specific pediatric clinical populations is presented, along with empirical evidence for evaluating embedded validity indicators. The limited literature on SVTs with youth is also reviewed and provides additional insight into symptom exaggeration. There are various reasons why children might provide non-credible performance, many of which differ from those of adults. A review is provided of how the clinician should handle this behavior in pediatric evaluations and of which patient populations may present with a higher base rate of failure. Finally, various approaches are offered for explaining these results to children and their caregivers.