Browsing by Subject "Research Design"
Now showing 1 - 4 of 4
Item: A bias correction method in meta-analysis of randomized clinical trials with no adjustments for zero-inflated outcomes (John Wiley & Sons, Inc., 2021-09-03)
Zhou, Zhengyang; Xie, Minge; Huh, David; Mun, Eun-Young

Many clinical endpoint measures, such as the number of standard drinks consumed per week or the number of days that patients stayed in the hospital, are count data with excessive zeros. However, the zero-inflated nature of such outcomes is sometimes ignored in analyses of clinical trials. This leads to biased estimates of study-level intervention effects and, consequently, a biased estimate of the overall intervention effect in a meta-analysis. This study proposes a novel statistical approach, the Zero-inflation Bias Correction (ZIBC) method, which accounts for the bias introduced when the Poisson regression model is used despite a high rate of inflated zeros in the outcome distribution of a randomized clinical trial. The correction requires only summary information from individual studies to adjust intervention effect estimates as if they had been estimated with the zero-inflated Poisson regression model, making it attractive for meta-analyses in which individual participant-level data are unavailable for some studies.
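As an illustrative aside (not the ZIBC method itself, which works from study-level summary statistics), a short simulation can show how a plain Poisson fit is biased when zero-inflation differs between trial arms; all rates and zero-inflation probabilities below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative (not from the paper): Poisson rates among "at-risk"
# participants and structural-zero probabilities in each arm.
lam_c, lam_t = 5.0, 3.0   # control / treatment rates
pi_c, pi_t = 0.2, 0.4     # probability of a structural zero per arm

def zip_sample(lam, pi, size):
    """Draw zero-inflated Poisson counts: structural zeros w.p. pi."""
    at_risk = rng.random(size) >= pi
    return np.where(at_risk, rng.poisson(lam, size), 0)

y_c = zip_sample(lam_c, pi_c, n)
y_t = zip_sample(lam_t, pi_t, n)

# A two-group Poisson regression of count on treatment reduces to the log
# of the ratio of sample means, which targets (1-pi_t)lam_t / ((1-pi_c)lam_c)
# rather than lam_t / lam_c.
naive = np.log(y_t.mean() / y_c.mean())

# With the zero-inflation probabilities known (here, from the simulation
# truth), dividing each arm's mean by (1 - pi) recovers the rate ratio.
corrected = np.log((y_t.mean() / (1 - pi_t)) / (y_c.mean() / (1 - pi_c)))

true_effect = np.log(lam_t / lam_c)
print(f"true log rate ratio:      {true_effect:.3f}")
print(f"naive Poisson estimate:   {naive:.3f}")   # biased when pi differs by arm
print(f"zero-inflation corrected: {corrected:.3f}")
```

In this sketch the correction uses the true zero-inflation probabilities; the ZIBC method's contribution is achieving an analogous correction from summary statistics alone.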
Simulation studies and real data analyses showed that the ZIBC method performed well in correcting zero-inflation bias in most situations.

Item: Centralizing prescreening data collection to inform data-driven approaches to clinical trial recruitment (BioMed Central Ltd., 2023-05-03)
Kirn, Dylan R.; Grill, Joshua D.; Aisen, Paul; Ernstrom, Karin; Gale, Seth; Heidebrink, Judith; Jicha, Gregory; Jimenez-Maggiora, Gustavo; Johnson, Leigh A.; Peskind, Elaine; McCann, Kelly; Shaffer, Elizabeth; Sultzer, David; Wang, Shunran; Sperling, Reisa; Raman, Rema

BACKGROUND: Recruiting to multi-site trials is challenging, particularly when striving to ensure that the randomized sample is demographically representative of the larger disease-suffering population. While previous studies have reported disparities by race and ethnicity in enrollment and randomization, they have not typically investigated whether disparities exist in the recruitment process prior to consent. To identify participants most likely to be eligible for a trial, study sites frequently include a prescreening process, generally conducted by telephone, to conserve resources. Collecting and analyzing such prescreening data across sites could improve understanding of recruitment intervention effectiveness, including whether traditionally underrepresented participants are lost prior to screening. METHODS: We developed an infrastructure within the National Institute on Aging (NIA) Alzheimer's Clinical Trials Consortium (ACTC) to centrally collect a subset of prescreening variables. Prior to study-wide implementation in the AHEAD 3-45 study (NCT04468659), an ongoing ACTC trial recruiting older cognitively unimpaired participants, we completed a vanguard phase with seven study sites.
Variables collected included age, self-reported sex, self-reported race, self-reported ethnicity, self-reported education, self-reported occupation, zip code, recruitment source, prescreening eligibility status, reason for prescreen ineligibility, and the AHEAD 3-45 participant ID for those who continued to an in-person screening visit after study enrollment. RESULTS: Each of the sites was able to submit prescreening data. Vanguard sites provided prescreening data on a total of 1029 participants. The total number of prescreened participants varied widely among sites (range 3-611), with the differences driven mainly by the time to receive site approval for the main study. Key learnings informed design, informatic, and procedural changes prior to study-wide launch. CONCLUSION: Centralized capture of prescreening data in multi-site clinical trials is feasible. Identifying and quantifying the impact of central and site recruitment activities before participants sign consent has the potential to identify and address selection bias, instruct resource use, contribute to effective trial design, and accelerate trial enrollment timelines.

Item: Continuous quality improvement at the clinical research site: implementing methods for coordinators in the Heart and Lung Transplant and Pulmonary department at Baylor Scott and White Research Institute (2020-05)
Norgan Radler, Charlene R.; Mathew, Stephen O.; Chaudhary, Pankaj; Martinez, Horacio; Felius, Joost
Master of Science (Clinical Research Management), April 2020

Introduction: This research project is a quality improvement (QI) study assessing resource utilization for six ongoing clinical trials and evaluating the impact of QI methods on the completion of critical trial activities in the Heart and Lung Transplant and Pulmonary (HLTP) department at Baylor Scott and White Research Institute (BSWRI). Methods: The project design is a case series in which observations were made on research staff before and after an intervention, with no control group. Non-probability sampling with purposeful, maximum variation was used because of the study's qualitative research design. Metrics on the completion of the key trial activities of subject screening, subject enrollment, and data entry were collected before and after the intervention using a spreadsheet tool. The collected metrics were reviewed to identify areas for improvement, and QI interventions were designed and implemented to reallocate resources as appropriate. The data were maintained in a run chart to monitor changes during the pre-intervention and post-intervention periods, and statistical analysis was performed to evaluate the effect of the intervention. Results: The Wilcoxon signed-rank test was used to analyze differences in the medians of activity metrics across all studies before and after the intervention. All variables improved in the direction of the applied interventions except time spent screening subjects and data entered in the electronic data capture (EDC) system.
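A paired pre/post comparison like the one described above can be run with the Wilcoxon signed-rank test; this minimal sketch uses invented numbers, not the study's data:

```python
from scipy import stats

# Hypothetical weekly metric (e.g., CRFs entered) for six studies, measured
# before and after a QI intervention; values are invented for illustration.
pre = [4, 6, 3, 5, 2, 7]
post = [7, 8, 5, 6, 4, 9]

# For paired, possibly non-normal data with a small sample, the Wilcoxon
# signed-rank test compares the median of the paired differences to zero.
res = stats.wilcoxon(pre, post, alternative="two-sided")
print(f"statistic={res.statistic}, p={res.pvalue:.4f}")
```

The test is a common choice in QI work because run-chart metrics from a handful of studies rarely justify the normality assumption behind a paired t-test.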
Median differences were statistically non-significant, except for the combined variable of the number of open queries and case report forms (CRFs) not entered weekly, which showed a statistically significant decrease following the intervention. Median time spent screening subjects showed a non-significant decrease following the intervention, while the median number of subjects screened showed a non-significant increase. Median time spent enrolling subjects and the median number of subjects enrolled increased post-intervention, but statistical testing was not performed because the sample size fell below the minimum required for the test. Median time spent entering data in the EDC showed a non-significant increase following the intervention, while the median number of CRFs entered in the EDC showed a non-significant decrease. Conclusion: Implementing the quality improvement process for clinical research staff gave our site a tool to continuously assess and improve trial outcomes. Five of the seven variables receiving QI interventions improved in the direction of the intervention, with one demonstrating a statistically significant difference. The small sample size may have reduced the study's power to detect statistical significance, and future studies should apply this QI methodology to a larger sample. In conclusion, this study established proof of concept for future, larger-scale quality improvement projects at our research site.

Item: Quality Assurance Training: Will a New Training Intervention Improve Data Collection of the Texas Emergency Medicine Research Associate Program (TEMRAP)? (2018-12)
Saldana, Miguel Antonio; Hodge, Lisa; Pierce, Ava; Krishnamoorthy, Raghu

Introduction: Data collection is vital to the success of a clinical research project.
The purpose of this practicum was to address inadequate data collection by the Texas Emergency Medicine Research Associate Program (TEMRAP) research associates (RAs). The primary goal was to introduce a more efficient training method to reduce the RAs' documentation error rate. The secondary aim was to determine whether RAs' knowledge of clinical research studies and/or their self-confidence when enrolling a patient affected the quality of data collection, and whether these variables could be improved by a new training method. Methods: A randomized clinical trial was used to evaluate the efficacy of simulated clinical research enrollment training as a teaching and learning method to reduce the error rate in research packets submitted by RAs. Returning RAs were randomized into an intervention group with new training (simulations) and a control group with the current training (didactic presentations). A self-confidence survey and a knowledge questionnaire were completed by RAs pre-training, post-training, and at one-month follow-up. Quality of data collection was measured by comparing the error rates in completed clinical research enrollment packets submitted by the RAs in the intervention group versus the control group. Results: There was no statistically significant difference in knowledge, confidence, or error rates between the patient enrollment simulation (intervention) group and the didactic presentation (control) group after their respective training (p > .05). However, there was a statistically significant increase in knowledge and confidence post-training in the patient simulation group. A significant association was present between confidence and error rate, but not between knowledge and error rate, for research associates in either training group.
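A between-group comparison of error rates like the one above can be made with Fisher's exact test, which suits small per-group packet counts; the counts below are invented for illustration and are not the study's data:

```python
from scipy import stats

# Hypothetical 2x2 table: packets with at least one documentation error
# vs. error-free packets, by training group (invented counts).
#               errors  no errors
simulation = [9, 41]   # intervention: simulated enrollment training
didactic = [14, 36]    # control: didactic presentation training

# Fisher's exact test avoids the large-sample assumptions of a
# chi-square test when cell counts are modest.
odds_ratio, p = stats.fisher_exact([simulation, didactic])
print(f"odds ratio={odds_ratio:.2f}, p={p:.3f}")
```

With counts of this size the test typically fails to reach significance, mirroring the abstract's finding of no significant difference in error rates between groups.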
Conclusion: Clinical simulation training was not a significantly more effective training method than the current TEMRAP didactic presentation training. Even though knowledge and confidence increased post-training, there was no significant difference between the two types of training. Future experiments should explore combining the two types of training and examining other variables that may affect data quality, such as research associates' motivation. A larger sample size and enrollment of participants with no prior research experience should also be explored to detect significant effects.