@Article{info:doi/10.2196/29271,
author="Trojan, Andreas and Leuthold, Nicolas and Thomssen, Christoph and Rody, Achim and Winder, Thomas and Jakob, Andreas and Egger, Claudine and Held, Ulrike and Jackisch, Christian",
title="The Effect of Collaborative Reviews of Electronic Patient-Reported Outcomes on the Congruence of Patient- and Clinician-Reported Toxicity in Cancer Patients Receiving Systemic Therapy: Prospective, Multicenter, Observational Clinical Trial",
journal="J Med Internet Res",
year="2021",
month="8",
day="5",
volume="23",
number="8",
pages="e29271",
keywords="cancer; consilium; app; eHealth; ePRO; CTCAE; congruence; patient-reported",
abstract="Background: Electronic patient-reported outcomes (ePRO) are a relatively new form of data with the potential to improve clinical practice for patients with cancer. In this prospective, multicenter, observational clinical trial, efforts were made to demonstrate the reliability of patient-reported symptoms. Objective: The primary objective of this study was to assess the level of agreement ($\kappa$) between symptom ratings by physicians and patients via a shared review process in order to determine the future reliability and utility of self-reported electronic symptom monitoring. Methods: Patients receiving systemic therapy in a (neo-)adjuvant or noncurative intention setting captured ePRO for 52 symptoms over an observational period of 90 days. At 3-week intervals, randomly selected symptoms were reviewed between the patient and physician for congruency on the severity of the grading of adverse events according to the Common Terminology Criteria for Adverse Events (CTCAE). The patient-physician agreement for the symptom review was assessed via Cohen kappa ($\kappa$), through which the interrater reliability was calculated. Chi-square tests were used to determine whether the patient-reported outcome differed among symptoms, types of cancer, demographics, and physicians' experience. Results: Among the 181 patients (158 women and 23 men; median age 54.4 years), there was fair scoring agreement ($\kappa$=0.24; 95{\%} CI 0.16-0.33) for symptoms entered 2 to 4 weeks before the intended review (first rating) and moderate agreement ($\kappa$=0.41; 95{\%} CI 0.34-0.48) for symptoms entered within 1 week of the intended review (second rating). However, the level of agreement increased from moderate (first rating, $\kappa$=0.43) to substantial (second rating, $\kappa$=0.68) for the common symptoms of pain, fever, diarrhea, obstipation, nausea, vomiting, and stomatitis. Similar congruency levels were found for the most frequently entered symptoms (first rating: $\kappa$=0.42; second rating: $\kappa$=0.65). The symptom with the lowest agreement was hair loss ($\kappa$=-0.05). With regard to the latency of symptom entry into the review, hardly any difference was demonstrated between symptoms entered 1 to 3 days and 4 to 7 days before the intended review ($\kappa$=0.40 vs $\kappa$=0.39, respectively). In contrast, for symptoms entered 15 to 21 days before the intended review, no congruency was demonstrated ($\kappa$=-0.15). Congruency levels seemed to be unrelated to the type of cancer, demographics, and physicians' review experience. Conclusions: The shared monitoring and review of symptoms between patients and clinicians has the potential to improve the understanding of patient self-reporting. Our data indicate that the integration of ePRO into oncological clinical research and continuous clinical practice provides reliable information for self-empowerment and timely intervention for symptoms. Trial Registration: ClinicalTrials.gov NCT03578731; https://clinicaltrials.gov/ct2/show/NCT03578731",
issn="1438-8871",
doi="10.2196/29271",
url="https://doi.org/10.2196/29271"
}