@Article{info:doi/10.2196/26914,
  author="Sung, MinDong and Cha, Dongchul and Park, Yu Rang",
  title="Local Differential Privacy in the Medical Domain to Protect Sensitive Information: Algorithm Development and Real-World Validation",
  journal="JMIR Med Inform",
  year="2021",
  month="11",
  day="8",
  volume="9",
  number="11",
  pages="e26914",
  keywords="privacy-preserving; differential privacy; medical informatics; medical data; privacy; electronic health records; algorithm; development; validation; big data; feasibility; machine learning; synthetic data",
  abstract="Background: Privacy is of increasing interest in the present big data era, particularly the privacy of medical data. Specifically, differential privacy has emerged as the standard method for preservation of privacy during data analysis and publishing. Objective: Using machine learning techniques, we applied differential privacy to medical data with diverse parameters and checked the feasibility of our algorithms with synthetic data as well as the balance between data privacy and utility. Methods: All data were normalized to a range between -1 and 1, and the bounded Laplacian method was applied to prevent the generation of out-of-bound values after applying the differential privacy algorithm. To preserve the cardinality of the categorical variables, we performed postprocessing via discretization. The algorithm was evaluated using both synthetic and real-world data (from the eICU Collaborative Research Database). We evaluated the difference between the original data and the perturbed data using misclassification rates for categorical data and the mean squared error for continuous data. Further, we compared the performance of classification models that predict in-hospital mortality using real-world data. Results: The misclassification rate of categorical variables ranged between 0.49 and 0.85 when the value of $\epsilon$ was 0.1, and it converged to 0 as $\epsilon$ increased. When $\epsilon$ was between $10^2$ and $10^3$, the misclassification rate rapidly dropped to 0. Similarly, the mean squared error of the continuous variables decreased as $\epsilon$ increased. The performance of the model developed from perturbed data converged to that of the model developed from the original data as $\epsilon$ increased. In particular, the accuracy of a random forest model developed from the original data was 0.801, and this value ranged from 0.757 to 0.81 when $\epsilon$ was $10^{-1}$ and $10^4$, respectively. Conclusions: We applied local differential privacy to medical domain data, which are diverse and high dimensional. Higher noise may offer enhanced privacy, but it simultaneously hinders utility. An appropriate degree of noise for data perturbation should be chosen to balance privacy and utility depending on the specific situation.",
  issn="2291-9694",
  doi="10.2196/26914",
  url="https://medinform.jmir.org/2021/11/e26914",
  url="https://doi.org/10.2196/26914",
  url="http://www.ncbi.nlm.nih.gov/pubmed/34747711"
}
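The Methods portion of the abstract above describes the mechanism at a level that a short sketch can make concrete: values are normalized to [-1, 1], perturbed with a bounded Laplacian, and categorical codes are discretized back onto valid levels. The Python below is a minimal sketch of that general technique, not the authors' implementation; the rejection-sampling variant of the bounded Laplacian, the sensitivity of 2 for the [-1, 1] range, the function names, and the nearest-value discretization rule are all assumptions inferred from the abstract alone.

  import numpy as np

  def bounded_laplace(x, epsilon, lo=-1.0, hi=1.0, sensitivity=2.0):
      # Draw Laplace noise with scale sensitivity/epsilon and resample until
      # the perturbed value lands inside [lo, hi]. Rejection sampling is one
      # common way to realize a bounded Laplacian (assumption: the paper's
      # exact construction may differ).
      scale = sensitivity / epsilon
      while True:
          y = x + np.random.laplace(0.0, scale)
          if lo <= y <= hi:
              return y

  def discretize(y, categories):
      # Postprocessing for categorical variables: snap the perturbed code
      # back to the nearest valid category so cardinality is preserved.
      categories = np.asarray(categories, dtype=float)
      return float(categories[np.argmin(np.abs(categories - y))])

  # Hypothetical usage: a continuous value normalized to [-1, 1] and a
  # three-level categorical variable encoded as {-1.0, 0.0, 1.0}.
  x_private = bounded_laplace(0.3, epsilon=1.0)
  c_private = discretize(bounded_laplace(0.0, epsilon=1.0), [-1.0, 0.0, 1.0])

Rejection sampling is only one way to keep outputs in range; truncating or renormalizing the Laplace density are alternatives, and each choice affects the effective privacy budget differently.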