Privacy-Preserving Paper Published in the INS Journal
Source: Wensheng Gan / Jinan University
2023-02-25

Privacy-Preserving Pattern Mining Paper Published Online in the International Journal INS

Our group's paper on privacy-preserving frequent pattern mining in a federated framework, "Privacy-Preserving Federated Mining of Frequent Itemsets", has been published online in Information Sciences (SCI, IF: 8.233, JCR Q1, CAS Zone 1, CCF B), a leading journal in artificial intelligence and related fields. The authors are Yao Chen (graduate student, class of 2021), Prof. Wensheng Gan (corresponding author), Prof. Yongdong Wu, and Prof. Philip S. Yu of the University of Illinois at Chicago. Jinan University is the first affiliation of the paper. The research was supported by the National Natural Science Foundation of China (Young Scientists Fund and General Program), the Guangdong Basic and Applied Basic Research Foundation, and the Pazhou Lab Young Scholar Program. Information Sciences is one of the high-impact international journals in the artificial intelligence area of computer science (impact factor 8.233, CAS Zone 1); it publishes the latest research advances and techniques in artificial intelligence, data science, machine learning, privacy and security, and related areas.

 

Paper title: Privacy-Preserving Federated Mining of Frequent Itemsets

Article link:

Authors: Yao Chen (graduate student), Wensheng Gan*, Yongdong Wu, and Philip S. Yu

Abstract: With growing concerns about data privacy and increasingly stringent data security regulations, it is not feasible to directly mine or share a dataset that contains private data, which makes collecting and analyzing data from multiple parties difficult. Federated learning can analyze multiple datasets while preventing the original data from being transmitted. However, existing federated frameworks for mining frequent patterns are based on the Apriori property, which suffers from low efficiency and requires multiple scans of the dataset. Therefore, to improve mining efficiency, this paper proposes a federated learning framework named FedFIM. FedFIM collects the noisy responses sent by participants, which the server uses to reconstruct a noisy dataset. The noisy dataset is then processed by a non-Apriori algorithm to mine frequent patterns. In addition, FedFIM incorporates a differential privacy mechanism into federated learning, which meets the need for federated modeling while protecting data privacy. Experiments show that FedFIM has a shorter running time and better applicability compared with the state-of-the-art benchmark.
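
The abstract describes the FedFIM workflow only at a high level, so the Python sketch below is just an illustration of that workflow under stated assumptions: each participant perturbs its transaction (encoded as an item-presence bit vector) with randomized response before sending it, and the server aggregates the noisy responses into a reconstructed noisy dataset and mines it in a single pass, without Apriori-style candidate generation over repeated scans. The randomized-response mechanism, the item universe ITEMS, the privacy budget EPSILON, and the subset-counting miner are hypothetical stand-ins, not the actual FedFIM algorithms from the paper.

```python
import math
import random
from collections import Counter
from itertools import combinations

# Hypothetical global item universe and privacy budget; both are assumptions
# made only for this illustration.
ITEMS = ["a", "b", "c", "d"]
EPSILON = 2.0
# Probability of reporting each bit truthfully under randomized response.
P_KEEP = math.exp(EPSILON) / (math.exp(EPSILON) + 1.0)


def client_response(transaction):
    """Participant side: encode the local transaction as a bit vector over
    ITEMS and flip each bit with probability 1 - P_KEEP, so only a noisy
    response ever leaves the participant."""
    bits = [int(item in transaction) for item in ITEMS]
    return [b if random.random() < P_KEEP else 1 - b for b in bits]


def server_mine(noisy_rows, min_support):
    """Server side: treat the collected noisy responses as a reconstructed
    noisy dataset and count itemset supports in a single pass (a simple
    stand-in for a non-Apriori miner such as FP-Growth)."""
    counts = Counter()
    for row in noisy_rows:
        present = [item for item, bit in zip(ITEMS, row) if bit]
        for k in range(1, len(present) + 1):
            for itemset in combinations(present, k):
                counts[itemset] += 1
    n = len(noisy_rows)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}


if __name__ == "__main__":
    # Four hypothetical participants, each holding one private transaction.
    transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "d"}, {"b", "c"}]
    noisy = [client_response(t) for t in transactions]  # only this is shared
    print(server_mine(noisy, min_support=0.5))
```

Note that a real deployment would also debias the aggregated counts to correct for the injected noise; the sketch omits that step for brevity.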
