My research interests include data privacy and security, and adversarial machine learning. As a practitioner in this field, I am interested in finding and solving real problems in a pragmatic manner. The following are some of the research areas I am currently working on.
Privacy-Preserving Data Collection with Local Differential Privacy (LDP)
With the prevalence of big data analytics, service providers have become increasingly keen to collect and analyze usage data to improve their services. However, collecting user data comes at the price of privacy risks, not only for users but also for service providers, who are vulnerable to internal and external data breaches. Local Differential Privacy (LDP) answers the need for privacy-preserving data collection by perturbing data on the user side before it is collected, so the data collector does not need to be trusted. Thanks to its strong privacy guarantee and decentralized nature, LDP has been adopted by IT giants such as Apple, Google, and Microsoft for data collection.
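The perturb-before-collect idea can be illustrated with Warner's randomized response, the simplest ε-LDP mechanism for a single private bit: each user flips the bit with a probability calibrated to the privacy budget ε, and the collector inverts the perturbation in aggregate. The sketch below is only an illustration of the general principle (function names are mine, not from the works listed here):

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report a private bit under epsilon-LDP (Warner's randomized response).

    The true bit is reported with probability e^eps / (e^eps + 1);
    otherwise its flip is reported.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports: list, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1-bits from noisy reports.

    Inverts E[observed] = p*f + (1 - p)*(1 - f) for the true frequency f.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

Mechanisms for richer data types build on this same perturb-then-calibrate principle, with the main difficulty being how to keep the estimation error manageable as the data domain grows.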
In the literature, some LDP-based techniques have been developed for simple data types, such as categorical, numerical, and set-valued data. However, these are far from adequate for the complicated data types and diverse data mining tasks found in many real-world applications. Over the years I have been developing novel techniques for locally differentially private data collection and analytics over richer data types, including key-value, graph, and time-series data. The key-value pair is an extremely popular NoSQL data model and a generalized form of set-valued and numerical data, pervasive in big data analytics. Graph data analytics has received great attention in recent years and has nurtured numerous applications in the web, social networks, transportation, and knowledge bases. Privacy-preserving time-series data analytics is likewise a major challenge.
- Q. Ye, H. Hu, M. H. Au, X. Meng, X. Xiao. Towards LF-GDPR: Graph Metric Estimation with Local Differential Privacy. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2020, accepted, to appear.
- Q. Ye, H. Hu, N. Li, X. Meng, H. Zheng, H. Yan. Beyond Value Perturbation: Differential Privacy in the Temporal Setting. IEEE International Conference on Computer Communications (INFOCOM’21), May 2021, accepted, to appear.
- Q. Ye, H. Hu, M. H. Au, X. Meng, X. Xiao. Towards Locally Differentially Private Generic Graph Metric Estimation. Proc. of the 36th IEEE International Conference on Data Engineering (ICDE’20), Dallas, USA, Apr. 2020, pp 1922-1925.
- Q. Ye, H. Hu. Local Differential Privacy: Tools, Challenges and Opportunities. Proc. of the 20th International Conference on Web Information Systems Engineering (WISE’19), Hong Kong, China, Jan. 2020, pp 13-23.
- Q. Ye, H. Hu, X. Meng, and H. Zheng. PrivKV: Key-Value Data Collection with Local Differential Privacy. Proc. of 40th IEEE Symposium on Security and Privacy (SP’19), San Francisco, USA, May 2019, pp 317-331.
- M. Zhu, Q. Ye, X. Yang, X. Meng, and H. Hu. AppPrivacy: Analyzing Data Collection and Privacy Leakage from Mobile App. (poster) Proc. of 40th IEEE Symposium on Security and Privacy (SP’19), San Francisco, USA, May 2019.
- N. Li, Q. Ye. Mobile Data Collection and Analysis with Local Differential Privacy. Proc. of 20th IEEE International Conference on Mobile Data Management (MDM), Hong Kong, China, Jun. 2019, pp 4-7.
- Q. Ye, X. Meng, M. Zhu, Z. Huo. Survey on Local Differential Privacy. Journal of Software, 2018, 29(7):1981-2005.
Research Grants and Patents
- Privacy Protection in Open and Governance of Big Data (Co-I: National Natural Science Foundation of China, 91646203, 2017-2020, CNY 2,400,000)
- Theory and Method of Privacy Protection and Data Sharing for Mobile Users (Co-I: National Natural Science Foundation of China, 61941121, 2020-2021, CNY 830,000)
- Q. Ye, H. Hu. “Method and Apparatus for Collecting Key-Value Pair Data” (键值对数据的收集方法和装置), Chinese invention patent, CN110968612A, Apr. 2020.
Adversarial Machine Learning
With the prevalence of big data and AI, machine learning models are trained and deployed to assist people in daily life. In hostile environments, however, the training and deployment of these models can be undermined and their integrity severely jeopardized. Adversarial machine learning studies such security issues, aiming to preserve the confidentiality, integrity, availability, and accountability of machine learning techniques under malicious settings.
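A classic example of the threats this area studies is the evasion attack: the Fast Gradient Sign Method (FGSM) nudges an input along the sign of the loss gradient until the model's prediction flips. The sketch below attacks a hand-rolled logistic-regression model; all names and numbers are illustrative, not drawn from the papers listed here:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x: list, y: int, w: list, b: float, eps: float) -> list:
    """Craft an adversarial example against a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM moves each feature one eps-step
    in the direction of that gradient's sign.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    factor = sigmoid(z) - y          # scalar part of the input gradient
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(factor * wi) for xi, wi in zip(x, w)]
```

Even this toy setting shows why defenses such as decision-boundary perturbation matter: a small, targeted change to the input is enough to cross the boundary, while an attacker who can query the model freely can also reconstruct that boundary (model extraction).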
- H. Zheng, Q. Ye, H. Hu, C. Fang, and J. Shi. Protecting Decision Boundary of Machine Learning Model with Differentially Private Perturbation. IEEE Transactions on Dependable and Secure Computing (TDSC), 2020, accepted, to appear.
- H. Zheng, Q. Ye, H. Hu, C. Fang, and J. Shi. BDPL: A Boundary Differentially Private Layer against Machine Learning Model Extraction Attacks. Proc. of the 24th European Symposium on Research in Computer Security (ESORICS’19), Luxembourg, Sep. 2019, pp 66-83.
- Q. Ye, L. Yang, Y. Fu, X. Chen. A Classification Approach Based on Improved Belief Rule-Base Reasoning. Journal of Frontiers of Computer Science and Technology, 2016, 10(5):709-721.
Research Grants and Patents
- Medical Data Mining based on Belief Rule Base (PI: National Collegiate Innovation and Entrepreneurship Training Program, 201410386009, 2014-2015, CNY 20,000)
- H. Hu, H. Zheng, Q. Ye, C. Fang, J. Shi. “Data Theft Prevention Method and Related Products” (数据防窃取方法和相关产品), Chinese invention patent, CN110795703A, Feb. 2020.