Qingqing Ye (Vicky)


My research interests include data privacy and security, and adversarial machine learning. As a practitioner in this field, I am interested in finding and solving real problems in a pragmatic manner. The following are some of the research areas I am currently working on.

Private Data Analysis with Local Differential Privacy (LDP)

With the prevalence of big data analytics, service providers have become increasingly keen to collect and analyze usage data to improve their services. However, the collection of user data comes at the price of privacy risks, not only for users but also for service providers, who are vulnerable to internal and external data breaches. As a solution for privacy-preserving data collection, Local Differential Privacy (LDP) has been proposed to perturb data on the user side before it is collected. Under LDP, the data collector does not need to be trusted. Owing to its strong privacy guarantee and decentralized nature, LDP has been adopted for data collection by major IT companies such as Apple, Google, and Microsoft.
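To make the user-side perturbation concrete, here is a minimal sketch of generalized (k-ary) randomized response, a classic LDP mechanism for categorical data. This is an illustrative example rather than an implementation from my own work; the function names are my own.

```python
import math
import random

def grr_perturb(value, domain, epsilon):
    """k-ary randomized response: report the true value with
    probability p = e^eps / (e^eps + k - 1), otherwise report a
    uniformly random *other* value from the domain."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Server-side unbiased frequency estimation from perturbed
    reports, inverting the known perturbation probabilities."""
    k = len(domain)
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)   # probability of reporting any specific wrong value
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    # Debias: observed frequency = p*f_v + q*(1 - f_v), solve for f_v.
    return {v: (counts[v] / n - q) / (p - q) for v in domain}
```

Each user runs `grr_perturb` locally and sends only the noisy report; the untrusted collector aggregates with `grr_estimate`. The debiased estimates always sum to exactly 1 and converge to the true frequencies as the number of reports grows.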

In the literature, LDP-based techniques have been developed for simple data types, such as categorical, numerical, and set-valued data. However, these are far from adequate for the complex data types and diverse data mining tasks found in many real-world applications. Over the years I have been developing novel techniques for differentially private data collection and analytics over richer data types, including key-value, graph, and time-series data. The key-value pair is a highly popular NoSQL data model and a generalization of set-valued and numerical data, and it is pervasive in big data analytics. Graph data analytics has received great attention in recent years and has nurtured numerous applications in web search, social networks, transportation, and knowledge bases. Privacy-preserving time-series analytics remains another significant challenge.

External Research Grants

  • Local Differential Privacy under Malicious Attack Model
    PI: National Natural Science Foundation of China, 62102334, 2022.01-2024.12, CNY 300,000
  • Byzantine-Robust Data Collection under Local Differential Privacy Model
    PI: Research Grants Council/GRF, 15225921, 2022.01-2024.12, HKD 838,393
  • Theory and Method of Privacy Protection and Data Sharing for Mobile Users
    Co-I: National Natural Science Foundation of China, 61941121, 2020.01-2021.12, CNY 830,000
  • Privacy Protection in Open and Governance of Big Data
    Co-I: National Natural Science Foundation of China, 91646203, 2017.01-2020.12, CNY 2,400,000

Adversarial Machine Learning

With the prevalence of big data and AI, machine learning models are trained and deployed to assist people in daily life. However, in hostile environments, the training and deployment of these models can be undermined and their integrity severely jeopardized. Adversarial machine learning studies such security issues and aims to preserve the confidentiality, integrity, availability, and accountability of machine learning techniques under adversarial settings.
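As a small illustration of how a deployed model can be undermined, the sketch below crafts an FGSM-style evasion attack against a plain linear classifier: the input is nudged against the sign of the weight vector until the prediction flips. This is a generic textbook example under my own simplifying assumptions, not a method from the grants listed below.

```python
def predict(w, b, x):
    """Linear classifier: label 1 if the score w·x + b is non-negative."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0

def sign(t):
    return 1 if t > 0 else (-1 if t < 0 else 0)

def fgsm_linear(w, x, y, eps):
    """FGSM-style perturbation for a linear model: move each feature
    by eps in the direction that increases the loss for true label y
    (for a linear score, the input gradient is just w)."""
    direction = 1 if y == 0 else -1  # push score up if y==0, down if y==1
    return [xi + direction * eps * sign(wi) for xi, wi in zip(x, w)]
```

Even this toy setting shows the core security issue: a small, targeted perturbation that a human would barely notice is enough to change the model's output.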

External Research Grants

  • Mechanism on Model Privacy Protection
    Co-I: Huawei Technologies Co. Ltd., 2020.10-2023.02, HKD 2,304,600
  • Medical Data Mining based on Belief Rule Base
    PI: National Collegiate Innovation and Entrepreneurship Training Program, 201410386009, 2014.07-2015.06, CNY 20,000