Rocky 2022 Conference
Finding associations between genetic variants and disease phenotypes via machine learning has been a rapidly expanding field of study in recent years. However, statistically significant inferences from these studies require massive amounts of sensitive genotype and phenotype information from thousands of patients, raising concerns about patient privacy. These concerns are exacerbated when the machine learning models themselves leak information about the patients in the training dataset. As a result, privacy concerns are constantly in tension with the push for widespread access to patient information for research purposes. Homomorphic encryption is a potential solution, as it allows computation directly on ciphertexts. While many privacy-preserving methods based on homomorphic encryption have been developed to protect the privacy of inputs (genotypes) and outputs (phenotypes) during inference, none has implemented mitigations for model privacy. This is largely due to the cleaning and pre-processing that large-scale genotype data require, which is computationally challenging when model parameters are encrypted. Here we implement a privacy-preserving inference model using homomorphic encryption for five different phenotype prediction tasks, in which genotypes, phenotypes, and model parameters are all encrypted. Our implementation ensures no privacy leakage at any point during inference. We show that we can achieve high accuracy for all five models (≥ 94% for all phenotypes, equivalent to plaintext inference), with each inference taking less than ten seconds for ∼200 genomes. Our study shows that it is possible to achieve high-quality machine learning predictions while protecting patient confidentiality against membership inference attacks, with theoretical privacy guarantees.
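The abstract does not specify the encryption scheme or implementation used, so as a minimal illustration of the core idea — computing on ciphertexts so that a server never sees raw genotypes — here is a toy sketch of an additively homomorphic (Paillier-style) scheme scoring an encrypted genotype vector against linear model weights. This is an assumption-laden simplification: the primes, weights, and genotype values are invented for the demo, the weights stay in plaintext (unlike the fully encrypted models described above, which would need a scheme supporting ciphertext-ciphertext multiplication such as CKKS or BFV), and the parameters are far too small to be secure.

```python
import random
from math import gcd

# Toy Paillier cryptosystem -- additively homomorphic.
# DEMO ONLY: tiny primes, no real key sizes; production systems use
# >=2048-bit moduli, and encrypted-model inference as in the abstract
# requires a scheme (e.g. CKKS/BFV) that multiplies two ciphertexts.

def _lcm(a, b):
    return a * b // gcd(a, b)

def keygen():
    p, q = 1789, 2003            # insecure demo primes (assumption)
    n = p * q
    lam = _lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)         # valid because we fix g = n + 1
    return (n,), (n, lam, mu)    # public key, secret key

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    # with g = n + 1, g^m = 1 + m*n (mod n^2)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

def he_add(pk, c1, c2):
    (n,) = pk
    return (c1 * c2) % (n * n)   # Enc(a) * Enc(b) decrypts to a + b

def he_scale(pk, c, k):
    (n,) = pk
    return pow(c, k, n * n)      # Enc(a)^k decrypts to k * a

# Encrypted genotype (0/1/2 allele dosages) scored against plaintext
# weights; values below are hypothetical, chosen only for the demo.
pk, sk = keygen()
genotype = [0, 1, 2, 1]
weights = [3, 5, 2, 7]
enc = [encrypt(pk, g) for g in genotype]
acc = encrypt(pk, 0)
for c, w in zip(enc, weights):
    acc = he_add(pk, acc, he_scale(pk, c, w))
score = decrypt(sk, acc)         # dot product: 0*3 + 1*5 + 2*2 + 1*7 = 16
```

The server computing `acc` only ever handles ciphertexts; decryption happens on the key holder's side, which is the property the abstract's pipeline extends to the model parameters as well.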