Amartya Sanyal
Assistant Professor in Machine Learning · University of Copenhagen; Adjunct Assistant Professor at IIT Kanpur
About
I am an Assistant Professor in Machine Learning in the Department of Computer Science at the University of Copenhagen and an Adjunct Assistant Professor in the Department of Computer Science at IIT Kanpur. I lead the Copenhagen Foundations of Responsible Machine Learning group at UCPH, and my research spans trustworthy machine learning, privacy, robustness, fairness, and learning with limited or imperfect data.
Before this, I was a postdoctoral fellow in the Empirical Inference group at the Max Planck Institute for Intelligent Systems, Tübingen, where I worked closely with Prof. Bernhard Schölkopf; before that, I was a postdoctoral fellow at the ETH AI Center in Zurich, where I worked closely with Prof. Fanny Yang. I completed my DPhil (PhD) at the Department of Computer Science, University of Oxford, funded by the Turing Doctoral Studentship; I was also a member of the Torr Vision Group, and my DPhil advisors were Varun Kanade and Philip H.S. Torr.
Before that, I completed my undergraduate degree (B.Tech in Computer Science) at the Indian Institute of Technology, Kanpur. On various occasions, I have spent time at Facebook AI Research (FAIR), Twitter Cortex, the Laboratory for Computational and Statistical Learning, the Montreal Institute for Learning Algorithms, and Amazon ML.
Foundations of Responsible Machine Learning
Read more about our group here.
PhD Students
- Giorgio Racca (Co-advised with Michal Valko)
- Luka Radic
- Carolin Heinzler (Co-advised with Prof. Amir Yehudayoff)
- Johanna Düngler (DDSA PhD fellow; Co-advised with Prof. Rasmus Pagh)
- Max Cairney-Leeming (ELLIS PhD student; Co-advised with Prof. Christoph H. Lampert)
- Anmol Goel (ELLIS PhD Student; Co-advised with Prof. Iryna Gurevych)
- Omri Ben-Dov (PhD Student; Co-advised with Dr. Samira Samadi)
- Yaxi Hu (PhD Student; Advised by Prof. Bernhard Schölkopf and Prof. Fanny Yang)
Postdoc
- Vikrant Singhal (Fixed Term Assistant Professor)
Upcoming/Recent Talks
- Mar 11 Keynote: Machine Unlearning: Promises, Failures, and New Directions
- Feb 28 Lower Bounds for Differentially Private Online Learning
- Feb 25 Tutorial - Building trustworthy ML: The role of label quality and availability
- Feb 19 ELLIS talk: Privacy with Correlated Data
Recent News
- Sep 2025
Our paper titled An Iterative Algorithm for Differentially Private k-PCA with Adaptive Noise has been accepted to NeurIPS 2025. See you in San Diego and Copenhagen!
- Sep 2025
I will be co-hosting two events at EurIPS 2025: the Learning Theory Alliance affinity event and the EurIPS Privacy-Preserving Machine Learning Workshop.
- Jan 2025
I received the Villum Young Investigator Grant and will be working on topics related to privacy, unlearning, and online learning. I will be hiring motivated PhD students and postdocs. Email me if you think you would be a good match.
- Jan 2025
Three papers, on differentially private alignment of LLMs, machine unlearning, and training data attribution, were accepted at ICLR 2025, and one paper on OOD robustness and label noise was accepted at AISTATS 2025.
Publications
2025 (8 papers)
- SODA — Symposium on Discrete Algorithms 2025 · Learning in an Echo Chamber: Online Learning with Replay Adversary
- NeurIPS — Advances in Neural Information Processing Systems 2025 · An iterative algorithm for Differentially Private k-PCA with adaptive noise
- TPDP — Theory and Practice of Differential Privacy 2025 · Online Learning and Unlearning
- AISTATS — International Conference on Artificial Intelligence and Statistics 2025 · Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation
- ICLR — International Conference on Learning Representations 2025 · PSA: Differentially Private Steering for Large Language Model Alignment
- ICLR — International Conference on Learning Representations 2025 · Provable unlearning in topic modeling and downstream tasks
- ICLR — International Conference on Learning Representations 2025 · Protecting against simultaneous data poisoning attacks
- TMLR — Transactions on Machine Learning Research 2025 · Delta-Influence: Unlearning Poisons via Influence Functions
2024 (7 papers)
- TMLR — Transactions on Machine Learning Research 2024 · Corrective Machine Unlearning
- NeurIPS — Advances in Neural Information Processing Systems 2024 · Robust Mixture Learning when Outliers Overwhelm Small Groups
- NeurIPS — Advances in Neural Information Processing Systems 2024 · What Makes and Breaks Safety Fine-tuning? A Mechanistic Study
- TPDP, COLT — Conference on Learning Theory 2024 · On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective
- ICML — International Conference on Machine Learning 2024 · The Role of Learning Algorithms in Collective Action
- TPDP, ICML — International Conference on Machine Learning 2024 · Provable Privacy with Non-Private Pre-Processing
- TPDP, AISTATS — International Conference on Artificial Intelligence and Statistics 2024 · Certified private data release for sparse Lipschitz functions
2023 (8 papers)
- NeurIPS — Advances in Neural Information Processing Systems 2023 · Can semi-supervised learning use all the data effectively? A lower bound perspective
- PILLAR: How to make semi-private learning more effective
- TMLR — Transactions on Machine Learning Research 2023 · Catastrophic overfitting can be induced with discriminative non-robust features
- ICML — International Conference on Machine Learning 2023 · Certifying Ensembles: A General Certification Theory with S-Lipschitzness
- NeurIPS Workshop — Workshop on Understanding Deep Learning Through Empirical Falsification 2023 · How robust accuracy suffers from certified training with convex relaxations
- Preprint — arXiv 2023 · Towards Adversarial Evaluations for Inexact Machine Unlearning
- How robust is unsupervised representation learning to distribution shift?
- A law of adversarial risk, interpolation, and label noise
2022 (3 papers)
- NeurIPS — Advances in Neural Information Processing Systems 2022 · Make Some Noise: Reliable and Efficient Single-Step Adversarial Training
- How unfair is private learning?
- Open Problem: Do you pay for Privacy in Online Learning?
2021 (2 papers)
- How benign is benign overfitting?
- Progressive Skeletonization: Trimming more fat from a network at initialization
2020 (2 papers)
- NeurIPS — Advances in Neural Information Processing Systems 2020 · Calibrating Deep Neural Networks using Focal Loss
- Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
2019 (1 paper)
- Preprint 2019 · Robustness via Deep Low-Rank Representations
2018 (1 paper)
- TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service
2017 (1 paper)
- ICML Workshop — Machine Learning in Speech and Language Processing 2017 · Multiscale sequence modeling with a learned dictionary