Amartya Sanyal

I will be starting as an Assistant Professor in Machine Learning in the Department of Computer Science at the University of Copenhagen in Summer 2024, and will also be an Affiliated Assistant Professor in the Department of Mathematics.

Email  /  GitHub  /  Google Scholar  /  CV  


Until then, I am a postdoctoral fellow in the Empirical Inference group at the Max Planck Institute for Intelligent Systems, Tübingen, where I work closely with Prof. Bernhard Schölkopf. Prior to this, I was a postdoctoral fellow at the ETH Zurich AI Center, where I worked closely with Prof. Fanny Yang. I completed my DPhil (PhD) at the Department of Computer Science, University of Oxford, funded by the Turing Doctoral Studentship, and was also a member of the Torr Vision Group. My DPhil advisors were Varun Kanade and Philip H.S. Torr. I am also a member of the ELLIS Society.

Before that, I completed my undergraduate degree (B.Tech in Computer Science) at the Indian Institute of Technology, Kanpur. On various occasions, I have spent time at Facebook AI Research (FAIR), Twitter Cortex, the Laboratory for Computational and Statistical Learning, the Montreal Institute for Learning Algorithms, and Amazon ML.



Job Advertisement

If you are interested in a PhD position with me, see here for more details.

Research Interests

I am interested in understanding both theoretical and empirical aspects of Trustworthy Machine Learning, including privacy, robustness, and fairness. In particular, I focus on the impact that inadequate data and computation can have on the trustworthiness of ML algorithms, especially in adversarial settings, and on how these issues can be addressed with reasonable approximations and relaxations, including semi-supervised and self-supervised learning.

Current Students

I am very lucky to be working with the following students.

    Anmol Goel (ELLIS PhD Student; Co-advised with Prof. Iryna Gurevych)
    Omri Ben-Dov (PhD Student; Co-advised with Dr. Samira Samadi)
    Yaxi Hu (PhD Student; Advised by Prof. Bernhard Schölkopf and Prof. Fanny Yang)

Upcoming/Recent Talks



MIT Algorithmic Fairness Seminar
September 29

Fairness and Privacy in Machine Learning: Challenges in Long-Tailed Data and Innovations in Semi-Private Algorithms

University of Michigan Data Science Seminar
October 04

Harnessing Low Dimensionality and Public Unlabelled Data in Semi-Private Machine Learning Algorithms

CISPA Helmholtz Center for Information Security
November 28

Harnessing Low Dimensionality and Public Unlabelled Data in Semi-Private Machine Learning Algorithms





Recent News

April, 2024

I received an NNF Start Package grant. I am hiring a PhD student to start this fall. I will also be hiring a postdoc. Email me if you think you would be a great match.

February, 2024

Three new preprints are online: on the separation between non-private and private online learning, on privacy guarantees despite non-private pre-processing, and on machine unlearning beyond privacy concerns.

January, 2024

Our paper titled Sample-efficient private data release for Lipschitz functions under sparsity assumptions is accepted at AISTATS 2024.

December, 2023

Our paper titled PILLAR: How to make semi-private learning more effective is accepted at SaTML 2023.

September, 2023

Our paper on understanding the capabilities of semi-supervised learning was accepted as a spotlight paper in NeurIPS 2023.

August, 2023

Two papers, on i) leveraging small amounts of unlabelled public data for differentially private learning and ii) certified private data release, were accepted at TPDP 2023.

July, 2023

Our paper titled Catastrophic overfitting can be induced with discriminative non-robust features is accepted at TMLR.

April, 2023

Our paper titled Certifying Ensembles: A General Certification Theory with S-Lipschitzness is accepted at ICML 2023.

April, 2023

Co-organising the Workshop on Pitfalls of limited data and computation for Trustworthy ML at ICLR 2023 in Kigali, Rwanda.

January, 2023

Selected to speak at the Rising Stars in AI Symposium, 2023 at KAUST.

January, 2023

Two papers, on i) how interpolating label noise provably hurts adversarial robustness and ii) the robustness of unsupervised representation learning to distribution shift, were accepted at ICLR 2023.







Publications

arXiv, 2024

Provable Privacy with Non-Private Pre-Processing


Yaxi Hu, Amartya Sanyal, Bernhard Schölkopf
arXiv, 2024
arXiv /

arXiv, 2024

On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective


Daniil Dmitriev, Kristóf Szabó, Amartya Sanyal
arXiv, 2024
arXiv /

arXiv, 2024

Corrective Machine Unlearning


Shashwat Goel, Ameya Prabhu, Philip Torr, Ponnurangam Kumaraguru, Amartya Sanyal
arXiv, 2024
arXiv /

TPDP, AISTATS, 2024

Certified private data release for sparse Lipschitz functions


Konstantin Donhauser, Johan Lokna, Amartya Sanyal, March Boedihardjo, Robert Hönig, Fanny Yang
Artificial Intelligence and Statistics (AISTATS); Theory and Practice of Differential Privacy (TPDP), 2024
arXiv /

NeurIPS, 2023
Spotlight Paper

Can semi-supervised learning use all the data effectively? A lower bound perspective


Gizem Yüce*, Alexandru Țifrea, Amartya Sanyal, Fanny Yang
Advances in Neural Information Processing Systems (NeurIPS), 2023 Spotlight Paper
arXiv / Proceedings /

TPDP, SaTML, 2023

PILLAR: How to make semi-private learning more effective


Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal
IEEE Conference on Secure and Trustworthy Machine Learning (SaTML); Theory and Practice of Differential Privacy (TPDP); Workshop on Pitfalls of limited data and computation for Trustworthy ML, 2023
arXiv / Proceedings / code / poster /

TMLR, 2023

Catastrophic overfitting can be induced with discriminative non-robust features


Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet Dokania, Pascal Frossard, Grégory Rogez, Philip H.S. Torr
Transactions on Machine Learning Research (TMLR), 2023
arXiv / code /

ICML, 2023

Certifying Ensembles: A General Certification Theory with S-Lipschitzness


Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H.S. Torr, Adel Bibi
International Conference on Machine Learning (ICML), 2023
arXiv /

NeurIPS Workshop, 2023
Oral Paper

How robust accuracy suffers from certified training with convex relaxations


Piersilvio De Bartolomeis, Jacob Clarysse, Fanny Yang, Amartya Sanyal
Workshop on Understanding Deep Learning Through Empirical Falsification, 2023 Oral Paper
arXiv / Proceedings /

Preprint, 2023

Towards Adversarial Evaluations for Inexact Machine Unlearning


Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru
arXiv, 2023
arXiv /

ICLR, 2023

A law of adversarial risk, interpolation, and label noise


Daniel Paleka*, Amartya Sanyal*
International Conference on Learning Representations (ICLR), 2023
arXiv / Proceedings / poster /

ICLR, 2023

How robust is unsupervised representation learning to distribution shift?


Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H.S. Torr, Amartya Sanyal
International Conference on Learning Representations (ICLR), 2023
arXiv / Proceedings /

NeurIPS, 2022

Make Some Noise: Reliable and Efficient Single-Step Adversarial Training


Pau De Jorge Aranda, Adel Bibi, Ricardo Volpi, Amartya Sanyal, Philip H.S. Torr, Grégory Rogez, Puneet Dokania
Neural Information Processing Systems (NeurIPS), 2022
arXiv / Proceedings /

UAI, 2022
Oral Paper

How unfair is private learning?


Amartya Sanyal, Yaxi Hu, Fanny Yang
Conference on Uncertainty in Artificial Intelligence (UAI), 2022 Oral Paper
arXiv / Proceedings / poster / slides /

COLT, 2022

Open Problem: Do you pay for Privacy in Online Learning?


Amartya Sanyal, Giorgia Ramponi
Conference on Learning Theory (COLT) Open Problems, 2022
arXiv / Proceedings / slides /

ICLR, 2021
Spotlight Paper

How benign is benign overfitting?


Amartya Sanyal, Varun Kanade, Puneet Dokania, Philip H.S. Torr
International Conference on Learning Representations (ICLR), 2021 Spotlight Paper
arXiv / Proceedings / poster /

ICLR, 2021

Progressive Skeletonization: Trimming more fat from a network at initialization


Pau De Jorge Aranda, Amartya Sanyal, Harkirat Behl, Philip H.S. Torr, Grégory Rogez, Puneet Dokania
International Conference on Learning Representations (ICLR), 2021
arXiv / Proceedings / code /

NeurIPS, 2020

Calibrating Deep Neural Networks using Focal Loss


Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H.S. Torr, Puneet Dokania
Advances in Neural Information Processing Systems (NeurIPS), 2020
arXiv / Proceedings / code /

ICLR, 2020
Spotlight Paper

Stable Rank Normalization for Improved Generalization in Neural Networks and GANs


Amartya Sanyal, Philip H.S. Torr, Puneet Dokania
International Conference on Learning Representations (ICLR), 2020 Spotlight Paper
arXiv / Proceedings / video /

Preprint, 2019

Robustness via Deep Low-Rank Representations


Amartya Sanyal, Varun Kanade, Philip H.S. Torr, Puneet Dokania
Preprint, 2019
arXiv /

ICML, 2018

TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service


Amartya Sanyal, Matt Kusner, Adrià Gascón, Varun Kanade
International Conference on Machine Learning (ICML), 2018
arXiv / Proceedings / code /

ICML Workshop, 2017

Multiscale sequence modeling with a learned dictionary


Bart van Merriënboer, Amartya Sanyal, Hugo Larochelle, Yoshua Bengio
Machine Learning in Speech and Language Processing, 2017
arXiv /








Design and source code from Jon Barron's website