Amartya Sanyal

I will be starting as an Assistant Professor in Machine Learning in the Department of Computer Science at the University of Copenhagen in Summer 2024, and will also be an Affiliated Assistant Professor in the Department of Mathematics.

Email  /  GitHub  /  Google Scholar  /  CV  

profile photo

Until then, I am a postdoctoral fellow in the Empirical Inference group at the Max Planck Institute for Intelligent Systems, Tübingen, where I work closely with Prof. Bernhard Schölkopf. Prior to this, I was a postdoctoral fellow at the ETH Zurich AI Center, where I worked closely with Prof. Fanny Yang. I completed my DPhil (PhD) at the Department of Computer Science, University of Oxford, funded by the Turing Doctoral Studentship. I was also a member of the Torr Vision Group. My DPhil advisors were Varun Kanade and Philip H.S. Torr. I am also a member of the ELLIS Society.

Before that, I completed my undergraduate degree (B.Tech in Computer Science) at the Indian Institute of Technology, Kanpur. On various occasions, I have spent time at Facebook AI Research (FAIR), Twitter Cortex, the Laboratory for Computational and Statistical Learning, the Montreal Institute for Learning Algorithms, and Amazon ML.



Job Advertisement

If you are interested in a PhD position with me, see here for more details.

Research Interests

I am interested in understanding both theoretical and empirical aspects of Trustworthy Machine Learning, including privacy, robustness, and fairness. In particular, I focus on the impact that inadequate data and computation can have on the trustworthiness of ML algorithms, especially in adversarial settings, and on how these issues can be addressed through reasonable approximations and relaxations, including semi-supervised and self-supervised learning.

Current Students

I am very lucky to be currently working with the following students.

    Anmol Goel (ELLIS PhD Student; Co-advised with Prof. Iryna Gurevych)
    Omri Ben-Dov (PhD Student; Co-advised with Dr. Samira Samadi)
    Yaxi Hu (PhD Student; Advised by Prof. Bernhard Schölkopf and Prof. Fanny Yang)

Upcoming/Recent Talks



University of Helsinki
September 21

Harnessing Low Dimensionality and Public Unlabelled Data in Semi-Private Machine Learning Algorithms

MIT Algorithmic Fairness Seminar
September 29

Fairness and Privacy in Machine Learning: Challenges in Long-Tailed Data and Innovations in Semi-Private Algorithms

Meta AI NYC
October 03

Harnessing Low Dimensionality and Public Unlabelled Data in Semi-Private Machine Learning Algorithms

University of Michigan Data Science Seminar
October 04

Harnessing Low Dimensionality and Public Unlabelled Data in Semi-Private Machine Learning Algorithms





Recent News

September, 2023

Our paper on understanding the capabilities of semi-supervised learning was accepted as a spotlight paper at NeurIPS 2023. Paper to appear soon.

August, 2023

Two papers, on i) leveraging small amounts of unlabelled public data for differentially private learning and ii) certified private data release, were accepted at TPDP 2023.

July, 2023

Our paper titled Catastrophic overfitting can be induced with discriminative non-robust features was accepted at TMLR.

April, 2023

Our paper titled Certifying Ensembles: A General Certification Theory with S-Lipschitzness was accepted at ICML 2023.

April, 2023

Co-organising the Workshop on Pitfalls of limited data and computation for Trustworthy ML at ICLR 2023 in Kigali, Rwanda.

January, 2023

Selected to speak at the Rising Stars in AI Symposium 2023 at KAUST.

January, 2023

Two papers, on i) how interpolating label noise provably hurts adversarial robustness and ii) the robustness of unsupervised representation learning to distribution shift, were accepted at ICLR 2023.

September, 2022

Our paper titled Make Some Noise: Reliable and Efficient Single-Step Adversarial Training was accepted at NeurIPS 2022.

April, 2022

Our paper How unfair is private learning?, on the interaction of privacy, accuracy, and fairness, received an Oral at UAI 2022.







Publications

NeurIPS, 2023
Spotlight Paper

Can semi-supervised learning use all the data effectively? A lower bound perspective


Gizem Yüce*, Alexandru Țifrea, Amartya Sanyal, Fanny Yang
Advances in Neural Information Processing Systems, 2023 Spotlight Paper

TPDP, 2023

PILLAR: How to make semi-private learning more effective


Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal
Workshop on Pitfalls of limited data and computation for Trustworthy ML
Theory and Practice of Differential Privacy, 2023
Arxiv / Proceedings / poster /

TPDP, 2023

Sample-efficient private data release for Lipschitz functions under sparsity assumptions


Konstantin Donhauser, Johan Lokna, Amartya Sanyal, March Boedihardjo, Robert Hönig, Fanny Yang
Theory and Practice of Differential Privacy, 2023
Arxiv /

TMLR, 2023

Catastrophic overfitting can be induced with discriminative non-robust features


Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet Dokania, Pascal Frossard, Grégory Rogez, Philip H.S. Torr
TMLR, 2023
Arxiv /

ICML, 2023

Certifying Ensembles: A General Certification Theory with S-Lipschitzness


Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H.S. Torr, Adel Bibi
International Conference on Machine Learning, 2023
Arxiv /

NeurIPS Workshop, 2023
Oral Paper

How robust accuracy suffers from certified training with convex relaxations


Piersilvio De Bartolomeis, Jacob Clarysse, Fanny Yang, Amartya Sanyal
Workshop on Understanding Deep Learning Through Empirical Falsification, 2023 Oral Paper
Arxiv / Proceedings /

Preprint, 2023

Towards Adversarial Evaluations for Inexact Machine Unlearning


Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru
Arxiv, 2023
Arxiv /

ICLR, 2023

A law of adversarial risk, interpolation, and label noise


Daniel Paleka*, Amartya Sanyal*
International Conference on Learning Representations (ICLR), 2023
Arxiv / Proceedings / poster /

ICLR, 2023

How robust is unsupervised representation learning to distribution shift?


Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H.S. Torr, Amartya Sanyal
International Conference on Learning Representations (ICLR), 2023
Arxiv / Proceedings /

NeurIPS, 2022

Make Some Noise: Reliable and Efficient Single-Step Adversarial Training


Pau De Jorge Aranda, Adel Bibi, Ricardo Volpi, Amartya Sanyal, Philip H.S. Torr, Grégory Rogez, Puneet Dokania
Advances in Neural Information Processing Systems (NeurIPS), 2022
Arxiv / Proceedings /

SlowDNN Workshop, 2022

Semi-private learning via low dimensional structures


Yaxi Hu, Francesco Pinto, Fanny Yang, Amartya Sanyal
Workshop on Seeking Low-Dimensionality in Deep Neural Networks, 2022

UAI, 2022
Oral Paper

How unfair is private learning?


Amartya Sanyal, Yaxi Hu, Fanny Yang
Conference on Uncertainty in Artificial Intelligence (UAI), 2022 Oral Paper
Arxiv / Proceedings / poster / slides /

COLT, 2022

Open Problem: Do you pay for Privacy in Online Learning?


Amartya Sanyal, Giorgia Ramponi
Conference on Learning Theory (COLT) Open Problems, 2022
Arxiv / Proceedings / slides /

ICLR, 2021
Spotlight Paper

How benign is benign overfitting?


Amartya Sanyal, Varun Kanade, Puneet Dokania, Philip H.S. Torr
International Conference on Learning Representations (ICLR), 2021 Spotlight Paper
Arxiv / Proceedings / poster /

ICLR, 2021

Progressive Skeletonization: Trimming more fat from a network at initialization


Pau De Jorge Aranda, Amartya Sanyal, Harkirat Behl, Philip H.S. Torr, Grégory Rogez, Puneet Dokania
International Conference on Learning Representations (ICLR), 2021
Arxiv / Proceedings / code /

NeurIPS, 2020

Calibrating Deep Neural Networks using Focal Loss


Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H.S. Torr, Puneet Dokania
Advances in Neural Information Processing Systems (NeurIPS), 2020
Arxiv / Proceedings / code /

ICLR, 2020
Spotlight Paper

Stable Rank Normalization for Improved Generalization in Neural Networks and GANs


Amartya Sanyal, Philip H.S. Torr, Puneet Dokania
International Conference on Learning Representations (ICLR), 2020 Spotlight Paper
Arxiv / Proceedings / video /

Preprint, 2019

Robustness via Deep Low-Rank Representations


Amartya Sanyal, Varun Kanade, Philip H.S. Torr, Puneet Dokania
Preprint, 2019
Arxiv /

ICML, 2018

TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service


Amartya Sanyal, Matt Kusner, Adrià Gascón, Varun Kanade
International Conference on Machine Learning (ICML), 2018
Arxiv / Proceedings / code /

ICML Workshop, 2017

Multiscale sequence modeling with a learned dictionary


Bart van Merriënboer, Amartya Sanyal, Hugo Larochelle, Yoshua Bengio
Machine Learning in Speech and Language Processing, 2017
Arxiv /








Design and source code from Jon Barron's website