The Devil’s Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models
Published in Preprint, 2023
In this work, we reveal the fragility of unlearnable examples using diffusion models.
Published in Preprint, 2022
In this paper, we theoretically analyze adversarial coreset selection and provide insights into its working dynamics.
Published in The 16th Asian Conference on Computer Vision (ACCV), 2022
In this paper, we utilize adaptive subset selection to eliminate backdoor data and train a clean model.
Published in The 17th European Conference on Computer Vision (ECCV), 2022
In this work, we reduce the training time of adversarial training using adaptive sample selection.
Published in The 34th Conference on Neural Information Processing Systems (NeurIPS), 2020
In this paper, we propose a novel black-box adversarial attack that exploits the clean data distribution to conceal itself.
Published in The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
In this paper, we explore using linear rational splines as invertible transformations for normalizing flows.
Published in IEEE Signal Processing Letters, 2019
In this paper, we design Toeplitz measurement matrices based on Weyl sums.
Published in International Conference on Sampling Theory and Applications (SampTA), 2017
In this paper, we present a sampling result for continuous-domain black-and-white images that form a convex shape.