Research scientist at Pluralis Research, Melbourne.
A public notebook of ML writing.
Research scientist at Pluralis Research in Melbourne. I'm statistically minded and opinionated, usually in defence of whatever the numbers actually support. Most of my time goes to optimisers and architectures friendly to decentralised training, with a pinch of adversarial ML to keep things safe.
I did my PhD at the University of Melbourne on adversarial machine learning under Sarah Erfani and Christopher Leckie, with an Amazon internship along the way. After a postdoc with the ARC Centre of Excellence for Automated Decision-Making & Society and a year and a half at Oracle building LLM-based applications for healthcare, I joined Pluralis.
This site is where I write in public. Expect opinionated takes on what works, reading notes when a paper actually teaches me something, and the occasional interactive essay when prose runs out of room.
An overview of our NeurIPS 2020 paper. Using flows to make adversarial perturbations look like the data distribution — and that's the attack.
A walkthrough of normalising flows, coupling layers, and our AISTATS 2020 paper on monotone rational splines.
A brief overview of denoising diffusion probabilistic models from Ho et al., the paper that kicked off the modern diffusion wave.
Pulse will populate from @hmdolatabadi once the X API cron is wired up. Until then, head to X for the latest.