A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
The main goal of a generative model is to ‘generate’ new data, unlike classification where we predict from a fixed set of classes. It’s essentially producing rather than predicting. There are four main families of generative models: VAEs, GANs, Normalizing Flows / Invertible Neural Networks, and Diffusion Models (for next time), and they all try to learn a model that captures the underlying structure of the data.
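To make “producing rather than predicting” a bit more concrete, here is a small sketch (my own illustration, not from the post, with made-up toy data): it “learns” the structure of some 1-D data by fitting a Gaussian and then generates brand-new samples, whereas a classifier would only map an existing point to a label.

```python
# Minimal sketch (illustrative only): a generative model in the loosest sense.
# Fit a Gaussian to 1-D data, then *generate* new points from it, rather than
# *predicting* a label for an existing point.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)   # the "training data"

# "Learn" the underlying structure: here just the mean and standard deviation.
mu, sigma = data.mean(), data.std()

# Generate: draw new data that never appeared in the training set.
new_samples = rng.normal(loc=mu, scale=sigma, size=5)
print("generated samples:", new_samples)

# Contrast with prediction: a classifier would take a point x and return one
# of a fixed set of classes, e.g. int(x > threshold).
```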
Published:
In the last post we spoke about walking down the hill to find the lowest point. This is a great analogy for training a neural network: we want to find the parameters that minimize the loss function. You can think of the loss function as the height of the hill, and the parameters as the coordinates on the hill. This landscape is called the loss landscape, and we talk about it a lot in the deep learning community, particularly in relation to valleys, local minima, saddle points, and flat regions. Whenever you hear these terms, just think about hills and valleys.
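To make the hill-walking picture concrete, here is a small sketch (my own illustration, not from the post) of gradient descent on a toy two-parameter loss surface: the “height” is the loss, the two coordinates are the parameters, and each update is a step downhill.

```python
# Minimal sketch (illustrative only): gradient descent on a toy loss
# "landscape" with two parameters. The loss is the height of the hill,
# the parameters (w1, w2) are the coordinates, and each step walks downhill.
import numpy as np

def loss(w):
    # A simple bowl-shaped valley with its minimum at (3, -1).
    return (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2

def grad(w):
    # Gradient of the loss: it points uphill, so we step the opposite way.
    return np.array([2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)])

w = np.array([0.0, 0.0])   # start somewhere on the hillside
lr = 0.1                   # step size (learning rate)

for step in range(50):
    w = w - lr * grad(w)   # walk downhill

print("final parameters:", w, "final loss:", loss(w))
```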
Published:
Basically every machine learning problem is now approached using deep learning. Deep learning is the approach of using neural networks to learn from data: we don’t need to specify exactly what the model (the neural network) should do, we just give it the data and the model learns to perform the task.
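As a sketch of what “just give it the data” looks like in practice, here is a minimal training loop (my own illustration, not from the post, assuming PyTorch is available and using made-up toy data): we only specify the data, the network shape, and a loss, and the weights are adjusted automatically.

```python
# Minimal sketch (illustrative only, assumes PyTorch is installed): we do not
# program the task by hand -- we define a network, show it data, and let
# gradient-based optimisation adjust the weights.
import torch
import torch.nn as nn

# Toy data: 2-D points labelled by which side of a line they fall on.
X = torch.randn(256, 2)
y = (X[:, 0] + X[:, 1] > 0).long()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong the model currently is
    loss.backward()               # gradients of the loss w.r.t. the weights
    optimizer.step()              # nudge the weights to reduce the loss

print("final training loss:", loss.item())
```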
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Intellectual Property Office, 2018
@article{GB2559977,
title={Patent GB2559977 - Apparatus and Methods for Obtaining Information about the Face and Eyes of a Subject},
author={Joy, T and Larkins, A},
year={2018},
journal={Intellectual Property Office},
}
Download here
Published in Intellectual Property Office, 2018
@article{GB2559978,
title={Patent GB2559978 - Using Specular Reflections to Image and Model a Face Including Physical Eyewear},
author={Joy, T and Larkins, A},
year={2018},
journal={Intellectual Property Office},
}
Download here
Published in IEEE Transactions on Circuits and Systems II: Express Briefs, 2019
@article{rahnama2019real,
title={Real-Time Highly Accurate Dense Depth on a Power Budget using an FPGA-CPU Hybrid SoC},
author={Rahnama, Oscar and Cavallari, Tommaso and Golodetz, Stuart and Tonioni, Alessio and Joy, Thomas and Di Stefano, Luigi and Walker, Simon and Torr, Philip HS},
journal={IEEE Transactions on Circuits and Systems II: Express Briefs},
year={2019},
publisher={IEEE}
}
Download here
Published in CVPR, 2019
@article{tonioni2019learning,
title={Learning to Adapt for Stereo},
author={Tonioni, A and Rahnama, O and Joy, T and Di Stefano, L and Ajanthan, T and Torr, P.H.S},
year={2019},
journal={CVPR},
}
Download here
Published in SIAM J. Imaging Sci., Vol. 12, pp. 287--318, 2019
@article{joy2019densecrf,
author = {Joy, T and Desmaison, A and Ajanthan, T and Bunel, R and Salzmann, M and Kohli, P and Torr, P.H.S and Kumar, M},
journal = {SIAM J. Imaging Sci.},
number = {1},
pages = {287--318},
title = {Efficient Relaxations for Dense CRFs with Sparse Higher-Order Potentials},
volume = {12},
year = {2019}
}
Download here
Published in ICLR, 2021
@article{joy2020capturing,
title={Capturing Label Characteristics in VAEs},
author={Joy, Tom and Schmon, Sebastian and Torr, Philip H.S. and Siddharth, N and Rainforth, Tom},
journal={International Conference on Learning Representations},
year={2020}
}
Download here
Published in ICLR, 2022
@article{joy2021learning,
title={Learning Multimodal VAEs through Mutual Supervision},
author={Joy, Tom and Shi, Yuge and Torr, Philip H.S. and Rainforth, Tom and Schmon, Sebastian M and Siddharth, N},
journal={ICLR},
year={2022}
}
Download here
Published in AAAI, 2023
@article{joy2023sample,
title={Sample-dependent adaptive temperature scaling for improved calibration},
author={Joy, Tom and Pinto, Francesco and Lim, Ser-Nam and Torr, Philip HS and Dokania, Puneet K},
journal={AAAI},
year={2023}
}
Download here
Published in CVPR, 2023
@inproceedings{oksuz2023towards,
title={Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration},
author={Oksuz, Kemal and Joy, Tom and Dokania, Puneet K},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={9263--9274},
year={2023}
}
Download here
Published in arXiv, 2023
@article{oksuz2023mocae,
title={MoCaE: Mixture of Calibrated Experts Significantly Improves Object Detection},
author={Oksuz, Kemal and Kuzucu, Selim and Joy, Tom and Dokania, Puneet K},
journal={arXiv preprint arXiv:2309.14976},
year={2023}
}
Download here
Published in NeurIPS, 2024
@inproceedings{jain2024makes,
title={What makes and breaks safety fine-tuning? A mechanistic study},
author={Jain, Samyak and Lubana, Ekdeep Singh and Oksuz, Kemal and Joy, Tom and Torr, Philip HS and Sanyal, Amartya and Dokania, Puneet K},
booktitle={NeurIPS},
year={2024}
}
Download here
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.