1. Enabling Continual Learning with Differentiable Hebbian Plasticity. V Thangarasa, T Miconi, GW Taylor. International Joint Conference on Neural Networks (IJCNN), 1-8, 2020. Cited by 18.
2. Memory Efficient 3D U-Net with Reversible Mobile Inverted Bottlenecks for Brain Tumor Segmentation. M Pendse, V Thangarasa, V Chiley, R Holmdahl, J Hestness, DC Dennis. International MICCAI Brainlesion Workshop, 388-397, 2021. Cited by 15.
3. SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models. V Thangarasa, A Gupta, W Marshall, T Li, K Leong, D DeCoste, S Lie, ... Proceedings of Uncertainty in Artificial Intelligence (UAI), 2023. Cited by 9.
4. RevBiFPN: The Fully Reversible Bidirectional Feature Pyramid Network. V Chiley, V Thangarasa, A Gupta, A Samar, J Hestness, D DeCoste. Proceedings of Machine Learning and Systems (MLSys) 5, 2023. Cited by 6.
5. Self-Paced Learning with Adaptive Deep Visual Embeddings. V Thangarasa, GW Taylor. Proceedings of the British Machine Vision Conference (BMVC), 2018. Cited by 6.
6. Differentiable Hebbian Plasticity for Continual Learning. V Thangarasa, T Miconi, GW Taylor. International Conference on Machine Learning (ICML) Adaptive and Multitask …, 2019. Cited by 4.
7. Sparse Iso-FLOP Transformations for Maximizing Training Efficiency. V Thangarasa, S Saxena, A Gupta, S Lie. Workshop on Advancing Neural Network Training: Computational Efficiency …, 2023. Cited by 3*.
8. Reversible Fixup Networks for Memory-Efficient Training. V Thangarasa, CY Tsai, GW Taylor, U Köster. NeurIPS Workshop on Systems for ML (SysML), 2019. Cited by 1.
9. Introducing v0.5 of the AI Safety Benchmark from MLCommons. B Vidgen, A Agrawal, AM Ahmed, V Akinwande, N Al-Nuaimi, N Alfaraj, ... arXiv preprint arXiv:2404.12241, 2024.
10. MediSwift: Efficient Sparse Pre-trained Biomedical Language Models. V Thangarasa, M Salem, S Saxena, K Leong, J Hestness, S Lie. arXiv preprint arXiv:2403.00952, 2024.
11. Differentiable Hebbian Consolidation for Continual Lifelong Learning. V Thangarasa. University of Guelph, 2019.
12. SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models (Supplementary Material). V Thangarasa, A Gupta, W Marshall, T Li, K Leong, D DeCoste, S Lie, ...