Scaling down to scale up: A guide to parameter-efficient fine-tuning V Lialin, V Deshpande, A Rumshisky arXiv preprint arXiv:2303.15647, 2023 | 117 | 2023 |
ReLoRA: High-rank training through low-rank updates V Lialin, S Muckatira, N Shivagunde, A Rumshisky The Twelfth International Conference on Learning Representations, 2023 | 38* | 2023 |
Learning to ask like a physician E Lehman, V Lialin, KY Legaspi, AJR Sy, PTS Pile, NRI Alberto, ... arXiv preprint arXiv:2206.02696, 2022 | 15 | 2022 |
Named entity recognition in noisy domains V Malykh, V Lyalin 2018 international conference on artificial intelligence applications and …, 2018 | 12 | 2018 |
Honey, I shrunk the language: Language model behavior at reduced scale V Deshpande, D Pechi, S Thatte, V Lialin, A Rumshisky arXiv preprint arXiv:2305.17266, 2023 | 10 | 2023 |
Update frequently, update fast: Retraining semantic parsing systems in a fraction of time V Lialin, R Goel, A Simanovsky, A Rumshisky, R Shah arXiv preprint arXiv:2010.07865, 2020 | 9* | 2020 |
Life after BERT: What do Other Muppets Understand about Language? V Lialin, K Zhao, N Shivagunde, A Rumshisky arXiv preprint arXiv:2205.10696, 2022 | 8 | 2022 |
Scalable and accurate self-supervised multimodal representation learning without aligned video and text data V Lialin, S Rawls, D Chan, S Ghosh, A Rumshisky, W Hamza Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2023 | 6 | 2023 |
Recent Advances, Applications, and Open Challenges in Machine Learning for Health: Reflections from Research Roundtables at ML4H 2023 Symposium H Jeong, S Jabbour, Y Yang, R Thapta, H Mozannar, WJ Han, ... arXiv preprint arXiv:2403.01628, 2024 | 1 | 2024 |
Let's Reinforce Step by Step S Pan, V Lialin, S Muckatira, A Rumshisky arXiv preprint arXiv:2311.05821, 2023 | 1 | 2023 |
Improving Classification Robustness for Noisy Texts with Robust Word Vectors V Malykh, V Lyalin Journal of Mathematical Sciences 273 (4), 605-613, 2023 | 1 | 2023 |
NarrativeTime: Dense temporal annotation on a timeline A Rogers, M Karpinska, A Gupta, V Lialin, G Smelkov, A Rumshisky arXiv preprint arXiv:1908.11443, 2019 | 1 | 2019 |
Emergent Abilities in Reduced-Scale Generative Language Models S Muckatira, V Deshpande, V Lialin, A Rumshisky arXiv preprint arXiv:2404.02204, 2024 | | 2024 |
Deconstructing In-Context Learning: Understanding Prompts via Corruption N Shivagunde, V Lialin, S Muckatira, A Rumshisky arXiv preprint arXiv:2404.02054, 2024 | | 2024 |
Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning N Shivagunde, V Lialin, A Rumshisky arXiv preprint arXiv:2303.16445, 2023 | | 2023 |
Injecting Hierarchy with U-Net Transformers D Donahue, V Lialin, A Rumshisky arXiv preprint arXiv:1910.10488, 2019 | | 2019 |
On the classification of noisy texts VA Malykh, VA Lyalin Proceedings of the Institute for Systems Analysis of the Russian Academy of Sciences 68 (S1), 174-182, 2018 | | 2018 |
Text is an Image: Augmentation via Embedding Mixing K Zhao, V Lialin, A Rumshisky | | |