| Title | Authors | Venue / Source | Cited by | Year |
| --- | --- | --- | --- | --- |
| SantaCoder: don't reach for the stars! | LB Allal, R Li, D Kocetkov, C Mou, C Akiki, CM Ferrandis, N Muennighoff, ... | arXiv preprint arXiv:2301.03988 | 117 | 2023 |
| PEFT: State-of-the-art parameter-efficient fine-tuning methods | S Mangrulkar, S Gugger, L Debut, Y Belkada, S Paul | | 81 | 2022 |
| PEFT: State-of-the-art parameter-efficient fine-tuning methods | S Mangrulkar, S Gugger, L Debut, Y Belkada, S Paul, B Bossan | | 73 | 2022 |
| PEFT: State-of-the-art parameter-efficient fine-tuning methods | S Mangrulkar, S Gugger, L Debut, Y Belkada, S Paul, B Bossan | github.com/huggingface/pe | 70 | 2022 |
| Accelerate: Training and inference at scale made simple, efficient and adaptable | S Gugger, L Debut, T Wolf, P Schmid, Z Mueller, S Mangrulkar, M Sun, ... | | 21 | 2022 |
| PEFT: State-of-the-Art Parameter-Efficient Fine-Tuning Methods | S Mangrulkar, S Gugger, L Debut, Y Belkada, S Paul, B Bossan | https://github.com/huggingface/peft | 13 | 2022 |
| A context-aware convolutional natural language generation model for dialogue systems | S Mangrulkar, S Shrivastava, V Thenkanidiyoor, DA Dinesh | Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue ... | 11 | 2018 |
| Accelerate: Training and inference at scale made simple, efficient and adaptable | S Gugger, L Debut, T Wolf, P Schmid, Z Mueller, S Mangrulkar | | 8 | 2022 |
| Accelerate: Training and inference at scale made simple, efficient and adaptable | S Gugger, L Debut, T Wolf, P Schmid, Z Mueller, S Mangrulkar | | 7 | 2022 |
| PEFT: State-of-the-art parameter-efficient fine-tuning methods | S Mangrulkar, S Gugger, L Debut, Y Belkada, S Paul | | 6 | 2022 |
| Accelerate: Training and inference at scale made simple, efficient and adaptable | S Gugger, L Debut, T Wolf, P Schmid, Z Mueller, S Mangrulkar, M Sun, ... | https://github.com/huggingface/accelerate | 6 | 2022 |
| Multilingual semantic sourcing using product images for cross-lingual alignment | S Mangrulkar, A MS, V Sembium | Companion Proceedings of the Web Conference 2022, 41-51 | 4 | 2022 |
| Accelerate: Training and inference at scale made simple, efficient and adaptable | S Gugger, L Debut, T Wolf, P Schmid, Z Mueller, S Mangrulkar | | 3 | 2022 |
| BE3R: BERT based Early-Exit Using Expert Routing | S Mangrulkar, A MS, V Sembium | Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and ... | 2 | 2022 |
| HISS: A novel hybrid inference architecture in embedding based product sourcing using knowledge distillation | MS Ankith, S Mangrulkar, V Sembium | | 2 | 2022 |