Rulin Shao
Verified email at cs.washington.edu - Homepage
Title · Cited by · Year
On the adversarial robustness of vision transformers
R Shao, Z Shi, J Yi, PY Chen, CJ Hsieh
Transactions on Machine Learning Research (TMLR), 2022
Cited by 178 · 2022
How Long Can Context Length of Open-Source LLMs truly Promise?
D Li, R Shao, A Xie, Y Sheng, L Zheng, J Gonzalez, I Stoica, X Ma, ...
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
Cited by 51* · 2023
Stochastic Channel-Based Federated Learning With Neural Network Pruning for Medical Data Privacy Preservation: Model Development and Experimental Validation
R Shao, H He, Z Chen, H Liu, D Liu
JMIR Formative Research, 4(12):e17265, 2020
Cited by 34* · 2020
MPCFormer: fast, performant and private Transformer inference with MPC
D Li*, R Shao*, H Wang*, H Guo, EP Xing, H Zhang
ICLR 2023 (Spotlight), 2022
Cited by 31 · 2022
VisIT-Bench: A Dynamic Benchmark for Evaluating Instruction-Following Vision-and-Language Models
Y Bitton, H Bansal, J Hessel, R Shao, W Zhu, A Awadalla, J Gardner, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 23* · 2024
Robust text captchas using adversarial examples
R Shao, Z Shi, J Yi, PY Chen, CJ Hsieh
2022 IEEE International Conference on Big Data (Big Data), 1495-1504, 2022
Cited by 15 · 2022
How and When Adversarial Robustness Transfers in Knowledge Distillation?
R Shao, J Yi, PY Chen, CJ Hsieh
ARoW Workshop at the European Conference on Computer Vision (ECCV 2022), 2021
Cited by 10 · 2021
Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment
R Pandey, R Shao, PP Liang, R Salakhutdinov, LP Morency
ACL 2023, 2022
Cited by 8 · 2022
LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformer
D Li*, R Shao*, A Xie, EP Xing, JE Gonzalez, I Stoica, X Ma, H Zhang
arXiv preprint arXiv:2310.03294, 2023
Cited by 5 · 2023
Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning
Z Xu, C Feng, R Shao, T Ashby, Y Shen, D Jin, Y Cheng, Q Wang, ...
arXiv preprint arXiv:2402.11690, 2024
Cited by 3 · 2024
Language models scale reliably with over-training and on downstream tasks
SY Gadre, G Smyrnis, V Shankar, S Gururangan, M Wortsman, R Shao, ...
arXiv preprint arXiv:2403.08540, 2024
Cited by 1 · 2024
Retrieval-based Language Models Using a Multi-domain Datastore
R Shao, S Min, L Zettlemoyer, PW Koh
NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models, 2023
2023