Heejun Lee
Title · Cited by · Year
Sparse Token Transformer with Attention Back Tracking
H Lee, M Kang, Y Lee, SJ Hwang
The Eleventh International Conference on Learning Representations, 2023
Cited by 6 · 2023
SEA: Sparse Linear Attention with Estimated Attention Mask
H Lee, J Kim, J Willette, SJ Hwang
arXiv preprint arXiv:2310.01777, 2023
Cited by 2 · 2023
Training-Free Exponential Extension of Sliding Window Context with Cascading KV Cache
J Willette, H Lee, Y Lee, M Jeon, SJ Hwang
arXiv preprint arXiv:2406.17808, 2024
2024
HiP Attention: Sparse Sub-Quadratic Attention with Hierarchical Attention Pruning
H Lee, G Park, Y Lee, J Kim, W Jeong, M Jeon, SJ Hwang
arXiv preprint arXiv:2406.09827, 2024
2024