Guy Dar
Title · Cited by · Year
Analyzing transformers in embedding space
G Dar, M Geva, A Gupta, J Berant
arXiv preprint arXiv:2209.02535, 2022
57 · 2022
Lm-debugger: An interactive tool for inspection and intervention in transformer-based language models
M Geva, A Caciularu, G Dar, P Roit, S Sadde, M Shlain, B Tamir, ...
arXiv preprint arXiv:2204.12130, 2022
33 · 2022
Memory-efficient Transformers via Top-k Attention
A Gupta, G Dar, S Goodman, D Ciprut, J Berant
arXiv preprint arXiv:2106.06899, 2021
21 · 2021
In-context Learning and Gradient Descent Revisited
G Deutch, N Magar, T Bar Natan, G Dar
arXiv e-prints, arXiv: 2311.07772, 2023
2* · 2023
Speaking Probes: Self Interpreting Models?
G Dar
https://towardsdatascience.com/speaking-probes-self-interpreting-models …, 2023
2023