Szymon Tworkowski
Verified email at student.uw.edu.pl - Homepage
Title
Cited by
Year
Focused transformer: Contrastive training for context scaling
S Tworkowski, K Staniszewski, M Pacek, Y Wu, H Michalewski, P Miłoś
37th Conference on Neural Information Processing Systems (NeurIPS) 2023, 2023
Cited by 46 · 2023
Thor: Wielding hammers to integrate language models and automated theorem provers
AQ Jiang, W Li, S Tworkowski, K Czechowski, T Odrzygóźdź, P Miłoś, ...
Advances in Neural Information Processing Systems 35, 8360-8373, 2022
Cited by 46 · 2022
Hierarchical transformers are more efficient language models
P Nawrot, S Tworkowski, M Tyrolski, Ł Kaiser, Y Wu, C Szegedy, ...
Findings of the Association for Computational Linguistics: NAACL 2022, 1559–1571, 2022
Cited by 32 · 2022
Magnushammer: A transformer-based approach to premise selection
M Mikuła, S Antoniak, S Tworkowski, AQ Jiang, JP Zhou, C Szegedy, ...
arXiv preprint arXiv:2303.04488, 2023
Cited by 15 · 2023
Explaining Competitive-Level Programming Solutions using LLMs
J Li, S Tworkowski, Y Wu, R Mooney
ACL 2023 Workshop on Natural Language Reasoning and Structured Explanations, 2023
Cited by 10 · 2023
Analysing The Impact of Sequence Composition on Language Model Pre-Training
Y Zhao, Y Qu, K Staniszewski, S Tworkowski, W Liu, P Miłoś, Y Wu, ...
arXiv preprint arXiv:2402.13991, 2024
2024
Structured Packing in LLM Training Improves Long Context Utilization
K Staniszewski, S Tworkowski, S Jaszczur, H Michalewski, Ł Kuciński, ...
arXiv preprint arXiv:2312.17296, 2023
2023
Formal Premise Selection With Language Models
S Tworkowski, M Mikuła, T Odrzygóźdź, K Czechowski, S Antoniak, ...
7th Conference on Artificial Intelligence and Theorem Proving, 2022
2022
Articles 1–8