GPT-4 technical report. J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, et al. arXiv preprint arXiv:2303.08774, 2023. Cited by 1051.
Training compute-optimal large language models. J Hoffmann, S Borgeaud, A Mensch, E Buchatskaya, T Cai, E Rutherford, et al. arXiv preprint arXiv:2203.15556, 2022. Cited by 976.
Scaling Language Models: Methods, Analysis & Insights from Training Gopher. JW Rae, S Borgeaud, T Cai, K Millican, J Hoffmann, F Song, J Aslanides, et al. 2021. Cited by 851.
A clinically applicable approach to continuous prediction of future acute kidney injury. N Tomašev, X Glorot, JW Rae, M Zielinski, H Askham, A Saraiva, et al. Nature 572 (7767), 116-119, 2019. Cited by 845.
Improving language models by retrieving from trillions of tokens. S Borgeaud, A Mensch, J Hoffmann, T Cai, E Rutherford, K Millican, et al. International Conference on Machine Learning, 2206-2240, 2022. Cited by 683.
Gemini: a family of highly capable multimodal models. Gemini Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, et al. arXiv preprint arXiv:2312.11805, 2023. Cited by 568.
Compressive transformers for long-range sequence modelling. JW Rae, A Potapenko, SM Jayakumar, TP Lillicrap. arXiv preprint arXiv:1911.05507, 2019. Cited by 463.
Stabilizing transformers for reinforcement learning. E Parisotto, F Song, J Rae, R Pascanu, C Gulcehre, S Jayakumar, et al. International Conference on Machine Learning, 7487-7498, 2020. Cited by 347.
Model-free episodic control. C Blundell, B Uria, A Pritzel, Y Li, A Ruderman, JZ Leibo, J Rae, et al. arXiv preprint arXiv:1606.04460, 2016. Cited by 290.
Relational recurrent neural networks. A Santoro, R Faulkner, D Raposo, J Rae, M Chrzanowski, T Weber, et al. Advances in Neural Information Processing Systems 31, 2018. Cited by 260.
Neural arithmetic logic units. A Trask, F Hill, SE Reed, J Rae, C Dyer, P Blunsom. Advances in Neural Information Processing Systems 31, 2018. Cited by 230.
Unsupervised predictive memory in a goal-directed agent. G Wayne, CC Hung, D Amos, M Mirza, A Ahuja, A Grabska-Barwinska, et al. arXiv preprint arXiv:1803.10760, 2018. Cited by 195.
Scaling memory-augmented neural networks with sparse reads and writes. J Rae, JJ Hunt, I Danihelka, T Harley, AW Senior, G Wayne, A Graves, et al. Advances in Neural Information Processing Systems 29, 2016. Cited by 180.
Reducing sentiment bias in language models via counterfactual evaluation. PS Huang, H Zhang, R Jiang, R Stanforth, J Welbl, J Rae, V Maini, et al. arXiv preprint arXiv:1911.03064, 2019. Cited by 161.
Multiplicative interactions and where to find them. SM Jayakumar, WM Czarnecki, J Menick, J Schwarz, J Rae, S Osindero, et al. 2020. Cited by 121.
V-MPO: on-policy maximum a posteriori policy optimization for discrete and continuous control. HF Song, A Abdolmaleki, JT Springenberg, A Clark, H Soyer, JW Rae, et al. arXiv preprint arXiv:1909.12238, 2019. Cited by 108.
Memory-based parameter adaptation. P Sprechmann, SM Jayakumar, JW Rae, A Pritzel, AP Badia, B Uria, et al. International Conference on Learning Representations, 2018. Cited by 107.
Top-KAST: Top-K always sparse training. S Jayakumar, R Pascanu, J Rae, S Osindero, E Elsen. Advances in Neural Information Processing Systems 33, 20744-20754, 2020. Cited by 85.
Training language GANs from scratch. C de Masson d'Autume, S Mohamed, M Rosca, J Rae. Advances in Neural Information Processing Systems 32, 2019. Cited by 85.
An empirical analysis of compute-optimal large language model training. J Hoffmann, S Borgeaud, A Mensch, E Buchatskaya, T Cai, E Rutherford, et al. Advances in Neural Information Processing Systems 35, 30016-30030, 2022. Cited by 69.