Gabriel Grand
ChemBERTa: large-scale self-supervised pretraining for molecular property prediction
S Chithrananda, G Grand, B Ramsundar
arXiv preprint arXiv:2010.09885, 2020
Cited by 339, 2020
Semantic projection recovers rich human knowledge of multiple object features from word embeddings
G Grand, IA Blank, F Pereira, E Fedorenko
Nature human behaviour 6 (7), 975-987, 2022
Cited by 122, 2022
Adversarial regularization for visual question answering: Strengths, shortcomings, and side effects
G Grand, Y Belinkov
arXiv preprint arXiv:1906.08430, 2019
Cited by 73, 2019
ChemBERTa-2: Towards chemical foundation models
W Ahmad, E Simon, S Chithrananda, G Grand, B Ramsundar
arXiv preprint arXiv:2209.01712, 2022
Cited by 64, 2022
From word models to world models: Translating from natural language to the probabilistic language of thought
L Wong, G Grand, AK Lew, ND Goodman, VK Mansinghka, J Andreas, ...
arXiv preprint arXiv:2306.12672, 2023
Cited by 39, 2023
Top-down synthesis for library learning
M Bowers, TX Olausson, L Wong, G Grand, JB Tenenbaum, K Ellis, ...
Proceedings of the ACM on Programming Languages 7 (POPL), 1182-1213, 2023
Cited by 30, 2023
Identifying concept libraries from language about object structure
C Wong, WP McCarthy, G Grand, Y Friedman, JB Tenenbaum, J Andreas, ...
arXiv preprint arXiv:2205.05666, 2022
Cited by 14, 2022
ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction. 2020
S Chithrananda, G Grand, B Ramsundar
URL https://arxiv.org/abs, 2010
Cited by 13, 2010
Sequential Monte Carlo steering of large language models using probabilistic programs
AK Lew, T Zhi-Xuan, G Grand, VK Mansinghka
arXiv preprint arXiv:2306.03081, 2023
Cited by 9, 2023
Evaluating statistical language models as pragmatic reasoners
B Lipkin, L Wong, G Grand, JB Tenenbaum
arXiv preprint arXiv:2305.01020, 2023
Cited by 8, 2023
ChemBERTa-2: towards chemical foundation models. arXiv 2022
W Ahmad, E Simon, S Chithrananda, G Grand, B Ramsundar
arXiv preprint arXiv:2209.01712
Cited by 7
Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nature Human Behaviour, 6 (7), 975-987
G Grand, IA Blank, F Pereira, E Fedorenko
Cited by 6, 2022
Grounded physical language understanding with probabilistic programs and simulated worlds
C Zhang, L Wong, G Grand, J Tenenbaum
Proceedings of the Annual Meeting of the Cognitive Science Society 45 (45), 2023
Cited by 5, 2023
ChemBERTa: Large-scale self-supervised pretraining for molecular property prediction. arXiv preprint
S Chithrananda, G Grand, B Ramsundar
arXiv preprint arXiv:2010.09885, 2020
Cited by 5, 2020
LILO: Learning interpretable libraries by compressing and documenting code
G Grand, L Wong, M Bowers, TX Olausson, M Liu, JB Tenenbaum, ...
arXiv preprint arXiv:2310.19791, 2023
Cited by 3, 2023
Learning interpretable libraries by compressing and documenting code
G Grand, L Wong, M Bowers, TX Olausson, M Liu, JB Tenenbaum, ...
Intrinsically-Motivated and Open-Ended Learning Workshop @ NeurIPS 2023, 2023
Cited by 2, 2023
Stream of Search (SoS): Learning to Search in Language
K Gandhi, D Lee, G Grand, M Liu, W Cheng, A Sharma, ND Goodman
arXiv preprint arXiv:2404.03683, 2024
2024
Loose LIPS Sink Ships: Asking Questions in Battleship with Language-Informed Program Sampling
G Grand, V Pepe, J Andreas, JB Tenenbaum
arXiv preprint arXiv:2402.19471, 2024
2024
Discovering Abstractions from Language via Neurosymbolic Program Synthesis
GJ Grand
Massachusetts Institute of Technology, 2023
2023
Compounds, compositions and methods of treating disorders
MR Spyvee, JM Kallenbach, A Gupta, E Simon, GJ Grand
US Patent App. 17/744,228, 2022
2022