Xiang Fan
Computer Science & Engineering, University of Washington
Verified email at cs.washington.edu - Homepage
Title · Cited by · Year
MultiBench: Multiscale benchmarks for multimodal representation learning
PP Liang, Y Lyu, X Fan, Z Wu, Y Cheng, J Wu, L Chen, P Wu, MA Lee, ...
arXiv preprint arXiv:2107.07502, 2021
Cited by 115 · 2021
Quantifying & modeling feature interactions: An information decomposition framework
PP Liang, Y Cheng, X Fan, CK Ling, S Nie, R Chen, Z Deng, F Mahmood, ...
arXiv e-prints, arXiv: 2302.12247, 2023
Cited by 17 · 2023
HighMMT: Towards modality and task generalization for high-modality representation learning
PP Liang, Y Lyu, X Fan, S Mo, D Yogatama, LP Morency, R Salakhutdinov
arXiv preprint arXiv:2203.01311, 2022
Cited by 17 · 2022
High-modality multimodal transformer: Quantifying modality & interaction heterogeneity for high-modality representation learning
PP Liang, Y Lyu, X Fan, J Tsaw, Y Liu, S Mo, D Yogatama, LP Morency, ...
Transactions on Machine Learning Research, 2022
Cited by 15 · 2022
Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework
PP Liang, Y Cheng, X Fan, CK Ling, S Nie, R Chen, Z Deng, N Allen, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 4 · 2024
MultiZoo & MultiBench: A standardized toolkit for multimodal deep learning
PP Liang, Y Lyu, X Fan, A Agarwal, Y Cheng, LP Morency, ...
The Journal of Machine Learning Research 24 (1), 11056-11062, 2023
Cited by 2 · 2023
Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control
X Fan, Y Lyu, PP Liang, R Salakhutdinov, LP Morency
arXiv preprint arXiv:2211.05750, 2022
Cited by 1 · 2022
HighMMT: Quantifying Modality & Interaction Heterogeneity for High-Modality Representation Learning
PP Liang, Y Lyu, X Fan, J Tsaw, Y Liu, S Mo, D Yogatama, LP Morency, ...
arXiv preprint arXiv:2203.01311, 2022
2022