- "Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?" R. Dong, Z. Qi, L. Zhang, J. Zhang, J. Sun, Z. Ge, L. Yi, K. Ma. ICLR 2023 (2022). Cited by 54.
- "Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining." Z. Qi, R. Dong, G. Fan, Z. Ge, X. Zhang, K. Ma, L. Yi. ICML 2023. Cited by 53.
- "DreamLLM: Synergistic Multimodal Comprehension and Creation." R. Dong, C. Han, Y. Peng, Z. Qi, Z. Ge, J. Yang, L. Zhao, J. Sun, H. Zhou, H. Wei, ... ICLR 2024, Spotlight (2023). Cited by 46.
- "VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation." Z. Qi, M. Yu, R. Dong, K. Ma. NeurIPS 2023. Cited by 5*.
- "Point-GCC: Universal Self-Supervised 3D Scene Pre-Training via Geometry-Color Contrast." G. Fan, Z. Qi, W. Shi, K. Ma. arXiv preprint arXiv:2305.19623, 2023. Cited by 3.
- "ShapeLLM: Universal 3D Object Understanding for Embodied Interaction." Z. Qi, R. Dong, S. Zhang, H. Geng, C. Han, Z. Ge, L. Yi, K. Ma. arXiv preprint arXiv:2402.17766, 2024. Cited by 2.