J Cui, Z Li, Y Yan, B Chen, L Yuan. ChatLaw: Open-source legal large language model with integrated external knowledge bases. arXiv preprint arXiv:2306.16092, 2023. Cited by 160.
B Lin, B Zhu, Y Ye, M Ning, P Jin, L Yuan. Video-LLaVA: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023. Cited by 133.
B Zhu, B Lin, M Ning, Y Yan, J Cui, HF Wang, Y Pang, W Jiang, J Zhang, ... LanguageBind: Extending video-language pretraining to N-modality by language-based semantic alignment. arXiv preprint arXiv:2310.01852, 2023. Cited by 55.
B Lin, Z Tang, Y Ye, J Cui, B Zhu, P Jin, J Zhang, M Ning, L Yuan. MoE-LLaVA: Mixture of experts for large vision-language models. arXiv preprint arXiv:2401.15947, 2024. Cited by 42.
M Ning, B Zhu, Y Xie, B Lin, J Cui, L Yuan, D Chen, L Yuan. Video-Bench: A comprehensive benchmark and toolkit for evaluating video-based large language models. arXiv preprint arXiv:2311.16103, 2023. Cited by 13.
L Lv, Z Lin, H Li, Y Liu, J Cui, CYC Chen, L Yuan, Y Tian. ProLLaMA: A protein large language model for multi-task protein language processing. arXiv preprint arXiv:2402.16445, 2024. Cited by 4.
D Yue, Q Guo, M Ning, J Cui, Y Zhu, L Yuan. ChatFace: Chat-guided real face editing via diffusion latent space manipulation. arXiv preprint arXiv:2305.14742, 2023. Cited by 4.
B Zhu, P Jin, M Ning, B Lin, J Huang, Q Song, M Pan, L Yuan. LLMBind: A unified modality-task integration framework. arXiv preprint arXiv:2402.14891, 2024. Cited by 3.
J Cui, L Lv, J Wen, J Tang, YH Tian, L Yuan. Machine Mindset: An MBTI exploration of large language models. arXiv preprint arXiv:2312.12999, 2023. Cited by 2.
J Cui, W Zhang, J Tang, X Tong, Z Zhang, J Wen, R Wang, P Wu. AnyTaskTune: Advanced domain-specific solutions through task-fine-tuning. arXiv preprint arXiv:2407.07094, 2024.
R Wang, H Chen, R Zhou, Y Duan, K Cai, H Ma, J Cui, J Li, PCI Pang, ... Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through instruction-tuning. arXiv preprint arXiv:2312.14557, 2023.