Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning M Jang, S Seo, P Kang Information Sciences 490, 59-73, 2019 | 60 | 2019 |
Consistency analysis of ChatGPT ME Jang, T Lukasiewicz arXiv preprint arXiv:2303.06273, 2023 | 37 | 2023 |
Unusual customer response identification and visualization based on text mining and anomaly detection S Seo, D Seo, M Jang, J Jeong, P Kang Expert Systems with Applications 144, 113111, 2020 | 24 | 2020 |
BECEL: Benchmark for consistency evaluation of language models M Jang, DS Kwon, T Lukasiewicz Proceedings of the 29th International Conference on Computational …, 2022 | 23 | 2022 |
Intrusion detection based on sequential information preserving log embedding methods and anomaly detection algorithms C Kim, M Jang, S Seo, K Park, P Kang IEEE Access 9, 58088-58101, 2021 | 18 | 2021 |
Learning-free unsupervised extractive summarization model M Jang, P Kang IEEE Access 9, 14358-14368, 2021 | 18 | 2021 |
Text classification based on convolutional neural network with word and character level K Mo, J Park, M Jang, P Kang Journal of the Korean Institute of Industrial Engineers 44 (3), 180-188, 2018 | 13 | 2018 |
KOBEST: Korean balanced evaluation of significant tasks M Jang, D Kim, DS Kwon, E Davis Proceedings of the 29th International Conference on Computational …, 2022 | 10* | 2022 |
Beyond distributional hypothesis: Let language models learn meaning-text correspondence M Jang, F Mtumbuka, T Lukasiewicz arXiv preprint arXiv:2205.03815, 2022 | 6 | 2022 |
Accurate, yet inconsistent? consistency analysis on language understanding models M Jang, DS Kwon, T Lukasiewicz arXiv preprint arXiv:2108.06665, 2021 | 6 | 2021 |
Paraphrase thought: Sentence embedding module imitating human language recognition M Jang, P Kang Information Sciences 541, 123-135, 2020 | 5 | 2020 |
KNOW how to make up your mind! adversarially detecting and alleviating inconsistencies in natural language explanations M Jang, BP Majumder, J McAuley, T Lukasiewicz, OM Camburu arXiv preprint arXiv:2306.02980, 2023 | 4 | 2023 |
Are Training Resources Insufficient? Predict First Then Explain! M Jang, T Lukasiewicz arXiv preprint arXiv:2110.02056, 2021 | 4 | 2021 |
NoiER: an approach for training more reliable fine-tuned downstream task models M Jang, T Lukasiewicz IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 2514-2525, 2022 | 3 | 2022 |
Sentence transition matrix: An efficient approach that preserves sentence semantics M Jang, P Kang Computer Speech & Language 71, 101266, 2022 | 2 | 2022 |
A robust deep learning platform to predict CD8+ T-cell epitopes CH Lee, J Huh, PR Buckley, M Jang, M Pereira Pinho, RA Fernandes, ... bioRxiv, 2022.12.29.522182, 2022 | 1 | 2022 |
Pre-training and diagnosing knowledge base completion models V Kocijan, M Jang, T Lukasiewicz Artificial Intelligence, 104081, 2024 | | 2024 |
Improving Language Models Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary ME Jang, T Lukasiewicz arXiv preprint arXiv:2310.15541, 2023 | | 2023 |
Corrigendum to "Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning" [Information Sciences 490 (2019) 59-73] M Jang, S Seo, P Kang Information Sciences 512, 277, 2020 | | 2020 |