Myeongjun Erik Jang
Other names: Myeongjun Jang
Verified email at cs.ox.ac.uk
Title · Cited by · Year
Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning
M Jang, S Seo, P Kang
Information Sciences 490, 59-73, 2019
Cited by 60 · 2019
Consistency analysis of ChatGPT
ME Jang, T Lukasiewicz
arXiv preprint arXiv:2303.06273, 2023
Cited by 37 · 2023
Unusual customer response identification and visualization based on text mining and anomaly detection
S Seo, D Seo, M Jang, J Jeong, P Kang
Expert Systems with Applications 144, 113111, 2020
Cited by 24 · 2020
BECEL: Benchmark for consistency evaluation of language models
M Jang, DS Kwon, T Lukasiewicz
Proceedings of the 29th International Conference on Computational …, 2022
Cited by 23 · 2022
Intrusion detection based on sequential information preserving log embedding methods and anomaly detection algorithms
C Kim, M Jang, S Seo, K Park, P Kang
IEEE Access 9, 58088-58101, 2021
Cited by 18 · 2021
Learning-free unsupervised extractive summarization model
M Jang, P Kang
IEEE Access 9, 14358-14368, 2021
Cited by 18 · 2021
Text classification based on convolutional neural network with word and character level
K Mo, J Park, M Jang, P Kang
Journal of the Korean Institute of Industrial Engineers 44 (3), 180-188, 2018
Cited by 13 · 2018
KoBEST: Korean balanced evaluation of significant tasks
M Jang, D Kim, DS Kwon, E Davis
Proceedings of the 29th International Conference on Computational …, 2022
Cited by 10* · 2022
Beyond distributional hypothesis: Let language models learn meaning-text correspondence
M Jang, F Mtumbuka, T Lukasiewicz
arXiv preprint arXiv:2205.03815, 2022
Cited by 6 · 2022
Accurate, yet inconsistent? consistency analysis on language understanding models
M Jang, DS Kwon, T Lukasiewicz
arXiv preprint arXiv:2108.06665, 2021
Cited by 6 · 2021
Paraphrase thought: Sentence embedding module imitating human language recognition
M Jang, P Kang
Information Sciences 541, 123-135, 2020
Cited by 5 · 2020
KNOW how to make up your mind! adversarially detecting and alleviating inconsistencies in natural language explanations
M Jang, BP Majumder, J McAuley, T Lukasiewicz, OM Camburu
arXiv preprint arXiv:2306.02980, 2023
Cited by 4 · 2023
Are Training Resources Insufficient? Predict First Then Explain!
M Jang, T Lukasiewicz
arXiv preprint arXiv:2110.02056, 2021
Cited by 4 · 2021
NoiER: an approach for training more reliable fine-tuned downstream task models
M Jang, T Lukasiewicz
IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 2514-2525, 2022
Cited by 3 · 2022
Sentence transition matrix: An efficient approach that preserves sentence semantics
M Jang, P Kang
Computer Speech & Language 71, 101266, 2022
Cited by 2 · 2022
A robust deep learning platform to predict CD8+ T-cell epitopes
CH Lee, J Huh, PR Buckley, M Jang, M Pereira Pinho, RA Fernandes, ...
bioRxiv, 2022.12.29.522182, 2022
Cited by 1 · 2022
Pre-training and diagnosing knowledge base completion models
V Kocijan, M Jang, T Lukasiewicz
Artificial Intelligence, 104081, 2024
2024
Improving Language Models Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary
ME Jang, T Lukasiewicz
arXiv preprint arXiv:2310.15541, 2023
2023
A robust deep learning platform to predict CD8+ T-cell epitopes (preprint)
CHJ Lee, J Huh, PR Buckley, MJ Jang, MP Pinho, R Fernandes, ...
2022
Corrigendum to “Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning” [Information Sciences 490 (2019) 59-73] (S0020025519302786) (10 …
M Jang, S Seo, P Kang
Information Sciences 512, 277, 2020
2020
Articles 1–20