Cheng-Yu Hsieh
Verified email at cs.washington.edu
Title · Cited by · Year
On the (In)fidelity and Sensitivity of Explanations
CK Yeh, CY Hsieh, A Suggala, DI Inouye, PK Ravikumar
Advances in Neural Information Processing Systems, 10965-10976, 2019
430 · 2019
Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes
CY Hsieh, CL Li, CK Yeh, H Nakhost, Y Fujii, A Ratner, R Krishna, CY Lee, ...
arXiv preprint arXiv:2305.02301, 2023
257 · 2023
A survey on programmatic weak supervision
J Zhang, CY Hsieh, Y Yu, C Zhang, A Ratner
arXiv preprint arXiv:2202.05433, 2022
84 · 2022
Evaluations and Methods for Explanation through Robustness Analysis
CY Hsieh, CK Yeh, X Liu, P Ravikumar, S Kim, S Kumar, CJ Hsieh
International Conference on Learning Representations, 2021
58 · 2021
Automatic bridge bidding using deep reinforcement learning
CK Yeh, CY Hsieh, HT Lin
IEEE Transactions on Games 10 (4), 365-377, 2018
53 · 2018
SugarCrepe: Fixing hackable benchmarks for vision-language compositionality
CY Hsieh, J Zhang, Z Ma, A Kembhavi, R Krishna
Advances in neural information processing systems 36, 2024
52 · 2024
Tool documentation enables zero-shot tool-usage with large language models
CY Hsieh, SA Chen, CL Li, Y Fujii, A Ratner, CY Lee, R Krishna, T Pfister
arXiv preprint arXiv:2308.00675, 2023
34 · 2023
How sensitive are sensitivity-based explanations?
CK Yeh, CY Hsieh, AS Suggala, D Inouye, P Ravikumar
arXiv preprint arXiv:1901.09392, 52, 2019
24 · 2019
A deep model with local surrogate loss for general cost-sensitive multi-label learning
CY Hsieh, YA Lin, HT Lin
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
17 · 2018
Nemo: Guiding and contextualizing weak supervision for interactive data programming
CY Hsieh, J Zhang, A Ratner
Proceedings of the VLDB Endowment 15 (13), 4093-4105, 2022
13 · 2022
Understanding Programmatic Weak Supervision via Source-aware Influence Function
J Zhang, H Wang, CY Hsieh, A Ratner
Advances in Neural Information Processing Systems, 2022
12 · 2022
A pseudo-label method for coarse-to-fine multi-label learning with limited supervision
CY Hsieh, M Xu, G Niu, HT Lin, M Sugiyama
Learning from Limited Labeled Data Workshop @ ICLR '19, 2019
9 · 2019
DataComp-LM: In search of the next generation of training sets for language models
J Li, A Fang, G Smyrnis, M Ivgi, M Jordan, S Gadre, H Bansal, E Guha, ...
arXiv preprint arXiv:2406.11794, 2024
2 · 2024
Active refinement for multi-label learning: a pseudo-label approach
CY Hsieh, WI Lin, M Xu, G Niu, HT Lin, M Sugiyama
arXiv preprint arXiv:2109.14676, 2021
1 · 2021
Graph-Based Captioning: Enhancing Visual Descriptions by Interconnecting Region Captions
YG Hsieh, CY Hsieh, SY Yeh, L Béthune, HP Ansari, PKA Vasu, CL Li, ...
arXiv preprint arXiv:2407.06723, 2024
2024
Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps
YS Chuang, L Qiu, CY Hsieh, R Krishna, Y Kim, J Glass
arXiv preprint arXiv:2407.07071, 2024
2024
Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization
CY Hsieh, YS Chuang, CL Li, Z Wang, LT Le, A Kumar, J Glass, A Ratner, ...
arXiv preprint arXiv:2406.16008, 2024
2024
The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better
S Geng, CY Hsieh, V Ramanujan, M Wallingford, CL Li, PW Koh, ...
arXiv preprint arXiv:2406.05184, 2024
2024
A deep model with local surrogate loss for general cost-sensitive multi-label learning (in Chinese)
CY Hsieh
Degree thesis, Department of Computer Science and Information Engineering, National Taiwan University, 1-28, 2018
2018
The Hard Positive Truth about Vision-Language Compositionality
A Kamath, CY Hsieh, KW Chang, R Krishna