Zero-to-strong generalization: eliciting strong capabilities of large language models iteratively without gold labels

Large Language Models (LLMs) have demonstrated remarkable performance through supervised fine-tuning or in-context learning using gold labels. However, this paradigm is limited by the availability of gold labels, while in certain scenarios, LLMs may need to perform tasks that are too complex for humans...
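The abstract is truncated before it describes the method, but the title points to an iterative, label-free elicitation loop: the model pseudo-labels unlabeled data, confident pseudo-labels seed the next round, and quality improves without any gold labels. The sketch below is one plausible reading of that idea, not the authors' implementation; all identifiers (Example, predict, zero_to_strong) and the toy confidence heuristic are hypothetical.

```python
# Minimal sketch of an iterative pseudo-labeling loop, assuming the
# "zero-to-strong" idea in the title: start from zero gold labels, let
# the model label unlabeled data, keep confident pseudo-labels as
# in-context demonstrations, and repeat. Hypothetical illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Example:
    text: str
    pseudo_label: Optional[str] = None
    confidence: float = 0.0

def predict(example: Example, demonstrations: list[Example]) -> tuple[str, float]:
    """Stand-in for an LLM call returning (label, confidence).

    A real system would prompt the model with the demonstrations
    (in-context learning) or fine-tune on them, then score the input.
    """
    label = "positive" if "good" in example.text else "negative"  # toy heuristic
    confidence = 0.9 if demonstrations else 0.6  # demonstrations raise confidence
    return label, confidence

def zero_to_strong(unlabeled: list[Example], rounds: int = 3,
                   threshold: float = 0.5) -> list[Example]:
    demonstrations: list[Example] = []  # zero gold labels to start
    for _ in range(rounds):
        for ex in unlabeled:
            ex.pseudo_label, ex.confidence = predict(ex, demonstrations)
        # Only confident pseudo-labels seed the next round.
        demonstrations = [ex for ex in unlabeled if ex.confidence >= threshold]
    return demonstrations

if __name__ == "__main__":
    data = [Example("good acting, good plot"), Example("a dull mess")]
    for ex in zero_to_strong(data):
        print(f"{ex.text} -> {ex.pseudo_label} ({ex.confidence:.1f})")
```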

Overview

Bibliographic Details
Main Authors: Liu, Chaoqun, Chao, Qin, Zhang, Wenxuan, Wu, Xiaobao, Li, Boyang, Luu, Anh Tuan, Bing, Lidong
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181455
https://coling2025.org/
Institution: Nanyang Technological University