[Paper Review] LARGER LANGUAGE MODELS DO IN-CONTEXT LEARNING DIFFERENTLY

Paper: Larger Language Models Do In-Context Learning Differently
https://arxiv.org/abs/2303.03846

From the abstract: We study how in-context learning (ICL) in language models is affected by semantic priors versus input-label mappings. We investigate two setups, ICL with flipped labels and ICL with semantically-unrelated labels, across various model families (GP..
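
To make the two setups named in the abstract concrete, here is a minimal sketch (not the paper's code) of how flipped-label and semantically-unrelated-label ICL prompts could be constructed. The example sentences, label words, and the `build_icl_prompt` helper are illustrative assumptions, not the paper's actual data or implementation.

```python
def build_icl_prompt(examples, query, label_map):
    """Format in-context demonstrations and a query, mapping each gold
    label through `label_map` before it appears in the prompt."""
    lines = []
    for text, gold_label in examples:
        lines.append(f"Input: {text}\nLabel: {label_map[gold_label]}")
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

# Toy sentiment demonstrations (hypothetical data).
demos = [
    ("The movie was fantastic.", "positive"),
    ("I hated every minute of it.", "negative"),
]
query = "What a wonderful surprise."

# Regular ICL: labels keep their natural meaning.
regular = {"positive": "positive", "negative": "negative"}

# Flipped-label ICL: every demonstration label is inverted, so the model
# must override its semantic prior to follow the in-context mapping.
flipped = {"positive": "negative", "negative": "positive"}

# Semantically-unrelated-label ICL: label words carry no sentiment
# meaning, so the mapping can only be learned from the demonstrations.
unrelated = {"positive": "foo", "negative": "bar"}

for name, mapping in [("regular", regular), ("flipped", flipped),
                      ("unrelated", unrelated)]:
    print(f"--- {name} ---")
    print(build_icl_prompt(demos, query, mapping))
    print()
```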