
NLP/Paper Review (63)
[Paper Review] Active Retrieval Augmented Generation
Paper: Active Retrieval Augmented Generation
Link: https://arxiv.org/abs/2305.06983
Despite the remarkable ability of large language models (LMs) to comprehend and generate language, they have a tendency to hallucinate and create factually inaccurate output. Augmenting LMs by retrieving information from external knowledge resources is one..
Summarizing only the idea..
[Paper Review] Should You Mask 15% in Masked Language Modeling?
Paper: Should You Mask 15% in Masked Language Modeling?
Link: https://arxiv.org/abs/2202.08005
Masked language models (MLMs) conventionally mask 15% of tokens due to the belief that more masking would leave insufficient context to learn good representations; this masking rate has been widely used, regardless of model sizes or masking strategies. In..
[Paper Review] SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization
Paper: SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization
Link: https://arxiv.org/abs/2212.10465
We present SODA: the first publicly available, million-scale high-quality social dialogue dataset. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5..
[Paper Review] Pre-Training to Learn in Context
Paper: Pre-Training to Learn in Context
Link: https://arxiv.org/abs/2305.09137
In-context learning, where pre-trained language models learn to perform tasks from task examples and instructions in their contexts, has attracted much attention in the NLP community. However, the ability of in-context learning is not fully exploited becau..
Summarizing only the idea. Idea: Existing..
[Paper Review] Diffuser: Efficient Transformers with Multi-hop Attention Diffusion for Long Sequences
Paper: Diffuser: Efficient Transformers with Multi-hop Attention Diffusion for Long Sequences
Link: https://arxiv.org/abs/2210.11794
Efficient Transformers have been developed for long sequence modeling, due to their subquadratic memory and time complexity. Sparse Transformer is a popular approach to improving t..
[Paper Review] Unmasked Teacher: Towards Training-Efficient Video Foundation Models
Paper: Unmasked Teacher: Towards Training-Efficient Video Foundation Models
Link: https://arxiv.org/abs/2303.16058v1
Video Foundation Models (VFMs) have received limited exploration due to high computational costs and data scarcity. Previous VFMs rely on Image Foundation Models (IFMs), which face challenges in transferring to the..
[Paper Review] GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher
Paper: GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher
Link: https://arxiv.org/abs/2308.06463
Safety lies at the core of the development of Large Language Models (LLMs). There is ample work on aligning LLMs with human ethics and preferences, including data filtering in pretraining, supervised fine-tuning, reinforce..
[Paper Review] Dataset Distillation with Attention Labels for Fine-tuning BERT
Paper: Dataset Distillation with Attention Labels for Fine-tuning BERT
Link: https://aclanthology.org/2023.acl-short.12/
Aru Maekawa, Naoki Kobayashi, Kotaro Funakoshi, Manabu Okumura. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2023.
Summarizing only the idea. Da..