NLP / Paper Reviews (65)
[Paper Review] ConvGQR: Generative Query Reformulation for Conversational Search
Link: https://arxiv.org/abs/2305.15645
In conversational search, the user's real search intent for the current turn is dependent on the previous conversation history. It is challenging to determine a good search query from the whole conversation context. To avoi…
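The core mechanics are easy to sketch: one generative model rewrites the conversational query into a self-contained question, another expands it, and the two outputs are concatenated into the final search query. Below is a minimal illustration of that pipeline, assuming generic t5-base checkpoints and a simple [SEP] flattening of the dialogue; these are stand-ins for the paper's fine-tuned models and input format, not its released artifacts.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
rewriter = AutoModelForSeq2SeqLM.from_pretrained("t5-base")   # query rewriting
expander = AutoModelForSeq2SeqLM.from_pretrained("t5-base")   # query expansion

def reformulate(history, current_query):
    # Flatten the dialogue so the models can resolve references like "it".
    context = " [SEP] ".join(history + [current_query])
    inputs = tokenizer(context, return_tensors="pt", truncation=True)
    rewrite = tokenizer.decode(
        rewriter.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True)
    expansion = tokenizer.decode(
        expander.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True)
    return f"{rewrite} {expansion}"   # pass to an off-the-shelf retriever
```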
[Paper Review] LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning
Link: https://arxiv.org/abs/2308.03303
The low-rank adaptation (LoRA) method can largely reduce the amount of trainable parameters for fine-tuning large language models (LLMs); however, it still requires expensive activation m…
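The truncated sentence refers to activation memory: standard LoRA must keep the full input activations around to compute the gradient of the down-projection A. LoRA-FA's fix is to freeze A after initialization and train only B, so only the small rank-dimensional activations need to be stored. A minimal PyTorch sketch of that idea, with illustrative shapes and init rather than the paper's exact recipe:

```python
import torch
import torch.nn as nn

class LoRAFALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # pretrained weight stays frozen
            p.requires_grad = False
        # Frozen down-projection: LoRA-FA's key change versus plain LoRA.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01,
                              requires_grad=False)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # trainable
        self.scale = alpha / rank

    def forward(self, x):
        # Only the rank-dim activations x @ A.T are needed for B's gradient,
        # which is where the activation-memory saving comes from.
        return self.base(x) + (x @ self.A.T) @ self.B.T * self.scale
```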
[Paper Review] REPLUG: Retrieval-Augmented Black-Box Language Models
Link: https://arxiv.org/abs/2301.12652
We introduce REPLUG, a retrieval-augmented language modeling framework that treats the language model (LM) as a black box and augments it with a tuneable retrieval model. Unlike prior retrieval-augmented LMs that train language models with special…
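REPLUG's ensemble scheme is simple enough to sketch: each retrieved document is prepended to the input separately, the black-box LM is called once per document, and the resulting next-token distributions are averaged with weights from the softmaxed retrieval scores. In the sketch below, `lm_probs` is a hypothetical wrapper around whatever black-box LM API is available, not a real library call.

```python
import numpy as np

def replug_next_token_probs(query, docs, retrieval_scores, lm_probs):
    # lm_probs(prompt) -> next-token distribution from one black-box LM call.
    scores = np.asarray(retrieval_scores, dtype=float)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over retrieval scores
    dists = [lm_probs(f"{doc}\n\n{query}") for doc in docs]
    return sum(w * p for w, p in zip(weights, dists))
```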
[Paper Review] Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts
Link: https://aclanthology.org/2023.acl-short.21/
Skyler Hallinan, Alisa Liu, Yejin Choi, Maarten Sap. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023.
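MaRCo steers revision with a product of experts: a base model's token distribution is pushed toward a non-toxic expert and away from a toxic anti-expert. A minimal sketch of that combination step, assuming the three distributions are already computed; the paper additionally uses expert/anti-expert disagreement to locate which spans to mask and rewrite, which this sketch omits.

```python
import numpy as np

def combine(p_base, p_expert, p_anti, alpha=1.0, eps=1e-12):
    # Up-weight tokens the non-toxic expert favors, down-weight tokens
    # the toxic anti-expert favors; alpha controls steering strength.
    logits = (np.log(p_base + eps)
              + alpha * (np.log(p_expert + eps) - np.log(p_anti + eps)))
    z = np.exp(logits - logits.max())
    return z / z.sum()
```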
[Paper Review] Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning
Link: https://arxiv.org/abs/2311.11551
Large language models (LLMs) have showcased their capability with few-shot inference known as in-context learning. However, in-domain demonstrations are not always readily available in real scen…
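The retrieval-augmented ICL recipe itself is straightforward to sketch: for each test input, retrieve the most similar examples from a target-domain pool by embedding similarity and prepend them as demonstrations. In the sketch below, `embed` is a stand-in for any sentence encoder, and the prompt layout is illustrative rather than the paper's exact format.

```python
import numpy as np

def retrieve(query_vec, pool_vecs, k=4):
    # Cosine similarity between the query and every pooled example.
    sims = pool_vecs @ query_vec / (
        np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(-sims)[:k]

def build_prompt(test_input, pool_texts, pool_vecs, embed, k=4):
    idx = retrieve(embed(test_input), pool_vecs, k)
    demos = "\n\n".join(pool_texts[i] for i in idx)
    return f"{demos}\n\n{test_input}"
```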
[Paper Review] Compressing Context to Enhance Inference Efficiency of Large Language Models
Link: https://arxiv.org/abs/2310.06201
Large language models (LLMs) achieved remarkable performance across various tasks. However, they face challenges in managing long documents and extended conversations due to significantly increased co…
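The underlying idea is to score context by its informativeness under a small causal LM and drop the least informative parts before feeding the rest to the large model. A rough GPT-2-based illustration of token-level surprisal pruning; the paper prunes at a coarser lexical-unit level with further details this sketch omits.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def compress(text: str, keep: float = 0.7) -> str:
    ids = tok(text, return_tensors="pt").input_ids        # (1, seq)
    logits = lm(ids).logits                               # (1, seq, vocab)
    # Surprisal of each token given the tokens before it.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    surprisal = -logprobs.gather(1, ids[0, 1:, None]).squeeze(1)
    k = max(1, int(surprisal.numel() * keep))
    kept = torch.topk(surprisal, k).indices.sort().values  # keep original order
    return tok.decode(ids[0, 1:][kept])
```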
[Paper Review] RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation
Link: https://arxiv.org/abs/2310.04408
Retrieving documents and prepending them in-context at inference time improves performance of language models (LMs) on a wide range of tasks. However, these documents, often spanning h…
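A RECOMP-style extractive compressor can be sketched as sentence selection: score each sentence of the retrieved documents against the query, keep only the top few, and optionally return an empty summary when nothing scores high enough (the paper's selective augmentation). `embed` and the threshold value below are again stand-ins, not the paper's trained dual encoder.

```python
import numpy as np

def compress(query, sentences, embed, top_k=2, threshold=0.3):
    q = embed(query)
    vecs = [embed(s) for s in sentences]
    scores = np.array([v @ q / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-9)
                       for v in vecs])
    # Keep the top sentences, but only those above the relevance threshold.
    keep = [i for i in np.argsort(-scores)[:top_k] if scores[i] >= threshold]
    return " ".join(sentences[i] for i in sorted(keep))  # may be empty
```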
[Paper Review] Lost in the Middle: How Language Models Use Long Contexts
Link: https://arxiv.org/abs/2307.03172
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant…
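The paper's probe is easy to reproduce in outline: place the one document containing the answer at each position among distractors, ask the same question, and record accuracy per position. In the sketch below, `ask_lm` is a hypothetical wrapper around the model under test, and substring matching stands in for the paper's answer-scoring details.

```python
def position_sweep(question, gold_doc, distractors, answer, ask_lm):
    accuracy = {}
    for pos in range(len(distractors) + 1):
        # Insert the gold document at position `pos` among the distractors.
        docs = distractors[:pos] + [gold_doc] + distractors[pos:]
        prompt = "\n\n".join(docs) + f"\n\nQ: {question}\nA:"
        pred = ask_lm(prompt)
        accuracy[pos] = float(answer.lower() in pred.lower())
    return accuracy  # typically U-shaped: best when the gold doc is first or last
```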