publications
Natural Language Processing, Computer Vision, and Multimodal Learning.
2025
- An Empirical Study on Prompt Compression for Large Language Models. Zhang Zheng, Jinyi Li, Yihuai Lan, and 2 more authors. In ICLR 2025 Workshop on Building Trust in Language Models and Applications, 2025.
Prompt engineering enables Large Language Models (LLMs) to perform a variety of tasks. However, lengthy prompts significantly increase computational complexity and economic costs. To address this issue, we study six prompt compression methods for LLMs, aiming to reduce prompt length while maintaining LLM response quality. In this paper, we present a comprehensive analysis covering aspects such as generation performance, model hallucinations, efficacy in multimodal tasks, word omission analysis, and more. We evaluate these methods across 13 datasets, including news, scientific articles, commonsense QA, math QA, long-context QA, and VQA datasets. Our experiments reveal that prompt compression has a greater impact on LLM performance in long contexts compared to short ones. In the LongBench evaluation, moderate compression even enhances LLM performance.
@inproceedings{zheng2025empirical,
  title     = {An Empirical Study on Prompt Compression for Large Language Models},
  author    = {Zheng, Zhang and Li, Jinyi and Lan, Yihuai and Wang, Xiang and Wang, Hao},
  booktitle = {ICLR 2025 Workshop on Building Trust in Language Models and Applications},
  year      = {2025},
}
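The study above compares compressors by how much they shrink a prompt and how the compressed prompt changes the model's response. Below is a minimal sketch of that evaluation loop in Python, assuming whitespace token counts as a rough proxy for real tokenization; the compress and query_llm callables are hypothetical placeholders, not methods from the paper.

from typing import Callable

def compression_ratio(original: str, compressed: str) -> float:
    # Fraction of tokens removed; whitespace split is a rough proxy tokenizer.
    orig = len(original.split())
    comp = len(compressed.split())
    return 1.0 - comp / max(orig, 1)

def evaluate(prompts: list[str],
             compress: Callable[[str], str],
             query_llm: Callable[[str], str]) -> list[dict]:
    # Compress each prompt, then collect the LLM's answer to both versions
    # so response quality can be compared against the achieved ratio.
    results = []
    for prompt in prompts:
        short = compress(prompt)
        results.append({
            "ratio": compression_ratio(prompt, short),
            "original_answer": query_llm(prompt),
            "compressed_answer": query_llm(short),
        })
    return results

if __name__ == "__main__":
    # Toy stand-ins: keep every other word; echo the prompt length.
    toy_compress = lambda p: " ".join(p.split()[::2])
    toy_llm = lambda p: f"<answer to {len(p.split())}-word prompt>"
    print(evaluate(["Summarize the following article in two sentences ..."],
                   toy_compress, toy_llm))

Scoring the two answers against a reference (exact match, ROUGE, and so on) would then quantify the quality-versus-length trade-off the paper analyzes.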
2024
- PCToolkit: A Unified Plug-and-Play Prompt Compression Toolkit of Large Language Models. Jinyi Li, Yihuai Lan, Lei Wang, and 1 more author. 2024.
Prompt compression is an innovative method for efficiently condensing input prompts while preserving essential information. To facilitate quick-start services, user-friendly interfaces, and compatibility with common datasets and metrics, we present the Prompt Compression Toolkit (PCToolkit). This toolkit is a unified plug-and-play solution for compressing prompts in Large Language Models (LLMs), featuring cutting-edge prompt compressors, diverse datasets, and metrics for comprehensive performance evaluation. PCToolkit boasts a modular design, allowing for easy integration of new datasets and metrics through portable and user-friendly interfaces. In this paper, we outline the key components and functionalities of PCToolkit. We conducted evaluations of the compressors within PCToolkit across various natural language tasks, including reconstruction, summarization, mathematical problem-solving, question answering, few-shot learning, synthetic tasks, code completion, boolean expressions, multiple choice questions, and lies recognition.
@misc{li2024pctoolkit,
  title         = {PCToolkit: A Unified Plug-and-Play Prompt Compression Toolkit of Large Language Models},
  author        = {Li, Jinyi and Lan, Yihuai and Wang, Lei and Wang, Hao},
  year          = {2024},
  eprint        = {2403.17411},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CL},
  doi           = {10.48550/arXiv.2403.17411},
}
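To make the "plug-and-play" idea in the abstract above concrete, here is a hypothetical sketch of how a unified compression interface like PCToolkit's might be invoked. The pctoolkit import path, the PromptCompressor class, its constructor arguments, and the compressgo method are all assumptions inferred from the abstract, not the toolkit's confirmed API; consult the PCToolkit repository for the real interface.

# Hypothetical usage sketch: module path, class, and method names below are
# assumptions, not PCToolkit's documented API.
from pctoolkit.compressors import PromptCompressor  # assumed import path

# Select one of the bundled compressors by name (assumed argument names).
compressor = PromptCompressor(type="SCCompressor", device="cuda")

prompt = ("Answer the question using the context below.\n"
          "Context: <a long retrieved passage> ...\n"
          "Question: ...")

# Ask the compressor to keep roughly 30% of the original tokens (assumed knob).
compressed = compressor.compressgo(prompt, ratio=0.3)
print(compressed)

The point of such an interface is that swapping compressors, datasets, or metrics changes only the configuration, not the surrounding evaluation code, which is what the toolkit's modular design aims for.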