ShortLang: Compressed Text for Efficient LLMs

Author: — Nov 2025

Abstract

The rapid increase in the size and complexity of modern language models has generated renewed interest in methods that reduce computational requirements without compromising semantic fidelity. This paper introduces ShortLang, a minimal-length, semantically preserving textual representation framework designed to reduce the cost of language model reasoning, training, and storage. ShortLang compresses natural language into a concise symbolic form while retaining core meaning, as measured by embedding similarity. It can serve as an intermediate representation for downstream tasks such as chunking, vector storage, retrieval-augmented generation, model training, and multi-step reasoning. This paper outlines the key principles behind ShortLang, methods for generating and validating ShortLang representations, and potential applications in large-scale machine learning systems. We further discuss strategies for building automatic ShortLang converters, including both rule-based systems and fine-tuned summarization models, and examine theoretical and practical considerations for future research. The source code is available at github.com/Pro-GenAI/ShortLang.
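
As a rough illustration of the embedding-similarity check mentioned above, the following Python sketch compares an original sentence with a hypothetical ShortLang rendering and accepts the compression only if their embeddings remain close. The sentence-transformers model, the 0.85 threshold, and the example compressed form are illustrative assumptions, not prescribed by ShortLang itself.

# Illustrative sketch: validate a ShortLang compression via embedding similarity.
# The model choice, threshold, and example compression are assumptions for
# demonstration only; they are not part of the ShortLang specification.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

original = "The meeting has been rescheduled to next Tuesday at 3 PM."
shortlang = "meeting -> next Tue 3pm"  # hypothetical compressed form

# Encode both texts and compute cosine similarity between the embeddings.
emb_original, emb_short = model.encode([original, shortlang])
similarity = util.cos_sim(emb_original, emb_short).item()

# Accept the compressed form only if it stays semantically close to the source.
if similarity >= 0.85:
    print(f"Accepted (cosine similarity = {similarity:.3f})")
else:
    print(f"Rejected (cosine similarity = {similarity:.3f})")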

Keywords: Large Language Models, LLMs, Generative AI, Artificial Intelligence, AI, Text Processing, Natural Language Processing, Text Summarization, Efficient AI, Cost-effective AI
