vLLM, TensorRT-LLM, TGI v3, and LMDeploy: A Technical Breakdown of LLM Inference Performance

NVIDIA’s TensorRT-LLM achieves over 10,000 output tokens/s on H100 GPUs, delivering up to 4.6× higher throughput and 4.4× faster time to first token than the A100 running the same models.

As production-level large language model (LLM) serving continues to evolve, it is increasingly clear that the challenge lies less in the model’s generate() loop than in the surrounding inference stack, which determines tokens per second, tail latency, and ultimately the cost per million tokens on a given GPU fleet.
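
To make the cost framing concrete, here is a quick back-of-the-envelope calculation; the hourly GPU price and throughput figure are illustrative assumptions, not benchmark results:

```python
# Back-of-the-envelope cost per million output tokens.
# Both inputs are illustrative assumptions, not measured values.
gpu_cost_per_hour = 4.00      # assumed H100 on-demand price, USD per GPU-hour
tokens_per_second = 10_000    # assumed aggregate output throughput across all concurrent requests

tokens_per_hour = tokens_per_second * 3600
cost_per_million_tokens = gpu_cost_per_hour / tokens_per_hour * 1_000_000
print(f"~${cost_per_million_tokens:.3f} per 1M output tokens")  # ~$0.111 under these assumptions
```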

This article examines four prominent inference stacks currently in use:

  • vLLM
  • NVIDIA TensorRT-LLM
  • Hugging Face Text Generation Inference (TGI v3)
  • LMDeploy

1. vLLM: PagedAttention as the Open Baseline

The core innovation behind vLLM is PagedAttention, which treats the key-value (KV) cache like paged virtual memory rather than a single contiguous buffer. This drastically reduces memory fragmentation and allows more concurrent sequences to fit in the same VRAM.

  • vLLM divides the KV cache into fixed-size blocks.
  • It maintains a block table that maps logical blocks to physical blocks (sketched below).
  • It shares blocks across sequences when prefixes overlap.
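
A toy sketch of that mapping, purely illustrative and not vLLM’s actual data structures: each sequence’s logical blocks index into a table of physical blocks, and sequences with a common prompt prefix can point at the same physical blocks.

```python
# Toy illustration of a paged KV cache block table; not vLLM's real internals.
BLOCK_SIZE = 16  # tokens stored per KV block

def physical_slot(block_table: list[int], token_pos: int) -> tuple[int, int]:
    """Map a token position to (physical block id, offset within that block)."""
    logical_block = token_pos // BLOCK_SIZE
    return block_table[logical_block], token_pos % BLOCK_SIZE

# Two sequences share a 32-token prompt prefix, so their first two logical blocks
# point at the same physical blocks (0 and 1); later tokens live in private blocks.
seq_a_table = [0, 1, 2]
seq_b_table = [0, 1, 3]

print(physical_slot(seq_a_table, 20))  # (1, 4)  -> shared prefix block
print(physical_slot(seq_b_table, 40))  # (3, 8)  -> sequence-private block
```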

This architecture yields a 2–4× throughput improvement over systems like FasterTransformer and Orca, especially for longer sequences. vLLM also supports continuous batching, merging incoming requests into batches already running on the GPU rather than waiting for the current batch to drain.
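
For reference, offline batched generation with vLLM looks roughly like the following; the model name is an illustrative choice, and KV-cache paging plus continuous batching happen inside the engine.

```python
# Minimal vLLM offline-inference sketch; the model name is an illustrative choice.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # KV-cache paging is handled internally
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = [
    "Explain PagedAttention in one sentence.",
    "Explain continuous batching in one sentence.",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```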

2. TensorRT-LLM: Maximizing NVIDIA GPU Performance

TensorRT-LLM is NVIDIA’s specialized inference library, designed to extract maximum performance from its GPUs. It incorporates custom attention kernels, in-flight batching, and quantization down to INT4 and FP4, and it leverages the FP8 tensor cores of the Hopper and Blackwell architectures.

Performance metrics reveal that on H100 GPUs with FP8, TensorRT-LLM achieves over 10,000 output tokens/s at peak throughput for 64 concurrent requests, with a time to first token around 100 ms. Notably, it offers up to 4.6× higher maximum throughput and 4.4× faster first token latency compared to the A100 on the same models.
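
Recent TensorRT-LLM releases also expose a high-level Python LLM API alongside the traditional engine-build workflow. A minimal sketch, with the caveat that the model name is illustrative and the API’s availability and exact signatures depend on the installed version:

```python
# Sketch of TensorRT-LLM's high-level LLM API (recent releases); model name is illustrative.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # builds or loads an optimized engine
params = SamplingParams(max_tokens=128)

for output in llm.generate(["Summarize in-flight batching in one sentence."], params):
    print(output.outputs[0].text)
```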

3. Hugging Face TGI v3: Specializing in Long Prompts

Text Generation Inference (TGI) v3 is Hugging Face’s serving stack, built in Rust and Python. This release emphasizes handling long prompts efficiently through techniques such as prompt chunking and prefix caching.

According to the published benchmarks, a conversation reply that takes vLLM 27.5 seconds can be served by TGI v3 in about 2 seconds, a roughly 13× speedup on long prompts exceeding 200,000 tokens. The gain is largely attributed to keeping the conversation’s context in a prefix cache, which minimizes recomputation on subsequent turns.
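
In practice, TGI is usually launched as a container and queried over HTTP. The sketch below assumes a locally running TGI v3 server; the image tag, port, and model are illustrative, and repeated calls sharing the same long context are exactly the case the prefix cache is designed to accelerate.

```python
# Assumes a TGI v3 server is already running, e.g. (illustrative image tag and model):
#   docker run --gpus all -p 8080:80 ghcr.io/huggingface/text-generation-inference:3.0.0 \
#       --model-id meta-llama/Llama-3.1-8B-Instruct
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")

# Repeated calls that share the same long context benefit from TGI's prefix cache.
long_document = "..."  # a long retrieved context, reused across turns
for question in ["What is the main finding?", "List the limitations."]:
    reply = client.text_generation(
        f"{long_document}\n\nQuestion: {question}",
        max_new_tokens=200,
    )
    print(reply)
```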

4. LMDeploy: TurboMind with Blocked KV and Aggressive Quantization

LMDeploy, part of the InternLM ecosystem, is built around the TurboMind engine and focuses on high-throughput request serving, combining a blocked KV cache with continuous batching. It leans heavily on aggressive quantization, including weight-only INT4 and KV-cache quantization, to improve performance.

Reportedly, LMDeploy delivers up to 1.8× higher request throughput than vLLM, aided by its blocked KV cache, dynamic split-and-fuse scheduling, and hand-optimized CUDA kernels. Its architecture also supports multi-model deployments, with routing logic that selects a model based on request metadata.
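
A minimal LMDeploy sketch using its pipeline API; the model name and engine settings are illustrative choices, and quant_policy here requests KV-cache quantization as supported in current releases.

```python
# Minimal LMDeploy / TurboMind sketch; model and engine settings are illustrative.
from lmdeploy import pipeline, TurbomindEngineConfig

engine_cfg = TurbomindEngineConfig(
    cache_max_entry_count=0.8,  # fraction of free GPU memory reserved for the blocked KV cache
    quant_policy=8,             # 8 -> INT8 KV-cache quantization (4 -> INT4); 0 disables it
)
pipe = pipeline("internlm/internlm2_5-7b-chat", backend_config=engine_cfg)

responses = pipe(["Explain blocked KV caching in one sentence."])
print(responses[0].text)
```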

Choosing the Right Stack

  • For maximum throughput and low time to first token on NVIDIA GPUs, TensorRT-LLM is the optimal choice, leveraging advanced features like FP8 and speculative decoding.
  • If handling long, reusable prompts, especially in RAG over large contexts, TGI v3 stands out due to its prefix caching method.
  • For an open, straightforward engine that provides a solid baseline performance, vLLM remains a strong candidate.
  • For deploying open models with a focus on aggressive quantization, LMDeploy is a fitting choice, particularly when working with models like InternLM.

As organizations navigate these options, many development teams find success by mixing different systems to align throughput, latency, and KV behavior with their specific workloads. Understanding these dynamics is crucial for optimizing costs and performance in LLM serving.
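
As one illustration of such mixing, a thin router can steer long, cache-friendly prompts toward a prefix-caching backend and everything else toward a throughput-optimized one. The endpoints, threshold, and payload below are purely hypothetical:

```python
# Hypothetical request router: endpoints, threshold, and model names are illustrative assumptions.
import requests

PREFIX_CACHE_BACKEND = "http://tgi-backend:8080/v1/chat/completions"   # e.g. a TGI v3 deployment
THROUGHPUT_BACKEND = "http://trtllm-backend:8000/v1/chat/completions"  # e.g. a TensorRT-LLM deployment
LONG_PROMPT_THRESHOLD = 32_000  # characters; tune per workload

def route(messages: list[dict]) -> str:
    """Send long-context requests to the prefix-caching backend, the rest to the fast one."""
    prompt_chars = sum(len(m["content"]) for m in messages)
    url = PREFIX_CACHE_BACKEND if prompt_chars > LONG_PROMPT_THRESHOLD else THROUGHPUT_BACKEND
    resp = requests.post(url, json={"model": "default", "messages": messages, "max_tokens": 256})
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```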
