
vLLM, TensorRT-LLM, TGI v3, and LMDeploy: A Technical Breakdown of LLM Inference Performance

NVIDIA’s TensorRT-LLM achieves over 10,000 output tokens/s on H100 GPUs, offering 4.6× higher throughput and 4.4× faster first-token latency than the A100 on the same models.

As production-scale large language model (LLM) serving evolves, it is increasingly clear that the hard problem is not the generate() loop itself but the inference stack around it. That stack determines the metrics that matter: tokens per second, tail latency, and ultimately the cost per million tokens on a given GPU fleet.

This article examines four prominent inference stacks currently in use:

  • vLLM
  • NVIDIA TensorRT-LLM
  • Hugging Face Text Generation Inference (TGI v3)
  • LMDeploy

1. vLLM: PagedAttention as the Open Baseline

The core innovation behind vLLM is the implementation of PagedAttention, which treats the key-value (KV) cache like paged virtual memory rather than a single contiguous buffer. This method drastically reduces external fragmentation and allows for a higher number of concurrent sequences within the same VRAM.

  • vLLM divides the KV cache into fixed-size blocks.
  • It maintains a block table that maps logical tokens to physical blocks.
  • It shares blocks across sequences when prefixes overlap.

This architecture results in a 2–4× improvement in throughput compared to systems like FasterTransformer and Orca, especially for longer sequences. The system also supports continuous batching, merging incoming requests into existing GPU batches, thus improving efficiency.
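The paging mechanism above can be sketched in a few lines. This is an illustrative toy, not vLLM's actual implementation: the block size, the `BlockTable` and `Allocator` names, and the free-list allocator are all assumptions made for clarity. The key idea is that a sequence only allocates a new physical KV block when its length crosses a block boundary, so VRAM is consumed in small fixed-size units rather than one large contiguous reservation.

```python
# Toy sketch of PagedAttention-style KV-cache paging (not vLLM's real code):
# logical token positions map to fixed-size physical blocks via a block table.
BLOCK_SIZE = 16  # tokens per KV block (vLLM's default block size)

class Allocator:
    """Hands out physical block ids from a free list."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))

    def allocate(self):
        return self.free.pop()

class BlockTable:
    """Maps a sequence's logical block numbers to physical block ids."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.physical_blocks = []  # index = logical block number

    def append_token(self, logical_pos):
        # A new physical block is needed only at a block boundary.
        if logical_pos % BLOCK_SIZE == 0:
            self.physical_blocks.append(self.allocator.allocate())

alloc = Allocator(num_blocks=1024)
seq = BlockTable(alloc)
for pos in range(40):            # a 40-token sequence
    seq.append_token(pos)
# 40 tokens at block size 16 -> ceil(40/16) = 3 physical blocks
print(len(seq.physical_blocks))  # 3
```

Prefix sharing falls out of the same structure: two sequences with a common prefix can point their leading block-table entries at the same physical blocks.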

2. TensorRT-LLM: Maximizing NVIDIA GPU Performance

TensorRT-LLM is NVIDIA’s specialized inference library designed for optimal performance on its GPUs. It incorporates custom attention kernels, inflight batching, and quantization down to FP4 and INT4, particularly leveraging FP8 tensor cores on Hopper and Blackwell architectures.

Performance metrics reveal that on H100 GPUs with FP8, TensorRT-LLM achieves over 10,000 output tokens/s at peak throughput for 64 concurrent requests, with a time to first token around 100 ms. Notably, it offers up to 4.6× higher maximum throughput and 4.4× faster first token latency compared to the A100 on the same models.
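The cited peak numbers can be sanity-checked with simple arithmetic. The reply length below is a hypothetical input chosen for illustration; the throughput, concurrency, and TTFT figures are the ones quoted above.

```python
# Back-of-the-envelope check on the cited H100 FP8 peak numbers.
peak_tokens_per_s = 10_000   # aggregate output tokens/s (cited)
concurrency = 64             # concurrent requests (cited)
ttft_s = 0.100               # time to first token (cited, ~100 ms)

# Aggregate throughput divided across the batch gives per-request decode speed.
per_request_tps = peak_tokens_per_s / concurrency
print(f"{per_request_tps:.2f} tokens/s per request")  # 156.25 tokens/s

# For a hypothetical 500-token reply, latency ≈ TTFT + tokens / decode speed.
reply_tokens = 500
latency_s = ttft_s + reply_tokens / per_request_tps
print(f"{latency_s:.2f} s end-to-end")  # 3.30 s
```

Roughly 156 tokens/s per stream at full batch is comfortably faster than human reading speed, which is why such high concurrency is usable in practice.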

3. Hugging Face TGI v3: Specializing in Long Prompts

Text Generation Inference (TGI) v3 is Hugging Face's serving stack, built in Rust and Python. This release emphasizes efficient handling of long prompts through techniques such as prompt chunking and prefix caching.

According to benchmarks, TGI v3 can serve a conversation reply that takes 27.5 seconds in vLLM in about 2 seconds, translating to a 13× speedup for long prompts exceeding 200,000 tokens. This is largely attributed to the system’s ability to maintain conversation context in a prefix cache, minimizing the computational overhead for subsequent turns.
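A minimal sketch of the prefix-caching idea follows. This is illustrative, not TGI's internals: the `PrefixCache` class, hashing scheme, and linear longest-prefix search are assumptions made for clarity. The point is that once the KV state for a conversation's token prefix is cached, a follow-up turn only pays prefill cost for its newly appended tokens.

```python
# Toy prefix cache: KV state for a token prefix is keyed by a hash of the
# token ids, so subsequent turns reuse it instead of re-running prefill.
import hashlib

class PrefixCache:
    def __init__(self):
        self.cache = {}  # prefix hash -> (num_cached_tokens, kv_handle)

    def _key(self, tokens):
        return hashlib.sha256(str(tokens).encode("utf-8")).hexdigest()

    def lookup(self, tokens):
        """Return the length of the longest cached prefix of `tokens`."""
        for end in range(len(tokens), 0, -1):
            if self._key(tokens[:end]) in self.cache:
                return end
        return 0

    def store(self, tokens, kv_handle):
        self.cache[self._key(tokens)] = (len(tokens), kv_handle)

cache = PrefixCache()
turn1 = list(range(1000))                # stand-in for 1,000 prompt token ids
cache.store(turn1, kv_handle="kv0")      # cache KV state after turn one
turn2 = turn1 + list(range(1000, 1050))  # follow-up turn appends 50 tokens
cached = cache.lookup(turn2)
print(f"prefill only {len(turn2) - cached} of {len(turn2)} tokens")
# → prefill only 50 of 1050 tokens
```

With a 200,000-token context, skipping all but the last few hundred tokens of prefill is exactly where a 13× speedup for follow-up turns becomes plausible.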

4. LMDeploy: TurboMind with Blocked KV and Aggressive Quantization

LMDeploy, part of the InternLM ecosystem, focuses on high-throughput request serving and includes a blocked KV cache along with continuous batching. It emphasizes aggressive quantization strategies to improve performance.

Reportedly, LMDeploy delivers up to 1.8× higher request throughput than vLLM, aided by its blocked KV cache, dynamic split-and-fuse scheduling, and optimized CUDA kernels. Its architecture supports multi-model deployments with routing logic that selects a model based on request metadata.
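The aggressive-quantization angle can be illustrated with per-group 4-bit weight quantization, the general technique behind W4A16 schemes such as AWQ. This is a pedagogical sketch, not LMDeploy's actual kernel; the tiny group size and the specific weights are illustrative assumptions (real deployments typically use groups of 64 or 128).

```python
# Toy symmetric per-group INT4 weight quantization: each group of weights
# shares one floating-point scale, and values are rounded to signed 4-bit
# integers in [-8, 7]. Dequantization multiplies back by the scale.
GROUP_SIZE = 4  # illustrative; real group sizes are usually 64 or 128

def quantize_group(weights):
    # Scale so the largest magnitude maps near the INT4 positive limit (7).
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.56, 0.33, 0.70]      # one group of FP weights (illustrative)
q, s = quantize_group(w)
w_hat = dequantize_group(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q)                            # [1, -6, 3, 7]
print(round(err, 3))                # worst-case error within this group
```

Storing 4-bit integers plus one scale per group cuts weight memory roughly 4× versus FP16, which is what lets larger models fit and more KV blocks stay resident on the same GPU.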

Choosing the Right Stack

  • For maximum throughput and low time to first token on NVIDIA GPUs, TensorRT-LLM is the optimal choice, leveraging advanced features like FP8 and speculative decoding.
  • If handling long, reusable prompts, especially in RAG over large contexts, TGI v3 stands out due to its prefix caching method.
  • For an open, straightforward engine that provides solid baseline performance, vLLM remains a strong candidate.
  • For deploying open models with a focus on aggressive quantization, LMDeploy is a fitting choice, particularly when working with models like InternLM.

As organizations navigate these options, many development teams find success by mixing different systems to align throughput, latency, and KV behavior with their specific workloads. Understanding these dynamics is crucial for optimizing costs and performance in LLM serving.

Written By: The AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.