
vLLM, TensorRT-LLM, TGI v3, and LMDeploy: A Technical Breakdown of LLM Inference Performance

NVIDIA’s TensorRT-LLM exceeds 10,000 output tokens/s on H100 GPUs with FP8, delivering up to 4.6× higher throughput and 4.4× faster time to first token than the A100 on the same models.

As production-level large language model (LLM) serving continues to evolve, it is increasingly clear that the hard part is no longer the generate() loop itself but the inference stack built around it. That stack determines the metrics that matter: tokens per second, tail latency, and ultimately the cost per million tokens on a given GPU fleet.

This article examines four prominent inference stacks currently in use:

  • vLLM
  • NVIDIA TensorRT-LLM
  • Hugging Face Text Generation Inference (TGI v3)
  • LMDeploy

1. vLLM: PagedAttention as the Open Baseline

The core innovation behind vLLM is the implementation of PagedAttention, which treats the key-value (KV) cache like paged virtual memory rather than a single contiguous buffer. This method drastically reduces external fragmentation and allows for a higher number of concurrent sequences within the same VRAM.

  • vLLM divides the KV cache into fixed-size blocks.
  • It maintains a block table that maps logical tokens to physical blocks.
  • It shares blocks across sequences when prefixes overlap.

This architecture yields a 2–4× throughput improvement over systems like FasterTransformer and Orca, especially for longer sequences. vLLM also supports continuous batching, merging newly arrived requests into the in-flight GPU batch rather than waiting for the current batch to drain.
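The block-table mechanics above can be sketched in a few lines of Python. This is a toy model under assumed names and a simplified layout — vLLM’s real block manager lives at the CUDA level and handles eviction, copy-on-write, and attention kernel integration, none of which appear here.

```python
# Toy sketch of a PagedAttention-style block table (illustrative only).

BLOCK_SIZE = 16  # tokens per KV block (vLLM's default block size is 16)

class KVBlockManager:
    def __init__(self, num_physical_blocks):
        self.free = list(range(num_physical_blocks))  # free physical block ids
        self.refcount = {}                            # block id -> sharer count
        self.tables = {}                              # seq id -> [block ids]

    def allocate(self, seq_id, num_tokens):
        """Map a sequence's logical tokens onto physical blocks."""
        n_blocks = -(-num_tokens // BLOCK_SIZE)       # ceiling division
        blocks = [self.free.pop() for _ in range(n_blocks)]
        for b in blocks:
            self.refcount[b] = 1
        self.tables[seq_id] = blocks
        return blocks

    def fork(self, parent_id, child_id):
        """Share the parent's blocks with a child sequence (prefix sharing)."""
        blocks = self.tables[parent_id]
        for b in blocks:
            self.refcount[b] += 1                     # no copy, just a new reference
        self.tables[child_id] = list(blocks)

mgr = KVBlockManager(num_physical_blocks=64)
mgr.allocate("req-0", num_tokens=40)   # 40 tokens -> 3 blocks of 16
mgr.fork("req-0", "req-1")             # shared prefix costs zero extra VRAM
```

Because blocks are fixed-size and indirected through the table, a sequence’s KV cache no longer needs to be contiguous, which is exactly what eliminates the fragmentation that plagued contiguous-buffer designs.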

2. TensorRT-LLM: Maximizing NVIDIA GPU Performance

TensorRT-LLM is NVIDIA’s specialized inference library designed for optimal performance on its GPUs. It incorporates custom attention kernels, inflight batching, and quantization down to FP4 and INT4, particularly leveraging FP8 tensor cores on Hopper and Blackwell architectures.

Performance metrics reveal that on H100 GPUs with FP8, TensorRT-LLM achieves over 10,000 output tokens/s at peak throughput for 64 concurrent requests, with a time to first token around 100 ms. Notably, it offers up to 4.6× higher maximum throughput and 4.4× faster first token latency compared to the A100 on the same models.
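Those throughput numbers translate directly into serving cost. The back-of-envelope below uses the quoted 10,000 tokens/s figure; the hourly GPU price is an assumed placeholder, not a number from the benchmarks.

```python
# Back-of-envelope cost per million output tokens from the figures above.
# The $/hour GPU price is an assumption, not a quoted benchmark number.

peak_tokens_per_s = 10_000        # H100 + FP8, 64 concurrent requests (quoted)
gpu_price_per_hour = 3.00         # assumed H100 on-demand price, USD

tokens_per_hour = peak_tokens_per_s * 3600
cost_per_million = gpu_price_per_hour / (tokens_per_hour / 1_000_000)

print(f"{tokens_per_hour:,} output tokens/hour")
print(f"${cost_per_million:.4f} per 1M output tokens")
```

At these rates a single H100 produces 36 million output tokens per hour, so even small multipliers in peak throughput compound into large differences in fleet cost.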

3. Hugging Face TGI v3: Specializing in Long Prompts

Text Generation Inference (TGI) v3 is Hugging Face’s serving stack, built in Rust and Python. This version emphasizes efficient handling of long prompts through techniques like prompt chunking and prefix caching.

According to benchmarks, TGI v3 can serve a conversation reply that takes 27.5 seconds in vLLM in about 2 seconds, translating to a 13× speedup for long prompts exceeding 200,000 tokens. This is largely attributed to the system’s ability to maintain conversation context in a prefix cache, minimizing the computational overhead for subsequent turns.
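The prefix-caching idea behind that speedup can be sketched as follows: key precomputed KV state by a hash of the token prefix, so a follow-up turn in the same conversation only has to prefill its new suffix. Names and data structures here are hypothetical, not TGI’s internals.

```python
# Toy prefix cache: reuse KV state for a shared conversation prefix so only
# the new suffix needs prefill. Illustrative, not TGI's implementation.
import hashlib

class PrefixCache:
    def __init__(self):
        self._store = {}  # prefix hash -> (prefix_len, kv_state)

    @staticmethod
    def _key(tokens):
        return hashlib.sha256(str(tokens).encode("utf-8")).hexdigest()

    def lookup(self, tokens):
        """Return (cached_len, kv_state) for the longest cached prefix."""
        for cut in range(len(tokens), 0, -1):   # linear scan: fine for a toy
            hit = self._store.get(self._key(tokens[:cut]))
            if hit:
                return hit
        return 0, None

    def insert(self, tokens, kv_state):
        self._store[self._key(tokens)] = (len(tokens), kv_state)

cache = PrefixCache()
history = list(range(1000))            # stand-in for 1,000 prompt tokens
cache.insert(history, kv_state="kv-for-history")

turn2 = history + [1000, 1001, 1002]   # same conversation, 3 new tokens
cached_len, kv = cache.lookup(turn2)
suffix = turn2[cached_len:]            # only these tokens need prefill
```

With a 200,000-token context, skipping the prefill for everything but the new turn is what turns a ~27-second response into a ~2-second one.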

4. LMDeploy: TurboMind with Blocked KV and Aggressive Quantization

LMDeploy, part of the InternLM ecosystem, focuses on high-throughput request serving and includes a blocked KV cache along with continuous batching. It emphasizes aggressive quantization strategies to improve performance.

Reportedly, LMDeploy can deliver up to 1.8× higher request throughput than vLLM, aided by its blocked KV cache, dynamic split-and-fuse scheduling, and optimized CUDA kernels. Its architecture supports multi-model deployments with routing logic that selects models based on request metadata.
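Continuous batching, which both LMDeploy and vLLM rely on, is easy to see in miniature: finished sequences leave the batch and waiting requests join between decode steps, so the GPU batch never drains to the length of the longest request. This is a sketch of the scheduling idea, not TurboMind’s actual scheduler.

```python
# Minimal continuous-batching loop: slots freed by finished sequences are
# refilled between decode steps instead of waiting for the batch to drain.
from collections import deque

def serve(requests, max_batch=4):
    waiting = deque(requests)        # (req_id, tokens_to_generate)
    running = {}                     # req_id -> tokens remaining
    steps = 0
    while waiting or running:
        # Admit new requests into free batch slots (continuous batching).
        while waiting and len(running) < max_batch:
            rid, n = waiting.popleft()
            running[rid] = n
        # One decode step: every running sequence emits one token.
        for rid in list(running):
            running[rid] -= 1
            if running[rid] == 0:
                del running[rid]     # slot freed mid-flight
        steps += 1
    return steps

# Short and long requests interleave instead of padding to the longest.
total_steps = serve([("a", 2), ("b", 5), ("c", 1), ("d", 3), ("e", 2)])
```

With static batching, the same workload would take 7 decode steps (a batch of four padded to its longest member, then the straggler); continuous batching finishes it in 5.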

Choosing the Right Stack

  • For maximum throughput and low time to first token on NVIDIA GPUs, TensorRT-LLM is the optimal choice, leveraging advanced features like FP8 and speculative decoding.
  • If handling long, reusable prompts, especially in RAG over large contexts, TGI v3 stands out due to its prefix caching method.
  • For an open, straightforward engine that provides a solid baseline performance, vLLM remains a strong candidate.
  • For deploying open models with a focus on aggressive quantization, LMDeploy is a fitting choice, particularly when working with models like InternLM.

As organizations navigate these options, many development teams find success by mixing different systems to align throughput, latency, and KV behavior with their specific workloads. Understanding these dynamics is crucial for optimizing costs and performance in LLM serving.
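That mix-and-match approach often reduces to a routing function over request metadata. The sketch below encodes the guidance above as code; the thresholds, backend names, and signature are all assumptions for illustration.

```python
# Hypothetical workload router in the spirit of the guidance above.
# Thresholds and backend labels are assumed, not prescriptive.

def pick_backend(prompt_tokens: int, reusable_prefix: bool,
                 nvidia_fp8: bool, quantized_open_model: bool) -> str:
    if reusable_prefix and prompt_tokens > 50_000:
        return "tgi-v3"          # long, cacheable prompts (e.g. RAG over big contexts)
    if nvidia_fp8:
        return "tensorrt-llm"    # peak throughput / lowest TTFT on Hopper+
    if quantized_open_model:
        return "lmdeploy"        # aggressively quantized open models
    return "vllm"                # solid open baseline for everything else

routed = pick_backend(prompt_tokens=200_000, reusable_prefix=True,
                      nvidia_fp8=True, quantized_open_model=False)
```

The ordering of the branches is itself a design choice: here prefix reuse wins over raw hardware throughput because recomputing a 200K-token prefill dominates any per-token kernel advantage.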

Written By
The AiPressa Staff
