🐼 model-serving


vllm (Python, Apache-2.0, 55.1k stars)

A high-throughput and memory-efficient inference and serving engine for LLMs