🐼 model-serving

👇 1 Item

**vllm** — 33.9k ★ · Python · Apache-2.0

A high-throughput and memory-efficient inference and serving engine for LLMs

Created 2 years ago · last updated 1 month ago