beam
Discover
Pulse · Activity · Analytics · Best for · Map · Orgs
Niches
Agents · MCP · RAG · Coding Assistants · Inference & Serving · Vector DBs
Personal
Watchlist · Compare
Hi, Adam
Tool profile

vllm-project/vllm-omni

stalling

A framework for efficient model inference with omni-modality models

audio-generation · diffusion · image-generation · inference · model-serving · multimodal · pytorch · transformer
Velocity score: 0.00 / 10
[STARS] 4.6k · [FORKS] 871 · [CONTRIBUTORS] 0 · [LAST_COMMIT] 6d ago
Velocity class: stalling
30-day stars: 0.00 / 10 score, last 31d [SIGNAL_TRACE / 31_PT]
Score breakdown: 456 / 1000
inference · vllm-project/vllm-omni
Weights: Velocity 50% · Adoption 30% · Maintenance 15% · Community 5%
[CODE_GROWTH] 421 · [INSTALL_VEL] 498 · [ACTIVITY] 643 · [COMMUNITY_SIGNAL] 0

Terminal score: 0–1000 raw, weighted across 4 dimensions. Public score: 0–10 normalized (shown in the 30-day stars chart above).
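The weighting above can be sanity-checked: if the four bracketed components map one-to-one onto the listed dimensions (CODE_GROWTH → Velocity, INSTALL_VEL → Adoption, ACTIVITY → Maintenance, COMMUNITY_SIGNAL → Community — an assumed mapping, not stated on the page), the weighted sum reproduces the 456 / 1000 terminal score. A minimal sketch:

```python
# Sketch of the terminal-score weighting from the breakdown above.
# ASSUMPTION: each component feeds the dimension listed in the comment,
# and component scores are already on a 0-1000 scale.
WEIGHTS = {
    "code_growth": 0.50,       # Velocity
    "install_vel": 0.30,       # Adoption
    "activity": 0.15,          # Maintenance
    "community_signal": 0.05,  # Community
}

def terminal_score(components: dict[str, float]) -> int:
    """Weighted raw score on the 0-1000 terminal scale."""
    return round(sum(WEIGHTS[name] * components[name] for name in WEIGHTS))

components = {"code_growth": 421, "install_vel": 498,
              "activity": 643, "community_signal": 0}
print(terminal_score(components))  # -> 456
```

Note that the public 0–10 score shown on this page (0.00) is not simply the raw total divided by 100 (which would give 4.56), so the 0–10 normalization presumably depends on the 30-day velocity signal rather than the raw terminal score.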
