beam
Tool profile

ome-projects/ome

Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, TensorRT-LLM, and Triton.

Tags: deepseek · k8s · kimi-k2 · llama · llm · llm-inference · model-as-a-service · model-serving
Velocity score: 0.47 / 10
Stars: 441
Forks: 77
Contributors: 33
Last commit: 4d ago
Velocity class: stalling
[Chart: 30-day stars — signal trace, 32 points · score 0.47 / 10 · last 32d]
Score breakdown: 754 / 1000
inference · ome-projects/ome

Weights: Velocity 50% · Adoption 30% · Maintenance 15% · Community 5%

Code growth: 926
Install velocity: 498
Activity: 645
Community signal: 897

Terminal score: 0–1000 raw, weighted across 4 dimensions. Public score: 0–10 normalized (shown in the 30-day stars chart above).
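The terminal score appears to be the weighted sum of the four dimension scores above. A minimal sketch follows; the mapping of dimensions to weights (code growth → Velocity, install velocity → Adoption, activity → Maintenance, community signal → Community) is an assumption, but it reproduces the displayed 754 exactly. How the raw value maps to the 0–10 public score is not shown on this page, so the sketch stops at the raw total.

```python
# Assumed weights, matching the breakdown shown above.
WEIGHTS = {
    "code_growth": 0.50,       # Velocity 50%
    "install_velocity": 0.30,  # Adoption 30%
    "activity": 0.15,          # Maintenance 15%
    "community_signal": 0.05,  # Community 5%
}

def terminal_score(dimensions: dict[str, float]) -> float:
    """Weighted sum of the 0-1000 dimension scores."""
    return sum(WEIGHTS[name] * score for name, score in dimensions.items())

# Dimension scores for ome-projects/ome, as displayed:
score = terminal_score({
    "code_growth": 926,
    "install_velocity": 498,
    "activity": 645,
    "community_signal": 897,
})
print(round(score))  # → 754
```

With these weights, 926·0.50 + 498·0.30 + 645·0.15 + 897·0.05 = 754.0, which matches the page.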
