Space: yusufs/vllm-inference (status: Paused)
Branch: main
History: 51 commits, 1 contributor
Latest commit: 5bd7bc7 by yusufs, 2 days ago: fix(runner.sh): enable eager mode (disabling cuda graph)
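
The latest commit turns on vLLM's eager execution mode, which skips CUDA graph capture; the related fixes to run-llama.sh and run-sailor.sh reflect that `--enforce-eager` is a bare boolean switch and does not accept a value. As a minimal, hedged sketch only (the model id and settings below are placeholders, not values taken from runner.sh), the same behaviour looks like this through vLLM's Python API:

```python
# Minimal sketch of eager mode in vLLM; the model id is a placeholder,
# not necessarily what this Space actually serves.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",  # placeholder model
    enforce_eager=True,         # skip CUDA graph capture: slower decoding, but faster startup and less GPU memory
)

params = SamplingParams(max_tokens=32)
print(llm.generate(["Hello"], params)[0].outputs[0].text)
```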
Files:

| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitignore | 19 Bytes | feat(download_model.py): remove download_model.py during build, it causing big image size | about 2 months ago |
| Dockerfile | 1.32 kB | feat(runner.sh): using runner.sh to select llm in the run time | 28 days ago |
| README.md | 1.73 kB | feat(add-model): always download model during build, it will be cached in the consecutive builds | about 2 months ago |
| download_model.py | 700 Bytes | feat(add-model): always download model during build, it will be cached in the consecutive builds | about 2 months ago |
| main.py | 6.7 kB | feat(parse): parse output | about 2 months ago |
| openai_compatible_api_server.py | 24.4 kB | feat(dep_sizes.txt): removes dep_sizes.txt during build, it not needed | about 2 months ago |
| poetry.lock | 426 kB | feat(refactor): move the files to root | about 2 months ago |
| pyproject.toml | 416 Bytes | feat(refactor): move the files to root | about 2 months ago |
| requirements.txt | 9.99 kB | feat(first-commit): follow examples and tutorials | about 2 months ago |
| run-llama.sh | 1.51 kB | fix(runner.sh): --enforce-eager not support values | 2 days ago |
| run-sailor.sh | 1.83 kB | fix(runner.sh): --enforce-eager not support values | 2 days ago |
| runner.sh | 1.73 kB | fix(runner.sh): enable eager mode (disabling cuda graph) | 2 days ago |

All files are flagged Safe by the Hub's file scanner.
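
The commit messages for download_model.py and the Dockerfile describe pre-fetching the model at image build time so that later builds reuse the cached result. The real download_model.py is only 700 bytes and is not shown here; the snippet below is a hedged sketch of that pattern using huggingface_hub, with the environment variable names and default model id as assumptions:

```python
# Hedged sketch of a build-time model pre-download; MODEL_ID and HF_TOKEN are
# assumed names, and facebook/opt-125m is only a placeholder default.
import os
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id=os.environ.get("MODEL_ID", "facebook/opt-125m"),
    token=os.environ.get("HF_TOKEN"),  # only needed for gated or private models
)
# Files land in the default HF cache (~/.cache/huggingface/hub); when this runs
# in a Docker RUN step, subsequent builds reuse the cached layer instead of re-downloading.
```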
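
The presence of openai_compatible_api_server.py suggests the Space serves vLLM's OpenAI-compatible HTTP API. The sketch below is a hedged usage example only: the base URL follows the usual *.hf.space pattern and the model name is a placeholder, so check the Space's README for the actual values (and note the Space is currently Paused).

```python
# Hedged sketch of querying the OpenAI-compatible endpoint; URL and model are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://yusufs-vllm-inference.hf.space/v1",  # assumed Space URL pattern
    api_key="EMPTY",  # vLLM only enforces a key if the server was started with --api-key
)

resp = client.completions.create(
    model="facebook/opt-125m",   # placeholder; must match the model the server was started with
    prompt="Hello, my name is",
    max_tokens=16,
)
print(resp.choices[0].text)
```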