Working with only these models? #2
by supercharge19 - opened
Can magic run any model on Hugging Face (that transformers can run)? Or can it run only these models?
Hey @supercharge19,
This HF repo primarily exists for convenience, to host example quantized models.
MAX Serve provides an OpenAI-compatible endpoint for any PyTorch LLM on HF.
MAX currently accelerates the PyTorch LLMs you can run today with HF Transformers using LlamaForCausalLM, MistralForCausalLM, and MPTForCausalLM. More models are expected to be accelerated in the future.
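Since the endpoint is OpenAI-compatible, any standard OpenAI-style client or plain HTTP works against it. A minimal sketch of building such a request (the base URL, port, and model name below are placeholder assumptions, not real defaults — substitute your own deployment's values):

```python
import json
import urllib.request

# Assumed local serving address and a placeholder model name -- adjust
# both to match your own MAX Serve deployment.
BASE_URL = "http://localhost:8000/v1"

def chat_request(model: str, prompt: str, max_tokens: int = 64) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("my-model", "Hello!")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
# Send with urllib.request.urlopen(req) once the server is running.
```

Because the wire format is the standard OpenAI chat-completions schema, existing OpenAI client libraries can also be pointed at the endpoint by overriding their base URL.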
FYI: `magic` is the package manager, while MAX is the platform on which the models run.