dahara1/llama3.1-8b-Instruct-amd-npu
dahara1/llama3-8b-amd-npu
Tech-Meld/gpus-everywhere (Text-to-Image)
dahara1/ALMA-Ja-V3-amd-npu
dahara1/llama-translate-amd-npu (Translation)
dahara1/llama-translate-gguf
amd/Llama-2-7b-hf-awq-g128-int4-asym-bf16-onnx-ryzen-strix (Text Generation)
amd/Llama2-7b-chat-awq-g128-int4-asym-bf16-onnx-ryzen-strix (Text Generation)
amd/Llama-3-8B-awq-g128-int4-asym-bf16-onnx-ryzen-strix (Text Generation)
amd/Llama-3.1-8B-awq-g128-int4-asym-bf16-onnx-ryzen-strix (Text Generation)
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-bf16-onnx-ryzen-strix (Text Generation)
amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-bf16-onnx-ryzen-strix (Text Generation)
uday610/Llama2-7b-chat-awq-g128-int4-asym-fp32-onnx-ryzen-strix-hybrid (Text Generation)
amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid (Text Generation)
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-fp16-onnx-hybrid (Text Generation)
amd/Llama-2-7b-hf-awq-g128-int4-asym-fp16-onnx-hybrid (Text Generation)
amd/Llama-2-7b-chat-hf-awq-g128-int4-asym-fp16-onnx-hybrid (Text Generation)
amd/Llama-3-8B-awq-g128-int4-asym-fp16-onnx-hybrid (Text Generation)
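All of the repositories above are hosted on the Hugging Face Hub, so they can be fetched locally before being handed to the Ryzen AI / ONNX Runtime toolchain. Below is a minimal download sketch using the huggingface_hub client; the repo_id is one entry from the list, while the local directory name is an illustrative assumption.

```python
# Minimal sketch: download one of the ONNX "hybrid" checkpoints listed above.
# repo_id is taken from the list; local_dir is an arbitrary example path (assumption).
# Note: license-gated repos may first require `huggingface-cli login`.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="amd/Llama-3-8B-awq-g128-int4-asym-fp16-onnx-hybrid",
    local_dir="./Llama-3-8B-onnx-hybrid",
)
print("Model files downloaded to:", model_dir)
```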