Great AI model

#2
by oncu - opened

I just discovered your projects. I have an almost 20-year-old laptop, and I wanted to try AI models on it until I get a new one. I tried many GGUF-format models such as Llama 2 7B, Vicuna, LLaVA, etc. Even though they ran on my device, I couldn't get much use out of them because they were too slow. Then I decided to give 3B models a chance. I started with your Rosa v1 model and also tried Rosa v3 (Rosa v1 is better). Finally, I discovered the Cognizant 3B model in Q6 GGUF format, which is really a great model. Thank you very much for sharing this project with us. It's great to be able to run our own offline AI model on an old laptop with an old CPU and get responses almost on par with those of the Llama 2 7B, 13B, and even 70B models. I wish you success and continued progress on your projects, my friend.

Thank you very much! I also believe Rosa v1 is better, and I have begun using it as the base for all my finetuning projects. Personally, I have the hardware to run 7-13B models, but I am very much drawn to 3B models because they can be run far more efficiently, or in parallel with other software like multimodal captioning models or image generation models.

Thank you for your kind words!

jeiku changed discussion status to closed
