# Vakil-7B Model Card

## Model Description
Vakil-7B is a language model fine-tuned on the AISimplyExplained/LegalReasoningIndianLaw dataset to specialize in the nuances and complexities of Indian law. It is designed to provide legal professionals, students, and researchers with insights and assistance in understanding legal documents and queries within the context of the Indian legal system.
Developed by Asmi Gulati and Bhuvi Jain, this tool aims to enhance the accessibility and analysis of legal texts, driving forward the digital transformation in the legal domain.
## Model Specifications
- Developed by: Asmi Gulati and Bhuvi Jain
- Model type: Fine-tuned language model
- Language(s) (NLP): English, with a focus on Indian legal terminology
- License: MIT
- Finetuned from model: a base model loaded via the `transformers` library
## Directions for Usage

Install the dependencies:

```shell
!pip install "unsloth[colab_ampere] @ git+https://github.com/unslothai/unsloth.git"
!pip install "git+https://github.com/huggingface/transformers.git"
```

Then load the tokenizer and model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AISimplyExplained/Vakil-7B")
model = AutoModelForCausalLM.from_pretrained("AISimplyExplained/Vakil-7B")
```
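Once the model is loaded, a question-answering call might look like the sketch below. The `### Question / ### Answer` prompt template and the `generate` settings are illustrative assumptions for this sketch, not an official interface documented by the model authors.

```python
# Minimal inference sketch for Vakil-7B. The prompt template below is an
# assumption for illustration; the model card does not document an
# official prompt format.

def build_prompt(question: str) -> str:
    """Wrap a legal question in a simple instruction-style prompt."""
    return f"### Question:\n{question}\n\n### Answer:\n"

def generate_answer(question: str, max_new_tokens: int = 256) -> str:
    """Generate an answer with Vakil-7B (downloads the weights on first call)."""
    # Imported lazily so build_prompt is usable without transformers installed.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("AISimplyExplained/Vakil-7B")
    model = AutoModelForCausalLM.from_pretrained("AISimplyExplained/Vakil-7B")
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example call (fetches several GB of weights; a GPU is recommended):
# print(generate_answer("What does Section 420 of the Indian Penal Code cover?"))
```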
## Intended Use
Vakil-7B is intended for direct use by legal professionals and researchers who need to interact with Indian legal text. It is designed to assist with legal research, drafting, and education by providing AI-driven analysis and insights.
## Out-of-Scope Use
Vakil-7B is not designed to replace professional legal advice or to be used as a standalone decision-making tool. It should be used as an aid in the legal research and analysis process, not as the sole source of guidance.
## Bias, Risks, and Limitations
Users should be aware of the inherent limitations of AI in interpreting legal text. Vakil-7B, while sophisticated, may not capture all nuances and should be used in conjunction with professional judgment.