---
language: es
tags:
- Spanish
- BART
- Legal
datasets:
- Spanish-legal-corpora
---

## BART Legal Spanish ⚖️

**BART Legal Spanish** (base) is a BART-like model trained on [A collection of corpora of Spanish legal domain](https://zenodo.org/record/5495529#.YZItp3vMLJw).

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function and (2) learning a model to reconstruct the original text (a toy illustration of this corruption/reconstruction scheme is given at the end of this card).

This model is particularly effective when fine-tuned for text generation tasks (e.g., summarization, translation), but it also works well for comprehension tasks (e.g., text classification, question answering).

## Training details

TBA

## Model details ⚙️

TBA

## [Evaluation metrics](https://huggingface.co/mrm8488/bart-legal-base-es/tensorboard?params=scalars#frame) 🧾

| Metric   | Value |
|----------|-------|
| Accuracy | 0.86  |
| Loss     | 0.68  |

## Benchmarks 🔨

WIP 🚧

## How to use with `transformers`

TBA (a hedged usage sketch is provided at the end of this card)

## Acknowledgments

TBA

## Citation

If you want to cite this model, you can use this:

```bibtex
@misc{manuel_romero_2023,
  author    = { {Manuel Romero} },
  title     = { bart-legal-base-es (Revision c33ed22) },
  year      = 2023,
  url       = { https://huggingface.co/mrm8488/bart-legal-base-es },
  doi       = { 10.57967/hf/0472 },
  publisher = { Hugging Face }
}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)

> Made with ♥ in Spain
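The corruption step described in the model overview can be illustrated with a toy text-infilling function. This is only a schematic sketch of the pretraining objective, not the actual noising code used to train this model; the sentence, function name, and span length are invented for illustration.

```python
import random

def text_infilling(tokens, mask_token="<mask>", span_length=2):
    """Toy BART-style noising: replace one contiguous span with a single mask token."""
    start = random.randrange(len(tokens) - span_length + 1)
    return tokens[:start] + [mask_token] + tokens[start + span_length:]

original = "el contrato fue firmado por ambas partes".split()
corrupted = text_infilling(original)

print(corrupted)  # encoder input, e.g. ['el', 'contrato', '<mask>', 'ambas', 'partes']
print(original)   # decoder target: the model learns to reconstruct the original text
```

Real BART pretraining combines several noising functions of this kind (token masking, token deletion, text infilling, sentence permutation, document rotation).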
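Until the **How to use with `transformers`** section above is filled in, here is a minimal, hedged usage sketch. It assumes the checkpoint loads with the standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and ships a BART-style `<mask>` token; the example sentence is invented.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mrm8488/bart-legal-base-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Denoise a corrupted Spanish legal sentence (invented example).
text = "La sentencia fue dictada por el <mask> competente."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32, num_beams=4)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For downstream tasks such as summarization, the same seq2seq checkpoint could be fine-tuned, e.g. with the `Seq2SeqTrainer` utilities in `transformers`.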