juanfra218 committed (verified)
Commit: 09ef8f6 · 1 Parent(s): a3cc44d

Update README.md

Files changed (1)
  1. README.md +21 -37
README.md CHANGED
@@ -4,7 +4,7 @@ license: mit
 
# Fine-Tuned Google T5 Model for Text to SQL Translation
 
- A fine-tuned version of the Google T5 model, specifically trained for the task of translating natural language queries into SQL statements.
+ This repository contains a fine-tuned version of the Google T5 model, specifically trained for the task of translating natural language queries into SQL statements.
 
## Model Details
 
@@ -17,45 +17,29 @@ A fine-tuned version of the Google T5 model, specifically trained for the task o
## Fine-Tuning Datasets
 
1. **sql-create-context Dataset**:
-    - This dataset was created by modifying data from the following sources:
-      - Zhong, Victor, et al. "Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning." (2017).
-      - Yu, Tao, et al. "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task." (2018).
-    - Citation:
-      ```bibtex
-      @misc{b-mc2_2023_sql-create-context,
-        title = {sql-create-context Dataset},
-        author = {b-mc2},
-        year = {2023},
-        url = {https://huggingface.co/datasets/b-mc2/sql-create-context},
-        note = {This dataset was created by modifying data from the following sources: \cite{zhongSeq2SQL2017, yu2018spider}.},
-      }
-      ```
+    - Created by modifying data from Seq2SQL and Spider datasets.
+    - [sql-create-context Dataset](https://huggingface.co/datasets/b-mc2/sql-create-context)
 
2. **Synthetic-Text-To-SQL Dataset**:
   - A synthetic dataset for training language models to generate SQL queries from natural language prompts.
-    - Citation:
-      ```bibtex
-      @software{gretel-synthetic-text-to-sql-2024,
-        author = {Meyer, Yev and Emadi, Marjan and Nathawani, Dhruv and Ramaswamy, Lipika and Boyd, Kendrick and Van Segbroeck, Maarten and Grossman, Matthew and Mlocek, Piotr and Newberry, Drew},
-        title = {{Synthetic-Text-To-SQL}: A synthetic dataset for training language models to generate SQL queries from natural language prompts},
-        month = {April},
-        year = {2024},
-        url = {https://huggingface.co/datasets/gretelai/synthetic-text-to-sql}
-      }
-      ```
+    - [Synthetic-Text-To-SQL Dataset](https://huggingface.co/datasets/gretelai/synthetic-text-to-sql)
 
## Ongoing Work
 
- Currently working to implement PICARD (Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models) to improve the results of this model. More details can be found in the original PICARD paper:
-
- - Citation:
-   ```bibtex
-   @misc{scholak2021picardparsingincrementallyconstrained,
-     title={PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models},
-     author={Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau},
-     year={2021},
-     eprint={2109.05093},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL},
-     url={https://arxiv.org/abs/2109.05093},
-   }
+ Currently working to implement PICARD (Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models) to improve the results of this model. More details can be found in the original [PICARD paper](https://arxiv.org/abs/2109.05093).
+
+ ## Results
+
+ Results are currently being evaluated and will be posted here soon.
+
+ ## Files
+
+ - `optimizer.pt`: State of the optimizer.
+ - `training_args.bin`: Training arguments and hyperparameters.
+ - `tokenizer.json`: Tokenizer vocabulary and settings.
+ - `spiece.model`: SentencePiece model file.
+ - `special_tokens_map.json`: Special tokens mapping.
+ - `tokenizer_config.json`: Tokenizer configuration settings.
+ - `model.safetensors`: Trained model weights.
+ - `generation_config.json`: Configuration for text generation.
+ - `config.json`: Model architecture configuration.
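
The updated README describes the checkpoint and its files but does not yet include a usage snippet. The sketch below shows one plausible way to load such a T5 text-to-SQL checkpoint with the Hugging Face `transformers` library; the model id (`juanfra218/text2sql`) and the prompt format are assumptions made for illustration, not something this commit specifies, so both should be adjusted to the actual repository and the input format used during fine-tuning.

```python
# Minimal sketch, not the repository's documented usage.
# ASSUMPTIONS: the model id below is hypothetical, and the prompt format
# (question plus a CREATE TABLE context, as in sql-create-context) is a guess.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "juanfra218/text2sql"  # hypothetical id; replace with the real repository
tokenizer = T5Tokenizer.from_pretrained(model_id)  # reads spiece.model / tokenizer files
model = T5ForConditionalGeneration.from_pretrained(model_id)  # loads model.safetensors

question = "How many singers are older than 30?"
context = "CREATE TABLE singer (singer_id INT, name VARCHAR, age INT)"
prompt = f"translate to SQL: {question} context: {context}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the PICARD work mentioned under "Ongoing Work" lands, the `generate` call above is where constrained decoding would hook in (for example via a `prefix_allowed_tokens_fn` callback), but that is not part of this commit.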