felflare committed
Commit 549f6e7 · 1 Parent(s): 7a3eb4c

Update README.md

add text to markdown

Files changed (1): README.md (+77, -0)
---
language:
- en
license: mit
---
# ✨ bert-restore-punctuation
[![forthebadge](https://forthebadge.com/images/badges/gluten-free.svg)]()
This is a bert-base-uncased model fine-tuned for punctuation restoration on [Yelp Reviews](https://www.tensorflow.org/datasets/catalog/yelp_polarity_reviews).
The model predicts the punctuation and upper-casing of plain, lower-cased text. A typical use case is ASR output, or any other text that has lost its punctuation.
This model is intended for direct use as a punctuation restoration model for general English. Alternatively, you can fine-tune it further on domain-specific texts for punctuation restoration tasks.

The model restores the following punctuation marks: [` ! ? . , - : ; '`]
It also restores the upper-casing of words.

-----------------------------------------------
## 🚋 Usage
Below is a quick way to get up and running with the model.
1. First, install the package.
```bash
pip install rpunct
```
2. Sample Python code.
```python
from rpunct import RestorePuncts
# The default language is 'english'
rpunct = RestorePuncts()
rpunct.punctuate("""in 2018 cornell researchers built a high-powered detector that in combination with an algorithm-driven process called ptychography set a world record by tripling the resolution of a state-of-the-art electron microscope as successful as it was that approach had a weakness it only worked with ultrathin samples that were a few atoms thick anything thicker would cause the electrons to scatter in ways that could not be disentangled now a team again led by david muller the samuel b eckert professor of engineering has bested its own record by a factor of two with an electron microscope pixel array detector empad that incorporates even more sophisticated 3d reconstruction algorithms the resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves""")

# Outputs the following:
# In 2018, Cornell researchers built a high-powered detector that, in combination with an algorithm-driven process called Ptychography, set a world record by tripling the resolution of a state-of-the-art electron microscope. As successful as it was, that approach had a weakness. It only worked with ultrathin samples that were a few atoms thick. Anything thicker would cause the electrons to scatter in ways that could not be disentangled. Now, a team again led by David Muller, the Samuel B. Eckert Professor of Engineering, has bested its own record by a factor of two with an Electron microscope pixel array detector empad that incorporates even more sophisticated 3d reconstruction algorithms. The resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves.
```

`This model works on arbitrarily large English text and uses a GPU if available.`

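As noted above, the checkpoint can also serve as a starting point for further fine-tuning on domain-specific text. The sketch below (not part of `rpunct`) shows one way to load it directly with Hugging Face Transformers as a token-classification model; the Hub repo id is an assumption based on this repository's name.
```python
# Minimal sketch, assuming the checkpoint is hosted on the Hub under this
# repo id (taken from this repository's name) -- adjust if it differs.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "felflare/bert-restore-punctuation"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Each input token is classified with a punctuation/casing label
# (e.g. ".", ",+Upper", "Upper", "none"); inspect the label set:
print(model.config.id2label)
```
From there, the checkpoint can be fine-tuned like any other `AutoModelForTokenClassification` model, for example with the `Trainer` API.
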
-----------------------------------------------
## 📡 Training data
Here is the number of product reviews we used for fine-tuning the model:

| Language | Number of reviews |
| -------- | ----------------- |
| English  | 560,000           |

We found the best convergence at around `3 epochs`, which is what is presented here and available for download.

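If you want to build similar training data for your own domain, the sketch below is an illustrative guess at how (lower-cased words, per-word labels) pairs can be derived from punctuated review text. The exact preprocessing used for this model is not documented here; the label names simply follow the scheme in the accuracy table below.
```python
# Illustrative sketch only (not the original preprocessing pipeline):
# derive lower-cased input words and per-word restoration labels from text.
# Label names follow the accuracy table below, e.g. "none", "Upper",
# ".", ",+Upper".
import re

PUNCT = set("!?.,-:;'")

def make_example(text: str):
    words, labels = [], []
    for token in text.split():
        # Split off a single trailing punctuation mark, if present.
        trailing = token[-1] if token and token[-1] in PUNCT else ""
        core = token[:-1] if trailing else token
        core = re.sub(r"[^A-Za-z0-9']", "", core)
        if not core:
            continue
        upper = core[0].isupper()
        if trailing:
            label = trailing + ("+Upper" if upper else "")
        else:
            label = "Upper" if upper else "none"
        words.append(core.lower())
        labels.append(label)
    return words, labels

words, labels = make_example("Now, a team led by David Muller has bested its own record.")
print(words)   # ['now', 'a', 'team', 'led', 'by', 'david', 'muller', ...]
print(labels)  # [',+Upper', 'none', 'none', 'none', 'none', 'Upper', 'Upper', ...]
```
The lower-cased word sequence is then tokenized and the labels aligned to word pieces, as in any BERT token-classification setup.
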
-----------------------------------------------
## 🎯 Accuracy
The fine-tuned model obtained the following accuracy on 45,990 held-out text samples:

| Accuracy | Overall F1 | Eval Support |
| -------- | ---------- | ------------ |
| 91%      | 90%        | 45,990       |

Below is a breakdown of the performance of the model by each label:

| label       | precision | recall | f1-score | support |
| ----------- | --------- | ------ | -------- | ------- |
| **!**       | 0.45      | 0.17   | 0.24     | 424     |
| **!+Upper** | 0.43      | 0.34   | 0.38     | 98      |
| **'**       | 0.60      | 0.27   | 0.37     | 11      |
| **,**       | 0.59      | 0.51   | 0.55     | 1522    |
| **,+Upper** | 0.52      | 0.50   | 0.51     | 239     |
| **-**       | 0.00      | 0.00   | 0.00     | 18      |
| **.**       | 0.69      | 0.84   | 0.75     | 2488    |
| **.+Upper** | 0.65      | 0.52   | 0.57     | 274     |
| **:**       | 0.52      | 0.31   | 0.39     | 39      |
| **:+Upper** | 0.36      | 0.62   | 0.45     | 16      |
| **;**       | 0.00      | 0.00   | 0.00     | 17      |
| **?**       | 0.54      | 0.48   | 0.51     | 46      |
| **?+Upper** | 0.40      | 0.50   | 0.44     | 4       |
| **none**    | 0.96      | 0.96   | 0.96     | 35352   |
| **Upper**   | 0.84      | 0.82   | 0.83     | 5442    |
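
A per-label breakdown like the one above can be produced from flattened token-level predictions and references, for example with scikit-learn. The snippet below is a minimal sketch with made-up toy labels, not the actual evaluation code.
```python
# Minimal sketch: per-label precision/recall/F1 from token-level labels.
# The toy y_true / y_pred lists are placeholders, not real evaluation data.
from sklearn.metrics import classification_report

y_true = ["none", "Upper", ",", "none", ".", "none", ",+Upper", "none"]
y_pred = ["none", "Upper", "none", "none", ".", "none", ",+Upper", "Upper"]

print(classification_report(y_true, y_pred, zero_division=0))
```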
-----------------------------------------------

## ☕ Contact
Contact [Daulet Nurmanbetov]([email protected]) for questions, feedback and/or requests for similar models.

-----------------------------------------------