Update README.md
README.md CHANGED
@@ -2,7 +2,7 @@
 license: cc-by-4.0
 ---

-Mirror of OpenFold parameters as provided in https://github.com/aqlaboratory/openfold. Stopgap solution as the original download link was down. All rights to the authors.
+Mirror of OpenFold parameters as provided in https://github.com/aqlaboratory/openfold. Stopgap solution as the original download link was down. Updated based on the s3 bucket parameter update. All rights to the authors.

 OpenFold model parameters, v. 06_22.

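Since this repo is just a mirror of the parameter files, fetching a checkpoint is a plain file download. Below is a minimal sketch using the Hugging Face Hub client, assuming the mirror is hosted as an ordinary model repo; the `repo_id` is a placeholder, and the filename follows the `finetuning_ptm_x.pt` naming described later in this README.

```python
# Sketch: download one mirrored OpenFold checkpoint from the Hugging Face Hub.
# The repo_id is a placeholder; substitute the actual id of this mirror.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="<namespace>/openfold-params",  # hypothetical repo id
    filename="finetuning_ptm_2.pt",         # one of the checkpoints listed below
)
print(ckpt_path)
```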
@@ -12,6 +12,9 @@ Trained using OpenFold on 44 A100s using the training schedule from Table 4 in
 the AlphaFold supplement. AlphaFold was used as the pre-distillation model.
 Training data is hosted publicly in the "OpenFold Training Data" RODA repository.

+To improve model diversity, we forked training after the initial training phase
+and finetuned an additional branch without templates.
+
 # Parameter files:

 Parameter files fall into the following categories:
@@ -22,21 +25,35 @@ Parameter files fall into the following categories:
     Checkpoints in chronological order corresponding to peaks in the
     validation LDDT-Ca during the finetuning phase. Roughly evenly spaced
     across the 45 finetuning epochs.
+
+    NOTE: finetuning_1.pt, which was included in a previous release, has
+    been deprecated.
+finetuning_no_templ_x.pt
+    Checkpoints in chronological order corresponding to peaks during an
+    additional finetuning phase also starting from the 'initial_training.pt'
+    checkpoint but with templates disabled.
+finetuning_no_templ_ptm_x.pt
+    Checkpoints in chronological order corresponding to peaks during the
+    pTM training phase of the `no_templ` branch. Models in this category
+    include the pTM module and comprise the most recent of the checkpoints
+    in said branch.
 finetuning_ptm_x.pt:
-    Checkpoints in chronological order corresponding to peaks
-    training phase
-
-
+    Checkpoints in chronological order corresponding to peaks in the pTM
+    training phase of the mainline branch. Models in this category include
+    the pTM module and comprise the most recent of the checkpoints in said
+    branch.

 Average validation LDDT-Ca scores for each of the checkpoints are listed below.
 The validation set contains approximately 180 chains drawn from CAMEO over a
 three-month period at the end of 2021.

 initial_training: 0.9088
-finetuning_ptm_1: 0.9075
-finetuning_ptm_2: 0.9097
-finetuning_1: 0.9089
 finetuning_2: 0.9061
 finetuning_3: 0.9075
 finetuning_4: 0.9059
-finetuning_5: 0.9054
+finetuning_5: 0.9054
+finetuning_no_templ_1: 0.9014
+finetuning_no_templ_2: 0.9032
+finetuning_no_templ_ptm_1: 0.9025
+finetuning_ptm_1: 0.9075
+finetuning_ptm_2: 0.9097
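All of the categories above are `.pt` checkpoint files. As a rough illustration only, here is a minimal sketch for inspecting one of them, assuming the files are standard PyTorch-serialized objects (this README does not state the exact serialization format, so treat the key handling as an assumption).

```python
# Sketch: inspect a downloaded checkpoint, assuming it is a PyTorch-serialized
# object (e.g., a state dict of model weights). Not an official loading recipe.
import torch

ckpt = torch.load("finetuning_ptm_2.pt", map_location="cpu")

if isinstance(ckpt, dict):
    # Print a few entry names and shapes to confirm what the file contains.
    for i, (name, value) in enumerate(ckpt.items()):
        print(name, getattr(value, "shape", type(value)))
        if i >= 4:
            break
```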
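For convenience, a small sketch that collects the validation LDDT-Ca values listed above and picks the strongest checkpoint overall and within the mainline pTM branch; the numbers are copied verbatim from this README.

```python
# Sketch: organize the reported validation LDDT-Ca scores and select the best
# checkpoint overall and within the mainline pTM branch. Values from the README.
lddt_ca = {
    "initial_training": 0.9088,
    "finetuning_2": 0.9061,
    "finetuning_3": 0.9075,
    "finetuning_4": 0.9059,
    "finetuning_5": 0.9054,
    "finetuning_no_templ_1": 0.9014,
    "finetuning_no_templ_2": 0.9032,
    "finetuning_no_templ_ptm_1": 0.9025,
    "finetuning_ptm_1": 0.9075,
    "finetuning_ptm_2": 0.9097,
}

best = max(lddt_ca, key=lddt_ca.get)
print(f"best overall: {best} ({lddt_ca[best]:.4f})")

# Best checkpoint that includes the pTM module in the mainline branch.
ptm = {k: v for k, v in lddt_ca.items() if k.startswith("finetuning_ptm")}
best_ptm = max(ptm, key=ptm.get)
print(f"best mainline pTM: {best_ptm} ({ptm[best_ptm]:.4f})")
```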