haneulpark committed · verified · 1 Parent(s): d66374f
Commit eeea18b

Update README.md

Files changed (1): README.md (+85 -2)
README.md CHANGED
```diff
@@ -87,6 +87,89 @@ dataset_info:
   num_examples: 596
   ---
- Human & Rat Liver Microsomal Stability
- --
```
# Human & Rat Liver Microsomal Stability

3345 RLM and 6420 HLM compounds were initially collected from the ChEMBL bioactivity database (HLM assay IDs: 613373, 2367379, and 612558; RLM assay IDs: 613694, 2367428, and 612558). After curation, the RLM stability data set contains 3108 compounds and the HLM stability data set contains 5902 compounds. For the RLM stability data set, 1542 (49.6%) compounds were classified as stable and 1566 (50.4%) compounds as unstable; the training and test sets contain 2512 and 596 compounds, respectively. Experimental data from the National Center for Advancing Translational Sciences (PubChem AID 1508591) were used as the external set. For the HLM data set, 3799 (64%) compounds were classified as stable and 2103 (36%) compounds as unstable. In addition, an external set from Liu et al. [12] was used to evaluate the predictive power of the HLM model.
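As a quick sanity check, the compound counts above are internally consistent; a minimal sketch in Python:

```python
# Class counts reported above for each data set
rlm_stable, rlm_unstable = 1542, 1566   # RLM: 49.6% / 50.4%
hlm_stable, hlm_unstable = 3799, 2103   # HLM: 64% / 36%

# RLM class counts and the train/test split both sum to 3108 compounds
assert rlm_stable + rlm_unstable == 3108
assert 2512 + 596 == 3108

# HLM class counts sum to 5902 compounds
assert hlm_stable + hlm_unstable == 5902

# Fraction of stable compounds in each set
print(f"RLM stable: {rlm_stable / (rlm_stable + rlm_unstable):.1%}")  # 49.6%
print(f"HLM stable: {hlm_stable / (hlm_stable + hlm_unstable):.1%}")  # 64.4%
```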
## Quickstart Usage

### Load a dataset in Python

Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library. First, from the command line, install the `datasets` library

```shell
$ pip install datasets
```

then, from within Python, import the `datasets` library

```python
>>> import datasets
```

and load one of the `HLM_RLM` datasets, e.g.,

```python
>>> HLM = datasets.load_dataset("maomlab/HLM_RLM", name = "HLM")
Downloading readme: 100%|████████████████████████| 4.40k/4.40k [00:00<00:00, 1.35MB/s]
Downloading data: 100%|██████████████████████████| 680k/680k [00:00<00:00, 946kB/s]
Downloading data: 100%|██████████████████████████| 2.11M/2.11M [00:01<00:00, 1.28MB/s]
Generating test split: 100%|█████████████████████| 1951/1951 [00:00<00:00, 20854.95 examples/s]
Generating train split: 100%|████████████████████| 5856/5856 [00:00<00:00, 144260.80 examples/s]
```
and inspect the loaded dataset

```python
>>> HLM
DatasetDict({
    test: Dataset({
        features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'],
        num_rows: 1951
    })
    train: Dataset({
        features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'],
        num_rows: 5856
    })
})
```
### Use a dataset to train a model

One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia. First, from the command line, install the `molflux` library with `catboost` and `rdkit` support

```shell
pip install 'molflux[catboost,rdkit]'
```

then load, featurise, split, fit, and evaluate a CatBoost model
```python
import json
from datasets import load_dataset
from molflux.datasets import featurise_dataset
from molflux.features import load_from_dicts as load_representations_from_dicts
from molflux.splits import load_from_dict as load_split_from_dict
from molflux.modelzoo import load_from_dict as load_model_from_dict
from molflux.metrics import load_suite

split_dataset = load_dataset('maomlab/HLM_RLM', name = 'HLM')

split_featurised_dataset = featurise_dataset(
    split_dataset,
    column = "SMILES",
    representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))

model = load_model_from_dict({
    "name": "cat_boost_classifier",
    "config": {
        "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'],
        "y_features": ['BBB+/BBB-']}})

model.train(split_featurised_dataset["train"])
preds = model.predict(split_featurised_dataset["test"])

classification_suite = load_suite("classification")

scores = classification_suite.compute(
    references = split_featurised_dataset["test"]['BBB+/BBB-'],
    predictions = preds["cat_boost_classifier::BBB+/BBB-"])
```
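The scores returned by the `classification` suite can be cross-checked by hand; a minimal sketch, using invented reference labels and predictions (not real model output) and plain Python in place of `load_suite`:

```python
# Invented labels for illustration; real values come from the data set and model
references  = ["stable", "unstable", "stable", "stable", "unstable"]
predictions = ["stable", "unstable", "unstable", "stable", "unstable"]

# Accuracy: fraction of positions where prediction matches reference
accuracy = sum(r == p for r, p in zip(references, predictions)) / len(references)
print(f"accuracy: {accuracy:.2f}")  # accuracy: 0.80

# Precision and recall for the "stable" class
tp = sum(r == p == "stable" for r, p in zip(references, predictions))
precision = tp / predictions.count("stable")
recall = tp / references.count("stable")
print(f"precision: {precision:.2f}, recall: {recall:.2f}")  # precision: 1.00, recall: 0.67
```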
174
+
175