deepcs233 committed
Commit 848b7a1 · 1 Parent(s): 36639bc

Update README.md

Files changed (1):
  README.md +9 -2
README.md CHANGED
@@ -10,28 +10,35 @@ license: apache-2.0
  LMDrive is an end-to-end, closed-loop, language-based autonomous driving framework, which interacts with the dynamic environment via multi-modal multi-view sensor data and natural language instructions.
 
  **Model date:**
- LMDrive-1.0 (based on LLaVA-v1.5-7B) was trained in November 2023.
+ LMDrive-1.0 (based on LLaVA-v1.5-7B) was trained in November 2023. The original LLaVA-v1.5 weights also need to be downloaded.
 
  **Paper or resources for more information:**
+
  GitHub: https://github.com/opendilab/LMDrive/README.md
 
  Paper: https://arxiv.org/abs/2312.07488
 
  **Related weights for the vision encoder:**
+
  https://huggingface.co/deepcs233/LMDrive-vision-encoder-r50-v1.0
 
  **Where to send questions or comments about the model:**
- https://github.com/haotian-liu/LLaVA/issues
+
+ https://github.com/opendilab/LMDrive/issues
+
 
 
  ## Intended use
  **Primary intended uses:**
+
  The primary use of LMDrive is research on large multimodal models for autonomous driving.
 
  **Primary intended users:**
+
  The primary intended users of the model are researchers and hobbyists in computer vision, large multimodal models, autonomous driving, and artificial intelligence.
 
  ## Training dataset
+
  - 64K instruction-sensor-control data clips collected in the CARLA simulator. [dataset_webpage](https://huggingface.co/datasets/deepcs233/LMDrive)
  - Each clip includes one navigation instruction, several notice instructions, a sequence of multi-modal multi-view sensor data, and control signals. The duration of each clip ranges from 2 to 20 seconds.
 
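
The updated card notes that, besides the LMDrive checkpoint, the vision-encoder weights and the original LLaVA-v1.5 model must be downloaded. A minimal sketch of fetching them from the Hugging Face Hub with `huggingface_hub.snapshot_download` is below; only the vision-encoder repo id comes from the card, while the LLaVA repo id and the local directory names are assumptions.

```python
# Hypothetical setup sketch: download the weights referenced in this model card.
# Only the vision-encoder repo id is taken from the card; other ids/paths are assumptions.
from huggingface_hub import snapshot_download

# Vision-encoder weights (repo id given in the card).
vision_dir = snapshot_download(
    repo_id="deepcs233/LMDrive-vision-encoder-r50-v1.0",
    local_dir="weights/vision-encoder-r50",
)

# Base LLaVA-v1.5-7B checkpoint, which the card says must also be downloaded.
# The repo id below is an assumption about where it is hosted.
llava_dir = snapshot_download(
    repo_id="liuhaotian/llava-v1.5-7b",
    local_dir="weights/llava-v1.5-7b",
)

print(vision_dir, llava_dir)
```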
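The training-dataset description also implies a simple per-clip record layout. The sketch below shows one way such a clip could be represented in code; the field names and types are illustrative assumptions based only on the card's wording, not the dataset's actual schema.

```python
# Illustrative sketch of one instruction-sensor-control clip, based only on the
# card's description; field names and types are assumptions, not the real schema.
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class Clip:
    navigation_instruction: str                  # one navigation instruction per clip
    notice_instructions: List[str]               # several notice instructions per clip
    sensor_frames: List[Dict[str, np.ndarray]]   # multi-modal multi-view sensor data per timestep
    control_signals: List[Dict[str, float]]      # e.g. steer / throttle / brake per timestep
    duration_s: float                            # roughly 2 to 20 seconds per the card
```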