Update README.md
README.md (CHANGED)
@@ -51,7 +51,7 @@ Llama-2-7B-DMC-8x uses a model embedding size of 4096, 32 attention heads, MLP i
 
 ## Software Integration
 **Runtime Engine(s):**
-*
+* Not Applicable (N/A)
 
 The model weights are distributed in bfloat16 format. However, it could be converted to other formats in order to run on other hardware microarchitectures.
 
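The context lines above note that the bfloat16 weights can be converted to other formats for hardware without native bfloat16 support. As an illustrative sketch only (not part of this commit), assuming the checkpoint loads with Hugging Face `transformers` and using a hypothetical local path, a dtype conversion might look like:

```python
# Illustrative sketch only -- not part of this commit.
# Assumes the checkpoint is loadable with Hugging Face transformers;
# the path below is hypothetical.
import torch
from transformers import AutoModelForCausalLM

# Load the distributed bfloat16 weights.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/Llama-2-7B-DMC-8x",   # hypothetical local path or hub ID
    torch_dtype=torch.bfloat16,
)

# Convert to float16 for microarchitectures without bfloat16 support,
# then save the converted checkpoint.
model = model.to(dtype=torch.float16)
model.save_pretrained("Llama-2-7B-DMC-8x-fp16")
```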