alancucki committed (verified) · Commit f5de4b9 · 1 Parent(s): 6c8c0fc

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

```diff
@@ -51,7 +51,7 @@ Llama-2-7B-DMC-8x uses a model embedding size of 4096, 32 attention heads, MLP i
 
 ## Software Integration
 **Runtime Engine(s):**
-* [Not Applicable (N/A)]
+* Not Applicable (N/A)
 
 The model weights are distributed in bfloat16 format. However, it could be converted to other formats in order to run on other hardware microarchitectures.
```