rookielyb committed
Commit 33d3b8d · 1 Parent(s): b1041a3

Upload README.md

Files changed (1): README.md (+1, -2)
README.md CHANGED
@@ -30,7 +30,7 @@ Generating high-quality code using natural language is a high-frequency demand i
 
 For more details, please refer to our WeChat official account articles:
 
-[A new high! The Ziya LLM team open-sources the code model Ziya-Coding-34B-v1.0](https://mp.weixin.qq.com/s/tWaRF1wL3HM87ZDEawd2UA)
+[A new high! The Ziya LLM team open-sources the code model Ziya-Coding-34B-v1.0](https://mp.weixin.qq.com/s/Op4Wkiu2J9jwFr_Zj0YSZg)
 
 [Ziya LLM series | The code model ziya-coding is released! Low-cost fine-tuning is enough to learn to program in proprietary scenarios](https://mp.weixin.qq.com/s/tWaRF1wL3HM87ZDEawd2UA)
 
@@ -88,7 +88,6 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
 device = torch.device("cuda")
-
 prompt = "写一段快速排序"
 model = AutoModelForCausalLM.from_pretrained("IDEA-CCNL/Ziya-Coding-34B-v1.0", torch_dtype=torch.float16, device_map="auto")
 tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Ziya-Coding-34B-v1.0", use_fast=False)
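
The hunk above shows only the first lines of the README's inference example (model and tokenizer loading). For reference, here is a minimal, self-contained sketch of how that snippet is typically completed with the standard Hugging Face transformers generate/decode flow. Feeding the raw prompt directly (the full README may wrap it in a model-specific prompt template) and the generation parameters (max_new_tokens, temperature, top_p) are illustrative assumptions, not part of this commit.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = torch.device("cuda")

# "写一段快速排序" = "write a quick sort" (the prompt used in the README snippet)
prompt = "写一段快速排序"

model = AutoModelForCausalLM.from_pretrained(
    "IDEA-CCNL/Ziya-Coding-34B-v1.0",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Ziya-Coding-34B-v1.0", use_fast=False)

# Tokenize and move the inputs to the GPU. NOTE: the raw prompt is used here as an
# assumption; the model's full README may apply a chat-style prompt template first.
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Sampling settings below are illustrative defaults, not values from this commit.
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
    )

# Decode and print the generated code, dropping special tokens.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```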