yym68686 committed
Commit 6038b37 · 1 Parent(s): cfd3f47

✨ Feature: Add support for enclosing a model name in angle brackets, so that the string before the slash is no longer treated as a channel name and the whole string is matched as the model name.

Files changed (3)
  1. README.md +35 -33
  2. README_CN.md +3 -1
  3. main.py +19 -18
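In short: without angle brackets, a model entry such as anthropic/claude-3-5-sonnet under api_keys is read as channel anthropic plus model claude-3-5-sonnet; wrapping the entry in angle brackets makes uni-api treat the whole string as a literal model name. A minimal sketch of that rule (the resolve_model helper is illustrative, not code from this commit):

```python
# Illustrative sketch (not uni-api's actual code) of the matching rule this
# commit adds for entries under api_keys[].model.

def resolve_model(entry):
    """Return (channel, model) for one configured model entry."""
    if entry.startswith("<") and entry.endswith(">"):
        # <anthropic/claude-3-5-sonnet>: the whole string is the model name
        return None, entry[1:-1]
    if "/" in entry:
        # anthropic/claude-3-5-sonnet: channel "anthropic", model "claude-3-5-sonnet"
        channel, model = entry.split("/", 1)
        return channel, model
    return None, entry

print(resolve_model("anthropic/claude-3-5-sonnet"))    # ('anthropic', 'claude-3-5-sonnet')
print(resolve_model("<anthropic/claude-3-5-sonnet>"))  # (None, 'anthropic/claude-3-5-sonnet')
```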
README.md CHANGED
@@ -1,7 +1,7 @@
 # uni-api
 
 <p align="center">
- <a href="https://t.me/uni_api">
+ <a href="https://t.me/uni_api">
 <img src="https://img.shields.io/badge/Join Telegram Group-blue?&logo=telegram">
 </a>
 <a href="https://hub.docker.com/repository/docker/yym68686/uni-api">
@@ -13,36 +13,36 @@
 
 ## Introduction
 
- If used personally, one/new-api is overly complex, with many commercial features that individuals do not need. If you do not want a complex front-end interface and want to support more models, you can try uni-api. This is a project for unified management of large model APIs, allowing you to call multiple backend services through a unified API interface, uniformly converting them to OpenAI format and supporting load balancing. The currently supported backend services include: OpenAI, Anthropic, Gemini, Vertex, Cloudflare, DeepBricks, OpenRouter, etc.
+ If used personally, one/new-api is too complex and has many commercial functions that individuals do not need. If you do not want a complicated front-end interface and want to support more models, you can try uni-api. This is a project for unified management of large model APIs, allowing you to call multiple backend services through a unified API interface, converting them uniformly to OpenAI format and supporting load balancing. Currently supported backend services include: OpenAI, Anthropic, Gemini, Vertex, Cloudflare, DeepBricks, OpenRouter, etc.
 
 ## Features
 
- - No front-end, pure configuration file setup for API channels. You can run your own API site just by writing a single file, and the documentation includes a detailed configuration guide, beginner-friendly.
- - Unified management of multiple backend services, supporting providers such as OpenAI, Deepseek, DeepBricks, OpenRouter, and other APIs in OpenAI format. Supports OpenAI Dalle-3 image generation.
- - Supports Anthropic, Gemini, Vertex API, and Cloudflare simultaneously. Vertex supports both Claude and Gemini API.
- - Support for OpenAI, Anthropic, Gemini, Vertex native tool use function calls.
+ - No frontend, pure configuration file setup for API channels. You can run your own API site by just writing one file, with detailed configuration guides in the documentation, beginner-friendly.
+ - Unified management of multiple backend services, supporting providers like OpenAI, Deepseek, DeepBricks, OpenRouter, and other APIs in the OpenAI format. Supports OpenAI Dalle-3 image generation.
+ - Supports Anthropic, Gemini, Vertex API, and Cloudflare. Vertex supports both Claude and Gemini API.
+ - Supports OpenAI, Anthropic, Gemini, Vertex native tool use function calls.
 - Supports OpenAI, Anthropic, Gemini, Vertex native image recognition API.
 - Supports four types of load balancing.
- 1. Supports channel-level weighted load balancing, which can allocate requests based on different channel weights. Disabled by default, channel weights need to be configured.
- 2. Supports Vertex regional load balancing, supports Vertex high concurrency, and can increase Gemini and Claude concurrency up to (API quantity * regional quantity) times. Automatically enabled without additional configuration.
- 3. Except for Vertex region-level load balancing, all APIs support channel-level sequential load balancing, enhancing the immersive translation experience. Automatically enabled without additional configuration.
- 4. Support automatic API key-level round-robin load balancing for multiple API keys in a single channel.
+ 1. Supports channel-level weighted load balancing, which can allocate requests based on different channel weights. Disabled by default, requires channel weight configuration.
+ 2. Supports Vertex regional load balancing, supports Vertex high concurrency, and can increase Gemini, Claude concurrency by up to (number of APIs * number of regions) times. Automatically enabled without additional configuration.
+ 3. In addition to Vertex region-level load balancing, all APIs support channel-level sequential load balancing, enhancing the immersive translation experience. Automatically enabled without additional configuration.
+ 4. Support automatic API key-level round-robin load balancing for multiple API Keys in a single channel.
 - Supports automatic retry, when an API channel response fails, automatically retry the next API channel.
- - Supports fine-grained permission control. Supports using wildcards to set specific models available for API key channels.
- - Supports rate limiting, allowing you to set the maximum number of requests per minute. It can be set as an integer, such as 2/min (2 times per minute), 5/hour (5 times per hour), 10/day (10 times per day), 10/month (10 times per month), 10/year (10 times per year). The default is 60/min.
+ - Supports fine-grained access control. Supports using wildcards to set specific models for API key available channels.
+ - Supports rate limiting, can set the maximum number of requests per minute, can be set as an integer, such as 2/min, 2 times per minute, 5/hour, 5 times per hour, 10/day, 10 times per day, 10/month, 10 times per month, 10/year, 10 times per year. Default is 60/min.
 
 ## Configuration
 
- Using the api.yaml configuration file, you can configure multiple models, and each model can configure multiple backend services, supporting load balancing. Below is an example of the api.yaml configuration file:
+ Using the api.yaml configuration file, multiple models can be configured, and each model can be configured with multiple backend services, supporting load balancing. Below is an example of the api.yaml configuration file:
 
 ```yaml
 providers:
- - provider: provider_name # Service provider name, such as openai, anthropic, gemini, openrouter, deepbricks, any name, required
+ - provider: provider_name # Service provider name, such as openai, anthropic, gemini, openrouter, deepbricks, can be any name, required
 base_url: https://api.your.com/v1/chat/completions # Backend service API address, required
 api: sk-YgS6GTi0b4bEabc4C # Provider's API Key, required
 model: # At least one model is required
 - gpt-4o # Usable model name, required
- - claude-3-5-sonnet-20240620: claude-3-5-sonnet # Rename model, claude-3-5-sonnet-20240620 is the provider's model name, claude-3-5-sonnet is the renamed name, you can use a simpler name instead of the original complex name, optional
+ - claude-3-5-sonnet-20240620: claude-3-5-sonnet # Rename model, claude-3-5-sonnet-20240620 is the provider's model name, claude-3-5-sonnet is the renamed name, you can use a concise name instead of the original complex name, optional
 - dall-e-3
 
 - provider: anthropic
@@ -51,22 +51,22 @@ providers:
 - sk-ant-api03-bNnAOJyA-xQw_twAA
 - sk-ant-api02-bNnxxxx
 model:
- - claude-3-5-sonnet-20240620: claude-3-5-sonnet # Rename model, claude-3-5-sonnet-20240620 is the provider's model name, claude-3-5-sonnet is the renamed name, you can use a simpler name instead of the original complex name, optional
- tools: true # Whether to support tools, such as generating code, generating documents, etc., default is true, optional
+ - claude-3-5-sonnet-20240620: claude-3-5-sonnet # Rename model, claude-3-5-sonnet-20240620 is the provider's model name, claude-3-5-sonnet is the renamed name, you can use a concise name instead of the original complex name, optional
+ tools: true # Whether to support tools, such as code generation, document generation, etc., default is true, optional
 
 - provider: gemini
 base_url: https://generativelanguage.googleapis.com/v1beta # base_url supports v1beta/v1, only for Gemini models, required
 api: AIzaSyAN2k6IRdgw
 model:
 - gemini-1.5-pro
- - gemini-1.5-flash-exp-0827: gemini-1.5-flash # After renaming, the original model name gemini-1.5-flash-exp-0827 cannot be used. If you want to use the original name, you can add the original name in the model, just add the line below to use the original name.
+ - gemini-1.5-flash-exp-0827: gemini-1.5-flash # After renaming, the original model name gemini-1.5-flash-exp-0827 cannot be used, if you want to use the original name, you can add the original name in the model, just add the line below to use the original name
 - gemini-1.5-flash-exp-0827 # Add this line, both gemini-1.5-flash-exp-0827 and gemini-1.5-flash can be requested
 tools: true
 
 - provider: vertex
- project_id: gen-lang-client-xxxxxxxxxxxxxx # Description: Your Google Cloud project ID. Format: String, usually consists of lowercase letters, numbers, and hyphens. How to get: You can find your project ID in the project selector of the Google Cloud Console.
- private_key: "-----BEGIN PRIVATE KEY-----\nxxxxx\n-----END PRIVATE" # Description: Private key of Google Cloud Vertex AI service account. Format: A JSON formatted string containing the private key information of the service account. How to get: Create a service account in the Google Cloud Console, generate a JSON formatted key file, and set its content as the value of this environment variable.
- client_email: [email protected] # Description: Email address of the Google Cloud Vertex AI service account. Format: Usually a string like "[email protected]". How to get: Generated when creating the service account, or you can view the service account details in the "IAM & Admin" section of the Google Cloud Console.
+ project_id: gen-lang-client-xxxxxxxxxxxxxx # Description: Your Google Cloud Project ID. Format: String, usually consists of lowercase letters, numbers, and hyphens. How to get it: You can find your project ID in the project selector of the Google Cloud Console.
+ private_key: "-----BEGIN PRIVATE KEY-----\nxxxxx\n-----END PRIVATE" # Description: Private key of Google Cloud Vertex AI service account. Format: A JSON formatted string containing the private key information of the service account. How to get it: Create a service account in the Google Cloud Console, generate a JSON formatted key file, and then set its content as the value of this environment variable.
+ client_email: [email protected] # Description: Email address of the Google Cloud Vertex AI service account. Format: Usually a string like "[email protected]". How to get it: Generated when creating the service account, can also be found in the "IAM & admin" section of the Google Cloud Console.
 model:
 - gemini-1.5-pro
 - gemini-1.5-flash
@@ -75,25 +75,26 @@ providers:
 - claude-3-sonnet@20240229: claude-3-sonnet
 - claude-3-haiku@20240307: claude-3-haiku
 tools: true
- notes: https://xxxxx.com/ # You can put the provider's website, notes, official documentation, optional
+ notes: https://xxxxx.com/ # You can put the service provider's website, notes, official documentation, optional
 
 - provider: cloudflare
 api: f42b3xxxxxxxxxxq4aoGAh # Cloudflare API Key, required
 cf_account_id: 8ec0xxxxxxxxxxxxe721 # Cloudflare Account ID, required
 model:
- - '@cf/meta/llama-3.1-8b-instruct': llama-3.1-8b # Rename model, @cf/meta/llama-3.1-8b-instruct is the provider's original model name, must be enclosed in quotes otherwise YAML syntax error, llama-3.1-8b is the renamed name, you can use a simpler name instead of the original complex name, optional
- - '@cf/meta/llama-3.1-8b-instruct' # Must be enclosed in quotes otherwise YAML syntax error
+ - '@cf/meta/llama-3.1-8b-instruct': llama-3.1-8b # Rename model, @cf/meta/llama-3.1-8b-instruct is the provider's original model name, must be enclosed in quotes, otherwise a yaml syntax error, llama-3.1-8b is the renamed name, you can use a concise name instead of the original complex name, optional
+ - '@cf/meta/llama-3.1-8b-instruct' # Must be enclosed in quotes, otherwise a yaml syntax error
 
 - provider: other-provider
 base_url: https://api.xxx.com/v1/messages
 api: sk-bNnAOJyA-xQw_twAA
 model:
 - causallm-35b-beta2ep-q6k: causallm-35b
+ - anthropic/claude-3-5-sonnet
 tools: false
 engine: openrouter # Force the use of a specific message format, currently supports gpt, claude, gemini, openrouter native format, optional
 
 api_keys:
- - api: sk-KjjI60Yf0JFWtfgRmXqFWyGtWUd9GZnmi3KlvowmRWpWpQRo # API Key, required for users to use this service
+ - api: sk-KjjI60Yf0JFWtfgRmXqFWyGtWUd9GZnmi3KlvowmRWpWpQRo # API Key, users need an API key to use this service, required
 model: # Models that this API Key can use, required
 - gpt-4o # Usable model name, can use all gpt-4o models provided by providers
 - claude-3-5-sonnet # Usable model name, can use all claude-3-5-sonnet models provided by providers
@@ -102,28 +103,29 @@ api_keys:
 
 - api: sk-pkhf60Yf0JGyJygRmXqFQyTgWUd9GZnmi3KlvowmRWpWqrhy
 model:
- - anthropic/claude-3-5-sonnet # Usable model name, can only use the claude-3-5-sonnet model provided by the provider named anthropic. Models with the same name from other providers cannot be used.
+ - anthropic/claude-3-5-sonnet # Usable model name, can only use the claude-3-5-sonnet model provided by the provider named anthropic. This method will not match the model named anthropic/claude-3-5-sonnet provided by other-provider.
+ - <anthropic/claude-3-5-sonnet> # By adding angle brackets on both sides of the model name, it will not look for the claude-3-5-sonnet model under the channel named anthropic, but will take the entire anthropic/claude-3-5-sonnet as the model name. This method can match the model named anthropic/claude-3-5-sonnet provided by other-provider. But it will not match the claude-3-5-sonnet model under anthropic.
 preferences:
- USE_ROUND_ROBIN: true # Whether to use round-robin load balancing, true to use, false to not use, default is true. When enabled, each request to the model is made in the order configured in the model. This is independent of the original channel order in providers. Therefore, you can set different request orders for each API key.
+ USE_ROUND_ROBIN: true # Whether to use round-robin load balancing, true to use, false to not use, default is true. When enabled, each request to the model will be made in sequence according to the model configuration. It has nothing to do with the original channel order in providers. Therefore, you can set a different request order for each API key.
 AUTO_RETRY: true # Whether to automatically retry, automatically retry the next provider, true to automatically retry, false to not automatically retry, default is true
 RATE_LIMIT: 2/min # Supports rate limiting, maximum number of requests per minute, can be set to an integer, such as 2/min, 2 times per minute, 5/hour, 5 times per hour, 10/day, 10 times per day, 10/month, 10 times per month, 10/year, 10 times per year. Default is 60/min, optional
 
 # Channel-level weighted load balancing configuration example
 - api: sk-KjjI60Yf0JFWtxxxxxxxxxxxxxxwmRWpWpQRo
 model:
- - gcp1/*: 5 # The number after the colon is the weight, only positive integers are supported.
- - gcp2/*: 3 # The larger the number, the higher the probability of the request.
- - gcp3/*: 2 # In this example, there are a total of 10 weights for all channels, and out of 10 requests, 5 requests will request the gcp1/* model, 2 requests will request the gcp2/* model, and 3 requests will request the gcp3/* model.
+ - gcp1/*: 5 # The number after the colon is the weight, the weight only supports positive integers.
+ - gcp2/*: 3 # The larger the number, the greater the probability of the request.
+ - gcp3/*: 2 # In this example, there are a total of 10 weights for all channels, and 5 out of 10 requests will request the gcp1/* model, 2 requests will request the gcp2/* model, and 3 requests will request the gcp3/* model.
 
 preferences:
- USE_ROUND_ROBIN: true # When USE_ROUND_ROBIN must be true and there is no weight after the above channels, requests will be made in the original channel order. If there are weights, requests will be made in the weighted order.
+ USE_ROUND_ROBIN: true # When USE_ROUND_ROBIN must be true and there is no weight after the above channels, it will request according to the original channel order, if there is weight, it will request according to the weighted order.
 AUTO_RETRY: true
 ```
 
 ## Environment Variables
 
 - CONFIG_URL: The download address of the configuration file, it can be a local file or a remote file, optional
- - TIMEOUT: Request timeout, default is 20 seconds, the timeout can control the time needed to switch to the next channel when a channel does not respond. Optional
+ - TIMEOUT: Request timeout, default is 20 seconds. The timeout can control the time needed to switch to the next channel when a channel does not respond. Optional.
 
 ## Docker Local Deployment
 
@@ -150,7 +152,7 @@ services:
 - ./api.yaml:/home/api.yaml
 ```
 
- CONFIG_URL is a direct link that can automatically download remote configuration files. For instance, if you find it inconvenient to modify configuration files on a certain platform, you can upload the configuration files to a hosting service that provides a direct link for uni-api to download. CONFIG_URL is this direct link.
+ CONFIG_URL is a link that can automatically download a remote configuration file. For example, if you find it inconvenient to modify the configuration file on a certain platform, you can upload the configuration file to a hosting service that provides a direct link for uni-api to download. CONFIG_URL is this direct link.
 
 Run Docker Compose container in the background
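The weighted example above (gcp1/*: 5, gcp2/*: 3, gcp3/*: 2) amounts to weighted random selection over channels. A hedged sketch of how such weights could be applied, using random.choices; uni-api's actual selection code is not part of this commit:

```python
import random

# Channel weights as in the README example above: out of every 10 requests,
# on average 5 go to gcp1/*, 3 to gcp2/*, and 2 to gcp3/* (proportional to 5:3:2).
CHANNEL_WEIGHTS = {"gcp1/*": 5, "gcp2/*": 3, "gcp3/*": 2}

def pick_channel(weights=CHANNEL_WEIGHTS):
    """Pick one channel, with probability proportional to its weight."""
    channels = list(weights)
    return random.choices(channels, weights=[weights[c] for c in channels], k=1)[0]

counts = {c: 0 for c in CHANNEL_WEIGHTS}
for _ in range(10_000):
    counts[pick_channel()] += 1
print(counts)  # roughly {'gcp1/*': 5000, 'gcp2/*': 3000, 'gcp3/*': 2000}
```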
 
README_CN.md CHANGED
@@ -89,6 +89,7 @@ providers:
 api: sk-bNnAOJyA-xQw_twAA
 model:
 - causallm-35b-beta2ep-q6k: causallm-35b
+ - anthropic/claude-3-5-sonnet
 tools: false
 engine: openrouter # Force the use of a specific message format, currently supports gpt, claude, gemini, openrouter native formats, optional
 
@@ -102,7 +103,8 @@ api_keys:
 
 - api: sk-pkhf60Yf0JGyJygRmXqFQyTgWUd9GZnmi3KlvowmRWpWqrhy
 model:
- - anthropic/claude-3-5-sonnet # Usable model name; only the claude-3-5-sonnet model provided by the provider named anthropic can be used. The claude-3-5-sonnet models of other providers cannot be used.
+ - anthropic/claude-3-5-sonnet # Usable model name; only the claude-3-5-sonnet model provided by the provider named anthropic can be used. The claude-3-5-sonnet models of other providers cannot be used. This notation will not match the model named anthropic/claude-3-5-sonnet provided by other-provider.
+ - <anthropic/claude-3-5-sonnet> # By adding angle brackets on both sides of the model name, it will not look for the claude-3-5-sonnet model under the channel named anthropic, but will treat the entire anthropic/claude-3-5-sonnet as the model name. This notation can match the model named anthropic/claude-3-5-sonnet provided by other-provider, but will not match the claude-3-5-sonnet model under anthropic.
 preferences:
 USE_ROUND_ROBIN: true # Whether to use round-robin load balancing, true to use, false to not use, default is true. When round-robin is enabled, each request to the model is made in the order configured in model. This is independent of the original channel order in providers, so you can set a different request order for each API key.
 AUTO_RETRY: true # Whether to automatically retry, automatically retry the next provider, true to automatically retry, false to not automatically retry, default is true
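The RATE_LIMIT values documented in both READMEs ("2/min", "5/hour", "10/day", and so on) follow a simple count/period format. A sketch of parsing such a value into a request budget per time window; the period-to-seconds table is an assumption for illustration, not taken from uni-api:

```python
# Hypothetical parser for RATE_LIMIT strings such as "2/min" or "10/day".
# The period lengths below are assumptions for illustration only.
PERIOD_SECONDS = {
    "min": 60,
    "hour": 3600,
    "day": 86400,
    "month": 30 * 86400,
    "year": 365 * 86400,
}

def parse_rate_limit(value, default="60/min"):
    """Return (max_requests, window_seconds) for a RATE_LIMIT string."""
    count, _, period = (value or default).partition("/")
    return int(count), PERIOD_SECONDS[period]

print(parse_rate_limit("2/min"))   # (2, 60)
print(parse_rate_limit("5/hour"))  # (5, 3600)
print(parse_rate_limit(None))      # (60, 60) -- the documented default
```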
main.py CHANGED
@@ -274,27 +274,28 @@ class ModelRequestHandler:
 
 for model in config['api_keys'][api_index]['model']:
     if "/" in model:
-         provider_name = model.split("/")[0]
-         model_name_split = "/".join(model.split("/")[1:])
-         models_list = []
-         for provider in config['providers']:
-             if provider['provider'] == provider_name:
-                 models_list.extend(list(provider['model'].keys()))
-         # print("models_list", models_list)
-         # print("model_name", model_name)
- 
-         # Handle model names that contain a slash
-         for provider in config['providers']:
-             if model in provider['model'].keys():
-                 provider_rules.append(provider['provider'] + "/" + model)
- 
-         # print("model", model)
-         if (model_name_split and model_name in models_list) or (model_name_split == "*" and model_name in models_list):
-             provider_rules.append(provider_name)
+         if model.startswith("<") and model.endswith(">"):
+             model = model[1:-1]
+             # Handle model names that contain a slash
+             for provider in config['providers']:
+                 if model in provider['model'].keys():
+                     provider_rules.append(provider['provider'] + "/" + model)
+         else:
+             provider_name = model.split("/")[0]
+             model_name_split = "/".join(model.split("/")[1:])
+             models_list = []
+             for provider in config['providers']:
+                 if provider['provider'] == provider_name:
+                     models_list.extend(list(provider['model'].keys()))
+             # print("models_list", models_list)
+             # print("model_name", model_name)
+             # print("model", model)
+             if (model_name_split and model_name in models_list) or (model_name_split == "*" and model_name in models_list):
+                 provider_rules.append(provider_name)
     else:
         for provider in config['providers']:
             if model in provider['model'].keys():
-                 provider_rules.append(provider['provider'] + "/" + model_name_split)
+                 provider_rules.append(provider['provider'] + "/" + model)
 
 provider_list = []
 # print("provider_rules", provider_rules)
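Pulled out of the diff and run against a made-up two-provider config, the new branch behaves exactly as the READMEs describe; rules_for is an illustrative wrapper, not a function in main.py:

```python
# Standalone rehearsal of the matching logic from the main.py diff above,
# using a made-up config. "anthropic/claude-3-5-sonnet" matches the anthropic
# channel, while "<anthropic/claude-3-5-sonnet>" matches only a provider that
# lists the full string as a model name (here, other-provider).
config = {
    "providers": [
        {"provider": "anthropic", "model": {"claude-3-5-sonnet": None}},
        {"provider": "other-provider", "model": {"anthropic/claude-3-5-sonnet": None}},
    ]
}

def rules_for(model, model_name):
    provider_rules = []
    if "/" in model:
        if model.startswith("<") and model.endswith(">"):
            model = model[1:-1]
            for provider in config["providers"]:
                if model in provider["model"]:
                    provider_rules.append(provider["provider"] + "/" + model)
        else:
            provider_name = model.split("/")[0]
            rest = "/".join(model.split("/")[1:])
            models_list = []
            for provider in config["providers"]:
                if provider["provider"] == provider_name:
                    models_list.extend(provider["model"])
            if (rest and model_name in models_list) or (rest == "*" and model_name in models_list):
                provider_rules.append(provider_name)
    else:
        for provider in config["providers"]:
            if model in provider["model"]:
                provider_rules.append(provider["provider"] + "/" + model)
    return provider_rules

print(rules_for("anthropic/claude-3-5-sonnet", "claude-3-5-sonnet"))
# -> ['anthropic']
print(rules_for("<anthropic/claude-3-5-sonnet>", "anthropic/claude-3-5-sonnet"))
# -> ['other-provider/anthropic/claude-3-5-sonnet']
```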