Text Model Series
1. Chat Completions
https://aiping.cn/api/v1/chat/completions
- Note: when calling via the OpenAI SDK, drop the trailing /chat/completions from the URL and use the base URL: https://aiping.cn/api/v1
AI Ping's API requests closely mirror the OpenAI chat API; only a few fields have been added to the input and output formats for feature extensions.
1.1 Header Parameters
As the request examples below show, every request carries an Authorization: Bearer <API_KEY> header and a Content-Type: application/json header.
1.2 Body Parameters
Sampling parameters affect how the model generates tokens. You can send AI Ping any of the parameters below, and you may also send other parameters. If a parameter is missing from the request, AI Ping uses the default listed below (for example, stream defaults to false).
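For example, a request that overrides a few sampling parameters while leaving everything else at the defaults (a minimal sketch; the values are illustrative):
```python
from openai import OpenAI

openai_client = OpenAI(
    base_url="https://aiping.cn/api/v1",
    api_key="<API_KEY>",
)
completion = openai_client.chat.completions.create(
    model="DeepSeek-R1-0528",
    messages=[{"role": "user", "content": "Hello"}],
    # Illustrative overrides; omitted parameters (e.g. stream) keep the
    # documented defaults
    temperature=0.7,
    top_p=0.9,
    max_tokens=512,
)
print(completion.choices[0].message.content)
```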
1.3 Provider Lookup
The provider list is available at https://aiping.cn/supplierList; provider names are case-sensitive.
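For example, to pin a request to particular providers through the provider object (a minimal sketch using the Provider fields documented in 1.4 below; the provider names here are placeholders, not entries from the real list):
```python
from openai import OpenAI

openai_client = OpenAI(
    base_url="https://aiping.cn/api/v1",
    api_key="<API_KEY>",
)
completion = openai_client.chat.completions.create(
    model="DeepSeek-R1-0528",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "provider": {
            # Placeholder names: copy the exact, case-sensitive names from
            # https://aiping.cn/supplierList
            "only": ["ProviderA", "ProviderB"],
            "sort": ["throughput"],
            "allow_fallbacks": True,
        }
    },
)
print(completion.choices[0].message.content)
```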
1.4 Request Format
Below is the request format for the /api/v1/chat/completions endpoint; see Body Parameters for the full parameter list.
```python
from typing import List, Literal, Optional, Union

from pydantic import BaseModel, Field


class VideoUrl(BaseModel):
    url: str

class ImageUrl(BaseModel):
    url: str

class TextItem(BaseModel):
    type: Literal["text"]
    text: str

class VideoItem(BaseModel):
    type: Literal["video_url"]
    video_url: VideoUrl

class ImgItem(BaseModel):
    type: Literal["image_url"]
    image_url: ImageUrl

# Combined as a discriminated union
ContentItem = Union[TextItem, VideoItem, ImgItem]

class Message(BaseModel):
    role: Literal["system", "user", "assistant"] = "user"
    content: Union[str, List[ContentItem]] = "say hello"

class StreamOptions(BaseModel):
    # Extend with additional fields as the platform requires
    include_usage: bool = False

class ResponseFormat(BaseModel):
    type: Literal["text", "json_object"] = "text"  # "text" or "json_object"

SortEnum = Literal["input_price", "output_price", "throughput", "latency", "input_length"]

class Provider(BaseModel):
    only: Optional[List[str]] = []
    order: Optional[List[str]] = []
    sort: Optional[List[SortEnum]] = None
    input_price_range: List[float] = []
    output_price_range: List[float] = []
    throughput_range: List[float] = []
    latency_range: List[float] = []
    input_length_range: List[int] = []
    allow_fallbacks: bool = True
    ignore: Optional[List[str]] = []
    allow_filter_prompt_length: bool = True

class ExtraBody(BaseModel):
    provider: Optional[Provider] = None
    enable_thinking: Optional[bool] = None

class ChatRequest(BaseModel):
    model: str = "DeepSeek-R1-0528"
    messages: List[Message]
    max_completion_tokens: Optional[int] = Field(None, description="New maximum completion length")
    max_tokens: Optional[int] = Field(None, description="Legacy maximum completion length")
    temperature: Optional[float] = Field(None, description="Sampling temperature")
    top_p: Optional[float] = Field(None, description="Nucleus sampling threshold")
    top_k: Optional[int] = Field(None, description="Top-k sampling")
    presence_penalty: Optional[float] = None
    stream: Optional[bool] = False
    stream_options: Optional[StreamOptions] = None
    modalities: Optional[Literal["text", "audio"]] = None
    response_format: Optional[ResponseFormat] = None
    extra_body: Optional[ExtraBody] = None
    provider: Optional[Provider] = Field(default=None, description="Leave unset; if set, it overrides the provider field inside extra_body")
    enable_thinking: Optional[bool] = None
```
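As a quick usage sketch (assuming pydantic v2), the schema above can validate a request body before it is sent:
```python
request = ChatRequest(
    model="DeepSeek-R1-0528",
    messages=[Message(role="user", content="Hello")],
    temperature=0.7,
    response_format=ResponseFormat(type="json_object"),
    stream=True,
)
# exclude_none keeps the serialized payload minimal
print(request.model_dump(exclude_none=True))
```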
1.5 Response Format
AI Ping normalizes the schemas of different models and providers to conform to the OpenAI Chat API. In addition, every returned chunk carries a new provider field that identifies the provider serving the request.
```python
from typing import List, Optional

from pydantic import BaseModel


class TokensDetails(BaseModel):
    cached_tokens: Optional[int] = 0
    reasoning_tokens: Optional[int] = 0

class Usage(BaseModel):
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
    prompt_tokens_details: Optional[TokensDetails] = None
    completion_tokens_details: Optional[TokensDetails] = None

class Delta(BaseModel):
    role: Optional[str]
    content: Optional[str]

class Choice(BaseModel):
    index: int
    delta: Delta

class ChatCompletionChunkResponse(BaseModel):
    id: str
    object: str
    created: int
    model: str
    choices: List[Choice]
    usage: Usage
    provider: str
```
Here is an example:
```json
{
  "id": "fe444exxxxxxx3",
  "object": "chat.completion.chunk",
  "created": 1758002830,
  "model": "DeepSeek-V3.1",
  "choices": [
    {
      "index": 0,
      "delta": {
        "role": "assistant",
        "content": " of"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 108,
    "completion_tokens": 1500,
    "total_tokens": 1608,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0
    }
  },
  "provider": "XX云"
}
```
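Because the extra provider field rides along on each chunk, it can be read inside the streaming loop shown in 1.6 below, for example (a sketch; getattr hedges against SDK versions that do not expose unrecognized fields as attributes):
```python
for chunk in stream:  # `stream` as created in the streaming example in 1.6
    provider = getattr(chunk, "provider", None)  # AI Ping extension field
    if provider:
        print(f"\n[served by {provider}]")
```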
1.6 Request Examples
Streaming request
```python
from openai import OpenAI

openai_client = OpenAI(
    base_url="https://aiping.cn/api/v1",
    api_key="<API_KEY>",
)
stream = openai_client.chat.completions.create(
    model="DeepSeek-R1-0528",
    stream=True,
    messages=[
        {
            "role": "user",
            "content": "Hello"
        }
    ]
)
for chunk in stream:
    if not getattr(chunk, "choices", None):
        continue
    content = getattr(chunk.choices[0].delta, "content", None)
    if content:
        print(content, end="", flush=True)
```
```shell
curl -N -X POST https://aiping.cn/api/v1/chat/completions \
  -H "Authorization: Bearer <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "DeepSeek-R1-0528",
    "stream": true,
    "messages": [
      {
        "role": "user",
        "content": "Hello"
      }
    ]
  }'
```
Non-streaming request
```python
from openai import OpenAI

openai_client = OpenAI(
    base_url="https://aiping.cn/api/v1",
    api_key="<API_KEY>",
)
completion = openai_client.chat.completions.create(
    model="DeepSeek-R1-0528",
    messages=[
        {
            "role": "user",
            "content": "Hello"
        }
    ]
)
print(completion.choices[0].message.content)
```
```shell
curl -X POST https://aiping.cn/api/v1/chat/completions \
  -H "Authorization: Bearer <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "DeepSeek-R1-0528",
    "stream": false,
    "messages": [
      {
        "role": "user",
        "content": "Hello"
      }
    ]
  }'
```
VL models
```python
from openai import OpenAI

openai_client = OpenAI(
    base_url="https://aiping.cn/api/v1",
    api_key="<API_KEY>",
)
response = openai_client.chat.completions.create(
    model="Qwen2.5-VL-32B-Instruct",
    stream=True,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": (
                            "https://img.alicdn.com/imgextra/i1/"
                            "O1CN01gDEY8M1W114Hi3XcN_!!6000000002727-0-tps-1024-406.jpg"
                        ),
                    },
                },
                {
                    "type": "text",
                    "text": "How do I solve this problem?"
                },
            ],
        }
    ]
)
for chunk in response:
    if not getattr(chunk, "choices", None):
        continue
    reasoning_content = getattr(chunk.choices[0].delta, "reasoning_content", None)
    if reasoning_content:
        print(reasoning_content, end="", flush=True)
    content = getattr(chunk.choices[0].delta, "content", None)
    if content:
        print(content, end="", flush=True)
```
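The request schema in 1.4 also defines an enable_thinking switch. Below is a minimal sketch of toggling it through the OpenAI SDK's extra_body passthrough (routing enable_thinking this way is an assumption based on the ChatRequest schema; whether a model emits reasoning_content depends on the model and provider):
```python
from openai import OpenAI

openai_client = OpenAI(
    base_url="https://aiping.cn/api/v1",
    api_key="<API_KEY>",
)
stream = openai_client.chat.completions.create(
    model="DeepSeek-R1-0528",
    stream=True,
    messages=[{"role": "user", "content": "Hello"}],
    # Assumption: enable_thinking (from the ChatRequest schema) is merged
    # into the request body via the SDK's extra_body passthrough
    extra_body={"enable_thinking": True},
)
for chunk in stream:
    if not getattr(chunk, "choices", None):
        continue
    delta = chunk.choices[0].delta
    reasoning_content = getattr(delta, "reasoning_content", None)
    if reasoning_content:
        print(reasoning_content, end="", flush=True)
    content = getattr(delta, "content", None)
    if content:
        print(content, end="", flush=True)
```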