
OpenAI Services and APIs

2023-02-18


OpenAI

ChatGPT Call Prerequisites

ChatGPT Prompt Composition

Complete Example

import openai

openai.api_key = "YOUR API KEY HERE"
model_engine = "text-davinci-003"
chatbot_prompt = """
As an advanced chatbot, your primary goal is to assist users as much as possible. This may involve answering questions, providing helpful information, or completing tasks based on user input. To assist effectively, be detailed and thorough in your responses; use examples and evidence to support your points and justify your recommendations or solutions.

<conversation_history>

User: <user input>
Chatbot:"""


def get_response(conversation_history, user_input):
    prompt = chatbot_prompt.replace(
        "<conversation_history>", conversation_history).replace("<user input>", user_input)
    # Get the response from GPT-3
    response = openai.Completion.create(
        engine=model_engine, prompt=prompt, max_tokens=2048, n=1, stop=None, temperature=0.5)
    # Extract the response from the response object
    response_text = response["choices"][0]["text"]
    chatbot_response = response_text.strip()
    return chatbot_response

def main():
    conversation_history = ""
    while True:
        user_input = input("> ")
        if user_input == "exit":
            break
        chatbot_response = get_response(conversation_history, user_input)
        print(f"Chatbot: {chatbot_response}")
        conversation_history += f"User: {user_input}\nChatbot: {chatbot_response}\n"
main()

GPT-3 API vs ChatGPT Web

Two unofficial ChatGPT API approaches:

| Method | Free? | Reliability | Quality |
| --- | --- | --- | --- |
| ChatGPTAPI (GPT-3) | No | Reliable | Less capable |
| ChatGPTUnofficialProxyAPI (web accessToken) | Yes | Relatively unreliable | Smart |

Comparison:

  1. ChatGPTAPI uses text-davinci-003 to simulate ChatGPT via the official OpenAI completions API (the most robust approach, but it is not free and does not use a chat-tuned model)
  2. ChatGPTUnofficialProxyAPI uses an unofficial proxy server to reach ChatGPT's backend API, bypassing Cloudflare (uses the real ChatGPT; very lightweight, but it depends on a third-party server and is rate-limited)

【2023-2-26】chatgpt-web: a ChatGPT demo site built with Express and Vue3 that supports both an openAI key and a web accessToken

OpenAI Pricing

Note: OpenAI's priciest text model, Davinci, runs $0.02 per 750 words, while the image model costs only $0.020 per image.

  • GPT-3 paid API trial: a new account comes with $18 of credit. Calls cost 2 cents per 1,000 tokens (about $0.02 per 500 Chinese characters, since one Chinese character is roughly two tokens), i.e. roughly 0.1 RMB per 250 characters. That count covers both the prompt and the response (non-Chinese text costs less).
  • ChatGPT free quota per account: $18 = 1,800 cents; at ~250 Chinese characters per cent ($0.01 per 250 characters) and an average request of ~30 characters, that is 1800 × 250 ÷ 30 = 15,000 requests.
  • ChatGPT itself runs on gpt-3.5, which had no public API at the time.
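The arithmetic in these bullets can be packaged as a small helper. This is an illustrative sketch using the trial-era numbers quoted above ($0.02 per 1,000 tokens, ~2 tokens per Chinese character, ~30-character requests); the function name and defaults are this document's own, not anything in the OpenAI API.

```python
def estimate_requests(credit_usd=18.0, price_per_1k_tokens=0.02,
                      tokens_per_char=2, chars_per_request=30):
    """Estimate how many requests a trial credit buys (counting Chinese characters)."""
    total_tokens = credit_usd / price_per_1k_tokens * 1000  # tokens the credit buys
    total_chars = total_tokens / tokens_per_char            # ~2 tokens per Chinese char
    return round(total_chars / chars_per_request)

print(estimate_requests())  # the article's estimate: 15,000 requests
```

Plugging in the defaults reproduces the 15,000-request figure above; adjusting `chars_per_request` shows how quickly longer prompts eat the quota.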

OpenAI pricing details:

OpenAI Account Registration

Can't register from mainland China? Options:

  • ① Registration requires a foreign phone number; without one, use a virtual number (SMS verification codes cost ~1.2 RMB each, see details)
  • ② If that is too much hassle, search Taobao: sellers offer registration services for roughly 18 RMB, or sell ready-made accounts
  • ③ Some people run ChatGPT relay WeChat groups

Process Summary

Prerequisites

  • 1. An email account
    • Not a 163.com mailbox; OpenAI will refuse to register it
  • 2. The ability to bypass the firewall, i.e. a proxied network environment.
  • 3. A foreign phone number to receive the registration verification code.
    • If you don't have one, register a foreign virtual number through a third-party SMS-code platform; have about 1.5 RMB ready in Alipay.
    • Google Voice virtual numbers do not work
    • Recommended platform: sms-activate

Register and top up an SMS platform

  • First register a virtual number for receiving SMS online at SMS-Activate, then top up the account

See the site topic page on overseas phone numbers for details.

【2023-1-30】A quick guide to registering for OpenAI (ChatGPT) that works from mainland China

【2023-5-2】Virtual numbers get banned by OpenAI:

Your account was flagged for potential abuse. If you feel this is an error, please contact us at help.openai.com

Condensed Flow

Register an OpenAI account

  • OpenAI signup page errors and their fixes:
    • "Signup is currently unavailable, please try again later." Some countries are blocked; switch the proxy to global mode
    • "Too many signups from the same IP": per-IP registration limit
  • Email verification: enter an email address (commonly Gmail) and the platform sends a mail
    • Avoid 163 mailboxes (rejected as unavailable); QQ mail works
    • Use a VPN exit outside China (Hong Kong does not work), otherwise: "OpenAI's API is not available in your country"
  • Phone verification: open the mail and start phone verification
    • Top up the SMS platform (priced in rupees), pick a country (e.g. India), and request a virtual number
    • Enter the virtual number; after a few minutes the platform shows the activation code (e.g. 705139)
  • Enter the activation code; registration succeeds
  • Log in to OpenAI

OpenAI API Calls

The official API covers: text completion, code completion, chat completion, image generation, fine-tuning, embeddings, speech to text, and moderation.

Rate limits are measured in three ways:

  • RPM (requests per minute)
  • RPD (requests per day)
  • TPM (tokens per minute)
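On the client side these quotas are easy to overrun in a loop, so one common pattern is a sliding-window limiter that refuses a call once the window is full. A minimal sketch; the `RateLimiter` class is this document's own illustration, not part of the openai package.

```python
import time
from collections import deque

class RateLimiter:
    """Client-side sliding-window limiter for RPM-style quotas."""
    def __init__(self, max_requests, window_s=60.0, clock=time.monotonic):
        self.max_requests = max_requests
        self.window_s = window_s
        self.clock = clock
        self.stamps = deque()  # timestamps of recent requests

    def allow(self):
        now = self.clock()
        # drop timestamps that have left the window
        while self.stamps and now - self.stamps[0] >= self.window_s:
            self.stamps.popleft()
        if len(self.stamps) < self.max_requests:
            self.stamps.append(now)
            return True
        return False

limiter = RateLimiter(max_requests=60)  # e.g. a 60 RPM cap
if limiter.allow():
    pass  # safe to issue the API request here
```

The injectable `clock` keeps the class testable; in production you would sleep-and-retry when `allow()` returns False.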

openai tool

The openai Python package

Legacy versions: <= 0.28

pip install openai==0.28.1

Key module-level variables, defined in __init__.py#L50

Note:

  • 【2023-10-10】Changes to these openai package variables persist across calls; to return to the official endpoint, remember to reset all four parameters
import openai

openai.api_base # server URL, https://api.openai.com/v1
openai.api_key # your OpenAI key
openai.api_type # type: open_ai, or Microsoft's azure
openai.api_version # version; takes 2020-10-01 or 2020-11-07 (Microsoft: 2023-03-15-preview)
# ------ reset to the official configuration -------
openai.api_type = 'open_ai'
openai.api_base = 'https://api.openai.com/v1'
openai.api_key = "sk-******"
openai.api_version = '2020-11-07'
print(openai.api_base, openai.api_key, openai.api_type, openai.api_version, openai.app_info)
# ------ library source ------
api_key = os.environ.get("OPENAI_API_KEY")
# Path of a file with an API key, whose contents can change. Supercedes `api_key` if set.  The main use case is volume-mounted Kubernetes secrets, which are updated automatically.
api_key_path: Optional[str] = os.environ.get("OPENAI_API_KEY_PATH")

organization = os.environ.get("OPENAI_ORGANIZATION")
api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
api_type = os.environ.get("OPENAI_API_TYPE", "open_ai")
api_version = os.environ.get(
    "OPENAI_API_VERSION",
    ("2023-05-15" if api_type in ("azure", "azure_ad", "azuread") else None),
)
verify_ssl_certs = True  # No effect. Certificates are always verified.
proxy = None
app_info = None
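Since these four settings are plain module attributes, one way to honor the reset advice above is a context manager that snapshots them before overriding and restores them afterwards. This helper is the document's own sketch; it is demonstrated on a stand-in namespace, and in real use you would pass the imported `openai` module (0.28-era) instead.

```python
from contextlib import contextmanager
from types import SimpleNamespace

FIELDS = ("api_type", "api_base", "api_key", "api_version")

@contextmanager
def openai_config(module, **overrides):
    """Temporarily override the four module-level settings, then restore them."""
    saved = {f: getattr(module, f) for f in FIELDS}
    try:
        for k, v in overrides.items():
            setattr(module, k, v)
        yield module
    finally:
        for f, v in saved.items():
            setattr(module, f, v)

# demo on a stand-in object; in practice pass the imported `openai` module
cfg = SimpleNamespace(api_type="open_ai", api_base="https://api.openai.com/v1",
                      api_key="sk-***", api_version="2020-11-07")
with openai_config(cfg, api_type="azure", api_base="https://example-endpoint.openai.azure.com"):
    print(cfg.api_type)  # azure inside the block
print(cfg.api_type)      # open_ai again afterwards
```

The `finally` clause guarantees the official configuration comes back even if a call inside the block raises.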

Current versions: >= 1

The new version runs through an object-oriented client and supports async requests.

【2023-11-6】Microsoft's note:

Starting November 6, 2023, pip install openai and pip install openai --upgrade install version 1.x of the OpenAI Python library.

  • Upgrading from version 0.28.1 to 1.x is a breaking change that requires testing and code updates

Interface changes:

  • Legacy (<= 0.28): openai.ChatCompletion.create
  • New (>= 1.0.0): client.chat.completions.create
import os
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default; it can also be passed explicitly:
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

completion = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message.content)

Embedding

The official embedding model:

  • Embeddings are numerical representations of concepts converted to number sequences, which make it easy for computers to understand the relationships between those concepts.
  • The new model, text-embedding-ada-002, replaces five separate models for text search, text similarity, and code search, and outperforms our previous most capable model, Davinci, at most tasks, while being priced 99.8% lower.

【2022-1-25】Introducing text and code embeddings

Embeddings are useful for working with natural language and code, because they can be readily consumed and compared by other machine learning models and algorithms like clustering or search.

The new /embeddings endpoint in the OpenAI API provides text and code embeddings with a few lines of code

import openai
response = openai.Embedding.create(
    input="canine companions say",
    engine="text-similarity-davinci-001")

curl invocation

OPENAI_API_KEY="sk-******"
curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "input": "Your text string goes here",
    "model": "text-embedding-ada-002"
  }'

Response format:

  • 1536 dimensions
{
  "data": [
    {
      "embedding": [
        0.002092766109853983,
        ...
        0.0026526579167693853
      ],
      "index": 0,
      "object": "embedding"
    }
  ],
  "model": "text-embedding-ada-002-v2",
  "object": "list",
  "usage": {
    "prompt_tokens": 12,
    "total_tokens": 12
  }
}
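Once you have these vectors, similarity between two texts is typically measured with cosine similarity over the embedding dimensions. A dependency-free sketch; the tiny 3-dimensional vectors below are toy stand-ins for real 1536-dimensional ada-002 embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (e.g. 1536-dim ada-002)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v1 = [0.1, 0.2, 0.3]   # toy stand-ins for real embeddings
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, 0.0]
print(round(cosine_similarity(v1, v2), 6))  # 1.0 for identical vectors
print(round(cosine_similarity(v1, v3), 3))  # negative: pointing away from each other
```

For search over many documents you would embed each document once, store the vectors, and rank candidates by this score against the query embedding.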

Python invocation

Rate limiting:

  • requests per min. Limit: 60 / min
import openai

response = openai.Embedding.create(
  input="porcine pals say",
  model="text-embedding-ada-002"
)
# rate limiting: pause 1s after each request
import time
time.sleep(1)
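A fixed one-second pause works, but for rate-limit errors a common refinement is exponential backoff: wait 1s, 2s, 4s, ... between retries. A generic sketch; the `with_backoff` helper and the flaky demo function are illustrative, not part of the openai package.

```python
import time

def with_backoff(fn, retries=5, base_delay=1.0, exc=Exception, sleep=time.sleep):
    """Call fn(); on failure wait base_delay * 2**attempt before retrying."""
    for attempt in range(retries):
        try:
            return fn()
        except exc:
            if attempt == retries - 1:
                raise  # out of retries: propagate the error
            sleep(base_delay * 2 ** attempt)

# example: a flaky call that succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky, exc=RuntimeError, sleep=lambda s: None))  # ok
```

In real use you would wrap the `openai.Embedding.create` call in a lambda and catch the library's rate-limit exception instead of `RuntimeError`.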

Improved version

import openai

openai.api_key = "sk-***"

def emb(text):
    """
        embedding
    """
    res = {"code":0, "msg":"-", "data":{}}
    if not text:
        #print(f"Empty input! {text}")
        res.update({'code':-1, 'msg':'empty input'})
        return res
    # call the API
    response = openai.Embedding.create(
        input=text,
        model="text-embedding-ada-002"
    )
    return response['data'][0]['embedding']

def chat(text, model_name='gpt-3.5-turbo'):
    """
        openai chat call
    """
    res = {"code":0, "msg":"-", "data":{}}
    if not text:
        #print(f"Empty input! {text}")
        res.update({'code':-1, 'msg':'empty input'})
        return res
    # call ChatGPT
    completion = openai.ChatCompletion.create(
      #model="gpt-4", 
      #model="gpt-3.5-turbo", 
      model=model_name,
      max_tokens=100,
      temperature=1.2,
      messages=[{
          "role": "user", #  role (either "system", "user", or "assistant")
          "content": text}]
    )
    res['data']['role'] = completion['choices'][0]['message']['role']
    res['data']['content'] =  completion['choices'][0]['message']['content']
    return f"[{res['data']['role']}] {res['data']['content']}"
    #print(completion)

if __name__ == '__main__':
    test = "Hello, which plugins do you support?"
    res = chat(test)
    print(res)
    res = emb(test)
    print(len(res))

Go invocation

go-openai

// go get github.com/sashabaranov/go-openai

package main

import (
	"context"
	"fmt"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient("your token")
	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: openai.GPT3Dot5Turbo,
			Messages: []openai.ChatCompletionMessage{
				{
					Role:    openai.ChatMessageRoleUser,
					Content: "Hello!",
				},
			},
		},
	)

	if err != nil {
		fmt.Printf("ChatCompletion error: %v\n", err)
		return
	}

	fmt.Println(resp.Choices[0].Message.Content)
}

ChatGPT Calls

There are two API approaches:

  • Use ChatGPT itself: grab an access_token from the browser debugger and call the web endpoint after a simulated login
  • Use the official GPT-3 API
  • ChatGPT API: GPT-3.5

During the beta, calls were free with no request cap. API calls also need no VPN or proxy (a proxy may actually trigger "Error communicating with OpenAI"); an API key is all that's required, and the key itself was free to use at the time.

Most so-called "ChatGPT APIs" at the time were actually the OpenAI GPT-3 model endpoint, model name "text-davinci-003".

Installation

pip install openai    # install the package
pip show openai       # check the version, e.g. Version: 0.8.0
pip install -U openai # upgrade; fixes "module 'openai' has no attribute 'Image'" (needs Python 3.8+)

GPT-3 API (Completion)

The Completion Python interface:

import os
import openai

print("Welcome to the ChatGPT Q&A demo. Type your question after Q:, or quit to exit.")
openai.api_key = "<OpenAI_key>"  # your own API key, or load it from an environment variable
start_sequence = "\nA:"
restart_sequence = "\nQ: "
while True:
    prompt = input(restart_sequence)
    if prompt == 'quit':
        break
    else:
        try:
            response = openai.Completion.create(
              model="text-davinci-003", # the davinci-003 model, for higher accuracy
              prompt = prompt,
              temperature=1,
              max_tokens=2000, # caps the answer length; you can also constrain it in the prompt, e.g. "write a 300-word essay"
              frequency_penalty=0,
              presence_penalty=0
            )
            print(start_sequence, response["choices"][0]["text"].strip())
        except Exception as exc: # print any exception
            print(exc)

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
# ------- text generation ---------
prompt = """We’re releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task. You can now request access in order to integrate the API into your product, develop an entirely new application, or help us explore the strengths and limits of this technology."""

response = openai.Completion.create(model="davinci", prompt=prompt, stop="\n", temperature=0.9, max_tokens=100)

# ------- other applications ---------
response = openai.Completion.create(
  engine="davinci",
  prompt="The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: I'd like to cancel my subscription.\nAI:",
  temperature=0.9,
  max_tokens=150,
  top_p=1,
  frequency_penalty=0.0,
  presence_penalty=0.6,
  stop=["\n", " Human:", " AI:"]
)

print(response)

Calling the API with requests

  • An invocation built directly on requests:
import requests, json
api_key = "<OpenAI_key>" # your own API key
# headers
headers = {"Authorization": f"Bearer {api_key}"}
# GPT-3 endpoint
api_url = "https://api.openai.com/v1/completions"
# loop so you can keep asking
while True:
    prompt = input("Q: ")
    if prompt == 'quit':
        break
    # request parameters
    data = {'prompt': prompt,
            "model": "text-davinci-003",
            'max_tokens': 128,
            'temperature': 1,
            }
    # send the HTTP POST request
    response = requests.post(api_url, json=data, headers=headers)
    # parse the response
    resp = response.json()
    print("A:", resp["choices"][0]["text"].strip(), end="\n")

ChatGPT (GPT-3.5) API

【2023-3-2】OpenAI released the ChatGPT API (gpt-3.5-turbo); a call costs 1/10 of text-davinci-003.

Do not hard-code the API_KEY in source; OpenAI will ban keys that leak this way.

Invocation

Shell version

OPENAI_API_KEY="sk-***"
# Tencent Cloud function relay:
# curl https://service-4jhtjgo0-1317196971.hk.apigw.tencentcs.com/release \
curl https://api.openai.com/v1/chat/completions \
 -H "Authorization: Bearer $OPENAI_API_KEY" -H "Content-Type: application/json" \
 -d '{ "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "What is the OpenAI mission?"}] }'

Python version

import openai

# openai.api_type = 'open_ai'
# openai.api_base = 'https://api.openai.com/v1'
# openai.api_key = "sk-***"
# openai.api_version = '2020-11-07'
# print(openai.api_base, openai.api_key, openai.api_type, openai.api_version, openai.app_info)

openai.api_key = 'sk-***'

completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo", 
  # ---- function call -----
  # functions = function_list, # register callable functions
  # function_call="auto", # enable function calling
  messages=[{
      "role": "user", #  role (either "system", "user", or "assistant")
      "content": "Hi there, how are you?"}]
)
print(completion['choices'][0]['message']['role'], completion['choices'][0]['message']['content'])
print(completion)

gradio web

Web UI: built on gradio; code

import os
from functools import partial
import gradio as gr
import openai

class Messages_lst:
    def __init__(self):
        self.memory = []
    
    def update(self, role,message):
        if role == "user":
            user_turn = {"role": "user","content":message}
            self.memory.append(user_turn)
        elif role == "assistant":
            gpt_turn = {"role": "assistant","content":message}
            self.memory.append(gpt_turn)
    
    def get(self):
        return self.memory
    
messages_lst = Messages_lst()

def get_response(api_key_input, user_input):
    
    # print(api_key_input)
    print(user_input)

    messages_lst.update("user", user_input)
    messages = messages_lst.get()

    openai.api_key = api_key_input
    MODEL = "gpt-3.5-turbo"

    print(messages)

    response = openai.ChatCompletion.create(
        model=MODEL,
        messages = messages,
        temperature=0.5)
    assistant = response['choices'][0]['message']['content']
    messages_lst.update("assistant", assistant)
    # return assistant
    # build the HTML transcript
    html_string = ""
    for message in messages_lst.get():
        if message["role"] == "user":
            html_string += f"<p><b>User:</b> {message['content']}</p>"
        else:
            html_string += f"<p><b>Assistant:</b> {message['content']}</p>"

    return html_string

def main():
    # api_key = os.environ.get("OPENAI_API_KEY")

    api_key_input = gr.components.Textbox(
        lines=1,
        label="Enter OpenAI API Key",
        type="password",
    )

    user_input = gr.components.Textbox(
        lines=3,
        label="Enter your message",
    )
    

    output_history = gr.outputs.HTML(
        label="Updated Conversation",
    )

    inputs = [
        api_key_input,
        user_input,
    ]

    iface = gr.Interface(
        fn=get_response,
        inputs=inputs,
        outputs=[output_history],
        title="GPT WebUi",
        description="A simple chatbot using Gradio",
        allow_flagging="never",
    )

    iface.launch()

if __name__ == '__main__':
    main()

Web Invocation

web demo

Gradio web demo

import gradio as gr
import openai

openai.api_key = "sk-**"

def question_answer(role, question):
    if not question:
        return "Empty input..."
    completion = openai.ChatCompletion.create(
      model="gpt-3.5-turbo", 
      messages=[{
          "role": "user", #  role (either "system", "user", or "assistant")
          "content": question}
      ]
    )
    # return the result
    return (completion['choices'][0]['message']['role'], completion['choices'][0]['message']['content'])

gr.Interface(fn=question_answer, 
    # inputs=["text"], outputs=['text', "textbox"], # simplest form
    inputs=[gr.components.Dropdown(label="Role", placeholder="user", choices=['system', 'user', 'assistant']),
        gr.inputs.Textbox(lines=5, label="Input Text", placeholder="question / prompt...")
    ],
    outputs=[gr.outputs.Textbox(label="Role"), gr.outputs.Textbox(label="Generated Text")],
    # ["highlight", "json", "html"], # custom output formats: render the 3 outputs in 3 different widgets
    examples=[['Who are you?'], ['Do some math for me: what is six times 5?']],
    cache_examples=True, # cache example results
    title="ChatGPT Demo",
    description="A simplified version of DEMO [examples](https://gradio.app/demos/) "
).launch(share=True) # launch with a temporary public share link
#).launch() # local access only

ChatGPT Web Version

Original approach:

  • Grab the session_token from the ChatGPT page and hit the web endpoint directly with revChatGPT
  • But once ChatGPT added Cloudflare human verification, this approach became hard to run on a server.

Log in at the OpenAI site, press F12 to open the debugger, and find the session_token.

Access ChatGPT via access_token

from asyncChatGPT.asyncChatGPT import Chatbot
import asyncio
config = {
  "Authorization":"eyJhbGciOiJSUzI1NiIs....85w"
}
chatbot = Chatbot(config, conversation_id=None)
while True:
    text = input('Q:')
    if text == 'quit':
        break
    else:
        message = asyncio.run(chatbot.get_chat_response(text))['message']
        print('A:',message)

Access ChatGPT via session_token

from revChatGPT.revChatGPT import Chatbot
config = {
    "email": "<YOUR_EMAIL>",
    "password": "<YOUR_PASSWORD>",
    "session_token": "eyJhbGciOiJkaXIiLCJl....7Q"
}
chatbot = Chatbot(config, conversation_id=None)
while True:
    text = input("Q:")
    if text == 'quit':
        break
    else:
        response = chatbot.get_chat_response(text, output="text")
        print('A:',response['message'])

Building a web service with Python Flask

  • Install components: flask, flask-cors, gunicorn
  • Server code: the callOpenAI.py file
  • Start the service with python callOpenAI.py, then hit http://xx.xx.xx.xx:xxxx/callChatGPT?input=what is your name in a browser for development testing
  • Create wsgi.py for gunicorn
  • Create the gunicorn.conf file
  • Launch gunicorn to serve the API in production

Python components

  • (1) Flask for a quick server-side wrapper: pip install flask
  • (2) flask-cors to handle cross-origin requests: pip install flask-cors
  • (3) gunicorn as the WSGI server for Flask: pip install gunicorn
from flask import Flask,request
from flask_cors import CORS
import os
import openai
app = Flask(__name__)
CORS(app,supports_credentials=True)

@app.route('/',methods=['GET','POST'])
def hello_world():
	text=request.args.get('text')
	return text

@app.route('/callChatGPT',methods=['GET','POST'])
def callChatGPT():
	input = request.args.get('input')
	openai.api_key = "xxxxxxxx"
	#openai.api_key = os.getenv("OPENAI_API_KEY")
	response =  openai.Completion.create(model="text-davinci-003",prompt=input,temperature=0.5,max_tokens=500)
	return response.choices[0].text

if __name__ == "__main__":
	app.run(host='xx.xx.xx.xx',port=xxxx,debug=True)

wsgi.py

from callOpenAI import app

if __name__ == "__main__":
	app.run()

In the same directory, create a gunicorn.conf file with the following:

bind = "xx.xx.xx.xx:xxxx"
workers = 10
errorlog = "/var/www/chatGPT/gunicorn.error.log"
loglevel = "debug"
proc_name = "callChatGPT"

Run the following command to serve the API in production:

gunicorn --config gunicorn.conf wsgi:app

Calling from the front end with raw ajax can hit cross-origin errors; install flask-cors as shown above and configure it in the code to resolve this.

<html>
<head>  
<meta charset="utf-8" />
<title>chatGPT AI Q&A System</title>
 <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<style>
    .question-container {
        padding: 10px;
    }
    .questions {
        padding: 10px;
    }
    .answers {
        padding: 10px;
    }
</style>
</head>
 <body>
    <div class="question-container">
        <h2>安联资管 chatGPT AI Q&A System</h2>
        <form>
            <div class="questions">
                <label>Questions:</label>
                <input type="text" id="question" name="question" placeholder="Ask here..."/>
            </div>
            <div class="answers">
                <label>Answers:</label>
                <textarea name="answer" disabled placeholder="The answer will appear here..." ></textarea>
            </div>
            <input type="submit" value="Submit"/>
        </form>
    </div>
 <script>
    $(document).ready(function(){
         // Submit button click event
        $('form').on('submit', function(event){
            event.preventDefault();
             // Send the data to flask
            $.ajax({
              url: 'http://xx.xx.xx.xx:xxxx/callChatGPT',  // enter your flask endpoint here
              type: "GET",
              data: "input="+$('#question').val(),
              dataType: 'text',
              success: function(response) {
                console.log(JSON.stringify(response))
                  // check response and update answer box
                  if (response) {
                      alert("success");
                      $('.answers textarea').val(response);
                  } else {
                      alert("No answer found; please ask again.");
                  }
              },
              error: function(xhr) {
                alert("Error: " + xhr.status + " " + xhr.statusText);
              }
            });
        });
    });
</script>
 </body>
</html>

Note: since the callChatGPT endpoint returns response.choices[0].text, which is plain text, the front end must set dataType to text and use the response directly as text without parsing it; otherwise it raises an error.

Reference: a step-by-step guide to building a chatGPT-based bot

js+html

Calling from a web page:

<html>
<script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>
<script src="https://unpkg.com/axios/dist/axios.min.js"></script>
<head>
    <title> ChatGPT Demo </title>
</head>

<body>
<div id="app" style="display: flex;flex-flow: column;margin: 20 ">
    <scroll-view scroll-with-animation scroll-y="true" style="width: 100%;">
        <!-- 用来获取消息体高度 -->
        <view id="okk" scroll-with-animation>
            <!-- 消息 -->
            <view v-for="(x,i) in msgList" :key="i">
                <!-- 用户消息 头像可选加入-->
                <view v-if="x.my" style="display: flex;
                flex-direction: column;
                align-items: flex-end;">
                    <view style="width: 400rpx;">
                        <view style="border-radius: 35rpx;">
                            <text style="word-break: break-all;"></text>
                        </view>
                    </view>
                </view>
                <!-- 机器人消息 -->
                <view v-if="!x.my" style="display: flex;
                flex-direction: row;
                align-items: flex-start;">

                    <view style="width: 500rpx;">
                        <view style="border-radius: 35rpx;background-color: #f9f9f9;">
                            <text style="word-break: break-all;"></text>
                        </view>
                    </view>
                </view>
            </view>
            <view style="height: 130rpx;">
            </view>
        </view>
    </scroll-view>
    <!-- 底部导航栏 -->
    <view style="position: fixed;bottom:0px;width: 100%;display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;">
        <view style="font-size: 55rpx;display: flex;
        flex-direction: row;
        justify-content: space-around;
        align-items: center;width: 75%;
    margin: 20;">
            <input v-model="msg" type="text" style="width: 75%;
            height: 45px;
            border-radius: 50px;
            padding-left: 20px;
            margin-left: 10px;background-color: #f0f0f0;" @confirm="sendMsg" confirm-type="search"
                placeholder-class="my-neirong-sm" placeholder="Describe your question in one short sentence" />
            <button @click="sendMsg" :disabled="msgLoad" style="height: 45px;
            width: 20%;;
    color: #030303;    border-radius: 2500px;"></button>
        </view>
    </view>
    </view>
</div>
</body>
</html>
<script>
    const { createApp } = Vue
    createApp({
        data() {
            return {
                api: 'sk-***', // redacted; use your own key
                msgLoad: false,
                anData: {},
                sentext: 'Send',

                animationData: {},
                showTow: false,
                msgList: [{
                    my: false,
                    msg: "Hi, I'm an OpenAI bot. What can I help you with?"
                }],
                msgContent: "",
                msg: ""
            }
        },
        methods: {
            sendMsg() {
                // do nothing if the message is empty
                if (this.msg == "") {
                    return 0;
                }
                this.sentext = 'Requesting'
                this.msgList.push({
                    "msg": this.msg,
                    "my": true
                })
                console.log(this.msg);
                this.msgContent += ('YOU:' + this.msg + "\n")
                this.msgLoad = true
                // clear the input
                this.msg = ""
                axios.post('https://api.openai.com/v1/completions', {
                    prompt: this.msgContent, max_tokens: 2048, model: "text-davinci-003"
                }, {
                    headers: { 'content-type': 'application/json', 'Authorization': 'Bearer ' + this.api }
                }).then(res => {
                    console.log(res);
                    //let text = res.data.choices[0].text.replace("OpenAI:", "").replace("OpenAI:", "").replace(/^\n|\n$/g, "")
                    //let text = res.data.choices[0].text.replace(/^\n|\n$/g, "");
                    let text = res.data.choices[0].text.replace("\n", "<br>").replace(" ", "&nbsp;");
                    console.log(text);
                    this.msgList.push({
                        "msg": text,
                        "my": false
                    })
                    this.msgContent += (text + "\n")
                    this.msgLoad = false
                    this.sentext = 'Send'
                })
            },
        }
    }).mount('#app')
</script>

Mobile App

【2023-2-11】A CCTV segment showed someone in Taiwan demoing VoiceGPT; see VoiceGPT APK Download (version 1.35). Android only for now, and it needs a proxy to use.

Building a phone UI for ChatGPT with Kivy:

  • A phone-oriented client written with kivy; packaging currently fails, so it only runs on desktop.
  • Packaging on Google Colab installs on Android, but the app crashes on launch; cause unknown.

Install the following packages:

python -m pip install docutils pygments pypiwin32 kivy.deps.sdl2 kivy.deps.glew
python -m pip install kivy.deps.gstreamer
python -m pip install kivy
python -m pip install kivy_examples
# if downloads are slow, switch to a mirror
python -m pip install kivy -i https://pypi.tuna.tsinghua.edu.cn/simple

Code

from kivy.app import App
from kivy.core.window import Window
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
import openai
import pyperclip

openai.api_key = "<OpenAI_key>" # replace with your own key

class Application(BoxLayout):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.orientation = "vertical"
        self.spacing = 5
        self.padding = 5
        self.create_widgets()
        Window.bind(on_request_close=self.end_func) # bind so the window closes cleanly

    def end_func(self, *args):
        Window.close()

    def create_widgets(self):
        # output text box
        self.txinfo = TextInput(font_name='SIMSUN.TTC', font_size=18)
        self.txinfo.text = "Welcome to OpenAI. Author: Gordon QQ/VX 403096966. Press Esc to quit."
        self.add_widget(self.txinfo)

        # input box
        self.entry = TextInput(font_name='SIMSUN.TTC', font_size=18)
        self.add_widget(self.entry)

        # buttons
        self.btn = Button(text="Send request", font_name="SIMSUN.TTC", bold=True, font_size=20, on_release=self.button_func)
        self.add_widget(self.btn)
        self.btcopy = Button(text="Copy answer", font_name="SIMSUN.TTC", bold=True, font_size=20, on_release=self.button_copy)
        self.add_widget(self.btcopy)

    def button_copy(self, instance):
        pyperclip.copy(self.txinfo.text)

    def button_func(self, instance):
        prompt = self.entry.text
        if prompt != "":
            model_engine = "text-davinci-003"
            completions = openai.Completion.create(
                engine=model_engine,
                prompt=prompt,
                max_tokens=1024,
                temperature=1,
            )
            message = completions.choices[0].text
            self.txinfo.insert_text("\n\nQ: " + prompt + "\nA: " + message.strip())
        self.entry.text = ''

# Note: don't name the app class `OpenAI`, or it shadows the imported module
class OpenAIApp(App):
    def build(self):
        return Application()

if __name__ == '__main__':
    OpenAIApp().run()

Custom API Base

【2023-9-1】You can point the openai package at a custom base_url for more control:

  • Switch to an internal service, working around access restrictions
  • Use a custom api key
  • Call it exactly like OpenAI, provided the internal service implements the OpenAI-compatible interface
import openai
# e.g. an internal endpoint such as http://10.154.44.82:9490/v1
openai.api_base = 'http://.....'
openai.api_key = "---"

Examples:

  • Microsoft Azure cloud's OpenAI service
  • Third-party proxies, e.g.:

The openai package (official endpoint)

import openai

openai.api_key = "sk-..."
openai.organization = "..."

# API examples
completion = openai.Completion.create(
    prompt="<prompt>",
    model="text-davinci-003"
)
  
chat_completion = openai.ChatCompletion.create(
    messages="<messages>",
    model="gpt-4"
)

embedding = openai.Embedding.create(
  input="<input>",
  model="text-embedding-ada-002"
)
# batch input
inputs = ["A", "B", "C"] 

embedding = openai.Embedding.create(
  input=inputs,
  model="text-embedding-ada-002"
)

Azure API

The Microsoft Azure OpenAI variant:

import openai

openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-05-15"  # subject to change

# API calls
# use deployment_id/engine in place of the model parameter
completion = openai.Completion.create(
    prompt="<prompt>",
    deployment_id="text-davinci-003",
    engine="text-davinci-003" 
)
  
chat_completion = openai.ChatCompletion.create(
    messages="<messages>",
    deployment_id="gpt-4",
    engine="gpt-4"
)

embedding = openai.Embedding.create(
  input="<input>",
  deployment_id="text-embedding-ada-002",
  engine="text-embedding-ada-002"
)
# batch input
inputs = ["A", "B", "C"] # max array size = 16

embedding = openai.Embedding.create(
  input=inputs,
  deployment_id="text-embedding-ada-002",
  engine="text-embedding-ada-002"
)

Condensed version

import os
import openai

openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") 
openai.api_key = os.getenv("AZURE_OPENAI_KEY")
openai.api_version = "2023-05-15"

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo", # engine = "deployment_name".
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
        {"role": "user", "content": "Do other Azure AI services support this too?"}
    ]
)

print(response)
print(response['choices'][0]['message']['content'])

【2023-11-6】Microsoft's migration notes

Key differences:

import os
# old: import openai
from openai import AzureOpenAI

# old configuration:
# openai.api_type = "azure"
# openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") 
# openai.api_key = os.getenv("AZURE_OPENAI_KEY")
# openai.api_version = "2023-05-15"

client = AzureOpenAI(
  azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), 
  api_key=os.getenv("AZURE_OPENAI_KEY"),  
  api_version="2023-05-15"
)
# old: response = openai.ChatCompletion.create(
response = client.chat.completions.create(
    # engine="gpt-35-turbo", # old: the parameter was named engine
    model="gpt-35-turbo", # new: the parameter is named model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
        {"role": "user", "content": "Do other Azure AI services support this too?"}
    ]
)
# print(response['choices'][0]['message']['content']) # old style
print(response.choices[0].message.content)

GPT Fine-tuning

See the site topic page: LLM fine-tuning

Enterprise Edition

【2023-8-28】OpenAI announced ChatGPT Enterprise, its most capable ChatGPT offering to date: enterprise-grade security and privacy, unlimited high-speed GPT-4 access, a longer context window for longer inputs, advanced data analysis, customization options, and more. The goal is to attract a broader set of business customers and grow product revenue.

ChatGPT Enterprise removes all usage caps and runs twice as fast. It includes 32k context, letting users process inputs or files four times as long, and gives unrestricted access to Advanced Data Analysis.

"This feature enables both technical and non-technical teams to analyze information in seconds, whether it's financial researchers crunching market data, marketers analyzing survey results, or data scientists debugging an ETL script."

  • Unlimited access to GPT-4 (no usage caps)
  • Higher-speed GPT-4 performance (2x faster)
  • Unlimited access to Advanced Data Analysis (formerly Code Interpreter)
  • 32k-token context window for 4x longer inputs, files, or follow-ups
  • Shareable chat templates for company-wide collaboration and common workflows
  • In addition, ChatGPT Enterprise provides encryption at rest (AES-256) and in transit (TLS 1.2+), and has been audited and certified for SOC 2 Type 1 compliance.

OpenAI also pledges not to train its models on customer data.

ChatGPT now has three subscription tiers: Free, Plus, and Enterprise. OpenAI has not published a flat Enterprise price; it depends on each company's usage and use case, so you have to ask for a quote.

GPT-4 API

[2023-7-10] GPT-4 unavailable

GPT-4 pricing comparison

[2023-3-23] GPT-4 API access and pricing analysis

Comparing the per-token prices of several models:

  • gpt-4 prompt tokens cost 14x more than gpt-3.5-turbo, and gpt-4 completion tokens cost 29x more. Assuming a 1:4 ratio of prompt to completion tokens (in practice the completion is often longer than the prompt), the blended cost of the gpt-4 API is about 27x that of gpt-3.5-turbo.
  • $20 buys roughly 7.5 million Chinese characters of processing with gpt-3.5-turbo, but only about 300k characters with gpt-4.
Price multiple by which the row model exceeds the column model, computed as (row price − column price) / column price:

| Model | gpt-4 completion ($0.06) | gpt-4 prompt ($0.03) | gpt-3.5-turbo ($0.002) | davinci ($0.02) | curie ($0.002) | babbage ($0.0005) | ada ($0.0004) |
|---|---|---|---|---|---|---|---|
| gpt-4 (completion) | 0 | 1 | 29 | 2 | 29 | 119 | 149 |
| gpt-4 (prompt) | -0.5 | 0 | 14 | 0.5 | 14 | 59 | 74 |

GPT-4 model versions

| Model | Description | Max tokens | Training data |
|---|---|---|---|
| gpt-4 | More capable than the GPT-3.5 models, able to perform more complex tasks, optimized for chat. Continuously updated. | 8,192 | Up to Jun 2021 |
| gpt-4-0314 | Snapshot of gpt-4 from March 14, 2023. Will not receive updates; supported until June 14, 2023. | 8,192 | Up to Oct 2019 |
| gpt-4-32k | Same capabilities as gpt-4 but with 4x the context length. Continuously updated. | 32,768 | Up to Jun 2021 |
| gpt-4-32k-0314 | Snapshot of gpt-4-32k from March 14, 2023. Will not receive updates; supported until June 14, 2023. | 32,768 | Up to Oct 2019 |

While still in beta, GPT-4 API calls are rate-limited:

  • 40k tokens / minute
  • 200 requests / minute

These limits are more than enough for functional testing and proof-of-concept work.

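To stay under such caps client-side, a simple sliding-window limiter is enough. A minimal sketch, assuming the 200 requests/min and 40k tokens/min figures above; the class name and the injectable clock are our own, not part of any OpenAI SDK:

```python
import time

class RateLimiter:
    """Sliding-window limiter: at most max_requests calls and
    max_tokens tokens within any `window` seconds."""
    def __init__(self, max_requests=200, max_tokens=40_000,
                 window=60.0, clock=time.monotonic):
        self.max_requests = max_requests
        self.max_tokens = max_tokens
        self.window = window
        self.clock = clock      # injectable for deterministic testing
        self.events = []        # list of (timestamp, tokens)

    def allow(self, tokens):
        """Return True and record the call if it fits in the current window."""
        now = self.clock()
        # drop events that have aged out of the window
        self.events = [(t, n) for t, n in self.events
                       if now - t < self.window]
        used = sum(n for _, n in self.events)
        if len(self.events) >= self.max_requests or used + tokens > self.max_tokens:
            return False
        self.events.append((now, tokens))
        return True
```

A caller would check `allow(estimated_tokens)` before each request and sleep/retry when it returns False.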
If you access GPT-4 through ChatGPT Plus, there is a cap of 100 messages per 4 hours.

GPT-4's pricing differs from earlier models. Before GPT-4, the API charged a single per-token rate with no distinction between prompt tokens and generated-response tokens. GPT-4 prices them separately:

  • $0.03 / 1K prompt tokens
  • $0.06 / 1K completion tokens

Compared with gpt-3.5-turbo at $0.002 / 1K tokens, this is at least 15x more expensive.

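Under split pricing, the cost of one call is simple arithmetic over the usage counts the API returns. A small sketch; the price table is hard-coded from the March 2023 figures quoted above and the helper name is our own:

```python
# Per-1K-token USD prices as quoted above (March 2023)
PRICES = {
    "gpt-4":         {"prompt": 0.03,  "completion": 0.06},
    "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
}

def request_cost(model, prompt_tokens, completion_tokens):
    """USD cost of one request: each token class billed at its own rate."""
    p = PRICES[model]
    return (prompt_tokens / 1000) * p["prompt"] \
         + (completion_tokens / 1000) * p["completion"]

# The 1:4 prompt:completion ratio used in the comparison above
gpt4  = request_cost("gpt-4", 1000, 4000)           # 0.03 + 0.24
turbo = request_cost("gpt-3.5-turbo", 1000, 4000)   # 0.002 + 0.008
print(gpt4 / turbo)  # ~27, matching the blended-cost estimate above
```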
GPT-4 API usage

[2023-3-24] Using GPT-4

[2023-5-20] Some users report that the gpt-4 option does not appear even after upgrading to Plus

GPT-4 API Models

  • model = gpt-4
  • model = gpt-4-32k

import openai

# Ask directly as the user
messages = [{"role": "user", "content": "As an intelligent AI model, if you could be any fictional character, who would you choose and why?"}]
# Multi-turn input: pass a system prompt first
# messages = [{"role": "system", "content": system_intel},
#             {"role": "user", "content": prompt}]

response = openai.ChatCompletion.create(
    model="gpt-4", max_tokens=100,
    # model="gpt-4-32k", max_tokens=32768,
    temperature=1.2,
    messages=messages)

print(response)

Third-party access

from steamship import Steamship
# !pip install steamship
gpt = Steamship.use_plugin("gpt-4")
task = gpt.generate("你好")
task.wait()

ChatGPT parameters

API example

# CLI equivalent:
# openai api completions.create -m text-davinci-003 -p "Say this is a test" -t 0 -M 7 --stream
import openai

openai.api_key = "YOUR API KEY"
# openai.Model.list()  # list available models
response = openai.Completion.create(
  model="text-davinci-003",  # model name
  prompt="how are you",      # the prompt/question
  temperature=0.7,  # randomness, 0-0.9 (stable -> random)
  max_tokens=256,   # max completion tokens (a Chinese character is ~2 tokens)
  stream=False,     # streaming switch (ChatGPT-style parameter)
  top_p=1,          # nucleus sampling: keep the top probability mass
  frequency_penalty=0, 
  presence_penalty=0
)
# print(response)
# with stream=False the whole completion is returned at once:
res = response["choices"][0]["text"]
res = res.replace('<|im_end|>', '')
print(res)

The response looks like the following; the completion is in the text field and can be read via response["choices"][0]["text"].

{
  "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
  "object": "text_completion",
  "created": 1589478378,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}

Reference: early access to the official ChatGPT API

[2023-2-11] GPT-3 model parameter notes: official docs

| LATEST MODEL | DESCRIPTION | MAX REQUEST | TRAINING DATA |
|---|---|---|---|
| text-davinci-003 | Most capable GPT-3 model. Can do any task the other models can do, often with higher quality, longer output and better instruction-following. Also supports inserting completions within text. | 4,000 tokens | Up to Jun 2021 |
| text-curie-001 | Very capable, but faster and lower cost than Davinci. | 2,048 tokens | Up to Oct 2019 |
| text-babbage-001 | Capable of straightforward tasks, very fast, and lower cost. | 2,048 tokens | Up to Oct 2019 |
| text-ada-001 | Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost. | 2,048 tokens | Up to Oct 2019 |

While Davinci is generally the most capable, the other models can perform certain tasks extremely well with significant speed or cost advantages. For example, Curie can perform many of the same tasks as Davinci, but faster and for 1/10th the cost.

We recommend using Davinci while experimenting since it will yield the best results. Once you’ve got things working, we encourage trying the other models to see if you can get the same results with lower latency. You may also be able to improve the other models’ performance by fine-tuning them on a specific task.

Older versions of our GPT-3 models are available as davinci, curie, babbage, and ada. These are meant to be used with our fine-tuning endpoints.

Your model can be one of: ada, babbage, curie, or davinci

Each model is billed differently, davinci being the most expensive. In practice only davinci produced acceptable results, and the $18 trial quota works out to roughly 1,000+ questions on it.

  • compare

How to list the available models? Example using the HTTP interface from Python:

import requests
import json
import openai  # assumes openai.api_key has already been set

headers = {'Authorization': f'Bearer {openai.api_key}'}
url = 'https://api.openai.com/v1/models'  # list available models
r = requests.get(url, headers=headers)
res = json.loads(r.text)  # parsed response body
print(json.dumps(res))
# ------------------
import pandas as pd
import datetime

info_list = []
for m in res['data']:
    # creation time, human readable
    tm = datetime.datetime.fromtimestamp(m['permission'][0]['created']).strftime('%Y-%m-%d %H:%M:%S')
    out = [m['id'],
           tm,
           m['permission'][0]['allow_create_engine'],
           m['permission'][0]['allow_sampling'],
           m['permission'][0]['allow_logprobs'],
           m['permission'][0]['allow_view'],
           m['permission'][0]['allow_fine_tuning'],
           m['permission'][0]['is_blocking'],
          ]
    info_list.append(out)
df = pd.DataFrame(info_list, columns=['id', 'create_time', 'allow_create_engine', 'allow_sampling',
                                      'allow_logprobs', 'allow_view', 'allow_fine_tuning', 'is_blocking'])
df = df.sort_values('create_time', ascending=False)
print(df.to_markdown())  # output as a markdown table

Sample output:

| | id | create_time | allow_create_engine | allow_sampling | allow_logprobs | allow_view | allow_fine_tuning | is_blocking |
|---|---|---|---|---|---|---|---|---|
| 0 | babbage | 2022-11-22 10:51:41 | False | True | True | True | False | False |
| 1 | code-davinci-002 | 2023-02-11 05:26:08 | False | True | True | True | False | False |
| 2 | davinci | 2022-11-22 05:32:35 | False | True | True | True | False | False |

GPT-3 parameters

The GPT-3 completion endpoint takes these main input parameters (see the official docs):

  • (1) model: model name, e.g. text-davinci-003
    • string, Required
    • ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
  • (2) prompt: the question or text to complete, e.g. "how are you".
    • string or array, Optional, Defaults to <|endoftext|> (the document separator, used as the initial prompt)
    • The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
    • Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
  • (3) temperature: controls randomness; 0.0 gives deterministic results, while values around 0.9 give highly varied ones.
    • number, Optional, Defaults to 1
    • What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
    • We generally recommend altering this or top_p but not both.
  • (4) max_tokens: maximum number of tokens in the completion. Prompt tokens plus max_tokens must fit in the model's context window; a Chinese character usually costs two tokens.
    • The newest models allow 4,097 tokens in total (most others 2,048), so max_tokens can be at most 4,097 minus the prompt's token count.
    • max_tokens, integer, Optional, Defaults to 16
    • The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model’s context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
  • (5) top_p: usually left at 1
    • top_p, number, Optional, Defaults to 1
    • An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
    • We generally recommend altering this or temperature but not both.
  • n: how many completions to generate per prompt (consumes quota, use with care)
    • integer, Optional, Defaults to 1
    • How many completions to generate for each prompt.
    • Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
  • (6) frequency_penalty: usually left at 0.
  • (7) presence_penalty: usually left at 0.
  • (8) stream: whether to stream the output (new with ChatGPT).
    • (1) With stream=False, the response matches the GPT-3 interface: the full text is returned at once and read via response["choices"][0]["text"]. The longer the output, the longer the wait; as a yardstick, streamed output arrives at roughly 4 characters per second.
    • (2) With stream=True, the response is a Python generator that must be iterated to collect the result, at roughly 4 characters per second (134 chars in 33s, 157 chars in 39s). The end-of-stream marker is "<|im_end|>".
    • stream: boolean, Optional, Defaults to false
    • Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
  • logprobs: log-probabilities of candidate tokens
    • logprobs: integer, Optional, Defaults to null
    • Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.
    • The maximum value for logprobs is 5. If you need more than this, please contact us through our Help center and describe your use case.
  • suffix: text placed after the inserted completion (for insert mode)
    • string, Optional, Defaults to null
    • The suffix that comes after a completion of inserted text.
  • echo: return the prompt along with the completion
    • echo: boolean, Optional, Defaults to false
    • Echo back the prompt in addition to the completion
  • stop: stop sequences; generation halts before emitting them
    • stop: string or array, Optional, Defaults to null
    • Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
  • presence_penalty: presence penalty
    • number, Optional, Defaults to 0
    • Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
  • frequency_penalty: frequency penalty
    • number, Optional, Defaults to 0
    • Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
  • best_of
    • integer, Optional, Defaults to 1
    • Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token). Results cannot be streamed.
    • When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n.
    • Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
  • logit_bias: per-token probability bias
    • map, Optional, Defaults to null
    • Modify the likelihood of specified tokens appearing in the completion.
    • Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
    • As an example, you can pass {“50256”: -100} to prevent the <|endoftext|> token from being generated.
  • user: a unique end-user identifier, which helps OpenAI detect abusive usage
    • string, Optional
    • A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

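Since prompt tokens plus max_tokens must fit in the context window, the usable completion budget can be computed up front. A hedged sketch, using the context sizes from the model table above; the helper name is our own:

```python
# Context window sizes from the model table above
CONTEXT = {
    "text-davinci-003": 4097,
    "text-curie-001":   2048,
    "text-babbage-001": 2048,
    "text-ada-001":     2048,
}

def completion_budget(model, prompt_tokens):
    """Largest max_tokens value that still fits:
    prompt_tokens + max_tokens <= context length."""
    return max(0, CONTEXT[model] - prompt_tokens)

print(completion_budget("text-davinci-003", 97))  # 4000
print(completion_budget("text-curie-001", 3000))  # 0 (prompt already too long)
```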
ChatGPT parameter details

Chat

  • Given a chat conversation, the model will return a chat completion response.

Request body (official docs):

  • model, string, Required: model name
  • messages, array, Required: the prompt messages
    • The messages to generate chat completions for, in the chat format.
  • temperature, number, Optional, Defaults to 1: sampling temperature, 0-2; higher values (0.8) give more random output, lower values (0.2) more stable output
    • What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
    • We generally recommend altering this or top_p but not both.
  • top_p, number, Optional, Defaults to 1: nucleus sampling; only tokens within the top_p probability mass are considered
    • An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
    • We generally recommend altering this or temperature but not both.
  • n, integer, Optional, Defaults to 1: number of replies to generate
    • How many chat completion choices to generate for each input message.
  • stream, boolean, Optional, Defaults to false: stream the output; off by default
    • If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. See the OpenAI Cookbook for example code.
  • stop, string or array, Optional, Defaults to null: up to 4 stop sequences at which generation halts
    • Up to 4 sequences where the API will stop generating further tokens.
  • max_tokens, integer, Optional, Defaults to inf: maximum completion length
    • The maximum number of tokens to generate in the chat completion.
    • The total length of input tokens and generated tokens is limited by the model’s context length.
  • presence_penalty, number, Optional, Defaults to 0: -2 to 2; positive values penalize tokens that have already appeared
    • Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
    • See more information about frequency and presence penalties.
  • frequency_penalty, number, Optional, Defaults to 0: -2 to 2; positive values penalize tokens in proportion to their frequency so far
    • Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
    • See more information about frequency and presence penalties.
  • logit_bias, map, Optional, Defaults to null: probability bias that boosts or suppresses specific tokens
    • Modify the likelihood of specified tokens appearing in the completion.
    • Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
  • user, string, Optional: end-user identifier
    • A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

curl

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Parameters

{
  "model": "gpt-3.5-turbo",
  "messages": [{
    "role": "user", 
    "name": "Wang", // new field
    "content": "Hello!"
    }]
}

[2023-6-24] The name parameter, per the official OpenAI docs:

name

The name of the author of this message. name is required if role is function, and it should be the name of the function whose response is in the content. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.

In practice: name must match '^[a-zA-Z0-9_-]{1,64}$', and even when a valid English string is supplied, OpenAI does not actually treat it as the user's name:

  • question: Hello, do you know who I am?
  • answer: assistant Hello! I'm sorry, but as an AI assistant I have no way of knowing who you are.

Response

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
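Reading a chat response then comes down to indexing into choices and checking finish_reason, which signals whether the reply was cut off by max_tokens. A small illustrative helper; the function name is our own:

```python
def extract_reply(response):
    """Return (text, truncated) from a chat.completion response dict.
    finish_reason == "length" means the reply was cut off by max_tokens."""
    choice = response["choices"][0]
    return choice["message"]["content"], choice["finish_reason"] == "length"

sample = {
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hello there!"},
        "finish_reason": "stop",
    }]
}
print(extract_reply(sample))  # ('Hello there!', False)
```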

Fine-tuning for GPT-3.5 Turbo handles 4k tokens, twice the limit of previous fine-tuned models. Early testers also shortened their prompts by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs.

Streaming output

Benefits of streaming:

  • GPT returns results while it is still generating, so responsiveness improves dramatically;
  • It also markedly improves the user experience: the reply unfolds like a real conversation, as if GPT were thinking through the answer.

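With stream=True, the Chat API yields chunks whose choices[0]["delta"] carries an incremental piece of the reply, and concatenating the deltas reconstructs the full message. A sketch of the accumulation loop, exercised here with fake chunks instead of a live API call (the function name is our own):

```python
def collect_stream(chunks):
    """Concatenate the incremental delta payloads of a streamed chat reply."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        # role-only and final chunks carry no "content" key
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Fake chunks shaped like the real stream (role first, then content pieces)
fake = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(collect_stream(fake))  # Hello!
```

In real use, `chunks` would be the generator returned by the streaming API call, and each piece would be flushed to the UI as it arrives rather than collected.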
How it works

[2023-8-28] How does ChatGPT achieve its smooth typewriter-style streaming replies?

  • Server-Sent Events: a technique for the server to push data to the client. It resembles WebSocket, but SSE does not let the client send messages to the server, i.e. SSE is one-way (simplex) communication.
    • The server holds a long-lived connection and pushes messages continuously. The server is the river's upstream and the client its downstream; data flows downhill, which is exactly SSE's streaming model.
  • WebSocket

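On the wire, SSE frames are plain text lines of the form `data: <payload>` separated by blank lines, and OpenAI terminates the stream with `data: [DONE]`. A minimal parser sketch for that framing (function name is our own; real clients parse the stream incrementally rather than from a complete string):

```python
def parse_sse(raw):
    """Extract the data payloads from an SSE text stream,
    stopping at OpenAI's [DONE] sentinel."""
    payloads = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            payload = line[len("data: "):]
            if payload == "[DONE]":
                break
            payloads.append(payload)
    return payloads

raw = 'data: {"a": 1}\n\ndata: {"b": 2}\n\ndata: [DONE]\n\n'
print(parse_sse(raw))  # ['{"a": 1}', '{"b": 2}']
```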
ChatGPT streaming output

The (third-party) SDK provides two OpenAI clients, OpenAiClient and OpenAiStreamClient:

  • OpenAiClient supports all OpenAI endpoints, with blocking output for the chat models (GPT-3.5, 4.0).
  • OpenAiStreamClient supports streaming output for the chat models (GPT-3.5, 4.0).

It is recommended to build both clients on a custom OkHttpClient, sharing a single OkHttpClient instance.

Streaming works like blocking output: create an OpenAiStreamClient and pass in a custom EventSourceListener.

The SDK's default implementation is ConsoleEventSourceListener.

Web client support:

| Streaming method | Mini-program | Android | iOS | H5 |
|---|---|---|---|---|
| SSE (example: OpenAISSEEventSourceListener) | Not supported | Supported | Supported | Supported |
| WebSocket (example: OpenAIWebSocketEventSourceListener) | Supported | Supported | Supported | Supported |

Upgrading to Plus

OpenAI's upgrade flow does not accept Chinese credit cards; PayPal does not work either.

Workarounds:

  1. Ask a friend with a US credit card to pay on your behalf. This is the first choice: simple, direct, low fees. If you have no such channel, use one of the alternatives below.
  2. Register a virtual credit card, e.g. on nobepay or depay.
  3. Buy a gift card (US Apple ID only):
    • Prepare a US-region Apple ID
    • Download the ChatGPT app from the App Store
    • Open the app and subscribe to ChatGPT Plus
    • Buy a gift card with Alipay, e.g. via pockytShop or the App Store

Upgrade flow

[2023-3-28] How to become a paying user? The official channel requires an overseas bank card, which is hard to arrange.

Upgrading ChatGPT to a Plus membership

Two options: Depay and nobepay

  • (1) Depay: if you hold USDT (cryptocurrency), choose Depay; KYC verification is optional. Depay only accepts crypto deposits.
    • Sign up with a phone number and email; a mainland-China phone number works.
  • (2) nobepay: if you have no crypto, choose nobepay; it can be topped up via Alipay or WeChat but requires identity verification.

nobepay supports WeChat and Alipay top-ups. Depay only supports cryptocurrency: deposit USDT, convert it to USD, then spend.

Apply for a card; both Visa and Mastercard are available.

Caveats:

  1. Pay with a global proxy on a US route; mainland-China IPs will fail, as will Hong Kong.
  2. nobepay also works for overseas shopping; the minimum top-up is 500. Do not keep a large balance on the platform in case it disappears; treat it purely as a tool.
  3. The same goes for depay: it is used here only as a tool, its stability cannot be guaranteed, so do not deposit more than needed.
  4. If you have access to a US credit card, prefer it: lower fees and simpler.

Which card BIN to choose for OpenAI payments:

  • ChatGPT/OpenAI: any BIN except the European 474362; newly launched BINs are recommended
  • Declines are mostly IP-related; if rejected, switch IPs and retry

US states with no sales tax (see address generators):

  • Montana
  • Oregon
  • Alaska
  • Delaware
  • New Hampshire

US state abbreviations

Why upgrades get declined

Card declined with the message:

"Your credit card was declined. Try paying with a debit card instead."

Possible reasons:

  • The card genuinely is not supported, e.g. some of Depay's virtual-card BINs are rejected by OpenAI/ChatGPT. Try another virtual card; Depay allows multiple.
  • The network environment was flagged by Stripe's risk control. Retry with a global proxy plus an incognito browser window; try both with and without the proxy.
  • After 3-5 failures even with proxy + incognito + fresh IPs, stop retrying; instead switch to a different ChatGPT account + incognito + a different proxy and subscribe again.

Update 2023-3-24:

VPS大玩家 (a blogger) finds real billing addresses on Google Maps; the method is as follows.

If the account has ever had a failed payment with messages like:

  • Your credit card was declined. Try paying with a debit card instead.
  • Your card has been declined.

then the account is probably permanently unable to subscribe to ChatGPT Plus; payment is very unlikely to succeed, and the only fix is a new account. Likely cause:

  • The account logged into ChatGPT from a flagged IP and was blacklisted by OpenAI.

One user hit exactly this: even after changing IP (via a remote-desktop server), card, and billing address, payment still failed. Registering a new account from a clean IP solved it.

Virtual credit cards

Besides the 531847 virtual-card BIN, the 556766, 556735, 556305, and 558068 BINs can also pay for ChatGPT Plus. Cards can be obtained here.

VPS大玩家's environment for registering and using ChatGPT: a US Windows server accessed over Remote Desktop (tutorial).

Virtual-card charge records:

Upgrading to Plus

Enter the virtual card number, expiry date, CVV, and ZIP code, then the name and billing address below, and click "Set up payment method".

  • Today's virtual cards generally let you set the name and billing address, so they pass AVS verification.
  • A 556305 virtual card was used here, also a US virtual card.
  • Again an Oregon (OR) address was used, to avoid sales tax.
  • Update 2023-3-31:
    • When binding a card, $5 is now pre-authorized (usually released within 7 days, not an actual charge); actual usage is billed at the end of each month.

GPT-4 features

GPT-4 has a usage cap: "GPT-4 currently has a cap of 25 messages every 3 hours."

ChatGPT Plus accounts can select the GPT-4 model.

GPT-4 features (reference):

  • Compared with GPT-3.5, GPT-4 is a new-generation multimodal model: it accepts image input as well as text.

ChatGPT Plus offers both the Default and Legacy models, along with fast, stable AI replies.

What is the difference between default mode and legacy mode in ChatGPT Plus?

  • Default mode is Turbo mode: more lively and engaging, but its answers are more concise, dropping some of the detail legacy mode provides.
  • Legacy mode is better suited to academic writing and research papers; it is less casual than Turbo mode.
  • See the linked comparison for details.

Canceling the Plus subscription

How to cancel auto-renewal for ChatGPT Plus?

  • A Depay card has no credit line; it behaves like a debit card, so in theory as long as you do not top it up, next month's charge simply fails.
  • To be safe, cancel auto-renewal anyway:
  • Open the ChatGPT home page and sign in, then in the bottom-left: My Account → Manage My Subscription → Cancel Plan
