OpenAI ChatGPT (GPT-3.5) API error: "InvalidRequestError: Unrecognized request argument supplied: messages"
2 Answers
Viewed 4,967 times
2023-03-02
IMPORTANT
For anyone with the same problem: see @Rok Benko's answer. The gpt-3.5 getting-started guide has just been updated. This is the code it now shows, which works perfectly:
import openai
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
At the time this question was asked, the documentation showed code for the GPT-3 Completions endpoint:
openai.Completion.create()
Question
I'm currently trying to use OpenAI's latest model, gpt-3.5-turbo, following a very basic tutorial.
I'm working in a Google Colab notebook. I have to make a request for each prompt in a list of prompts, which for simplicity looks like this:
prompts = ['What are your functionalities?', 'what is the best name for an ice-cream shop?', 'who won the premier league last year?']
I defined a function to do this:
import openai
# Load your API key from an environment variable or secret management service
openai.api_key = 'my_API'
def get_response(prompts: list, model="gpt-3.5-turbo"):
    responses = []
    restart_sequence = "\n"
    for item in prompts:
        response = openai.Completion.create(
            model=model,
            messages=[{"role": "user", "content": item}],
            temperature=0,
            max_tokens=20,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0
        )
        responses.append(response['choices'][0]['message']['content'])
    return responses
However, when I call responses = get_response(prompts=prompts[0:3]), I get the following error:
InvalidRequestError: Unrecognized request argument supplied: messages
Any suggestions?
Edit:
Replacing the messages parameter with prompt leads to the following error:
InvalidRequestError: [{'role': 'user', 'content': 'What are your functionalities?'}] is valid under each of {'type': 'array', 'minItems': 1, 'items': {'oneOf': [{'type': 'integer'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}]}, 'example': '[1, 1313, 451, {"buffer": "abcdefgh", "shape": [1024], "dtype": "float16"}]'}, {'type': 'array', 'minItems': 1, 'maxItems': 2048, 'items': {'oneOf': [{'type': 'string'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}], 'default': '', 'example': 'This is a test.', 'nullable': False}} - 'prompt'
messages is not the correct argument. I guess you need prompt: []
- 0stone0 2023-03-02
@0stone0 messages is the parameter given in the documentation. However, implementing your solution leads to another error message (see the latest edit).
- corvusMidnight 2023-03-02
But prompt just needs to be your question: prompt: item
- 0stone0 2023-03-02
@0stone0 That leads to a different error, which I believe is related to the model (your solution would work with, e.g., a davinci model): InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
- corvusMidnight 2023-03-02
OK, I wrote some code myself and can't reproduce your problem. It works fine here.
- 0stone0 2023-03-02
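To summarize the comment thread: the two endpoints expect differently shaped payloads, and gpt-3.5-turbo only works with the chat one. A minimal sketch contrasting the two shapes (the helper names here are made up for illustration):

```python
def chat_payload(question, model="gpt-3.5-turbo"):
    # The chat endpoint (v1/chat/completions) takes `messages`:
    # a list of {"role": ..., "content": ...} dicts.
    return {"model": model, "messages": [{"role": "user", "content": question}]}

def completion_payload(question, model="text-davinci-003"):
    # The legacy completions endpoint (v1/completions) takes `prompt`:
    # a plain string, and needs a completions-capable model, not gpt-3.5-turbo.
    return {"model": model, "prompt": question}

p1 = chat_payload("What are your functionalities?")
p2 = completion_payload("What are your functionalities?")
```

Mixing the shapes produces exactly the two errors seen above: messages on v1/completions is "unrecognized", and a chat model on v1/completions is "not supported".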
2 Answers
#1
Accepted
Score: 10
You're using the wrong function to get the completion.
Python
Use this ↓
completion = openai.ChatCompletion.create()
...not this:
completion = openai.Completion.create()
Working example
If you run test.py, the OpenAI API will return the following completion:
Hello there! How can I assist you today?
test.py
import openai
import os
openai.api_key = os.getenv('OPENAI_API_KEY')
completion = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {'role': 'user', 'content': 'Hello!'}
    ],
    temperature=0
)

print(completion['choices'][0]['message']['content'])
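Applying the same fix to the question's get_response() loop, a minimal sketch (assuming the legacy openai<1.0 Python SDK used throughout this thread) might look like:

```python
def get_response(prompts: list, model="gpt-3.5-turbo"):
    # Same loop as in the question, but calling the chat endpoint
    # (ChatCompletion, not Completion) with a messages list per prompt.
    import openai  # legacy openai<1.0 SDK, as in the answers above
    responses = []
    for item in prompts:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": item}],
            temperature=0,
            max_tokens=20,
        )
        responses.append(response["choices"][0]["message"]["content"])
    return responses
```

The call site from the question, responses = get_response(prompts=prompts[0:3]), stays unchanged.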
NodeJS
Use this ↓
const completion = await openai.createChatCompletion()
...not this:
const completion = await openai.createCompletion()
Working example
If you run test.js, the OpenAI API will return the following completion:
Hello there! How can I assist you today?
test.js
const { Configuration, OpenAIApi } = require('openai');
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

async function getCompletionFromOpenAI() {
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'user', content: 'Hello!' }
    ],
    temperature: 0,
  });

  console.log(completion.data.choices[0].message.content);
}

getCompletionFromOpenAI();
This. I think OpenAI's naming practices are a bit confusing. Why would the intro example show this:
import openai openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who won the world series in 2020?"}, {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, {"role": "user", "content": "Where was it played?"} ] )
- corvusMidnight 2023-03-03
I agree, it's a bit confusing. I guess they copy-pasted this example from the documentation.
- Rok Benko 2023-03-03
This is Python, right? What's the NodeJS equivalent? I have const completion = await openai.createCompletion({....}), but I get the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
- matteo 2023-03-05
#2
Score: 2
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {"role": "user", "content": "What is openAI?"}
    ],
    max_tokens=193,
    temperature=0,
)

print(response)
print(response["choices"][0]["message"]["content"])
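For reference, the second print above indexes into a nested JSON structure. A sketch of the (abridged) shape returned by the chat endpoint, with made-up values for illustration:

```python
# Abridged shape of a chat completion response (values are illustrative).
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "OpenAI is an AI research company."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20},
}

# Same indexing as in the answer above: first choice, then its message content.
content = sample_response["choices"][0]["message"]["content"]
```

Note that the chat endpoint nests the text under message.content, whereas the legacy completions endpoint returns it directly under choices[0].text.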
Nice, this also works with openai==0.27.0.
- ABarrier 2023-03-07