
I work with the OpenAI API. I have extracted the text of each slide from a PowerPoint presentation and written a prompt for each slide. Now I want to make asynchronous API calls so that all the slides are processed at the same time.

This is the code from my async main function:

tasks = []
for prompt in prompted_slides_text:
    task = asyncio.create_task(api_manager.generate_answer(prompt))
    tasks.append(task)
results = await asyncio.gather(*tasks)

and this is the generate_answer function:

@staticmethod
async def generate_answer(prompt):
    """
    Send a prompt to OpenAI API and get the answer.
    :param prompt: the prompt to send.
    :return: the answer.
    """
    completion = await openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return completion.choices[0].message.content

the problem is:

object OpenAIObject can't be used in 'await' expression

and I don't know how to await the response in the generate_answer function.

Would appreciate any help!


3 Answers


For those landing here, the error was most likely in the instantiation of the client object. It has to be:

client = AsyncOpenAI(api_key=api_key)

Then you can use:

response = await client.chat.completions.create(
    model="gpt-4",
    messages=custom_prompt,
    temperature=0.9
)
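
To tie this back to the question, here is a minimal end-to-end sketch of the same idea (assuming openai>=1.0, an OPENAI_API_KEY environment variable, and example prompts standing in for the question's prompted_slides_text):

import asyncio
import os
from openai import AsyncOpenAI

# Placeholder prompts; in the question these come from the extracted slide text.
prompted_slides_text = ["Summarize slide 1 ...", "Summarize slide 2 ..."]

client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def generate_answer(prompt):
    # One request per slide prompt; the coroutine suspends while waiting for the API.
    completion = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

async def main():
    tasks = [asyncio.create_task(generate_answer(p)) for p in prompted_slides_text]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())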

1 Comment

Can this be handled somehow by LangChain? For me, I still have issues with LangChain only being able to handle openai<1.0.

Note: With version 1 of the library the API has changed and this answer is no longer valid; see Graciela's answer for the new API.


You have to use openai.ChatCompletion.acreate to use the API asynchronously.

It's documented on their GitHub: https://github.com/openai/openai-python#async-usage
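
For anyone still pinned to the old library, a minimal sketch of what this looks like (valid only for openai<1.0; the prompts and the environment-variable key are stand-ins):

import asyncio
import os
import openai  # this sketch assumes openai<1.0

openai.api_key = os.environ["OPENAI_API_KEY"]

async def generate_answer(prompt):
    # acreate is the async counterpart of ChatCompletion.create in openai<1.0
    completion = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return completion.choices[0].message.content

async def main():
    prompts = ["Summarize slide 1 ...", "Summarize slide 2 ..."]  # placeholder prompts
    return await asyncio.gather(*(generate_answer(p) for p in prompts))

results = asyncio.run(main())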

5 Comments

The API has a limit of 3 requests per minute, and my program crashes because of it. Do you know how I can overcome this issue?
@DanielG I had written another answer that limits concurrent requests - stackoverflow.com/a/67831742/3007402. See if that helps. You will find more answers at stackoverflow.com/q/48483348/3007402. (A semaphore-plus-backoff sketch also follows these comments.)
Still it does not work.
@Brana What specifically doesn't work? Can you share your code? You may want to ask a new question.
I think this used to be correct but now the answer by Graciela is correct.
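
For the rate-limit issue raised in the comments above, one common pattern (not taken from any of the answers here) is to combine a semaphore with retries and exponential backoff. A minimal sketch, assuming the v1 client and an OPENAI_API_KEY environment variable:

import asyncio
import os
from openai import AsyncOpenAI, RateLimitError

client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
sem = asyncio.Semaphore(3)  # keep at most 3 requests in flight

async def generate_answer(prompt, max_retries=5):
    async with sem:
        for attempt in range(max_retries):
            try:
                completion = await client.chat.completions.create(
                    model="gpt-3.5-turbo",
                    messages=[{"role": "user", "content": prompt}],
                )
                return completion.choices[0].message.content
            except RateLimitError:
                # back off exponentially before retrying: 1s, 2s, 4s, ...
                await asyncio.sleep(2 ** attempt)
        raise RuntimeError("still rate limited after retries")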

I haven't found any good examples of AsyncOpenAI calls for openai==1.46.0, so even though Graciela's answer is correct, I feel it would be useful to cover the end-to-end process of an asynchronous API call. Hope it helps someone!

import asyncio
from asyncio import Semaphore
import os
import openai

# The API key is assumed to be available in the environment.
async_client = openai.AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

messages_batch = [[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'What is the capital of Canada?'},
]]

model = 'gpt-4o-mini'
sem = Semaphore(5)  # cap the number of concurrent requests

async def get_completion(messages):
    async with sem:
        response = await async_client.chat.completions.create(
            messages=messages,
            model=model
        )
        return response.choices[0].message.content

async def process_messages_batch(messages_batch):
    tasks = [asyncio.create_task(get_completion(messages)) for messages in messages_batch]
    results = await asyncio.gather(*tasks)
    return results

# In a script, run the coroutine with asyncio.run(); in a notebook that already has a
# running event loop you can simply `await process_messages_batch(messages_batch)`.
results = asyncio.run(process_messages_batch(messages_batch))

Note that with the newer versions of the library the way to access the response content is now response.choices[0].message.content, not response.choices[0]['message']['content'].

1 Comment

A question about the asyncio processing of each message list in messages_batch: does the first message (the one with the system role) actually return a result? Additionally, is each result for each message (user input) independent of the others, i.e. is there no state, so that no previous message influences the result of another message?
