
I am trying to fetch data from a CSV file via the OpenAI API, but I am getting a "token exceeded" error. I have tried both GPT-3.5 Turbo and all versions of GPT-4 to check this.

Is there any change I need to make to the prompt, or should I use some other method to reduce the token count? I have also set the maximum tokens to 2500. I am using Swagger to test my endpoint requests.


[Route("AskQuestionCsv")]
public async Task<IActionResult> AskQuestionCsv([FromBody] string question)
{
    if (string.IsNullOrWhiteSpace(extractedCsvText))
    {
        return BadRequest(new { Message = "No CSV content available. Please upload a CSV file first." });
    }

    if (string.IsNullOrWhiteSpace(question))
    {
        return BadRequest(new { Message = "Question cannot be empty." });
    }

    try
    {
        var openai = new OpenAIAPI("API_KEY");
        var chatRequest = new ChatRequest
        {
            Model = "gpt-4", 
            Temperature = 0.7,
            MaxTokens = 25000,
            Messages = new List<ChatMessage>
            {
                new ChatMessage
                {
                    Role = ChatMessageRole.System,
                    Content = "You are a helpful assistant."
                },
                new ChatMessage
                {
                    Role = ChatMessageRole.User,
                    Content = $"Based on the following text from the CSV file, answer the question.\n\nCSV Text:\n{extractedCsvText}\n\nQuestion: {question}"
                }
            }
        };

        var chatResponse = await openai.Chat.CreateChatCompletionAsync(chatRequest);
        var answer = chatResponse.Choices.FirstOrDefault()?.Message.Content.Trim();

        return Ok(new { Question = question, Answer = answer });
    }
    catch (Exception ex)
    {
        return StatusCode(500, new { Message = "ERROR: " + ex.Message });
    }
}

Error:

Error at chat/completions (https://api.openai.com/v1/chat/completions) with HTTP status code: TooManyRequests. Content:

{
  "error": {
    "message": "Request too large for gpt-4 in organization org-uedxqeR1FzNcdHx3MOuawI9d on tokens per min (TPM): Limit 10000, Requested 6668686. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.",
    "type": "tokens",
    "param": null,
    "code": "rate_limit_exceeded"
  }
}

2 Answers


It says in your error message:

Limit 10000, Requested 6668686

You probably need to reduce the size of your CSV text before sending it.
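
For example, a minimal sketch of that idea, assuming a rough ~4 characters-per-token heuristic (the TruncateToTokenBudget helper and the budget value are made up for illustration, not part of your code):

// Rough sketch: cap the CSV text at an approximate token budget before building the prompt.
// The 4-characters-per-token ratio is only a heuristic; exact counts need a real tokenizer.
private static string TruncateToTokenBudget(string text, int maxTokens)
{
    const int ApproxCharsPerToken = 4;              // rough average for English text
    int maxChars = maxTokens * ApproxCharsPerToken;

    return text.Length <= maxChars
        ? text
        : text.Substring(0, maxChars);              // keep only the first part of the CSV
}

// Usage: build the prompt from a truncated copy instead of the full text, e.g.
// var csvForPrompt = TruncateToTokenBudget(extractedCsvText, 6000);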


7 Comments

Is there any other way around this?
Other models, like GPT-3.5 Turbo, have bigger limits, but even those don't fit a message that big: platform.openai.com/settings/organization/limits
Yes, I have checked the documentation. So what can we do in this scenario? How should I trim the data? Do I have to trim it manually, or will writing code for it work?
It depends on what data you have in your CSV file. You can write code to send only 10,000 / 90,000 tokens on each request, but it might not give you the answers you need. Ultimately, GPT won't handle a file as huge as this one; it can only process a small part of it.
Well, I have split the file into small chunks of 10,000 each and am handling it that way, but no luck; I am facing the same token-exceeded error (see the sketch after these comments): ERROR: "Rate limit reached for gpt-3.5-turbo in organization org-uedxqeR1FzNcdHx3MOuawI9d on tokens per min (TPM): Limit 60000, Used 58030, Requested 2557."
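
Regarding the last comment: that second error is a tokens-per-minute (TPM) limit, not a per-request size limit, so the chunks also have to be spread out over time. A minimal sketch of that idea, reusing the client calls from the question (the chunk size, delay, and MaxTokens values are illustrative assumptions):

// Sketch: send one request per chunk and throttle between requests so the
// total tokens sent in any one minute stay under the account's TPM limit.
var answers = new List<string>();
int chunkChars = 10000 * 4;                          // ~10,000 tokens at ~4 chars/token (rough heuristic)

for (int offset = 0; offset < extractedCsvText.Length; offset += chunkChars)
{
    string chunk = extractedCsvText.Substring(
        offset, Math.Min(chunkChars, extractedCsvText.Length - offset));

    var chunkRequest = new ChatRequest
    {
        Model = "gpt-3.5-turbo",
        MaxTokens = 500,
        Messages = new List<ChatMessage>
        {
            new ChatMessage { Role = ChatMessageRole.System, Content = "You are a helpful assistant." },
            new ChatMessage
            {
                Role = ChatMessageRole.User,
                Content = $"Based on this part of the CSV, answer the question.\n\nCSV chunk:\n{chunk}\n\nQuestion: {question}"
            }
        }
    };

    var chunkResponse = await openai.Chat.CreateChatCompletionAsync(chunkRequest);
    answers.Add(chunkResponse.Choices.FirstOrDefault()?.Message.Content ?? string.Empty);

    await Task.Delay(TimeSpan.FromSeconds(15));      // crude throttle so chunks don't all land in the same minute
}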

Your deployment has a limited number of tokens per minute that you can send to and receive from the different models.

The underlying problem is that you are probably sending too much data in the prompt.

You can try using a RAG (retrieval-augmented generation) technique instead to provide the LLM with your data. This should also reduce the number of tokens used on each request.
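
A very rough sketch of the retrieval idea, without embeddings (the SelectRelevantRows helper below is hypothetical and only keeps rows that share words with the question; a real RAG setup would use embeddings and a vector store, but the goal is the same: send only the relevant rows instead of the whole file):

// Hypothetical retrieval step: filter the CSV down to rows related to the question
// before building the prompt. Requires System.Linq.
private static string SelectRelevantRows(string csvText, string question, int maxRows = 50)
{
    var questionWords = new HashSet<string>(
        question.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                .Select(w => w.ToLowerInvariant()));

    var rows = csvText.Split('\n');

    var relevant = rows
        .Skip(1)                                                          // skip the header row, kept separately below
        .Where(row => questionWords.Any(w => row.ToLowerInvariant().Contains(w)))
        .Take(maxRows);

    return rows[0] + "\n" + string.Join("\n", relevant);                  // header + matching rows only
}

// Usage: pass SelectRelevantRows(extractedCsvText, question) into the prompt
// instead of the full extractedCsvText.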

It also seems that you set MaxTokens = 25000 and not 2500.

2 Comments

Well, I slowed down the requests by adding a sleep/delay, and yes, I reduced the max tokens. Thanks for that.
How large are the documents (string length) that you are providing in the prompts?
