139

My script is called by a server. From the server I receive ID_OF_MESSAGE and TEXT_OF_MESSAGE.

In my script I handle the incoming text and generate a response with the params ANSWER_TO_ID and RESPONSE_MESSAGE.

The problem is that I send my response to the incoming ID_OF_MESSAGE, but the server that sent me the message to handle only marks its message as delivered to me (meaning I can send it a response to that ID) after it receives HTTP response 200.

One solution is to save the message to a database and set up a cron job that runs every minute, but I need to generate the response message immediately.

Is there a way to send the server an HTTP 200 response and then continue executing the PHP script?

Thank you a lot

3
  • This question is similar to: Continue PHP execution after sending HTTP response. If you believe it’s different, please edit the question, make it clear how it’s different and/or how the answers on that question are not helpful for your problem. Commented Jul 22 at 3:51
  • I think this one may be a better target for that one, instead, @YouOldFool Commented Jul 24 at 15:15
  • @TylerH - I suppose it would work either way, but the other question seems clearer to me and is older. Commented Jul 24 at 23:06

15 Answers

240

Yes. You can do this:

ignore_user_abort(true); // not required
set_time_limit(0);

ob_start();
// do initial processing here
echo $response; // send the response
header('Connection: close');
header('Content-Length: ' . ob_get_length());
ob_end_flush();
@ob_flush();
flush();
fastcgi_finish_request(); // required for PHP-FPM (PHP >= 5.3.3)

// now the request is sent to the browser, but the script is still running
// so, you can continue...

die(); // a must, especially if set_time_limit(0) is used and the task has ended

21 Comments

Is it possible to do it with a keep-alive connection?
Awesome answer! The only thing I changed was set_time_limit(0);. You do probably want it to run for longer than the default 30 seconds, but indefinitely could cause problems if it goes into an infinite loop! I have a longer value set in my php.ini file.
is there a reason you use ob_flush after ob_end_flush? I understand the need for the flush function there at the end but I'm not sure why you would need ob_flush with ob_end_flush being called.
Please note that if a content-encoding header is set to anything other than 'none', this example could be rendered useless, as the user would still wait the full execution time (until timeout?). So to be absolutely sure it will work both locally and in the production environment, set the 'Content-Encoding' header to 'none': header("Content-Encoding: none")
Tip: I started using PHP-FPM, so I had to add fastcgi_finish_request() at the end
78

I've seen a lot of responses on here that suggest using ignore_user_abort(true); but this code is not necessary. All it does is ensure your script keeps executing if the user aborts (by closing their browser or pressing Escape to stop the request) before a response is sent. But that's not what you're asking. You're asking to continue execution AFTER a response is sent. All you need is the following:

// Buffer all upcoming output...
ob_start();

// Send your response.
echo "Here be response";

// Get the size of the output.
$size = ob_get_length();

// Disable compression (in case content length is compressed).
header("Content-Encoding: none");

// Set the content length of the response.
header("Content-Length: {$size}");

// Close the connection.
header("Connection: close");

// Flush all output.
ob_end_flush();
@ob_flush();
flush();

// Close current session (if it exists).
if(session_id()) session_write_close();

// Start your background work here.
...

If you're concerned that your background work will take longer than PHP's default script execution time limit, then stick set_time_limit(0); at the top.
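For reference, here is a minimal sketch wrapping those steps into a reusable helper, with set_time_limit(0) included as suggested. The function name, the log file path, and the sleep() are placeholders standing in for your own code:

function respond_then_continue($response)
{
    set_time_limit(0);               // allow the background work to outlive the default limit
    ob_start();
    echo $response;                  // the body the client will receive
    header("Content-Encoding: none");
    header("Content-Length: " . ob_get_length());
    header("Connection: close");
    ob_end_flush();
    @ob_flush();
    flush();
    if (session_id()) session_write_close();
}

respond_then_continue("Here be response");

// Background work continues here, invisible to the client.
sleep(20);
file_put_contents('/tmp/background.log', date('c') . " done\n", FILE_APPEND);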

7 Comments

Tried a LOT of different combinations, THIS is the one that works!!! Thanks Kosta Kontos!!!
Works perfectly on apache 2, php 7.0.32 and ubuntu 16.04! Thanks!
I've tried other solutions, and only this one worked for me. Order of lines is important as well.
I can't for the life of me understand why ob_flush() is required here since it throws a notice PHP Notice: ob_flush(): failed to flush buffer. No buffer to flush in php shell code on line 1. Oddly, this doesn't work without it. I guess it's doing something after all...
@billynoah I think it's not needed. ob_end_flush() does the same as ob_flush(), except that it also closes the buffer (therefore calling ob_flush() after ob_end_flush() produces the warning you came across).
54

If you're using FastCGI processing or PHP-FPM, you can use fastcgi_finish_request. Be sure to call ignore_user_abort(true) first to prevent bug 68772 from prematurely killing your script.

session_write_close(); //close the session
ignore_user_abort(true); //Prevent echo, print, and flush from killing the script
fastcgi_finish_request(); //this returns 200 to the user, and processing continues

// do desired processing ...
$expensiveCalculation = 1 + 1;
error_log($expensiveCalculation);

Building on that, here's a reusable function with a fallback for servers that don't support FastCGI. To ensure compatibility, the fallback case forces at least one character (like a blank space) to be outputted since some non-FastCGI servers will ignore Content-Length: 0 and keep the connection open.

function close_connection_but_continue_processing($output)
{
    if (function_exists('fastcgi_finish_request'))
    {
        echo $output;
        ignore_user_abort(true); //https://bugs.php.net/bug.php?id=68772
        fastcgi_finish_request();
    }
    else
    {
        ob_start();
        echo (empty($output) ? ' ' : $output); //Quirk: At least one character must be outputted for the connection to be closed
        header('Content-Length: ' . ob_get_length());
        header('Connection: close');
        ob_end_flush();
        ob_flush();
        flush();
        session_write_close();
    }
}

close_connection_but_continue_processing('Your request has been received and will now be processed.');
do_long_task();

4 Comments

Thanks for this, after spending a few hours this worked for me in nginx
Thanks DarkNeuron! Great answer for us using php-fpm, just solved my problem!
Brilliant! I just spent hours trying to figure out why our app stopped working on a page where we used output buffering after moving it to a new server and this was the key. Thanks!
Works as it should! I'm sending HTTP 202 Accepted before and then do the computing.
27

I spent a few hours on this issue and came up with this function, which works on Apache and Nginx:

/**
 * respondOK.
 */
protected function respondOK()
{
    // check if fastcgi_finish_request is callable
    if (is_callable('fastcgi_finish_request')) {
        /*
         * This works in Nginx, but the fallback approach below does not
         */
        session_write_close();
        fastcgi_finish_request();

        return;
    }

    ignore_user_abort(true);

    ob_start();
    // note: FILTER_SANITIZE_STRING is deprecated as of PHP 8.1
    $serverProtocole = filter_input(INPUT_SERVER, 'SERVER_PROTOCOL', FILTER_SANITIZE_STRING);
    header($serverProtocole.' 200 OK');
    header('Content-Encoding: none');
    header('Content-Length: '.ob_get_length());
    header('Connection: close');

    ob_end_flush();
    ob_flush();
    flush();
}

You can call this function before your long processing.
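For illustration, a usage sketch; handleWebhook() and processQueue() are hypothetical names, assuming respondOK() lives in the same class:

public function handleWebhook()
{
    // ... validate and store the incoming payload first ...

    $this->respondOK(); // the client gets its 200 here and disconnects

    // The script keeps running, so do the slow part now.
    $this->processQueue();
}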

3 Comments

This is bloody lovely! After trying everything else above this is the only thing that worked with nginx.
Your code is almost exactly the same as the code here but your post is older :) +1
Be careful with the filter_input function; it returns NULL sometimes. See this user contribution for details.
6

Modified @vcampitelli's answer a bit. I don't think you need the Connection: close header; I was seeing duplicate close headers in Chrome.

<?php

ignore_user_abort(true);

ob_start();
echo '{}';
header($_SERVER["SERVER_PROTOCOL"] . " 202 Accepted");
header("Status: 202 Accepted");
header("Content-Type: application/json");
header('Content-Length: '.ob_get_length());
ob_end_flush();
ob_flush();
flush();

sleep(10);

1 Comment

I mentioned this in the original answer, but I'll say it here too. You do not necessarily need to close the connection, but if you don't, the next asset requested on the same connection will be forced to wait. You could deliver the HTML fast, but then one of your JS or CSS files might load slowly, since the connection has to finish receiving the response from PHP before it can fetch the next asset. For that reason, closing the connection is a good idea so the browser doesn't have to wait for it to be freed up.
6

I asked Rasmus Lerdorf about this in April 2012, citing some articles on the technique.

I suggested the development of a new PHP built-in function to notify the platform that no further output (on stdout?) will be generated (such a function might take care of closing the connection). Rasmus Lerdorf responded:

See Gearman. You really really don't want your frontend Web servers doing backend processing like this.

I can see his point, and I support his opinion for some applications and loading scenarios. However, in other scenarios the solutions from vcampitelli et al. are good ones.
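If you do want to follow the Gearman route, a minimal sketch with the PECL gearman extension could look like the following; the job name process_message, the server address, and the payload fields are assumptions for illustration:

// Frontend: hand the work off and return immediately.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('process_message', json_encode([
    'answer_to_id' => $idOfMessage,   // placeholder variables from the question
    'response'     => $responseMessage,
]));
echo 'OK'; // the HTTP response returns right away

// Worker: a separate long-running CLI process.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('process_message', function (GearmanJob $job) {
    $payload = json_decode($job->workload(), true);
    // ... generate and deliver the response message here ...
});
while ($worker->work());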

Comments

5

I can't install pthreads, and none of the previous solutions worked for me. The only solution I found to work is the following (ref: https://stackoverflow.com/a/14469376/1315873):

<?php
ob_end_clean();
header("Connection: close");
ignore_user_abort(); // optional
ob_start();
echo ('Text the user will see');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // Strange behaviour, will not work
flush();        // unless both are called!
session_write_close(); // Added a line suggested in the comment
// Do processing here 
sleep(30);
echo('Text user will never see');
?>

5 Comments

What version of php and what operating system are you using? Maybe something has changed with recent versions of php...
Why do you put the header("Connection: close"); before the output? Everyone seems to have it after. Does it not matter, or is there an actual effect?
@Zackattack, if you're referring to the last echo message, it's only there to check that the script continues its execution even though for the client it has already ended (thanks to the headers). After I made sure the last echo message didn't appear in the browser, I removed the sleep and echo instructions from my script to perform the actual processing. If you want to show something to the user (the real output), you have to insert instructions between ob_start and ob_get_length. The order among the header instructions doesn't matter.
thanks for that. so I can put header("Connection: close"); after ob_start(); as well?
From php documentation on ob_start instruction: "While output buffering is active no output is sent from the script (other than headers)", so it doesn't change anything... I would put it before ob_start() just to not mistakenly think the header is buffered too as the positioning seems to imply. But it will work anyway even after ob_start() if you prefer.
3

I use the php function register_shutdown_function for this.

void register_shutdown_function ( callable $callback [, mixed $parameter [, mixed $... ]] )

http://php.net/manual/en/function.register-shutdown-function.php

Edit: The above does not work. It seems I was misled by some old documentation; the behaviour of register_shutdown_function changed in PHP 4.1.

3 Comments

This is not what is being asked for - this function basically just extends the script termination event and is still part of the output buffer.
Found it worth an upvote, cause it shows what does not work.
Ditto - upvoting because I was expecting to see this as an answer, and useful to see that it doesn't work.
3

I have something that can compress and send the response, then let other PHP code execute.

function sendResponse($response){
    $contentencoding = 'none';
    // discard any existing output buffers before sending the response
    if(ob_get_contents()){
        ob_end_clean();
        if(ob_get_contents()){
            ob_clean();
        }
    }
    header('Connection: close');
    header("cache-control: must-revalidate");
    header('Vary: Accept-Encoding');
    header('content-type: application/json; charset=utf-8');
    ob_start();
    // GZIP_ENABLED and $GLOBALS['useragent'] are assumed to be defined elsewhere in the application
    if(phpversion()>='4.0.4pl1' && extension_loaded('zlib') && GZIP_ENABLED==1 && !empty($_SERVER["HTTP_ACCEPT_ENCODING"]) && (strpos($_SERVER["HTTP_ACCEPT_ENCODING"], 'gzip') !== false) && (strstr($GLOBALS['useragent'],'compatible') || strstr($GLOBALS['useragent'],'Gecko'))){
        $contentencoding = 'gzip';
        ob_start('ob_gzhandler');
    }
    header('Content-Encoding: '.$contentencoding);
    if (!empty($_GET['callback'])){
        echo $_GET['callback'].'('.$response.')';
    } else {
        echo $response;
    }
    if($contentencoding == 'gzip') {
        if(ob_get_contents()){
            ob_end_flush(); // Flush the output from ob_gzhandler
        }
    }
    header('Content-Length: '.ob_get_length());
    // flush all output
    if (ob_get_contents()){
        ob_end_flush(); // Flush the outer ob_start()
        if(ob_get_contents()){
            ob_flush();
        }
        flush();
    }
    if (session_id()) session_write_close();
}

Comments

2

There is another approach, and it's worth considering if you don't want to tamper with the response headers. If you start a thread in another process, the calling function won't wait for its result and will return to the browser with a finalized HTTP code. You will need to configure pthreads.

class continue_processing_thread extends Thread 
{
     public $param1;

     public function __construct($param1)
     {
         $this->param1 = $param1;
     }

     public function run() 
     {
        //Do your long running process here
     }
}

//This is your function called via an HTTP GET/POST etc
function rest_endpoint()
{
  //do whatever stuff needed by the response.

  //Create and start your thread. 
  //rest_endpoint wont wait for this to complete.
  $continue_processing = new continue_processing_thread($some_value);
  $continue_processing->start();

  echo json_encode($response);
}

Once we execute $continue_processing->start(), PHP won't wait for the return result of this thread, so as far as rest_endpoint is concerned, it is done.

See the pthreads documentation for more help with setup.

Good luck.

Comments

2

If you read the response with PHP's file_get_contents (or a raw socket), the Connection: close header alone is not enough; PHP still waits for the EOF sent by the server.

My solution is to read the 'Content-Length:' header instead.

Here is a sample:

response.php:

<?php

ignore_user_abort(true);
set_time_limit(500);

ob_start();
echo 'ok'."\n";
header('Connection: close');
header('Content-Length: '.ob_get_length());
ob_end_flush();
ob_flush();
flush();
sleep(30);

Note the "\n" in response to close line, if not the fget read while wait eof.

read.php :

<?php
$vars = array(
    'hello' => 'world'
);
$content = http_build_query($vars);

// Open the connection first (the hostname and port here are placeholders)
$fp = fsockopen('localhost', 80, $errno, $errstr, 30);

fwrite($fp, "POST /response.php HTTP/1.1\r\n");
fwrite($fp, "Host: localhost\r\n");
fwrite($fp, "Content-Type: application/x-www-form-urlencoded\r\n");
fwrite($fp, "Content-Length: " . strlen($content) . "\r\n");
fwrite($fp, "Connection: close\r\n");
fwrite($fp, "\r\n");

fwrite($fp, $content);

$iSize = null;
$bHeaderEnd = false;
$sResponse = '';
do {
    $sTmp = fgets($fp, 1024);
    $iPos = strpos($sTmp, 'Content-Length: ');
    if ($iPos !== false) {
        $iSize = (int) substr($sTmp, strlen('Content-Length: '));
    }
    if ($bHeaderEnd) {
        $sResponse.= $sTmp;
    }
    if (strlen(trim($sTmp)) == 0) {
        $bHeaderEnd = true;
    }
} while (!feof($fp) && (is_null($iSize) || !is_null($iSize) && strlen($sResponse) < $iSize));
$result = trim($sResponse);

As you can see, this script doesn't wait for EOF once the content length is reached.

Hope it helps.

Comments

0

I have an important addition to all the other great answers!
TL;DR:
add

echo str_repeat(' ', 1024);



In my use case I would like to mark the API call as "Accepted" and not make the client wait for the processing to finish.

In theory the client should stop waiting for an answer when it receives the "Connection: close" header, but in practice at least MY PHP did not send those headers yet (tested with two different PHP servers, and with both a browser and the Insomnia client).

There is some special behaviour where flush() will not send the first content unless at least a certain number of bytes has been echoed (in my case 1024 bytes). (Probably a workaround for leading or trailing whitespace in PHP files, which is treated like an echo statement and might otherwise prevent later header() statements from taking effect.)

To solve the problem, one can send 1024 leading whitespace characters, which should be ignored by JSON/XML/HTML parsers.

So the full code looks like this:

ob_start();
echo str_repeat(' ', 1024);
echo $response; // send the response
header('Connection: close');
header('Content-Length: '.ob_get_length());
ob_end_flush();
@ob_flush();
flush();

(Here is some backup for my arguments; I can't find the correct source right now: How to flush output after each `echo` call?)

Comments

0

You can use register_shutdown_function and fastcgi_finish_request together for this.

As belatedly discovered by @martii, register_shutdown_function by itself does not solve the problem since output is not complete until after any registered shutdown functions have completed execution.

However, you can call fastcgi_finish_request() as the first action within a shutdown function, which then releases the client from purgatory.

This can be particularly useful, for example, if you are using a CMS like WordPress or Joomla, where you may not have convenient access to the end of request processing and therefore cannot simply place the fastcgi_finish_request() call there yourself.

An example of a mail spool for dispatching emails (which can otherwise significantly increase request processing and wait time for the client) after the client response is issued:

<?php
class MailSpool
{
  private static $mails = [];
  private static $enqueued = false;

  public static function addMail($subject, $to, $message)
  {
    if (!self::$enqueued) {
      // Register the shutdown function on the first message queueing request
      register_shutdown_function('MailSpool::send');
      self::$enqueued = true;
    } 
    self::$mails[] = [ 'subject' => $subject, 'to' => $to, 'message' => $message ];
  }

  public static function send() 
  {
    fastcgi_finish_request();
    foreach(self::$mails as $mail) {
      mail($mail['to'], $mail['subject'], $mail['message']);
    }
  }
}

// Call this from anywhere in your code
MailSpool::addMail('Hello', '[email protected]', 'Hello from the spool');

Comments

0

Elevating @JLH's response to @vcampitelli's answer into a proper answer.

From PHP docs: https://www.php.net/manual/en/function.fastcgi-finish-request.php

All you need to do, barring bugs or specific scenarios, is

session_write_close();
fastcgi_finish_request();

The first line tells PHP that no more session variables will be written, so the session lock can be released.

The second line flushes all output data and headers to the client and finishes the request, while the script continues running.
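Putting it in context, a minimal sketch assuming PHP-FPM (where fastcgi_finish_request() is available); the JSON payload and the background task are placeholders:

<?php

// Send the response body first.
echo json_encode(['status' => 'accepted']);

// Release the session lock and finish the request; the client stops waiting here.
session_write_close();
fastcgi_finish_request();

// Anything below still runs in the same PHP-FPM process, invisible to the client.
sleep(15); // stand-in for the real background work
error_log('background processing finished');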

Comments

-1

In addition to the other answers: I returned a JSON string as the response and discovered that the response was being truncated for an unknown reason. The fix was adding extra padding spaces:

echo $json_response;
//the fix
echo str_repeat(' ', 10);

Comments
