--- Update at the bottom; it's related to CURLOPT_COOKIE ---
I'm developing on my local machine (192.168.1.103), and I have a PHP script that makes a cURL call to get the headers and the content returned by a remote script.
I've installed two copies of the remote script that returns the content:
- One on my local machine, under the same virtual host (http://192.168.1.103/test/output_script.php).
- One on a remote server (http://site.com/text/outputscript.php).
The cURL script works perfectly when I fetch the content from the remote server, but it times out completely when fetching the content from the local server.
The verbose output of the PHP cURL call is:
* About to connect() to 192.168.1.103 port 80 (#0)
* Trying 192.168.1.103... * connected
* Connected to 192.168.1.103 (192.168.1.103) port 80 (#0)
> GET /app/getContent HTTP/1.1
Host: 192.168.1.103
Accept: */*
Cookie: PHPSESSID=u8spbervheh3tcrv62gcnc2j72
* Operation timed out after 5001 milliseconds with 0 bytes received
* Closing connection #0
Note that the URI is rewritten with the following .htaccess file (in both locations):
RewriteEngine on
RewriteBase /cms/client1/public_html
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule .* index.php [L]
Also note that I've activated the rewrite log and compared requests to make sure that the mod_rewrite behaviour was exactly the same in every situation. (I'm 100% sure it's not a rewrite problem.)
If I try to get the file using the curl command-line tool under Ubuntu, it works fine:
$ curl -v --cookie PHPSESSID=u8spbervheh3tcrv62gcnc2j72 http://192.168.1.103/app/getContent
* About to connect() to 192.168.1.103 port 80 (#0)
* Trying 192.168.1.103... connected
* Connected to 192.168.1.103 (192.168.1.103) port 80 (#0)
> GET /app/getContent HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: 192.168.1.103
> Accept: */*
> Cookie: PHPSESSID=u8spbervheh3tcrv62gcnc2j72
>
< HTTP/1.1 403 Forbidden
< Date: Thu, 24 Feb 2011 21:40:17 GMT
< Server: Apache/2.2.16 (Ubuntu)
< X-Powered-By: PHP/5.3.3-1ubuntu9.3
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Vary: Accept-Encoding
< Content-Length: 82
< Content-Type: text/html; charset=UTF-8
<
* Connection #0 to host 192.168.1.103 left intact
* Closing connection #0
WT_AUTH not defined. (strictly no authentication currently in session)
The 403 error and the WT_AUTH content are what I expect to receive, instead of the timeout that I get with PHP.
It's also the same (wanted and correct) result that I receive if I use PHP cURL against the remote server:
* About to connect() to site.com port 80 (#0)
* Trying 123.123.123.123... * connected
* Connected to site.com (123.123.123.123) port 80 (#0)
> GET /app/getContent HTTP/1.1
Host: site.com
Accept: */*
Cookie: PHPSESSID=u8spbervheh3tcrv62gcnc2j72
< HTTP/1.1 403 Forbidden
< Date: Thu, 24 Feb 2011 21:45:30 GMT
< Server: Apache/2.2.16 (Debian) DAV/2 SVN/1.6.12 mod_fcgid/2.3.6
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Content-Length: 28
< Content-Type: text/html; charset=UTF-8
<
* Connection #0 to host site.com left intact
* Closing connection #0
And I also get the same thing if I access 192.168.1.103/app/getContent directly in my browser.
Finally, I've also made sure that the getContent script works by putting logs in it. The weird part is that if I start the request at 16:45:00 and the timeout occurs at 16:45:05, the logged data from the getContent script is dated 16:45:05. So it's as if cURL were keeping the connection in an "opening" state, and only once the connection is closed is the PHP script allowed to start.
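As an illustration of that check: a timestamp written at the very top of the getContent script, roughly like the sketch below (the log path is an assumption for illustration, not the actual one):
// Hypothetical logging at the start of getContent, to record when the script
// actually begins executing (the path is made up for this sketch).
file_put_contents('/tmp/getcontent_start.log', date('Y/m/d H:i:s') . " getContent started\n", FILE_APPEND);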
Any idea why it doesn't work locally?
In case you want to take a look at the PHP code, here's the pertinent part:
$ressource = curl_init();
curl_setopt($ressource, CURLOPT_URL, $destinationUrl);

// Log the verbose output to a debug file.
curl_setopt($ressource, CURLOPT_VERBOSE, true);
$handle = fopen(FRAMEWORK_ROOT . DIRECTORY_SEPARATOR . 'log' . DIRECTORY_SEPARATOR . 'curl_debug.txt', 'w');
curl_setopt($ressource, CURLOPT_STDERR, $handle);

// Turn off the server and peer verification (TrustManager concept).
curl_setopt($ressource, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ressource, CURLOPT_SSL_VERIFYHOST, false);

curl_setopt($ressource, CURLOPT_RETURNTRANSFER, true); // return the content
curl_setopt($ressource, CURLOPT_HEADER, true);         // get the HTTP headers

// Forward the current session cookie to the called script.
curl_setopt($ressource, CURLOPT_COOKIE, session_name() . '=' . session_id());
curl_setopt($ressource, CURLOPT_TIMEOUT, 5);

echo "\n<br />" . date('Y/m/d H:i:s');
$httpResponse = curl_exec($ressource);
echo "\n<br />" . date('Y/m/d H:i:s');

if (curl_errno($ressource) != 0) {
    // Throws on 192.168.1.103, but not on the remote site.
    throw new Core_Exc_Def(curl_error($ressource));
}
Funny fact: before adding the TIMEOUT, the loading was infinite. The local site wasn't responding, not even on other pages, and I had to restart the Apache server to be able to access the site again...
Update:
If I comment out this line:
curl_setopt($ressource, CURLOPT_COOKIE, session_name() . '=' . session_id());
It's "working" (it cause another problem, but nothing related to the timeout). Both script are on the same virtual host, and share the same session, but that should not create a CURL TimeOut ?!