October 12, 2006, 11:48 am
I have a script that takes a list of websites, analyses them and shows some results.
The problem I'm having is that some sites seem to work perfectly while
others don't. I know it's to do with the complexity of the site, but
I've no idea how to fix it in my code.
At the moment I am just using file_get_contents() to get all of the
relevant pages, but when I use this on http://www.dontstayin.com it
just doesn't work!
When I go there with my browser and view the source it's all lovely
HTML, but when I try to grab it, file_get_contents() returns nothing useful.
What am I doing wrong, and how can I better emulate an actual browser?
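One thing worth trying (a sketch, not tested against that particular site) is sending a browser-like User-Agent header via a stream context, since some servers refuse requests carrying PHP's default agent string. The User-Agent value below is just an example:

```php
<?php
// Some servers block requests without a browser-like User-Agent.
// The UA string here is an example; substitute a real browser's string.
$context = stream_context_create([
    'http' => [
        'header'          => "User-Agent: Mozilla/5.0 (compatible; MyGrabber/1.0)\r\n",
        'follow_location' => 1, // follow redirects such as a 302
    ],
]);

$html = file_get_contents('http://www.dontstayin.com/', false, $context);
if ($html === false) {
    echo "fetch failed\n";
}
?>
```

If this works, it suggests the site is filtering on the User-Agent rather than on anything to do with the page's complexity.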
Re: http fopen problem
curl returns this as its source:
HTTP/1.1 302 Found
Date: Thu, 12 Oct 2006 14:04:54 GMT
<h2>Object moved to <a href="/pages/home">here</a>.</h2>
Here's how to get that to a text file using curl:
$ch = curl_init("http://www.dontstayin.com/"); // note: no stray space in the URL
$fp = fopen("dontstayin_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);   // write the response to $fp
curl_setopt($ch, CURLOPT_HEADER, 1);   // include headers in the output
curl_exec($ch);
curl_close($ch);
fclose($fp);
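Since the site answers with a 302 pointing at /pages/home, you'll probably also want curl to follow the redirect and hand you the final page. A minimal sketch (the User-Agent string is an example, not required by cURL):

```php
<?php
// Follow the 302 to its Location target and return the body as a string.
$ch = curl_init("http://www.dontstayin.com/");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // chase Location: headers
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return body instead of printing it
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (compatible; MyGrabber/1.0)"); // example UA
$html = curl_exec($ch);
curl_close($ch);
?>
```

With CURLOPT_RETURNTRANSFER set, $html holds the page source (or false on failure), so you can feed it straight into your analysis code instead of going through a temporary file.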