Recommendation for efficiently fetching headers from many URLs?


I have a MySQL database which includes several thousand links to pages on
external sites.  Naturally, over time some of those links go away, so I would
like to create a script that reads through the URL fields, accesses each link,
and performs an action based on each response code.

I can think of at least three ways to accomplish the header fetch -- http_head,
get_headers, and curl -- but I want to choose the most efficient option that
won't suck up more server resources & bandwidth than is absolutely necessary.  I
imagine that I'll also need to throttle the number of requests per second, since
this low-priority task should interfere as little as possible with other
connections in & out of the server.
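For concreteness, here is a minimal sketch of the throttling I have in mind: walk the URL list in small batches and pause between batches. `process_in_batches` and `$check_url` are just placeholder names; `$check_url` would be whichever header-fetching method turns out to be best.

```php
<?php
// Sketch only: process URLs in small batches with a pause between
// batches, so this low-priority job yields bandwidth to other traffic.
// $check_url is a callback standing in for the actual header fetch.
function process_in_batches(array $urls, $check_url, $batch_size = 5, $pause_seconds = 1)
{
    $results = array();
    foreach (array_chunk($urls, $batch_size) as $batch) {
        foreach ($batch as $url) {
            // Record whatever the fetch method reports (e.g. a status code).
            $results[$url] = call_user_func($check_url, $url);
        }
        // Throttle: rest between batches before hitting more hosts.
        sleep($pause_seconds);
    }
    return $results;
}
```

The batch size and pause would need tuning against real traffic; the structure is the point, not the numbers.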

Does anyone have a recommendation about which method is most efficient?

Thanks for any and all advice.

Re: Recommendation for efficiently fetching headers from many URLs?


If you just need the headers, use cURL with the little-known "HEAD" request
instead of a "GET" request. The server then returns only the status line and
headers, with no body, so you pay for almost no bandwidth per link.

Iván Sánchez Ortega -ivansanchez-algarroba-escomposlinux-punto-org-

There may still be two superpowers on the planet: the United States and
world public opinion.
                       -- Patrick Tyler
