August 25, 2008, 6:28 pm
I have an application which uses PBS to allocate 32 nodes on a Linux
cluster. A Perl script needs to spawn off 100+ completely independent
processes from this job, and it uses fork to do so. The script
came from a shared-memory machine where there was plenty of memory every
process could access. The system it now needs to run on is not a
shared-memory system; each node has its own memory. The problem is
that although 32 nodes have been allocated for this job, each "fork"
spawns off a process on the same node, which therefore runs out of memory.
My question is: is there any option to the "fork" command to tell the
child process to go to a particular node?
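As far as I know there is no such option: fork(2) always creates the child on the same host. A common workaround on PBS clusters is to read the allocated node list from the file named in $PBS_NODEFILE and launch the tasks over ssh yourself. Here is a hedged sketch; it fakes a node file and prints the ssh commands instead of running them, and "worker.pl" is a hypothetical task script, not anything from your setup:

```shell
# Sketch only: round-robin independent tasks over the nodes PBS allocated.
# In a real job, $PBS_NODEFILE is set by PBS; we fake it for illustration.
PBS_NODEFILE=$(mktemp)
printf 'node01\nnode02\nnode03\nnode04\n' > "$PBS_NODEFILE"

NTASKS=8                                  # the real job would use 100+
set -- $(sort -u "$PBS_NODEFILE")         # one positional param per node
NNODES=$#
i=0
while [ "$i" -lt "$NTASKS" ]; do
    n=$(( i % NNODES + 1 ))               # pick the next node round-robin
    eval node=\${$n}
    # In a real job this line would be:  ssh "$node" "perl worker.pl $i" &
    echo "ssh $node perl worker.pl $i"
    i=$(( i + 1 ))
done
rm -f "$PBS_NODEFILE"
```

In the real version you would background the ssh commands and `wait` for them all, so the 100 tasks run concurrently across the 32 nodes rather than piling up on one.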
Re: Perl Fork options
Phosphate Buffered Saline?
And even if it didn't, wouldn't they all be using the same node's CPU
and not spreading out? Or does "PBS" take care of that (but without
taking care of the memory)?
Perl's fork is basically just a wrapper around the OS's fork, at least
if your OS has a fork.
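And because it is just the OS's fork, the child can never land on another machine. A trivial demonstration (a backgrounded subshell is a forked child of the shell):

```shell
# The forked child reports the same hostname as its parent -- fork never
# leaves the host it was called on.
tmp=$(mktemp)
parent=$(hostname)
( hostname > "$tmp" ) &    # the subshell is a fork of this shell
wait
child=$(cat "$tmp")
rm -f "$tmp"
echo "parent=$parent child=$child"
```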
Do you know how you would accomplish what you want in C or something other
than Perl? If you do and you tell us how you would do it, we might be able
to help you translate that into Perl.
Otherwise, the first thing that comes to my mind is to try system rather
than fork. The second thing that comes to my mind is that when I used
a grid with SunGridEngine, I found that the best way was to learn all the
quirks of SGE and then tell SGE I wanted 100 Perl jobs, rather than telling
Perl I wanted 100 Perl jobs. Maybe SGE is so different from "PBS" that this
answer doesn't port.
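For what it's worth, the PBS analogue of that SGE approach is a job array: submit the script once and let the scheduler place each task on whatever node has capacity. This is a hypothetical submission-script fragment (Torque-style `-t` array syntax; PBS Pro uses `-J 0-99` instead), and "worker.pl" is a placeholder name:

```shell
#!/bin/sh
# Submitted with:  qsub -t 0-99 run.pbs
# The scheduler starts 100 copies, each with its own $PBS_ARRAYID,
# spread across the allocated nodes -- no fork needed.
#PBS -l nodes=1
#PBS -N perl-tasks
cd "$PBS_O_WORKDIR"
perl worker.pl "$PBS_ARRAYID"
```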