defining server roles

I'm writing some system management software, and need to check for a
system's role. It has to run within a lab environment, where a machine
might be primarily a workstation (of several possible sorts--development,
graphics, GIS, whatever), but also a backup server (again, of several
possible sorts), etc. Or it could be part of a compile farm, a basic office
machine, etc.

This is mostly about a) separating system update package repositories, b)
maintaining a central systems audit report that is never more than 12-24
hours old, and c) keeping systems stripped as much as possible.

Ideally, there would be some sort of standard for system role definitions,
for interoperability between different Linux distros, and the BSDs. I've
been unable to locate anything. Is anyone aware of any standard? If not, do
you know of anything that's widely used in any proprietary software, such
as OpenView?
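
For illustration, a role-definition file could be as simple as INI. Here
is a minimal Python sketch of a reader; the /etc/sysroles.conf path, the
role names, and the keys are all invented for illustration, since no
standard exists that I know of:

    # roles.py -- sketch of a distro-neutral role definition reader.
    import configparser

    def load_roles(path="/etc/sysroles.conf"):
        """Return {role: {key: value}} from an INI-style file like:

        [backup-web-server]
        repos = stable, security
        forbid = gcc, make
        """
        cp = configparser.ConfigParser()
        cp.read(path)  # a missing file just yields no sections
        return {name: dict(cp[name]) for name in cp.sections()}

    if __name__ == "__main__":
        for role, attrs in load_roles().items():
            print(role, attrs)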

This would seem an important first step on the road toward solving harder
problems. For instance, a backup Web server in a departmental network with
strict security requirements, facing departments with less strict
requirements, and maintained entirely via binary packages, is better off
without a compiler. Depending on your PPF (Precise Paranoia Factor), that
might mean that such a backup Web server should never be provisioned as a
development workstation.
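
To make that concrete, a role-aware check for forbidden packages might
look like the sketch below. It assumes a Debian-style system
(dpkg-query), and the role-to-forbidden-packages table is invented:

    # forbid_check.py -- flag packages a role should not have installed.
    import subprocess

    # Invented policy table: role name -> packages forbidden for that role.
    FORBIDDEN = {
        "backup-web-server": {"gcc", "g++", "make", "build-essential"},
    }

    def installed_packages():
        """Set of installed package names, via dpkg-query (Debian-style)."""
        out = subprocess.run(
            ["dpkg-query", "-W", "-f", "${Package}\n"],
            capture_output=True, text=True, check=True,
        ).stdout
        return set(out.split())

    def violations(role):
        return FORBIDDEN.get(role, set()) & installed_packages()

    if __name__ == "__main__":
        for pkg in sorted(violations("backup-web-server")):
            print("forbidden package installed:", pkg)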


YAN Security Guy

Re: defining server roles

Greg Metcalfe wrote:
I'm not aware of anything built-in, but then there's not a well-defined
line separating workstation from server as there is in the MS world
(mostly involving thousands of pounds/dollars and Client Access
Licences). A workstation may well run a web server, DNS cache and/or
other types of server software. A server might have a number of user
applications installed to be run remotely. You say yourself that your
machines run many different combinations of software. Surely the only
reliable guide to the package update requirement is the set of packages
which actually exist on the machine?
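
For example, a snapshot of that set, tagged with host and time, is only a
few lines of Python. This sketch assumes dpkg-query; rpm -qa would be the
analogue on RPM systems:

    # audit_snapshot.py -- emit the machine's actual package set as one
    # JSON record, e.g. for a central audit report.
    import json, socket, subprocess, time

    def package_list():
        out = subprocess.run(
            ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
            capture_output=True, text=True, check=True,
        ).stdout
        return sorted(out.strip().splitlines())

    if __name__ == "__main__":
        print(json.dumps({
            "host": socket.gethostname(),
            "taken": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "packages": package_list(),
        }, indent=2))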

I think that keeping development tools off an at-risk machine is one of
those things that doesn't do any harm, but doesn't really achieve much
either. If the Bad Guys are in a position to use your machine to flood
the world with spam, the need to upload gcc etc. is not going to prove
much of an obstacle to them.

I'd say this is one of those times when Open Source hasn't yet seen a
need, and you are free to be the one who meets it. I have a feeling that
few Debian admins will see the end of the road for apt-get, though.

Re: defining server roles

Joe wrote:

I completely agree that there's no well-defined separation between
workstations and servers. I also agree that the only reliable guide to
package update requirements is the set of packages on the machine.

In the first case, what I'd like to do is build a tool that can establish
that separation, or at least report what's actually out there. For
environments where policies exist, either use might simplify an admin's
job.

In the second case I see a problem, in that a machine can be rolled out
with vastly more packages installed than its role requires. If you judge
only by the current package list, that bloated package set is then
maintained indefinitely rather than removed. This is non-optimal: it
makes the machine harder to keep updated, among other things. I'm sure
most admins have seen machines that seem to have been provisioned from a
CD set with an 'install everything' option checked, all sorts of services
running, etc.
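
A diff against a per-role baseline would flag exactly that case. A rough
sketch, assuming a Debian-style system and a baseline file of one package
name per line (a format I'm inventing here):

    # baseline_diff.py -- list installed packages absent from the role's
    # baseline, i.e. candidates for removal.
    import subprocess, sys

    def installed():
        out = subprocess.run(
            ["dpkg-query", "-W", "-f", "${Package}\n"],
            capture_output=True, text=True, check=True,
        ).stdout
        return set(out.split())

    def baseline(path):
        """Invented format: plain text, one package name per line."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: baseline_diff.py BASELINE_FILE")
        for pkg in sorted(installed() - baseline(sys.argv[1])):
            print("not in role baseline:", pkg)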

Driving policy onto systems is hard, and I think this could be a useful
tool in the kit if you need to do that. If you don't, the tool can at
least serve as a sanity check.

On the compiler point: it's still common for attack scripts to download
and compile code. Not many Bad Guys seem to be building packages! I'm a
big believer in layered defenses.
This was only an example of why the tool might be found useful. Possibly
not the best, but still valid, IMHO. I can think of others, such as a
simplified means of writing SHA1 hashes and static binaries to R/O media
that's targeted to specific systems. That can be very useful for rapid
incident response.
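
For instance, the hashing half of that could be as small as this sketch;
the default directory list is just an example, and the output mimics
sha1sum(1):

    # hash_binaries.py -- print SHA-1 hashes of binaries under the given
    # directories, ready to write to read-only media alongside static
    # copies of the binaries themselves.
    import hashlib, os, sys

    def sha1sum(path, bufsize=65536):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        for top in sys.argv[1:] or ["/bin", "/sbin"]:
            for root, _dirs, files in os.walk(top):
                for name in files:
                    path = os.path.join(root, name)
                    if not os.path.isfile(path):
                        continue  # skip broken symlinks etc.
                    try:
                        print(sha1sum(path), path)
                    except OSError:
                        pass  # unreadable file; skip it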

I'll toss this one back. As a theoretical problem, can you think of another
example where such a tool might be useful? If you can, that might add a
scenario that I'd need to ensure the tool handles gracefully.


I was hoping someone else had seen the need too. I was under the
impression that OpenView had functionality similar to this. Perhaps not.
Sigh. Working to an existing standard is a lot simpler than trying to
create something new, and creating something new is a huge opportunity to
make a mess of it.

BTW, the tool would need to use native package management and reporting
tools as much as possible. The much loved apt-get is popular for extremely
good reasons.
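
For example, a thin dispatcher over the native tools might start out like
this sketch. Detecting the package manager by which binary exists is
simplistic, and purely illustrative:

    # native_list.py -- list installed packages using whichever native
    # tool is present.
    import shutil, subprocess

    LISTERS = [
        ("dpkg-query", ["dpkg-query", "-W", "-f", "${Package}\n"]),  # Debian
        ("rpm",        ["rpm", "-qa", "--qf", "%{NAME}\n"]),         # RPM world
        ("pkg_info",   ["pkg_info"]),                                # some BSDs
    ]

    def native_package_list():
        for binary, cmd in LISTERS:
            if shutil.which(binary):
                return subprocess.run(
                    cmd, capture_output=True, text=True, check=True
                ).stdout.splitlines()
        raise RuntimeError("no known package manager found")

    if __name__ == "__main__":
        print("\n".join(sorted(native_package_list())))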
