Remote SSH ufsdump


I am able to dump remotely to a tape under Linux via 'env
RSH=/usr/bin/ssh dump 0uf host:/dev/nst0 /' just fine. However, under
Solaris things are different: using the Korn shell, 'env
RSH=/usr/bin/ssh /usr/sbin/ufsdump 0uf host:/dev/nst0 /' gives me
"connection refused", although sshd is running. Can anyone help?

Here is the output from ssh -v:

SSH Version Sun_SSH_1.0, protocol versions 1.5/2.0.
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: ssh_connect: getuid 0 geteuid 0 anon 0
debug1: Connecting to bbox [] port 22.
debug1: Allocated local port 1023.
debug1: Connection established.
debug1: identity file //.ssh/identity type 3
debug1: Bad RSA1 key file //.ssh/id_rsa.
debug1: identity file //.ssh/id_rsa type 3
debug1: identity file //.ssh/id_dsa type 3
debug1: Remote protocol version 1.99, remote software version
debug1: match: OpenSSH_3.5p1 pat ^OpenSSH
Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-Sun_SSH_1.0
debug1: sent kexinit: diffie-hellman-group1-sha1

debug1: channel_free: channel 0: status: The following connections are
  #0 client-session (t4 r0 i8/0 o128/0 fd -1/-1)
debug1: channel_free: channel 0: dettaching channel user
select: Bad file number
debug1: Transferred: stdin 0, stdout 0, stderr 25 bytes in 0.2 seconds
debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 119.3
debug1: Exit status -1

- I couldn't post all the output, so I have included only the beginning
and the end.

Re: Remote SSH ufsdump (Wyndell) writes:

Well, the "linux" dump and the Solaris ufsdump are not the same program,
even though they may have common roots in the distant past. Specifically,
'man dump' on a nearby RedHat box says:

     RSH         Dump uses the contents of this variable to determine the name
                 of the remote shell command to use when doing remote backups
                 (rsh, ssh etc.).  If this variable is not set, rcmd(3) will
                 be used, but only root will be able to do remote backups.

- while there is no mention of such functionality in 'man ufsdump' on
e.g. Solaris 8. So there's no reason to expect it to be available in
ufsdump (it will only use rsh).

You may be able to use something like

 ufsdump 0uf - / | ssh host dd bs=10k of=/dev/nst0

--Per Hedeland

Re: Remote SSH ufsdump (Per Hedeland) writes:

How important is it that the "bs" of the dd(1) be the same as the
blocking size of ufsdump(8)? Or is "bs=10k" that value?

David Magda <dmagda at>, /
Because the innovator has for enemies all those who have done well under
the old conditions, and lukewarm defenders in those who may do well
under the new. -- Niccolo Machiavelli, _The Prince_, Chapter VI

Re: Remote SSH ufsdump


Frankly, with modern tape devices, I'm not sure - and the question is
probably better asked in a different group. In the ancient days of
reel-to-reel tapes, each write() would create a physical block on the
tape, and you had to read it back with read()s of at least the same
size, or you wouldn't get all the data - meaning that for sanity, you
had to make sure all the write()s were the same size (controlled by bs -
or with some versions of dd, you actually had to give both ibs and
obs). Using the same size in dd as the write()s done by [ufs]dump would
then mostly be a matter of not getting a partial last block - and of
being able to read the tape back "directly" with [ufs]restore.

I do think this is less important with current tape devices though, and
that "block size" of the write()s mostly is a performance issue (though
it probably needs to be a multiple of 512 - or maybe even of 1k or 2k).
But it seems to make sense to have the write()s done by dd the same way
as if [ufs]dump had written "directly" (or via rmt(8)) to the tape.
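For what it's worth, the "every write() the same size" effect is easy to
see locally with dd alone. A minimal sketch (tape.img stands in for the
tape device, the filenames are just for illustration, and the record
counts assume GNU dd's stderr format):

```shell
# A stream arriving in awkward 1000-byte chunks, as it would from a
# pipe or network connection; dd's obs= collects it so that every
# write() to the output is a fixed 10 KB - which is what a drive that
# records one physical block per write() needs.
dd if=/dev/zero bs=1000 count=64 2>/dev/null |
    dd obs=10k of=tape.img 2>dd.log

# 64000 bytes -> 6 full 10 KB records plus one 2560-byte partial at EOF.
grep 'records out' dd.log          # prints "6+1 records out"
```

The same reblocking happens whether the output is a file or a real tape
device; only on the tape does the write() size become a physical block.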


Yes, that's the default for Solaris ufsdump.

--Per Hedeland

Re: Remote SSH ufsdump


It's not important, because the pipe/network removes the effect of the
block-sized writes done by the ufsdump program.  All I would recommend
is that you supply the factor in reverse when doing restores:

ssh host "dd bs=10k if=/dev/nst0" | ufsrestore rf -

I like 64k, but YMMV.

Darren Dunham                                 
Unix System Administrator                    Taos - The SysAdmin Company
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >

Re: Remote SSH ufsdump (Per Hedeland) wrote in message

I always used your way, just like passing RSH to env for dumps. By the
way, how should one determine the correct "bs" when using a DDS-2 tape?

Yesterday I got the ufsdump to work, but it has failed every time since
then ;)


Re: Remote SSH ufsdump (Wyndell) writes:

See other followup - I think the bs (or ibs+obs) should match whatever
[ufs]dump is doing, but the latter is controllable by [ufs]dump's 'b'
option, of course. For optimal values for a given tape drive
technology, you're probably better off asking in some generic Unix
sysadmin group.
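The conversion between the 'b' option and a dd block size is just
arithmetic, since 'b' counts 512-byte blocks. A quick sketch (the
numbers below follow from the 512-byte unit; nothing else is assumed):

```shell
# ufsdump's 'b' option counts 512-byte blocks, so a blocking factor
# maps to a dd transfer size by simple multiplication:
b=126                          # e.g. 'ufsdump 0ubf 126 - /'
echo "$((b * 512)) bytes"      # 64512 bytes per record
echo "bs=$((b * 512 / 1024))k" # bs=63k - the matching dd option
```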

I've been using 126 (i.e. 63k) for most everything over the years, since
with modern or semi-modern tape technology that seems to get vastly
better performance than the typical default of 20 / 10k, while still not
overflowing 16-bit counters rumoured to exist in some tape devices etc -
but occasionally I've seen Usenet postings going like "No, that's
totally wrong, for Exabyte you should use 112 due to [long explanation
of how tracks are laid out on the tape etc]". Seems to be a black art to
me.


The above works fine for me on a quick test (dumping from Solaris 8 to a
remote *file*) - if you have problems with ufsdump specifically (i.e.
the ssh part works), again, this is probably not the right group for it.

Actually, to continue with the off-topicness:-), this test made me
realize that I'd misread the ufsdump man page - the default block size
is actually 64 / 32k (on Solaris 8), not 20 / 10k, and the dd should
thus probably use 32k instead (best is probably to use the 'b' option
rather than rely on defaults). Further, the remote 'dd' - on RH7.3 in
this case - treated 'bs=32k' differently than 'ibs=32k obs=32k', since
the former reported
4+93988 records in
4+93988 records out
and the latter
13+93919 records in
31293+0 records out
- i.e. almost all blocks written were "partial" when using just bs=32k
(this is common dd behaviour in my experience, man pages
notwithstanding, but perhaps not universal - I think it's a bug:-).
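The ibs/obs behaviour is easy to reproduce locally. A hedged sketch
(GNU dd assumed, filenames are illustrative): feeding dd a pipe whose
reads come up short, separate ibs/obs values make dd reblock the input
into full-size output records, whereas a lone bs= would write whatever
each read() happened to return:

```shell
# 100 KB delivered in 10 KB chunks, as from a pipe or network stream.
dd if=/dev/zero bs=10k count=10 2>/dev/null |
    dd ibs=32k obs=32k of=out.img 2>dd.log

# With separate ibs/obs, dd collects input into full output blocks:
# 102400 bytes -> 3 full 32 KB records plus one 4 KB partial.
grep 'records out' dd.log          # prints "3+1 records out"
```

Replacing 'ibs=32k obs=32k' with just 'bs=32k' makes the "records out"
count vary with pipe timing, which is the partial-block effect Per's
numbers show.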

--Per Hedeland
