- Posted by Mark Holden on April 14, 2006, 4:50 pm
I want to use ssh to make backups of large directories from a local server to a remote server, as in the following command executed on the local server.
I.e., I don't want to first create the tar file on the local system and then use scp to copy it over, as there may not be enough disk space on the local system to hold the tar file. However, I cannot find any way, for the above style of transfer over ssh, to do something like the "-l" option of scp to limit the amount of bandwidth used during the transfer.
Does anybody know of a way?
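The command itself is not preserved in the post, but given the thread title ("stdin-to-dd"), it was presumably a pipeline along these lines. The hostname and paths below are placeholders, not from the original post:

```shell
# Stream the archive over ssh so no local temporary tar file is needed;
# dd on the remote side writes stdin to a file.
# "backuphost" and both paths are hypothetical examples.
tar cf - /data/big-directory | ssh backuphost 'dd of=/backups/big-directory.tar'
```

Neither tar, ssh, nor dd in this pipeline offers an scp-style rate limit, which is the problem the poster describes.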
Re: openssh: Limiting bandwidth on ssh (stdin-to-dd)
I strongly urge you to look into using "rsync" and actually mirror the
directory itself, not the tarball. That gets you bandwidth limitation, and
vastly improves the speed of the transmission by sending only changes, not
identical files. It can even be incorporated with "rsnapshot", which can
create multiple snapshots of the filesystem on the server, hard-linked
together to save space.
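A minimal sketch of that suggestion, assuming a remote host and paths (both hypothetical here); rsync's `--bwlimit` takes its value in KB/s:

```shell
# Mirror the directory itself with a bandwidth cap, instead of shipping a tarball.
# -a preserves permissions and timestamps, -z compresses data in transit,
# --bwlimit=500 caps the transfer at roughly 500 KB/s.
# "backuphost" and the paths are placeholders.
rsync -az --bwlimit=500 /data/big-directory/ backuphost:/backups/big-directory/
```

On repeat runs rsync transfers only files that changed, which is where the speed advantage over re-sending a full tarball comes from.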