That's smart. Thanks!

Sincerely,
Ken Young
On Mon, Mar 6, 2023 at 3:43 AM Linux-Fan <ma_sys...@web.de> wrote:
> Ken Young writes:
>
> > Hello,
> >
> > The methods I know:
> >
> > 1. scp
> > pros: the native tool in the OS
> > cons: you either have to type a password or install key pairs on the
> > servers for authentication.
>
> Works for simple cases.
>
> > 2. rsync
> > pros: it can transfer data incrementally
> > cons: you need to set up an rsyncd server and configure authorization
> > correctly.
>
> Works for simple and complex cases.
>
> > 3. ftp/ftps
> > pros: easy to use
> > cons: you need to set up an ftpd server, and the protocol is not very
> > secure.
>
> Whenever possible, I'd prefer 1 or 2 over this.
>
> > 4. rclone
> > pros: easy to use
> > cons: hard to set up (you may need a cloud storage for middleware).
>
> I only use rclone when I want to target a cloud storage. A „cloud storage
> for middleware” does not seem sensible to me when I can copy using
> methods 1 and 2 without such a middleware.
>
> > For me, I most often use scp + rsync. What's your choice?
>
> These are my standard choices, too. In automated scenarios I often prefer
> rsync over scp due to more flexibility in configuration.
>
> My additional tools for special purposes:
>
> 5. lsyncd
> If you need to keep directories in sync continuously, there is a tool
> called `lsyncd` that automates repeated invocation of `rsync` in a smart
> way.
>
> 6. tar + netcat (or tar + ssh in very rare cases)
> Using tar sacrifices all the flexibility of rsync but may attain
> significantly higher performance and does not need a lot of flags to do
> the right thing by default (i.e. preserve everything when acting as
> root). I prefer this variant when migrating to a new disk or PC because
> it seems to be the most efficient variant in a "local trusted network and
> no speedup from incremental copying" scenario.
>
> I documented my approach to this here:
> https://masysma.net/37/data_transfer_netcat_tar.xhtml
>
> HTH and YMMV
> Linux-Fan
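To make point 2 above concrete, a minimal incremental copy over SSH might
look roughly like the sketch below; the host name and paths are placeholders,
not anything from the thread:

    # One-shot incremental copy over SSH (placeholder host and paths).
    # -a preserves permissions, ownership, and timestamps; -v is verbose;
    # -z compresses in transit; --partial keeps partially transferred files
    # so an interrupted run can resume. The trailing slash on the source
    # copies the directory's contents rather than the directory itself.
    rsync -avz --partial --progress /srv/data/ user@backuphost:/srv/data/

Re-running the same command only transfers files that have changed, which is
what makes it attractive for repeated backups.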
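And for point 6, a typical tar-over-netcat pipeline (a generic sketch, not
necessarily identical to what the linked page describes) could look like
this; receiver.lan and port 9999 are placeholders, and the listen syntax
differs between netcat variants (traditional netcat expects -l -p PORT, the
OpenBSD variant just -l PORT):

    # On the receiving machine: listen and unpack into the new location.
    # -x extracts, -p preserves permissions, -C changes into the target dir.
    nc -l -p 9999 | tar -xpf - -C /mnt/newdisk

    # On the sending machine: pack the tree to stdout and stream it over.
    # -c creates an archive, -f - writes it to stdout, -C sets the base dir.
    tar -cf - -C /srv/data . | nc receiver.lan 9999

Nothing here is encrypted or authenticated, so this only makes sense on a
trusted local network, which matches the scenario described above.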