OK, it is the text dump, uncompressed. It looks like the new maintenance
plan does a rebuild index and then a reorganize index. Of course I do not
know a lot about SQL, but when I watch the rsync run, most of the data is
transferred in the last 25% of the run. I know the data is the same in
there, but maybe now it
When you say an SQL database dump do you mean you are rsyncing the raw
text file or are you compressing it? If you are compressing it you
might want to try gzip with the --rsyncable patch as it can
significantly improve rsync's ability to delta-xfer c
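The suggestion above can be sketched as a pre-compression step. This is a minimal sketch under assumptions: the file names are placeholders, and it requires gzip 1.7+ (or an older gzip carrying the rsyncable patch):

```shell
# --rsyncable resets gzip's compression state periodically, so a local
# change in the input only perturbs nearby compressed blocks and rsync's
# delta transfer can still skip the unchanged remainder.
dump=$(mktemp)                          # stands in for the real SQL dump
echo "example dump contents" > "$dump"
gzip --rsyncable -c "$dump" > "$dump.gz"
# Then transfer the compressed file as before, e.g.:
#   rsync -av --partial db_dump.sql.gz backuphost:/backups/
```

Without --rsyncable, a one-byte change early in the input reshuffles the entire compressed stream after it, which is why rsync ends up re-sending most of a freshly compressed dump.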
I have been using rsync to sync an SQL database dump (127 GB in size) on a
server (Cygwin to Linux) for many years now without a problem, until they
put a maintenance plan in place to reindex the database once a week. This
is all done over a 2 Mbit link; it normally took 3-4 hrs, but now I
noticed it was running the next d
On 6/15/06, Wayne Davison <[EMAIL PROTECTED]> wrote:
On Thu, Jun 15, 2006 at 11:50:52AM -0400, Surer Dink wrote:
> Is this because I am using -H and --delete? What is the per-file
> overhead when both of these options are used?
It depends on what version of rsync you're using. Older versions w
On Thu, Jun 15, 2006 at 11:50:52AM -0400, Surer Dink wrote:
> Is this because I am using -H and --delete? What is the per-file
> overhead when both of these options are used?
It depends on what version of rsync you're using. Older versions would
allocate an additional file list for --delete, and
All,
I have read the lists, I have read the FAQ - yes, rsync uses a lot
of memory. That said - I need to use rsync in an environment with a
lot of files, and I need to use the -H and --delete options. The FAQ says
about 100 bytes per file - at the moment I am looking at 870
files, which should
The company I work for uses rsync for backups from client computers,
the memory usage is a problem for a lot of them since they're already
busy doing other important things (databases, web serving, etc).
From the FAQ:
out of memory
The usual reason for "out of memory" when running rsync is t
[...]
> 1) Free: break your rsync's into several executions rather than one huge
> one. Do several sub-directory trees, each separately. If your data
> files are not organized in such a way that they can easily be divided
> into a reasonable number of sub-directory trees, consider re-orga
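A minimal sketch of the "several executions" advice above (SRC and DEST are placeholders; the options mirror the -H/--delete usage discussed in this thread):

```shell
# One rsync invocation per top-level subtree: each run builds and holds a
# file list for only that subtree, bounding peak memory, instead of one
# list covering the whole tree.
SRC=/data
DEST=backuphost:/backup/data
for dir in "$SRC"/*/; do
    name=$(basename "$dir")
    rsync -aH --delete "$dir" "$DEST/$name/"
done
```

The trade-off: hard links (-H) are only preserved within each subtree, and --delete will not notice files removed at the top level of SRC itself, so the split boundaries need to respect how the data is linked.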
On Wed, Jul 06, 2005 at 01:13:23AM -0500, Matthew S. Hallacy wrote:
> Unfortunately there's no indication of who needs a spare week of
> coding time, or how much a week would cost.
That's a really old comment, so I'm not sure if it was written by Martin
Pool or Dave Dykstra or someone else. I'm
On Wed 06 Jul 2005, David Favro wrote:
>
> 1) Free: break your rsync's into several executions rather than one huge
> one. Do several sub-directory trees, each separately. If your data
[...]
> 2) Cheap: buy more swap space. These days random-access magnetic
[...]
> 4) Expensive: buy more solid-
Hi, Matthew --
Regarding your message of 05-Jul-2005 concerning rsync memory usage
(sorry that I am not directly replying to it; I am not as yet subscribed
to the list and my mailer doesn't allow me to hard-code an In-Reply-To
or References header):
While I applaud anyone who wants to enco
"There are some who call me Tim?"
Lutz Pressler <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
07/17/2002 03:27 AM
To: <[EMAIL PROTECTED]>
cc: (bcc: Tim Conway/LMT/SC/PHILIPS)
Subject: rsync memory usage
Classificat
Hi,
we are using rsync to mirror large trees (>> 50 GB, >> 2 million files)
offsite (rsync -e ssh with a forced ssh command on the remote side).
The main problem occurring is memory usage (especially as the remote
system has no swap space configured, for security reasons):
It seems that the rsync proces
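At the FAQ's rough figure of about 100 bytes of file-list memory per file (an estimate, not an exact cost), the tree size above implies a back-of-envelope sketch like:

```shell
# ~100 bytes of file-list state per file; for 2,000,000 files:
echo "$(( 2000000 * 100 / 1024 / 1024 )) MiB"   # prints "190 MiB"
```

On a remote side with no swap configured, a couple of hundred MiB of file-list state alone can be enough to trigger the out-of-memory failures discussed in this thread.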
Yes, that sounds like a pretty good plan for (say) rsync 3.0. We all
seem to be more or less on the same track as to how the protocol
should look.
Here are my feelings about the way to get there. I would be happy to
have holes picked in them:
* rsync 2.x works well, but is too crufty to be a
A recent email from Phil Howard prompted me to think about getting rsync
to use less memory for its file list. Here's an early idea on how to
modify the protocol to not generate the file list entirely in advance.
Please feel free to poke holes in this if I'm going astray.
I envision abbreviating
> " " == Cameron Simpson <[EMAIL PROTECTED]> writes:
> The other day I was moving a lot of data from one spot to
> another. About 12G in several 2G files. Anyway, I interrupted
> the transfer because I'd chosen a fairly slow way of doing it,
> and wanted to pick up where
Cameron Simpson [[EMAIL PROTECTED]] writes:
>| Cameron> The other day I was moving a lot of data from one spot to
>| Cameron> another. About 12G in several 2G files. [...]
>| Cameron> so I used rsync so that its checksumming could speed past
>| Cameron> the partially copied file.
| Cameron> The other day I was moving a lot of data from one spot to
| Cameron> another. About 12G in several 2G files. [...]
| Cameron> so I used rsync so that its checksumming could speed past
| Cameron> the partially copied file. It spent a long time
| Cameron> transferring
> "Cameron" == Cameron Simpson <[EMAIL PROTECTED]> writes:
Cameron> The other day I was moving a lot of data from one spot to
Cameron> another. About 12G in several 2G files. Anyway, I
Cameron> interrupted the transfer because I'd chosen a fairly slow
Cameron> way of doing it
The other day I was moving a lot of data from one spot to another.
About 12G in several 2G files. Anyway, I interrupted the transfer
because I'd chosen a fairly slow way of doing it, and wanted to pick up
where I left off so I used rsync so that its checksumming could speed
past the partially cop
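A sketch of that resume pattern (host and paths are placeholders; note that for remote transfers rsync's delta algorithm is active by default, while purely local copies use --whole-file unless told otherwise):

```shell
# Keep partially transferred files so a rerun can pick up where it left off.
rsync -av --partial /big/files/ desthost:/big/files/
# If interrupted, rerun the same command: the rolling checksums match the
# blocks already present in the partial file, and only the tail is re-sent.
```

This is what makes rsync attractive for restarting large copies: the cost of the second run is dominated by checksumming, not by re-transferring data that already arrived.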