What I have noticed is that it reads a lot of information from
~/.local/share/ubuntuone/syncdaemon/fsm/...., which seems to take a few
minutes (~8 min at 2-4% CPU). Then it shifts over to do a local rescan of my
files, which uses 98-100% CPU for a minute or two. In total it takes about
10 minutes for 15,500 files, ~600 MB.

The annoying part is that after loading the fsm data it uses ~36 MB of
memory, but as soon as it finishes the local rescan it jumps to a staggering
~83 MB.

The weird part is that if you stop the syncdaemon with u1sdtool -q and then
start it again with u1sdtool --start, it only takes a few seconds to load the
data from the fsm directory and reach the 25-30 MB memory usage. Why on earth
does it take 8 minutes when it is done after a reboot, while still only using
~4% CPU for it? The local file scan takes approximately the same amount of
time either way and brings the memory usage up to ~80 MB.
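
For reference, anybody who wants to reproduce the slow start can simply walk
the fsm directory and read every file; comparing the first run after a reboot
with a second run should show whether the 8 minutes is really spent on the
reads themselves. A rough sketch (the path is the one on my machine, adjust
as needed):

    import os
    import time

    FSM_DIR = os.path.expanduser("~/.local/share/ubuntuone/syncdaemon/fsm")

    def read_all(root):
        """Read every file under root, return (file count, total bytes)."""
        count = 0
        total = 0
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                with open(os.path.join(dirpath, name), "rb") as f:
                    total += len(f.read())
                count += 1
        return count, total

    start = time.time()
    count, total = read_all(FSM_DIR)
    print("read %d files (%.1f MB) in %.1f s"
          % (count, total / 1e6, time.time() - start))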

It seems unreasonable that a piece of synchronisation software should eat up
~80 MB of memory at all times. I can only fear that it is keeping hashes of
all my files in some internal dictionaries. That would make the memory usage
scale roughly linearly with the number of files I'm synchronising.
I can't imagine any valid reason for keeping such information in memory at
all times; it should be fetched when needed.
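
Just to illustrate that fear, here is a back-of-envelope sketch (the field
names and sizes are pure guesses on my part, not taken from the syncdaemon
source) of what a per-file dictionary entry with a couple of hashes costs in
memory for my 15,500 files:

    import hashlib
    import sys
    import uuid

    NUM_FILES = 15500  # roughly my file count

    entries = {}
    for i in range(NUM_FILES):
        # made-up paths and metadata, only the scaling matters here
        path = "/home/user/Ubuntu One/folder%d/file%d.txt" % (i % 50, i)
        entries[path] = {
            "local_hash": hashlib.sha1(path.encode()).hexdigest(),
            "server_hash": hashlib.sha1(path.encode()).hexdigest(),
            "node_id": str(uuid.uuid4()),
        }

    # sys.getsizeof only counts the outer dict, not the strings inside,
    # so the real footprint per entry is considerably larger than this.
    print("outer dict alone: %.1f MB for %d entries"
          % (sys.getsizeof(entries) / 1e6, len(entries)))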

@Nicola Jelmorini, have you seen how much memory your syncdaemon is
using?


// "Here be rant"

Digging a bit more into this fsm folder reveals that it contains 57,000
files totalling 50 MB. Without insight into the design of syncdaemon, or what
exactly these files are for (besides what I can read in them: filename, local
and server hashes), this just screams of something that was not thought
through.
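
For anyone who wants to check their own installation, this sketch counts the
files and compares their apparent size with what they actually occupy on
disk, since each tiny file still takes up at least one filesystem block
(again, the path is simply what it is on my machine):

    import os

    FSM_DIR = os.path.expanduser("~/.local/share/ubuntuone/syncdaemon/fsm")

    files = 0
    apparent = 0
    allocated = 0
    for dirpath, dirnames, filenames in os.walk(FSM_DIR):
        for name in filenames:
            st = os.lstat(os.path.join(dirpath, name))
            files += 1
            apparent += st.st_size
            allocated += st.st_blocks * 512  # st_blocks is in 512-byte units

    print("%d files, %.1f MB of data, %.1f MB allocated on disk"
          % (files, apparent / 1e6, allocated / 1e6))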


The same goes for the creation of all the metadata on S3 before any upload
of content is started.

With this creation taking ~2 seconds for every "meta-command" sent and
executed, physical storage will have become so cheap that I could buy, build
and run my own data center before syncdaemon is finished with all my
files/folders. Apparently, stopping synchronisation of a folder can be done
as a bulk command, issuing a single "meta-command" instead of a delete for
every subfolder and file. So it seems like a waste of everyone's time that
the creation is not done with some sort of bulk command as well.
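
To put numbers on it (the ~2 seconds per meta-command is what I observe; the
batch size is just a made-up example, nothing I know about the protocol):

    # one meta-command per file/folder, sent sequentially, versus
    # a hypothetical bulk command covering many nodes at once
    NUM_NODES = 15500          # files and folders to create on the server
    SECONDS_PER_COMMAND = 2.0  # observed latency per meta-command
    BATCH_SIZE = 100           # hypothetical nodes per bulk command

    sequential_hours = NUM_NODES * SECONDS_PER_COMMAND / 3600.0
    batched_hours = (NUM_NODES / float(BATCH_SIZE)) * SECONDS_PER_COMMAND / 3600.0

    print("one command per node: %.1f hours" % sequential_hours)
    print("bulk commands of %d nodes: %.1f hours" % (BATCH_SIZE, batched_hours))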

At the very least, files in local folders that have already been "created"
on the servers should start uploading instead of waiting until all the
metadata has been sent.

I'm not even going into detail about the slow transfer of small files, which
almost seems as if a new connection is made for every file (hopefully it is
not), while large files don't have any problems and can easily be transferred
at 2-2.5 MB/s.
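
I have no idea whether syncdaemon really reconnects per file, but just to
show what per-file connection setup would cost, here is a sketch that
measures the fixed TCP+TLS handshake time against an arbitrary HTTPS host:

    import socket
    import ssl
    import time

    HOST, PORT = "one.ubuntu.com", 443  # any reachable HTTPS host will do
    N = 20

    ctx = ssl.create_default_context()
    start = time.time()
    for _ in range(N):
        # full TCP + TLS handshake, then close: the fixed per-connection
        # cost that would be paid for every file if no connection is reused
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                pass
    elapsed = time.time() - start
    print("%d handshakes took %.2f s (%.0f ms each)"
          % (N, elapsed, 1000.0 * elapsed / N))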


It is incredible that a piece of software that is nearly 1.5 years old still
contains so many teething problems. I'm not thinking about visual stuff here
but actual problems that either make it an unpleasant experience or almost
prohibit daily use, for example having to clean up lots of u1conflict files
created by a faulty syncdaemon, or your editor constantly nagging about files
being changed on disk.

Sadly, according to other bug reports it seems as if most of these issues
have already been fixed, but the fixes never seem to get any further than
being merged into trunk.

I think this is a great initiative and it will eventually be a great
service; however, at the moment I pity those who pay for the service and
don't get proper value for their money.

-- 
ubuntuone-syncdaemon check all the files every time
https://bugs.launchpad.net/bugs/668666