Hello Steffen,

First, thank you very much for your quick comments!
I am sorry that I couldn't convince you.

> The BOINC client itself does about nothing. How much IO is required just
> depends on the scientific application.

I also suspected that the amount of I/O may depend on the project and not on BOINC directly. But BOINC manages the projects, and I can only file bugs against boinc, so I hope that is acceptable.

Thanks for your '$ du' data, but I wish to underline that the point is the amount of disk I/O, not the footprint on the disk. 'du' will not show it, because the same data can be overwritten many times without du ever noticing. (FYI, from memory boinc had a footprint of about 100 MB on my disk; it is deleted now, maybe I should have kept it...)

What does your disk I/O amount look like?

$ iostat
(to get it you may need to run '# aptitude install sysstat')

> I could suggest to upstream to have the individual projects announce their
> estimate for max disk space and disk I/O per task.

- Yes, announcing the amount of disk writes would be a good idea.
- What do you think of this (if it is easy to do): "install by default in /home/boinc (create a specific user). This will increase the chances of landing on an HDD and not on an SSD." I assume people normally do not put their home on an SSD (and people who have a laptop probably won't run BOINC anyway).

> This would be a wishlist item, then.

I set the severity to 'important' because the writes have a major effect on an SSD, which for me makes boinc unusable. "4 important = a bug which has a major effect on the usability of a package, without rendering it completely unusable to everyone." My point was to warn other users about the heavy writes involved, so that they do not wear out their SSD too quickly. I don't think this belongs in 'wishlist', because nobody would see it there. I hope you can come to share my opinion.

If you are not yet convinced by your own iostat (then you are a lucky guy... or I must be very unlucky), here are two data points:

1. SMART attributes from my SSD:

   198 Total Read Sectors    1054749553
   199 Total Write Sectors   2404159500
   200 Total Read Command      18727160
   201 Total Write Command     18137047
   208 avg Erase                     485
   209 remaining life                 91

   As you can see, Total Write >> Total Read. To reach such totals, I assume the writes are not tied to a single project but to many projects (a small per-process sketch further below shows how the writes could be attributed next time).

2. BOINC start/stop impact on iostat:

   See start-stop-boinc-impact.txt (reproduced below). It is not a very long test, but it gives some idea. The command below also showed high write activity while BOINC was running. I have no records of it, but just let it run for a while and watch: from time to time, bursts of several tens of MB are written.

   $ iostat -p /dev/sda 2

> You had not mentioned the project you were joining.

Sorry, I didn't record which projects were running (two at the same time, dual core) when I tested.
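By the way, to attribute the writes to the boinc client and to each science application next time, here is a minimal sketch, under a few assumptions on my side: it only matches processes whose command line contains "boinc" (the science applications may need their own names added to the filter), it needs a kernel with per-task I/O accounting, and it must be run as root or as the user running boinc so that /proc/<pid>/io is readable.

#!/usr/bin/env python3
# Sum the write_bytes counter from /proc/<pid>/io for every process whose
# command line mentions "boinc". The "boinc" match is only a guess; add
# the science applications' own names if they do not contain it.
import glob

total = 0
for io_path in glob.glob('/proc/[0-9]*/io'):
    proc_dir = io_path[:-3]          # strip the trailing "/io"
    try:
        with open(proc_dir + '/cmdline', 'rb') as f:
            cmdline = f.read().replace(b'\0', b' ').decode('utf-8', 'replace')
        if 'boinc' not in cmdline.lower():
            continue
        with open(io_path) as f:
            for line in f:
                if line.startswith('write_bytes:'):
                    total += int(line.split()[1])
    except (OSError, ValueError):
        pass                          # process exited meanwhile, skip it

print('write_bytes of boinc-related processes: %.1f MB' % (total / 1e6))

Sampling this just before and just after a work unit (or once per hour over a day) would show which application is responsible for the bulk of the writes.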
Looking at http://www.worldcommunitygrid.org, the selected projects are:

 - The Clean Energy Project - Phase 2
 - Help Conquer Cancer
 - Human Proteome Folding - Phase 2
 - FightAIDS@Home
 - Discovering Dengue Drugs - Together - Phase 2
 - The Clean Energy Project
 - Discovering Dengue Drugs - Together

These are the last tasks I ran:

X0000119570355201005271402_ 0-- In Progress 7/30/11 19:27:51 8/6/11 19:27:51 0.00 0.0 / 0.0
X0000119570353201005271402_ 0-- In Progress 7/30/11 19:27:51 8/6/11 19:27:51 0.00 0.0 / 0.0
X0000119570311201005271403_ 0-- In Progress 7/30/11 19:27:27 8/6/11 19:27:27 0.00 0.0 / 0.0
E202814_ 445_ C.28.C18H6N6OS2Se.00479563.4.set1d06_ 0-- In Progress 7/30/11 19:27:27 8/9/11 19:27:27 0.00 0.0 / 0.0
or465_ 00006_ 10-- User Aborted 7/30/11 19:24:04 7/30/11 19:27:27 0.00 0.0 / 0.0
X0000119571294201005131342_ 1-- User Aborted 7/30/11 19:24:03 7/30/11 19:24:37 0.00 0.0 / 0.0
X0000119571297201005131342_ 1-- User Aborted 7/30/11 19:23:45 7/30/11 19:27:27 0.01 0.4 / 0.0
E202814_ 977_ C.27.C19H8N4S3Se.00538161.3.set1d06_ 0-- User Aborted 7/30/11 19:23:42 7/30/11 19:27:27 0.01 0.3 / 0.0
faah23578_ ZINC00631422_ x1HHPxtl_ 03_ 1-- User Aborted 7/30/11 13:14:36 7/30/11 16:39:55 0.00 0.1 / 0.0
E202811_ 452_ C.27.C24H14N2O.00075148.3.set1d06_ 0-- User Aborted 7/30/11 11:23:29 7/30/11 16:39:55 0.11 3.5 / 0.0
X0000119500648201005260958_ 1-- Valid 7/30/11 10:57:43 7/30/11 16:39:55 1.04 31.4 / 24.2
X0000119500657201005260958_ 1-- Valid 7/30/11 10:57:42 7/30/11 13:14:35 1.21 36.3 / 27.1
faah23575_ ZINC00871887_ x1HHPxtl_ 01_ 0-- User Aborted 7/30/11 09:38:52 7/30/11 16:39:55 3.24 97.6 / 0.0
X0000119481202201004281058_ 1-- Valid 7/30/11 08:05:42 7/30/11 13:14:35 1.07 32.1 / 29.8

(Note: I aborted some of them during my tests.)

I hope the above helps to narrow down the problem (or to discover that most projects write a lot).

Best regards,
Franck
Treated output from: uptime >> test.txt && iostat -p /dev/sda -t | tee -a test.txt

--- boinc not started ---

17:49:02 up 26 min, 3 users, load average: 0.00, 0.02, 0.05
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.64      357.56        3.11    560737      4875
sda1     8.11      355.38        3.11    557309      4871
sda2     0.42        1.77        0.00      2772         4

17:49:21 up 26 min, 3 users, load average: 0.00, 0.02, 0.05
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.54      353.34        3.08    560737      4887
sda1     8.02      351.18        3.08    557309      4883
sda2     0.42        1.75        0.00      2772         4

--- boinc was just started around 17:52:21 ---

17:51:42 up 28 min, 5 users, load average: 0.00, 0.01, 0.05
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      7.87      324.58        2.93    560881      5067
sda1     7.39      322.60        2.93    557453      5063
sda2     0.38        1.60        0.00      2772         4

17:52:21 up 29 min, 5 users, load average: 0.00, 0.01, 0.05
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      7.70      317.43        2.88    560881      5087
sda1     7.23      315.49        2.88    557453      5083
sda2     0.37        1.57        0.00      2772         4

17:53:05 up 30 min, 5 users, load average: 1.09, 0.29, 0.14
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      9.07      474.34       44.19    859037     80036
sda1     8.61      472.44       44.19    855609     80032
sda2     0.36        1.53        0.00      2772         4

17:54:17 up 31 min, 5 users, load average: 1.83, 0.70, 0.29
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.93      456.28      118.89    859037    223841
sda1     8.49      454.46      118.89    855609    223837
sda2     0.35        1.47        0.00      2772         4

17:54:57 up 32 min, 5 users, load average: 1.91, 0.86, 0.36
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.79      446.70      126.79    859037    243835
sda1     8.36      444.92      126.79    855609    243831
sda2     0.34        1.44        0.00      2772         4

--- boinc stopped just before 17:55:05 ---

17:55:05 up 32 min, 5 users, load average: 1.92, 0.88, 0.37
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.76      444.81      128.03    859165    247287
sda1     8.33      443.03      128.02    855737    247283
sda2     0.34        1.44        0.00      2772         4

17:55:28 up 32 min, 5 users, load average: 1.26, 0.81, 0.36
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.70      439.58      133.44    859165    260804
sda1     8.27      437.82      133.43    855737    260800
sda2     0.34        1.42        0.00      2772         4

17:55:45 up 32 min, 5 users, load average: 0.98, 0.77, 0.36
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.63      435.97      132.35    859165    260820
sda1     8.21      434.23      132.35    855737    260816
sda2     0.34        1.41        0.00      2772         4

17:56:13 up 33 min, 5 users, load average: 0.60, 0.70, 0.35
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.52      429.78      133.23    859165    266328
sda1     8.10      428.07      133.22    855737    266324
sda2     0.33        1.39        0.00      2772         4

17:56:58 up 34 min, 5 users, load average: 0.28, 0.60, 0.33
Device:   tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
sda      8.33      420.27      130.28    859165    266332
sda1     7.92      418.59      130.28    855737    266328
sda2     0.32      1.36        0.00      2772         4
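For reference, the kB_wrtn column above is cumulative since boot, so between the 17:52:21 snapshot (5087 kB) and the last one at 17:56:58 (266332 kB) roughly 260 MB went to the disk while boinc was active. Here is a minimal sketch to repeat that measurement without iostat, reading the kernel counter directly; my assumptions are that the disk is sda and that a 10-minute window is long enough (field 7 of /sys/block/<dev>/stat is "sectors written", always counted in 512-byte units).

#!/usr/bin/env python3
# Read the per-disk write counter before and after a pause and report how
# much was written in between; this is the same counter that feeds
# iostat's kB_wrtn column.
import time

DEV = 'sda'           # assumption: the SSD is /dev/sda
WINDOW = 600          # assumption: measure over 10 minutes

def sectors_written(dev):
    with open('/sys/block/%s/stat' % dev) as f:
        return int(f.read().split()[6])   # field 7: sectors written

before = sectors_written(DEV)
time.sleep(WINDOW)
written_mb = (sectors_written(DEV) - before) * 512 / 1e6

print('%.1f MB written to %s in %d s (about %.1f GB/day at this rate)'
      % (written_mb, DEV, WINDOW, written_mb * 86400.0 / WINDOW / 1000))

Running it once with boinc stopped and once with it running would show the write rate that boinc and its science applications add on top of the normal system activity.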