Hi Andreas,

I have exactly the same problem with LTO-4 drives. I don't have an answer, just a "me too". I've ignored this for a long time because it worked with 128K blocks.
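(For anyone skimming: the 128K workaround mentioned above is just a matter of pinning the block size in the Device resource of bacula-sd.conf; 131072 bytes = 128 KiB. Sketch only — the surrounding directives stay as in your existing Device stanza.)

```
Device {
  ...
  # 128K blocks work reliably here; larger sizes trip btape's read test
  Minimum block size = 131072
  Maximum block size = 131072
  ...
}
```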
I am currently rebuilding a server based on CentOS 6.4 (2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 17:19:38 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux). I just got everything compiled; here is the output using 256K blocks.

# ./btape -c ../etc/bacula-sd.conf Tape
Tape block granularity is 1024 bytes.
btape: butil.c:290-0 Using device: "Tape" for writing.
btape: btape.c:477-0 open device "Tape" (/dev/nst0): OK
*rewind
btape: btape.c:579-0 Rewound "Tape" (/dev/nst0)
*test

=== Write, rewind, and re-read test ===

I'm going to write 10000 records and an EOF
then write 10000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:1157-0 Wrote 10000 blocks of 262044 bytes.
btape: btape.c:609-0 Wrote 1 EOF to "Tape" (/dev/nst0)
btape: btape.c:1173-0 Wrote 10000 blocks of 262044 bytes.
btape: btape.c:609-0 Wrote 1 EOF to "Tape" (/dev/nst0)
btape: btape.c:1215-0 Rewind OK.
Got EOF on tape.
btape: btape.c:1233-0 Read block 3617 failed! ERR=Success
*quit

On 08/21/2013 01:20 PM, Andreas Koch wrote:
> Hi all,
>
> I am stumped by the behavior of an HP LTO-5 drive running on Scientific
> Linux 6.4 (kernel 2.6.32-358.14.1.el6.x86_64) and Bacula 5.2.12.
> Specifically, using dd, I can read and write block sizes of 2 MB, but btape
> cannot reliably handle anything larger than 128 KB (it fails on reading, see
> below).
>
> However, when I use dd to read the tape to which btape supposedly wrote two
> files of 10000 blocks each (before failing after reading 3616 of them), I
> read each btape-written 10000-block "file" back as _three_ actual tape
> files (of 3616+3616+2768 = 10000 blocks). Note that this test was performed
> with a block size of 512 KB (see the Device definition from bacula-sd.conf
> below).
>
> I would be grateful for any ideas on how to resolve this.
> With the smaller block sizes, the backup is noticeably slower for
> compressible data (e.g., database dumps), so I really would like to move
> back up to larger block sizes.
>
> Many thanks in advance,
> Andreas Koch
>
> gundabad ~ # btape -c /etc/bacula/bacula-sd.conf /dev/nst0
> Tape block granularity is 1024 bytes.
> btape: butil.c:290 Using device: "/dev/nst0" for writing.
> btape: btape.c:477 open device "LTO-4" (/dev/nst0): OK
> *test
>
> === Write, rewind, and re-read test ===
>
> I'm going to write 10000 records and an EOF
> then write 10000 records and an EOF, then rewind,
> and re-read the data to verify that it is correct.
>
> This is an *essential* feature ...
>
> btape: btape.c:1157 Wrote 10000 blocks of 524188 bytes.
> btape: btape.c:609 Wrote 1 EOF to "LTO-4" (/dev/nst0)
> btape: btape.c:1173 Wrote 10000 blocks of 524188 bytes.
> btape: btape.c:609 Wrote 1 EOF to "LTO-4" (/dev/nst0)
> btape: btape.c:1215 Rewind OK.
> Got EOF on tape.
> btape: btape.c:1233 Read block 3617 failed!
> ERR=Success
> *q
> btape: smartall.c:404 Orphaned buffer: btape 280 bytes at 15e55e8 from
> jcr.c:362
> gundabad ~ # mt -f /dev/nst0 rewind
> gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
> 3616+0 records in
> 3616+0 records out
> 1895825408 bytes (1.9 GB) copied, 3.7062 s, 512 MB/s
> gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
> 3616+0 records in
> 3616+0 records out
> 1895825408 bytes (1.9 GB) copied, 3.7542 s, 505 MB/s
> gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
> 2768+0 records in
> 2768+0 records out
> 1451229184 bytes (1.5 GB) copied, 2.88829 s, 502 MB/s
> gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
> 3616+0 records in
> 3616+0 records out
> 1895825408 bytes (1.9 GB) copied, 3.75554 s, 505 MB/s
> gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
> 3616+0 records in
> 3616+0 records out
> 1895825408 bytes (1.9 GB) copied, 3.75338 s, 505 MB/s
> gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
> 2768+0 records in
> 2768+0 records out
> 1451229184 bytes (1.5 GB) copied, 2.88846 s, 502 MB/s
> gundabad ~ # dd if=/dev/nst0 of=/dev/null bs=512k
> 0+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 0.247548 s, 0.0 kB/s
>
> Device {
>   Name = LTO-5
>   Media Type = LTO-5
>   Archive Device = /dev/nst0
>   AutomaticMount = yes;           # when device opened, read it
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   Maximum File Size = 8g;
>   Minimum block size = 524288
>   Maximum block size = 524288
>   Changer Device = /dev/changer
>   AutoChanger = yes
>   # AHK we want to interrogate the drive, not the changer
>   Alert Command = "sh -c 'smartctl -H -l error /dev/sg11'"
>   Maximum Spool Size = 3000g
>   Spool Directory = /etc/bacula/spooldisk/BaculaSpool
>   Maximum Network Buffer Size = 65536
> }
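One thing worth noting from the dd runs above: the record and byte counts are internally consistent, which suggests all the data btape wrote is actually on tape and only btape's read path trips over the segment boundaries. A quick sanity check of the arithmetic (plain shell, nothing drive-specific):

```shell
# Each 10000-block btape "file" shows up to dd as three segments:
echo $((3616 + 3616 + 2768))   # blocks per btape file -> 10000

# Segment byte counts match dd's output exactly at 512 KiB records:
echo $((3616 * 524288))        # -> 1895825408, as reported by dd
echo $((2768 * 524288))        # -> 1451229184, as reported by dd
```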
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users