Paul Mather wrote:
> By definition, random data are not compressible. It's my understanding that
> the "compressed capacity" of tapes is based explicitly on an expected 2:1
> compression ratio for source data (and this is usually cited somewhere in the
> small print). That is a reasonable estimate.
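You can check the claim directly: compress a highly repetitive block and a
random block and compare the output sizes. A minimal sketch, assuming GNU
coreutils dd and gzip (the sizes are arbitrary):

  # 16 MB of zeros: gzip shrinks this to a tiny fraction of the input
  dd if=/dev/zero bs=1M count=16 2>/dev/null | gzip -c | wc -c
  # 16 MB of pseudorandom data: gzip output is the input size or slightly larger
  dd if=/dev/urandom bs=1M count=16 2>/dev/null | gzip -c | wc -c

A tape drive's hardware compressor behaves the same way, so a drive's 2:1
"compressed" rating is only reached on data that actually compresses.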
On Aug 9, 2010, at 2:55 AM, Henry Yen wrote:
> On Fri, Aug 06, 2010 at 10:48:10AM +0200, Christian Gaul wrote:
>> Even when catting to /dev/dsp I use /dev/urandom. Blocking on
>> /dev/random happens much too quickly, and when do you really need that
>> much randomness?
>
> I get about 40 bytes on a small server before blocking.
On 09.08.2010 08:55, Henry Yen wrote:
> On Fri, Aug 06, 2010 at 10:48:10AM +0200, Christian Gaul wrote:
>
>> Even when catting to /dev/dsp I use /dev/urandom. Blocking on
>> /dev/random happens much too quickly, and when do you really need that
>> much randomness?
>>
> I get about 40 bytes on a small server before blocking.
On Fri, Aug 06, 2010 at 10:48:10AM +0200, Christian Gaul wrote:
> Even when catting to /dev/dsp I use /dev/urandom. Blocking on
> /dev/random happens much too quickly, and when do you really need that
> much randomness?
I get about 40 bytes on a small server before blocking.
> > Reason 1: the e
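On Linux (at least with kernels of that era) you can watch the entropy pool
drain and see why /dev/random blocks so quickly. A minimal sketch, using the
standard procfs path:

  cat /proc/sys/kernel/random/entropy_avail    # bits currently in the pool
  dd if=/dev/random of=/dev/null bs=1 count=64 # blocks once the pool runs low
  cat /proc/sys/kernel/random/entropy_avail    # typically far lower afterwards

An idle headless server generates little interrupt noise to refill the pool,
which matches the ~40 bytes reported above.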
On Aug 6, 2010, at 4:48 AM, Christian Gaul wrote:
> On 05.08.2010 21:56, Henry Yen wrote:
>> On Thu, Aug 05, 2010 at 17:17:39PM +0200, Christian Gaul wrote:
>>
[[...]]
>>
/dev/urandom seems to measure about 3MB/sec or thereabouts, so creating
a large "uncompressible" file could be do
On 05.08.2010 21:56, Henry Yen wrote:
> On Thu, Aug 05, 2010 at 17:17:39PM +0200, Christian Gaul wrote:
>
>> On 05.08.2010 16:57, Henry Yen wrote:
>>
> First, I welcome this discussion, however arcane (as long as the
> List permits it, of course) -- I am happy to discover if I'm wrong
>
On Thu, Aug 05, 2010 at 17:17:39PM +0200, Christian Gaul wrote:
> On 05.08.2010 16:57, Henry Yen wrote:
First, I welcome this discussion, however arcane (as long as the
List permits it, of course) -- I am happy to discover if I'm wrong
in my thinking. That said, I'm not (yet) convinced.
This p
On 05.08.2010 16:57, Henry Yen wrote:
> On Thu, Aug 05, 2010 at 10:09:06AM -0400, John Drescher wrote:
>
>> On Thu, Aug 5, 2010 at 8:57 AM, Henry Yen wrote:
>>
>
>>> On (at least) Linux, /dev/random will quickly block - use /dev/urandom
>>> instead.
>>>
>> Since these tend to be slow, I would just create a large file from one of these.
On Thu, Aug 05, 2010 at 10:09:06AM -0400, John Drescher wrote:
> On Thu, Aug 5, 2010 at 8:57 AM, Henry Yen wrote:
> > On (at least) Linux, /dev/random will quickly block - use /dev/urandom
> > instead.
>
> Since these tend to be slow, I would just create a large file from one of
> these.
Well,
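If /dev/urandom is too slow for generating a big test file, one common
workaround is to encrypt zeros: the ciphertext is incompressible, and a block
cipher is usually much faster than the kernel's random pool. A sketch,
assuming OpenSSL's enc command and GNU head (the passphrase and file name are
placeholders, not anything from this thread):

  # 1 GiB of incompressible data, typically far faster than /dev/urandom
  openssl enc -aes-256-cbc -nosalt -pass pass:tapetest < /dev/zero 2>/dev/null \
    | head -c 1073741824 > /tmp/random-test.bin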
On Thu, Aug 5, 2010 at 8:57 AM, Henry Yen wrote:
> On Thu, Aug 05, 2010 at 12:46:49PM +0100, Alan Brown wrote:
>
>> Tape speed testing while writing a repetitive file is useless as
>> hardware compression makes it go a lot faster than it naturally would.
>>
>> For tape tests use /dev/random
>
> On (at least) Linux, /dev/random will quickly block - use /dev/urandom instead.
On Thu, Aug 05, 2010 at 12:46:49PM +0100, Alan Brown wrote:
> Tape speed testing while writing a repetitive file is useless as
> hardware compression makes it go a lot faster than it naturally would.
>
> For tape tests use /dev/random
On (at least) Linux, /dev/random will quickly block - use /dev/urandom instead.
ekke85 wrote (2010/08/05):
> slow. It writes at 22 MB/s. The drives should be able to do a lot more
> than that. I have to back up 11 TB, which takes a couple of days to
> complete.
Try tar -cf /dev/null /data-with-11-tb and you will see whether the
bottleneck is the data source or something else. How many files do you
have?
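A timed variant of that test, with one caveat: GNU tar special-cases an
archive named /dev/null and may skip reading the file data entirely, so pipe
through cat to force a real read (the path is reused from the suggestion
above):

  time tar -cf - /data-with-11-tb | cat > /dev/null
  # (run it on a representative subdirectory; the full 11 TB would take days)
  # bytes read divided by elapsed time gives the source throughput; if this
  # lands near 22 MB/s, the disks, not the tape drive, are the bottleneck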
ekke85 wrote (2010/08/05):
> I do not have spooling on and I don't have software compression on.
It seems that you have LTO-3 drive(s). If you cannot feed data constantly
at a rate of at least 27 MB/s (HP LTO-3) or 40 MB/s (IBM LTO-3), you
need spooling - it is a must. Note that requir
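For reference, spooling in Bacula is enabled per Job, with the spool area
defined on the storage daemon. A minimal sketch; the directive names follow
the Bacula manual, but the path and size here are placeholders:

  # bacula-dir.conf, in the Job resource
  Spool Data = yes

  # bacula-sd.conf, in the Device resource
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G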
ekke85 wrote:
> The spooling attribute was not enabled, I have enabled it now. This is what I
> get writing to disk and then also writing that file to tape with tar:
Tape speed testing while writing a repetitive file is useless as
hardware compression makes it go a lot faster than it naturally would.
For tape tests use /dev/random
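To measure the effect on the drive itself, write the same volume of zeros and
of pre-generated random data and compare the rates. A sketch, assuming the
Linux st driver with the drive at /dev/nst0 and hardware compression enabled
(the device name and the test file are assumptions):

  mt -f /dev/nst0 rewind
  dd if=/dev/zero of=/dev/nst0 bs=256k count=4096       # compressible: inflated rate
  mt -f /dev/nst0 rewind
  dd if=/tmp/random-test.bin of=/dev/nst0 bs=256k       # incompressible: native rate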
>
> Hi Thomas
>
> The spooling attribute was not enabled, I have enabled it now. This is
> what I get writing to disk and then also writing that file to tape with
> tar:
>
>
> This is a 10 GB file to disk:
> ]# dd if=/dev/zero of=/home/bigfile bs=1M count=1
> 1+0 records in
> 1+0 re
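One caveat with that disk test: dd from /dev/zero mostly measures the page
cache unless the timing includes a flush. A minimal sketch with GNU dd
(count=10240 is just an example giving 10 GB at bs=1M; the original count is
cut off above):

  dd if=/dev/zero of=/home/bigfile bs=1M count=10240 conv=fdatasync
  # conv=fdatasync forces the data to disk before dd reports its rate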
Hi
I have a Quantum Scalar i500 and it works in Bacula, but it is very
slow. It writes at 22 MB/s. The drives should be able to do a lot more
than that. I have to back up 11 TB, which takes a couple of days to
complete, I don't want to think how long it would take to restore :(
This is the
On Thu, 05 Aug 2010 05:57:06 -0400, ekke85 wrote:
> Hi
>
> I have a Quantum Scalar i500 and it works in Bacula, but it is very
> slow. It writes at 22 MB/s. The drives should be able to do a lot more
> than that. I have to back up 11 TB, which takes a couple of days to
> complete, I don't want to think how long it would take to restore :(
Hi
I have a Quantum Scalar i500 and it works in Bacula, but it is very slow. It
writes at 22 MB/s. The drives should be able to do a lot more than that. I
have to back up 11 TB, which takes a couple of days to complete, I don't want to
think how long it would take to restore :(
This is the output