to
>investigate at that time.
Isn't this just the lemming-syncer hurling every dirty block over
the cliff at the same time?
To find out: run gstat and keep an eye on the leftmost column.
The road map for fixing that has been known for years...
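For example (just an illustration; the 1-second interval and the da*
filter are arbitrary, adjust for your own disks):

  # refresh every second, show only the da* providers
  gstat -I 1s -f '^da[0-9]'

The leftmost column, L(q), is the length of the request queue on each
provider; a syncer that dumps all of its dirty blocks at once shows up
as periodic spikes there.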
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
Thanks in advance.
>
>Hi,
> It won't work. I know the author was thinking about adding
>VIA C3/Padlock support, but am not sure about the details.
There is partial support for opencrypto/hifn/vpn1401 in the p4
branch phk_gbde, but it doesn't help very much because of the high
setup overhead.
If your threat model indicates this is a serious threat for you,
you should arrange for simple physical destruction of your disk to
be available.
Most modern disks have one or more holes in the metal only covered
by a metallic sticker. Pouring sulfuric acid through those openings
is a good start.
--
Poul-Henning Kamp
incremental security IMO; on the other hand, it is feasible to
implement, so I've put it on the todo list.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
>read/write?
Not apart from the dreaded debug option.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
to high temperatures.
Poul-Henning
[*] Known in certain circles as a "Warnering your laptop" :-)
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
>looking at
>it now and will probably be committing a fix.
Already committed.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
>I need two to sustain the IO
>required?
Spreading it will give you more I/O bandwidth.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
>3 of these
>shelves, what would my expected loss of IO be?
The loss will mostly be from latency, but how much is impossible to
tell, I think.
The statistics of this, even with my trusty old Erlang table, would
still be too uncertain to be of any value.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
d 5-20T filesystem and actually fsck them.
I am not sure I would advocate 64k blocks yet.
I tend to stick with 32k blocks and 4k fragments myself.
This is a problem which is in the cross-hairs for 6.x.
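(Purely as an illustration; the device name below is made up, and the
defaults differ between releases, so check newfs(8) first.)

  # 32k blocks with 4k fragments (the usual 8:1 block/fragment ratio),
  # soft updates enabled
  newfs -U -b 32768 -f 4096 /dev/da0s1d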
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
more inodes than I need, it also saves fsck time.
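(Assuming the knob meant here is the bytes-per-inode density; the
numbers and device name below are only an example.)

  # allocate one inode per 64k of data space; fewer inodes means fewer
  # inode blocks for fsck to walk
  newfs -U -b 32768 -f 4096 -i 65536 /dev/da0s1d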
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
>given that large disks will contain large files.
>
>It strikes me that driving the block size up (as far as 1M) and having
>a 256 (or so) fragments might become appropriate.
Sounds like a _great_ project for somebody :-)
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956