Hi
After a clean database load, the database would (should?) look like this,
if a random stab at the data is taken...
[8KB-m][8KB-n][8KB-o][8KB-p]...
The data should be fairly (100%) sequential in layout ... after some
days, though, that same spot (using ZFS) would probably look like:
[8KB-m][ ][
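To make that concrete, here is a toy copy-on-write simulation (purely illustrative - the block and rewrite counts are made up, and it ignores everything ZFS actually does to batch and place allocations):

import random

NBLOCKS = 1000            # 8KB blocks in this stretch of the table after the initial load

# After a clean load: logical block i sits at physical slot i (fully sequential).
physical = list(range(NBLOCKS))
next_free = NBLOCKS       # copy-on-write never overwrites in place; new copies land in free space

# Simulate a few days of random updates: each rewritten block gets a new location.
for _ in range(5000):
    i = random.randrange(NBLOCKS)
    physical[i] = next_free
    next_free += 1

# How many logically adjacent blocks are still physically adjacent?
still_sequential = sum(
    1 for i in range(NBLOCKS - 1) if physical[i + 1] == physical[i] + 1
)
print(f"{still_sequential} of {NBLOCKS - 1} adjacencies remain sequential")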
Some businesses do not accept any kind of risk and hence will try hard
(i.e. spend a lot of money) to eliminate it (create 2, 3, 4 copies,
read-verify, cksum...).
At the moment only ZFS can give this assurance, plus the ability to
self-correct detected errors.
It's a good thing that ZFS can help peo
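Purely to illustrate the detect-and-self-correct idea described above (a toy sketch, not ZFS code; it just assumes two redundant copies of each block plus a stored checksum, roughly like a mirror):

import hashlib

def write_block(data: bytes) -> dict:
    """Store two redundant copies plus a checksum - loosely analogous to a mirror."""
    return {
        "copies": [bytearray(data), bytearray(data)],
        "csum": hashlib.sha256(data).digest(),
    }

def read_block(block: dict) -> bytes:
    """Return a copy whose checksum verifies; rewrite (heal) any copy that doesn't."""
    good = None
    for copy in block["copies"]:
        if hashlib.sha256(bytes(copy)).digest() == block["csum"]:
            good = bytes(copy)
            break
    if good is None:
        raise IOError("unrecoverable: every copy fails its checksum")
    for i, copy in enumerate(block["copies"]):
        if hashlib.sha256(bytes(copy)).digest() != block["csum"]:
            block["copies"][i] = bytearray(good)   # self-correct the bad copy
    return good

blk = write_block(b"important data")
blk["copies"][0][0] ^= 0xFF     # silent corruption of one copy ("bit rot")
print(read_block(blk))          # detected via checksum, served from the good copy, bad copy healed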
Yes. Blocks are compressed individually, so a smaller block size will (on
average) lead to less compression. (Assuming that your data is compressible at
all, that is.)
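A quick way to see the effect outside ZFS, with zlib standing in for the pool's compression algorithm (the sample data and block sizes here are arbitrary, so treat the numbers as illustrative only):

import zlib

# Any compressible data will do; this is just generated filler.
data = b"the quick brown fox jumps over the lazy dog\n" * 20000

def ratio(data: bytes, blocksize: int) -> float:
    """Compress each block independently (as ZFS does) and return the overall ratio."""
    compressed = sum(
        len(zlib.compress(data[i:i + blocksize]))
        for i in range(0, len(data), blocksize)
    )
    return len(data) / compressed

for bs_kb in (8, 16, 32, 64, 128):
    print(f"{bs_kb:>3}K blocks: {ratio(data, bs_kb * 1024):.2f}x")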
Hi Richard:
I just tried your suggestion; unfortunately it doesn't work. Basically (the
exact sequence is sketched below):
make a clone of the snapshot - works fine
in the clone, remove the directories - works fine
make a snapshot of the clone - works fine
destroy the clone - fails, because ZFS reports that the "filesystem has
children"
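In case it helps anyone reproduce this, the sequence above amounts to something like the following (dataset names are placeholders, not my actual pool; the last step is where it falls over):

import subprocess

def zfs(*args):
    print("+ zfs", *args)
    return subprocess.run(["zfs", *args])

zfs("clone", "tank/data@snap", "tank/work")   # step 1: clone the snapshot - works
# step 2: remove the unwanted directories under /tank/work (ordinary rm -rf)
zfs("snapshot", "tank/work@trimmed")          # step 3: snapshot the clone - works
zfs("destroy", "tank/work")                   # step 4: fails - the new snapshot is a
                                              # child of the clone, so ZFS refuses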
I don't have time to RTFS, so I was curious if there is a guide on using
zdb, and whether it does any writing of the ZFS information. The binary has a
lot of options, and it isn't clear what they do.
I'm looking for any tools that let you do low-level fiddling with things
such as broken zpools.
ta,
Well, I guess we're going to remain stuck in this sub-topic for a bit longer:
> > The vast majority of what ZFS can detect (save for *extremely* rare
> > undetectable bit-rot and for real hardware (path-related) errors that
> > studies like CERN's have found to be very rare - and you have ye
Thanks for taking the time to flesh these points out. Comments below:
...
> The compression I see varies from something like 30% to 50%, very
> roughly (files reduced *by* 30%, not files reduced *to* 30%). This is
> with the Nikon D200, compressed NEF option. On some of the lower-leve
> ZFS data buffers are attached to zvp; however, we still keep
> metadata in the crashdump. At least right now, this means that
> cached ZFS metadata has kvp as its vnode.
>
>Still, it's better than what you get currently.
I absolutely agree.
At one point, we discussed a
I went ahead and bought an M9N-SLI motherboard with 6 SATA controllers and also
a Promise TX4 (4x SATA-300 non-RAID) PCI controller. Anyone know if the TX4 is
supported in OpenSolaris? If it's as badly supported as the (crappy) Sil
chipsets, I'm better off with OpenFiler (Linux), I think.
This m
On Nov 12, 2007 4:16 PM, <[EMAIL PROTECTED]> wrote:
> >I don't think it should be too bad (for ::memstat), given that (at
> >least in Nevada), all of the ZFS caching data belongs to the "zvp"
> >vnode, instead of "kvp".
>
> ZFS data buffers are attached to zvp; however, we still keep m
asa wrote:
> I would like for all my NFS clients to hang during the failover, then
> pick up trucking on this new filesystem, perhaps obviously failing
> their writes back to the apps which are doing the writing. Naive?
The OpenSolaris NFS client does this already - has done since IIRC
aroun
>I don't think it should be too bad (for ::memstat), given that (at
>least in Nevada), all of the ZFS caching data belongs to the "zvp"
>vnode, instead of "kvp".
ZFS data buffers are attached to zvp; however, we still keep metadata in
the crashdump. At least right now, this means that
> > You have to detect the problem first. ZFS is in a much better position
> > to detect the problem due to block checksums.
>
> Bulls***, to quote another poster here who has since been strangely quiet.
> The vast majority of what ZFS can detect (save for *extremely* rare
> undetectable bit
On Nov 8, 2007 4:21 PM, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
> Hey all -
>
> Just a quick one...
>
> Is there any plan to update the mdb ::memstat dcmd to present ZFS
> buffers as part of the summary?
>
> At present, we get something like:
> > ::memstat
> Page Summary                Pages
Thanks for the help guys - unfortunately the only hardware at my disposal just
at the minute is all 32-bit, so I'll just have to wait a while and fork out on
some 64-bit kit before I get the drives. I'm a home user, so I'm glad I didn't
buy the drives and discover I couldn't use them without spendi
On Sat, Nov 10, 2007 at 02:05:04PM -0200, Toby Thain wrote:
> > Yup - that's exactly the kind of error that ZFS and WAFL do a
> > perhaps uniquely good job of catching.
>
> WAFL can't catch all: It's distantly isolated from the CPU end.
How so? The checksumming method is different from ZFS, bu
>
> In the previous and current responses, you seem quite determined of
> others misconceptions.
I'm afraid that your sentence above cannot be parsed grammatically. If you
meant that I *have* determined that some people here are suffering from various
misconceptions, that's correct.
Given
Cyril Plisko wrote:
> On Nov 12, 2007 5:51 PM, Neelakanth Nadgir <[EMAIL PROTECTED]> wrote:
>
>> You could always replace this device by another one of same, or
>> bigger size using zpool replace.
>>
>
> Indeed. Provided that I always have an unused device of same or
> bigger size, which i
On Nov 12, 2007 5:51 PM, Neelakanth Nadgir <[EMAIL PROTECTED]> wrote:
> You could always replace this device by another one of same, or
> bigger size using zpool replace.
Indeed. Provided that I always have an unused device of same or
bigger size, which is seldom the case.
:(
> -neel
>
>
> Cyri
On Nov 10, 2007, at 23:16, Carson Gaspar wrote:
> Mattias Pantzare wrote:
>
>> As the fsid is created when the file system is created it will be the
>> same when you mount it on a different NFS server. Why change it?
>>
>> Or are you trying to match two different file systems? Then you also
>> ha
You could always replace this device by another one of same, or
bigger size using zpool replace.
-neel
Cyril Plisko wrote:
> Hi !
>
> I played recently with Gigabyte i-RAM card (which is basically an SSD)
> as a log device for a ZFS pool. However, when I tried to remove it - I need
> to give the
Hi!
I played recently with a Gigabyte i-RAM card (which is basically an SSD)
as a log device for a ZFS pool. However, when I tried to remove it - I need
to give the card back - it refused to do so. It looks like I am hitting
6574286 removing a slog doesn't work [1]
Is there any workaround? I rea
In this PC, I'm using the PCI card
http://www.intel.com/network/connectivity/products/pro1000gt_desktop_adapter.htm
but, more recently, I'm using the PCI Express card
http://www.intel.com/network/connectivity/products/pro1000pt_desktop_adapter.htm
Note that the latter didn't have PXE and the b
Louwtjie Burger writes:
> Hi
>
> What is the impact of not aligning the DB blocksize (16K) with the ZFS
> recordsize, especially when it comes to random reads on a single HW RAID LUN?
>
> How would one go about measuring the impact (if any) on the workload?
>
The DB will have a bigger in-memory footprint
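For the read side, a rough first-order way to reason about it is plain read-amplification arithmetic (this assumes an uncached random 16K read and ignores ARC hits, so it is an upper bound rather than a measurement):

def read_amplification(db_block_kb: int, recordsize_kb: int) -> float:
    """KB ZFS must read to satisfy one random DB read that misses the cache."""
    return max(recordsize_kb, db_block_kb) / db_block_kb

for rs in (128, 64, 32, 16):
    print(f"recordsize={rs:>3}K -> {read_amplification(16, rs):.0f}x I/O per uncached 16K read")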
James C. McPherson wrote:
> can you guess? wrote:
> ...
>
>> Ah - thanks to both of you. My own knowledge of video format internals
>> is so limited that I assumed most people here would be at least equally
>> familiar with the notion that a flipped bit or two in a video would
>> hardly qualify