On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
> Is there a simple way to query zfs send binary objects for basic information
> such as:
>
> 1) What snapshot they represent?
> 2) When they were created?
> 3) Whether they are the result of an incremental send?
> 4) What the baseline snapshot was, if applicable?
On Jan 29, 2011, at 4:14 PM, Mike Tancsa wrote:
> On 1/29/2011 6:18 PM, Richard Elling wrote:
>>> 0(offsite)#
>>
>> The next step is to run "zdb -l" and look for all 4 labels. Something like:
>> zdb -l /dev/ada2
>>
>> If all 4 labels exist for each drive and appear intact, then look more
>
Is there a simple way to query zfs send binary objects for basic information
such as:
1) What snapshot they represent?
2) When they were created?
3) Whether they are the result of an incremental send?
4) What the baseline snapshot was, if applicable?
5) What ZFS version number they were made
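Most of the questions above can be answered from the stream's BEGIN record. On builds that ship it, `zstreamdump` reads a send stream from stdin and prints that header; a sketch (the file name is hypothetical, and exact field names may vary by release):

```shell
# zstreamdump parses a "zfs send" stream from stdin and dumps its records.
cat tank-backup.zfs | zstreamdump | head -20
# The BEGIN record typically carries:
#   toname        -> the snapshot the stream represents      (question 1)
#   creation_time -> when that snapshot was created          (question 2)
#   fromguid      -> nonzero only for incremental streams    (questions 3/4)
#   version       -> the stream format version               (question 5)
```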
On 1/29/2011 6:18 PM, Richard Elling wrote:
>> 0(offsite)#
>
> The next step is to run "zdb -l" and look for all 4 labels. Something like:
> zdb -l /dev/ada2
>
> If all 4 labels exist for each drive and appear intact, then look more closely
> at how the OS locates the vdevs. If you can't so
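A sketch of that label check across several drives; the device names are the ones from this thread, and the grep pattern assumes zdb's usual "LABEL n" section headers:

```shell
# A healthy device should show all four labels (LABEL 0..3).
for d in /dev/ada2 /dev/ada3 /dev/ada4 /dev/ada5; do
    echo "== $d =="
    zdb -l "$d" | grep -c '^LABEL'   # expect 4 per device
done
```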
On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:
> On 1/29/2011 12:57 PM, Richard Elling wrote:
>>> 0(offsite)# zpool status
>>> pool: tank1
>>> state: UNAVAIL
>>> status: One or more devices could not be opened. There are insufficient
>>> replicas for the pool to continue functioning.
>>>
On 1/29/2011 11:38 AM, Edward Ned Harvey wrote:
>
> That is precisely the reason why you always want to spread your mirror/raidz
> devices across multiple controllers or chassis. If you lose a controller or
> a whole chassis, you lose one device from each vdev, and you're able to
> continue produ
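A minimal sketch of that layout, assuming two controllers (c1, c2) with three disks each; every mirror pairs one disk from each controller, so losing a whole controller degrades each vdev without killing any:

```shell
# Each mirror spans both controllers; no single controller failure
# removes more than one side of any mirror.
zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0
```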
On 1/29/2011 12:57 PM, Richard Elling wrote:
>> 0(offsite)# zpool status
>> pool: tank1
>> state: UNAVAIL
>> status: One or more devices could not be opened. There are insufficient
>> replicas for the pool to continue functioning.
>> action: Attach the missing device and online it using 'zpool online'.
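When the labels look intact but the pool stays UNAVAIL, one common next step (not guaranteed to work; pool name and path are from this thread) is to export and re-import so ZFS rescans the devices at their current paths:

```shell
# The export may fail if the pool was never imported cleanly; that is OK.
zpool export tank1
# Point the import scan at the directory that holds the disk nodes.
zpool import -d /dev tank1
```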
Whilst the driver supports TRIM, ZFS doesn't yet. So in practice it's not
supported.
Bye,
Deano
de...@cloudpixies.com
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brandon High
Sent: 29 January 2011 18:40
To: Edward
On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey wrote:
> What is the status of ZFS support for TRIM?
I believe it's been supported for a while now.
http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
-B
--
Brandon High : bh...@freaks.com
I have about 222 ZFS file systems and wanted to upgrade them from version 3 to version 4.
How long does this operation take?
I just ran zfs upgrade -a, and it is taking a while.
truss shows this repeatedly:
ioctl(3, ZFS_IOC_SET_PROP, 0x08046180) = 0
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08044BF0)
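That ioctl pair repeats once per file system, so roughly 222 iterations are expected. A way to watch progress from another shell (pool name is an example):

```shell
# Count file systems still sitting at the old on-disk version 3.
zfs get -H -t filesystem -o value version -r tank1 | grep -c '^3$'
```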
On Jan 28, 2011, at 6:41 PM, Mike Tancsa wrote:
> Hi,
> I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
> offsite storage. All was working fine for about 20min and then the new
> drive cage started to fail. Silly me for assuming new hardware would be
> fine :(
>
> Th
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
My google-fu is coming up short on this one... I didn't see that it had been
discussed in a while ...
BTW, there were a bunch of place
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> My google-fu is coming up short on this one... I didn't see that it had been
> discussed in a while ...
BTW, there were a bunch of places where people said "ZFS doesn't
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mike Tancsa
>
> NAME          STATE     READ WRITE CKSUM
> tank1         UNAVAIL      0     0     0  insufficient replicas
>   raidz1      ONLINE       0     0     0
>
My google-fu is coming up short on this one... I didn't see that it had
been discussed in a while ...
What is the status of ZFS support for TRIM?
For the pool in general...
and...
Specifically for the slog and/or cache???
> From: Deano [mailto:de...@rattie.demon.co.uk]
>
> Hi Edward,
> Do you have a source for the 8KiB block size data? Whilst we can't avoid the
> SSD controller, in theory we can change the smallest size we present to the
> SSD to 8KiB fairly easily... I wonder if that would help the controller do a
I just had my "oh yeah" moment; this was the concept I was missing.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> If you want dedup to perform well, you *absolutely* must have an L2ARC
> device which can hold the *entire* dedup table. Remember, the size of
> the DDT is not dependent on the size of your data pool, but on the
> number of ZFS slabs which are contained in that pool (slab = record, for
> this pu
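A back-of-envelope sizing sketch follows from that: entries scale with record count, not pool bytes. The ~320 bytes per DDT entry used here is a commonly quoted approximation, not an exact on-disk figure:

```shell
# Rough DDT sizing: 1 TiB of unique data at the default 128K recordsize.
pool_bytes=$((1024 * 1024 * 1024 * 1024))   # 1 TiB of unique data
recordsize=$((128 * 1024))                  # default 128K records
entry_bytes=320                             # approximate bytes per DDT entry
entries=$((pool_bytes / recordsize))
ddt_bytes=$((entries * entry_bytes))
echo "$entries entries, DDT ~ $((ddt_bytes / 1024 / 1024)) MiB"
# prints: 8388608 entries, DDT ~ 2560 MiB
```

Smaller records (e.g. 8K volblocksize) multiply the entry count, and therefore the DDT, by 16x for the same data.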
On 1/28/2011 2:24 PM, Roy Sigurd Karlsbakk wrote:
I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about is that it seems like ZFS only dedups at
the file level and not the block level. When I
On 1/28/2011 1:48 PM, Nicolas Williams wrote:
On Fri, Jan 28, 2011 at 01:38:11PM -0800, Igor P wrote:
I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about was it seems like ZFS o
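ZFS dedup actually works per record (block), not per file; identical data only dedups when it lands on identical record boundaries with the same checksum settings. To estimate what dedup would buy on an existing pool before enabling it, zdb can simulate the table (pool name from the thread):

```shell
# -S walks the pool and prints a simulated DDT histogram and dedup ratio
# without modifying anything.
zdb -S data
```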
20 matches