On 16-5-2011 22:55, Freddie Cash wrote:
On Fri, Apr 29, 2011 at 5:17 PM, Brandon High wrote:
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
I'd suggest trying to import the pool into snv_151a (Solaris 11
Express), which is the reference and d
2011-05-17 6:32, Donald Stahl wrote:
I have two follow up questions:
1. We changed the metaslab size from 10M to 4k- that's a pretty
drastic change. Is there some median value that should be used instead
and/or is there a downside to using such a small metaslab size?
2. I'm still confused by th
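For reference, a tuning change like the one described above is typically applied as a live mdb write (mdb takes hex, so 1000 below is 0x1000 = 4K) and can be read back the same way; this is only a sketch of the mechanism, not a recommendation of the value:
# echo metaslab_min_alloc_size/Z 1000 | mdb -kw    (set to 4K on the running kernel)
# echo metaslab_min_alloc_size/E | mdb -k          (read the current value back, in decimal)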
Is it possible for ZFS to keep data that belongs together on the same pool?
(Or would these questions be more related to RAID-Z?)
That way, if there is a failure, only the data on the pool that failed needs to
be replaced.
(Or, if one pool failed, does that mean all the other pools still fail as
On Mon, May 16, 2011 at 8:54 PM, MasterCATZ wrote:
>
> Is it possible for ZFS to keep data that belongs together on the same pool?
>
> (Or would these questions be more related to RAID-Z?)
>
> That way, if there is a failure, only the data on the pool that failed needs to
> be replaced.
> (Or, if o
On May 16, 2011, at 7:32 PM, Donald Stahl wrote:
> As a followup:
>
> I ran the same DD test as earlier- but this time I stopped the scrub:
>
> pool0 14.1T 25.4T 88 4.81K 709K 262M
> pool0 14.1T 25.4T 104 3.99K 836K 248M
> pool0 14.1T 25.4T 360 5.01K 2.
> metaslab_min_alloc_size is not the metaslab size. From the source
Sorry- that was simply a slip of the mind- it was a long day.
> By reducing this value, it is easier for the allocator to identify a
> metaslab for allocation as the file system becomes full.
Thank you for clarifying. Is there a
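One way to see what the allocator is up against on a nearly full pool is to dump per-metaslab free space with zdb (pool name taken from this thread; the exact output format varies by build):
# zdb -m pool0     (offset, spacemap and free space for each metaslab)
# zdb -mm pool0    (more detail per metaslab, useful for judging fragmentation)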
Hello, c4ts,
There seems to be some mixup of bad English and wrong terminology, so I am not
sure I understood your question correctly. Still, I'll try to respond ;)
I've recently posted about ZFS terminology here:
http://opensolaris.org/jive/click.jspa?searchID=4607806&messageID=515894
In ZFS
On May 17, 2011, at 9:17 AM, Jim Klimov wrote:
> Hello, c4ts,
>
> There seems to be some mixup of bad English and wrong terminology, so I am
> not sure I understood your question correctly. Still, I'll try to respond ;)
>
> I've recently posted about ZFS terminology here:
> http://opensolaris.
On Mon, May 16, 2011 at 7:32 PM, Donald Stahl wrote:
> As a followup:
>
> I ran the same DD test as earlier- but this time I stopped the scrub:
>
> pool0 14.1T 25.4T 88 4.81K 709K 262M
> pool0 14.1T 25.4T 104 3.99K 836K 248M
> pool0 14.1T 25.4T 360 5.01K
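For anyone repeating the comparison, a rough sketch of the test as described (the test file path and size are assumptions, not from the thread):
# zpool scrub -s pool0                                    (stop the running scrub)
# dd if=/dev/zero of=/pool0/ddtest bs=1024k count=8192    (8 GB sequential write)
# zpool iostat pool0 5                                    (watch throughput from a second terminal)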
On Tue, May 17, 2011 at 6:49 AM, Jim Klimov wrote:
> 2011-05-17 6:32, Donald Stahl wrote:
>>
>> I have two follow up questions:
>>
>> 1. We changed the metaslab size from 10M to 4k- that's a pretty
>> drastic change. Is there some median value that should be used instead
>> and/or is there a downs
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the
old hard disk and tried to import it with:
# zpool import -f Old_rpool
but the computer reboots. Why is that? On my old hard disk, I have 10-20 BEs,
starting with OpenSolaris 2009.06 and upgraded through b134 up to snv_15
Maybe do:
zpool import -R /a rpool
On 5/17/2011 1:56 PM, Orvar Korvar wrote:
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the
old hard disk and tried to import it with:
# zpool import -f Old_rpool
but the computer reboots. Why is that? On my old hard disk, I h
> So if you bump this to 32k then the fragmented size
> is 512k which tells ZFS to switch to a different metaslab
> once it drops below this threshold.
Makes sense after some more reading today ;)
What happens if no metaslab has a block this large (or small)
on a sufficiently full and fragmente
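If the 32k value mentioned above turns out to be the right trade-off, it can be made persistent in /etc/system instead of being re-applied with mdb after every boot (a minimal sketch, assuming the tunable keeps this name on the build in question):
set zfs:metaslab_min_alloc_size = 0x8000    (32K)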
I posted this to the forums a little while ago but I believe the list
was split at the time:
Does anyone have any recommendations for changing the ZFS volblocksize
when creating zfs volumes to serve as VMFS backing stores?
I've seen several people recommend that the volblocksize be set to 64k
in
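For what it's worth, volblocksize can only be chosen at creation time, so testing different values means creating separate zvols; a sketch with the 64k value mentioned (the pool and zvol names here are made up):
# zfs create -V 500G -o volblocksize=64k tank/vmfs-lun0
# zfs get volblocksize tank/vmfs-lun0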
On Tue, May 17, 2011 at 11:48 AM, Jim Klimov wrote:
>> So if you bump this to 32k then the fragmented size
>> is 512k which tells ZFS to switch to a different metaslab
>> once it drops below this threshold.
>
> Makes sense after some more reading today ;)
>
> What happens if no metaslab has a bloc
On Tue, May 17, 2011 at 11:10 AM, Hung-Sheng Tsao (Lao Tsao) Ph.D. wrote:
>
> Maybe do:
> zpool import -R /a rpool
'zpool import -N' may work as well.
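For example, combining the two suggestions (-R keeps the old root pool's mountpoints away from the running system, -N skips mounting the datasets entirely; the pool name is the one from the original post):
# zpool import -f -N -R /a Old_rpool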
-B
--
Brandon High : bh...@freaks.com
On Tue, May 17, 2011 at 6:38 PM, Brandon High wrote:
> On Tue, May 17, 2011 at 11:10 AM, Hung-Sheng Tsao (Lao Tsao) Ph.D. wrote:
>>
>> Maybe do:
>> zpool import -R /a rpool
>
> 'zpool import -N' may work as well.
It looks like a crash dump is in order. The system shouldn't panic
just because i
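A rough sketch of collecting and examining one, assuming dumps are enabled and this is the first saved dump (builds that write a compressed vmdump.0 need a "savecore -f vmdump.0" first to expand it):
# dumpadm                     (confirm the dump device and savecore directory)
# cd /var/crash/<hostname>
# mdb unix.0 vmcore.0
  ::status                    (panic message and dump summary)
  ::stack                     (stack of the panicking thread)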