On Sat, Apr 20, 2013 at 8:48 PM, Carl Brewer wrote:
>
> Like this :
>
> root@hostie:~# zdb | egrep 'ashift| name'
> name: 'rpool'
> ashift: 12
>
>
>
> And as I understand it, the 12 means 4k blocks, good, right? :)
Yep, those are the good kind. You're future-proof for the foreseeable future.
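For reference, ashift is the base-2 logarithm of the vdev's sector size, so 9 means 512-byte sectors and 12 means 4096-byte sectors. A quick sanity check, reusing the same zdb command from above:

    # ashift is log2 of the sector size: 2^9 = 512 bytes, 2^12 = 4096 bytes
    zdb | egrep 'ashift| name'
    echo $((2 ** 12))        # prints 4096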
On Tue, Apr 16, 2013 at 7:53 PM, Carl Brewer wrote:
>
> 2 x 2TB HDDs for rpool (ZFS mirror)
> 4 x 2TB HDDs to get at least a 4TB mirror (or is RAID-Z a better option?)
>
> Would I be better off with some 500GB HDDs for the rpool? And while I
> fiddle with this thing, is there any way to get th
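For comparison, here is a rough sketch of the two layouts being weighed for the 4 x 2TB data disks; pool and device names are made up, and you would of course pick only one of the two:

    # Two mirrored pairs: ~4TB usable, one disk per pair may fail
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

    # Single-parity RAID-Z: ~6TB usable, any one disk may fail
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0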
On Wed, Apr 17, 2013 at 5:38 AM, Edward Ned Harvey (openindiana) <
openindi...@nedharvey.com> wrote:
> > From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
> >
> > Raid-Z indeed does stripe data across all
> > leaf vdevs (minus parity) and does so by splitting the logical block up
> > into equally-sized pieces
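As a rough worked example of that splitting (the record size and disk count are only illustrative):

    # A 128 KiB logical block on a 6-disk raidz2: 6 disks - 2 parity = 4 data columns
    echo $((131072 / 4))     # 32768 bytes (32 KiB) of data per data disk, plus parity on the other two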
On Wed, Apr 17, 2013 at 11:21 AM, Jim Klimov wrote:
> On 2013-04-17 20:09, Jay Heyl wrote:
>
>> reply. Unless the first device to answer returns garbage (something
>>> that doesn't match the expected checksum), other copies are not read
>>> as part of this request.
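If a copy does come back wrong, the mismatch shows up in the CKSUM column of zpool status and ZFS rewrites the bad copy from a good one. A hedged sketch, with a hypothetical pool name:

    # CKSUM counts how often a device returned data that failed its checksum
    zpool status -v tank
    # A scrub forces every copy of every block to be read and verified
    zpool scrub tank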
On Tue, Apr 16, 2013 at 5:49 PM, Jim Klimov wrote:
> On 2013-04-17 02:10, Jay Heyl wrote:
>
>> Not to get into bickering about semantics, but I asked, "Or am I wrong
>> about reads being issued in parallel to all the mirrors in the array?", to
>> which you replied
On Tue, Apr 16, 2013 at 4:01 PM, Jim Klimov wrote:
> On 2013-04-16 23:56, Jay Heyl wrote:
>
>> result in more devices being hit for both read and write. Or am I wrong
>> about reads being issued in parallel to all the mirrors in the array?
>>
>
> Yes, in normal cases
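One way to watch how reads are actually spread across the sides of a mirror (pool name hypothetical) is per-device iostat:

    # Per-vdev/per-device I/O, refreshed every 5 seconds; compare the read
    # operation counts on the two halves of each mirror
    zpool iostat -v tank 5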
On Tue, Apr 16, 2013 at 2:25 PM, Timothy Coalson wrote:
> On Tue, Apr 16, 2013 at 3:48 PM, Jay Heyl wrote:
>
> > My question about the rationale behind the suggestion of mirrored SSD
> > arrays was really meant to be more in relation to the question from the
> OP.
On Tue, Apr 16, 2013 at 11:54 AM, Jim Klimov wrote:
> On 2013-04-16 20:30, Jay Heyl wrote:
>
>> What would be the logic behind mirrored SSD arrays? With spinning platters
>> the mirrors improve performance by allowing the fastest of the mirrors to
>> respond to a particular request.
On Mon, Apr 15, 2013 at 5:00 AM, Edward Ned Harvey (openindiana) <
openindi...@nedharvey.com> wrote:
>
> So I'm just assuming you're going to build a pool out of SSD's, mirrored,
> perhaps even 3-way mirrors. No cache/log devices. All the ram you can fit
> into the system.
What would be the logic behind mirrored SSD arrays?
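For concreteness, a pool along the lines Edward describes might be created roughly like this (pool and device names hypothetical):

    # Three-way mirrored SSD pool, no separate log or cache devices
    zpool create fastpool mirror c3t0d0 c3t1d0 c3t2d0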
On Fri, Sep 14, 2012 at 12:07 AM, Neddy, NH. Nam wrote:
>
> stick with it more. But I'm unsure about my real case: my storage
> server will be working as file-level storage more than block-level
> storage. Does that slow down ZFS performance?
I don't claim to be an expert on this. I'm just a g
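The file-level versus block-level distinction maps onto ordinary ZFS filesystems versus zvols. A minimal sketch, with hypothetical pool and dataset names:

    # File-level storage: an ordinary ZFS filesystem, shared over NFS
    zfs create tank/files
    zfs set sharenfs=on tank/files

    # Block-level storage: a 100 GB zvol, typically exported over iSCSI
    zfs create -V 100g tank/lun0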
On Mon, Jun 25, 2012 at 5:55 PM, Vishwas Durai wrote:
> Hello all,
> I'm wondering what options are available for the root filesystem in OI? By
> default, the install uses ZFS and creates an rpool. But if I'm a ZFS hacker
> and have made some changes to some core structures, how does one go about
> debugging
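One low-risk way to exercise modified ZFS code without endangering the root pool is a throwaway pool built on plain files; a minimal sketch (file paths and pool name are made up):

    # Build a scratch pool on files so experiments never touch rpool
    mkfile 256m /var/tmp/vdev0 /var/tmp/vdev1
    zpool create scratch mirror /var/tmp/vdev0 /var/tmp/vdev1
    # ...poke at it with zdb, run tests, then throw it away
    zpool destroy scratch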
I had a bit of a hiccup last week with my zfs pool. It's a ten-drive raidz2
vdev. All the drives are Samsung F4s, though of two slightly different
models. Two of the drives showed up one morning as "degraded". In somewhat
of a panic I rushed out and bought a couple Seagate drives as replacements.
W
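The usual replacement sequence, one drive at a time so the raidz2 never gives up more redundancy than necessary, looks roughly like this (pool and device names hypothetical):

    # See which devices the pool considers degraded
    zpool status -v tank
    # Swap in a replacement for one failing drive, let the resilver finish,
    # then repeat for the second
    zpool replace tank c0t4d0 c0t10d0
    zpool status tank        # shows resilver progress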
I have a file that shows as corrupted in the live file system and two
snapshots. The file gives every indication of being valid in earlier
snapshots. I've tried to restore it from the good snapshot but it doesn't
seem to want to take. After several failed attempts to copy directly from
the snapshot
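For anyone in a similar spot, the usual approach is to copy the good version out of the snapshot's hidden .zfs directory and then scrub; the paths and names below are hypothetical:

    # Pull the known-good copy straight out of the snapshot directory
    cp /tank/data/.zfs/snapshot/daily-2012-05-20/docs/report.odt \
       /tank/data/docs/report.odt
    # Once the bad copy is replaced or deleted, a scrub (sometimes two)
    # clears the permanent-error report
    zpool scrub tank
    zpool status -v tank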
On Wed, May 30, 2012 at 7:14 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
>
> I have been using USB drives in a mirror configuration for quite a few
> years with zfs. No problems have been encountered due to using zfs. The
> main thing I learned is to always export the pool before unplugging the drives.
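In practice that export/import cycle is just (pool name hypothetical):

    # Cleanly detach the USB pool before pulling the cables...
    zpool export usbpool
    # ...and bring it back once the drives are reattached
    zpool import usbpool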
A while ago I put together a server using OI and zfs that I was hoping
would provide me room to grow for well into the future. While I'm not yet
running out of room, the usage chart shows the future approaching
considerably faster than I had hoped. The primary storage pool is composed
of ten drives
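For tracking that growth, the usual starting points are the pool-level and dataset-level space summaries (pool name hypothetical):

    # Pool-level capacity summary
    zpool list tank
    # Per-dataset breakdown of where the space goes, including snapshots
    zfs list -r -o space tank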