see, originally when i read about zfs it said it could expand to petabytes or
something. but really, that's not as a single "filesystem" ? that could only be
accomplished through combinations of pools?
i don't really want to have to even think about managing two separate
"partitions" - i'd like
On 08/21/08 17:45, Jürgen Keil wrote:
> The bug happens with unmounted filesystems, so you
> need to mount them first, then umount.
thanks, now all results are fine!!!
# zpool status -v
pool: rpool
state: ONLINE
scrub: scrub completed after 0h29m with 0 errors on Fri Aug 22 09:11:30 2008
c
> see, originally when i read about zfs it said it could expand to petabytes or
> something. but really, that's not as a single "filesystem" ? that could only
> be accomplished through combinations of pools?
>
> i don't really want to have to even think about managing two separate
> "partitions"
On Fri, Aug 22, 2008 at 8:11 AM, mike <[EMAIL PROTECTED]> wrote:
> see, originally when i read about zfs it said it could expand to petabytes or
> something. but really, that's not as a single "filesystem" ? that could only
> be accomplished through combinations of pools?
>
> i don't really want
Hello mike,
Friday, August 22, 2008, 8:11:36 AM, you wrote:
m> see, originally when i read about zfs it said it could expand to
m> petabytes or something. but really, that's not as a single
m> "filesystem" ? that could only be accomplished through combinations of pools?
m> i don't really want to
likewise i could also do something like
zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
raidz1 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
and i'd have a 7 disk raidz1 and an 8 disk raidz1... and i'd have 15 disks
still broken up into not-too-horrible pool sizes an
mike wrote:
> likewise i could also do something like
>
> zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
> raidz1 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
>
> and i'd have a 7 disk raidz1 and an 8 disk raidz1... and i'd have 15 disks
> still broken up into not-
Hi,
I have a zfs-pool (unfortunately not set up according to the Best
Practices Guide) that somehow got corrupted after a spontaneous server
reboot. On Solaris 10u4 the machine simply panics when I try to import
the pool. So what I've done is taken a dd-image of the whole LUN so
that I have somethi
Hi Erik,
Erik Gulliksson wrote:
> Hi,
>
> I have a zfs-pool (unfortunately not set up according to the Best
> Practices Guide) that somehow got corrupted after a spontaneous server
> reboot. On Solaris 10u4 the machine simply panics when I try to import
> the pool.
Panic stack would be useful.
>
I've got an Intel DP35DP motherboard, Q6600 proc (Intel 2.4G, 4 core), 4GB of
RAM and a couple of SATA disks, running ICH9. S10U5, patched about a week ago or so...
I have a zpool on a single slice (haven't added a mirror yet, was getting to
that) and have started to suffer regular hard resets a
Hey Mike,
First of all, I'd strongly suggest going for raidz2 instead of raidz. Dual
parity protection is something I'd strongly recommend over single parity
protection.
You also don't mention your boot pool. You can't boot from a raid pool, so you
need to put one disk aside for booting fr
Ben Taylor wrote:
> I've got an Intel DP35DP motherboard, Q6600 proc (Intel 2.4G, 4 core), 4GB of
> RAM and a couple of SATA disks, running ICH9. S10U5, patched about a week ago or so...
>
> I have a zpool on a single slice (haven't added a mirror yet, was getting to
> that) and have
> started
Hi Victor,
Thanks for the prompt reply. Here are the results from your suggestions.
> Panic stack would be useful.
I'm sorry I don't have this available and I don't want to cause another panic :)
>
> It is apparently blocked somewhere in the kernel. Try to do something like this
> from another windo
On Thu, 21 Aug 2008, John wrote:
> I'm setting up a ZFS fileserver using a bunch of spare drives. I'd
> like some redundancy and to maximize disk usage, so my plan was to
> use raid-z. The problem is that the drives are considerably
> mismatched and I haven't found documentation (though I don't
Erik,
could you please provide a little bit more detail?
Erik Gulliksson wrote:
> Hi,
>
> I have a zfs-pool (unfortunately not set up according to the Best
> Practices Guide) that somehow got corrupted after a spontaneous server
> reboot.
Was it totally spontaneous? What was the uptime before p
Hi,
John wrote:
> I'm setting up a ZFS fileserver using a bunch of spare drives. I'd like some
> redundancy and to maximize disk usage, so my plan was to use raid-z. The
> problem is that the drives are considerably mismatched and I haven't found
> documentation (though I don't see why it shoul
Erik Gulliksson wrote:
> Hi Victor,
>
> Thanks for the prompt reply. Here are the results from your suggestions.
>
>> Panic stack would be useful.
> I'm sorry I don't have this available and I don't want to cause another panic
> :)
It should be saved in system messages on your Solaris 10 machin
Victor,
> Was it totally spontaneous? What was the uptime before panic? System
> messages on your Solaris 10 machine may have some clues.
I actually don't know exactly what happened (this was during my
vacation). Monitoring graphs show that load was very high on this
particular server that day.
Victor,
> Well, since we are talking about ZFS, any threads somewhere in the ZFS module
> are interesting, and there should not be too many of them. Though in this case
> it is clear - it is trying to update the config object and waiting for the
> update to sync. There should be another thread with stack simi
14+2 or 7+1
On 8/22/08, Miles Nordin <[EMAIL PROTECTED]> wrote:
>> "m" == mike <[EMAIL PROTECTED]> writes:
>
> m> can you combine two zpools together?
>
> no. You can have many vdevs in one pool. for example you can have a
> mirror vdev and a raidz2 vdev in the same pool. You can al
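To illustrate what Miles is describing, a single pool can be built from dissimilar vdevs in one command (the disk names below are placeholders, and zpool will ask for -f because the replication levels differ):
zpool create tank mirror c0t0d0 c0t1d0 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0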
I hear everyone's concerns about multiple parity disks.
Are there any benchmarks or numbers showing the performance difference using a
15 disk raidz2 zpool? I am fine sacrificing some performance but obviously
don't want to make the machine crawl.
It sounds like I could go with 15 disks evenly
> Are there any benchmarks or numbers showing the performance difference using
> a 15 disk raidz2 zpool? I am fine sacrificing some performance but obviously
> don't want to make the machine crawl.
>
> It sounds like I could go with 15 disks evenly and have to sacrifice 3, but I
> would have 1 p
Oh sorry - for boot I don't care if it's redundant or anything.
Worst case the drive fails, I replace it and reinstall, and just re-mount the
ZFS stuff.
If I have the space in the case and the ports I could get a pair of 80 gig
drives or something and mirror them using SVM (which was recommende
mike wrote:
> And terminology-wise, one or more zpools create zdevs right?
No, that isn't correct.
One or more vdevs create a pool. Each vdev in a pool can be a different
type, e.g. a mix of mirror, raidz, and raidz2.
There is no such thing as zdev.
--
Darren J Moffat
Hi Ben,
I'm having exactly the same error, for months now. In my case the problem also
started soon after the update to 10U5. I have a SATA mirror pool on ICH6 and
also share it over NFS.
Do you see checksum errors in zpool status -xv?
Unfortunately, I haven't found any solution yet.
Regards,
Rustam.
Ben Taylo
> No, that isn't correct.
> One or more vdevs create a pool. Each vdev in a pool can be a
> different type, e.g. a mix of mirror, raidz, and raidz2.
> There is no such thing as zdev.
Sorry :)
Okay, so you can create a zpool from multiple vdevs. But you cannot
add more vdevs to a zpool once the zpool
mike wrote:
> And terminology-wise, one or more zpools create zdevs right?
>
Let's get the terminology right first.
You can have more than one zPool.
Each zPool can have many filesystems which all share *ALL* the space in
the pool.
Each zPool can get its space from one or more vDevs.
(Yes you c
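As a quick sketch of that (pool and filesystem names here are made up, reusing "tank" from the earlier examples), you just keep creating filesystems and they all draw on the same free pool space:
zfs create tank/photos
zfs create tank/music
zfs list -r tank
With no quotas or reservations set, the AVAIL column shows the same figure for every filesystem in the pool.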
mike wrote:
>> No that isn't correct.
>
>> One or more vdevs create a pool. Each vdev in a pool can be a
>> different type, e.g. a mix of mirror, raidz, and raidz2.
>
>> There is no such thing as zdev.
>
> Sorry :)
>
> Okay, so you can create a zpool from multiple vdevs. But you cannot
> add more vd
mike wrote:
>
>
> Sorry :)
>
> Okay, so you can create a zpool from multiple vdevs. But you cannot
> add more vdevs to a zpool once the zpool is created. Is that right?
Nope. That's exactly what you *CAN* do.
So say today you only really need 6TB usable, you could go buy 8 of your
1TB disks,
and
On 8/22/08, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> I could if I wanted to add another vdev to this pool, but it doesn't
> have to be raidz; it could be raidz2 or mirror.
> If they did they are wrong, hope the above clarifies.
I get it now. If you add more disks they have to be in their own
m
On Thu, 2008-08-21 at 21:15 -0700, mike wrote:
> I've seen 5-6 disk zpools are the most recommended setup.
This is incorrect.
Much larger zpools built out of striped redundant vdevs (mirror, raidz1,
raidz2) are recommended and also work well.
raidz1 or raidz2 vdevs of more than a single-digit nu
On Fri, 22 Aug 2008, mike wrote:
> Oh sorry - for boot I don't care if it's redundant or anything.
8-O
> Worst case the drive fails, I replace it and reinstall, and just re-mount the
> ZFS stuff.
If you use a ZFS mirrored root, you just replace a drive when it
fails. None of this reinstall no
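For what it's worth, the replacement itself is a one-liner plus reinstalling the boot blocks; the device names below are only examples, and on x86 I'd expect the installgrub step (check the docs for your platform):
zpool replace rpool c1t0d0s0 c1t2d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0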
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Another note, as someone said earlier: if you can go to 16 drives, you
> should consider 2 8-disk RAIDZ2 vDevs over 2 7-disk RAIDZ vDevs with a spare,
> or (I would think) even a 14-disk RAIDZ2 vDev with a spare.
>
> If you can (now or later) ge
It looks like this will be the way I do it:
initially:
zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7
when I need more space and buy 8 more disks:
zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15
Correct?
> Enable compression, and set up
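For reference, enabling compression pool-wide would be something like this, using mike's pool name so that child filesystems inherit the setting:
zfs set compression=on mypool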
On 8/22/08, Rich Teer <[EMAIL PROTECTED]> wrote:
> ZFS boot works fine; it was only recently integrated into Nevada, but it
> has been in use for quite some time now.
Yeah I got the install option when I installed snv_94 but wound up not
having enough disks to use it.
> Even better: just use ZFS roo
About the best I can see:
zpool create dirtypool raidz 250a 250b 320a raidz 320b 400a 400b raidz 500a
500b 750a
And you have to do them in that order; each raidz set is sized by its smallest
device. This gets you about 2140GB of space (2x250 + 2x320 + 2x500 = 500 + 640 + 1000).
Your desired method is only 2880GB (720 *
On Fri, Aug 22, 2008 at 1:08 PM, mike <[EMAIL PROTECTED]> wrote:
> It looks like this will be the way I do it:
>
> initially:
> zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7
>
> when I need more space and buy 8 more disks:
> zpool add mypool raidz2 disk8 disk9 disk10 d
mike wrote:
> Or do smaller groupings of raidz1's (like 3 disks) so I can remove
> them and put 1.5TB disks in when they come out for instance?
>
I wouldn't reduce it to 3 disks (you should almost just mirror if you go that low).
Remember, while you can't take a drive out of a vDev, or a vDev out of a
While on the subject, in a home scenario where one actually notices
the electric bill personally, is it more economical to purchase a big
expensive 1TB disk and save on electricity running it for five years, or to
purchase two cheap 1/2 TB disks and spend double on electricity for them
for 5 years? Has any
mike wrote:
> On 8/22/08, Rich Teer <[EMAIL PROTECTED]> wrote:
>
>
>> ZFS boot works fine; it was only recently integrated into Nevada, but it
>> has been in use for quite some time now.
>>
>
> Yeah I got the install option when I installed snv_94 but wound up not
> having enough disks to use i
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> You only need 1 disk to use ZFS root. You won't have any redundancy, but as
> Darren said in another email, you can convert single device vDevs to
> Mirror'd vDevs later without any hassle.
I'd just get some 80 gig disks and mirror them. Migh
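The conversion Kyle mentions is just an attach of a second device to the existing one (device names here are assumptions); it turns the single-disk vdev into a two-way mirror and kicks off a resilver:
zpool attach rpool c1t0d0s0 c1t1d0s0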
Chris Cosby wrote:
> About the best I can see:
>
> zpool create dirtypool raidz 250a 250b 320a raidz 320b 400a 400b raidz
> 500a 500b 750a
>
> And you have to do them in that order. The zpool will create using the
> smallest device. This gets you about 2140GB (500 + 640 + 1000) of
> space. Your
I'd like to experiment with storing the boot archive
on a compact flash that emulates an IDE hard disk,
but then have the root pool on a 4-disk raidz set.
(I'm using OpenSolaris)
Anyone have suggestions on how to go about this?
Will I need to set rootdev in /etc/system to tell the
kernel the root
On Fri, 22 Aug 2008, Jacob Ritorto wrote:
> While on the subject, in a home scenario where one actually notices
> the electric bill personally, is it more economical to purchase a big
> expensive 1tb disk and save on electric to run it for five years or to
> purchase two cheap 1/2 TB disk and spen
> "m" == mike <[EMAIL PROTECTED]> writes:
m> that could only be accomplished through combinations of pools?
m> i don't really want to have to even think about managing two
m> separate "partitions" - i'd like to group everything together
m> into one large 13tb instance
You
I noted this PSARC thread with interest:
Re: zpool autoexpand property [PSARC/2008/353 Self Review]
because it so happens that during a recent disk upgrade
on a laptop, I've migrated a zpool off of one partition
onto a slightly larger one, and I'd like to somehow tell
zfs to grow the zpool to fi
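Assuming the autoexpand work lands roughly as the PSARC case describes, I'd expect something along these lines (unverified on current builds, and the pool/slice names are placeholders):
zpool set autoexpand=on rpool
zpool online -e rpool c0t1d0s3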
Yes, that looks pretty good mike. There are a few limitations to that as you
add the 2nd raidz2 set, but nothing major. When you add the extra disks, your
original data will still be stored on the first set of disks; if you've any
free space left on those, you'll then get some data stored acros
On 8/22/08, Ross <[EMAIL PROTECTED]> wrote:
> Yes, that looks pretty good mike. There are a few limitations to that as you
> add the 2nd raidz2 set, but nothing major. When you add the extra disks,
> your original data will still be stored on the first set of disks; if you've
> any free space
Yup, you got it, and an 8 disk raid-z2 array should still fly for a home system
:D I'm guessing you're on gigabit there? I don't see you having any problems
hitting the bandwidth limit on it.
Ross
yeah i am on gigabit, but the clients are things like an xbox which is
only 10/100, etc. right now the setup works fine. i'm thinking the new
CIFS implementation should make it run even cleaner too.
On 8/22/08, Ross Smith <[EMAIL PROTECTED]> wrote:
> Yup, you got it, and an 8 disk raid-z2 array sh
On Fri, Aug 22, 2008 at 10:54:00AM -0700, Gordon Ross wrote:
> I noted this PSARC thread with interest:
> Re: zpool autoexpand property [PSARC/2008/353 Self Review]
> because it so happens that during a recent disk upgrade
> on a laptop, I've migrated a zpool off of one partition
> onto a slight
I want to upgrade the hardware of my OpenSolaris b95 server at
home. It's currently running on 32-bit Intel hardware. I'm going 64-bit
with the new hardware. I don't need server-grade hardware since
this is a home server. This means I'm not buying an Opteron or
Xeon, or any quad core process
Just my 2c: Is it possible to do an "offline" dedup, kind of like snapshotting?
What I mean in practice is: we make many Solaris full-root zones. They share a
lot of data as complete files. This makes it kind of easy to save space - make one
zone as a template, snapshot/clone its dataset, make new zo
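A minimal sketch of that template approach, with made-up dataset names:
zfs snapshot rpool/zones/template@gold
zfs clone rpool/zones/template@gold rpool/zones/zone01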
On Fri, Aug 22, 2008 at 1:30 PM, Joe S <[EMAIL PROTECTED]> wrote:
> I picked the 790GX of the 790 series because it has integrated video.
The 790GX is a high-clocked 780G, so look at that chipset as well. The
boards are slightly cheaper, too. If you're not overclocking or
running Crossfire, there'