I have a ZFS/Xen server for my home network. The box itself has two
physical NICs. I want Dom0 to be on my "management" network and the
guest domains to be on the "dmz" and "private" networks. The "private"
network is where all my home computers are, and I would like to export
iSCSI volumes di
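A minimal sketch of the iSCSI-export piece, assuming the OpenSolaris-era
shareiscsi property (pre-COMSTAR) and made-up pool/volume names:
# zfs create -V 100G tank/private-vol       (create a zvol to export)
# zfs set shareiscsi=on tank/private-vol    (publish it as an iSCSI target)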
Over the course of multiple OpenSolaris installs, I first created a
pool called "tank" and then, later and reusing some of the same
drives, I created another pool called "tank". I can `zpool export tank`,
but when I `zpool import tank`, I get:
bash-3.2# zpool import tank
cannot import 'tan
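In case it helps others hitting the same name collision, a hedged sketch
of the usual approach (the numeric id below is made up): running
`zpool import` with no arguments lists every importable pool along with
its numeric id, and you can then import the one you want by id,
optionally giving it a new name:
# zpool import                              (lists importable pools and their ids)
# zpool import 1234567890123456789 tank2    (made-up id; imports that pool as "tank2")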
For the archive, I swapped the mobo and all is good now... (I copied
100GB into the pool without a crash)
One problem I had was that Solaris would hang during boot - even
when all the aoc-sat2-mv8 cards were pulled out. It turns out that
switching the BIOS field "USB 2.0 Controller Mode" f
Thanks for the note Anton. I let memtest86 run overnight and it found
no issues. I've also now moved the cards around and have confirmed that
slot #3 on the mobo is bad (all my aoc-sat2-mv8 cards, cables, and
backplanes are OK).
However, I think it's more than just slot #3 that has a fault b
Thanks Richard and Al,
I'll refrain from expressing how disturbing this is, as I'm trying to help
the Internet be kid-safe ;)
As for the PSU, I'd be very surprised if that were it, as it is a
3+1 redundant PSU that came with this system, built by a reputable
integrator. Also, the PSU is
Below I create zpools isolating one card at a time
- when just card #1 - it works
- when just card #2 - it fails
- when just card #3 - it works
And then again using the two cards that seem to work:
- when cards #1 and #3 - it fails
So, at first I thought I narrowed it down to a card, but my
On a lark, I decided to create a new pool not including any devices
connected to card #3 (i.e. "c5")
It crashes again, but this time with a slightly different dump (see below)
- actually, there are two dumps below, the first is using the xVM
kernel and the second is not
Any ideas?
Kent
[
Hey all,
I'm not sure if this is a ZFS bug or a hardware issue I'm having - any
pointers would be great!
The contents below include:
- high-level info about my system
- my first thought to debugging this
- stack trace
- format output
- zpool status output
- dmesg output
High-Lev
Eric Schrock wrote:
> Or just let ZFS work its magic ;-)
>
Oh, I didn't realize that `zpool create` could be fed vdevs that didn't
exist in /dev/dsk/ - and, as a bonus, it also creates the /dev/dsk/ links!
# zpool create -f tank raidz2 c3t0d0 c3t4d0 c4t0d0 c4t4d0 c5t0d0 c5t4d0
# ls -l /dev
Kent Watsen wrote:
> So, I picked up an AOC-SAT2-MV8 off eBay for not too much and then I got
> a 4xSATA to one SFF-8087 cable to connect it to one of my six
> backplanes. But, as fortune would have it, the cable I bought has SATA
> connectors that are physically too big to p
Paul Jochum wrote:
>What the lsiutil does for me is clear the persistent mapping for
> all of the drives on a card.
Since James confirms that I'm doomed to ad hoc methods for tracking
device-ids to bays, I'm interested in knowing if somehow your ability to
clear the persistent mapping for
Eric Schrock wrote:
> For x86 systems, you can use ipmitool to manipulate the led state
> (ipmitool sunoem led ...). On older galaxy systems, you can only set the
> fail LED ('io.hdd0.led'), as the ok2rm LED is not physically connected
> to anything. On newer systems, you can set both the 'fail'
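A hedged sketch of what that invocation might look like - the exact
argument form and state names vary by platform and firmware revision,
so treat this as an assumption rather than verified syntax:
# ipmitool sunoem led get all               (list LED targets and their current state)
# ipmitool sunoem led set io.hdd0.led ON    (assumed form: light the fail LED for bay 0)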
Kent Watsen wrote:
> Given that manually tracking shifting ids doesn't sound appealing to
> me, would using a SATA controller like the AOC-SAT2-MV8 resolve the
> issue? Given that I currently only have one LSI HBA - I'd need to get 2
> more for all 24 drives ---or---
Hi Paul,
Already in my LSI Configuration Utility I have an option to clear the
persistent mapping for drives not present, but then the card resumes its
normal persistent-mapping logic. What I really want is to disable the
persistent mapping logic completely - is `lsiutil` doing that for yo
iver mpxio support with SAS. I have a bit
> of knowledge about your issue :-)
>
> Kent Watsen wrote:
>> Based on recommendations from this list, I asked the company that
>> built my box to use an LSI SAS3081E controller.
>>
>> The first problem I noticed was that t
Based on recommendations from this list, I asked the company that built
my box to use an LSI SAS3081E controller.
The first problem I noticed was that the drive-numbers were ordered
incorrectly. That is, given that my system has 24 bays (6 rows, 4
bays/row), the drive numbers from top-to-bott
Christopher wrote:
> Kent - I see your point and it's a good one, but for me, I only want a
> big fileserver with redundancy for my music collection, movie collection and
> pictures etc. I would of course make a backup of the most important data as
> well from time to time.
>
Chris,
We ha
tting the 8+1 mttdl)*
4. increases performance (adding disks to a raidz set has no impact)
5. increases space more slowly (the only negative - can you live with
it?)
Sorry!
Kent
Kent Watsen wrote:
I think I have managed to confuse myself so I am asking outright hoping for a straight a
I think I have managed to confuse myself so I am asking outright hoping for a straight answer.
Straight answer:
ZFS does not (yet) support adding a disk to an existing raidz set -
the only way to expand an existing pool is by adding a stripe.
Stripes can either be mirror, raid5, o
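As a concrete sketch of what "adding a stripe" looks like (device names
are hypothetical), you add another top-level raidz/raidz2 vdev to the
existing pool:
# zpool add tank raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0
# zpool status tank        (now shows two raidz2 vdevs striped in one pool)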
How does one access the PSARC database to look up the description of
these features?
Sorry if this has been asked before! - I tried Google before posting
this :-[
Kent
George Wilson wrote:
> ZFS Fans,
>
> Here's a list of features that we are proposing for Solaris 10u5. Keep
> in mind that
Probably not - my box has 10 drives and two very thirsty FX74 processors
and it draws 450W max.
At 1500W, I'd be more concerned about power bills and cooling than the UPS!
Yeah - good point, but I need my TV! - or so I tell my wife so I can
play with all this gear :-X
Cheers,
Ken
>> - can have 6 (2+2) w/ 0 spares providing 6000 GB with MTTDL of
>> 28911.68 years
>>
>
> This should, of course, set off one's common-sense alert.
>
So true - I pointed the same thing out on this list a while back [sorry,
can't find the link] where it was beyond my lifetime and folks
>> I know what you are saying, but I wonder if it would be noticeable? I
>
> Well, "noticeable" again comes back to your workflow. As you point out
> to Richard, it's (theoretically) 2x IOPS difference, which can be very
> significant for some people.
Yeah, but my point is if it would be not
David Edmondson wrote:
>> One option I'm still holding on to is to also use the ZFS system as a
>> Xen-server - that is OpenSolaris would be running in Dom0... Given that
>> the Xen hypervisor has a pretty small cpu/memory footprint, do you think
>> it could share 2-cores + 4Gb with ZFS or should
[CC-ing xen-discuss regarding question below]
>>
>> Probably a 64 bit dual core with 4GB of (ECC) RAM would be a good
>> starting point.
>
> Agreed.
So I was completely out of the ball-park - I hope the ZFS Wiki can be
updated to contain some sensible hardware-sizing information...
One option
>
> Sorry, but looking again at the RMP page, I see that the chassis I
> recommended is actually different than the one we have. I can't find
> this chassis only online, but here's what we bought:
>
> http://www.siliconmechanics.com/i10561/intel-storage-server.php?cat=625
That is such a cool lo
> Nit: small, random read I/O may suffer. Large random read or any random
> write workloads should be ok.
Given that video-serving is all sequential-read, is it correct that
raidz2, specifically 4(4+2), would be just fine?
> For 24 data disks there are enough combinations that it is not e
Hey Adam,
>> My first posting contained my use-cases, but I'd say that video
>> recording/serving will dominate the disk utilization - thats why I'm
>> pushing for 4 striped sets of RAIDZ2 - I think that it would be all
>> around goodness
>
> It sounds good, that way, but (in theory), you'll s
> Fun exercise! :)
>
Indeed! - though my wife and kids don't seem to appreciate it so much ;)
>> I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for
>> the OS & 4*(4+2) RAIDZ2 for SAN]
>
> What are you *most* interested in for this server? Reliability?
> Capacity? High Perform
>
> I will only comment on the chassis, as this is made by AIC (short for
> American Industrial Computer), and I have three of these in service at
> my work. These chassis are quite well made, but I have experienced
> the following two problems:
>
>
Oh my, thanks for the heads-up! Charlie at
Hi all,
I'm putting together an OpenSolaris ZFS-based system and need help
picking hardware.
I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the
OS & 4*(4+2) RAIDZ2 for SAN]
http://rackmountpro.com/productpage.php?prodid=2418
Regarding the mobo, cpus, and memory - I se
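For reference, a sketch of how the proposed 4*(4+2) RAIDZ2 layout would
be created as a single pool (controller/target numbers are made up):
# zpool create tank \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 \
    raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0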
>> But to understand how to best utilize an array with a fixed number of
>> drives, I add the following constraints:
>> - N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2}
>> - all sets in an array should be configured similarly
>> - the MTTDL for S sets is equal to (MTTDL f
All,
When I reformatted to HTML, I forgot to fix the code also - here is the
correct code:
#include <stdio.h>
#include <math.h>  /* header name lost in the HTML conversion; math.h assumed */
#define NUM_BAYS 24
#define DRIVE_SIZE_GB 300
#define MTBF_YEARS 4
#define MTTR_HOURS_NO_SPARE 16
#define MTTR_HOURS_SPARE 4
int main() {
printf("\n");
printf("%u bays w/ %u
Resent as HTML to avoid line-wrapping:
Richard's blog analyzes MTTDL as a function of N+P+S:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
But to understand how to best utilize an array with a fixed number of
drives, I add the following constraints:
- N+P should fol
Richard's blog analyzes MTTDL as a function of N+P+S:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
But to understand how to best utilize an array with a fixed number of
drives, I add the following constraints:
- N+P should follow ZFS best-practice rule of N={2,4,8
John-Paul Drawneek wrote:
> Your data gets striped across the two sets so what you get is a raidz stripe
> giving you the 2x faster.
>
> tank
> ---raidz
> --devices
> ---raidz
> --devices
>
> sorry for the diagram.
>
> So you got your zpool tank with raidz stripe.
Thanks - I think you all
Rob Logan wrote:
>
> > which is better 8+2 or 8+1+spare?
>
> 8+2 is safer for the same speed
> 8+2 requires a little more math, so it's slower in theory. (unlikely seen)
> (4+1)*2 is 2x faster, and in theory is less likely to have wasted space
> in transaction group (unlikely seen)
I keep re
> Another reason to recommend spares is when you have multiple top-level
> vdevs
> and want to amortize the spare cost over multiple sets. For example, if
> you have 19 disks then 2x 8+1 raidz + spare amortizes the cost of the
> spare
> across two raidz sets.
> -- richard
Interesting - I hadn
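A concrete sketch of the 19-disk layout Richard describes above (two
8+1 raidz sets sharing one hot spare; device names are hypothetical):
# zpool create tank \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 \
    raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 \
    spare c4t0d0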
> Don't confuse vdevs with pools. If you add two 4+1 vdevs to a single pool it
> still appears to be "one place to put things". ;)
>
Newbie oversight - thanks!
Kent
> (4+1)*2 is 2x faster, and in theory is less likely to have wasted space
> in transaction group (unlikely seen)
> (4+1)*2 is cheaper to upgrade in place because of its fewer elements
I'm aware of these benefits, but I feel that having one large LUN is
easier to manage - in that I can allo
> I think that the 3<=num-disks<=9 rule only applies to RAIDZ and it was
> changed to 4<=num-disks<=10 for RAIDZ2, but I might be remembering wrong.
>
Can anybody confirm that the 3<=num-disks<=9 rule only applies to RAIDZ
and that 4<=num-disks<=10 applies to RAIDZ2?
Thanks,
Kent
Hi all,
I'm new here and to ZFS but I've been lurking for quite some time... My
question is simple: which is better 8+2 or 8+1+spare? Both follow the
(N+P) N={2,4,8} P={1,2} rule, but 8+2 results in a total of 10 disks,
which is one disk more than the 3<=num-disks<=9 rule allows. But 8+2 has much
b