> With regard to what I've experienced and read on the zfs-discuss list etc., I have
> the __feeling__ that we would have got into real trouble using Solaris
> (even the most recent one) on that system ...
> So if one asks me whether to run Solaris+ZFS on a production system, I
> usually say: definitely, b
> Let's not be too quick to assign blame, or to think that perfecting
> the behaviour is straightforward or even possible.
>
> Start introducing random $20 components and you begin to dilute the
> quality and predictability of the composite system's behaviour.
>
> But this NEVER happens on linux *g
On Aug 30, 2008, at 8:45 AM, George Wilson wrote:
> Krister Joas wrote:
>> Hello.
>> I have a machine at home on which I have SXCE B96 installed on a
>> root zpool mirror. It's been working great until yesterday. The
>> root pool is a mirror with two identical 160GB disks. The other
>> d
Miles Nordin wrote:
>> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>>
>
> re> if you use Ethernet switches in the interconnect, you need to
> re> disable STP on the ports used for interconnects or risk
> re> unnecessary cluster reconfigurations.
>
> RSTP/802.
Ok, I've managed to get around the kernel panic.
[EMAIL PROTECTED]:~/Download$ pfexec mdb -kw
Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp
scsi_vhci zfs sd ip hook neti sctp arp usba uhci s1394 fctl md lofs random sppp
ipc ptm fcip fcp cpc crypto logindmux ii nsctl sdb
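The session above is cut off, so what was actually typed is not shown. For readers hitting a similar panic on import, the recovery knobs most often cited on this list at the time were aok and zfs_recover; the sketch below is only that commonly posted workaround, not necessarily what was done here, and both settings should be reverted once the pool is repaired:

$ pfexec mdb -kw
> aok/W 1              # let certain assertion failures warn instead of panicking
> zfs_recover/W 1      # ask ZFS to attempt recovery on some otherwise-fatal errors
> $q

# equivalent /etc/system entries for the next boot:
set aok=1
set zfs:zfs_recover=1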
Krister Joas wrote:
> Hello.
>
> I have a machine at home on which I have SXCE B96 installed on a root
> zpool mirror. It's been working great until yesterday. The root pool
> is a mirror with two identical 160GB disks. The other day I added a
> third disk to the mirror, a 250 GB disk. S
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
re> if you use Ethernet switches in the interconnect, you need to
re> disable STP on the ports used for interconnects or risk
re> unnecessary cluster reconfigurations.
RSTP/802.1w plus setting the ports connected to Solaris as `
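On a Cisco-style switch that usually amounts to something like the following; the interface name is hypothetical, and other vendors call the same thing an "edge port":

! enable rapid spanning tree and mark the host-facing port as an edge port
spanning-tree mode rapid-pvst
interface GigabitEthernet0/1
 spanning-tree portfast

With the port treated as an edge port it goes straight to forwarding, so a link flap on the interconnect no longer triggers a topology change and the cluster heartbeats are not interrupted.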
[EMAIL PROTECTED] said:
> I took a snapshot of a directory in which I hold PDF files related to math.
> I then added a 50 MB PDF file from a CD (Oxford Math Reference; I strongly
> recommend this to any math enthusiast) and did "zfs list" to see the size of
> the snapshot (sheer curiosity). I don't
Hi.
I took a snapshot of a directory in which I hold PDF files related to math.
I then added a 50 MB PDF file from a CD (Oxford Math Reference; I strongly
recommend this to any math enthusiast) and did "zfs list" to see the size of
the snapshot (sheer curiosity). I don't have compression turned
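For anyone following along, the behaviour is easy to reproduce; the dataset and file names below are made up:

$ zfs snapshot tank/math@before           # snapshot taken before the copy
$ cp /cdrom/oxford_ref.pdf /tank/math/    # add the new 50 MB file
$ zfs list -t snapshot -o name,used,refer tank/math@before

The snapshot's USED column stays close to zero here, because a snapshot is only charged for blocks that existed at snapshot time and have since been freed or overwritten in the live filesystem; a newly added file is charged to the filesystem itself, not to the snapshot.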
On Fri, 29 Aug 2008, Shawn Ferry wrote:
On Aug 29, 2008, at 7:09 AM, Tomas Ögren wrote:
On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:
On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
This problem is becoming a real pain to us again and I was wondering
if there has b
On Fri, 29 Aug 2008, Miles Nordin wrote:
>
> I guess I'm changing my story slightly. I *would* want ZFS to collect
> drive performance statistics and report them to FMA, but I wouldn't
Your email *totally* blew my limited buffer size, but this little bit
remained for me to look at. It left me w
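Until something like that exists, the per-device latency and the error telemetry FMA already collects can at least be inspected by hand:

$ iostat -xn 5        # per-device service times and queue depths, 5-second intervals
$ fmstat              # activity counters for the FMA diagnosis modules
$ fmdump -eV | tail   # raw error events (ereports) already being logged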
> "es" == Eric Schrock <[EMAIL PROTECTED]> writes:
es> The main problem with exposing tunables like this is that they
es> have a direct correlation to service actions, and
es> mis-diagnosing failures costs everybody (admin, companies,
es> Sun, etc) lots of time and money. Once
On Fri, 29 Aug 2008, Kyle McDonald wrote:
>>
> What would one look for to decide which vdev to place each LUN in?
>
> All mine have the same Current Load Balance value: round robin.
That is a good question and I will have to remind myself of the
answer. The "round robin" is good because that means
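The value quoted above comes from mpathadm; something along these lines shows it per LUN (substitute a logical-unit name taken from the list output):

$ mpathadm list lu                 # enumerate the multipathed logical units
$ mpathadm show lu <logical-unit>  # per-LUN detail, including "Current Load Balance"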
Nicolas Williams wrote:
> On Thu, Aug 28, 2008 at 11:29:21AM -0500, Bob Friesenhahn wrote:
>
>> Which of these do you prefer?
>>
>>o System waits substantial time for devices to (possibly) recover in
>> order to ensure that subsequently written data has the least
>> chance of being
Bob Friesenhahn wrote:
> On Fri, 29 Aug 2008, Bob Friesenhahn wrote:
>
>> If you do use the two raidz2 vdevs, then if you pay attention to how
>> MPxIO works, you can balance the load across your two fiber channel
>> links for best performance. Each raidz2 vdev can be served (by
>> default) by
On Aug 29, 2008, at 7:09 AM, Tomas Ögren wrote:
> On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:
>
>> On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
>>
>>> This problem is becoming a real pain to us again and I was wondering
>>> if there has been in the past few month any
On Fri, 29 Aug 2008, Bob Friesenhahn wrote:
>
> If you do use the two raidz2 vdevs, then if you pay attention to how
> MPxIO works, you can balance the load across your two fiber channel
> links for best performance. Each raidz2 vdev can be served (by
> default) by a different FC link.
As a foll
On Fri, 29 Aug 2008, Kenny wrote:
>
> 1) I didn't do raidz2 because I didn't want to lose the space. Is
> this a bad idea?
Raidz2 is the most reliable vdev configuration other than
triple-mirror. The pool is only as strong as its weakest vdev. In
private email I suggested using all 12 drives
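For concreteness, laying 12 drives out as two raidz2 vdevs looks roughly like this (controller and target names are invented):

# two 6-disk raidz2 top-level vdevs in one pool
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

Each vdev then survives two drive failures, and writes are striped across both vdevs.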
On Thu, Aug 28, 2008 at 01:05:54PM -0700, Eric Schrock wrote:
> As others have mentioned, things get more difficult with writes. If I
> issue a write to both halves of a mirror, should I return when the first
> one completes, or when both complete? One possibility is to expose this
> as a tunable
On Thu, Aug 28, 2008 at 11:29:21AM -0500, Bob Friesenhahn wrote:
> Which of these do you prefer?
>
>o System waits substantial time for devices to (possibly) recover in
> order to ensure that subsequently written data has the least
> chance of being lost.
>
>o System immediately
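The nearest existing knob is the pool-level failmode property, though it only governs what happens once the pool has lost access to its devices outright, not how long individual retries are allowed to take:

$ zpool get failmode tank            # wait (default), continue, or panic
# zpool set failmode=continue tank   # return EIO rather than blocking indefinitely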
Hello again...
Now that I've got my 2540 up and running, I'm considering which configuration
is best. I have a proposed config and wanted your opinions and comments on it.
Background
I have a requirement to host syslog data from approx 30 servers. Currently the
data is about 3.5TB in
Just an update to this thread with my results. To summarize, I have no
problems with the nVidia 750a chipset. It's simply a newer version of
the 5** series chipsets that have reportedly worked well. Also, at
IDLE, this system uses 133 Watts:
CPU - AMD Athlon X2 4850e
Motherboard - XFX MD-A72P-7509
To All...
Problem solved. Operator error on my part. (But I did learn something!!)
Thank you all very much!
--Kenny
Here is the output from "zpool import" showing the configuration of
the pool in case that can help diagnosing my problem.
pool: rpool
id: ...
state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.
Th
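Given that status, the usual next step from a rescue environment is a forced import under an alternate root. Whether that is safe here depends on what caused the earlier panic, so treat this only as the generic form:

# from failsafe or live media, force-import the pool under /a
zpool import -f -R /a rpool
zpool status -v rpool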
On 08/29/08 04:09, Tomas Ögren wrote:
> On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:
>
>> On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
>>
>>> This problem is becoming a real pain to us again and I was wondering
>>> if there has been in the past few month any known fix o
Hello.
I have a machine at home on which I have SXCE B96 installed on a root
zpool mirror. It's been working great until yesterday. The root pool
is a mirror with two identical 160GB disks. The other day I added a
third disk to the mirror, a 250 GB disk. Soon after, the third disk
deve
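If the newly attached 250 GB disk is the one misbehaving (the message is cut off above), the usual way to back it out of a mirror is zpool detach; the device name below is a placeholder:

zpool status -v rpool        # confirm which side of the mirror is reporting errors
zpool detach rpool c3t0d0    # remove the suspect disk from the mirror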
On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:
> On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
>
> > This problem is becoming a real pain to us again and I was wondering
> > if there has been in the past few month any known fix or workaround.
>
> Sun is sending me an IDR
G'day,
I've got an OpenSolaris server, n95, that I use for media serving. It uses a
DQ35JOE motherboard, dual core, and I have my rpool mirrored on two IDE 40GB
drives, and my media mirrored on 2 x 500GB SATA drives.
I've got a few CIFS shares on the media drive, and I'm using MediaTomb to
s
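For reference, with the in-kernel CIFS service the shares on a setup like that are typically just ZFS properties; the dataset and share names below are invented:

# share a media dataset over CIFS with a friendly share name
zfs set sharesmb=name=media media/video
sharemgr show -vp            # verify the share is published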