On Thu, Jul 24, 2008 at 08:38:21PM -0700, Neal Pollack wrote:
>
> As of build 94, it does not automatically bring the disk online.
> I replaced a failed disk on an x4500 today running Nevada build 94, and
> still
> had to manually issue
>
> # cfgadm -c configure sata1/3
> # zpool replace tank cx
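A quick sketch of the follow-up check (pool and device names here are only placeholders): once the replace is issued, the resilver can be watched with
# zpool status -v tank
and a disk that still shows as offline can be brought back with
# zpool online tank c5t3d0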
Lida Horn wrote:
Richard Elling wrote:
There are known issues with the Marvell drivers in X4500s. You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500
On Fri, Apr 25, 2008 at 9:22 AM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello andrew,
>
> Thursday, April 24, 2008, 11:03:48 AM, you wrote:
>
> a> What is the reasoning behind ZFS not enabling the write cache for
> a> the root pool? Is there a way of forcing ZFS to enable the write cache?
>
"Enda O'Connor ( Sun Micro Systems Ireland)" <[EMAIL PROTECTED]>
writes:
[..]
> meant to add that on x86 the following should do the trick ( again I'm open
> to correction )
>
> installgrub /boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0
>
> haven't tested the x86 one though.
I use
Richard Elling wrote:
> There are known issues with the Marvell drivers in X4500s. You will
> want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
> for the platform.
> http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500
>
> You will
Since we were drowning, we decided to go ahead and reboot with my
guesses, even though I have not heard any expert opinions on the
changes. (Also, 3 mins was way underestimated. It takes 12 minutes to
reboot our x4500).
The new values are: (original)
set bufhwm_pct=10(2%)
set m
On Sun, Jul 13, 2008 at 3:37 AM, Bryan Wagoner <[EMAIL PROTECTED]> wrote:
> I was a little confused on what to get, so I ended up buying this off the
> Provantage website where I'm getting the card. The card was like $123 and
> each of these cables was like $22.
>
> CBL-0118L-02IPASS to 4 SA
> s> And, if it's better, I'm open also to intel!
> intel you can possibly get onboard AHCI that works, and the intel
> gigabit MAC, and 16GB instead of 8GB RAM on a desktop board. Also the
> video may be better-supported. But it's, you know, intel.
Miles, sorry, but probably I'm missing someth
> Yup, and the Supermicro card uses the "Marvell
> Hercules-2 88SX6081 (Rev. C0) SATA Host Controller",
> which is part of the series supported by the same
> driver:
> http://docs.sun.com/app/docs/doc/816-5177/marvell88sx-7d?a=view.
> I've seen the Supermicro card mentioned
> in connection with t
> On Thu, Jul 24, 2008 at 1:28 AM, Steve
> <[EMAIL PROTECTED]> wrote:
> > And booting from CF is interesting, but it seems it is
> possible to boot from the zraid and I would go for
> it!
>
> It's not possible to boot from a raidz volume yet.
> You can only boot
> from a single drive or a mirror.
If
I have 4 filesystems in a pool that I want to replicate into another
pool, so I've taken snapshots prior to replication:
pool1/home1 14.3G 143G 14.3G /home1
pool1/[EMAIL PROTECTED] 1.57M - 14.3G -
pool1/home2 4.31G 143G 4.31G /home2
pool1/[EMAIL PROTECTED] 0
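A minimal sketch of the replication step itself, assuming the snapshots are all named @snap and the target pool is called pool2 (both illustrative):
# zfs send pool1/home1@snap | zfs receive pool2/home1
# zfs send pool1/home2@snap | zfs receive pool2/home2
Later snapshots can then be sent incrementally with zfs send -i.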
Thank you very much Brandon for pointing out the issue with the case!!
(anyway that's really a pity, I hope it will find a solution!...)
About Atom a person from Sun was pointing out the only good version for ZFS
would be N200 (64bit). Anyway I wouldn't make a problem of money (still ;-),
but ap
Lori Alt wrote:
> Sounds like LU needs some of the same swap/dump flexibility
> that we just gave initial install. I'll bring this up within the team.
The (partial) workaround I tried was:
1. create a ZFS BE in an existing pool that has enough space
2. lumount the BE, edit the vfstab to use the
Aaargh! My perfect case is not working!!
The backplane should not be just a "pass-through"? Was there something
not mounted? Was the power not enough for all the disks? Can it depend on the
disks?
Did you get any replies?
I would also ask Chenbro tech support directly
(http://www.chenbro.c
Hoping this is not too off topic. Can anyone confirm you can break a
mirrored zfs root pool once formed? I basically want to clone a boot drive,
take it to another piece of identical hardware and have two machines (or
more). I am running Indiana b93 on x86 hardware. I have read that
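For reference, one half of the mirror can be removed with zpool detach (names illustrative), but note that a detached device is no longer importable as a pool on its own, which is part of what makes this question tricky:
# zpool detach rpool c1t1d0s0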
> Ross,
>
> The X4500 uses 6x Marvell 88SX SATA controllers for
> its internal disks. They are not Supermicro
> controllers. The new X4540 uses an LSI chipset
> instead of the Marvell chipset.
>
> --Brett
Yup, and the Supermicro card uses the "Marvell Hercules-2 88SX6081 (Rev. C0)
SATA Host C
I will look into this. I don't know why it would have failed.
Lori
Rainer Orth wrote:
Lori Alt <[EMAIL PROTECTED]> writes:
use of swap/dump zvols? If your existing swap/dump slice
is contiguous with your root pool, you can grow the root
pool into that space (using format to merge the sl
Miles Nordin wrote:
>> "s" == Steve <[EMAIL PROTECTED]> writes:
>>
>
> s> About freedom: I for sure would prefer open source drivers
> s> availability, let's account for it!
>
> There is source for the Intel gigabit cards in the source browser.
>
>
> http://src.o
Alan Burlison wrote:
> Lori Alt wrote:
>
>> What if you turned slice 1 into a pool (a new one), migrated your BE
>> into it,
>> then grow that pool to soak up the space in the slices that follow
>> it? You might
>> still need to save some stuff elsewhere while you're doing the
>> transition.
Ross,
The X4500 uses 6x Marvell 88SX SATA controllers for its internal disks. They
are not Supermicro controllers. The new X4540 uses an LSI chipset instead of
the Marvell chipset.
--Brett
There are known issues with the Marvell drivers in X4500s. You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500
You will want to especially pay attention
> "s" == Steve <[EMAIL PROTECTED]> writes:
s> About freedom: I for sure would prefer open source drivers
s> availability, let's account for it!
There is source for the Intel gigabit cards in the source browser.
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/
Lori Alt wrote:
> What if you turned slice 1 into a pool (a new one), migrated your BE
> into it,
> then grow that pool to soak up the space in the slices that follow it?
> You might
> still need to save some stuff elsewhere while you're doing the transition.
Doesn't work, because LU wants to
On Thu, Jul 24, 2008 at 3:41 AM, Steve <[EMAIL PROTECTED]> wrote:
> Or "Atom" maybe viable?
The atom CPU has pretty crappy performance. At 1.6 GHz performance is
somewhere between a 900MHz Celeron-M and a 1.13GHz Pentium 3-M. It's also
single-core. It would probably work, but it could be CPU bound on
w
PS: I scaled down to mini-ITX form factor because it seems that the
http://www.chenbro.com/corporatesite/products_detail.php?serno=100 is the
PERFECT case for the job!
Rex Kuo wrote:
> Dear All :
>
> We are looking for best practices for Solaris as an NFS server sharing a
> number of ZFS file systems, with RHEL 5.0 NFS clients mounting from that
> NFS server.
>
> Any S10 NFS-server and RHEL 5.0 NFS-client tuning guides or suggestions are
> welcome.
>
We try to
Have any of you guys reported this to Sun? A quick search of the bug database
doesn't bring up anything that appears related to sata drives and hanging or
hot swapping.
Yeah, I thought of the storage forum today and found somebody else with the
problem, and since my post a couple of people have reported similar issues on
Thumpers.
I guess the storage thread is the best place for this now:
http://www.opensolaris.org/jive/thread.jspa?threadID=42507&tstart=0
T
Did you have success?
What version of Solaris? OpenSolaris? etc?
I'd want to use this card with the latest Solaris 10 (update 5?)
The connector on the adapter itself is "IPASS" and the Supermicro part number
for cables from the adapter to standard SATA drives is CBL-0118L-02 "IPASS to 4
SATA C
Alan Burlison wrote:
Lori Alt wrote:
In designing the changes to the install software, we had to
decide whether to be all things to all people or make some
default choices. Being all things to all people makes the
interface a lot more complicated and takes a lot more
engineering effort (
I've discovered this as well - b81 to b93 (latest I've tried). I
switched from my on-board SATA controller to AOC-SAT2-MV8 cards because
the MCP55 controller caused random disk hangs. Now the SAT2-MV8 works as
long as the drives are working correctly, but the system can't handle a
drive failure
Lori Alt <[EMAIL PROTECTED]> writes:
> use of swap/dump zvols? If your existing swap/dump slice
> is contiguous with your root pool, you can grow the root
> pool into that space (using format to merge the slices.
> A reboot or re-import of the pool will cause it to grow into
> the newly-available
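A rough outline of that sequence, assuming the old swap slice is s1 and the root pool lives on s0 of the same disk (purely illustrative):
# swap -d /dev/dsk/c0t0d0s1     (stop using the old swap slice)
# format                        (delete s1 and extend s0 over the freed space)
# init 6                        (after the reboot the pool grows into the new space)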
Aaron Botsis writes:
> Hello, I've hit this same problem.
>
> Hernan/Victor, I sent you an email asking for the description of this
> solution. I've also got important data on my array. I went to b93 hoping
> there'd be a patch for this.
>
> I caused the problem in a manner identical to Hernan;
Or this, which seems a very, very nice intel MB (4+ SATA in a mini package!):
- http://www.intel.com/Products/Desktop/Motherboards/DG45FC/DG45FC-overview.htm
Same question: could it be good (or the best) for the purpose?
Aaron Botsis writes:
> Nevermind -- this problem seems like it's been fixed in b94. I saw a
> bug that looked like the description fit (slow clone removal, didn't
> write down the bug number) and gave it a shot. imported and things
> seem like they're back up and running.
Good to hear that. It is
Vdbench IS a Sun tool, and it is in the process of being open sourced.
You can find the latest GA version at
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/[EMAIL
PROTECTED]
Henk.
Lori Alt wrote:
> In designing the changes to the install software, we had to
> decide whether to be all things to all people or make some
> default choices. Being all things to all people makes the
> interface a lot more complicated and takes a lot more
> engineering effort (we'd still be develo
[EMAIL PROTECTED] wrote:
> Just make sure you use dumpadm to point to a valid dump device and
> this setup should work fine. Please let us know if it doesn't.
Yep, works fine.
> The ZFS strategy behind automatically creating separate swap and
> dump devices includes the following:
>
> o Eliminat
Alan Burlison wrote:
[EMAIL PROTECTED] wrote:
ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.
The ZFS boot/install project and information trail starts here:
http://opensolaris.org/os/community/zfs/b
[EMAIL PROTECTED] wrote:
> Alan,
>
> Just make sure you use dumpadm to point to a valid dump device and
> this setup should work fine. Please let us know if it doesn't.
>
> The ZFS strategy behind automatically creating separate swap and
> dump devices includes the following:
>
> o Eliminates the
Alan,
Just make sure you use dumpadm to point to a valid dump device and
this setup should work fine. Please let us know if it doesn't.
The ZFS strategy behind automatically creating separate swap and
dump devices includes the following:
o Eliminates the need to create separate slices
o Enables u
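As a concrete illustration (the zvol name below is just the conventional one; adjust to taste):
# dumpadm -d /dev/zvol/dsk/rpool/dump
# dumpadm                       (verify the resulting configuration)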
Enda O'Connor ( Sun Micro Systems Ireland) wrote:
> Mike Gerdts wrote:
>> On Wed, Jul 23, 2008 at 11:36 AM, <[EMAIL PROTECTED]> wrote:
>>> Rainer,
>>>
>>> Sorry for your trouble.
>>>
>>> I'm updating the installboot example in the ZFS Admin Guide with the
>>> -F zfs syntax now. We'll fix the insta
Dear All :
We are looking for best practices for Solaris as an NFS server sharing a
number of ZFS file systems, with RHEL 5.0 NFS clients mounting from that NFS server.
Any S10 NFS-server and RHEL 5.0 NFS-client tuning guides or suggestions are
welcome.
Best Regards,
-- Rex
On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
> Do you have any recommended parameters I should try?
Using an external log is really not needed when using the StorageTek
2540. I doubt that it is useful at all.
Bob
==
Bob Friesenhahn
[EMAIL PROTE
Mike Gerdts wrote:
> On Wed, Jul 23, 2008 at 11:36 AM, <[EMAIL PROTECTED]> wrote:
>> Rainer,
>>
>> Sorry for your trouble.
>>
>> I'm updating the installboot example in the ZFS Admin Guide with the
>> -F zfs syntax now. We'll fix the installboot man page as well.
>
> Perhaps it also deserves a me
[EMAIL PROTECTED] wrote:
> ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
> environment requires separate ZFS volumes for swap and dump devices.
>
> The ZFS boot/install project and information trail starts here:
>
> http://opensolaris.org/os/community/zfs/boot/
Is this goin
On Wed, Jul 23, 2008 at 11:36 AM, <[EMAIL PROTECTED]> wrote:
> Rainer,
>
> Sorry for your trouble.
>
> I'm updating the installboot example in the ZFS Admin Guide with the
> -F zfs syntax now. We'll fix the installboot man page as well.
Perhaps it also deserves a mention in the FAQ somewhere near
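For the record, the SPARC form being discussed looks roughly like this (target slice illustrative):
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0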
On Thu, 24 Jul 2008, Brandon High wrote:
>
> Have you tried exporting the individual drives and using zfs to handle
> the mirroring? It might have better performance in your situation.
It should indeed have better performance. The single LUN exported
from the 2540 will be treated like a single d
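A sketch of what letting ZFS handle the mirroring could look like, assuming the 2540 exports each disk as its own LUN (device names made up):
# zpool create tank mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0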
On Thu, Jul 24, 2008 at 08:22:16AM -0400, Charles Menser wrote:
> Yes, I am very happy with the M2A-VM.
You will need at least SNV_93 to use it in AHCI mode.
The northbridge gets quite hot, but that does not seem to be impairing
its performance. I have the M2A-VM with an AMD 64 BE-2400 (45W) and
I installed it with snv_86 in IDE controller mode, and have since
upgraded ending up at snv_93.
Do you know what implications there are for using AHCI vs IDE modes?
Thanks,
Charles
On Thu, Jul 24, 2008 at 9:26 AM, Florin Iucha <[EMAIL PROTECTED]> wrote:
> On Thu, Jul 24, 2008 at 08:22:16AM -0400
On Thu, Jul 24, 2008 at 10:38:49AM -0400, Charles Menser wrote:
> I installed it with snv_86 in IDE controller mode, and have since
> upgraded ending up at snv_93.
>
> Do you know what implications there are for using AHCI vs IDE modes?
I had the same question and Neal Pollack <[EMAIL PROTECTED]>
On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
We do not use raidz*. Virtually, no raid or stripe through OS.
We have 4 disk RAID1 volumes. RAID1 was created from CAM on 2540.
What ZFS block size are you using?
Are you using synchronous writes for each 700-byte message? 10k
sy
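For context, the block size in question is the per-dataset recordsize property, e.g. (dataset name illustrative; the change only affects newly written files):
# zfs get recordsize tank/msgs
# zfs set recordsize=8k tank/msgs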
Hi Alan,
ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.
The ZFS boot/install project and information trail starts here:
http://opensolaris.org/os/community/zfs/boot/
Cindy
Alan Burlison wrote:
> I'm up
Nevermind -- this problem seems like it's been fixed in b94. I saw a bug that
looked like the description fit (slow clone removal, didn't write down the bug
number) and gave it a shot. imported and things seem like they're back up and
running.
Great news, thanks for the update :)
Hmmn, that *sounds* as if you are saying you've a very-high-redundancy
RAID1 mirror, 4 disks deep, on an 'enterprise-class tier 2 storage' array
that doesn't support RAID 1+0 or 0+1.
That sounds weird: the 2540 supports RAID levels 0, 1, (1+0), 3 and 5,
and deep mirrors are normally only used
I'm upgrading my B92 UFS-boot system to ZFS root using Live Upgrade. It
appears to work fine so far, but I'm wondering why it allocates a ZFS
filesystem for swap when I already have a dedicated swap slice.
Shouldn't it just use any existing swap slice rather than creating a ZFS
one?
--
Alan
Yes, I am very happy with the M2A-VM.
Charles
On Wed, Jul 23, 2008 at 5:05 PM, Steve <[EMAIL PROTECTED]> wrote:
> Thank you for all the replies!
> (and in the meantime I was just having dinner! :-)
>
> To recap:
>
> tcook:
> you are right, in fact I'm thinking of having just 3/4 for now, without
We had the same problem, at least a good chunk of the zfs volumes died when the
drive failed. Granted, I don't think the drive actually failed, but a driver
issue/lockup. A reboot 2 weeks ago brought the machine back up and the drive
hasn't had a problem since. I was behind on two patches that
Thanks for your continuous
help ...
We do not read ...
We hardly read.
Actually our system is writing the whole day, each and every transaction it
receives ...
We need the written data to recover the system from a crash, in the middle
of the day (very rare situation, but most important part of a t
We have had a disk fail in the existing x4500 and it sure froze the
whole server. I believe it is an OS problem which should have been
fixed in a version newer than we have. If you want me to test it on the
new x4500, because it runs Sol10 5/08, I can do that.
Ross wrote:
> Hi Jorgen,
>
> This
Do you have any recommended parameters I should try?
Ellis, Mike wrote:
Would adding a dedicated ZIL/SLOG (what is the difference between those 2 exactly? Is there one?) help meet your requirement?
The idea would be to use some sort of relatively large SSD drive of some variety to absorb th
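(The ZIL is the intent log itself, present in every pool; a "slog" is simply a separate device dedicated to holding it.) If such a log device were added, the sketch would be something like the following, with an illustrative device name; it only helps synchronous writes:
# zpool add tank log c3t0d0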
Or "Atom" maybe viable?
Hi Jorgen,
This isn't an answer to your problem I'm afraid, but a request for you to do a
test when you get your new x4500.
Could you try pulling a SATA drive to see if the system hangs? I'm finding
Solaris just locks up if I pull a drive connected to the Supermicro
AOC-SAT2-MV8 card, and I w
Following the VIA link and googling a bit I found something that seems
interesting:
- MB: http://www.avmagazine.it/forum/showthread.php?s=&threadid=108695
- in the case http://www.chenbro.com/corporatesite/products_detail.php?serno=100
Are they viable??
As I used OpenSolaris for some time I wanted to give SXCE (snv_93) a chance on
my home server. Now I'm wondering what would be the best setup for my disks.
I have two 300GiB PATA disks* in stock, two 160G SATA disks** in use by my old
linux server and - maybe for temporary use - an external 160G
On Wed, Jul 23, 2008 at 10:02 PM, Tharindu Rukshan Bamunuarachchi
<[EMAIL PROTECTED]> wrote:
> We do not use raidz*. Virtually, no raid or stripe through OS.
So it's ZFS on a single LUN exported from the 2540? Or have you
created a zpool from multiple raid1 LUNs on the 2540?
Have you tried export
On Thu, Jul 24, 2008 at 1:28 AM, Steve <[EMAIL PROTECTED]> wrote:
> And booting from CF is interesting, but it seems it is possible to boot from the
> zraid and I would go for it!
It's not possible to boot from a raidz volume yet. You can only boot
from a single drive or a mirror.
-B
--
Brandon H
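A sketch of setting up such a bootable mirror after install on x86, with illustrative device names:
# zpool attach rpool c0t0d0s0 c0t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0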
many on HD setup:
Thanks for the replies, but the actual doubt is on the MB.
I would go with the suggestion of different HDs (even if I think that the speed
will be aligned to the slowest of them), and maybe raidz2 (even if I think
raidz is enough for a home server)
bhigh:
It seems that 780G/SB700 a
We do not use raidz*.
Virtually, no raid or stripe through OS.
We have 4 disk RAID1 volumes. RAID1 was created from CAM on 2540.
2540 does not have RAID 1+0 or 0+1.
cheers
tharindu
Brandon High wrote:
On Tue, Jul 22, 2008 at 10:35 PM, Tharindu Rukshan Bamunuarachchi
<[EMAIL PROTECTED]>