On Solaris 10, if I install using a ZFS root on only one drive, is there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer. Thank you.
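For the archive, a rough sketch of the usual approach; the pool name rpool and
the c#t#d#s0 device names below are only placeholders for illustration:

  # attach a second disk to the existing root disk, turning the top-level
  # vdev into a two-way mirror and starting a resilver
  zpool attach rpool c0t0d0s0 c0t1d0s0

  # watch the resilver progress until it completes
  zpool status rpool

  # make the second disk bootable (x86; SPARC uses installboot instead)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

The new disk generally needs an SMI label and a slice at least as large as the
original before the attach will succeed.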
Thank you all for your answers and links :-)
I have a desktop system with 2 ZFS mirrors. One drive in one mirror is
starting to produce read errors and slowing things down dramatically. I
detached it and the system is running fine. I can't tell which drive it is
though! The error message and format command let me know which pair the bad
drive is in, but not which physical disk it is.
Hello all,
Trying to reply to everyone so far in one post.
casper@oracle.com said
> Did you try:
>
> iostat -En
I issued that command and I see (soft) errors from all 4 drives. There is a
serial no. field in the message headers but it has no contents.
>
> messages in /var/adm/mes
Richard Elling said
> If the errors bubble up to ZFS, then they will be shown in the output of
> "zpool status"
On the console I was seeing retryable read errors that eventually
failed. The block number and drive path were included but not any info I
could relate to the actual disk.
zpool status
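For what it's worth, a couple of things that sometimes help map the c#t#d#
name in those messages to a physical drive; the device name below is just an
example:

  # per-device error counters plus vendor, product and serial number,
  # when the controller passes the serial number through
  iostat -En c1t1d0

  # if the serial field is empty, generating steady reads on the suspect
  # device and watching the activity LEDs can point out the drive
  dd if=/dev/rdsk/c1t1d0s0 of=/dev/null bs=1024k count=1000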
I've been watching the heat control issue carefully since I had to take a
job offshore (cough reverse H1B cough) in a place without adequate AC, and I
was able to get them to ship my servers and some other gear. Then I read
Intel is guaranteeing their servers will work up to 100 degrees F ambient
temperature
You wrote:
> 2012-07-23 18:37, Anonymous wrote:
> > Really, it would be so helpful to know which drives we can buy with
> > confidence and which should be avoided... Is there any way to know from the
> > manufacturers' web sites, or do you have to actually buy one and see what it
> > does? Thanks to
> It depends on the model. Consumer models are less likely to
> immediately flush. My understanding is that this is done in part to do
> some write coalescing and reduce the number of P/E cycles. Enterprise
> models should either flush, or contain a super capacitor that provides
> enough power for th
Hi Darren,
> On 08/30/12 11:07, Anonymous wrote:
> > Hi. I have a spare off-the-shelf consumer PC and was thinking about loading
> > Solaris on it for a development box, since I use Studio at work and like it
> > better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
> > has onl
Thank you.
> If you plan to generate a lot of data, why use the root pool? You can put
> the /home and /proj filesystems (/export/...) on a separate pool, thus
> off-loading the root pool.
I don't; it's a development box with not a lot happening.
>
> My two cents,
thanks
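In case it helps someone reading the archive, a minimal sketch of the
separation being suggested; the pool and dataset names (dpool, dpool/home,
dpool/proj) and devices are made up for the example:

  # a mirrored data pool separate from the root pool
  zpool create dpool mirror c0t2d0 c0t3d0

  # file systems that mount outside the root pool
  zfs create -o mountpoint=/export/home dpool/home
  zfs create -o mountpoint=/proj dpool/proj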
> Right, put some small (30GB or something trivial) disks in for root and
> then make a nice fast multi-spindle pool for your data. If your 320s
> are around the same performance as your 500s, you could stripe and
> mirror them all into a big pool. ZFS will waste the extra 180GB on the
> bigger drives.
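A sketch of the layout being described, with made-up device names standing in
for the 320GB and 500GB drives:

  # two mirrored pairs striped together into one pool; capacity and
  # spindles add up across the pairs
  zpool create tank mirror c1t0d0 c1t2d0 mirror c1t1d0 c1t3d0

Each mirror vdev is limited to its smaller member, which is where the
"wasted" space in the quoted advice comes from when a 320 is paired with
a 500.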
> Hi Dave,
Hi Cindy.
> Consider the easiest configuration first and it will probably save
> you time and money in the long run, like this:
>
> 73g x 73g mirror (one large s0 on each disk) - rpool
> 73g x 73g mirror (use whole disks) - data pool
>
> Then, get yourself two replacement disks, a g
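A sketch of that layout with placeholder device names; the point is the s0
slices for the root pool (needed for booting) versus bare disk names for the
data pool:

  # root pool mirror on slice 0 of each 73g disk (SMI label, s0 sized
  # to cover the disk); the installer normally creates this for you
  zpool create rpool mirror c0t0d0s0 c0t1d0s0

  # data pool mirror using whole disks, no slice suffix
  zpool create datapool mirror c0t2d0 c0t3d0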
I am having a problem after a new install of Solaris 10. The installed rpool
works fine when I have only those disks connected. When I connect disks from
an rpool I created during a previous installation, my newly installed rpool
is ignored even though the BIOS (x86) is set to boot only from the new disks.
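One way people usually untangle two pools carrying the same name, sketched
here with a made-up numeric id and new pool name:

  # list importable pools; each one is shown with a unique numeric id
  zpool import

  # import the old pool by id under a new name so it no longer collides
  # with the freshly installed rpool (add -f if it complains the pool
  # was last in use on another system)
  zpool import 1234567890123456789 oldrpool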
Hi Roy, things got a lot worse since my first email. I don't know what
happened, but I can't import the old pool at all. It shows no errors, but when
I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
nv), which looks like it is fixed by patch 6801926, which is applied in Solaris
10
You wrote:
> >
> > Hi Roy, things got a lot worse since my first email. I don't know what
> > happened, but I can't import the old pool at all. It shows no errors, but when
> > I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
> > nv), which looks like it is fixed by patch 6801926