Hi Bob,
The problem could be due to a faulty/failing disk, a poor connection
with a disk, or some other hardware issue. A failing disk can easily
make the system pause temporarily like that.
As root you can run '/usr/sbin/fmdump -ef' to see all the fault events
as they are reported. Be sur
Hi all,
we are encountering severe problems on our X4240 (64GB, 16 disks)
running Solaris 10 and ZFS. From time to time (5-6 times a day):
• FrontBase hangs or crashes
• VBox virtual machines hang
• Other applications show a rubber-band effect (white screen) while
moving the windows
I have been t
Hi Ragnar,
I need to replace a disk in a zfs pool on a production server (X4240
running Solaris 10) today and won't have access to my documentation
there. That's why I would like to have a good plan on paper before
driving to that location. :-)
The current tank pool looks as follows:
pool:
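For the on-paper plan, the usual sequence would be roughly the following
(a sketch only; c1tXd0 stands in for whatever device zpool status reports
as faulted, and the cfgadm attachment point names vary per controller, so
check cfgadm -al on the machine first):

1) confirm which disk is faulted:
     zpool status tank
2) take the disk offline:
     zpool offline tank c1tXd0
3) unconfigure the slot so the disk can be pulled safely:
     cfgadm -c unconfigure c1::dsk/c1tXd0
4) swap the disk, then reconfigure the slot:
     cfgadm -c configure c1::dsk/c1tXd0
5) start the resilver onto the new disk:
     zpool replace tank c1tXd0
6) watch zpool status until the resilver completes.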
Hi Khyron,
No, he did *not* say that a mirrored SLOG has no benefit,
redundancy-wise.
He said that YOU do *not* have a mirrored SLOG. You have 2 SLOG devices
which are striped. And if this machine is running Solaris 10, then you
cannot remove a log device because those updates have not made
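The distinction comes straight from the zpool syntax; a small sketch with
the device names used in this thread:

  zpool add tank log c1t6d0 c1t7d0         # two separate (striped) log devices
  zpool add tank log mirror c1t6d0 c1t7d0  # one mirrored log device

Only the second form gives you redundancy for the SLOG itself.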
Hi Edward,
thanks a lot for your detailed response!
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
• I would like to remove the two SSDs as log devices from the pool and
instead add them as a separate pool for sole use by
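If the pool is at a version that supports log-device removal (that support
only arrived in later ZFS versions, so it may well not be there on this
Solaris 10 box, as noted elsewhere in the thread), the steps would look
roughly like this (the pool name "ssdpool" is just an example):

  zpool remove tank c1t6d0
  zpool remove tank c1t7d0
  zpool create ssdpool mirror c1t6d0 c1t7d0

If zpool remove refuses because they are log devices, you would likely
have to recreate the pool without them.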
Hi all,
while setting up our X4140 I have, following suggestions, added two
SSDs as log devices as follows
zpool add tank log c1t6d0 c1t7d0
I currently have
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here. So sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool ov
Hi Cindy,
I think you can still offline the faulted disk, c1t6d0.
OK, here it gets tricky. I have
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
mirror ONLINE 0 0 0
c1t2d0 ONLINE 0 0
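Offlining it would just be (using the device name from this thread):

  zpool offline tank c1t6d0

after which the disk is no longer in use and can be physically replaced.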
Hi all,
zpool add tank spare c1t15d0
? After doing that c1t6d0 is offline and ready to be physically
replaced?
Yes, that is correct.
Then you could physically replace c1t6d0 and add it back to the pool as
a spare, like this:
# zpool add tank spare c1t6d0
For a production system, the s
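Putting the whole spare-based flow from this thread together, it would
look roughly like this (a sketch; the detach step assumes the spare has
finished resilvering in for the faulted disk):

  zpool add tank spare c1t15d0       # add the new disk as a hot spare
  zpool replace tank c1t6d0 c1t15d0  # have the spare take over, if it
                                     # has not kicked in by itself
  zpool detach tank c1t6d0           # once resilvered, detach the faulted disk
  (physically swap c1t6d0)
  zpool add tank spare c1t6d0        # re-add the replaced disk as the spare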
Hi Cindy,
Good job for using a mirrored configuration. :-)
Thanks!
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Both 1 and 2 would take a bit more time than just replacing the faulted
disk with
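That direct replacement would be a single step (again with the device
names from this thread): either swap the disk in the same slot and run

  zpool replace tank c1t6d0

or, if the new disk sits in a different slot such as c1t15d0,

  zpool replace tank c1t6d0 c1t15d0

and the resilver starts immediately.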
Dear managers,
one of our servers (X4240) shows a faulty disk:
-bash-3.00# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONL