I've been having the same problems, and it appears to be caused by a remote
monitoring app that calls zpool status and/or zfs list. I've also found
problems with PERC, and I'm finally replacing the PERC cards with SAS5/E
controllers (which are much cheaper anyway). Every time I reboot, the PERC
tel
zpool clear pool2
>
> I would use fmdump -eV to see what's going on with c10t11d0.
>
> Thanks,
>
> Cindy
>
> On 10/04/10 07:47, Brian Kolaci wrote:
>> Hi,
>> I had a hot spare used to replace a failed drive, but then the drive appears
>> to be fine
Hi,
I had a hot spare used to replace a failed drive, but then the drive appears to
be fine anyway.
After clearing the error it shows that the drive was resilvered, but keeps the
spare in use.
zpool status pool2
pool: pool2
state: ONLINE
scrub: none requested
config:
NAMEST
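When the original disk turns out to be healthy, ZFS does not automatically put the hot spare back; the spare has to be detached by hand. A sketch against the pool above, where c10t20d0 is an invented placeholder for the in-use spare:

```shell
# Clear the error state, then detach the in-use hot spare so it
# returns to AVAIL; c10t20d0 is a placeholder for the spare name.
zpool clear pool2
zpool detach pool2 c10t20d0
zpool status pool2   # the spare should be listed as AVAIL again
```

Detaching the spare (rather than the original disk) keeps the resilvered original in place.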
/importing the pool, even after
> the upgrade.
The format utility sees both the pseudo and the physical/native device names
for both paths. I can provide an example that he ran today.
>
> Thanks,
>
> Cindy
> On 08/09/10 09:55, Brian Kolaci wrote:
>> On some machines running PowerPath,
On some machines running PowerPath, there are sometimes issues after an
update/upgrade of the PowerPath software. Sometimes the pseudo devices get
remapped and change names. ZFS appears to handle it OK; however, it sometimes
then references half native device names and half the emcpower pseudo device names.
On 7/6/2010 10:37 AM, Victor Latushkin wrote:
On Jul 6, 2010, at 6:30 PM, Brian Kolaci wrote:
Well, I see no takers or even a hint...
I've been playing with zdb to try to examine the pool, but I get:
# zdb -b pool4_green
zdb: can't open pool4_green: Bad exchange descriptor
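When zdb refuses to open a pool through the normal path, two things are worth trying (zdb options vary between builds, so treat this as a sketch): dump the on-disk labels from a device directly, and point zdb at the devices as if the pool were exported:

```shell
# Dump the four on-disk vdev labels from one of the pool's disks
# (device path is a placeholder) to check that they are intact:
zdb -l /dev/dsk/c0t0d0s0

# Ask zdb to read the devices directly; -e treats the pool as
# exported, bypassing /etc/zfs/zpool.cache:
zdb -e -bb pool4_green
```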
s in the logs and it just
"disappeared" without a trace.
The only logs are from subsequent reboots where it says a ZFS pool failed to
open.
It does not give me a warm & fuzzy feeling about using ZFS, as I've relied on it
heavily for the past 5 years.
Any advice would be well appreciated
I've recently acquired some storage and have been trying to copy data from a
remote data center to hold backup data. The copies had been going for weeks,
with about 600GB transferred so far, and then I noticed the throughput on the
router stopped. I see a pool disappeared.
# zpool status -x
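When a pool vanishes like this, a usual first step is to check whether the devices still carry importable pool labels (a sketch; the pool name is taken from the zdb output above and may differ):

```shell
zpool status -x         # reports "all pools are healthy" once the
                        # pool is no longer known to the system
zpool import            # scans /dev/dsk for pools that can be imported
zpool import pool4_green   # attempt the import if it shows up
```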
On Jun 28, 2010, at 12:26 PM, Tristram Scott wrote:
>> I use Bacula which works very well (much better than
>> Amanda did).
>> You may be able to customize it to do direct zfs
>> send/receive, however I find that although they are
>> great for copying file systems to other machines,
>> they are i
I use Bacula which works very well (much better than Amanda did).
You may be able to customize it to do direct zfs send/receive; however, I find
that although they are great for copying file systems to other machines, they
are inadequate for backups unless you always intend to restore the whole file system.
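For reference, the send/receive pattern under discussion looks roughly like this (pool, dataset, and host names are invented); the limitation is that a receive restores a whole snapshot, not individual files:

```shell
# Full send of a snapshot to another machine:
zfs snapshot tank/data@mon
zfs send tank/data@mon | ssh backuphost zfs receive backup/data

# Incremental send of only the blocks changed since @mon:
zfs snapshot tank/data@tue
zfs send -i tank/data@mon tank/data@tue | \
    ssh backuphost zfs receive backup/data
```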
On Mar 2, 2010, at 11:09 AM, Bob Friesenhahn wrote:
> On Tue, 2 Mar 2010, Brian Kolaci wrote:
>>
>> What is probability of corruption with ZFS in Solaris 10 U6 and up in a SAN
>> environment? Have people successfully recovered?
>
> The probability of corruption in
We have a virtualized environment of T-Series where each host has either zones
or LDoms.
All of the virtual systems will have their own dedicated storage on ZFS (and
some may also get raw LUNs). All the SAN storage is delivered in fixed sized
33GB LUNs.
The question I have to the community i
I recently upgraded a box to Solaris 10 U8.
I've been getting more timeouts and I guess the Adaptec card is suspect, possibly not
able to keep up, so it issues bus resets at times. It has apparently corrupted some
files on the pool, and zpool status -v showed 2 files and one dataset corrupt.
I was frustrated with this problem for months. I've tried different
disks, cables, even disk cabinets. The driver hasn't been updated in
a long time.
When the timeouts occurred, they would freeze for about a minute or
two (showing the 100% busy). I even had the problem with less than 8
L
Hi,
I'm having trouble with scsi timeouts, but it appears to only happen
when I use ZFS.
I've tried to replicate with SVM, but I can't get the timeouts to happen
when that is the underlying volume manager, however the performance with
ZFS is much better when it does work.
The symptom is tha
Thanks all,
It was a government customer that I was talking to, and it sounded like a good
idea; however, with the certification paper trails required today, I don't think
it would be of much benefit after all. It may be useful for disk
evacuation, but they're still going to need their pa
Hi,
I was discussing the common practice of disk eradication used by many firms for
security. I was thinking this may be a useful feature for ZFS: an option
to eradicate data as it's removed, meaning after the last reference/snapshot is
done and a block is freed, then write the eradicati
Is there a way to change the device name used to create a zpool?
My customer created their pool on EMC PowerPath. An SA removed
PowerPath by mistake, then reinstalled it. The names on the zpool are
now the physical device names of one path. They have data on there
already, so they woul
On Aug 6, 2009, at 5:36 AM, Ian Collins wrote:
Brian Kolaci wrote:
They understand the technology very well. Yes, ZFS is very
flexible with many features, and most are not needed in an
enterprise environment where they have high-end SAN storage that is
shared between Sun, IBM, Linux
cindy.swearin...@sun.com wrote:
Brian,
CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.
Will do. I thought I was on it, but didn't see any updates...
In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in
Bob Friesenhahn wrote:
On Wed, 5 Aug 2009, Brian Kolaci wrote:
I have a customer that is trying to move from VxVM/VxFS to ZFS,
however they have this same need. They want to save money and move to
ZFS. They are charged by a separate group for their SAN storage
needs. The business group
Richard Elling wrote:
On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:
I'm chiming in late, but have a mission critical need of this as well
and posted as a non-member before. My customer was wondering when
this would make it into Solaris 10. Their complete adoption depends
on it.
I
I'm chiming in late, but have a mission critical need of this as well and
posted as a non-member before. My customer was wondering when this would make
it into Solaris 10. Their complete adoption depends on it.
I have a customer that is trying to move from VxVM/VxFS to ZFS, however they
have
Does anyone know when Solaris 10 will have the bits to allow removal of
vdevs from a pool to shrink the storage?
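For context, the zpool remove that existed at the time only handled hot spares, cache, and log devices, not data vdevs (a sketch with invented names); evacuating a top-level vdev to shrink a pool was still the open CR 4852783 cited elsewhere in these threads:

```shell
# Works: removing a hot spare, cache, or log device:
zpool remove tank c2t0d0   # c2t0d0 is an invented spare name

# Does not work in Solaris 10: removing a data vdev to shrink
# the pool -- that capability was tracked as CR 4852783.
```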
Thanks,
Brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Is there a way to change the device name used to create a zpool?
My customer created their pool with physical device names rather than
the EMC PowerPath virtual names.
They have data on there already, so they would like to preserve it.
My experience with zpool replace is that it copies data ove
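One approach sometimes suggested for this (an untested sketch; pool and device names are invented) is to export the pool and re-import it with -d pointed at a directory containing only the PowerPath pseudo-device nodes, so ZFS records those names instead:

```shell
zpool export mypool
mkdir /tmp/emcdev
# Populate the directory with links to the pseudo devices only:
ln -s /dev/dsk/emcpower0c /tmp/emcdev/
zpool import -d /tmp/emcdev mypool
zpool status mypool   # should now show the emcpower names
```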
In a recovery situation where the primary node crashed, the
disks get write-disabled while the failover node takes control.
How can you unmount the zpool? It panics the system and actually
gets into a panic loop when it tries to mount it again on next boot.
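Two things that may help here, hedged since failover setups differ: the pool's failmode property (Solaris 10 U6 and later) controls whether pool I/O failure panics the host, and moving the cache file aside from failsafe mode can break the panic-on-boot loop:

```shell
# failmode can be wait, continue, or panic; panic reboots the host
# on pool I/O failure, which matches the loop described:
zpool get failmode tank
zpool set failmode=continue tank

# From failsafe/single-user mode, prevent the pool from being
# opened at the next boot so the loop can be broken:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
```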
Thanks,
Brian
Robert Milkowski wrote:
> Hello can,
>
> Thursday, December 13, 2007, 12:02:56 AM, you wrote:
>
> cyg> On the other hand, there's always the possibility that someone
> cyg> else learned something useful out of this. And my question about
>
> To be honest - there's basically nothing useful in th