This has happened to me several times now, and I'm confused as to why...
This one particular drive, and it's always the same drive, randomly shows up as
being removed from the pool. I have to export and import the pool in order to
have this disk seen again and for resilvering to start again. When this l
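For reference, the export/import cycle described above boils down to the following; this is only a sketch, using the pool name "nm" from the status output further down:

# zpool export nm
# zpool import nm
# zpool status nm

Depending on why the device was marked removed, a plain "zpool clear nm" may also bring it back without taking the whole pool offline.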
So, to paraphrase: if these various I/O tests and tools determine that the
disks are suffering from latency issues, adding RAM or an L2ARC will help, and
there is really no drawback to doing one or the other, as both will be used as
available. So, it's just a question of to what exten
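If an L2ARC turns out to be worthwhile, adding one is a single online operation. A sketch only; the pool name "nm" and the SSD device name c3t0d0 are placeholders:

# zpool add nm cache c3t0d0
# zpool iostat -v nm 5

The cache device can later be dropped again with "zpool remove nm c3t0d0" without affecting the data in the pool.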
Can somebody kindly clarify how Solaris and ZFS make use of RAM?
I have 4 GB of RAM installed on my Solaris/ZFS box serving a pool of 6 disks.
I can see that it is using all of this memory, although the machine has never
had to dip into swap space.
What would be the net effect of adding
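The memory being consumed is almost certainly the ARC, ZFS's adaptive read cache, which grows to use most otherwise idle RAM and shrinks under memory pressure. A quick way to see its current and maximum size (a sketch; these kstat names are standard on Solaris 10 and OpenSolaris):

# kstat -p zfs:0:arcstats:size
# kstat -p zfs:0:arcstats:c_max

or, for a more detailed breakdown:

# echo ::arc | mdb -k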
Ahhh, I figured you could always do that, I guess I was wrong...
Hello,
I found in the release notes for Solaris 10 9/10:
"Oracle Solaris ZFS online device management, which allows customers to make
changes to filesystem configurations, without taking data offline."
Can somebody kindly clarify what sort of filesystem configuration changes can
be made this
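By way of illustration, typical online operations of that sort look like the following (a sketch; the pool, filesystem, and device names are placeholders):

# zpool add nm mirror c4t0d0 c4t1d0       (add a new mirrored vdev to a live pool)
# zpool attach nm c2t0d0 c2t1d0           (turn an existing single disk into a mirror)
# zfs set compression=on nm/data          (change a filesystem property on the fly)

all of which complete while the pool remains imported and the data stays available.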
It would be helpful if you posted more information about your
configuration.
Numbers *are* useful too, but at a minimum, describing your setup, use case,
hardware, and other such facts would give people a place to start.
There are much brighter stars on this list than myself, but if you are
Hello,
I'm wondering if somebody can kindly direct me to a sort of newbie way of
assessing whether my ZFS pool performance is a bottleneck that can be improved
upon, and/or whether I ought to invest in a mirrored pair of SSDs for the ZIL? I'm a little
confused by what the output of iostat, fsstat, the zils
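As a rough starting point for that kind of assessment (a sketch; the pool name "nm" and the 5-second interval are placeholders):

# zpool iostat -v nm 5        per-vdev bandwidth and operations per second
# fsstat zfs 5                filesystem-level operation counts for ZFS
# iostat -xnz 5               per-device wait/service times and %busy

Consistently high service times (asvc_t) and %b on the data disks during synchronous write workloads (NFS, databases) are the usual sign that a separate log device would pay off.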
Hello,
I have a drive that was a part of the pool showing up as "removed". I made no
changes to the machine, and there are no errors being displayed, which is
rather weird:
# zpool status nm
  pool: nm
 state: DEGRADED
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
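When a device shows up as REMOVED with no error counters, a few things are worth checking before resorting to export/import (a sketch; the device name c2t3d0 is a placeholder):

# fmadm faulty                any FMA faults logged against the disk or controller?
# iostat -En                  per-device error counters and identification strings
# zpool online nm c2t3d0      ask ZFS to bring the device back into the pool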
Anybody?
I would truly appreciate some general, if not definitive, insight into what one
can expect in terms of I/O performance after adding new disks to ZFS pools.
My impression was that the ZFS Fuse project was no longer being maintained?
Hello,
As I understand it, in a traditional RAID 5 setup, adding new disks to the pool
provides more overall I/O, as the load is spread out across more disks.
What exactly is this relationship in a RAID-Z setup? What should one expect in
terms of overall I/O performance as disks are added and
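For context, the usual way to grow I/O in ZFS is to add whole new top-level vdevs rather than widen an existing raidz vdev (which, at least as of Solaris 10, is not possible). A sketch with placeholder device names:

# zpool add nm raidz c5t0d0 c5t1d0 c5t2d0
# zpool status nm

Streaming bandwidth scales with the number of data disks, but small random-read IOPS scale roughly with the number of top-level vdevs, since each raidz vdev behaves more like a single disk for that workload.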
I'm entertaining something which might be a little wacky, I'm wondering what
your general reaction to this scheme might be :)
I would like to invest in some sort of storage appliance, and I like the idea
of something I can grow over time, something that isn't tethered to my servers
(i.e. not d