On 5/7/10 9:38 PM, Giovanni wrote:
> Hi guys,
>
> I have a quick question, I am playing around with ZFS and here's what I did.
>
> I created a storage pool with several drives. I unplugged 3 out of 5 drives
> from the array, currently:
>
> NAME     STATE     READ WRITE CKSUM
> gpool
On 05/08/10 04:38 PM, Giovanni wrote:
Hi guys,
I have a quick question, I am playing around with ZFS and here's what I did.
I created a storage pool with several drives. I unplugged 3 out of 5 drives
from the array, currently:
NAME     STATE     READ WRITE CKSUM
gpool
Hi guys,
I have a quick question, I am playing around with ZFS and here's what I did.
I created a storage pool with several drives. I unplugged 3 out of 5 drives
from the array, currently:
NAME     STATE     READ WRITE CKSUM
gpool    UNAVAIL      0     0     0  insufficient replicas
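For context, a rough recovery sequence once the unplugged drives are reattached
might look like the following (pool and device names are only examples):

    # see which devices the pool currently considers missing
    zpool status gpool
    # tell ZFS that a reattached device is available again
    zpool online gpool c1t2d0
    # clear the accumulated error counters once the pool is healthy
    zpool clear gpool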
AFAIK, zfs should be able to protect against (if the pool is redundant), or at
least detect, corruption from the point that it is handed the data to the point
that the data is written to permanent storage, _provided_that_ the system
has ECC RAM (so it can detect and often correct random background bit flips
in memory).
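A scrub is the usual way to exercise that end-to-end checksumming and confirm
whether anything was silently corrupted; a minimal check, assuming a pool named
tank:

    # read back and verify every allocated block against its checksum
    zpool scrub tank
    # list any files with unrecoverable checksum errors
    zpool status -v tank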
Brandon High wrote:
"On Mon, May 3, 2010 at 4:33 PM, Michael Shadle wrote:
Is ZFS doing it's magic checksumming and whatnot on this share, even
though it is seeing junk data (NTFS on top of iSCSI...) or am I not
getting any benefits from this setup at all (besides thin
provisioning, things l
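For what it's worth, the checksums are computed per block on the zvol itself,
regardless of what the initiator formats on top of it; a sketch of how one might
confirm that on such a volume (the dataset name is hypothetical):

    # create a sparse 100G volume to export over iSCSI
    zfs create -s -V 100G tank/ntfs-lun
    # checksums (and optionally compression) still apply to every block written
    zfs get checksum,compression tank/ntfs-lun

Whether corruption can be repaired rather than merely detected still depends on
the pool underneath being redundant.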
On Fri, 7 May 2010, Kris Kasner wrote:
One thing my customers noticed immediately was a reduction in "free"
memory as reported by 'top'. By way of explaining that ZFS keeps
its cache in the kernel and not on the freelist, it became apparent
that memory is being used disproportionately to the filesy
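One way to see where the "missing" free memory actually went is to read the ARC
statistics directly, for example:

    # current ARC size in bytes
    kstat -p zfs:0:arcstats:size
    # target and maximum ARC sizes
    kstat -p zfs:0:arcstats:c zfs:0:arcstats:c_max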
On 05/07/10 15:05, Kris Kasner wrote:
Is ZFS swap cached in the ARC? I can't find enough data in the ZFS filesystems
to account for the amount of ARC in use unless the swap files are being cached,
which seems a bit redundant.
There's nothing to explicitly disable caching just for swap; from zfs's
point of view
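Since ZFS swap is just a zvol, the generic per-dataset cache controls do apply
to it; one possible tweak, not something confirmed in this thread and with the
zvol name only assumed, would be:

    # cache only metadata for the swap zvol so swapped pages don't also sit in the ARC
    zfs set primarycache=metadata rpool/swap
    zfs get primarycache rpool/swap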
Hi Folks..
We have started to convert our Veritas clustered systems over to ZFS root to
take advantage of the extreme simplification of using Live Upgrade. Moving the
data of these systems off VxVM and VxFS is not in scope for reasons too numerous
to go into.
One thing my customers noticed
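For readers who haven't used it, the Live Upgrade flow that makes ZFS root so
attractive is roughly the following (the boot environment name is just an example):

    # create an alternate boot environment as a snapshot/clone of the running one
    lucreate -n newBE
    # after patching or upgrading the inactive BE, make it the one to boot
    luactivate newBE
    init 6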
> - Poweroff with USB drive connected or removed: Solaris will not boot
> unless the USB drive is connected, and in some cases it needs to be attached
> to the exact same USB port as when last attached. Is this a bug?
Possibly hitting this?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug
On 05/06/2010 11:00 AM, Bruno Sousa wrote:
Going on the specs it seems to me that if this device has a good price
it might be quite useful for caching purposes on ZFS based storage.
Not bad, they claim 1TB transfer in 47 minutes:
http://www.google.com/search?hl=en&q=1TB%2F47+minutes
That's
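As a sanity check on that figure:

    1 TB / 47 min ≈ 1,000,000 MB / 2,820 s ≈ 355 MB/s sustained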
On 05/07/2010 11:08 AM, Edward Ned Harvey wrote:
I'm going to continue encouraging you to stay "mainstream," because what
people do the most is usually what's supported the best.
If I may be the contrarian, I hope Matt keeps experimenting with this,
files bugs, and they get fixed. His use
On Fri, May 7, 2010 at 2:51 AM, Matt Keenan wrote:
> - Poweroff with USB drive connected or removed: Solaris will not boot unless
> the USB drive is connected, and in some cases it needs to be attached to the
> exact same USB port as when last attached. Is this a bug?
There's a known issue in recent
On Fri, May 7, 2010 at 8:07 AM, Emily Grettel wrote:
> Hi,
>
> I've had my RAIDz volume working well on SNV_131 but it has come to my
> attention that there have been some read issues with the drives. Previously I
> thought this was a CIFS problem but I'm noticing that when transferring files
> or
On Fri, 7 May 2010, Gabriele Bulfon wrote:
I have read in the "zfs best practices" articles that slicing is not suggested
(unless you just want to create one slightly undersized slice per disk, to allow
for small differences in disk size if one of them ever has to be replaced).
The
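The usual advice boils down to handing ZFS whole disks so it can label them
itself and safely enable the disk write cache; a minimal sketch (device names
are examples):

    # whole-disk vdevs: ZFS writes an EFI label and manages the write cache
    zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0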
> On 06/05/2010 21:07, Erik Trimble wrote:
>> VM images contain large quantities of executable files, most of which
>> compress poorly, if at all.
>
> What data are you basing that generalisation on?
note: I can't believe someone said that.
warning: I just detected a fast rise time on my peda
> From: Matt Keenan [mailto:matt...@opensolaris.org]
>
> After some playing around I've noticed some kinks particularly around
> booting.
I'm going to continue encouraging you to stay "mainstream," because what
people do the most is usually what's supported the best. I think you'll
have a mor
On Thu, May 06, 2010 at 07:46:49PM -0700, Rob wrote:
> Hi Gary,
> I would not remove this line in /etc/system.
> We have been combatting this bug for a while now on our ZFS file
> system running JES Commsuite 7.
>
> I would be interested in finding out how you were able to pin point
> the problem.
On Fri, May 7, 2010 04:32, Darren J Moffat wrote:
> Remember also that unless you are very CPU bound you might actually
> improve performance by enabling compression. This isn't new to ZFS;
> people (myself included) used to do this back in the MS-DOS days with
> Stacker and DoubleSpace.
CPU has
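Trying it is cheap, since compression only affects newly written blocks and the
savings are easy to measure afterwards; for example, assuming a dataset tank/data:

    # compression=on uses lzjb, the lightweight default algorithm
    zfs set compression=on tank/data
    # after rewriting some data, see how much space was actually saved
    zfs get compressratio tank/data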
Hi,
I've had my RAIDz volume working well on SNV_131 but it has come to my
attention that there have been some read issues with the drives. Previously I
thought this was a CIFS problem but I'm noticing that when transferring files or
uncompressing some fairly large 7z (1-2 GB) files (or even s
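A couple of commands that usually help separate a flaky drive from a CIFS
problem (the pool name is only an example):

    # per-device soft/hard/transport error counts as seen by the driver
    iostat -En
    # report only pools that are not healthy
    zpool status -x
    # per-vdev read/write/checksum error counters and any affected files
    zpool status -v tank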
Hi, all,
I think I'm missing a concept with import and export. I'm working on
installing a Nexenta b134 system under Xen, and I have to run the installer
under hvm mode, and then I'm trying to get it back up under pv mode. In that
process the controller names change, and that's where I'm getting
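The part that usually trips people up is that an import finds the pool by
scanning the device labels, not by the old controller names, so the rename is
harmless; the basic round trip looks like this (pool name is an example):

    # in the hvm install, release the pool cleanly
    zpool export tank
    # after rebooting under pv, rescan the devices and import under the new names
    zpool import tank
    # if the pool was not exported cleanly, force the import
    zpool import -f tank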
After some playing around I've noticed some kinks particularly around
booting.
Some scenarios :
- Poweroff with USB drive connected or removed: Solaris will not boot unless
the USB drive is connected, and in some cases it needs to be attached to the
exact same USB port as when last attached. Is this a bug?
> On 06/05/2010 21:07, Erik Trimble wrote:
>> VM images contain large quantities of executable files, most of which
>> compress poorly, if at all.
>
> What data are you basing that generalisation on?
>
> Look at these simple examples for libc on my OpenSolaris machine:
>
> 1.6M /usr/lib/libc.so.1*
>
On 06/05/2010 21:07, Erik Trimble wrote:
VM images contain large quantities of executable files, most of which
compress poorly, if at all.
What data are you basing that generalisation on?
Look at these simple examples for libc on my OpenSolaris machine:
1.6M /usr/lib/libc.so.1*
636K /tmp/l
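The comparison is easy to reproduce; one way to get a comparable number (the
poster's exact method isn't shown in this excerpt) would be:

    # copy the library somewhere writable and compress it
    cp /usr/lib/libc.so.1 /tmp/libc.so.1
    gzip -9 /tmp/libc.so.1
    ls -lh /usr/lib/libc.so.1 /tmp/libc.so.1.gz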
Hi, I would love some suggestions for an implementation I'm going to deploy.
I will have a machine with 4x1TB disks that is going to be a file server for both
Windows and OS X clients through SMB/CIFS.
I have read in the "zfs best practices" articles that slicing is not suggested
(unless you want to just creat
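For that kind of mixed Windows/OS X setup, a common starting point is a single
raidz pool of whole disks with the CIFS-friendly properties set at creation time;
a minimal sketch (all names are examples):

    # one raidz vdev across the four whole disks
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
    # casesensitivity can only be set when the filesystem is created
    zfs create -o casesensitivity=mixed -o nbmand=on tank/share
    # share it over the in-kernel CIFS server
    zfs set sharesmb=on tank/share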
Thanks for your suggestions :)
Another thing comes to mind (especially after a past bad experience with a
buggy non-ZFS storage backend).
Usually (correct me if I'm wrong) the storage will have redundancy on its
zfs volumes (be it mirror or raidz).
Once the redundant volume is exposed as