Hi,
I created a zpool with a 64k recordsize and enabled dedup on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over NFS from a Windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME   SIZE   ALLOC   FREE   CAP
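Once data has been copied in, the overall savings can be read back from the pool; a minimal sketch using the TestPool name from the commands above:
zpool get dedupratio TestPool      # overall dedup ratio reported for the pool
zfs get dedup,recordsize TestPool  # confirm the properties are actually in effect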
On Dec 4, 2009, at 9:33, James Risner wrote:
It was created on AMD64 FreeBSD with 8.0RC2 (which was version 13 of
ZFS iirc.)
At some point I knocked it out (exported it) somehow; I don't remember
doing so intentionally. So I can't run commands like zpool replace,
since there are no pools.
Ha
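If the pool was merely exported, it should still be discoverable; a minimal sketch (the pool name is a placeholder, and -f is only needed if the pool looks in use by another system):
zpool import            # scan attached devices and list importable pools
zpool import mypool     # import the pool by name
zpool import -f mypool  # force the import if it was last used on another host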
On Mon, Dec 14, 2009 at 01:29:50PM +0300, Andrey Kuzmin wrote:
> On Mon, Dec 14, 2009 at 4:04 AM, Jens Elkner
> wrote:
...
> > Problem is pool1 - user homes! So GNOME/firefox/eclipse/subversion/soffice
...
> Flash-based read cache should help here by minimizing (metadata) read
> latency, and flash
Thanks.
I've decided now to only post when:
1) I have my zfs pool back
or
2) I give up
I should note that there are periods of time when I can ping my server
(rarely), but most of the time I cannot. I have not been able to ssh into it, and
the console is hung (minus the little blinking cursor).
>> Is there a better solution to this problem? What if the machine crashes?
>>
>
> Crashes are abnormal conditions. If it crashes you should fix the problem to
> avoid future crashes, and you will probably need to clear the pool dir
> hierarchy prior to importing the pool.
Are you serious? I really hope
I am also accustomed to seeing diluted properties such as compressratio. IMHO
it could be useful (or perhaps just familiar) to see a diluted dedup ratio for
the pool, or maybe see the size / percentage of data used to arrive at
dedupratio.
As Jeff points out, there is enough data available to
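As an illustration of what a diluted, pool-wide figure could look like (a back-of-the-envelope sketch with made-up numbers, not output from any zpool command): if the deduped datasets reference 100 GB of logical data stored in 25 GB of physical space (dedupratio 4.00x), and the rest of the pool holds another 375 GB, a diluted ratio would be (100 + 375) / (25 + 375):
echo 'scale=4; (100 + 375) / (25 + 375)' | bc   # -> 1.1875, i.e. about 1.19x for the whole pool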
Sorry if you got this twice, but I never saw it appear on the alias.
OK, today I played with a J4400 connected to a Txxx server running S10 10/09.
First off, read the release notes. I spent about 4 hours pulling my hair out
because I could not get stmsboot to work until we read in the release no
>
> Hmm, dunno. I wouldn't set anything but a scratch file system to
> dedup=on. Anything of even slight significance is set to dedup=verify.
Why? Are you saying this because the ZFS dedup code is relatively new? Or
because you think there's some other problem/disadvantage to it? We're
pl
On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin
wrote:
>
> Right, but 'verify' seems to be 'extreme safety' and thus rather rare
> use case.
Hmm, dunno. I wouldn't set anything but a scratch file system to
dedup=on. Anything of even slight significance is set to dedup=verify.
> Saving cycles lost
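For reference, both behaviours are plain dataset properties; a minimal sketch (the dataset names here are placeholders, not from this thread):
zfs set dedup=on tank/scratch        # trust the block checksum alone when sharing blocks
zfs set dedup=verify tank/important  # byte-compare candidate blocks before sharing them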
Hi Cesare,
According to our CR 6524163, this problem was fixed in PowerPath 5.0.2,
but then the problem reoccurred.
According to the EMC PowerPath Release notes, here:
www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf
This problem is fixed in 5.2 SP1.
I would review the related ZF
On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote:
> ZFS deduplication is block-level, so to deduplicate one needs data
> broken into blocks to be written. With compression enabled, you don't
> have these until data is compressed. Looks like a waste of cycles indeed,
> but ...
ZFS compressi
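Both features are ordinary per-dataset properties and can be combined; since checksums (and therefore dedup) are computed on the blocks as they are finally written, dedup effectively operates on the compressed blocks. A minimal sketch (the dataset name is a placeholder):
zfs create -o compression=gzip-9 -o dedup=on tank/backups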
On Dec 14, 2009, at 10:18 AM, Markus Kovero wrote:
How can you set up these values in FMA?
UTSL
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/fm/modules/common/zfs-diagnosis/zfs_de.c#775
Standard caveats for adjusting timeouts apply.
-- richard
How can you set up these values in FMA?
Yours
Markus Kovero
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of R.G. Keen
Sent: 14 December 2009 20:14
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] hard driv
> FMA (not ZFS, directly) looks for a number of failures over a period of
> time. By default that is 10 failures in 10 minutes. If you have an error
> that trips on TLER, the best it can see is 2-3 failures in 10 minutes.
> The symptom you will see is that when these long timeouts happen,
>
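To see what the diagnosis engine is actually counting, the fault management tools can be queried directly; a minimal sketch (no thread-specific names assumed):
fmstat -m zfs-diagnosis   # per-module statistics for the ZFS diagnosis engine
fmdump -e                 # summary of the logged error telemetry
fmdump -eV                # full detail, including which vdev the errors are against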
Hi James,
What are the commands that are used to reboot this server?
Also, you can use the fmdump -eV command to review any underlying
hardware problems. You might see some clues about what is going
on with c7t2d0.
Thanks,
Cindy
On 12/13/09 16:46, James Nelson wrote:
A majority of the time w
Hi,
Martin Uhl wrote:
> obviously that will fail.
So AFAIK those directories will be created on mount but not removed on unmount.
Good point. I was not aware of this. Will check with engineering.
The problem is not that exporting will not remove dirs (which I doubt it should) but moun
On Dec 13, 2009, at 11:28 PM, Yaverot wrote:
Been lurking for about a week and a half and this is my first post...
--- bfrie...@simple.dallas.tx.us wrote:
On Fri, 11 Dec 2009, Bob wrote:
Thanks. Any alternatives, other than using enterprise-level drives?
You can of course use normal cons
> If you umount a ZFS FS that has some other FSes underneath it, then the
> mount points for the "child" FSes need to be created to have those
> mounted; that way, if you don't export the pool, the dirs won't be deleted,
> and next time you import the pool the FS will fail to mount because your
> m
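A minimal sketch of that failure mode and two common ways out (pool and dataset names are placeholders, not from this thread):
zpool import tank          # import succeeds, but...
zfs mount -a               # ...the affected FS reports "directory is not empty"
# If the mountpoint only holds stale child-mountpoint directories, clear it:
rm -rf /tank/home/*        # only if you are sure nothing real lives there
zfs mount tank/home
# Or, on Solaris, force an overlay mount on top of the non-empty directory:
zfs mount -O tank/home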
>Hi, if someone running 129 could try this out, turn off compression in your
>pool, mkfile 10g /pool/file123, see used space and then remove the file and
>see if it makes used space available again. I'm having trouble with this,
>reminds me of similar bug that occurred in 111-release.
Any auto
Hi, if someone running 129 could try this out: turn off compression in your
pool, mkfile 10g /pool/file123, check the used space, and then remove the file
and see if it makes the used space available again. I'm having trouble with
this; it reminds me of a similar bug that occurred in the 111 release.
Yours
Markus
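When the space does not come back, it is worth ruling out the usual suspects before calling it a bug; a minimal sketch (the pool name is a placeholder):
zfs list -o space pool         # breaks USED down into snapshots, datasets, and children
zfs list -t snapshot -r pool   # e.g. auto-snapshots can keep the deleted file referenced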
Hi,
Martin Uhl wrote:
We opened a Support Case (Case ID 71912304) which after some discussion came to the
"conclusion" that we should not use /etc/reboot for rebooting.
Yes. You are using "/etc/reboot" which is the same as calling
"/usr/sbin/halt":
% ls -l /etc/reboot
lrwxrwxrwx 1 ro
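For a clean reboot on Solaris, the commonly recommended alternatives go through init or shutdown, which run the shutdown scripts/SMF methods that halt and reboot skip; a minimal sketch:
init 6               # clean reboot via the normal run-level transition
shutdown -y -g0 -i6  # same thing, with the grace period and prompts suppressed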
> There was an announcement made in November about auto
> snapshots being made obsolete in build 128
That thread (which I know well) talks about the replacement of the
implementation, while retaining (the majority of) the behaviour and
configuration interface. The old implementation had
We are also running into this bug.
Our system is a Solaris 10u4
SunOS sunsystem9 5.10 Generic_127112-10 i86pc i386 i86pc
ZFS version 4
We opened a Support Case (Case ID 71912304) which after some discussion came to
the "conclusion" that we should not use /etc/reboot for rebooting.
This leads me
On Mon, 2009-12-07 at 23:31 +0100, Martijn de Munnik wrote:
> On Dec 7, 2009, at 11:23 PM, Daniel Carosone wrote:
>
> >> but if you attempt to "add" a disk to a redundant
> >> config, you'll see an error message similar [..]
> >>
> >> Doesn't the "mismatched replication" message help?
> >
> > No
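For context, a minimal sketch of the message in question and the commands that avoid it (pool and device names are placeholders):
zpool add tank c1t3d0
# -> invalid vdev specification: mismatched replication level:
#    pool uses mirror and new vdev is disk
zpool add tank mirror c1t3d0 c1t4d0   # grow a mirrored pool with another mirror pair
zpool attach tank c1t0d0 c1t3d0       # or add a side to an existing mirror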
On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston wrote:
> Thanks for the info, Alexander... I will test this out. I'm just wondering
> what it's going to see after I install PowerPath. Since each drive will
> have 4 paths, plus the PowerPath... after doing a "zpool import" how will I
> force it to
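One way to steer the import toward the multipathed pseudo-devices is to point zpool import at a directory that contains only those device nodes; a minimal sketch (directory, device, and pool names are illustrative, not from this thread):
mkdir /emcdevs
ln -s /dev/dsk/emcpower0c /emcdevs/   # link only the PowerPath pseudo-devices
zpool import -d /emcdevs mypool       # restrict the device scan to that directory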
Thanks for the update. It's no help to you, of course, but I'm watching your
progress with interest. Your updates are very much appreciated.
On Mon, Dec 14, 2009 at 4:04 AM, Jens Elkner
wrote:
> On Sat, Dec 12, 2009 at 04:23:21PM +, Andrey Kuzmin wrote:
>> As to whether it makes sense (as opposed to two distinct physical
>> devices), you would have read cache hits competing with log writes for
>> bandwidth. I doubt both will be ple
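If the budget allows two devices, that contention goes away by giving each role its own device; a minimal sketch (device names are placeholders, pool1 as in the thread):
zpool add pool1 log c2t0d0     # dedicated slog device for synchronous writes
zpool add pool1 cache c2t1d0   # dedicated L2ARC device for reads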
There was an announcement made in November about auto snapshots being made
obsolete in build 128, so I assume major changes are afoot:
http://www.opensolaris.org/jive/thread.jspa?messageID=437516&tstart=0#437516
On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote:
I enabled compression on a zfs filesystem with compression=gzip-9 - i.e. fairly
slow compression - this stores backups of databases (which compress fairly
well).
The next question is: Is the CRC on the disk based on the uncompressed data