On 18.12.09 07:13, Jack Kielsmeier wrote:
Ok, my console is 100% completely hung, not gonna be able to enter any
commands when it freezes.
I can't even get the numlock light to change its status.
This time I even plugged in a PS/2 keyboard instead of USB thinking maybe it
was USB dying during
On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
As a specific example of two devices with dramatically different
performance for sub-4k transfers, has anyone done any ZFS benchmarks
between the X25-E and the F20 that they can share?
I am particularly interested in zvol performance with a blocks
> On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
> >The question behind the question is, given the
> really bad things that
> >can happen performance-wise with writes that are not
> 4k aligned when
> >using flash devices, is there any way to insure that
> any and all
> >writes from ZFS are 4k alig
Ok, this is the script I am running (as a background process). This script
doesn't matter much, it's just here for reference, as I'm running into problems
just running the savecore command while the zpool import is running.
#!/
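The script body is cut off above; purely as a hypothetical illustration of that kind of background capture loop (this is not the poster's actual script, and it assumes bash and a writable /var/crash directory):
#!/usr/bin/bash
# hypothetical sketch only -- the original script was not included
# try to grab a live crash dump every few minutes while the
# 'zpool import' runs in another session
while true; do
    date
    savecore -L /var/crash/`hostname`   # live dump of the running kernel
    sleep 300
done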
Ok, my console is 100% completely hung, not gonna be able to enter any commands
when it freezes.
I can't even get the numlock light to change its status.
This time I even plugged in a PS/2 keyboard instead of USB thinking maybe it
was USB dying during the hang, but not so.
I have hard reboote
My ARC is ~3GB.
I'm doing a test that copies 10GB of data to a volume where the blocks
should dedupe 100% with existing data.
The first time, the test runs at <5 MB/sec and seems to average a 10-30% ARC
*miss* rate, with <400 ARC reads/sec.
When things are working at disk bandwidth, I'm getting 3-5% ARC misse
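For anyone wanting to watch the same thing, the ARC counters are exposed as kstats; a minimal sketch, with no pool-specific assumptions:
# snapshot of the current ARC size (bytes) and cumulative hit/miss counters
kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses
# or sample the same counters once a second while the copy runs
kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses 1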
I used the defaults while creating the zpool with one disk drive. I guess it is
a RAID 0 configuration.
Thanks,
Giri
It looks like the kernel is using a lot of memory, which may be part
of the performance problem. The ARC has shrunk to 1G, and the kernel
is using up over 5G.
I'm doing a send|receive of 683G of data. I started it last night
around 1am, and as of right now it's only sent 450GB. That's about
8.5MB/
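A hedged aside on where such numbers come from on Solaris: ::memstat breaks physical memory down by consumer, and the ARC's current and target sizes are visible as kstats.
# break physical memory down by consumer (kernel, anon, page cache, free)
echo ::memstat | mdb -k
# current ARC size and the target size it is shrinking toward
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c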
> Thanks for the response Adam.
>
> Are you talking about ZFS list?
>
> It displays 19.6 as allocated space.
>
> What does ZFS treat as a hole, and how does it identify one?
ZFS will compress blocks of zeros down to nothing and treat them like
sparse files. 19.6 is pretty close to your computed value. Does
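A quick way to see the difference yourself, sketched with a hypothetical file path:
# apparent size (what ls -l shows) vs. blocks actually allocated
# (first column of ls -ls, in 512-byte units)
ls -ls /tank/data/disk.img
du -h /tank/data/disk.img
# space ZFS has actually charged to the filesystem
zfs list -o name,used,referenced tank/data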
On Thu, Dec 17, 2009 at 3:10 PM, Anil wrote:
> If you have another partition with enough space, you could technically just
> do:
>
> mv src /some/other/place
> mv /some/other/place src
>
> Anyone see a problem with that? Might be the best way to get it de-duped.
You'd lose any existing snapshots
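If keeping the snapshots matters, a send/receive rewrite is one hedged alternative to the mv trick; the dataset names below are hypothetical and dedup is assumed to already be enabled on the pool:
# rewrite the data through send/receive so it lands deduplicated;
# -R carries the existing snapshots along with it
zfs snapshot -r tank/src@rewrite
zfs send -R tank/src@rewrite | zfs receive tank/src.new
# after verifying the copy, swap the datasets
zfs rename tank/src tank/src.old
zfs rename tank/src.new tank/src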
Your parenthetical comments here raise some concerns, or at least eyebrows,
with me. Hopefully you can lower them again.
> compress, encrypt, checksum, dedup.
> (and you need to use zdb to get enough info to see the
> leak - and that means you have access to the raw devices)
An attacker with
> Hi Giridhar,
>
> The size reported by ls can include things like holes
> in the file. What space usage does the zfs(1M)
> command report for the filesystem?
>
> Adam
>
> On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
>
> > Hi,
> >
> > Reposting as I have not gotten any response.
> >
> >
If you have another partition with enough space, you could technically just do:
mv src /some/other/place
mv /some/other/place src
Anyone see a problem with that? Might be the best way to get it de-duped.
On Wed, Dec 16, 2009 at 6:17 AM, Steven Sim wrote:
> r...@sunlight:/root# zfs send myplace/myd...@prededup | zfs receive -v
> myplace/mydata
> cannot receive new filesystem stream: destination 'myplace/fujitsu' exists
> must specify -F to overwrite it
Try something like this:
zfs create -o mount
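The suggestion above is cut off; as a hedged sketch of the usual way around the "destination exists" error, receive into a dataset name that does not exist yet (the new name below is hypothetical) and swap names afterwards:
# the receive creates the new dataset, writing the blocks through dedup
zfs send myplace/mydata@prededup | zfs receive myplace/mydata.dedup
# once the copy checks out, swap the names
zfs rename myplace/mydata myplace/mydata.old
zfs rename myplace/mydata.dedup myplace/mydata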
Hi Doug,
The pool and file system version upgrades allow you to access new
features that are available for a particular Solaris release. For
example, if you upgrade your system to Solaris 10 10/09, then you
would need to upgrade your pool version to access the pool features
available in the Solar
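The commands involved, sketched with a hypothetical pool name:
# list the pool versions this release supports and what each adds
zpool upgrade -v
# show pools running older versions, then upgrade a specific pool
zpool upgrade
zpool upgrade tank
# filesystem versions are tracked and upgraded separately
zfs upgrade -v
zfs upgrade -a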
I have observed the opposite, and I believe that all writes are slow to my
dedup'd pool.
I used local rsync (no ssh) for one of my migrations (so it was restartable,
as it took *4 days*), and the writes were slow just like zfs recv.
I have not seen fast writes of real data to the deduped volume,
On Thu, Dec 17, 2009 at 7:11 AM, Edward Ned Harvey
wrote:
> And I've heard a trend of horror stories, that zfs has a tendency to implode
> when it's very full. So try to keep your disks below 90%.
I've taken to creating an unmounted empty filesystem with a
reservation to prevent the zpool from f
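A minimal sketch of that trick, with hypothetical names and sizes:
# set aside space that normal datasets cannot consume, so the pool
# stops short of 100% full
zfs create -o reservation=50G -o mountpoint=none tank/spare
zfs get reservation tank/spare
# freeing it later in an emergency is a single command
zfs set reservation=none tank/spare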
fmdump shows errors on a different drive, and none on the one that has this
slow read problem:
Nov 27 2009 20:58:28.670057389 ereport.io.scsi.cmd.disk.recovered
nvlist version: 0
class = ereport.io.scsi.cmd.disk.recovered
ena = 0xbeb7f4dd531
detector = (embedded nvlist
I'm running Solaris 10 update 8 (10/09). I started out using an older
version of Solaris and have upgraded a few times. I have used "zpool
upgrade" on the pools I have as new versions become available after
kernel updates.
I see now when I run "zfs upgrade" that pools I created long ago are
at v
On Thu, Dec 17, 2009 at 12:30:29PM -0800, Stacy Maydew wrote:
> So thanks for that answer. I'm a bit confused though if the dedup is
> applied per zfs filesystem, not zpool, why can I only see the dedup on
> a per pool basis rather than for each zfs filesystem?
>
> Seems to me there should be a wa
The commands "zpool list" and "zpool get dedup " both show a ratio of
1.10.
So thanks for that answer. I'm a bit confused, though: if dedup is applied per
ZFS filesystem, not per zpool, why can I only see the dedup ratio on a per-pool
basis rather than for each ZFS filesystem?
Seems to me there sho
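As far as I can tell the ratio itself is only tracked pool-wide; a hedged sketch of what can be queried at each level, with hypothetical names:
# pool level: the DEDUP column of zpool list and the dedupratio property
zpool list tank
zpool get dedupratio tank
# dataset level: you can see whether dedup is enabled and how much space is
# referenced, but there is no per-dataset dedup ratio
zfs get dedup,used,referenced tank/backups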
On Thu, Dec 17, 2009 at 6:14 PM, Kjetil Torgrim Homme
wrote:
> Darren J Moffat writes:
>> Kjetil Torgrim Homme wrote:
>>> Andrey Kuzmin writes:
>>>
Downside you have described happens only when the same checksum is
used for data protection and duplicate detection. This implies sha256,
On Thu, Dec 17, 2009 at 10:57 AM, Stacy Maydew wrote:
> When I sent one backup set to the filesystem, the usage reported by "zfs
> list" and "zfs get used " are the expected values based on the data
> size.
>
> When I store a second copy, which should dedupe entirely, the zfs commands
> report
On Fri 18/12/09 07:57, "Stacy Maydew" stacy.may...@sun.com sent:
> I'm trying to see if zfs dedupe is effective on our datasets, but I'm
> having a hard time figuring out how to measure the space saved.
> When I sent one backup set to the filesystem, the usage reported by
> "zfs list" and "zfs g
Hi Giridhar,
The size reported by ls can include things like holes in the file. What space
usage does the zfs(1M) command report for the filesystem?
Adam
On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
> Hi,
>
> Reposting as I have not gotten any response.
>
> Here is the issue. I created
On Thu, Dec 17, 2009 at 8:57 PM, Stacy Maydew wrote:
> I'm trying to see if zfs dedupe is effective on our datasets, but I'm having
> a hard time figuring out how to measure the space saved.
>
> When I sent one backup set to the filesystem, the usage reported by "zfs
> list" and "zfs get used "
I'm trying to see if zfs dedupe is effective on our datasets, but I'm having a
hard time figuring out how to measure the space saved.
When I sent one backup set to the filesystem, the usage reported by "zfs list"
and "zfs get used " is the expected value based on the data size.
When I store
On Thu, Dec 17, 2009 at 03:32:21PM +0100, Kjetil Torgrim Homme wrote:
> if the hash used for dedup is completely separate from the hash used for
> data protection, I don't see any downsides to computing the dedup hash
> from uncompressed data. why isn't it?
Hash and checksum functions are slow (h
-Original Message-
From: Bone, Nick
Sent: 16 December 2009 16:33
To: oab
Subject: RE: [zfs-discuss] Import a SAN cloned disk
Hi
I know that EMC don't recommend adding a SnapView snapshot to the original
host's Storage Group, although it is not prevented.
I tried this just now & as
On Thu, 17 Dec 2009, Kjetil Torgrim Homme wrote:
Compression requires CPU, actually quite a lot of it. Even with the lean and
mean lzjb, you will not get much more than 150 MB/s per core or something like
that. So, if you're copying a 10 GB image file, it will take a minute or two,
just to com
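(Rough arithmetic behind that estimate: 10 GB is roughly 10,240 MB, and 10,240 MB / 150 MB/s is about 68 seconds of CPU time on a single core, i.e. on the order of a minute before any disk time is counted.)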
Kjetil Torgrim Homme wrote:
I don't know how tightly interwoven the dedup hash tree and the block
pointer hash tree are, or if it is at all possible to disentangle them.
At the moment I'd say very interwoven, by design.
conceptually it doesn't seem impossible, but that's easy for me to
say, with
Tim,
Use the fmdump -eV command to see what disk errors are
reported through the fault management system and see what
output iostat -En might provide.
Cindy
On 12/16/09 23:41, Tim wrote:
Hmm, not seeing the same slowdown when I boot from the Samsung EStool CD and
run a diag which performs a
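For reference, the two commands as they would be run (as root), with no extra options assumed:
# full detail of the error telemetry the fault manager has logged
fmdump -eV | more
# per-device soft/hard/transport error counters and device identity
iostat -En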
Hi, I have a zfs volume that's exported via iscsi for my wife's Mac to
use for Time Machine.
I've just built a new machine to house my "big" pool, and installed
build 129 on it. I'd like to start using COMSTAR for exporting the
iscsi targets, rather than the older iscsi infrastructure.
I've
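A hedged sketch of the COMSTAR route for an existing zvol on build 129; the zvol path is hypothetical and the GUID placeholder must be replaced with whatever sbdadm prints:
# enable the storage framework and register the zvol as a logical unit
svcadm enable stmf
sbdadm create-lu /dev/zvol/rdsk/bigpool/tm-vol   # note the GUID it prints
# make the LU visible (restrict with host/target groups later if desired)
stmfadm add-view <GUID-from-sbdadm>
# bring up the iSCSI target port provider and create a target
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target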
Darren J Moffat writes:
> Kjetil Torgrim Homme wrote:
>> Andrey Kuzmin writes:
>>
>>> Downside you have described happens only when the same checksum is
>>> used for data protection and duplicate detection. This implies sha256,
>>> BTW, since fletcher-based dedupe has been dropped in recent build
> Hi all,
> I need to move a filesystem off of one host and onto another
> smaller
> one. The fs in question, with no compression enabled, is using 1.2 TB
> (refer). I'm hoping that zfs compression will dramatically reduce this
> requirement and allow me to keep the dataset on an 800 GB sto
> I'm willing to accept slower writes with compression enabled, par for
> the course. Local writes, even with compression enabled, can still
> exceed 500MB/sec, with moderate to high CPU usage.
> These problems seem to have manifested after snv_128, and seemingly
> only affect ZFS receive speeds. L
Kjetil Torgrim Homme wrote:
Andrey Kuzmin writes:
Downside you have described happens only when the same checksum is
used for data protection and duplicate detection. This implies sha256,
BTW, since fletcher-based dedupe has been dropped in recent builds.
if the hash used for dedup is comple
Andrey Kuzmin writes:
> Downside you have described happens only when the same checksum is
> used for data protection and duplicate detection. This implies sha256,
> BTW, since fletcher-based dedupe has been dropped in recent builds.
if the hash used for dedup is completely separate from the has
Jacob Ritorto wrote:
Hi all,
I need to move a filesystem off of one host and onto another smaller
one. The fs in question, with no compression enabled, is using 1.2 TB
(refer). I'm hoping that zfs compression will dramatically reduce this
requirement and allow me to keep the dataset on a
Hi all,
I need to move a filesystem off of one host and onto another smaller
one. The fs in question, with no compression enabled, is using 1.2 TB
(refer). I'm hoping that zfs compression will dramatically reduce this
requirement and allow me to keep the dataset on an 800 GB store. Does
th
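One way this is commonly done, sketched with hypothetical pool and host names: create the destination with compression enabled before receiving, so the blocks are compressed as they land, then check compressratio to see whether 1.2 TB really fits in 800 GB.
# on the smaller host: a compressed parent for the incoming data
zfs create -o compression=gzip smallpool/migrated
# from the larger host: send a snapshot; the received child inherits compression
zfs snapshot bigpool/data@move
zfs send bigpool/data@move | ssh smallhost zfs receive smallpool/migrated/data
# afterwards, see how well it compressed
zfs get compressratio,used,referenced smallpool/migrated/data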
Read this: http://wiki.dovecot.org/MailLocation/SharedDisk
If you are running Dovecot on the Thumper: mmap has issues under old versions
of ZFS (not sure if it is fixed in Sol10), so switch it off with
mmap_disable = yes, as per the URL above, when serving mail over NFS.
Ensure NFS is tuned to 32K read and
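Hedged fragments for the two suggestions above; the export and mount point are hypothetical:
# dovecot.conf: avoid mmap on ZFS/NFS-backed mail storage
mmap_disable = yes
# example NFS client mount with 32K read/write sizes
mount -o vers=3,rsize=32768,wsize=32768 thumper:/export/mail /var/mail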
On Thu, Dec 17, 2009 at 09:14, Eric D. Mudama wrote:
> On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
>
>> The question behind the question is, given the really bad things that can
>> happen performance-wise with writes that are not 4k aligned when using flash
>> devices, is there any way to insure
The downside you have described happens only when the same checksum is
used for data protection and duplicate detection. This implies sha256,
BTW, since fletcher-based dedupe has been dropped in recent builds.
On 12/17/09, Kjetil Torgrim Homme wrote:
> Andrey Kuzmin writes:
>> Darren J Moffat wrote:
On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
The question behind the question is, given the really bad things that
can happen performance-wise with writes that are not 4k aligned when
using flash devices, is there any way to insure that any and all
writes from ZFS are 4k aligned?
Some flash d
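One hedged way to check what block sizes are actually in play, with a hypothetical pool name (this only inspects the current layout, it does not force alignment):
# the ashift recorded for the pool's vdevs (2^ashift = sector size ZFS assumes)
zdb -C tank | grep ashift
# dataset and zvol block sizes
zfs get recordsize tank/fs
zfs get volblocksize tank/vol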
On Wed, Dec 16 at 22:41, Tim wrote:
Hmm, not seeing the same slowdown when I boot from the Samsung EStool CD and
run a diag which performs a surface scan...
could this still be a hardware issue, or possibly something with the Solaris
data format on the disk?
Rotating drives often have variou