Hi list,
Experimental question ...
Imagine a pool made entirely of SSDs: is there any benefit to adding an SSD cache to
it? What would the real impact be?
Thx.
--
Francois
dd if=/dev/urandom of=largefile.txt bs=1G count=8
cp largefile.txt ./test/1.txt &
cp largefile.txt ./test/2.txt &
That's it: now the system is totally unusable after launching the two 8G copies.
Until these copies finish, no other application is able to launch completely.
Checking prstat shows the
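(For anyone who wants to reproduce this, the commands I'm watching it with are along these lines; replace "tank" with your pool name:)
# per-thread CPU/microstate view, refreshed every 5 seconds
prstat -mL 5
# per-vdev pool I/O while the copies run
zpool iostat -v tank 5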
This one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
j...@opensolaris
On 01/08/2010 02:42 PM, Lutz Schumann wrote:
> See the reads on the pool with the low I/O ? I suspect reading the
> DDT causes the writes to slow down.
>
> See this bug
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566.
> It seems to give some background.
>
> Can you test sett
On Fri, Jan 8, 2010 at 1:44 PM, Ian Collins wrote:
> James Lee wrote:
>
>> I haven't seen much discussion on how deduplication affects performance.
>> I've enabled dedup on my 4-disk raidz array and have seen a significant
>> drop in write throughput, from about 100 MB/s to 3 MB/s. I can't
>> i
James Lee wrote:
I haven't seen much discussion on how deduplication affects performance.
I've enabled dedup on my 4-disk raidz array and have seen a significant
drop in write throughput, from about 100 MB/s to 3 MB/s. I can't
imagine such a decrease is normal.
What is your data?
I've foun
See the reads on the pool with the low I/O ? I suspect reading the DDT causes
the writes to slow down.
See this bug
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566. It seems to
give some background.
Can you test setting "primarycache=metadata" on the volume you are testing?
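Something like this (the dataset name is just an example, use whatever you are testing on):
# cache only metadata (including the dedup table) in the ARC for this dataset,
# so that cached file data does not push DDT blocks out of memory
zfs set primarycache=metadata tank/test
# verify
zfs get primarycache tank/test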
On Fri, Jan 8, 2010 at 12:28 PM, Torrey McMahon wrote:
> On 1/8/2010 10:04 AM, James Carlson wrote:
>>
>> Mike Gerdts wrote:
>>
>>>
>>> This unsupported feature is supported with the use of Sun Ops Center
>>> 2.5 when a zone is put on a "NAS Storage Library".
>>>
>>
>> Ah, ok. I didn't know that.
Cindy Swearingen wrote:
Hi Ian,
I see the problem. In the URL you included below, you left off
the /N suffix that appears in the zpool upgrade output.
That's correct, N is the version number. I see it is fixed now, thanks.
--
Ian.
On Jan 8, 2010, at 6:20 AM, Frank Batschulat (Home) wrote:
On Fri, 08 Jan 2010 13:55:13 +0100, Darren J Moffat wrote:
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some connection, and thus
there's a good chance
that these CHKSUM errors have a common sou
On 1/8/2010 10:04 AM, James Carlson wrote:
Mike Gerdts wrote:
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a "NAS Storage Library".
Ah, ok. I didn't know that.
Does anyone know how that works? I can't find it in the docs, no on
Mike Gerdts wrote:
> This unsupported feature is supported with the use of Sun Ops Center
> 2.5 when a zone is put on a "NAS Storage Library".
Ah, ok. I didn't know that.
--
James Carlson 42.703N 71.076W
On Fri, 08 Jan 2010 13:55:13 +0100, Darren J Moffat
wrote:
> Frank Batschulat (Home) wrote:
>> This just can't be an accident, there must be some connection, and thus
>> there's a good chance
>> that these CHKSUM errors have a common source, either in ZFS or in NFS?
>
> What are you using
Frank Batschulat (Home) wrote:
> This just can't be an accident, there must be some connection, and thus
> there's a good chance
> that these CHKSUM errors have a common source, either in ZFS or in NFS?
One possible cause would be a lack of substantial exercise. The man
page says:
On Fri, Jan 08, 2010 at 10:00:14AM -0800, James Lee wrote:
> I haven't seen much discussion on how deduplication affects performance.
> I've enabled dedup on my 4-disk raidz array and have seen a significant
> drop in write throughput, from about 100 MB/s to 3 MB/s. I can't
> imagine such a decre
I haven't seen much discussion on how deduplication affects performance.
I've enabled dedup on my 4-disk raidz array and have seen a significant
drop in write throughput, from about 100 MB/s to 3 MB/s. I can't
imagine such a decrease is normal.
> # zpool iostat nest 1 (with dedup enabled):
> ...
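For reference, the dedup ratio and the size of the dedup table (DDT) can be checked with something like this ("nest" is the pool from the iostat above):
# the DEDUP column shows the pool-wide dedup ratio
zpool list nest
# dump DDT statistics and a histogram of reference counts
zdb -DD nest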
On Fri, Jan 8, 2010 at 9:11 AM, Mike Gerdts wrote:
> I've seen similar errors on Solaris 10 in the primary domain and on a
> M4000. Unfortunately Solaris 10 doesn't show the checksums in the
> ereport. There I noticed a mixture between read errors and checksum
> errors - and lots more of them.
Hello,
Sorry for the (very) long subject but I've pinpointed the problem to this exact
situation.
I know about the other threads related to hangs, but in my case there was no <
zfs destroy > involved, nor any compression or deduplication.
To make a long story short, when
- a disk contains
On Fri, 8 Jan 2010, Peter van Gemert wrote:
I don't think the use of snapshots will alter the way data is
fragmented or localized on disk.
What happens after a snapshot is deleted?
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagic
Ok, after browsing I found that the sata disks are not shown via cfgadm.
I found http://opensolaris.org/jive/message.jspa?messageID=287791&tstart=0
which states that you have to set the mode to "AHCI" to enable hot-plug etc.
However I still think the plain IDE driver also needs a timeout to han
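For reference, once the controller is in AHCI mode the disks should appear as SATA attachment points and can be handled roughly like this (controller/port numbers here are just an example):
# list attachment points; each SATA port shows up as sataN/M
cfgadm -al
# take a disk offline before pulling it, or bring a freshly plugged one online
cfgadm -c unconfigure sata1/3
cfgadm -c configure sata1/3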
Hi Ian,
I see the problem. In the URL you included below, you left off
the /N suffix that appears in the zpool upgrade output.
CR 6898657 is still filed to identify the change.
If you copy and paste the URL from the zpool upgrade -v output:
http://www.opensolaris.org/os/community/zfs/version
On 08/01/2010 14:50, David Dyer-Bennet wrote:
On Fri, January 8, 2010 07:51, Robert Milkowski wrote:
On 08/01/2010 12:40, Peter van Gemert wrote:
By having a snapshot you
are not releasing the
space forcing zfs to allocate new space from other
parts of a disk
drive. This may lead (dep
On Fri, Jan 8, 2010 at 5:28 AM, Frank Batschulat (Home)
wrote:
[snip]
> Hey Mike, you're not the only victim of these strange CHKSUM errors, I hit
> the same during my slightly different testing, where I'm NFS mounting an
> entire, pre-existing remote file living in the zpool on the NFS server an
BTW, this was on snv_111b - sorry I forgot to mention.
Hi,
I have just observed the following issue and I would like to ask if it is
already known:
I'm using zones on ZFS filesystems which were cloned from a common template
(which is itself an original filesystem). A couple of weeks ago, I did a pkg
image-update, so all zone roots got cloned aga
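For context, the template/clone layout is essentially the standard one, i.e. roughly the following (the dataset names here are made up):
# one snapshot of the template filesystem ...
zfs snapshot rpool/zones/template@gold
# ... and each zone root is a clone of it
zfs clone rpool/zones/template@gold rpool/zones/zone1
zfs clone rpool/zones/template@gold rpool/zones/zone2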
On Fri, January 8, 2010 07:51, Robert Milkowski wrote:
> On 08/01/2010 12:40, Peter van Gemert wrote:
>>> By having a snapshot you
>>> are not releasing the
>>> space forcing zfs to allocate new space from other
>>> parts of a disk
>>> drive. This may lead (depending on workload) to more
>>> fragm
Ok,
I now waited 30 minutes - still hung. After that I also pulled the SATA cable of the
L2ARC device - still no success (I waited 10 minutes).
After 10 minutes I put the L2ARC device back (SATA + power);
20 seconds after that the system continued to run.
dmesg shows:
Jan 8 15:41:57 nexe
Hello,
today I wanted to test that a failure of the L2ARC device is not critical to
the pool. I added an Intel X25-M Postville (160 GB) as a cache device to a 54 disk
mirror pool. Then I started a SYNC iozone run on the pool:
iozone -ec -r 32k -s 2048m -l 2 -i 0 -i 2 -o
Pool:
pool
mirror-0
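(For reference, adding the SSD as an L2ARC device is just "zpool add <pool> cache <device>", e.g. with a made-up device name:)
zpool add pool cache c4t2d0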
Yet another way to thin out the backing devices for a zpool on a
thin-provisioned storage host, today: resilver.
If your zpool has some redundancy across the SAN backing LUNs, simply
drop and replace one at a time and allow zfs to resilver only the
blocks currently in use onto the replacement LUN
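A rough sketch of one iteration (the pool and LUN device names here are made up):
# swap one backing LUN for a fresh, thin one; the resilver copies only allocated blocks
zpool replace tank c4t600144F0AAAA0001d0 c4t600144F0BBBB0001d0
# wait for the resilver to finish before moving on to the next LUN
zpool status tank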
On Fri, Jan 8, 2010 at 6:51 AM, James Carlson wrote:
> Frank Batschulat (Home) wrote:
>> This just can't be an accident, there must be some connection, and thus
>> there's a good chance
>> that these CHKSUM errors have a common source, either in ZFS or in NFS?
>
> One possible cause would b
On Fri, Jan 8, 2010 at 6:55 AM, Darren J Moffat wrote:
> Frank Batschulat (Home) wrote:
>>
>> This just can't be an accident, there must be some connection, and thus
>> there's a good chance
>> that these CHKSUM errors have a common source, either in ZFS or in
>> NFS?
>
> What are you using
On 08/01/2010 12:40, Peter van Gemert wrote:
By having a snapshot you
are not releasing the
space forcing zfs to allocate new space from other
parts of a disk
drive. This may lead (depending on workload) to more
fragmentation, less
localized data (more and longer seeks).
ZFS uses COW (cop
Hi List,
We create a zfs filesystem for each user's homedir. I would like to
monitor their usage, and when a user approaches his quota I would like to
receive a warning by mail. Does anybody have a script available which does
this job and can be run from a cron job? Or even better, is this a bui
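As a starting point, something along these lines could run from cron (an untested sketch; the parent dataset, threshold and mail address below are placeholders):
#!/bin/sh
# warn when any home filesystem is above THRESHOLD percent of its quota
PARENT=tank/home          # parent of the per-user filesystems (placeholder)
THRESHOLD=90
MAILTO=you@example.com    # placeholder address

OUT=`zfs get -Hp -r -o name,property,value used,quota $PARENT | nawk -v t=$THRESHOLD '
  $2 == "used"  { used[$1] = $3 }
  $2 == "quota" { quota[$1] = $3 }
  END {
    for (fs in quota)
      if (quota[fs] > 0 && used[fs] * 100 / quota[fs] >= t)
        printf("%s is at %d%% of its quota\n", fs, used[fs] * 100 / quota[fs])
  }'`

[ -n "$OUT" ] && echo "$OUT" | mailx -s "ZFS quota warning" $MAILTO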
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some connection, and thus there's
a good chance
that these CHKSUM errors have a common source, either in ZFS or in NFS?
What are you using for on-the-wire protection with NFS? Is it shared
using krb5i or do y
On Wed, 23 Dec 2009 03:02:47 +0100, Mike Gerdts wrote:
> I've been playing around with zones on NFS a bit and have run into
> what looks to be a pretty bad snag - ZFS keeps seeing read and/or
> checksum errors. This exists with S10u8 and OpenSolaris dev build
> snv_129. This is likely a blocker
--- On Thu, 1/7/10, Tiernan OToole wrote:
> Sorry to hijack the thread, but can you
> explain your setup? Sounds interesting, but need more
> info...
This is just a home setup to amuse me and placate my three boys, each of whom
has several Windows instances running under Virtualbox.
Server is a
> By having a snapshot you
> are not releasing the
> space forcing zfs to allocate new space from other
> parts of a disk
> drive. This may lead (depending on workload) to more
> fragmentation, less
> localized data (more and longer seeks).
>
ZFS uses COW (copy on write) during writes. This me
I'm thinking that the issue is simply with zfs destroy, not with dedup or
compression.
Yesterday I decided to do some iSCSI testing: I created a new 1 TB dataset in my
pool. I did not use compression or dedup.
After copying about 700GB of data from my windows box (NTFS on top of the iscsi
disk
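(For anyone wanting to reproduce the setup, the volume would be created roughly like this; the pool/volume names are placeholders and the rest of the COMSTAR target plumbing is omitted:)
# 1 TB zvol to export over iSCSI
zfs create -V 1T tank/iscsitest
# register the zvol as a SCSI logical unit
sbdadm create-lu /dev/zvol/rdsk/tank/iscsitest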