On 14/10/2009, at 2:27 AM, casper@sun.com wrote:
So why not the built-in CIFS support in OpenSolaris? Probably has a
similar issue, but still.
In my case, there are at least two reasons:
* Crossing mountpoints requires separate shares - Samba can share an
entire hierarchy regardless of ZF
On 03/11/2009, at 7:32 AM, Daniel Streicher wrote:
But how can I "update" my current OpenSolaris (2009.06) or Solaris
10 (5/09) to use this?
Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris?
For OpenSolaris, you change your repository and switch to the
development branch
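For reference, switching a 2009.06 install to the development repository looked roughly like this (a sketch; image-update creates a new boot environment that you then boot into):
pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
pfexec pkg image-update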
On 18/11/2009, at 7:33 AM, Dushyanth wrote:
> Now when I run dd and create a big file on /iftraid0/fs and watch `iostat
> -xnz 2` I don't see any stats for c8t4d0, nor does the write performance
> improve.
>
> I have not formatted either c9t9d0 or c8t4d0. What am I missing?
Last I checked, i
On 10/12/2009, at 5:36 AM, Adam Leventhal wrote:
> The dedup property applies to all writes so the settings for the pool of
> origin don't matter, just those on the destination pool.
Just a quick related question I’ve not seen answered anywhere else:
Is it safe to have dedup running on your rp
Hi Tomas,
On 27/12/2009, at 7:25 PM, Tomas Bodzar wrote:
> pfexec zpool set dedup=verify rpool
> pfexec zfs set compression=gzip-9 rpool
> pfexec zfs set devices=off rpool/export/home
> pfexec zfs set exec=off rpool/export/home
> pfexec zfs set setuid=off rpool/export/home
grub doesn’t support g
Hi John,
On 08/01/2010, at 7:19 AM, john_dil...@blm.gov wrote:
> Is there a way to upgrade my current ZFS version. I show the version could
> be as high as 22.
The version of Solaris you are running only supports ZFS versions up to
15, as demonstrated by your zfs upgrade -v output. You pr
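As an illustration, these are the commands for checking what the running release supports versus what your datasets are using (output omitted; exact version numbers depend on the release):
zfs upgrade -v     # filesystem versions this release supports
zpool upgrade -v   # pool versions this release supports
zfs upgrade        # lists filesystems not yet at the current version
zpool upgrade      # lists pools not yet at the current version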
On 20/06/2009, at 9:55 PM, Charles Hedrick wrote:
I have a USB disk, to which I want to do a backup. I've used send |
receive. It works fine until I try to reboot. At that point the
system fails to come up because the backup copy is set to be mounted
at the original location so the system
Hi Erik,
On 22/06/2009, at 1:15 PM, Erik Trimble wrote:
I just looked at pricing for the higher-end MLC devices, and it
looks like I'm better off getting a single drive of 2X capacity than
two with X capacity.
Leaving aside the issue that by using 2 drives I get 2 x 3.0Gbps
SATA perform
On 25/06/2009, at 5:16 AM, Miles Nordin wrote:
and mpt is the 1068 driver, proprietary, works on x86 and SPARC.
then there is also itmpt, the third-party-downloadable closed-source
driver from LSI Logic, dunno much about it but someone here used it.
I'm confused. Why do you say the mpt dr
Hi All,
We have recently acquired hardware for a new fileserver, and my task,
if I want to use OpenSolaris (osol or sxce) on it, is to make it perform
at least as well as Linux (and our 5-year-old fileserver) in our
environment.
Our current file server is a whitebox Debian server with 8x 10,
On 03/07/2009, at 5:03 PM, Brent Jones wrote:
Are you sure the slog is working right? Try disabling the ZIL to see
if that helps with your NFS performance.
If your performance increases a hundredfold, I suspect the slog
isn't performing well, or even doing its job at all.
The slog appears
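For context on the suggestion above: on builds of that era the ZIL was disabled via a tunable rather than a dataset property. A sketch, for testing only, since it changes synchronous-write semantics (the runtime change only affects filesystems mounted afterwards):
echo zil_disable/W0t1 | mdb -kw     # runtime, as root
set zfs:zil_disable = 1             # or persistently in /etc/system, then reboot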
Hej Henrik,
On 03/07/2009, at 8:57 PM, Henrik Johansen wrote:
Have you tried running this locally on your OpenSolaris box - just to
get an idea of what it could deliver in terms of speed? Which NFS
version are you using?
Most of the tests shown in my original message are local except the
Hi Mertol,
On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
ZFS SSD usage behaviour depends heavily on the access pattern, and for
async ops ZFS will not use SSDs. I'd suggest you disable the
SSDs, create a ram disk and use it as a SLOG device to compare the
performance. If performance doesn't
On 03/07/2009, at 10:37 PM, Victor Latushkin wrote:
A slog in a ramdisk is analogous to no slog at all and a disabled ZIL
(well, it may actually be a bit worse). If your old
system is 5 years old, the difference in the above numbers may be due to
differences in CPU and memory speed, and so it
On 04/07/2009, at 10:42 AM, Ross Walker wrote:
XFS on LVM or EVMS volumes can't do barrier writes due to the lack
of barrier support in LVM and EVMS, so it doesn't do a hard cache
sync like it would on a raw disk partition, which makes the numbers
higher, BUT with battery-backed write cache
On 04/07/2009, at 2:08 PM, Miles Nordin wrote:
iostat -xcnXTdz c3t31d0 1
on that device being used as a slog, a higher range of output looks
like:
extended device statistics
r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.0 1477.8    0.0 2955.
On 04/07/2009, at 1:49 PM, Ross Walker wrote:
I ran some benchmarks back when verifying this, but didn't keep them
unfortunately.
You can google: XFS Barrier LVM OR EVMS and see the threads about
this.
Interesting reading. Testing seems to show that either it's not
relevant or there is
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (and cost) with
so-called "enterprise" SSDs. SSDs with capacitor-backed write caches
seem to be fastest.
Do you have any
On 05/07/2009, at 1:57 AM, Ross Walker wrote:
Barriers are disabled by default on ext3 mounts... Google it and
you'll see interesting threads in the LKML. Seems there was some
serious performance degradation in using them. A lot of decisions in
Linux are made in favor of performance over da
On 06/07/2009, at 9:31 AM, Ross Walker wrote:
There are two types of SSD drives on the market, the fast write SLC
(single level cell) and the slow write MLC (multi level cell). MLC
is usually used in laptops as SLC drives over 16GB usually go for
$1000+ which isn't cost effective in a lapt
On 07/07/2009, at 8:20 PM, James Andrewartha wrote:
Have you tried putting the slog on this controller, either as an SSD
or
regular disk? It's supported by the mega_sas driver, x86 and amd64
only.
What exactly are you suggesting here? Configure one disk on this
array as a dedicated ZIL?
On 15/07/2009, at 7:18 AM, Orvar Korvar wrote:
With dedup, will it be possible somehow to identify files that are
identical but have different names? Then I can find and remove all
duplicates. I know that with dedup, removal is not really needed
because the duplicate will just be a referenc
On 15/07/2009, at 1:51 PM, Jean Dion wrote:
Do we know if this web article will be discussed at the conference in
Brisbane, Australia this week?
http://www.pcworld.com/article/168428/sun_tussles_with_deduplication_startup.html?tk=rss_news
I do not expect details, but at least Sun's position on this
On 28/07/2009, at 6:44 AM, dick hoogendijk wrote:
Are there any known issues with zfs in OpenSolaris B118?
I run my pools formatted like the original release 2009.06 (I want to
be able to go back to it ;-). I'm a bit scared after reading about
serious issues in B119 (will be skipped, I heard).
On 28/07/2009, at 9:22 AM, Robert Thurlow wrote:
I can't help with your ZFS issue, but to get a reasonable crash
dump in circumstances like these, you should be able to do
"savecore -L" on OpenSolaris.
That would be well and good if I could get a login - due to the rpool
being unresponsive,
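For reference, taking a live crash dump as suggested looks roughly like this (a sketch; it assumes a dedicated dump device is configured and the default savecore directory):
pfexec savecore -L      # dump the live running system without rebooting
pfexec dumpadm          # shows the dump device and the savecore directory
The resulting crash dump files land in the savecore directory, typically /var/crash/<hostname>.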
Thanks for that Brian.
I've logged a bug:
CR 6865661 *HOT* Created, P1 opensolaris/triage-queue zfs scrub rpool
causes zpool hang
Just discovered after trying to create a further crash dump that it's
failing and rebooting with the following error (just caught it prior
to the reboot):
On 29/07/2009, at 5:47 PM, Ross wrote:
Everyone else should be using the Intel X25-E. There's a massive
difference between the M and E models, and for a slog it's IOPS and
low latency that you need.
Do they have any capacitor backed cache? Is this cache considered
stable storage? If s
On 29/07/2009, at 12:00 AM, James Lever wrote:
CR 6865661 *HOT* Created, P1 opensolaris/triage-queue zfs scrub
rpool causes zpool hang
This bug I logged has been marked as related to CR 6843235 which is
fixed in snv 119.
cheers,
James
Hi Darren,
On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:
That already works if you have the snapshot delegation as that
user. It even works over NFS and CIFS.
Can you give us an example of how to correctly get this working?
I've read through the manpage but have not managed to get the
On 30/07/2009, at 11:32 PM, Darren J Moffat wrote:
On the host that has the ZFS datasets (ie the NFS/CIFS server) you
need to give the user the delegation to create snapshots and to
mount them:
# zfs allow -u james snapshot,mount,destroy tank/home/james
Ahh, it was the lack of mount that
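To round out the example, once the delegation above is granted the user can manage their own snapshots directly (the snapshot name here is illustrative):
zfs allow tank/home/james                        # verify the delegated permissions
zfs snapshot tank/home/james@before-cleanup      # as user james, locally or over NFS
zfs destroy tank/home/james@before-cleanup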
Nathan Hudson-Crim,
On 04/08/2009, at 8:02 AM, Nathan Hudson-Crim wrote:
Andre, I've seen this before. What you have to do is ask James each
question 3 times and on the third time he will tell the truth. ;)
I know this is probably meant to be seen as a joke, but it's clearly
in very poor t
On 04/08/2009, at 9:42 PM, Joseph L. Casale wrote:
I noticed a huge improvement when I moved a virtualized pool
off a series of 7200 RPM SATA discs to even 10k SAS drives.
Night and day...
What I would really like to know is if it makes a big difference
comparing say 7200RPM drives in mirro
On 05/08/2009, at 10:36 AM, Carson Gaspar wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support
recently.
Yep, it's a MegaRAID device.
I have been using one with a Samsung SSD in RAID0 mode (to avail
myself of the cache) recently with great success.
cheers,
James
On 05/08/2009, at 11:36 AM, Ross Walker wrote:
Which model?
PERC 6/E w/512MB BBWC.
On 05/08/2009, at 11:41 AM, Ross Walker wrote:
What is your recipe for these?
There wasn't one! ;)
The drive I'm using is a Dell badged Samsung MCCOE50G5MPQ-0VAD3.
cheers,
James
Is there a mechanism by which you can perform a zfs send | zfs receive
and not have the data uncompressed and recompressed at the other end?
I have a gzip-9 compressed filesystem that I want to backup to a
remote system and would prefer not to have to recompress everything
again at such gre
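For reference, the send stream at the time carried uncompressed data and the receiving side recompressed it according to its own compression property, so the best one could do was compress the stream in transit; a rough sketch (host and dataset names are illustrative):
zfs set compression=gzip-9 backuppool            # destination datasets inherit this
zfs snapshot tank/data@backup1
zfs send tank/data@backup1 | gzip | ssh backuphost 'gunzip | zfs receive backuppool/data'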
On 28/08/2009, at 3:23 AM, Adam Leventhal wrote:
There appears to be a bug in the RAID-Z code that can generate
spurious checksum errors. I'm looking into it now and hope to have
it fixed in build 123 or 124. Apologies for the inconvenience.
Are the errors being generated likely to cause a
On 02/09/2009, at 9:54 AM, Adam Leventhal wrote:
After investigating this problem a bit I'd suggest avoiding
deploying RAID-Z
until this issue is resolved. I anticipate having it fixed in build
124.
Thanks for the status update on this Adam.
cheers,
James
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system, typically noticed when running an ‘ls’ (no extra flags, so no
directory service lookups). There is a delay of between 2 and 30
seconds, but no correlation has been noticed with load on the server,
and the slow retu
On 07/09/2009, at 12:53 AM, Ross Walker wrote:
That behavior sounds a lot like a process has a memory leak and is
filling the VM. On Linux there is an OOM killer for these, but on
OpenSolaris, you're the OOM killer.
If it was this type of behaviour, where would it be logged when the
process w
On 07/09/2009, at 6:24 AM, Richard Elling wrote:
On Sep 6, 2009, at 7:53 AM, Ross Walker wrote:
On Sun, Sep 6, 2009 at 9:15 AM, James Lever wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris
b118
system typically noticed when running an ‘ls’ (no extra flags, so no
On 07/09/2009, at 11:08 AM, Richard Elling wrote:
Ok, just so I am clear, when you mean "local automount" you are
on the server and using the loopback -- no NFS or network involved?
Correct. And the behaviour has been seen locally as well as remotely.
You are looking for I/O that takes sec
On 07/09/2009, at 10:46 AM, Ross Walker wrote:
zpool is RAIDZ2 comprised of 10 * 15kRPM SAS drives behind an LSI
1078 w/ 512MB BBWC exposed as RAID0 LUNs (Dell MD1000 behind PERC 6/
E) with 2x SSDs each partitioned as 10GB slog and 36GB remainder as
l2arc behind another LSI 1078 w/ 256MB BB
On 08/09/2009, at 2:01 AM, Ross Walker wrote:
On Sep 7, 2009, at 1:32 AM, James Lever wrote:
Well, an MD1000 holds 15 drives; a good compromise might be two 7-drive
RAIDZ2s with a hot spare... That should provide 320 IOPS instead of
160, a big difference.
The issue is interactive responsiveness
On 25/09/2009, at 2:58 AM, Richard Elling wrote:
On Sep 23, 2009, at 10:00 PM, James Lever wrote:
So it turns out that the problem is that all writes coming via NFS
are going through the slog. When that happens, the transfer speed
to the device drops to ~70MB/s (the write speed of his
On 25/09/2009, at 1:24 AM, Bob Friesenhahn wrote:
On Thu, 24 Sep 2009, James Lever wrote:
Is there a way to tune this on the NFS server or clients such that
when I perform a large synchronous write, the data does not go via
the slog device?
Synchronous writes are needed by NFS to
On 25/09/2009, at 11:49 AM, Bob Friesenhahn wrote:
The commentary says that normally the COMMIT operations occur during
close(2) or fsync(2) system call, or when encountering memory
pressure. If the problem is slow copying of many small files, this
COMMIT approach does not help very much
I thought I would try the same test using dd bs=131072 if=source
of=/path/to/nfs to see what the results looked like…
It is very similar to before: about 2x slog usage and the same timing
and write totals.
Friday, 25 September 2009 1:49:48 PM EST
extended device st
On 26/09/2009, at 1:14 AM, Ross Walker wrote:
By any chance do you have copies=2 set?
No, only 1. So the doubled data going to the slog (as reported by
iostat) is still confusing me and is potentially causing
significant harm to my performance.
Also, try setting zfs_write_limit_ov