Hi Sundirk
I'm having exactly the same problem. Can you please post your fix for how you
resolved this permissions issue?
I have created another share and can read and write to it no problems.
The top-level share will allow me to write to it, but then I cannot delete any
files after the fact.
I
+1
On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski wrote:
>
> fyi
>
> --
> Robert Milkowski
> http://milek.blogspot.com
>
>
> -------- Original Message --------
> Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
> Date: Mon, 26 Jul 2010 08:38:22 -0600
> From: Tim Haley
Hi r2ch
The operations column shows about 370 operations for reads, per spindle
(between 400 and 900 for writes).
How should I be measuring iops?
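A minimal way to sample this, assuming a Solaris box and a placeholder pool
name of tank:
# iostat -xn 5        (r/s and w/s are per-device reads/writes per second)
# zpool iostat -v tank 5    (the same numbers from ZFS's side, per vdev and disk)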
How many iops per spindle are you getting?
A rule of thumb I use is to expect no more than 125 iops per spindle for
regular HDDs.
SSDs are a different story of course. :)
I appear to be getting between 2 and 9 MB/s reads from individual disks in my
zpool as shown in iostat -v
I expect upwards of 100 MB/s per disk, or at least aggregate performance on par
with the number of disks that I have.
My configuration is as follows:
Two Quad-core 5520 processors
48GB ECC/REG RAM
fyi
--
Robert Milkowski
http://milek.blogspot.com
Original Message
Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> This can happen if there is a failure in a common system component
> during the write (eg. main memory, HBA, PCI bus, CPU, bridges, etc.)
I bet that's the cause. Because as
> From: Darren J Moffat [mailto:darr...@opensolaris.org]
>
> It basically says that 'zfs send' gets a new '-b' option to "send back
> properties", and 'zfs recv' gets a '-o' and '-x' option to allow
> explicit set/ignore of properties in the stream. It also adds a '-r'
> option for 'zfs set'.
>
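If the case is approved, usage might look roughly like the following; the
dataset names are placeholders, and the options are only proposed at this
point:
# zfs send -b tank/fs@snap | zfs recv -o compression=on -x mountpoint tank2/fs
(send the stream together with its properties, then override one and ignore
another on receive)
# zfs set -r atime=off tank
(set a property recursively)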
Hi Sol,
What kind of disks?
You should be able to use the fmdump -eV command to identify when the
checksum errors occurred.
Thanks,
Cindy
On 07/28/10 13:41, sol wrote:
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks?
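For reference, a minimal way to pull timestamps for those checksum events out
of the FMA error log (the grep pattern assumes the usual ZFS ereport class
names):
# fmdump -e
(one line per event, with time and class)
# fmdump -eV | grep -i checksum
(full detail; checksum ereports show up as class ereport.fs.zfs.checksum)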
On Jul 28, 2010, at 12:11 PM, sol wrote:
> Richard Elling wrote:
>> Gregory Gee wrote:
>>> I am using OpenSolaris to host VM images over NFS for XenServer. I'm
>>> looking for tips on what parameters can be set to help optimize my ZFS
>>> pool that holds my VM images.
>> There is nothing special about tuning for VMs, the normal NFS tuning applies.
On 07/29/10 07:41 AM, sol wrote:
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks? However the
zpool status shows checksum errors not I/O errors and I'm not sure what
On Jul 28, 2010, at 12:41 PM, sol wrote:
> Having just done a scrub of a mirror I've lost a file and I'm curious how this
> can happen in a mirror. Doesn't it require the almost impossible scenario
> of exactly the same sector being trashed on both disks? However the
> zpool status shows checksum errors not I/O errors and I'm not sure what
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks? However the
zpool status shows checksum errors not I/O errors and I'm not sure what
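For anyone hitting the same thing, the -v flag should name the damaged file
directly (tank is a placeholder pool name):
# zpool status -v tank
(permanent errors are listed with the affected file paths at the bottom)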
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of sol
> Sent: Wednesday, July 28, 2010 3:12 PM
> To: Richard Elling; Gregory Gee
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Tips for ZFS tuning for
Richard Elling wrote:
> Gregory Gee wrote:
> > I am using OpenSolaris to host VM images over NFS for XenServer. I'm
> > looking for tips on what parameters can be set to help optimize my ZFS
> > pool that holds my VM images.
> There is nothing special about tuning for VMs, the normal NFS tuning applies.
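As one example of that normal tuning, a dedicated log device is the knob most
often suggested for NFS-backed VM storage, since NFS forces a lot of
synchronous writes; a sketch with placeholder pool and device names:
# zpool add tank log c3t0d0
# zpool status tank
(the SSD should now appear under a separate "logs" section)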
That looks like it will work. I won't be able to test until late tonight.
Thanks
mike
Mike,
Did you also give the user permissions to the underlying mount point:
# chmod A+user:user-name:add_subdirectory:fd:allow /rpool
If so, please let me see the syntax and error messages.
Thanks,
Cindy
On 07/28/10 12:23, Mike DeMarco wrote:
Thanks, adding mount did allow me to create it but does not allow me to
create the mountpoint.
Thanks, adding mount did allow me to create it, but it does not allow me to
create the mountpoint.
Hi Mike,
A couple of things are causing this to fail:
1. The user needs permissions to the underlying mount point.
2. The user needs both create and mount permissions to create ZFS datasets.
See the syntax below, which might vary depending on your Solaris
release.
Thanks,
Cindy
# chmod A+user:user-name:add_subdirectory:fd:allow /rpool
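The delegation half would be something along these lines, with user-name
again a placeholder:
# zfs allow user-name create,mount rpool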
I am trying to give a general user permissions to create zfs filesystems in the
rpool.
zpool set delegation=on rpool
zfs allow create rpool
both run without any issues.
zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create rpool/test: permission denied
Thanks,
Looks like I'll be using raidz3.
Hi all,
I have two servers in the lab running snv_134, and while doing some
experiments with iscsi volumes and replication I came up against a road-block
that I would like to ask for your help with.
So on server A I have a lun created in COMSTAR without any views attached
to it, and I can zfs send it to server B wi
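For context, the send step being described is presumably along these lines
(pool, volume, snapshot, and host names are placeholders):
# zfs snapshot tank/lun0@rep1
# zfs send tank/lun0@rep1 | ssh serverB zfs recv tank/lun0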
On Jul 28, 2010, at 8:34 AM, Roy Sigurd Karlsbakk wrote:
>> The performance will be similar, but in the non-degraded case, the raidz3
>> will perform better for small, random reads.
>
> Why is this? The two will have the same amount of data drives
The simple small, random read model for h
Hi Gary,
If your root pool is getting full, you can replace the root pool
disk with a larger disk. My recommendation is to attach the replacement
disk, let the replacement disk resilver, install the boot blocks, and
then detach the smaller disk. The system will see the expanded space
automatically.
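A sketch of that sequence on x86, with placeholder device names (SPARC would
use installboot instead of installgrub):
# zpool attach rpool c0t0d0s0 c0t1d0s0
(wait until zpool status rpool reports the resilver has completed)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# zpool detach rpool c0t0d0s0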
> The performance will be similar, but in the non-degraded case, the raidz3
> will perform better for small, random reads.
Why is this? The two will have the same number of data drives.
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blo
On Jul 27, 2010, at 10:37 PM, Jack Kielsmeier wrote:
> The only other zfs pool in my system is a mirrored rpool (two 500 GB disks).
> This is for my own personal use, so it's not like the data is mission
> critical in some sort of production environment.
>
> The advantage I can see with going wit
On 28/07/2010 14:53, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
http://arc.opensolaris.org/caselog/PSARC/2010/193/mail
Agree. This is a better solution because some configurable parameters
are hidden from "zfs get all"
Forgive me for not seeing it ... T
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mark J Musante
>
> > I do a backup of the pool nightly, so I feel confident that I don't
> need to mirror the drive and can break the mirror and expand the pool
> with the detached drive.
> >
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gary Gendel
>
> I do a backup of the pool nightly, so I feel confident that I don't
> need to mirror the drive and can break the mirror and expand the pool
> with the detached drive.
>
> I und
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > http://arc.opensolaris.org/caselog/PSARC/2010/193/mail
>
> Agree. This is a better solution because some configurable parameters
> are hidden from "zfs get all"
Forgive me for not seeing it ... That link is extremely dense, and 34 p
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of devsk
>
> Thanks, Michael. That's exactly right.
>
> I think my requirement is: writable snapshots.
>
> And I was wondering if someone knowledgeable here could tell me if I
> could do this ma
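If writable snapshots are the real requirement, a ZFS clone is the usual way
to get one; a minimal sketch with placeholder names:
# zfs snapshot tank/fs@base
# zfs clone tank/fs@base tank/fs-work
(tank/fs-work is now a writable filesystem that shares blocks with the
snapshot until they diverge)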
On Wed, 28 Jul 2010, Gary Gendel wrote:
Right now I have a machine with a mirrored boot setup. The SAS drives are 43 GB
and the root pool is getting full.
I do a backup of the pool nightly, so I feel confident that I don't need to
mirror the drive and can break the mirror and expand the pool
Right now I have a machine with a mirrored boot setup. The SAS drives are 43 GB
and the root pool is getting full.
I do a backup of the pool nightly, so I feel confident that I don't need to
mirror the drive and can break the mirror and expand the pool with the detached
drive.
I understand how
Hi,
I would really appreciate it if any of you could help me get the modified mdb
and zdb (in any version of OpenSolaris) for digital forensics research purposes.
Thank you.
Jonathan Cifuentes
Edward Ned Harvey writes:
> There are legitimate specific reasons to use separate filesystems
> in some circumstances. But if you can't name one reason why it's
> better ... then it's not better for you.
Having separate filesystems per user lets you create user-specific
quotas and reservations.
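For example, with placeholder names and sizes:
# zfs create rpool/export/home/alice
# zfs set quota=10g rpool/export/home/alice
# zfs set reservation=1g rpool/export/home/alice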