On Mon, Feb 7, 2011 at 7:53 PM, Richard Elling wrote:
> On Feb 7, 2011, at 1:07 PM, Peter Jeremy wrote:
>
>> On 2011-Feb-07 14:22:51 +0800, Matthew Angelo wrote:
>>> I'm actually leaning more towards running a simple 7+1 RAIDZ1.
>>> Running this with 1TB is not a problem but I just wanted to
>>>
On Feb 14, 2011 6:56 AM, "Paul Kraus" wrote:
> P.S. I am measuring the number of objects via `zdb -d` as that is faster
> than trying to count files and directories, and I expect it is a much
> better measure of what the underlying zfs code is dealing with (a
> particular dataset may have lots of snapshot
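For what it's worth, the check being described looks something like this
(the pool/dataset name tank/data below is just a placeholder):

  # object count for one dataset; much cheaper than walking the tree with find
  zdb -d tank/data

The "NNN objects" figure zdb prints is close to what the ZFS code itself is
tracking, regardless of how the files are spread across directories.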
Hello. I am looking to see if performance data exists for on-disk
dedup. I am currently in the process of setting up some tests based on
input from Roch, but before I get started, thought I'd ask here.
Thanks for the help,
Janice
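P.S. For anyone setting up something similar, the basic shape is usually just
enabling dedup on a scratch dataset and then watching the ratio and the dedup
table as data is written; for example (tank is a placeholder pool name):

  # enable dedup on a test dataset
  zfs create -o dedup=on tank/ddtest
  # after writing the test data:
  zpool list tank    # DEDUP column shows the pool-wide ratio
  zdb -DD tank       # dedup table (DDT) histogram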
Thanks for the responses. I found the issue. It was due to power management,
and probably a bug with event-driven power management states.
Changing
cpupm enable
to
cpupm enable poll-mode
in /etc/power.conf fixed the issue for me. Back up to 110 MB/sec+ now.
I have old pool skeletons with vdevs that no longer exist. Can't import them,
can't destroy them, can't even rename them to something obvious like junk1.
What do I do to clean up?
On Sat, Feb 12, 2011 at 3:14 AM, ian W wrote:
> Thanks for the responses. I found the issue. It was due to power management,
> and probably a bug with event-driven power management states.
>
> Changing
>
> cpupm enable
>
> to
>
> cpupm enable poll-mode
>
> in /etc/power.conf fixed the issue for me. Back up to 110 MB/sec+ now.
Hi Janice,
> Hello. I am looking to see if performance data exists for on-disk dedup. I
> am currently in the process of setting up some tests based on input from
> Roch, but before I get started, thought I'd ask here.
I find it somewhat interesting that you are asking this question on behalf
Hi Chris,
Yes, this is a known problem and a CR is filed.
I haven't tried these in a while, but consider one of the workarounds
below.
#1 is the most drastic, so make sure you've got the right device name:
no sanity checking is done by the dd command.
Other experts can comment on a bette
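As a very rough illustration of the dd-style approach, and only if the stale
pool still points at a real device node (c1t1d0s0 below is a made-up example;
this wipes whatever sits at the start of that device):

  # overwrite the front ZFS labels; dd will not stop you from hitting the wrong disk
  dd if=/dev/zero of=/dev/rdsk/c1t1d0s0 bs=1024k count=2

One thing people sometimes do for skeletons whose devices are gone entirely is
move /etc/zfs/zpool.cache aside so the stale entries are not picked up again,
but test that on a non-critical system before relying on it.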
With ZFS on a Solaris server using storage on a SAN device, is it
reasonable to configure the storage device to present one LUN for each
RAID group? I'm assuming that the SAN and storage device are
sufficiently reliable that no additional redundancy is necessary on
the Solaris ZFS server. I'm als
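Concretely, the layout being described would just be a stripe of the array's
LUNs with no ZFS-level redundancy (device names below are made up):

  # each cXtYd0 is one LUN backed by one RAID group on the array
  zpool create tank c3t0d0 c3t1d0 c3t2d0

The usual caveat with this layout is that ZFS checksums can still detect
corruption coming back from the SAN, but with no mirror or raidz vdev (and
copies=1) there is nothing to repair it from.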
Hi Nathan,
comments below...
On Feb 13, 2011, at 8:28 PM, Nathan Kroenert wrote:
> On 14/02/2011 4:31 AM, Richard Elling wrote:
>> On Feb 13, 2011, at 12:56 AM, Nathan Kroenert wrote:
>>
>>> Hi all,
>>>
>>> Exec summary: I have a situation where I'm seeing lots of large reads
>>> starving wri
On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills wrote:
> I realize that it is possible to configure more than one LUN per RAID
> group on the storage device, but doesn't ZFS assume that each LUN
> represents an independent disk, and schedule I/O accordingly? In that
> case, wouldn't ZFS I/O scheduli
Hi Ian,
You are correct.
Previous Solaris releases displayed older POSIX ACL info on this
directory. It was changed to the new ACL style from the integration of
this CR:
6792884 Vista clients cannot access .zfs
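If you want to see the new-style entries, something like this shows the
NFSv4 ACL on the snapshot directory (pool/fs is a placeholder):

  ls -dV /pool/fs/.zfs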
Thanks,
Cindy
On 02/13/11 19:30, Ian Collins wrote:
> While scanning filesystems l
On 02/15/11 10:14 AM, Cindy Swearingen wrote:
> Hi Ian,
> You are correct.
> Previous Solaris releases displayed older POSIX ACL info on this
> directory. It was changed to the new ACL style from the integration of
> this CR:
> 6792884 Vista clients cannot access .zfs
Thanks Cindy. Unfortunately bugs.o
Hi I wanted to get some expert advice on this. I have an ordinary hardware
SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use
that if possible with my VMware environment where I run several Solaris /
OpenSolaris virtual machines. My question is regarding the virtual disks.
On Tue, Feb 15, 2011 at 5:47 AM, Mark Creamer wrote:
> Hi I wanted to get some expert advice on this. I have an ordinary hardware
> SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use
> that if possible with my VMware environment where I run several Solaris /
> OpenSolaris
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mark Creamer
>
> 1. Should I create individual iSCSI LUNs and present those to the VMware
> ESXi host as iSCSI storage, and then create virtual disks from there on
> each Solaris VM?
>
> - or
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote:
> On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills wrote:
> >
> > Is there any reason not to use one LUN per RAID group?
[...]
> In other words, if you build a zpool with one vdev of 10GB and
> another with two vdevs each of 5GB (both com
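As a concrete version of that comparison (device names and sizes are
placeholders):

  # one 10GB vdev
  zpool create single c1t0d0
  # two 5GB vdevs; ZFS stripes dynamically across both
  zpool create double c1t1d0 c1t2d0

In the second pool ZFS treats each LUN as an independent device with its own
I/O queue, which is the scheduling assumption the question is getting at.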
Thanks for all the thoughts, Richard.
One thing that still sticks in my craw is that I'm not wanting to write
intermittently. I'm wanting to write flat out, and those writes are
being held up... Seems to me that zfs should know and do something about
that without me needing to tune zfs_vdev_ma
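If the tunable being referred to is zfs_vdev_max_pending (the name is cut off
above, so that is only a guess), the usual way to lower it is a line like this
in /etc/system followed by a reboot:

  set zfs:zfs_vdev_max_pending = 10

A smaller per-device queue gives the scheduler more chances to slip writes in
between the big reads, at some cost in streaming throughput.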
Hello
my power.conf is as follows; any recommendations for improvement?
device-dependency-property removable-media /dev/fb
autopm enable
autoS3 enable
cpu-threshold 1s
# Auto-Shutdown Idle(min) Start/Finish(hh:mm) Behavior
autoshutdown 30 0:00 0:00 noshutdown
S3-support enable
On 2/14/2011 3:52 PM, Gary Mills wrote:
> On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote:
> > On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills wrote:
> > > Is there any reason not to use one LUN per RAID group?
> [...]
> > In other words, if you build a zpool with one vdev of 10GB and
> > another with
On Feb 14, 2011, at 4:49 PM, ian W wrote:
> Hello
>
> my power.conf is as follows; any recommendations for improvement?
For best performance, disable power management. For certain processors
and BIOSes, some combinations of power management (below the OS) are
also known to be toxic. At Nexenta,
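For reference, the "disable it entirely" version of /etc/power.conf is roughly
this (keywords can vary a little between releases, so treat it as a sketch):

  autopm disable
  cpupm disable
  cpu_deep_idle disable

then run pmconfig to apply it without rebooting.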