I can think of two rather ghetto ways to go.
1. Write the data, then set the read-only property. If you need to make updates,
cycle back to read/write, write the data, and set read-only again.
2. Write the data, snapshot the fs, and expose the snapshot instead of the r/w file
system. Your mileage may vary depending on the implementation.
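As a minimal sketch of both options (pool, dataset, and paths here are hypothetical):
  # Option 1: toggle the property around each update
  zfs set readonly=off tank/export
  cp /staging/* /tank/export/          # write the new data
  zfs set readonly=on tank/export
  # Option 2: publish a read-only snapshot/clone instead of the live filesystem
  zfs snapshot tank/export@published
  zfs clone -o readonly=on tank/export@published tank/published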
On Sat, Aug 28, 2010 at 7:54 AM, Darin Perusich
wrote:
> Hello All,
>
> I'm sure this has been discussed previously but I haven't been able to find an
> answer to this. I've added another raidz1 vdev to an existing storage pool and
> the increased available storage isn't reflected in the 'zfs list
Hi,
Thanks for posting information about this port here. Just to add a few points:
* Currently we are planning to do a closed beta for this port, which is
based on b121; we will be doing a proper release around the end of this year,
which will be based on the latest build, b141. If you are interested in bei
I get the answer: -p.
> -Original Message-
> From: Fred Liu
> Sent: Saturday, August 28, 2010 9:00
> To: zfs-discuss@opensolaris.org
> Subject: get quota showed in precision of byte?
>
> Hi,
>
> Is it possible to do "zfs get -??? quota filesystem" ?
>
> Thanks.
>
> Fred
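For reference, a quick sketch of what -p buys you (dataset name is hypothetical):
  # -p prints exact numeric values (bytes) instead of human-readable sizes
  zfs get -p quota tank/home
  # add -H -o value to strip headers and get just the number, handy in scripts
  zfs get -H -p -o value quota tank/home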
This just popped up:
In terms of how native ZFS for Linux is being handled by [KQ
Infotec], they are releasing their ported ZFS code under the Common
Development & Distribution License and will not be attempting to go
for mainline integration. Instead, this company will just be
releasing
> From: Ian Collins [mailto:i...@ianshome.com]
>
> On 08/28/10 12:45 PM, Edward Ned Harvey wrote:
> > Another specific example ...
> >
> > Suppose you "zfs send" from a primary server to a backup server. You
> want
> > the filesystems to be readonly on the backup fileserver, in order to
> receive
Hi,
Is it possible to do "zfs get -??? quota filesystem" ?
Thanks.
Fred
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> so it should behave in the same way as an unmount in
> the presence of open files.
+1
You can unmount lazily, or force it, or by default the unmount fails in the
presence of open files.
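On Solaris that looks roughly like this (dataset name is hypothetical):
  zfs unmount tank/data      # default: fails with 'Device busy' while files are open
  zfs unmount -f tank/data   # forcibly unmount despite open files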
On 08/28/10 12:45 PM, Edward Ned Harvey wrote:
Another specific example ...
Suppose you "zfs send" from a primary server to a backup server. You want
the filesystems to be readonly on the backup fileserver, in order to receive
incrementals. If you make a mistake, and start writing to the backu
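A rough sketch of that workflow, with hypothetical pool and host names (zfs receive is
not blocked by the readonly property, which only governs changes made through the
filesystem layer):
  # on the backup server, lock the replica down
  zfs set readonly=on backup/data
  # on the primary, send the incremental stream
  zfs snapshot tank/data@today
  zfs send -i @yesterday tank/data@today | ssh backuphost zfs receive backup/data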
Hello All,
I'm sure this has been discussed previously but I haven't been able to find an
answer to this. I've added another raidz1 vdev to an existing storage pool and
the increased available storage isn't reflected in the 'zfs list' output. Why
is this?
The system in question is running Sol
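For context, the usual sequence looks something like this (device names hypothetical):
  zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
  zpool list tank   # raw pool size should grow immediately
  zfs list tank     # dataset AVAIL should also grow, minus raidz parity overhead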
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> > However writes to already opened files are allowed.
>
> Think of this from the perspective of an application. How would write
> failure be reported?
Both very good points.
On Sat, Aug 28, 2010 at 12:05:53PM +1200, Ian Collins wrote:
> Think of this from the perspective of an application. How would
> write failure be reported? open(2) returns EACCES if the file can
> not be written but there isn't a corresponding return from write(2).
> Any open file descriptors woul
On 08/28/10 12:05 PM, Ian Collins wrote:
On 08/28/10 11:13 AM, Robert Milkowski wrote:
Hi,
When I set readonly=on on a dataset then no new files are allowed to
be created.
However writes to already opened files are allowed.
This is rather counter intuitive - if I set a filesystem as read-onl
On 08/28/10 11:13 AM, Robert Milkowski wrote:
Hi,
When I set readonly=on on a dataset then no new files are allowed to
be created.
However writes to already opened files are allowed.
This is rather counter intuitive - if I set a filesystem as read-only
I would expect it not to allow any modi
Hi,
When I set readonly=on on a dataset then no new files are allowed to be
created.
However writes to already opened files are allowed.
This is rather counter intuitive - if I set a filesystem as read-only I
would expect it not to allow any modifications to it.
I think it shouldn't behave
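A quick way to reproduce what is being described (dataset and path are hypothetical):
  exec 3>>/tank/scratch/log.txt    # open a file for append and keep fd 3 open
  zfs set readonly=on tank/scratch
  echo "still writable" >&3        # writing via the already-open descriptor succeeds
  touch /tank/scratch/new.txt      # creating a new file fails: read-only file system
  exec 3>&-                        # close the descriptor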
On Fri, Aug 27, 2010 at 03:51:39PM -0700, Eff Norwood wrote:
> By all means please try it to validate it yourself and post your
> results from hour one, day one and week one. In a ZIL use case,
> although the data set is small it is always writing a small ever
> changing (from the SSDs perspective)
By all means please try it to validate it yourself and post your results from
hour one, day one and week one. In a ZIL use case, although the data set is
small it is always writing a small ever changing (from the SSDs perspective)
data set. The SSD does not know to release previously written pag
Hi Mark,
I have installed several 7000 series systems, some running hundreds of VMs.
I can try to help you, but to find exactly where the problem is I may
need more information.
I understand that you have no ZILs. So most probably you are using the
7110 with 250 GB drives.
All 7000 ser
No. From what I've seen, ZFS will periodically flush writes from the
ZIL to disk. You may run into a "read starvation" situation where ZFS is
so busy flushing to disk that you won't get reads. If you have VMs where
developers expect low latency interactivity, they get unhappy. Trust me. :)
On Fri, Aug 27, 2010 at 01:22:15PM -0700, John wrote:
> Wouldn't it be possible to saturate the SSD ZIL with enough
> backlogged sync writes?
>
> What I mean is, doesn't the ZIL eventually need to make it to the
> pool, and if the pool as a whole (spinning disks) can't keep up with
> 30+ vm's of
On Aug 27, 2010, at 2:32 PM, Mark wrote:
> Saddly most of those options will not work, since we are using a SUN Unified
> Storage 7210, the only option is to buy the SUN SSD's for it, which is about
> $15k USD for a pair. We also don't have the ability to shut off ZIL or any
> of the other opt
Wouldn't it be possible to saturate the SSD ZIL with enough backlogged sync
writes?
What I mean is, doesn't the ZIL eventually need to make it to the pool, and if
the pool as a whole (spinning disks) can't keep up with 30+ VMs of write
requests, couldn't you fill up the ZIL that way?
On Fri, Aug 27, 2010 at 12:46:42PM -0700, Mark wrote:
> It does, its on a pair of large APC's.
>
> Right now we're using NFS for our ESX Servers. The only iSCSI LUN's
> I have are mounted inside a couple Windows VM's. I'd have to
> migrate all our VM's to iSCSI, which I'm willing to do if it wo
It does, it's on a pair of large APCs.
Right now we're using NFS for our ESX servers. The only iSCSI LUNs I have are
mounted inside a couple of Windows VMs. I'd have to migrate all our VMs to
iSCSI, which I'm willing to do if it would help and not cause other issues.
So far the 7210 Applia
On Fri, Aug 27, 2010 at 11:57:17AM -0700, Marion Hakanson wrote:
> markwo...@yahoo.com said:
> > So the question is with a proper ZIL SSD from SUN, and a RAID10... would I
> > be
> > able to support all the VM's or would it still be pushing the limits a 44
> > disk pool?
>
> If it weren't a clos
markwo...@yahoo.com said:
> So the question is with a proper ZIL SSD from SUN, and a RAID10... would I be
> able to support all the VM's or would it still be pushing the limits a 44
> disk pool?
If it weren't a closed 7000-series appliance, I'd suggest running the
"zilstat" script. It should mak
Hey, thanks for the replies everyone.
Sadly, most of those options will not work. Since we are using a Sun Unified
Storage 7210, the only option is to buy the Sun SSDs for it, which is about
$15k USD for a pair. We also don't have the ability to shut off the ZIL or any of
the other options that o
On Fri, Aug 27 at 6:16, Eff Norwood wrote:
David asked me what I meant by "filled up". If you make the unwise
decision to use an SSD as your ZIL, at some point days to weeks
after you install it, all of the pages will be allocated and you
will suddenly find the device to be slower than a conven
Hi Cindy,
I'll investigate more next week since I'm in a hurry to leave, but one
point now:
> I'm no device expert but we see this problem when firmware updates or
> other device/controller changes change the device ID associated with
> the devices in the pool.
This is the internal disk in a lap
Hi Rainer,
I'm no device expert but we see this problem when firmware updates or
other device/controller changes change the device ID associated with
the devices in the pool.
In general, ZFS can handle controller/device changes if the driver
generates or fabricates device IDs. You can view devic
Bob Friesenhahn wrote:
On Thu, 26 Aug 2010, George Wilson wrote:
What gets "scrubbed" in the slog? The slog contains transient data
which exists for only seconds at a time. The slog is quite likely to be
empty at any given point in time.
Bob
Yes, the typical ZIL block never lives long e
I have been running some large VirtualBox guest images on OpenSolaris (b134) -
and have on three occasions had my zpool develop unrecoverable errors. The
corruption developed in the VirtualBox disk image files. These are large files
with intense activity, so a better chance of seeing errors, I suppose.
Hi,
Some time ago, Jeff Bonwick provided source code and an x86 binary for a tool to
recover detached disks.
Refs at http://www.opensolaris.org/jive/thread.jspa?messageID=229969 or
http://opensolaris.org/jive/thread.jspa?messageID=303895
Does someone have a binary for SPARC at hand? I can't comp
On Thu, 26 Aug 2010, George Wilson wrote:
David Magda wrote:
On Wed, August 25, 2010 23:00, Neil Perrin wrote:
Does a scrub go through the slog and/or L2ARC devices, or only the
"primary" storage components?
A scrub will go through slogs and primary storage devices. The L2ARC device
is cons
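In command terms (pool name hypothetical):
  zpool scrub tank
  zpool status -v tank   # scrub progress; slog blocks are verified, L2ARC contents are not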
On Fri, Aug 27, 2010 at 05:51:38AM -0700, David Magda wrote:
> On Fri, August 27, 2010 08:46, Eff Norwood wrote:
> > Saso is correct - ESX/i always uses F_SYNC for all writes and that is for
> > sure your performance killer. Do a snoop | grep sync and you'll see the
> > sync write calls from VMWare
Sean,
> I am glad it helped; but removing anything from /dev/*dsk is a kludge that
> cannot be accepted/condoned/supported.
no doubt about this: two parts of the kernel (zfs vs. devfs?) disagreeing
about devices mustn't happen.
Rainer
Rainer,
devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk
and running devfsadm -Cv did help.
I am glad it helped; but removing anything from /dev/*dsk is a kludge
that cannot be accepted/condoned/supported.
Regards... Sean.
On Aug 27, 2010, at 1:04 AM, Mark wrote:
> We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I
> installed I selected the best bang for the buck on the speed vs capacity
> chart.
>
> We run about 30 VM's on it, across 3 ESX 4 servers. Right now, its all
> running NFS,
Mark J Musante writes:
> On Fri, 27 Aug 2010, Rainer Orth wrote:
>> zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
>> correctly believe it's c11t0d0(s3) instead.
>>
>> Any suggestions?
>
> Try removing the symlinks or using 'devfsadm -C' as suggested here:
>
> https://def
David asked me what I meant by "filled up". If you make the unwise decision to
use an SSD as your ZIL, at some point days to weeks after you install it, all
of the pages will be allocated and you will suddenly find the device to be
slower than a conventional disk drive. This is due to the way SS
On Fri, August 27, 2010 08:46, Eff Norwood wrote:
> Saso is correct - ESX/i always uses F_SYNC for all writes and that is for
> sure your performance killer. Do a snoop | grep sync and you'll see the
> sync write calls from VMWare. We use DDRdrives in our production VMWare
> storage and they are ex
Saso is correct - ESX/i always uses F_SYNC for all writes and that is for sure
your performance killer. Do a snoop | grep sync and you'll see the sync write
calls from VMWare. We use DDRdrives in our production VMWare storage and they
are excellent for solving this problem. Our cluster supports
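For what it's worth, outside the closed appliances attaching such a device as a slog is
just (device names hypothetical):
  zpool add tank log c3t0d0
  # or mirrored, to protect the in-flight log:
  zpool add tank log mirror c3t0d0 c3t1d0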
LaoTsao 老曹 writes:
> may be boot a livecd then export and import the zpool?
I've already tried all sorts of contortions to regenerate
/etc/path_to_inst to no avail. This is simply a case of `should not
happen'.
Rainer
On Fri, 27 Aug 2010, Rainer Orth wrote:
zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believe it's c11t0d0(s3) instead.
Any suggestions?
Try removing the symlinks or using 'devfsadm -C' as suggested here:
https://defect.opensolaris.org/bz/show_bug.cgi?id=14
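The cleanup step itself is just:
  devfsadm -Cv          # remove dangling /dev/dsk and /dev/rdsk links, verbosely
  zpool status rpool    # re-check which device name the pool reports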
hi
Maybe boot a live CD, then export and import the zpool?
regards
On 8/27/2010 8:27 AM, Rainer Orth wrote:
For quite some time I'm bitten by the fact that on my laptop (currently
running self-built snv_147) zpool status rpool and format disagree about
the device name of the root disk:
r...@mas
For quite some time I've been bitten by the fact that on my laptop (currently
running self-built snv_147) zpool status rpool and format disagree about
the device name of the root disk:
r...@masaya 14 > zpool status rpool
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk f
On 8/27/2010 12:25 AM, Michael Dodwell wrote:
Lao,
I had a look at HAStoragePlus etc. and from what I understand that's to
mirror local storage across 2 nodes for services to be able to access, 'DRBD
style'.
Not true: HAS+ uses shared storage.
In this case, since ZFS is not a clustered FS, so i
Hi,
In a setup similar to yours I changed from a single 15-disk raidz2 to 7 mirrors
of 2 disks each. The change in performance was stellar. The key point in
serving things for VMware is that it always issues synchronous writes, whether on
iSCSI or NFS. When you have tens of VMs the resulting traff
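As a sketch, that layout is created along these lines (device names hypothetical; the
pattern continues for all seven pairs):
  zpool create tank \
      mirror c0t0d0 c0t1d0 \
      mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0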
If I remember correctly, ESX always uses synchronous writes over NFS. If
so, adding a dedicated log device (such as a DDRdrive) might help you
out here. You should be able to test it by disabling the ZIL for a short
while and seeing if performance improves.
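For testing only, a couple of hedged options (dataset and pool names hypothetical; which
one applies depends on the build):
  # builds that have the per-dataset sync property:
  zfs set sync=disabled tank/vmware    # run the benchmark with this in place
  zfs set sync=standard tank/vmware    # then put it back
  # older builds: the global zil_disable tunable in /etc/system (needs a reboot)
  set zfs:zil_disable = 1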
Hi,
I think the local ZFS filesystem with raidz on the 7210 is not the
problem (when there are fast HDs), but you can test it with e.g.
bonnie++ (downloadable at sunfreeware.com). Also, NFS should not be the
problem, because iSCSI is also very slow (isn't it?).
Some other ideas are:
Network c