Christian Auby wrote:
On Wed, 8 Jul 2009, Moore, Joe wrote:
That's true for the worst case, but zfs mitigates that somewhat by batching i/o into a transaction group. This means that i/o is done every 30 seconds (or 5 seconds, depending on the version you're running), allowing multiple writes
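For reference, the commit interval is tunable on builds that expose zfs_txg_timeout (value in seconds); treat this as a sketch, not a recommendation:

* In /etc/system, applied at the next reboot:
set zfs:zfs_txg_timeout = 5

# Check the current value on a live system:
echo zfs_txg_timeout/D | mdb -k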
You might also search for OpenSolaris NAS projects. Some that I've seen previously involve nearly the same config you're building - a CF card or USB stick with the OS and a number of HDDs in a zfs pool for the data only.
I am not certain which ones I've seen, but you can look for EON, and Pulsar
> Trying to spare myself the expense as this is my home system, so budget is a constraint.
> What I am trying to avoid is having multiple raidz's, because every time I have another one I lose a lot of extra space to parity. Much like in raid 5.
There's a common perception which I tend to sh
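(A quick illustration of the parity overhead in question, with made-up sizes: eight 1TB drives as one raidz1 vdev give up one drive to parity, leaving roughly 7TB usable, while the same eight drives split into two 4-drive raidz1 vdevs give up two drives, leaving roughly 6TB.)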
On 07/09/09 17:25, Mark Michael wrote:
Thanks for the info. Hope that the pfinstall changes to support zfs root flash jumpstarts can be extended to support luupgrade -f at some point soon.
BTW, where can I find an example profile? Do I just substitute in the
install_type flash_install
> On Wed, 8 Jul 2009, Moore, Joe wrote:
> That's true for the worst case, but zfs mitigates that somewhat by batching i/o into a transaction group. This means that i/o is done every 30 seconds (or 5 seconds, depending on the version you're running), allowing multiple writes to be wr
Thanks for the info. Hope that the pfinstall changes to support zfs root flash jumpstarts can be extended to support luupgrade -f at some point soon.
BTW, where can I find an example profile? Do I just substitute in the
install_type flash_install
archive_location ...
for
install_type
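For what it's worth, a minimal sketch of such a profile; the server name, archive path and disk below are placeholders rather than anything taken from a real setup:

install_type      flash_install
archive_location  nfs nfs-server:/export/flar/s10u7-zfsroot.flar
partitioning      explicit
pool rpool auto auto auto c0t0d0s0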
I'm not sure if this is the correct list for this query; however, I am trying to create a number of zpools inside a zone. I am running snv_117 and this is an ipkg-branded zone. Here is the zone configuration:
a...@vs-idm:~$ zonecfg -z vsnfs-02 export
> create -b
> set zonepath=/rpool/zones/vsnfs-0
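(If the goal is to run zpool create from inside the zone, the zone usually needs the raw devices delegated to it; a sketch of that zonecfg syntax with a hypothetical disk name follows, and whether pool creation is then actually permitted inside an ipkg zone on snv_117 is a separate question:)

# zonecfg -z vsnfs-02
zonecfg:vsnfs-02> add device
zonecfg:vsnfs-02:device> set match=/dev/rdsk/c1t2d0s0
zonecfg:vsnfs-02:device> end
zonecfg:vsnfs-02> commit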
2009.06 is v111b, but you're running v111a. I don't know, but perhaps the a->b
transition addressed this issue, among others?
After reading many, many threads on ZFS performance today (top of the list in the forum, and some chains of references), I applied a bit of tuning to the server. In particular, I've set zfs_write_limit_override to 384 MB so my cache is spooled to disks more frequently (if streaming lots of w
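(For anyone wanting to try the same, a sketch of how that tunable is usually set; 0x18000000 is 384 MB, and this assumes the build still carries zfs_write_limit_override:)

* In /etc/system, persistent across reboots:
set zfs:zfs_write_limit_override = 0x18000000

# Or on the live system via mdb, lost at reboot:
echo zfs_write_limit_override/Z 0x18000000 | mdb -kw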
On Thu, Jul 9, 2009 at 8:42 PM, Norbert wrote:
> Does anyone have the code/script to change the GUID of a ZFS pool?
I wrote such a tool for my client around a year ago, and that client agreed to release the code.
However, the API I used has since been changed and is no longer available. So you cannot co
Does anyone have the code/script to change the GUID of a ZFS pool?
Thanks for the link, Richard.
I guess the next question is: how safe would it be to run snv_114 in production? Running something that would be technically "unsupported" makes a few folks here understandably nervous...
-Greg
On Thu, 2009-07-09 at 10:13 -0700, Richard Elling wrote:
> Greg Mason wro
I don't swear. The word it bleeped was not a bad word.
I have a much more generic question regarding this thread. I have a Sun T5120 (T2 quad core, 1.4GHz) with two 10K RPM SAS drives in a mirrored pool running Solaris 10 u7. The disk performance seems horrible. I have the same apps running on a Sun X2100M2 (dual core 1.8GHz AMD) also running Sol
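(Not an answer, but numbers from something like the following, taken on both boxes while the apps are busy, would let the list compare service times and per-vdev load; the 5-second interval is arbitrary:)

# iostat -xnz 5
# zpool iostat -v 5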
Haudy Kazemi wrote:
Adding additional data protection options is commendable. On the other hand, I feel there are important gaps in the existing feature set that are worthy of a higher priority, not the least of which is the automatic recovery of uberblock / transaction group problems (see
Greg Mason wrote:
I'm trying to find documentation on how to set and work with user and
group quotas on ZFS. I know it's quite new, but googling around I'm just
finding references to a ZFS quota and refquota, which are
filesystem-wide settings, not per user/group.
Cindy does an excellent job
Greg Mason wrote:
I'm trying to find documentation on how to set and work with user and
group quotas on ZFS. I know it's quite new, but googling around I'm just
finding references to a ZFS quota and refquota, which are
filesystem-wide settings, not per user/group.
Also, after reviewing a few bug
Flash archive on zfs means archiving an entire root pool (minus any
explicitly excluded datasets), not an individual BE. These types of
flash archives can only be installed using Jumpstart and are intended to
install an entire system, not an individual BE.
Flash archives of a single BE could
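(For completeness, creating such an archive of a running ZFS-root system is typically a plain flarcreate; the archive name and destination below are placeholders:)

# flarcreate -n s10u7-zfsroot -c /net/filer/export/flar/s10u7-zfsroot.flar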
I'm trying to find documentation on how to set and work with user and
group quotas on ZFS. I know it's quite new, but googling around I'm just
finding references to a ZFS quota and refquota, which are
filesystem-wide settings, not per user/group.
Also, after reviewing a few bugs, I'm a bit confuse
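(For the archives, the new per-user/per-group properties look like the following; the pool, filesystem and names here are made up:)

# zfs set userquota@alice=10G tank/home
# zfs set groupquota@staff=100G tank/home
# zfs get userquota@alice,userused@alice tank/home
# zfs userspace tank/home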
I've been hoping to get my hands on patches that permit Sol10U7 to do a luupgrade -f of a ZFS root-based ABE since Solaris 10 10/08.
Unfortunately, after applying patch IDs 119534-15 and 124630-26 to both the PBE and the miniroot of the OS image, I'm still getting the same "ERROR: Field 2 - Inva
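(For context, the command being attempted looks roughly like this; the BE name, OS image path and archive location are placeholders:)

# luupgrade -f -n zfsBE -s /net/server/export/s10u7 -a /net/server/export/flar/s10u7.flar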
> > I installed opensolaris and set up rpool as my base install on a single 1TB drive
>
> If I understand correctly, you have rpool and the data pool configured all as one pool?
Correct.
> That's probably not what you'd really want. For one part, the bootable root pool should all be ava
On Jul 9, 2009, at 4:22 AM, Jim Klimov wrote:
To tell the truth, I expected zvols to be faster than filesystem datasets. They seem to have less overhead without inodes, POSIX, ACLs and so on. So I'm puzzled by test results.
I'm now considering the dd i/o block size, and it means a lot in
Thanks everyone for the patch IDs.
On Wed, Jul 8, 2009 at 4:50 PM, Enda O'Connor wrote:
> Hi
> for SPARC
> 119534-15
> 124630-26
>
> for x86
> 119535-15
> 124631-27
>
> Higher revs of these will also suffice.
>
> Note these need to be applied to the miniroot of the jumpstart image so that
> it
I wonder exactly what's going on. Perhaps it is the cache flushes that are causing the SCSI errors when trying to use the SSD (Intel X25-E and X25-M) disks? Btw, I'm seeing the same behaviour on both an X4500 (SATA/Marvell controller) and the X4240 (SAS/LSI controller). Well, almost. On the X4
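(One rough way to test the cache-flush theory is to disable flushes temporarily and see whether the errors stop; note this weakens data safety on power loss and assumes the zfs_nocacheflush tunable is present on your build:)

* In /etc/system:
set zfs:zfs_nocacheflush = 1

# Or live via mdb, reverted at reboot:
echo zfs_nocacheflush/W0t1 | mdb -kw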
One more note:
> For example, if you were to remake the pool (as suggested above for rpool and below for the raidz data pool) - where would you re-get the original data for copying over again?
Of course, if you go with the idea of buying 4 drives and building a raidz1 vdev right away, an
> I installed opensolaris and set up rpool as my base install on a single 1TB drive
If I understand correctly, you have rpool and the data pool configured all as one pool?
That's probably not what you'd really want. For one part, the bootable root pool should all be available to GRUB from a s
You might also want to force ZFS into accepting a faulty root pool:
# zpool set failmode=continue rpool
//Jim
You can also select which snapshots you'd like to copy - and egrep away what you don't need.
Here's what I did to back up some servers to a filer (as compressed ZFS snapshots stored into files for further simple deployment on multiple servers, as well as offsite rsyncing of the said files). The e
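(A minimal sketch of that kind of loop, with made-up pool, snapshot pattern and filer path, since the original script was trimmed from the digest:)

# Pick the snapshots you want, egrep away the rest, and dump each one compressed:
zfs list -H -t snapshot -o name | egrep 'tank.*@daily' | while read snap; do
    zfs send "$snap" | gzip -c > /net/filer/backups/$(echo "$snap" | tr '/@' '__').zfs.gz
done
# Restore later with something like: gzcat <file>.zfs.gz | zfs receive -d tank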
To tell the truth, I expected zvols to be faster than filesystem datasets. They seem to have less overhead without inodes, POSIX, ACLs and so on. So I'm puzzled by test results.
I'm now considering the dd i/o block size, and it means a lot indeed, especially if compared to zvol results with sm
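(For comparison, the sort of test being described could look like this; the pool, volume and file names are placeholders, the zvol must already exist, and each run writes the same 1 GB:)

# Small vs. large block size against the zvol's raw device:
ptime dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol01 bs=8k count=131072
ptime dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol01 bs=128k count=8192

# The same amount of data written into a file on a filesystem dataset:
ptime dd if=/dev/zero of=/tank/fs/testfile bs=128k count=8192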
I forgot to mention this is a
SunOS biscotto 5.11 snv_111a i86pc i386 i86pc
version.
Maurilio.
Hi,
I have a PC where a pool suffered a disk failure. I replaced the failed disk and the pool resilvered but, after resilvering, it was in this state:
mauri...@biscotto:~# zpool status iscsi
pool: iscsi
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
Hmm, scratch that. Maybe.
I did not at first get the point that your writes to a filesystem dataset work quickly.
Perhaps the filesystem is (better) cached indeed, i.e. *maybe* zvol writes are synchronous and zfs writes may be cached and thus async? Try playing around with the relevant dataset attributes.
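(A starting point, assuming hypothetical dataset names tank/fs for the filesystem and tank/vol01 for the zvol; compare whatever differs between the two:)

# zfs get compression,checksum,recordsize tank/fs
# zfs get compression,checksum,volblocksize tank/vol01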
> Ok, so this is my solution. Please be advised I am a total Linux newbie, so I am learning as I go along. I installed opensolaris and set up rpool as my base install on a single 1TB drive. I attached one of my NTFS drives to the system, then used a utility called prtparts to get the name of the N