On Wed, Dec 23, 2009 at 10:36 PM, Ian Collins wrote:
> An EFI label isn't "OS specific formatting"!
At the risk of sounding really stupid: is an EFI label the same as using
GUID partitions? I think I remember reading about setting up GUID-partitioned disks
in FreeBSD. If so, I could try
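For what it's worth, what Solaris calls an EFI label is a GPT (GUID Partition Table) label, and Solaris ZFS writes one itself when it is handed a whole disk. A quick way to see what a FreeBSD disk carries, as a sketch with a hypothetical device name:
    gpart show da1     # a scheme of "GPT" means the disk has a GUID/EFI label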
Mattias Pantzare wrote:
I'm not sure how to go about it. Basically, how should I format my
drives in FreeBSD to create a zpool which can be imported into OpenSolaris?
I'm not sure about BSD, but Solaris ZFS works with whole devices. So there
isn't any OS specific formatting involved. I
I am planning on building an OpenSolaris server to replace my NAS.
My case has room for 20 hot-swap SATA drives and 1 or 2 internal drives. I
was planning on going with 5 raidz vdevs, each with 4 drives, and maybe a hot
spare inside the case in one of the extra slots.
I am going to use 2 Supermicr
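As a rough sketch, that layout would be created with something like the following (pool and device names hypothetical):
    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        raidz c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 \
        spare c3t4d0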
On Tue, Dec 22 at 12:33, James Risner wrote:
As for whether or not to do raidz, for me the issue is performance.
I can't handle the raidz write penalty. If I needed triple drive
protection, a 3way mirror setup would be the only way I would go. I
don't yet quite understand why a 4+ drive raidz3
Len Zaifman wrote:
Because we have users who will create millions of files in a directory, it would
be nice to report the number of files a user or a group has in a filesystem.
Is there a way (other than find) to get this?
I don't know if there is a good way, but I have noticed that with
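Two rough options, neither perfect (dataset, mountpoint, and user names hypothetical): zfs userspace reports per-user space cheaply but not file counts, so find is still the straightforward way to count files:
    zfs userspace -o name,used tank/home                 # per-user space, fast, no file counts
    find /tank/home -xdev -user alice -type f | wc -l    # actual file count, but walks the whole tree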
On Wed, 23 Dec 2009, Yanjun (Tiger) Hu wrote:
Hi Jim,
I think Tony was asking a very valid question. It reminds me of
http://developers.sun.com/solaris/articles/sol8memory.html#where.
The question is valid, but the answer will be misleading. Regardless
of whether a memory page represents part of a
I think he's looking for a single, intuitively obvious, easy-to-access indicator
of memory usage along the lines of the vmstat free column (before ZFS) that
showed the current amount of free RAM.
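One quick indicator that comes close: the ARC exposes its current size via kstats, so something like the following (a sketch using stock tools) shows how much of the "missing" free RAM is ARC that can shrink under memory pressure:
    kstat -p zfs:0:arcstats:size   # current ARC size, in bytes
    kstat -p zfs:0:arcstats:c      # current ARC target size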
On Dec 23, 2009, at 4:09 PM, Jim Mauro wrote:
> Hi Anthony -
>
> I don't get this. How does the presen
>> I'm not sure how to go about it. Basically, how should I format my
>> drives in FreeBSD to create a zpool which can be imported into OpenSolaris?
>
> I'm not sure about BSD, but Solaris ZFS works with whole devices. So there
> isn't any OS specific formatting involved. I assume BSD does the sa
>> UFS is a totally different issue; sync writes are always sync'ed.
>>
>> I don't work for Sun, but it would be unusual for a company to accept
>> willful negligence as a policy. Ambulance chasing lawyers love that
>> kind of thing.
>
> The Thor replaces a geriatric Enterprise system running Sola
On Thu, Dec 24, 2009 at 12:07:03AM +0100, Jeroen Roodhart wrote:
> We are under the impression that a setup that serves NFS over UFS has
> the same assurance level as a setup using "ZFS without ZIL". Is this
> impression false?
Completely. It's closer to "UFS mount -o async", without the risk o
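Rather than running with the ZIL disabled, the usual middle ground is to give the pool a dedicated log device so sync writes stay honest but land on fast media; a sketch with a hypothetical pool and device name:
    zpool add tank log c3t0d0      # sync writes go to the slog instead of the main vdevs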
On Dec 23, 2009, at 3:00 PM, Michael Herf wrote:
For me, arcstat.pl is a slam-dunk predictor of dedup throughput. If
my "miss%" is in the single digits, dedup write speeds are
reasonable. When the arc misses go way up, dedup writes get very
slow. So my guess is that this issue depends entir
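For anyone who wants to watch the same thing, arcstat.pl just takes an interval (and optional count); the miss% column is the one being referred to. A sketch:
    ./arcstat.pl 5     # print ARC hit/miss statistics every 5 seconds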
On Wed, Dec 23, 2009 at 6:07 PM, Ian Collins wrote:
> Is the pool on slices or whole drives? If the latter, you should be able
> to import the pool (unless BSD introduces any incompatibilities).
It's on whole disks, but if I remember right those disks are tied to the
Highpoint RAID card.
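Assuming the disks really do carry plain whole-disk ZFS labels (and not Highpoint metadata), and the pool version is one OpenSolaris understands, the move itself is just export and import; a sketch with a hypothetical pool name:
    zpool export tank      # on FreeBSD, before pulling the disks
    zpool import           # on OpenSolaris: scans attached disks, lists importable pools
    zpool import tank      # or 'zpool import -f tank' if the pool was never cleanly exported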
On Thu 24/12/09 10:31 , "Thomas Burgess" wonsl...@gmail.com sent:
> I was wondering what the best method of moving a pool from FreeBSD 8.0 to
> OpenSolaris is.
>
> When I originally built my system, it was using hardware which wouldn't
> work in OpenSolaris, but I'm about to do an upgrade so I sh
Hi Richard, ZFS-discuss.
> Message: 2
> Date: Wed, 23 Dec 2009 09:49:18 -0800
> From: Richard Elling
> To: Auke Folkerts
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Benchmarks results for ZFS + NFS,using
> SSD's as sl
For me, arcstat.pl is a slam-dunk predictor of dedup throughput. If my
"miss%" is in the single digits, dedup write speeds are reasonable. When the
arc misses go way up, dedup writes get very slow. So my guess is that this
issue depends entirely on whether or not the DDT is in RAM or not. I don't
h
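On builds that have dedup (snv_128 and later), one rough way to check whether the DDT could plausibly fit in RAM is to ask zdb for the dedup table statistics; a sketch with a hypothetical pool name:
    zdb -DD tank    # prints DDT entry counts and per-entry on-disk/in-core sizes, plus a histogram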
erik.trim...@sun.com said:
> The suggestion was to make the SSD on each machine an iSCSI volume, and add
> the two volumes as a mirrored ZIL into the zpool.
I've mentioned the following before
For a poor-person's slog which gives decent NFS performance, we have had
good results with allocat
I was wondering what the best method of moving a pool from FreeBSD 8.0 to
OpenSolaris is.
When I originally built my system, it was using hardware which wouldn't work
in OpenSolaris, but I'm about to do an upgrade so I should be able to use
OpenSolaris when I'm done.
My current system uses a High
Hi Jim,
I think Tony was asking a very valid question. It reminds me of
http://developers.sun.com/solaris/articles/sol8memory.html#where.
Regards,
Tiger
Jim Mauro wrote:
Hi Anthony -
I don't get this. How does the presence (or absence) of the ARC change
the methodology for doing memory capacit
Hi Anthony -
I don't get this. How does the presence (or absence) of the ARC change
the methodology for doing memory capacity planning?
Memory capacity planning is all about identifying and measuring consumers.
Memory consumers:
- The kernel.
- User processes.
- The ZFS ARC, which is technically
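One way to actually enumerate those consumers on a live system is the kernel debugger's memstat dcmd; a sketch (needs root, and the exact categories vary by release):
    echo ::memstat | mdb -k    # breaks physical memory into Kernel, ZFS File Data, Anon, Exec and libs, Page cache, Free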
Two things:
1) Solaris 10 10/09 is Solaris 10 update 8, not 9 - sorry for the confusion.
2) Setting userquota@user on Solaris 10 u8 and looking from a Linux NFS client
with quotas installed:
Disk quotas for user leonardz (uid 1006):
Filesystem blocks quota limit grace files quota limi
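For reference, the server-side properties involved look roughly like this (dataset name and quota value hypothetical):
    zfs set userquota@leonardz=10G tank/home
    zfs get userquota@leonardz,userused@leonardz tank/home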
Paul Armstrong wrote:
I'm surprised at the number as well.
Running it again, I'm seeing it jump fairly high just before the fork errors:
bash-4.0# ps -ef | grep zfsdle | wc -l
20930
(the next run of ps failed due to the fork error).
So maybe it is running out of processes.
ZFS file data fro
Some questions below...
On Dec 23, 2009, at 8:27 AM, Auke Folkerts wrote:
Hello,
We have performed several tests to measure the performance
using SSD drives for the ZIL.
Tests are performed using an X4540 "Thor" with a zpool consisting of
3 14-disk RaidZ2 vdevs. This fileserver is connected t
On Dec 23, 2009, at 7:45 AM, Markus Kovero wrote:
Hi, I threw 24GB of RAM and a couple of the latest Nehalems at it, and
dedup=on seemed to cripple performance without actually using much
CPU or RAM. It's quite unusable like this.
What does the I/O look like? Try "iostat -zxnP 1" and see if there
From zpool history:
zpool create -f zfs_hpf c6t600A0B8000495A51081F492C644Dd0
c6t600A0B8000495B1C053148B41F54d0 c6t600A0B8000495B1C053248B42036d0
c6t600A0B8000495B1C05B948CA87A2d0
these are RAID5 devices from a 2540 disk controller: we did not use raidz on top
we cleaned as follo
Chris:
This happened to us recently due to some hardware failures.
zpool scrub poolname
cleared this up for us. We did not try rm'ing the damaged file at all.
Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Biology
The Hospital for Sick Children
555 University Ave.
Tor
I have a system that took a RAID6 hardware array and created a ZFS pool on top
of it (the pool has only one device in it, which is the entire RAID6 HW array). A
few weeks ago, the Sun V440 somehow got completely wrapped around the axle and
the operating system had to be rebuilt. Once the system was
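If the pool itself survived on the array, it can usually be brought back on the rebuilt OS along these lines (pool name hypothetical):
    zpool import             # list pools visible on attached devices
    zpool import -f mypool   # -f, since the old install never exported it
    zpool scrub mypool       # then verify what is actually readable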
Hello,
We have performed several tests to measure the performance
using SSD drives for the ZIL.
Tests are performed using an X4540 "Thor" with a zpool consisting of
3 14-disk RaidZ2 vdevs. This fileserver is connected to a CentOS 5.4
machine which mounts a filesystem on the zpool via NFS, over
Deirdre has posted a video of the presentation Darren Moffat gave at
the November 2009 Solaris Security Summit:
http://blogs.sun.com/video/entry/zfs_crypto_data_encryption_for
Slides (470 KB PDF):
http://wikis.sun.com/download/attachments/164725359/osol-sec-sum-09-zfs.pdf
On Tue, 22 Dec 2009, Marty Scholes wrote:
If there is a RAIDZ write penalty over mirroring, I am unaware of
it. In fact, sequential writes are faster under RAIDZ.
There is always an IOPS penalty for raidz when writing or reading,
given a particular zfs block size. There may be a write pena
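As a rough back-of-the-envelope illustration (assuming ~100 random IOPS per spindle): every block in a raidz vdev is spread across all of its data disks, so a 4-disk raidz vdev delivers on the order of one disk's worth of small random I/O, roughly 100 IOPS. The same 4 disks as two 2-way mirrors give two independent vdevs, roughly 2 x 100 = 200 IOPS for small random writes and more for reads, since each half of a mirror can service reads on its own. Sequential bandwidth is a different story, which is why raidz can still win on streaming writes.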
Hi, I threw 24GB of RAM and a couple of the latest Nehalems at it, and dedup=on seemed
to cripple performance without actually using much CPU or RAM. It's quite unusable
like this.
Andrey Kuzmin wrote:
And how do you expect the mirrored iSCSI volume to work after
failover, with secondary (ex-primary) unreachable?
Regards,
Andrey
As a normal Degraded mirror. No problem.
The suggestion was to make the SSD on each machine an iSCSI volume, and
add the two volumes as a
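A sketch of what that ends up as on the active node (pool name and devices hypothetical):
    zpool add tank log mirror <local-ssd-device> <iscsi-lun-from-peer>
After a failover the surviving node imports the pool with its peer's half of the log mirror unreachable, so the pool shows up DEGRADED but the local half of the slog keeps working.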
And how do you expect the mirrored iSCSI volume to work after
failover, with secondary (ex-primary) unreachable?
Regards,
Andrey
On Wed, Dec 23, 2009 at 9:40 AM, Erik Trimble wrote:
> Charles Hedrick wrote:
>>
>> Is iSCSI reliable enough for this?
>>
>
> YES.
>
> The original idea is a good o