Hi Prashanth,
This was about a year ago. I believe I ran bonnie++ and IOzone tests.
I also tried to simulate an OLTP load. The 15-20% overhead for ZFS was
measured against UFS on a raw disk; UFS on SVM showed almost exactly 15% lower
performance than raw UFS. UFS and XFS on raw disk were pretty similar
in terms o
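For reference, the kind of bonnie++ and IOzone runs described above might look roughly like this; the mount point, file sizes, and user are illustrative assumptions, not the original test setup:

    # bonnie++: 4096 MB file set (well above RAM to defeat caching), run as nobody
    bonnie++ -d /tank/bench -s 4096 -u nobody

    # IOzone: auto mode, sequential and random read/write, capped at 4 GB
    iozone -a -i 0 -i 1 -i 2 -g 4G -f /tank/bench/iozone.tmp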
Wow. That's an incredibly cool story. Thank you for sharing it! Does
the Thumper today pretty much resemble what you saw then?
Best Regards,
Jason
On 1/23/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote:
> This is a bit off-topic...but since the Thumper is the poster child
> for ZFS I hope it's not too off-topic.
>
> What are the actual origins of the Thumper? I've heard varying stories
> in word and print. It appears that the Thumper was the original server
> Bechtolsheim designed at Kealia as a
Jason J. W. Williams wrote:
Hi All,
This is a bit off-topic...but since the Thumper is the poster child
for ZFS I hope it's not too off-topic.
What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server
Bechtolshei
Neal Pollack wrote:
Jason J. W. Williams wrote:
So I was curious if anyone had any insights into the history/origins
of the Thumper...or just wanted to throw more rumors on the fire. ;-)
Thumper was created to hold the entire electronic transcript of the
Bill Clinton impeachment proceed
Jason J. W. Williams wrote:
Hi All,
This is a bit off-topic...but since the Thumper is the poster child
for ZFS I hope it's not too off-topic.
What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server
Bechtolshei
Hi Jason,
> My company did a lot of LVM+XFS vs. SVM+UFS testing in addition to
> ZFS. Overall, LVM's overhead is abysmal. We witnessed performance hits
> of 50%+. SVM only reduced performance by about 15%. ZFS was similar,
> though a tad higher.
Yes, LVM snapshots' overhead is high. But I've seen
Hi Prashanth,
My company did a lot of LVM+XFS vs. SVM+UFS testing in addition to
ZFS. Overall, LVM's overhead is abysmal. We witnessed performance hits
of 50%+. SVM only reduced performance by about 15%. ZFS was similar,
though a tad higher.
Also, my understanding is you can't write to a ZFS sna
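As noted above, ZFS snapshots themselves are read-only; a clone is the usual way to get a writable view of one. A minimal sketch with hypothetical dataset names:

    # Snapshots are read-only; a clone gives a writable filesystem backed by one.
    zfs snapshot tank/data@before-test
    zfs clone tank/data@before-test tank/data-rw
    # ... write under /tank/data-rw, then discard the clone:
    zfs destroy tank/data-rw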
> > Is there someway to synchronously mount a ZFS filesystem?
> > '-o sync' does not appear to be honoured.
>
> No there isn't. Why do you think it is necessary?
Specifically, I was trying to compare ZFS snapshots with LVM snapshots on
Linux. One of the tests does writes to an ext3FS (that's on
Hi All,
This is a bit off-topic...but since the Thumper is the poster child
for ZFS I hope it's not too off-topic.
What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server
Bechtolsheim designed at Kealia as a mas
Frank Cusack wrote:
It's interesting the topics that come up here, which really have little to
do with zfs. I guess it just shows how great zfs is. I mean, you would
never have a ufs list that talked about the merits of sata vs sas and what
hardware do I buy. Also interesting is that zfs expos
Hi Eric,
eric kustarz wrote:
The first thing i would do is see if any I/O is happening ('zpool iostat
1'). If there's none, then perhaps the machine is hung (which you then
would want to grab a couple of '::threadlist -v 10's from mdb to figure
out if there are hung threads).
there seems to
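The diagnostic steps Eric suggests amount to something like the following, run on the affected host (the mdb invocation assumes root and a live kernel):

    # Is the pool doing any I/O at all? Sample once per second.
    zpool iostat 1

    # If it is idle and the box looks hung, grab kernel thread stacks a
    # couple of times and look for stuck threads.
    echo '::threadlist -v 10' | mdb -k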
Hi Peter,
Ah! That clears it up for me. Thank you.
Best Regards,
Jason
On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote:
On 1/23/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
> Hi Peter,
>
> Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
> myself. Is it X stripes
I'm looking at bringing up a new Solaris 10 based file server running off an
older UltraSPARC-IIi 360MHz with 512mb ram. I've brought up the 11/06 release
from scratch no patches installed at this time. I have 4 externally attached
36gb scsi devices off the hosts systems scsi bus.
After setti
Note that the bad disk on the node caused a normal reboot to hang.
I also verified that sync from the command line hung. I don't know
how ZFS (or Solaris) handles situations involving bad disks...does
a bad disk block proper ZFS/OS handling of all IO, even to the
other healthy disks?
> Note also that for most applications, the size of their IO operations
> would often not match the current page size of the buffer, causing
> additional performance and scalability issues.
Thanks for mentioning this, I forgot about it.
Since ZFS's default block size is configured to be larger th
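For anyone following along, the block size being discussed is the per-dataset recordsize property (128K by default); matching it to an application's I/O size is done per dataset. The dataset name below is hypothetical:

    zfs get recordsize tank/db        # defaults to 128K
    zfs set recordsize=8k tank/db     # match a database's 8K pages; applies to
                                      # files written after the change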
Peter Schuller wrote:
Hello,
There have been comparisons posted here (and in general out there on the net)
for various RAID levels and the chances of e.g. double failures. One problem
that is rarely addressed, though, is the various edge cases that significantly
impact the probability of loss
On 1/23/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
Hi Peter,
Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
myself. Is it X stripes consisting of Y disks?
Sorry. Took a short cut on that bit. It's x data disks + y parity. So in the
case of raidz1, y=1; in the c
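In zpool terms, a hedged example of the x+y layout (device names are made up): each raidz vdev below is 5 data disks + 1 parity (x=5, y=1), and the pool stripes across the two vdevs.

    zpool create tank \
        raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0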
Hi Peter,
Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
myself. Is it X stripes consisting of Y disks?
Best Regards,
Jason
On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote:
On 1/23/07, Neal Pollack <[EMAIL PROTECTED]> wrote:
> Hi: (Warning, new zfs user question
It's interesting the topics that come up here, which really have little to
do with zfs. I guess it just shows how great zfs is. I mean, you would
never have a ufs list that talked about the merits of sata vs sas and what
hardware do I buy. Also interesting is that zfs exposes hardware bugs
yet
On 23-Jan-07, at 4:51 PM, Bart Smaalders wrote:
Frank Cusack wrote:
Yes, I am an experienced Solaris admin and know all about devfsadm :-)
and the older disks command.
It doesn't help in this case. I think it's a BIOS thing. Linux and
Windows can't see IDE drives that aren't there at boot tim
[EMAIL PROTECTED] wrote:
In order to protect the user pages while a DIO is in progress, we want
support from the VM that isn't presently implemented. To prevent a page
from being accessed by another thread, we have to unmap the TLB/PTE
entries and lock the page. There's a cost associated with t
> Basically speaking - there needs to be some sort of strategy for
> bypassing the ARC or even parts of the ARC for applications that
> may need to advise the filesystem of either:
> 1) the delicate nature of imposing additional buffering for their
> data flow
> 2) already well optimized applicatio
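As an aside, later ZFS releases added a per-dataset primarycache property that covers part of this request; it did not exist at the time of this thread, so the sketch below is only indicative:

    # 'metadata' keeps only metadata in the ARC for this dataset;
    # 'none' bypasses the ARC entirely, leaving caching to the application.
    zfs set primarycache=metadata tank/db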
On 1/23/07, Neal Pollack <[EMAIL PROTECTED]> wrote:
Hi: (Warning, new zfs user question)
I am setting up an X4500 for our small engineering site file server.
It's mostly for builds, images, doc archives, certain workspace
archives, misc
data.
...
Can someone provide an actual example
Hello,
There have been comparisons posted here (and in general out there on the net)
for various RAID levels and the chances of e.g. double failures. One problem
that is rarely addressed, though, is the various edge cases that significantly
impact the probability of loss of data.
In particular,
Ooh, they support it? Cool. I'll have to explore that option now.
However, I still really want eSATA.
On 1/23/07, Samuel Hexter <[EMAIL PROTECTED]> wrote:
We've got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks each)
running a 12TB zpool on snv54 and Areca's arcmsr driver. They're a
I believe the SmartArray is an LSI like the Dell PERC, isn't it?
Best Regards,
Jason
On 1/23/07, Robert Suh <[EMAIL PROTECTED]> wrote:
People trying to hack together systems might want to look
at the HP DL320s
http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475
-f79-3232017
Hi Neal,
We've been getting pretty good performance out of RAID-Z2 with 3x
6-disk RAID-Z2 stripes. More stripes mean better performance all
around...particularly on random reads. But for a file server that's
probably not a concern. With RAID-Z2 it seems to me two hot-spares are
quite sufficient, but I
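The layout described above would be created along these lines (device names are illustrative; an X4500 exposes 48 drives):

    # Three 6-disk raidz2 vdevs (4 data + 2 parity each) plus two hot spares.
    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        spare  c3t0d0 c3t1d0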
Roch
I've been chewing on this for a little while and had some thoughts
On Jan 15, 2007, at 12:02, Roch - PAE wrote:
Jonathan Edwards writes:
On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given filesystem.
Tomas Ögren wrote:
You know that this is a stripe over two 4-way mirrors, right?
Yes. Performance isn't really a concern for us in this setup;
persistence is. We want to be able to have access to files when disks
fail. We need to be able to handle up to three disk failures. The slice
layout
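For reference, a stripe over two 4-way mirrors as described would be built roughly like this (hypothetical device names); any three disk failures still leave at least one disk in each mirror:

    zpool create tank \
        mirror c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
        mirror c1t0d0 c1t1d0 c1t2d0 c1t3d0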
Frank Cusack wrote:
Yes, I am an experienced Solaris admin and know all about devfsadm :-)
and the older disks command.
It doesn't help in this case. I think it's a BIOS thing. Linux and
Windows can't see IDE drives that aren't there at boot time either,
and on Solaris the SATA controller runs
*snip snip*
> AFAIK
> only Adaptec and LSI Logic are making controllers
> today. With so few
> manufacturers it's a scary investment. (Of course,
> someone please
> correct me if you know of other players.)
There are a few others. Those are (of course) the major players (and with big
names like
Nicolas Williams wrote:
On Tue, Jan 23, 2007 at 04:49:38PM +, Darren J Moffat wrote:
Jeremy Teo wrote:
I'm defining "zpool split" as the ability to divide a pool into 2
separate pools, each with identical FSes. The typical use case would
be to split a N disk mirrored pool into a N-1 pool an
On Tue, Jan 23, 2007 at 04:49:38PM +, Darren J Moffat wrote:
> Jeremy Teo wrote:
> >I'm defining "zpool split" as the ability to divide a pool into 2
> >separate pools, each with identical FSes. The typical use case would
> >be to split a N disk mirrored pool into a N-1 pool and a 1 disk pool,
Hi: (Warning, new zfs user question)
I am setting up an X4500 for our small engineering site file server.
It's mostly for builds, images, doc archives, certain workspace
archives, misc
data.
I'd like a trade-off between space and safety of data. I have not set
up a large
ZFS system be
> While contemplating "zpool split" functionality, I
> wondered whether we
> really want such a feature because
>
> 1) SVM allows it and admins are used to it.
> or
> 2) We can't do what we want using zfs send |zfs recv
I don't think this is an either/or scenario. There are simply too many times
> For the "clone another system" zfs send/recv might be
> useful
Keeping in mind that you only want to send/recv one half of the ZFS mirror...
Rainer
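A minimal send/recv clone of one filesystem might look like this (host and dataset names are placeholders):

    zfs snapshot tank/ws@golden
    zfs send tank/ws@golden | ssh newhost zfs recv tank/ws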
People trying to hack together systems might want to look
at the HP DL320s
http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475
-f79-3232017.html
12 drive bays, Intel Woodcrest, SAS (and SATA) controller. If you snoop
around, you
might be able to find drive carriers on eBay o
On 22 January, 2007 - Peter Buckingham sent me these 5,2K bytes:
> $ zpool status
> pool: tank
> state: ONLINE
> scrub: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> tank ONLINE 0 0 0
> mirror ONLINE 0 0
Rob Logan <[EMAIL PROTECTED]> wrote:
> > FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk.
> > The MD21 is an ESDI to SCSI converter.
>
> yup... it's the board in the middle left of
> http://rob.com/sun/sun2/md21.jpg
If you are talking about the middle right, this
is an ACB-4000 series con
Jeremy Teo wrote:
I'm defining "zpool split" as the ability to divide a pool into 2
separate pools, each with identical FSes. The typical use case would
be to split a N disk mirrored pool into a N-1 pool and a 1 disk pool,
and then transport the 1 disk pool to another machine.
Can you pick anot
If you are talking from one host to another, snapshots should actually be
a usable solution. Many filesystems only see 3-10% churn per day, and
rsync with --inplace will get you delta data on snapshots that is
very similar to the actual block delta on the original server.
For an e
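A sketch of that pattern, with made-up paths: snapshot the destination first, then let rsync rewrite only the changed regions in place, so the snapshot's space usage tracks the real block churn.

    zfs snapshot backup/data@`date +%Y-%m-%d`
    rsync -a --inplace --delete sourcehost:/export/data/ /backup/data/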
> FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk.
> The MD21 is an ESDI to SCSI converter.
yup... it's the board in the middle left of
http://rob.com/sun/sun2/md21.jpg
Rob
Hello,
Disk capacity is between 70 and 100GB, and most of the time the disk space is
more than 90% full. Every day there is a full backup of the user data, and on
Friday for the system files. We keep the backup tapes for 30 days. So it's
impossible to make 30 snapshots. Scripting solutions like tar (
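One possible alternative (purely a sketch, with illustrative names): keep only a rolling pair of snapshots locally and archive the incremental streams off the host, at the cost of needing the whole chain to restore a given day.

    zfs snapshot tank/home@2007-01-24
    zfs send -i tank/home@2007-01-23 tank/home@2007-01-24 \
        | gzip > /backup/home-2007-01-24.zfs.gz
    zfs destroy tank/home@2007-01-23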
I'm defining "zpool split" as the ability to divide a pool into 2
separate pools, each with identical FSes. The typical use case would
be to split an N-disk mirrored pool into an N-1 pool and a 1-disk pool,
and then transport the 1 disk pool to another machine.
While contemplating "zpool split" fun
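In other words, hypothetical usage matching the semantics proposed here (no such command existed at the time):

    zpool split tank newtank    # peel one device off each mirror into 'newtank'
    # move the disk(s) to the other machine, then:
    zpool import newtank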
> Areca makes excellent PCI express cards - but probably have zero
> support in Solaris/OpenSolaris. I use them in both Windows and Linux.
> Works natively in FreeBSD too. They're the fastest cards on the market
> I believe still.
>
> However probably not very appropriate for this since it's a Sol
Hi Robert,
On Tue, Jan 23, 2007 at 02:42:33PM +0100, Robert Milkowski wrote:
> Tuesday, January 23, 2007, 1:48:50 PM, you wrote:
> CD> On Tue, Jan 23, 2007 at 12:07:34PM +0100, Robert Milkowski wrote:
>
> >> Of course the question is why use ZFS over DID?
>
> CD> Actually the question is probably
Hello Ceri,
Tuesday, January 23, 2007, 1:48:50 PM, you wrote:
CD> On Tue, Jan 23, 2007 at 12:07:34PM +0100, Robert Milkowski wrote:
>> Hello Zoram,
>>
>> Tuesday, January 23, 2007, 11:27:48 AM, you wrote:
>>
>> ZT> Hi Ceri,
>>
>> ZT> I just saw your mail today. I'm replying in case you haven't
On Tue, Jan 23, 2007 at 12:07:34PM +0100, Robert Milkowski wrote:
> Hello Zoram,
>
> Tuesday, January 23, 2007, 11:27:48 AM, you wrote:
>
> ZT> Hi Ceri,
>
> ZT> I just saw your mail today. I'm replying in case you haven't found a
> ZT> solution.
>
> ZT> This is
>
> ZT> 6475304 zfs core dumps
On Tue, Jan 23, 2007 at 03:57:48PM +0530, Zoram Thanga wrote:
> Hi Ceri,
>
> I just saw your mail today. I'm replying in case you haven't found a
> solution.
>
> This is
>
> 6475304 zfs core dumps when trying to create new spool using "did" device
>
> The workaround suggests:
>
> Set environm
On 1/23/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
For the "clone another system" zfs send/recv might be useful
Having support for this directly in flarcreate would be nice. It
would make differential flars very quick and efficient.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
mario heimel wrote:
This is a good point: the mirror loses all information about the zpool.
This is very important for the ZFS root pool. I don't know how often I have broken the
SVM mirror of the root disks to clone a system and bring the disk to another system, or to
use "live upgrade" and so on
Hello Zoram,
Tuesday, January 23, 2007, 11:27:48 AM, you wrote:
ZT> Hi Ceri,
ZT> I just saw your mail today. I'm replying in case you haven't found a
ZT> solution.
ZT> This is
ZT> 6475304 zfs core dumps when trying to create new spool using "did" device
ZT> The workaround suggests:
ZT> Set
Hi Ceri,
I just saw your mail today. I'm replying in case you haven't found a
solution.
This is
6475304 zfs core dumps when trying to create new spool using "did" device
The workaround suggests:
Set the environment variable
NOINUSE_CHECK=1
and the problem does not exist.
Thanks,
Zoram
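Putting the workaround together (pool name and DID device are illustrative):

    export NOINUSE_CHECK=1
    zpool create tank /dev/did/dsk/d3s2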