Bob Friesenhahn writes:
> On Wed, 29 Jul 2009, Jorgen Lundman wrote:
> >
> > For example, I know rsync and tar do not use fdsync (but dovecot does) on
> > its close(), but does NFS make it fdsync anyway?
>
> NFS is required to do synchronous writes. This is what allows NFS
> cli
"C. Bergström" writes:
> James C. McPherson wrote:
> > An introduction to btrfs, from somebody who used to work on ZFS:
> >
> > http://www.osnews.com/story/21920/A_Short_History_of_btrfs
> >
> *very* interesting article.. Not sure why James didn't directly link to
> it, but courteous of
Henk Langeveld writes:
> Mario Goebbels wrote:
> >>> An introduction to btrfs, from somebody who used to work on ZFS:
> >>>
> >>> http://www.osnews.com/story/21920/A_Short_History_of_btrfs
> >> *very* interesting article.. Not sure why James didn't directly link to
> >> it, but courteous o
Hi,
Now zpool status is referring to a device which does not even exist,
though everything else is working fine.
Since my initial posting, I have moved my data to a larger disk, so I mirrored the
rpool and removed the original disk. To make the system boot again, I also
booted from CD, removed
> I'm currently trying to decide between a MB with that chipset and
> another that uses the nVidia 780a and nf200 south bridge.
>
> Is the nVidia SATA controller well supported? (in AHCI mode?)
Be careful with nVidia if you want to use Samsung SATA disks.
There is a problem with the disk freezing
Thanks for your input, it's good to read that not all are too positive. I will do
a lot more testing before I make the final choice.
I have never tested more than 3-5 VMs on sata raids, but we use 40x sata with
great results on our backup box, but then it's only 1 server.
Does anybody have some numbers on speed on sata vs 15k sas? Is it really a big difference?
>does anybody have some numbers on speed on sata vs 15k sas?
The next chance I get, I will do a comparison.
>Is it really a big difference?
I noticed a huge improvement when I moved a virtualized pool
off a series of 7200 RPM SATA discs to even 10k SAS drives.
Night and day...
jlc
On 04/08/2009, at 9:42 PM, Joseph L. Casale wrote:
I noticed a huge improvement when I moved a virtualized pool
off a series of 7200 RPM SATA discs to even 10k SAS drives.
Night and day...
What I would really like to know is if it makes a big difference
comparing say 7200RPM drives in mirro
Dear all,
I recently started another scrub, and so far the results look like this:
sh-3.2# zpool status -v
pool: z
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if po
Volker A. Brandt wrote:
I'm currently trying to decide between a MB with that chipset and
another that uses the nVidia 780a and nf200 south bridge.
Is the nVidia SATA controller well supported? (in AHCI mode?)
Be careful with nVidia if you want to use Samsung SATA disks.
There is a proble
Le 4 août 09 à 13:42, Joseph L. Casale a écrit :
does anybody have some numbers on speed on sata vs 15k sas?
The next chance I get, I will do a comparison.
Is it really a big difference?
I noticed a huge improvement when I moved a virtualized pool
off a series of 7200 RPM SATA discs to ev
Le 19 juil. 09 à 16:47, Bob Friesenhahn a écrit :
On Sun, 19 Jul 2009, Ross wrote:
The success of any ZFS implementation is *very* dependent on the
hardware you choose to run it on.
To clarify:
"The success of any filesystem implementation is *very* dependent on
the hardware you choose
I seem to have run into an issue with a pool I have, and haven't found a
resolution yet. The box is currently running FreeBSD 7-STABLE with ZFS v13,
(Open)Solaris doesn't support my raid controller.
In short: I moved all data off a pool and destroyed it. Then I added a single
slice to each dri
Le 26 juil. 09 à 01:34, Toby Thain a écrit :
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
Try
zpool import 2169223940234886392 [storage1]
-r
Le 4 août 09 à 15:11, David a écrit :
I seem to have run into an issue with a pool I have, and haven't
found a resolution yet. The box is currently running FreeBSD 7-
STABLE with ZFS v13, (Open)Solaris doesn't support my raid controller.
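(For reference on that suggestion, a minimal sketch; the long number is the pool's numeric GUID, and "storage1" stands in for whatever new name you choose:)
  zpool import                                (lists pools available for import, with their GUIDs)
  zpool import 2169223940234886392 storage1   (import by GUID, optionally under a new name)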
> My testing has shown some serious problems with the
> iSCSI implementation for OpenSolaris.
>
> I setup a VMware vSphere 4 box with RAID 10
> direct-attached storage and 3 virtual machines:
> - OpenSolaris 2009.06 (snv_111b) running 64-bit
> - CentOS 5.3 x64 (ran yum update)
> - Ubuntu Server 9.
On Aug 4, 2009, at 7:26 AM, Joachim Sandvik
wrote:
does anybody have some numbers on speed on sata vs 15k sas? Is it
really a big difference?
For random io the number of IOPS is 1000/(mean access + avg rotational
latency) (in ms)
Avg rotational latency is 1/2 the rotational late
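(A worked example with typical ballpark figures, which are assumptions rather than measurements: a 7200 RPM SATA disk rotates once every ~8.3 ms, so average rotational latency is ~4.2 ms; with a ~8.5 ms mean seek that gives 1000/(8.5 + 4.2), or roughly 79 IOPS. A 15k SAS disk has ~2 ms rotational latency and a ~3.5 ms mean seek, giving 1000/(3.5 + 2), or roughly 180 IOPS, about 2-2.5x per spindle.)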
On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais
wrote:
>
> Le 4 août 09 à 13:42, Joseph L. Casale a écrit :
>
> does anybody have some numbers on speed on sata vs 15k sas?
>>>
>>
>> The next chance I get, I will do a comparison.
>>
>> Is it really a big difference?
>>>
>>
>> I noticed a huge im
> Try
>
> zpool import 2169223940234886392 [storage1]
>
> -r
>
> Le 4 août 09 à 15:11, David a écrit :
>
Thanks for the suggestion, but that only gives me a 4 drive vdev with no
data/filesystems.
amnesiac# zpool import 2169223940234886392
amnesiac# zpool list
NAME SIZE USED AVAIL
On Aug 4, 2009, at 7:01 AM, Ross Walker wrote:
On Aug 4, 2009, at 7:26 AM, Joachim Sandvik wrote:
does anybody have some numbers on speed on sata vs 15k sas? Is it
really a big difference?
For random io the number of IOPS is 1000/(mean access + avg
rotational lat
You're running into the same problem I had with 2009.06, as they have
"corrected" a bug where the iSCSI target prior to 2009.06 didn't completely
honor SCSI sync commands issued by the initiator.
Some background :
Discussion:
http://opensolaris.org/jive/thread.jspa?messageID=388492
"correcte
On Tue, Aug 4, 2009 at 10:33 AM, Richard Elling wrote:
> On Aug 4, 2009, at 7:01 AM, Ross Walker wrote:
>>
>> On Aug 4, 2009, at 7:26 AM, Joachim Sandvik
>> wrote:
>>
>>>
>>> does anybody have some numbers on speed on sata vs 15k sas? Is it really
>>> a big difference?
>>
>> For random io the numb
Are there any improvements in the Solaris 10 pipeline for how
compression is implemented?
I changed my USB-based backup pool to use gzip compression (with
default level 6) rather than the lzjb compression which was used
before. When lzjb compression was used, it would cause the X11 session
to
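(For context, a minimal sketch of the property change being described, assuming the backup pool is simply named "backup"; gzip with the default level is equivalent to gzip-6:)
  zfs set compression=gzip-6 backup
  zfs set compression=lzjb backup      (to switch back)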
>>If by 'huge' you mean much more than 10K/7.2K in the data path with otherwise
>>same number of spindles, then
>>that has got to be because of something not specified here.
>>
>
>No it doesn't. The response time on 10k drives is night and day better than
>7.2k drives. VMware workloads look exa
On Tue, Aug 4, 2009 at 10:40 AM, erik.ableson wrote:
> You're running into the same problem I had with 2009.06 as they have
> "corrected" a bug where the iSCSI target prior to
> 2009.06 didn't honor completely SCSI sync commands issued by the initiator.
> Some background :
> Discussion:
> http://op
Tim Cook writes:
> On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais
> wrote:
>
> >
> > Le 4 août 09 à 13:42, Joseph L. Casale a écrit :
> >
> > does anybody have some numbers on speed on sata vs 15k sas?
> >>>
> >>
> >> The next chance I get, I will do a comparison.
> >>
> >> Is it
On Tue, Aug 4, 2009 at 9:57 AM, Charles Baker wrote:
>> My testing has shown some serious problems with the
>> iSCSI implementation for OpenSolaris.
>>
>> I setup a VMware vSphere 4 box with RAID 10
>> direct-attached storage and 3 virtual machines:
>> - OpenSolaris 2009.06 (snv_111b) running 64-bi
This has been a very enlightening thread for me, and explains a lot of the
performance data I have collected on both 2008.11 and 2009.06 which mirrors the
experiences here. Thanks to you all.
NFS perf tuning, here I come...
-Scott
On Tue, Aug 4, 2009 at 11:21 AM, Ross Walker wrote:
> On Tue, Aug 4, 2009 at 9:57 AM, Charles Baker wrote:
>>> My testing has shown some serious problems with the
>>> iSCSI implementation for OpenSolaris.
>>>
>>> I setup a VMware vSphere 4 box with RAID 10
>>> direct-attached storage and 3 virtual
On 4-Aug-09, at 9:28 AM, Roch Bourbonnais wrote:
Le 26 juil. 09 à 01:34, Toby Thain a écrit :
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure /
record is
underneath the corrupted data in
I have a zpool that has been plagued due to physical disk failures.
The zpool consists of two raidz2s. There are a few disks that have
been removed from the zpool due to failure, and otherwise, the data on
failing disks has been copied to new media and the few (<10 per disk)
blocks that wer
Hi
I'm running an application which uses hot-plug SATA drives like removable
USB keys, only bigger and with SATA performance.
I'm using “cfgadm connect” then “configure” then “zpool import” to bring a
drive on-line and export / unconfigure / disconnect before unplugging. All
works well.
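(A minimal sketch of that attach/detach sequence; the attachment point "sata1/3" and pool name "carry" are hypothetical:)
  cfgadm -c connect sata1/3
  cfgadm -c configure sata1/3
  zpool import carry
  ... use the pool ...
  zpool export carry
  cfgadm -c unconfigure sata1/3
  cfgadm -c disconnect sata1/3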
On Tue, 4 Aug 2009, Ross Walker wrote:
But this MUST happen. If it doesn't then you are playing Russian
Roulette with your data, as a kernel panic can cause a loss of up to
1/8 of the size of your system's RAM (ZFS lazy write cache) of your
iSCSI target's data!
The actual risk (with recent zfs
You seem to be hitting :
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537
The fix is available in OpenSolaris build 115 and later, but not for Solaris 10 yet.
--
Prabahar.
On Tue, Aug 04, 2009 at 10:08:37AM -0500, Bob Friesenhahn wrote:
> Are there any improvements in the Solaris 1
What version of Solaris / OpenSolaris are you running there? I remember zfs
commands locking up being a big problem a while ago, but I thought they'd
managed to solve issues like this.
On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:
You seem to be hitting :
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537
The fix is available in OpenSolaris build 115 and later, but not for Solaris 10 yet.
It is interesting that this is a simple thread priority issue. The
system
Hi Bob,
Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:
You seem to be hitting :
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537
The fix is available in OpenSolaris build 115 and later, but not for
Solaris 10 yet.
It is interesting that this is a simple th
On Aug 4, 2009, at 8:01 AM, Ross Walker wrote:
On Tue, Aug 4, 2009 at 10:33 AM, Richard Elling wrote:
On Aug 4, 2009, at 7:01 AM, Ross Walker wrote:
On Aug 4, 2009, at 7:26 AM, Joachim Sandvik
wrote:
does anybody have some numbers on speed on sata vs 15k sas? Is it
really
a big diffe
On Tue, Aug 04, 2009 at 01:01:40PM -0500, Bob Friesenhahn wrote:
> On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:
>
>> You seem to be hitting :
>>
>> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537
>>
>> The fix is available in OpenSolaris build 115 and later not for Solaris 10
>>
Apologies - I'm daft for not saying originally: OpenSolaris 2009.06 on x86
Cheers
Chris
what exact type of sata controller do you use?
Hi All,
I am about to setup a personal data server on some decent hardware (1u
SuperServer, Xeon, LSI SAS controller, SAS backplane). Well at least,
it's decent hardware to me. :)
After reading Richard's blog post, I'm still a little unsure how to
proceed.
Details:
- I have 8 drives to
It's a generic Sil3132-based PCIe x1 card using the si3124 driver.
Prior to this I had been using an Intel ICH10R with AHCI, but I have found the
Sil3132 actually hot-plugs a little more smoothly than the Intel chipset. I have not
gone back to recheck this specific problem on the ICH10R (though I can), I
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get found.
So, I looked up wh
On Tue, 4 Aug 2009, Adam Sherman wrote:
4. Use a CompactFlash card (the board has a slot) for root, 8 drives in
raidz2 tank, backup the root regularly
If booting/running from CompactFlash works, then I like this one.
Backing up root should be trivial since you can back it up into your
big sto
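(A minimal sketch of one way to do that root backup, assuming the root pool is "rpool" and the data pool is "tank"; the dataset names are made up:)
  zfs create tank/rpool-backup
  zfs snapshot -r rpool@backup
  zfs send -R rpool@backup | zfs receive -d tank/rpool-backup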
On 4-Aug-09, at 16:08 , Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Adam Sherman wrote:
4. Use a CompactFlash card (the board has a slot) for root, 8
drives in raidz2 tank, backup the root regularly
If booting/running from CompactFlash works, then I like this one.
Backing up root should be t
I'd create a mirror for rpool and put the rest in another pool using raidz2.
Another note: have you bought disks already? You may want to take a look at
2.5" SAS disks from Seagate, as they are enterprise-grade with different
firmware for better error recovery. I know the SAS backplane is picky
some
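(A minimal sketch of that 2 + 6 layout with eight drives; the device names are hypothetical, and the rpool mirror would normally be set up by the installer:)
  zpool create rpool mirror c0t0d0s0 c0t1d0s0
  zpool create tank raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0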
Adam Sherman wrote:
On 4-Aug-09, at 16:08 , Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Adam Sherman wrote:
4. Use a CompactFlash card (the board has a slot) for root, 8 drives
in raidz2 tank, backup the root regularly
If booting/running from CompactFlash works, then I like this one.
Backing
On Aug 4, 2009, at 1:35 PM, Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
But this MUST happen. If it doesn't then you are playing Russian
Roulette with your data, as a kernel panic can cause a loss of up to
1/8 of the size of your system's RAM (ZFS lazy write cache) of your
On 4-Aug-09, at 16:18 , Chris Du wrote:
Another note, have you bought disks already? You may want to take a
look at 2.5" SAS disks from Seagate as they are enterprise grade
with different firmware for better error recovery. I know the SAS
backplane is picky sometimes. You may see disks disco
On Aug 4, 2009, at 2:11 PM, Richard Elling
wrote:
On Aug 4, 2009, at 8:01 AM, Ross Walker wrote:
On Tue, Aug 4, 2009 at 10:33 AM, Richard Elling wrote:
On Aug 4, 2009, at 7:01 AM, Ross Walker wrote:
For random io the number of IOPS is 1000/(mean access + avg
rotational
latency) (in ms
Hi Will,
It looks to me like you are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6664649
This is fixed in Nevada and a fix will also be available in an
upcoming Solaris 10 release.
This doesn't help you now, unfortunately.
I don't think this ghost of a de
Yes, Constellation; they also have a SATA version. CA$350 is way too high. It's
CA$280 for SAS and CA$235 for SATA, 500GB in Vancouver.
If you already have the disks, then forget about it.
On Tue, Aug 4, 2009 at 19:05, wrote:
> Hi Will,
>
> It looks to me like you are running into this bug:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6664649
>
> This is fixed in Nevada and a fix will also be available in an
> upcoming Solaris 10 release.
That looks like exactly th
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k sequential
write). It's a Dell PERC 6/e with 512MB onboard.
...
there, dedicated slog device with NVRAM speed. It would be even better
to have a pair of SSDs behind the NVRAM, but it's hard to find
compatible SSDs for
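(For reference, a minimal sketch of attaching such a dedicated mirrored log device to a pool, with hypothetical device names:)
  zpool add tank log mirror c2t0d0 c2t1d0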
We ran into something similar with controllers changing after an x4500 to
x4540 upgrade.
In our case the spares were in a separate data pool, so the recovery
procedure we developed was relatively easy to implement as long as downtime
could be scheduled.
You may be able to tweak the procedure to
What shall I do? My server does not support SSD. Go back to using 0811?
On 05/08/2009, at 10:36 AM, Carson Gaspar wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support
recently.
Yep, it's a MegaRAID device.
I have been using one with a Samsung SSD in RAID0 mode (to avail
myself of the cache) recently with great success.
cheers,
James
> With RAID-Z, stripes can be of variable width, meaning that, say, a
> single row in a 4+2 configuration might have two stripes of 1+2.
> In other words, there might not be enough space in the new parity device.
Wow -- I totally missed that scenario. Excellent point.
> I did write up the
> s
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k sequential
write). It's a Dell PERC 6/e with 512MB onboard.
...
there, dedicated slog device with NVRAM speed. It would be even
better to have a pair of SSDs behind
On Aug 4, 2009, at 9:18 PM, James Lever wrote:
On 05/08/2009, at 10:36 AM, Carson Gaspar wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support
recently.
Yep, it's a mega raid device.
I have been using one with a Samsung SSD in RAID0 mode (to avail
myself of the cache)
On 05/08/2009, at 11:36 AM, Ross Walker wrote:
Which model?
PERC 6/E w/512MB BBWC.
On Aug 4, 2009, at 9:37 PM, James Lever wrote:
On 05/08/2009, at 11:36 AM, Ross Walker wrote:
Which model?
PERC 6/E w/512MB BBWC.
Really?
You know, I tried flashing mine with LSI's firmware, and while it seemed
to take, it still didn't recognize my Mtrons.
What is your recipe for these
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support recently.
Yes, but the LSI support of SSDs is on later controllers.
Please cite your source for that statement.
The PERC 6/e is an LSI 1078. The LSI web sit
Is there a way to change the device name used to create a zpool?
My customer created their pool with physical device names rather than
the emc powerpath virtual names.
They have data on there already, so they would like to preserve it.
My experience with zpool replace is that it copies data ove
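(One approach that is sometimes suggested for rebinding a pool to different device nodes, sketched here as an assumption rather than a tested recipe; "mypool" is a placeholder, and the -d directory should be wherever the powerpath emcpower nodes live:)
  zpool export mypool
  zpool import -d /dev/dsk mypool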
Does anyone know when Solaris 10 will have the bits to allow removal of
vdevs from a pool to shrink the storage?
Thanks,
Brian
On 05/08/2009, at 11:41 AM, Ross Walker wrote:
What is your recipe for these?
There wasn't one! ;)
The drive I'm using is a Dell badged Samsung MCCOE50G5MPQ-0VAD3.
cheers,
James
On Aug 4, 2009, at 9:55 PM, Carson Gaspar wrote:
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support
recently.
Yes, but the LSI support of SSDs is on later controllers.
Please cite your source for that stat
On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed pushed
closer to the disks, but there may be considerably more latency associated
with getting that data into the controller NVRAM cache than there is into a
dedicated slog SSD.
I don't see
On Aug 4, 2009, at 10:17 PM, James Lever wrote:
On 05/08/2009, at 11:41 AM, Ross Walker wrote:
What is your recipe for these?
There wasn't one! ;)
The drive I'm using is a Dell badged Samsung MCCOE50G5MPQ-0VAD3.
So the key is the drive needs to have the Dell badging to work?
I called m
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed
pushed closer to the disks, but there may be considerably more
latency associated with getting that data into the controller
NVRAM ca
Ok - in an attempt to weasel my way past the issue I mirrored my problematic
si3124 drive to a second drive on the ICH10R, started writing to the file
system and then killed the power to the si3124 removable drive.
To my (unfortunate) surprise, the IO stream that was writing to the mirrored
fil
Hi All,
You know, ZFS provides a very big buffer for write I/O.
So, when we write a file, the first stage is to put it in the buffer.
But what if the file is VERY short-lived? Does it still cause I/O to disk?
Or does it just put the metadata and data in memory and then remove them?
I boot from CompactFlash. It's not a big deal if you mirror it because you
shouldn't be booting up very often. Also, they make these great
CompactFlash-to-SATA adapters, so if your motherboard has 2 open SATA ports
then you'll be golden there.
On Tue, Aug 4, 2009 at 7:46 PM, Chris Du wrote:
> Y
Whether ZFS properly detects device removal depends to a large extent on the
device drivers for the controller. I personally have stuck to using
controllers with chipsets I know Sun use on their own servers, but even then
I've been bitten by similar problems to yours on the AOC-SAT2-MV8 cards.
So much for the "it's a consumer hardware problem" argument.
I for one have to count it as a major drawback of ZFS that it doesn't provide
a mechanism to get something of your pool back, by way of reconstruction or
reversion, if a failure occurs where there is a metadata inconsistency.
Chris,
Can you please check the failmode property of the pool ?
-- zpool get failmode
If it is set to "wait", you could try setting it to "continue".
Regards,
Sanjeev
On Tue, Aug 04, 2009 at 08:56:03PM -0700, Chris Baker wrote:
> Ok - in an attempt to weasel my way past the issue I mirrored my
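(A minimal example of checking and changing the failmode property mentioned above, assuming the pool is named "tank":)
  zpool get failmode tank
  zpool set failmode=continue tank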
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k sequential
write). It's a Dell PERC 6/e with 512MB onboard.
...
there, dedicated slog device with NVRAM speed. It would be even
better to have a