Hmmm,
wondering about IMHO strange ZFS results ...
X4440: 4x6 2.8GHz cores (Opteron 8439 SE), 64 GB RAM
6x Sun STK RAID INT V1.0 (Hitachi H103012SCSUN146G SAS)
Nevada b124
Started with a simple test using zfs on c1t0d0s0: cd /var/tmp
(1) time sh -c 'mkfile 32g bla ; sync'
0.16
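A quick way to watch what the pool is actually doing while such a test runs (just a sketch; it assumes the filesystem behind /var/tmp lives in a pool named rpool):
# zpool iostat -v rpool 5
# iostat -xnz 5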
On Oct 22, 2009, at 4:18, Ian Allison wrote:
Hi,
I've been looking at a raidz using opensolaris snv_111b and I've
come across something I don't quite understand. I have 5 disks
(fixed size disk images defined in virtualbox) in a raidz
configuration, with 1 disk marked as a spare. The dis
Bob Friesenhahn <...@simple.dallas.tx.us> writes:

>
> The Intel specified random write IOPS are with the cache enabled and
> without cache flushing.
For random write I/O, caching improves I/O latency, not sustained I/O
throughput (which is what random write IOPS usually refer to). So Intel can't
ch
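A made-up worked example of that distinction (my numbers, not Intel's): by Little's Law, sustained IOPS is roughly outstanding I/Os divided by average service time. With a 0.25 ms average service time, a single synchronous writer sees 1 / 0.00025 s = 4,000 IOPS, while 32 outstanding writes could sustain about 32 / 0.00025 s = 128,000 IOPS. A write cache hides latency from the application, but it doesn't change the sustained rate the media can absorb.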
On Wed, Oct 21, 2009 at 9:15 PM, Jake Caferilla wrote:
> Clearly a lot of people don't understand latency, so I'll talk about
> latency, breaking it down in simpler components.
>
> Sometimes it helps to use made up numbers, to simplify a point.
>
> Imagine a non-real system that had these 'ridicu
Now let's talk about the 'latency deniers'.
First of all, they say there is no standard measurement of latency.
That isn't complicated. Sun includes the transfer time in latency figures;
other companies do not.
Then latency deniers say there is no way to compare the numbers. That's what
I'm
Clearly a lot of people don't understand latency, so I'll talk about latency,
breaking it down in simpler components.
Sometimes it helps to use made up numbers, to simplify a point.
Imagine a non-real system that had these 'ridiculous' performance
characteristics:
The system has a 60 second (1
have a look at this thread:-
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-September/032349.html
we discussed this a while back.
SHOUJIN WANG wrote:
Hi there,
What I am trying to do is: Build a NAS storage server based on the following hardware architecture:
Server-->SAS HBA--->
On Oct 21, 2009, at 5:18 PM, Ian Allison wrote:
Hi,
I've been looking at a raidz using opensolaris snv_111b and I've
come across something I don't quite understand. I have 5 disks
(fixed size disk images defined in virtualbox) in a raidz
configuration, with 1 disk marked as a spare. The d
Erik Trimble wrote:
> As always, the devil is in the details. In this case,
> the primary
> problem I'm having is maintaining two different block
> mapping schemes
> (one for the old disk layout, and one for the new
> disk layout) and still
> being able to interrupt the expansion process. My
>
Hi,
I've been looking at a raidz using opensolaris snv_111b and I've come
across something I don't quite understand. I have 5 disks (fixed size
disk images defined in virtualbox) in a raidz configuration, with 1 disk
marked as a spare. The disks are 100m in size and I wanted simulate data
cor
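For reference, a minimal sketch of that kind of setup using file-backed vdevs instead of VirtualBox images (hypothetical names):
# mkfile 100m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5
# zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 spare /var/tmp/d5
# dd if=/dev/urandom of=/var/tmp/d2 bs=1024k count=5 seek=10 conv=notrunc
# zpool scrub testpool
# zpool status -v testpool
The dd deliberately damages the middle of one vdev; the scrub should then report checksum errors on it and repair them from parity.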
Hi there,
What I am trying to do is: Build a NAS storage server based on the following
hardware architecture:
Server-->SAS HBA--->SAS JBOD
I plugin 2 SAS HBA cards into a X86 box, I also have 2 SAS I/O Modules on SAS
JBOD. From each HBA card, I have one SAS cable which connects to SAS JBOD.
Confi
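If the point of the dual HBAs and dual I/O modules is multipathing, a minimal sketch on Solaris (assuming MPxIO rather than any vendor failover driver):
# stmsboot -e
# mpathadm list lu
The first command enables MPxIO (it will ask to reboot); after that each LUN listed by mpathadm should show two operational paths instead of appearing twice as separate disks.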
Hi Stacy,
Can you try to forcibly create a new pool using the devices from
the corrupted pool, like this:
# zpool create -f newpool disk1 disk2 ...
Then, destroy this pool, which will release the devices.
This CR has been filed to help resolve the pool cruft problem:
6893282 Allow the zpool c
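For example (hypothetical device names; substitute the ones from the corrupted pool):
# zpool create -f newpool c2t1d0 c2t2d0 c2t3d0
# zpool destroy newpool
After the destroy, the disks should be free to relabel and reuse.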
On Oct 21, 2009, at 6:14 AM, Dupuy, Robert wrote:
" This is one of the skimpiest specification sheets that I have ever
seen for an enterprise product."
At least it shows the latency.
STORAGEsearch has been trying to wade through the spec muck
for years.
http://www.storagesearch.com/ssd-fastes
Stacy Maydew wrote:
I have an exported zpool that had several drives incur errors at the same time
and as a result became unusable. The pool was exported at the time the drives
had problems and now I can't find a way to either delete or import the pool.
I've tried relabeling the disks and usi
Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but not
in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab?
btw, thank you for responding so quickly to this.
Mark
On Wed, Oct 21, 2009 at 12:49 PM, Enda O'Connor wrote:
> Mark Horstman wrote:
>
>> Then why the wa
Then why the warning on the lucreate? It hasn't done that in the past.
Mark
On Oct 21, 2009, at 12:41 PM, "Enda O'Connor"
wrote:
Hi,
This will boot ok in my opinion, not seeing any issues there.
Enda
Mark Horstman wrote:
more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
# cat
" This is one of the skimpiest specification sheets that I have ever
seen for an enterprise product."
At least it shows the latency.
This is some kind of technology cult I've wandered into.
I won't respond further.
-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dall
I've already explained how you can scale up IOPS numbers, and unless that is
your real workload, you won't see that in practice.
See, running a high # of parallel jobs spread evenly across.
I don't find the conversation genuine, so I'm not going to continue it.
-Original Message-
From: Richar
My take on the responses I've received over the last few days is that the
discussion isn't genuine.
From: Tim Cook [mailto:t...@cook.ms]
Sent: 2009-10-20 20:57
To: Dupuy, Robert
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Sun Flash Accelerator F20
On Tue,
I have an exported zpool that had several drives incur errors at the same time
and as a result became unusable. The pool was exported at the time the drives
had problems and now I can't find a way to either delete or import the pool.
I've tried relabeling the disks and using dd to write several
On Tue, 20 Oct 2009, Frédéric VANNIERE wrote:
> You can't use the Intel X25-E because it has a 32 or 64 MB volatile cache
> that can be neither disabled nor flushed by ZFS.
Say what? My understanding is that the officially supported Sun SSD for the
x4540 is an OEM'd Intel X25-E, so I don'
I've had a case open for a while (SR #66210171) regarding the inability to
import a pool whose log device failed while the pool was off line.
I was told this was CR #6343667, which was supposedly fixed in patches
141444-09/141445-09. However, I recently upgraded a system to U8 which
includes that
Hi
Yes sorry remove that line from vfstab in the new BE
Enda
Mark wrote:
Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but
not in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab?
btw, thank you for responding so quickly to this.
Mark
On Wed, Oct 21, 2009 a
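For anyone following along, the whole fix is roughly (assuming the ABE is named foobar):
# lumount foobar /mnt
# vi /mnt/etc/vfstab      (delete the bogus '/' entry)
# luumount foobar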
On Wed, October 21, 2009 12:53, Bob Friesenhahn wrote:
> On Wed, 21 Oct 2009, David Dyer-Bennet wrote:
>>>
>>> Device performance should be specified as a minimum assured level of
>>> performance and not as meaningless "peak" ("up to") values. I repeat:
>>> peak values are meaningless.
>>
>> Seem
On Wed, 21 Oct 2009, David Dyer-Bennet wrote:
Device performance should be specified as a minimum assured level of
performance and not as meaningless "peak" ("up to") values. I repeat:
peak values are meaningless.
Seems a little pessimistic to me. Certainly minimum assured values are
the bas
Mark Horstman wrote:
Then why the warning on the lucreate? It hasn't done that in the past.
This is from the vfstab processing code in ludo.c; in your case it's not
causing any issue, but it shall be fixed.
Enda
Mark
On Oct 21, 2009, at 12:41 PM, "Enda O'Connor" wrote:
Hi,
This will boot ok in
Hi,
This will boot ok in my opinion, not seeing any issues there.
Enda
Mark Horstman wrote:
more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
# cat /mnt/etc/vfstab
#live-upgrade: updated boot environment
#device         device          mount           FS      fsck    mount   mount
On Wed, October 21, 2009 12:21, Bob Friesenhahn wrote:
>
> Device performance should be specified as a minimum assured level of
> performance and not as meaningless "peak" ("up to") values. I repeat:
> peak values are meaningless.
Seems a little pessimistic to me. Certainly minimum assured val
On Wed, 21 Oct 2009, Marc Bevand wrote:
Bob Friesenhahn <...@simple.dallas.tx.us> writes:
[...]
X25-E's write cache is volatile), the X25-E has been found to offer a
bit more than 1000 write IOPS.
I think this is incorrect. On paper, the X25-E offers 3300 random write
4 kB IOPS (and Intel is kn
On Oct 20, 2009, at 10:24 PM, Frédéric VANNIERE wrote:
The ZIL is a write-only log that is only read after a power failure.
Several GB is large enough for most workloads.
You can't use the Intel X25-E because it has a 32 or 64 MB volatile
cache that can be neither disabled nor flushed by ZFS.
Hi Stuart,
I ran various forms of the zdb command to see if I could glean
the metadata accounting stuff but it is beyond my mere mortal
skills.
Maybe someone else can provide the right syntax.
Cindy
On 10/20/09 10:17, Stuart Anderson wrote:
Cindy,
Thanks for the pointer. Until this is re
Hi Matthew,
You can use various forms of fmdump to decode this output.
It might be easier to use fmdump -eV and look for the
device info in the vdev path entry, like the one below.
Also see if the errors on these vdevs are reported in
your zpool status output.
Thanks,
Cindy
# fmdump -eV | mor
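For example, something like this pulls out just the device paths named in the error reports (assuming the ereports carry a vdev_path member, as ZFS I/O ereports normally do):
# fmdump -eV | grep vdev_path | sort | uniq -c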
more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
# cat /mnt/etc/vfstab
#live-upgrade: updated boot environment
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd -
More input:
# cat /etc/lu/ICF.1
sol10u8:-:/dev/zvol/dsk/rpool/swap:swap:67108864
sol10u8:/:rpool/ROOT/sol10u8:zfs:0
sol10u8:/appl:pool00/global/appl:zfs:0
sol10u8:/home:pool00/global/home:zfs:0
sol10u8:/rpool:rpool:zfs:0
sol10u8:/install:pool00/shared/install:zfs:0
sol10u8:/opt/local:pool00/shared
I have several of these messages from fmdump:
fmdump -v -u 98abae95-8053-4cdc-d91a-dad89b125db4~
TIME                 UUID                                 SUNW-MSG-ID
Sep 18 00:45:23.7621 98abae95-8053-4cdc-d91a-dad89b125db4 ZFS-8000-FD
  100%  fault.fs.zfs.vdev.io
Proble
Ed, your comment:
>If solaris is able to install at all, I would have to acknowledge, I
>have to shutdown anytime I need to change the Perc configuration, including
>replacing failed disks.
Replacing failed disks is easy when PERC is doing the RAID. Just remove the
failed drive and replace with
Thanks Frédéric, that is a very interesting read.
So my options as I see them now:
1. Keep the X25-E and disable the cache. Performance should still be improved,
but not by a *whole* lot, right? I will google for an expectation, but if
anyone knows off the top of their head, I would be app
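For option 1, the usual way to turn the write cache off from Solaris is format's expert mode (a sketch; whether the setting sticks can depend on the controller sitting in front of the SSD):
# format -e
(select the X25-E, then: cache -> write_cache -> disable)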
Mark Horstman wrote:
I don't see anything wrong with my /etc/vfstab. Until I get this resolved, I'm
afraid to patch and use the new BE.
It's the vfstab file in the newly created ABE that is written "wrongly".
Try mounting this new ABE and check it out for yourself.
--
Dick Hoogendijk -- PGP
Hi
This looks ok to me; the message is not an indicator of an issue.
could you post
cat /etc/lu/ICF.1
cat /etc/lu/ICF.2 (the foobar BE)
also lumount foobar /a
and cat /a/etc/vfstab
Enda
Mark Horstman wrote:
I'm seeing the same lucreate error on my fresh SPARC sol10u8 install
(and my SPARC
Neither the virgin SPARC sol10u8 nor the (up-to-date) patched SPARC sol10u7
has any local zones.
My goal is to have a big, fast, HA filer that holds nearly everything for a
bunch of development services, each running in its own Solaris zone. So when I
need a new service, test box, etc., I provision a new zone and hand it to the
dev requesters and they load their stuff on it and go.
Ea
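A rough sketch of that provisioning step (hypothetical zone and dataset names; an 'add dataset' stanza in zonecfg would additionally delegate the zone's own ZFS dataset to it):
# zfs create pool00/zones/devsvc1
# zonecfg -z devsvc1 'create; set zonepath=/zones/devsvc1'
# zoneadm -z devsvc1 install
# zoneadm -z devsvc1 boot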
I'm seeing the same lucreate error on my fresh SPARC sol10u8 install
(and my SPARC sol10u7 machine that I keep patched up to date), but I don't have a
separate /var:
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
pool00  3.36G   532G    20K  none
pool00/global
> Thanks Ed. It sounds like you have run in this mode? No issues with
> the perc?
> >
> > You can JBOD with the perc. It might be technically a raid0 or
> > raid1 with a
> > single disk in it, but that would be functionally equivalent to JBOD.
The only time I did this was ...
I have a Windows se
Please don't feed the troll.
:)
-brian
On Wed, Oct 21, 2009 at 06:32:42AM -0700, Robert Dupuy wrote:
> There is a debate tactic known as complex argument, where so many false and
> misleading statements are made at once, that it overwhelms the respondent.
>
> I'm just going to respond this way
There is a debate tactic known as complex argument, where so many false and
misleading statements are made at once, that it overwhelms the respondent.
I'm just going to respond this way.
I am very disappointed in this discussion group. The response is not genuine.
The idea that latency is not
Thanks Ed. It sounds like you have run in this mode? No issues with
the perc?
--
Scott Meilicke
On Oct 20, 2009, at 9:59 PM, "Edward Ned Harvey"
wrote:
System:
Dell 2950
16G RAM
16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no
extra drive slots, a single zpool.
snv_124
Thank you Bob and Richard. I will go with A, as it also keeps things simple.
One physical device per pool.
-Scott
On 10/20/09 6:46 PM, "Bob Friesenhahn" wrote:
> On Tue, 20 Oct 2009, Richard Elling wrote:
>>
>> The ZIL device will never require more space than RAM.
>> In other words, if you o
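For completeness, attaching such a log device is a one-liner (hypothetical pool and device names):
# zpool add tank log c3t0d0
or, mirrored to protect against a slog failure:
# zpool add tank log mirror c3t0d0 c4t0d0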
I've been trying to roll back a snapshot but seem to be unable to do so. Can
anyone shed some light on what I may be doing wrong?
I'm trying to roll back from thumperpool/m...@200908271200 to
thumperpool/m...@200908270100.
344 r...@thumper1:~> zfs list -t snapshot | tail
thumperpool/m...@2009082617
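One likely cause: zfs rollback only goes back to the most recent snapshot unless -r is given, which destroys the snapshots in between, e.g. (hypothetical dataset name, since the real one is truncated above):
# zfs rollback -r thumperpool/mydata@200908270100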
What makes you say that the X25-E's cache can't be disabled or flushed?
The net seems to be full of references to people who are disabling the
cache, or flushing it frequently, and then complaining about the
performance!
T
Frédéric VANNIERE wrote:
The ZIL is a write-only log that is only read
So is there a Change Request on this?