Regardless of the merit of the rest of your proposal, I think you have put your
finger on the core of the problem: aside from some apparent reluctance on the
part of some of the ZFS developers to believe that any problem exists here at
all (and leaving aside the additional monkey wrench that us
>
> Poor sequential read performance has not been quantified.
>
> >- COW probably makes that conflict worse
> >
> >
>
> This needs to be proven with a reproducible, real-world workload before it
> makes sense to try to solve it. After all, if we cannot measure where we
> are, how can we prov
On Nov 19, 2007 11:47 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Asif Iqbal wrote:
> > I have the following layout
> >
> > A 490 with 8 x 1.8GHz CPUs and 16G mem, and 6 6140s with 2 FC controllers,
> > using the A1 and B1 controller ports at 4Gbps.
> > Each controller has 2G NVRAM
> >
> > On 6140s I setup
On Nov 19, 2007 1:43 AM, Louwtjie Burger <[EMAIL PROTECTED]> wrote:
> On Nov 17, 2007 9:40 PM, Asif Iqbal <[EMAIL PROTECTED]> wrote:
> > (Including storage-discuss)
> >
> > I have 6 6140s with 96 disks. Out of which 64 of them are Seagate
> > ST337FC (300GB - 1 RPM FC-AL)
>
> Those disks ar
That locked up pretty quickly as well; after one more reboot, this is what I'm
seeing now:
root:=> zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the re
Asif Iqbal wrote:
> I have the following layout
>
> A 490 with 8 x 1.8GHz CPUs and 16G mem, and 6 6140s with 2 FC controllers,
> using the A1 and B1 controller ports at 4Gbps.
> Each controller has 2G NVRAM.
>
> On the 6140s I set up a raid0 LUN per SAS disk with a 16K segment size.
>
> On 490 I created a zpool with 8 4
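For readers trying to picture the setup above, a minimal sketch of building a striped pool out of array LUNs like the ones described; the pool name, device names, and recordsize below are illustrative only, not the poster's actual configuration:

  # Hypothetical: stripe a pool across LUNs exported by the arrays
  # (real MPxIO device names will be much longer).
  zpool create fcpool c2t0d0 c2t1d0 c2t2d0 c2t3d0
  zfs create fcpool/data
  zfs set recordsize=16K fcpool/data   # e.g. to match the 16K array segment size
  zpool status fcpool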
After messing around... who knows what's going on with it now. Finally
rebooted because I was sick of it hanging. After that, this is what it came
back with:
root:=> zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
c
James Cone wrote:
> Hello All,
>
> Here's a possibly-silly proposal from a non-expert.
>
> Summarising the problem:
>- there's a conflict between small ZFS record size, for good random
> update performance, and large ZFS record size for good sequential read
> performance
>
Poor sequential
Hello All,
Mike Speirs at Sun in New Zealand pointed me toward you-all. I have
several sets of questions, so I plan to group them and send several emails.
This question is about the name/attribute mapping layer in ZFS. In the
most recent version of the source code that I read, it provides hash-table
Hello All,
Here's a possibly-silly proposal from a non-expert.
Summarising the problem:
- there's a conflict between small ZFS record size, for good random
update performance, and large ZFS record size for good sequential read
performance
- COW probably makes that conflict worse
- re
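Since recordsize comes up repeatedly in this thread, a brief hedged sketch: recordsize is a per-dataset property, so the usual compromise is to give randomly updated data its own dataset with a small recordsize and leave sequentially read data at the 128K default (the pool and dataset names below are made up):

  # Small records for randomly updated files, default 128K elsewhere.
  zfs create tank/db
  zfs set recordsize=8K tank/db          # only affects newly written blocks
  zfs create tank/media                  # inherits the 128K default
  zfs get recordsize tank/db tank/media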
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote:
> You should be able to do a 'zpool detach' of the replacement and then
> try again.
Thanks. That worked.
> - Eric
>
> On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
> > Running ON b66 and had a drive fail. Ran 'zfs repl
So... issues with resilvering yet again. This is a ~3TB pool. I have one raid-z
of 5 500GB disks, and a second raid-z of 3 300GB disks. One of the 300GB disks
failed, so I have replaced the drive. After starting the resilver, it takes
approximately 5 minutes for it to complete 68.05% of the resilver
On Sun, Nov 18, 2007 at 02:18:21PM +0100, Peter Schuller wrote:
> > Right now I have noticed that LSI has recently began offering some
> > lower-budget stuff; specifically I am looking at the MegaRAID SAS
> > 8208ELP/XLP, which are very reasonably priced.
I looked up the 8204XLP, which is really q
You should be able to do a 'zpool detach' of the replacement and then
try again.
- Eric
On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
> Running ON b66 and had a drive fail. Ran 'zpool replace' and resilvering
> began. But I accidentally deleted the replacement drive on the array
> via
Running ON b66 and had a drive fail. Ran 'zpool replace' and resilvering
began. But I accidentally deleted the replacement drive on the array
via CAM.
# zpool status -v
...
raidz2 DEGRADED 0 0 0
c0t600A0B800029996605964668CB3
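To make Eric's suggestion above concrete, a hedged sketch of the recovery sequence; the pool name and device placeholders are mine, not from the original report:

  # Detach the half-finished replacement from the replacing vdev,
  # then re-issue the replace once the new LUN exists again.
  zpool detach tank <replacement-device>
  zpool replace tank <failed-device> <replacement-device>
  zpool status -v tank                   # resilver should start over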
> : > Big talk from someone who seems so intent on hiding
> : > their credentials.
>
> : Say, what? Not that credentials mean much to me since I evaluate people
> : on their actual merit, but I've not been shy about who I am (when I
> : responded 'can you guess?' in registering after giving billt
On Mon, Nov 19, 2007 at 04:33:26PM -0700, Brian Lionberger wrote:
> If I yank out a disk in a 4-disk raidz2 array, shouldn't the other disks
> pick up without any errors?
> I have a 3120 JBOD and I went and yanked out a disk and everything
> got hosed. It's okay, because I'm just testing stu
If I yank out a disk in a 4-disk raidz2 array, shouldn't the other disks
pick up without any errors?
I have a 3120 JBOD and I went and yanked out a disk and everything
got hosed. It's okay, because I'm just testing stuff and wanted to see
raidz2 in action when a disk goes down.
Am I missing
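For anyone repeating this kind of pull-a-disk test, a minimal sketch of what a healthy raidz2 should tolerate; the device names are examples only:

  # A 4-disk raidz2 keeps serving data with up to two disks missing.
  zpool create testpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
  # ...pull one disk...
  zpool status -x testpool               # expect DEGRADED, not a hang
  # after reseating or swapping the disk in the same slot:
  zpool replace testpool c1t2d0

If the whole system hangs instead of the pool just going DEGRADED, that usually points at the HBA or driver failing to time out the pulled disk rather than at raidz2 itself.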
...
> currently what it does is to maintain files subject to small random
> writes contiguous to the level of the zfs recordsize. Now after a
> significant run of random writes the file ends up with a scattered
> on-disk layout. This should work well for the transaction
>
Hey ZFS crew ... do you want to jump in and help answer these questions?
On 11/19/07 04:13, Ross wrote:
> Heya,
>
> This is a question based on a feature request I saw in the wiki:
> http://www.genunix.org/wiki/index.php/OpenSolaris_Storage_Developer_Wish_List
>
> I'm a complete newbie to all thi
Roch - PAE wrote:
> Neil Perrin writes:
> >
> >
> > Joe Little wrote:
> > > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> > >> Joe,
> > >>
> > >> I don't think adding a slog helped in this case. In fact I
> > >> believe it made performance worse. Previously the ZIL w
Joe Little wrote:
> On Nov 16, 2007 10:41 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
>>
>> Joe Little wrote:
>>> On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. P
On Nov 19, 2007 9:41 AM, Roch - PAE <[EMAIL PROTECTED]> wrote:
>
> Neil Perrin writes:
> >
> >
> > Joe Little wrote:
> > > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> > >> Joe,
> > >>
> > >> I don't think adding a slog helped in this case. In fact I
> > >> believe it m
Neil Perrin writes:
>
>
> Joe Little wrote:
> > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> >> Joe,
> >>
> >> I don't think adding a slog helped in this case. In fact I
> >> believe it made performance worse. Previously the ZIL would be
> >> spread out over all devi
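For readers following along, the separate intent log ("slog") being discussed is just a log vdev added to the pool; a minimal sketch with a hypothetical device name:

  # Move the ZIL from the main pool disks to a dedicated device.
  # Without a slog, ZIL blocks are allocated from the regular vdevs.
  zpool add tank log c3t0d0
  zpool status tank                      # the device appears under "logs"

As the thread notes, this only helps if the log device handles synchronous writes with lower latency than the pool disks; a slow slog concentrates all ZIL traffic on one slow device instead of spreading it across the pool.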
On Mon, Nov 19, 2007 at 11:10:32AM +0100, Paul Boven wrote:
> Any suggestions on how to further investigate / fix this would be very
> much welcomed. I'm trying to determine whether this is a zfs bug or one
> with the Transtec raidbox, and whether to file a bug with either
> Transtec (Promise) or z
On 11/19/07, Ian Collins <[EMAIL PROTECTED]> wrote:
> For a home user, data integrity is probably as, if not more, important
> than for a corporate user. How many home users do regular backups?
Let me correct a point I made badly the first time around: I
value the data integrity provided
K wrote:
> I have an xVM b75 server and use zfs for storage (a zfs root mirror and a
> raid-z2 datapool).
>
> I see everywhere that it is recommended to have a lot of memory on a
> zfs file server... but I also need to relinquish a lot of my memory to
> be used by the domUs.
>
> What would a
I have an xVM b75 server and use zfs for storage (a zfs root mirror and a
raid-z2 datapool).
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would a good value for dom0_mem o
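A hedged sketch of the two knobs usually involved here: dom0_mem goes on the xVM hypervisor line in GRUB's menu.lst, and the ZFS ARC inside dom0 can be capped from /etc/system. The 2G and 1G figures below are placeholders for illustration, not a recommendation:

  # /boot/grub/menu.lst -- give dom0 a fixed (e.g. 2 GB) allocation:
  #   kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M
  #
  # /etc/system -- keep the ARC from consuming most of that, e.g. 1 GB:
  #   set zfs:zfs_arc_max = 0x40000000
  #
  # Both changes require a reboot to take effect.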
> For a home user, data integrity is probably as, if not more, important
> than for a corporate user. How many home users do regular backups?
I'm a heavy computer user and probably passed the 500GB mark way before
most other home users; I did various stunts like running a RAID0 on IBM
Deathstars, a
Anton B. Rang writes:
> > When you have a striped storage device under a
> > file system, then the database or file system's view
> > of contiguous data is not contiguous on the media.
>
> Right. That's a good reason to use fairly large stripes. (The
> primary limiting factor for stripe s
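To put rough numbers on that point (illustrative only, using the 16K segment size mentioned earlier in the thread and ZFS's default 128K recordsize):

  128K record / 16K segment  = 8 segments -> one random read busies 8 spindles
  128K record / 128K segment = 1 segment  -> the same read busies 1 spindle

so small stripe segments turn every record-sized random read into work for many disks, which limits how many random reads the array can service concurrently.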
> OTOH, when someone whom I don't know comes across as
> a pushover, he loses
> credibility.
It may come as a shock to you, but some people couldn't care less about those
who assess 'credibility' on the basis of form rather than on the basis of
content - which means that you can either lose out
Hi Tom, everyone,
Tom Mooney wrote:
> A little extra info:
> ZFS brings in a ZFS spare device the next time the pool is accessed, not
> a raidbox hot spare. Resilvering starts automatically and increases disk
> access times by about 30%. The first hour of estimated time left (for
> 5-6 TB pools)
Paul Kraus wrote:
>
> I also like being able to see how much space I am using for
> each with a simple df rather than a du (that takes a while to run). I
> can also tune compression on a data type basis (no real point in
> trying to compress media files that are already compressed MPEG and
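A minimal sketch of the per-dataset layout Paul is describing (dataset names made up): one dataset per data type gives a df-style view of each, and lets compression be turned off where it cannot help:

  # One dataset per data type; df reports each one's usage directly.
  zfs create tank/docs
  zfs create tank/media
  zfs set compression=on tank/docs       # text and documents compress well
  zfs set compression=off tank/media     # MPEG etc. are already compressed
  zfs list -o name,used,available,compressratio
  df -h /tank/docs /tank/media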
>> As I said in a different thread, I really do try to respond to people in
>> the manner that they deserve
> This is the wrong way to approach the problem.
Sorry, Will: it might be the wrong way for *you* to approach 'the problem'
(such as it is), but not for me.
...
>> (I was beginning to w