On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
> In short, separate logs with rotating rust may reduce sync write latency by
> perhaps 2-10x on an otherwise busy system. Using write-optimized SSDs
> will reduce sync write latency by perhaps 10x in all cases. This is on
[EMAIL PROTECTED] wrote:
> > WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
> > The P45 based boards are a no-brainer
>
> 16G of DDR2-1066 with P45 or
> 8G of ECC DDR2-800 with 3210 based boards
>
> That is the question.
>
>
I guess the answer is how valuable is your data?
--
Ian.
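A minimal sketch of the separate log setup Richard describes, with the pool
and device names as placeholders:

    # add a dedicated fast device (ideally a write-optimized SSD) as a slog
    zpool add tank log c4t0d0
    # or mirror it to protect against a single slog device failure
    zpool add tank log mirror c4t0d0 c5t0d0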
> WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
> The P45 based boards are a no-brainer
16G of DDR2-1066 with P45 or
8G of ECC DDR2-800 with 3210 based boards
That is the question.
Rob
On Fri, Nov 14, 2008 at 7:12 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Fri, 14 Nov 2008, Al Hopper wrote:
>>
>> b) If I were building a system today, I'd go Intel - even though I'm
>> an AMD fanboy - but I can't recommend AMD today ... unfortunately.
>
> Is there some particular reason
Jordan Brown wrote:
> Rich Reynolds wrote:
>> BTW: I am loath to call them bugs until I know it's not a
>> configuration/pilot error.
>
> IMHO, if you can cause the root to become corrupt, it's a bug. Short of
> mucking around in /dev/kmem or /dev/dsk/*, it just shouldn't be possible
> to corru
On Fri, 14 Nov 2008, Al Hopper wrote:
>
> b) If I were building a system today, I'd go Intel - even though I'm
> an AMD fanboy - but I can't recommend AMD today ... unfortunately.
Is there some particular reason for this? The now-shipping 45 nm
quad-core Opterons seem quite nice indeed.
I looked at this a month back; I was leaning towards Intel for
performance and power consumption but went for AMD due to the lack of ECC
support in most of the Intel chipsets.
I went for an AM2+ GeForce 8200 motherboard which seemed more stable
with Solaris than the 8300. With the AM2+ socket I can w
Rich Reynolds wrote:
> BTW: I am loath to call them bugs until I know it's not a
> configuration/pilot error.
IMHO, if you can cause the root to become corrupt, it's a bug. Short of
mucking around in /dev/kmem or /dev/dsk/*, it just shouldn't be possible
to corrupt a file system.
On Sat, Nov 15, 2008 at 00:46, Richard Elling <[EMAIL PROTECTED]> wrote:
> Adam Leventhal wrote:
>>
>> On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
>>
>>>
>>> That is _not_ active-active, that is active-passive.
>>>
>>> If you have an active-active system I can access the same d
On Fri, Nov 14, 2008 at 4:43 PM, gnomad <[EMAIL PROTECTED]> wrote:
> Like many others, I am looking to put together a SOHO NAS based on ZFS/CIFS.
> The plan is 6 x 1TB drives in RAIDZ2 configuration, driven via mobo with 6
> SATA ports.
>
> I've read most, if not all, of the threads here, as wel
Adam Leventhal wrote:
> On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
>
>> That is _not_ active-active, that is active-passive.
>>
>> If you have an active-active system I can access the same data via both
>> controllers at the same time. I can't if it works like you just
>> d
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper <[EMAIL PROTECTED]> wrote:
>> No clue. My friend also upgraded to b101. Said it was working awesome
>> - improved network performance, etc. Then he said after a few days,
>> he's decided to downgrade too - too many other weird side effects.
>
> Any more d
On Fri, Nov 14, 2008 at 10:22 AM, mike <[EMAIL PROTECTED]> wrote:
> No clue. My friend also upgraded to b101. Said it was working awesome
> - improved network performance, etc. Then he said after a few days,
> he's decided to downgrade too - too many other weird side effects.
Any more details avai
On Fri, Nov 14, 2008 at 01:07:29PM -0800, Ed Clark wrote:
hi,
>
> is the system still in the same state initially reported?
Yes.
> i.e. you have not manually run any commands (i.e. installboot) that would have
> altered the slice containing the root fs where 137137-09 was applied
>
> could you
gnomad wrote:
> So, my questions:
>
> - Has the MCP55 copy/fs lockup bug been fixed yet?
>
>
Which bug ids? I've never seen any such problems in 18 months of heavy
use. Note the x4540 uses these.
> - Have the Nvidia 750a driver issues been resolved?
>
>
Which bug ids?
--
Ian.
These stack traces look like 6569719 (fixed in s10u5).
For update 5, you could start with the kernel stack of the hung commands
(use ::pgrep and ::findstack). We might also need the sync thread's stack
(something like ::walk spa | ::print spa_t spa_dsl_pool->dp_txg.tx_sync_thread | ::findstack).
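A sketch of the mdb session being suggested; the process name pattern is a
placeholder and the ::print member path is copied from the message above, so
it may need adjusting for your particular release:

    # against the live kernel
    mdb -k
    ::pgrep zfs | ::walk thread | ::findstack -v
    ::walk spa | ::print spa_t spa_dsl_pool->dp_txg.tx_sync_thread | ::findstack -v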
Like many others, I am looking to put together a SOHO NAS based on ZFS/CIFS.
The plan is 6 x 1TB drives in RAIDZ2 configuration, driven via mobo with 6 SATA
ports.
I've read most, if not all, of the threads here, as well as sbredon's excellent
article on building a home NAS, yet I still have a
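For reference, the layout described above is a single raidz2 vdev and is a
one-liner to create; the device names below are placeholders:

    # 6 x 1TB drives, double parity, roughly 4 TB usable
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0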
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
> That is _not_ active-active, that is active-passive.
>
> If you have an active-active system I can access the same data via both
> controllers at the same time. I can't if it works like you just
> described. You can't call it activ
hi All,
I realize the subject is a bit incendiary, but we're running into what
I view as a design omission with ZFS that is preventing us from
building highly available storage infrastructure; I want to bring some
attention (again) to this major issue:
Currently we have a set of iSCSI targe
> I think you're confusing our clustering feature with the remote
> replication feature. With active-active clustering, you have two closely
> linked head nodes serving files from different zpools using JBODs
> connected to both head nodes. When one fails, the other imports the
> failed node's pool
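At the ZFS level, the takeover step described above amounts to a forced
import of the failed node's pool on the surviving head (pool name is a
placeholder):

    # on the surviving head node, after the failed node has been fenced off
    zpool import -f tank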
Brent Jones wrote:
> *snip*
>>> a 'zfs send' on the sending host
>>> monitors the pool/filesystem for changes, and immediately sends them to
>>> the
>>> receiving host, which applies the change to the remote pool.
>> This is asynchronous, and isn't really different from running zfs send/recv
>> in
hi,
is the system still in the same state initially reported? i.e. you have not
manually run any commands (i.e. installboot) that would have altered the slice
containing the root fs where 137137-09 was applied.
Could you please provide the following:
1. a copy of the 137137-09 patchadd log if yo
*snip*
>> a 'zfs send' on the sending host
>> monitors the pool/filesystem for changes, and immediately sends them to
>> the
>> receiving host, which applies the change to the remote pool.
>
> This is asynchronous, and isn't really different from running zfs send/recv
> in a loop. Whether the loop
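A minimal sketch of the kind of send/recv loop being discussed; the dataset,
host, and interval are placeholders, and the first snapshot is seeded in full
before the loop starts:

    #!/bin/ksh
    # naive asynchronous replication of tank/fs to remotehost, once a minute
    prev=repl-$(date '+%Y%m%d%H%M%S')
    zfs snapshot tank/fs@$prev
    zfs send tank/fs@$prev | ssh remotehost zfs recv backup/fs
    while :; do
        sleep 60
        cur=repl-$(date '+%Y%m%d%H%M%S')
        zfs snapshot tank/fs@$cur
        zfs send -i @$prev tank/fs@$cur | ssh remotehost zfs recv -F backup/fs
        zfs destroy tank/fs@$prev
        prev=$cur
    done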
Bob Friesenhahn wrote:
> On Fri, 14 Nov 2008, Joerg Schilling wrote:
>>> -----
>>> Disk RPM          3,600          10,000        x3
>>
>> The best rate I saw in 1985 was 800 kB/s (w. linear reads);
>> now I see 120 MB/s, which is more than x100 ;
>On Fri, 14 Nov 2008, Joerg Schilling wrote:
>>> -----
>>> Disk RPM          3,600          10,000        x3
>>
>> The best rate I saw in 1985 was 800 kB/s (w. linear reads);
>> now I see 120 MB/s, which is more than x100 ;-)
>
>Yes. And how tha
Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> On Fri, 14 Nov 2008, Joerg Schilling wrote:
> >> -----
> >> Disk RPM          3,600          10,000        x3
> >
> > The best rate I saw in 1985 was 800 kB/s (w. linear reads);
> > now I see 120 MB/
On Fri, 14 Nov 2008, Joerg Schilling wrote:
>> -----
>> Disk RPM          3,600          10,000        x3
>
> The best rate I saw in 1985 was 800 kB/s (w. linear reads);
> now I see 120 MB/s, which is more than x100 ;-)
Yes. And how that SSDs ar
[EMAIL PROTECTED] wrote:
> But zfs could certainly use bigger buffers; just like mbuffer, I also
> wrote my own "pipebuffer" which does pretty much the same.
You too? (My "buffer" program which I used to diagnose the problem is
attached to the bugid ;-)
I know Chris Gerhard wrote one too.
Seems
River Tarnell wrote:
> Daryl Doami:
>> As an aside, replication has been implemented as part of the new Storage
>> 7000 family. Here's a link to a blog discussing using the 7000
>> Simulator running in two separate VMs and replicating w/ each other:
>
> that's interesting, although 'less than a
Andrew Gabriel <[EMAIL PROTECTED]> wrote:
> I have put together a simple set of figures I use to compare how disks
> and systems have changed over the 25 year life of ufs/ffs, which I
> sometimes use when I give ZFS presentations...
>
>                    25 years ago        Now        factor
>
>BTW: a lot of numbers in Solaris have not grown in a long time and
>thus create problems now. Just think about the maxphys value:
>63 kB on x86 does not even allow writing a single Blu-ray disc sector
>with a single transfer.
Any "fixed value" will soon be too small (think about ufs_throt
OpenSolaris + ZFS achieves 120 MB/sec read speed with 4 SATA 7200 rpm discs,
440 MB/sec read speed with 7 SATA discs, and 220 MB/sec write speed.
2 GB/sec write speed with 48 discs (on the Sun Thumper x4600).
I have links to websites where I've read this.
Neil Perrin wrote:
> I wouldn't expect any improvement using a separate disk slice for the
> Intent Log unless that disk was much faster and was otherwise largely
> idle. If it was heavily used then I'd expect quite the performance
> degradation as the disk head bounces around between slic
- original message -
Subject: Re: [zfs-discuss] 'zfs recv' is very slow
Sent: Fri, 14 Nov 2008
From: Bob Friesenhahn <[EMAIL PROTECTED]>
> On Fri, 14 Nov 2008, Joerg Schilling wrote:
> >
> > On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
> > set the socket
Joerg Schilling wrote:
> Andrew Gabriel <[EMAIL PROTECTED]> wrote:
>> Andrew Gabriel wrote:
>>> Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
>>> need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
>>> many orders of magnitude bigger than SO_RCVBUF can
On Fri, 14 Nov 2008, Joerg Schilling wrote:
>
> On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
> set the socket buffer size to 63 kB. 63kB : 1 MB is the same ratio
> as 256 MB : 4 GB.
>
> BTW: a lot of numbers in Solaris have not grown in a long time and
> thus create probl
On Fri, Nov 14, 2008 at 10:04 AM, Joerg Schilling
<[EMAIL PROTECTED]> wrote:
> Andrew Gabriel <[EMAIL PROTECTED]> wrote:
>
>> Andrew Gabriel wrote:
>> > Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
>> > need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That'
Andrew Gabriel <[EMAIL PROTECTED]> wrote:
> Andrew Gabriel wrote:
> > Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
> > need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
> > many orders of magnitude bigger than SO_RCVBUF can go.
>
> No -- that's wro
On 11/14/08 04:29, Tobias Exner wrote:
> Hi experts,
>
> I need a little help from your side to understand what's going on.
>
>
> I've got a Sun X4540 Thumper and set up some zpools. Further, I engaged
> the powerd configuration to stop the disks when they are idle for a
> specified time.
>
>
> No
Andrew Gabriel wrote:
> Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
> need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
> many orders of magnitude bigger than SO_RCVBUF can go.
No -- that's wrong -- should read 250MB buffer!
Still some orders of m
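For anyone who wants to experiment with larger windows, the relevant Solaris
TCP tunables can be raised with ndd; the values below are only examples, and
they revert at reboot unless set from a startup script:

    # cap on what applications may request via SO_SNDBUF/SO_RCVBUF, in bytes
    ndd -set /dev/tcp tcp_max_buf 16777216
    # default receive and transmit windows
    ndd -set /dev/tcp tcp_recv_hiwat 1048576
    ndd -set /dev/tcp tcp_xmit_hiwat 1048576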
Joerg Schilling wrote:
> Andrew Gabriel <[EMAIL PROTECTED]> wrote:
>
>> That is exactly the issue. When the zfs recv data has been written, zfs
>> recv starts reading the network again, but there's only a tiny amount of
>> data buffered in the TCP/IP stack, so it has to wait for the network to
Jerry K wrote:
> Hello Thomas,
>
> What is mbuffer? Where might I go to read more about it?
>
> Thanks,
>
> Jerry
>
>
>
>>
>> Yesterday I released a new version of mbuffer, which also enlarges
>> the default TCP buffer size. So everybody using mbuffer for network data
>> transfer might
Hello Thomas,
What is mbuffer? Where might I go to read more about it?
Thanks,
Jerry
>
> Yesterday I released a new version of mbuffer, which also enlarges
> the default TCP buffer size. So everybody using mbuffer for network data
> transfer might want to update.
>
> For everybody unfam
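A typical way to put mbuffer into a zfs send/recv pipeline; the hosts, port,
dataset names, and buffer sizes are placeholders:

    # on the receiving host: listen on TCP 9090, buffer 1 GB in RAM, feed zfs recv
    mbuffer -I 9090 -s 128k -m 1G | zfs recv backup/fs

    # on the sending host: stream the snapshot through mbuffer to the receiver
    zfs send tank/fs@snap | mbuffer -O receiver:9090 -s 128k -m 1G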
No clue. My friend also upgraded to b101. Said it was working awesome
- improved network performance, etc. Then he said after a few days,
he's decided to downgrade too - too many other weird side effects.
This has a comparison (at the time) as to what the differences are
with the different Solaris
> Could you provide the panic message and stack trace,
> plus the stack traces of when it's hung?
>
> --matt
Hello matt,
here is info and stack trace of a server running Update 3:
$ uname -a
SunOS qacpp03 5.10 Generic_127111-05 sun4us sparc FJSV,GPUSC-M
$ head -1 /etc/release
Joerg Schilling wrote:
> Andrew Gabriel <[EMAIL PROTECTED]> wrote:
>
>> That is exactly the issue. When the zfs recv data has been written, zfs
>> recv starts reading the network again, but there's only a tiny amount of
>> data buffered in the TCP/IP stack, so it has to wait for the network to
Andrew Gabriel <[EMAIL PROTECTED]> wrote:
> That is exactly the issue. When the zfs recv data has been written, zfs
> recv starts reading the network again, but there's only a tiny amount of
> data buffered in the TCP/IP stack, so it has to wait for the network to
> heave more data across. In e
Hi guys. Read this thread, good info! I'm now considering getting one of the
MBs recommended in the Tom's Hardware review, to which a URL was posted
earlier. The article is here:
http://www.tomshardware.com/reviews/intel-e7200-g31,2039.html
I would like to know if any of you can confirm Solari
fwiw, my attempt to lu from sol 10 u6 to b101 failed miserably with lots
of broken services, etc. I ditched it but was able to revert to sol 10 u6.
Vincent Boisard wrote:
> Do you have an idea if your problem is due to live upgrade or b101
> itself ?
>
> Vincent
>
> On Thu, Nov 13, 2008 at 8:06
Do you have an idea if your problem is due to live upgrade or b101 itself ?
Vincent
On Thu, Nov 13, 2008 at 8:06 PM, mike <[EMAIL PROTECTED]> wrote:
> Depends on your hardware. I've been stable for the most part on b98. Live
> upgrade to b101 messed up my networking to nearly a standstill. It st
Hi experts,
I need a little help from your side to understand what's going on.
I've got a Sun X4540 Thumper and set up some zpools. Further, I engaged
the powerd configuration to stop the disks when they are idle for a
specified time.
Now I noticed that all disks come up once an hour due t
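The kind of /etc/power.conf entry being described looks roughly like this;
the physical device path and the idle threshold are made-up placeholders, and
pmconfig makes powerd re-read the file:

    # spin this disk down after 30 minutes of idle time (placeholder path)
    device-thresholds /pci@0,0/pci1022,7458@11/pci11ab,11ab@1/disk@0,0 30m

    # apply the new configuration
    pmconfig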
On Thu, Nov 13, 2008 at 04:54:57PM -0800, Gerry Haskins wrote:
> Jens, http://www.sun.com/bigadmin/patches/firmware/release_history.jsp on
> the Big Admin Patching center, http://www.sun.com/bigadmin/patches/ list
> firmware revisions.
Thanks a lot. Dug around there and found that 121683-06
Still no luck :-(
I installed snv_100 on a new disk, mounted the old disk, and copied the home
directories etc., and now at least I have a system that works, if somewhat
stunted compared to the old system.
It would be good if the old disk could be brought back to its former glory...