Hi Jorgen,
warning ... weird idea inside ...
> Ross wrote:
> > Great idea, much neater than most of my suggestions too :-)
>
> What is? Please keep some context for those of us on email!

X25-E drives as a mirrored boot volume on an x4500, partitioning off some of the space for the slog.
Ross wrote:
> Great idea, much neater than most of my suggestions too :-)

What is? Please keep some context for those of us on email!
--
Ian.
Great idea, much neater than most of my suggestions too :-)
X25-E would be good, but some pools have no spares, and since you can't remove vdevs, we'd have to move all customers off the x4500 before we can use it.

Ah, it just occurred to me that perhaps for our specific problem we will buy two X25-Es and replace the root mirror. The OS and ZIL logs can live together, and /var can go in the data pool. That way we would not need to rebuild the data pool and all th...
On Thu, 30 Jul 2009, Richard Elling wrote:
> According to Gartner, enterprise SSDs accounted for $92.6M of a $585.5M SSD market in June 2009, representing 15.8% of the SSD market. STEC recently announced an order for $120M of ZeusIOPS drives from "a single enterprise storage customer." From 20...
On Thu, 30 Jul 2009, Andrew Gabriel wrote:
> Except for price/GB, it is game over for HDDs. Since price/GB is based on Moore's Law, it is just a matter of time.

SSDs are a sufficiently new technology that I suspect there's a significant probability of discovering new techniques which give larger s...
Richard Elling wrote:
On Jul 30, 2009, at 9:26 AM, Bob Friesenhahn wrote:
> Do these SSDs require a lot of cooling?

No. During the "Turbo Charge your Apps" presentations I was doing around the UK, I often pulled one out of a server to hand around the audience when I'd finished the demos on it...
That should work just as well, Bob, although rather than Velcro I'd be tempted to drill some holes into the server chassis somewhere and screw the drives on. These things do use a bit of power, but with the airflow in a thumper I don't think I'd be worried.

If they were my own servers I'd be ve...
On Thu, 30 Jul 2009, Ross wrote:
> Without spare drive bays I don't think you're going to find one solution that works for x4500 and x4540 servers. However, are these servers physically close together? Have you considered running the slog devices externally?

This all sounds really sophisticated...

... less than $200
-Kyle
Yours
Markus Kovero

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jorgen Lundman
Sent: 30 July 2009 9:55
To: ZFS Discussions
Subject: Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08
On Thu, Jul 30, 2009 at 5:27 AM, Ross wrote:
> Without spare drive bays I don't think you're going to find one solution that
> works for x4500 and x4540 servers. However, are these servers physically
> close together? Have you considered running the slog devices externally?
It appears as though...
Without spare drive bays I don't think you're going to find one solution that
works for x4500 and x4540 servers. However, are these servers physically close
together? Have you considered running the slog devices externally?
One possible choice may be to run something like the Supermicro SC216...
Bob Friesenhahn wrote:
Something to be aware of is that not all SSDs are the same. In fact, some "faster" SSDs may use a RAM write cache (they all do) and then ignore a cache sync request, while not including hardware/firmware support to ensure that the data is persisted if there is power loss.
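One rough way to check for the behaviour Bob describes, without trusting the spec sheet, is to time small synchronous writes against a file on the device. The sketch below is only an illustration under the assumption that O_DSYNC is honoured end to end; the path, write size, and iteration count are made up. If the average latency comes back implausibly low for a drive with no supercap, the device is probably acknowledging writes out of its volatile cache.

/* sync_latency.c -- rough check of small synchronous write latency.
 * Sketch only: the path, size, and iteration count are placeholders.
 * Build: cc -o sync_latency sync_latency.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "sync_latency.dat";
    const int iterations = 1000;
    char buf[4096];
    struct timeval start, end;
    double usec;
    int fd, i;

    memset(buf, 0xA5, sizeof(buf));

    /* O_DSYNC: each write should not return until the data is on stable storage. */
    fd = open(path, O_WRONLY | O_CREAT | O_DSYNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    gettimeofday(&start, NULL);
    for (i = 0; i < iterations; i++) {
        if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
            perror("pwrite");
            return 1;
        }
    }
    gettimeofday(&end, NULL);
    close(fd);

    usec = (end.tv_sec - start.tv_sec) * 1e6 + (end.tv_usec - start.tv_usec);
    printf("%d x 4KB O_DSYNC writes: %.1f us/write on average\n",
           iterations, usec / iterations);
    /* A handful of microseconds per write from a drive without a supercap
     * suggests the cache flush is being ignored; an honest disk is closer
     * to milliseconds, and a protected-cache SSD somewhere in between. */
    return 0;
}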
On Wed, 29 Jul 2009, Jorgen Lundman wrote:
> So, it is slower than the CF test. This is disappointing. Everyone else seems to use the Intel X25-M, which has a write speed of 170MB/s (2nd generation), so perhaps that is why it works better for them. It is curious that it is slower than the CF card.
Hi James, I'll not reply in line since the forum software is completely munging
your post.
On the X25-E I believe there is cache, and it's not backed up. While I haven't
tested it, I would expect the X25-E to have the cache turned off while used as
a ZIL.
The 2nd generation X25-E announced by...
On 29/07/2009, at 5:47 PM, Ross wrote:
> Everyone else should be using the Intel X25-E. There's a massive difference between the M and E models, and for a slog it's IOPS and low latency that you need.

Do they have any capacitor-backed cache? Is this cache considered stable storage? If so...
Everyone else should be using the Intel X25-E. There's a massive difference
between the M and E models, and for a slog it's IOPS and low latency that you
need.
I've heard that Sun use X25-Es, but I'm sure that original reports had them using STEC. I have a feeling the 2nd generation X25-E...
We just picked up the fastest SSD we could in the local Bic Camera, which turned out to be a CSSD-SM32NI, with supposedly 95MB/s write speed.
I put it in place, and replaced the slog over:
0m49.173s
0m48.809s
So, it is slower than the CF test. This is disappointing. Everyone else...
On Wed, 29 Jul 2009, Jorgen Lundman wrote:
> For example, I know rsync and tar do not use fdsync (but dovecot does) on their close(), but does NFS make it fdsync anyway?

NFS is required to do synchronous writes. This is what allows NFS clients to recover seamlessly if the server spontaneously reboots.
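To make the fdsync point concrete: the difference Jorgen is asking about is whether the application forces data to stable storage before close(). The sketch below is not from the thread and uses made-up file names. tar/rsync-style code just writes and closes, so ZFS can commit the data asynchronously; dovecot-style code calls fsync() (which shows up as fdsync in truss on Solaris) before close(), forcing a ZIL commit. Over NFS the server has to commit to stable storage before acknowledging regardless, which is why even a plain tar hammers the slog.

/* close_vs_fsync.c -- sketch of the two close() behaviours discussed above.
 * File names and contents are placeholders, not from the thread.
 * Build: cc -o close_vs_fsync close_vs_fsync.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Write a small file; optionally force it to stable storage before close(). */
static int write_file(const char *path, int sync_before_close)
{
    const char msg[] = "hello, slog\n";
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);

    if (fd < 0) { perror("open"); return -1; }
    if (write(fd, msg, sizeof(msg) - 1) < 0) { perror("write"); close(fd); return -1; }

    if (sync_before_close && fsync(fd) < 0) {   /* dovecot-style: wait for stable storage */
        perror("fsync");
        close(fd);
        return -1;
    }
    return close(fd);                           /* tar/rsync-style: just close and move on */
}

int main(void)
{
    write_file("async.txt", 0);   /* data may sit in memory and be committed later */
    write_file("synced.txt", 1);  /* fsync() forces a ZIL commit before close() returns */
    return 0;
}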
This thread started over in nfs-discuss, as it appeared to be an NFS problem initially, or at the very least an interaction between NFS and the ZIL. Just summarising speeds we have found when untarring something. Always in a new/empty directory. Only looking at write speed; read is always very fast...
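For anyone wanting to repeat the untar-style test without a particular tarball, the sketch below creates a few thousand small files in a fresh directory and reports the elapsed time. Everything in it (file count, file size, directory name) is made up; the point is just that a stream of small file creates over NFS turns into synchronous commits on the server, which is exactly the load the slog is meant to absorb.

/* many_small_files.c -- crude stand-in for the "untar into an empty directory"
 * write test described above. File count, size, and directory are made up.
 * Build: cc -o many_small_files many_small_files.c
 * Run:   ./many_small_files /some/nfs/or/zfs/path
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *dir = (argc > 1) ? argv[1] : "smallfiles";
    const int nfiles = 5000;     /* number of small files to create */
    char buf[8192];              /* 8 KB per file, roughly "untar-sized" */
    char path[1024];
    struct timeval start, end;
    double sec;
    int fd, i;

    memset(buf, 'x', sizeof(buf));
    mkdir(dir, 0755);            /* fresh/empty directory, as in the tests above */

    gettimeofday(&start, NULL);
    for (i = 0; i < nfiles; i++) {
        snprintf(path, sizeof(path), "%s/f%05d", dir, i);
        fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, buf, sizeof(buf)) < 0) { perror("write"); return 1; }
        close(fd);               /* no fsync here: same behaviour as tar */
    }
    gettimeofday(&end, NULL);

    sec = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d files of %zu bytes in %.2f s (%.0f files/s)\n",
           nfiles, sizeof(buf), sec, nfiles / sec);
    return 0;
}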