erik.trim...@sun.com said:
> The suggestion was to make the SSD on each machine an iSCSI volume, and add
> the two volumes as a mirrored ZIL into the zpool.

I've mentioned the following before: for a poor-person's slog which gives
decent NFS performance, we have had good results with allocat[...]
Andrey Kuzmin wrote:
> And how do you expect the mirrored iSCSI volume to work after
> failover, with secondary (ex-primary) unreachable?
>
> Regards,
> Andrey

As a normal degraded mirror. No problem.

The suggestion was to make the SSD on each machine an iSCSI volume, and
add the two volumes as a mirrored ZIL into the zpool.
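To make that concrete, a rough sketch of the pool side of the idea; the pool
name (tank) and the device names are placeholders here, and the actual names
will depend on how the iSCSI LUN shows up on each node's initiator:

    # add the local SSD plus the remote iSCSI-backed SSD as a mirrored slog
    zpool add tank log mirror c2t1d0 c3t1d0

    # after a failover the old primary's target is unreachable, so the pool
    # just keeps running with the log mirror degraded; check with
    zpool status -x tank

    # if the other node is gone for good, drop the dead half of the mirror
    zpool detach tank c3t1d0
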
Charles Hedrick wrote:
> Is iSCSI reliable enough for this?

YES.

The original idea is a good one, and one that I'd not thought of. The
(old) iSCSI implementation is quite mature, if not anywhere as nice
(feature/flexibility-wise) as the new COMSTAR stuff.

I'm thinking that just putting in [...]
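For anyone wanting to try it with the old (pre-COMSTAR) target, a sketch of
the setup; the pool name, volume name, size, and the peer's IP address are all
made up, so check the zfs(1M) and iscsiadm(1M) man pages before copying:

    # on each node: carve a small zvol out of the local SSD and export it
    # via the old iscsitgt-based target
    zfs create -V 8g ssdpool/slog
    zfs set shareiscsi=on ssdpool/slog

    # on each node: point the initiator at the other node and pick up the LUN
    iscsiadm add discovery-address 192.168.1.2
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi
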
On Tue, 22 Dec 2009, Ross Walker wrote:
> I think zil_disable may actually make sense.
>
> How about a zil comprised of two mirrored iSCSI vdevs formed from an SSD on
> each box?

I would not have believed that this is a useful idea except that I
have seen "IOPS offload" to a server on the network w[...]
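An easy way to see that offload for yourself, assuming a pool named tank
(placeholder) with the slog attached: watch per-vdev activity while an NFS
client pushes synchronous writes; the write IOPS should land on the log
mirror rather than on the data disks.

    # one-second samples, broken out per vdev
    zpool iostat -v tank 1
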
It turns out that our storage is currently being used for
* backups of various kinds, run daily by cron jobs
* saving old log files from our production application
* saving old versions of java files from our production application
Most of the usage is write-only, and a fair amount of it involves [...]
Thanks. That's what I was looking for.
Yikes! I hadn't realized how expensive the Zeus is.
We're using Solaris cluster, so if the system goes down, the other one takes
over. That means that if the ZIL is on a local disk, we lose it in a crash.
Might as well just set zil_disable (something I'm c[...]
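For reference, on Solaris 10 of that era zil_disable is a kernel tunable, not
a per-dataset property, and it only takes effect when a dataset is
(re)mounted. Something along these lines (taken from the usual ZFS tuning
folklore, so verify before relying on it):

    # on a live system, then remount the affected filesystems:
    echo zil_disable/W0t1 | mdb -kw

    # or persistently, via a line in /etc/system:
    set zfs:zil_disable = 1

Keep in mind it is global to the host, and with the ZIL off an NFS client can
lose the last few seconds of acknowledged synchronous writes if the server
crashes.
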
We have a server using Solaris 10. It's a pair of systems with a shared J4200,
with Solaris cluster. It works very nicely. Solaris cluster switches over
transparently.

However, as an NFS server it is dog-slow. This is the usual synchronous write
problem. Setting zil_disable fixes the problem. [...]