Hello Duff
Thanks for emailing me the source & binary for your test app.
My PC for testing has snv_60 installed. I was about to upgrade to snv_70,
but I thought it might be useful to test with the older version of OpenSolaris
first, in case the problem you are seeing is a regression.
And for the
Ged wrote:
> Does anyone know if multi-master replication can be done with ZFS?
What is your definition of multi-master replication?
> Use case is 2 data centers that you want to keep in sync.
>
> My understanding is that only master-slave is possible.
ZFS doesn't do replication, per se. It is
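The closest built-in approximation of one-way (master-slave) replication with ZFS is periodic snapshot send/receive. A hedged sketch; the hostname, pool, and dataset names below are made up for illustration:

```shell
# One-way (master -> slave) replication using ZFS snapshots.
# Hostname, pool, and dataset names are illustrative only.

# One-time full copy of the dataset to the remote site:
zfs snapshot tank/data@base
zfs send tank/data@base | ssh dr-site zfs receive backup/data

# Afterwards, ship only the changes since the previous snapshot
# as an incremental stream:
zfs snapshot tank/data@monday
zfs send -i tank/data@base tank/data@monday | \
    ssh dr-site zfs receive backup/data
```

Note this is one-way, point-in-time replication: both sites accepting writes independently (true multi-master) is not something zfs send/receive provides.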
This message posted from opensolaris.org
___
zfs-discuss mailing list
Hi Duff,
The OpenSolaris bug reporting system is not very robust yet. The team is
aware of it and plans to make it better.
So, the bugs you filed might have been lost.
I have filed bug 6617080 for you. You should be able to see it through
bugs.opensolaris.org tomorrow.
I will contact Larry to ge
On Mon, 15 Oct 2007, Richard Elling wrote:
> I can neither confirm nor deny that I can confirm or deny what somebody else
> said.
> http://www.techworld.com/storage/features/index.cfm?featureID=3728&pagtype=samecatsamechan
Ooohh: JBOD 1400 (2U, 24 x 2.5" drives). Someone's been listening! :-)
Sun has seen all of this during various problems over the past year and a half,
but:
CX600 FLARE code 02.07.600.5.027
CX500 FLARE code 02.19.500.5.044
Brocade Fabric, relevant switch models are 4140 (core), 200e (edge), 3800
(edge).
Sun Branded Emulex HBAs in the following models:
SG-XPCI1FC-
> I can neither confirm nor deny that I can confirm or deny what somebody else
> said.
> http://www.techworld.com/storage/features/index.cfm?featureID=3728&pagtype=samecatsamechan
> -- richard
No problem, just say yes or no! :-)
--
regards
Claus
When lenity and cruelty play for a kingdom,
th
On Mon, 15 Oct 2007, Tom Davies wrote:
> Say, for example, old custom 32-bit Perl scripts. Can they work with
> 128-bit ZFS?
That question was posted either here or on some other help aliases
recently ...
If you have any non-largefile-aware application that must under all
circumstances be
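The "128-bit" in ZFS refers to its internal block addressing, not to any ABI requirement on applications; what matters for an old 32-bit program is whether it is largefile-aware. A hedged sketch of how to check and how to rebuild; the binary path and file names are made up for illustration:

```shell
# A 32-bit binary that is largefile-aware references the 64-bit offset
# interfaces (open64, lseek64, stat64, ...) instead of the 32-bit ones.
# The binary path here is illustrative only.
/usr/ccs/bin/nm /opt/app/bin/legacytool | grep -E 'open64|lseek64|stat64'

# When recompiling 32-bit C code, enable the large file compilation
# environment so off_t becomes 64 bits:
cc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o legacytool legacytool.c
```

Interpreted scripts (such as Perl) depend on whether the interpreter itself was built largefile-aware, not on the script.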
FYI -
- The "unrecoverable errors = panic" problem is being fixed as part of
PSARC 2007/567.
- We should be able to recover *some* data when some (but not all)
toplevel vdevs are faulted. See 6406289.
- Reading corrupted blocks is a little trickier, but 6186106 is filed to
cover this.
Th
It may make sense to post your host->EMC code levels and your
topology/HBA (type and firmware level) info for the systems you are having
the issues on. EMC setups are well known to have their reliability
linked to code level and topology -- a machine running 16 code against
back-revved Emulex + c
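On Solaris, one hedged way to gather the HBA details being asked for is with fcinfo, which is available on Solaris 10 and OpenSolaris (output fields vary by release):

```shell
# Report model, firmware, and fcode levels for each FC HBA port:
fcinfo hba-port

# Look for Emulex (emlxs) or QLogic (qlc) driver messages that may
# indicate firmware or fabric trouble:
grep -iE 'emlx|qlc' /var/adm/messages
```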
Yes, I think that was the original intent of the project proposal. It
could probably be reworded to decrease emphasis on a single algorithm,
but I read it as a generic exploration of alternative algorithms.
Pluggable algorithms are tricky, because compression is encoded as a
single 8-bit quantity
Hello zfs-discuss,
http://leaf.dragonflybsd.org/mailarchive/kernel/2007-10/msg6.html
http://leaf.dragonflybsd.org/mailarchive/kernel/2007-10/msg8.html
--
Best regards,
Robert Milkowski  mailto:[EMAIL PROTECTED]
http://milek.b
Hello Paul,
If you don't need support, then Sun Cluster 3.2 is free and it works
with ZFS.
What you could do is set up a 3-node cluster with 3 resource groups,
each assigned a different primary node and with failback set to true.
Of course, in that config the storage requirements will be different.
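A hedged sketch of that layout using the Sun Cluster 3.2 clresourcegroup command. The node and group names are made up, the first node in each nodelist is taken as the preferred primary, and the exact property names should be verified against the Sun Cluster 3.2 documentation:

```shell
# Three resource groups, each preferring a different primary node
# (first node in the nodelist) and failing back when it returns.
# Node and group names are illustrative only.
clresourcegroup create -n nodeA,nodeB,nodeC -p Failback=true rg-a
clresourcegroup create -n nodeB,nodeC,nodeA -p Failback=true rg-b
clresourcegroup create -n nodeC,nodeA,nodeB -p Failback=true rg-c
```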
Hello JS,
Sunday, October 14, 2007, 7:01:28 AM, you wrote:
J> I've been running ZFS against EMC Clariion CX-600 and CX-500s in
J> various configurations, mostly exported disk situations, with a
J> number of kernel flatlining situations. Most of these situations
J> include Page83 data errors in /v
> Having my 700 GB one-disk ZFS pool crash on me created ample need for a
> recovery tool.
>
> So I spent the weekend creating a tool that lets you list directories and
> copy files from any pool on a one-disk ZFS filesystem where, for example, the
> Solaris kernel keeps panicking.
>
> Is there an
On Sun, Oct 14, 2007 at 09:37:42PM -0700, Matthew Ahrens wrote:
> Edward Pilatowicz wrote:
> >hey all,
> >so i'm trying to mirror the contents of one zpool to another
> >using zfs send / receive while maintaining all snapshots and clones.
>
> You will enjoy the upcoming "zfs send -R" feature, which
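With recursive send, a whole dataset tree with all of its snapshots goes over as one stream instead of one send per snapshot. A hedged sketch; the pool, snapshot, and destination names are made up for illustration:

```shell
# Recursively snapshot the whole pool, then send everything --
# descendant datasets and all their snapshots -- in a single stream.
# Names are illustrative only.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -d backup
```

receive -d keeps the source dataset hierarchy under the destination pool rather than flattening it into one dataset.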