Ian Mather wrote:
> Fairly new to ZFS. I am looking to replicate data between two thumper boxes.
> Found quite a few articles about using zfs incremental snapshot send/receive.
> Just a cheeky question to see if anyone has anything working in a live
> environment and is happy to share the scripts
These are good articles explaining how to use ZFS for replication:
http://swik.net/MySQL/Planet+MySQL/ZFS+Replication+for+MySQL+data/ckjo2
http://www.markround.com/archives/38-ZFS-Replication.html
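The core of such a setup is just a pair of zfs send/receive commands over ssh, driven by cron. A minimal sketch follows, assuming a dataset tank/data on both thumpers, working root ssh from the primary to the backup host, and that an initial full send/receive has already been done (all names here are illustrative, not taken from the articles above):

#!/bin/sh
# Incremental ZFS replication sketch (illustrative names throughout).
# Assumes tank/data exists on both hosts and the previous replication
# snapshot is still present on both sides.

DATASET=tank/data
REMOTE=backup-thumper

# Most recent existing snapshot of the dataset (sorted oldest to newest).
PREV=`zfs list -H -t snapshot -o name -s creation -r $DATASET | tail -1`
NOW=$DATASET@repl-`date +%Y%m%d-%H%M%S`

# Take a new snapshot, then send only the delta since the previous one.
zfs snapshot $NOW
zfs send -i "$PREV" "$NOW" | ssh $REMOTE zfs receive -F $DATASET

A production version would add locking, error checking, and pruning of old replication snapshots on both sides.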
"Free India Opensource India."
Thanks and regards
On Fri, Jan 16 at 9:29, Gray Carper wrote:
> Using the X25-E for the L2ARC, but having no separate ZIL, sounds like a
> worthwhile test. Is 32GB large enough for a good L2ARC, though?
Without knowing much about ZFS internals, I'd just ask how your
average working data set compares to the s
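For reference, cache (L2ARC) and log (ZIL) devices are added to an existing pool as separate vdev types; a quick sketch of the configuration being discussed, with pool and device names as placeholders only:

# Add a dedicated log (ZIL) device and a cache (L2ARC) device.
# tank, c2t0d0 and c2t1d0 are placeholder names.
zpool add tank log c2t0d0
zpool add tank cache c2t1d0
zpool status tank    # the new devices show up under "logs" and "cache"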
Tomas Ögren wrote:
> On 15 January, 2009 - Jim Klimov sent me these 1,3K bytes:
>
>> Is it possible to create a (degraded) zpool with placeholders specified
>> instead
>> of actual disks (parity or mirrors)? This is possible in linux mdadm
>> ("missing"
>> keyword), so I kinda hoped this can be
On Mon, Jan 12 at 10:00, casper@sun.com wrote:
>>My impression is not that other OS's aren't interested in ZFS, they
>>are, it's that the licensing restrictions limit native support to
>>Solaris, BSD, and OS-X.
>>
>>If you wanted native support in Windows or Linux, it would require a
>>signific
Sorry, open folks, please, chatting on.
This is why my help cannot be provided very often.
Because the Zhou style of fighting is that –
Yes, if we have to step into the battle field, a Zhou will walk in front of
the real troops, and say, kill me, if you dare, and I am so sure with my
life on
Thank you!
On Thu, 15 Jan 2009, Jonny Gerold wrote:
> Hello,
> I was hoping that this would work:
> http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
>
> I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
> can't delete/backup somewhere else)
>
>> r...@fsk-backup:~# zpool create -f
Very nice.
Ok.
If I don't see any post promising some help in solving Jonny's problem in
the next 8 minutes --
I would go to chinatown and get some commitment.
I would have that commitment in 48 hours and a working and tested
blog site in 60 days.
But it will not be open
Hi James,
I have done nothing wrong. It was ok in my religion. Sue me if you care.
He asked for a solution to a ZFS problem.
I was calling for help, Zhou style.
All my C and Z and J folks, are we going to help Jonny or what???
darn!!! Do I have to put down my other work to make a solution
Beloved Jonny,
I am just like you.
There was a day, I was hungry, and went for a job interview for sysadmin.
They asked me - what is a "protocol"?
I could not give a definition, and they said, no, not qualified.
But they did not ask me about CICS and mainframe. Too bad.
baby, even there is a
G'Day Gray,
On Thu, Jan 15, 2009 at 03:36:47PM +0800, Gray Carper wrote:
>
>Hey, all!
>Using iozone (with the sequential read, sequential write, random read,
>and random write categories), on a Sun X4240 system running
>OpenSolaris b104 (NexentaStor 1.1.2, actually), we recently r
Hi Jonny,
So far there are no Sun comments here or at the blog site, so I guess your
approach is fine by the Sun folks.
I also noticed that the blog hit today is only 5.
If I tell my folks to visit the blog often, can they also do Chinese? Most
of them are doing blogging in Chinese, not English tod
Hey, Eric!
Now things get complicated. ;> I was naively hoping to avoid revealing our
exact pool configuration, fearing that it might lead to lots of tangential
discussion, but I can see how it may be useful so that you have the whole
picture. Time for the big reveal, then...
Here's the exact lin
The text is in Spanish, but the article/commands are pretty verbose. Hope
you can find it useful.
My approach creates a new BE, to be able to recover if there is any problem.
Separar el "/var" de un "Boot Enviroment" en ZFS "root"/"boot"
(Separating "/var" from a "Boot Environment" on a ZFS "root"/"boot")
http://www.jc
On Thu, Jan 15, 2009 at 21:51, wrote:
>
>
>>The performance issue of using a drive for multiple unrelated
>>consumers (ZFS & UFS) is that, if both are active at the
>>same time, this will defeat the I/O scheduling smarts
>>implemented in ZFS. Rather than have data streaming to some
>>p
Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
can't delete/backup somewhere else)
> r...@fsk-backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
> /dev/lofi/1
> r.
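The trick in that blog post is to stand a sparse file in for the disk that still holds the data, exported as a lofi block device, and then take it offline so the raidz starts out degraded. A rough sketch of the whole sequence, assuming the 800GB currently lives on c5t2d0 and the other three disks are empty (device names and the backing-file size are only examples; the file must not be larger than the real disks or the later replace will fail):

# 1. Create a sparse backing file and expose it as a block device.
mkfile -n 931g /var/tmp/fake-disk
lofiadm -a /var/tmp/fake-disk        # typically comes back as /dev/lofi/1

# 2. Build the raidz with the lofi device standing in for the data disk.
zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1

# 3. Take the placeholder offline right away so nothing lands on it.
zpool offline ambry /dev/lofi/1

# 4. Copy the 800GB from the old disk into the degraded pool, then swap
#    the placeholder for the now-empty real disk and let it resilver.
zpool replace ambry /dev/lofi/1 c5t2d0
zpool status ambry

After the resilver finishes, the lofi device and its backing file can be torn down with lofiadm -d.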
Charles Wright wrote:
> I've tried putting this in /etc/system and rebooting
> set zfs:zfs_vdev_max_pending = 16
You can change this on the fly, without rebooting.
See the mdb command at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurre
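For convenience, the on-the-fly change from that page is a one-liner with mdb in kernel-write mode (0t marks the value as decimal); this is from memory of the guide's example, so verify against the wiki:

# Read the current value, then set it to 16 (decimal) in the live kernel.
echo zfs_vdev_max_pending/D | mdb -k
echo zfs_vdev_max_pending/W0t16 | mdb -kw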
>The performance issue of using a drive for multiple unrelated
>consumers (ZFS & UFS) is that, if both are active at the
>same time, this will defeat the I/O scheduling smarts
>implemented in ZFS. Rather than have data streaming to some
>physical location of the rust, the competitio
Hi everyone,
Recent ZFS admin guide updates/troubleshooting wiki include the
following updates.
1. Revised root pool recovery steps.
The process has been changed slightly due to a recently uncovered zfs
receive problem. You can create a recursive root pool snapshot as was
previously documented.
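For reference, the recursive root pool snapshot referred to above is a single command (the snapshot name is just an example):

# Snapshot the root pool and all of its descendant datasets at once.
zfs snapshot -r rpool@backup-`date +%Y-%m-%d`
zfs list -t snapshot -r rpool    # confirm the snapshots exist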
I've tried putting this in /etc/system and rebooting
set zfs:zfs_vdev_max_pending = 16
Are we sure that number equates to a scsi command?
Perhaps I should set it to 8 and see what happens.
(I have 256 scsi commands I can queue across 16 drives)
I still got these error messages in the log.
Jan 15
[last 5 minutes on my lunch, just to say thank you and sorry]
Yes, I was wondering how the first one even made it to the list.
None of those emails with large attachments should have been approved by the
mail server policy.
And I feel bad that I tested the server with some bad text and those got
t
On Wed, 14 Jan 2009 22:40:19 -0500, "JZ"
wrote:
>ok, you open folks are really .
>just one more, and I hope someone replies so we can save some open time.
[snip]
JZ, would you please be so kind to refrain from including
any attachments in your postings to our beloved
zfs-discuss@opensolaris
Thanks Tomas, I haven't checked yet, but your workaround seems feasible.
I've posted an RFE and referenced your approach as a workaround.
That's nearly what zpool should do under the hood, and perhaps can be done
temporarily with a wrapper script to detect min(physical storage sizes) ;)
//Jim
-
> "gm" == Gary Mills writes:
gm> Is there any more that I've missed?
1. Filesystem/RAID layer dispatches writes 'a' to iSCSI
initiator. iSCSI initiator accepts them, buffers them, returns
success to RAID layer.
2. iSCSI initiator sends to iSCSI target. iSCSI Target write
zfs-auto-snapshot (SUNWzfs-auto-snapshot) is what I'm using. Only trick
is that on the other end, we have to manage our own retention of the
snapshots we send to our offsite/backup boxes.
zfs-auto-snapshot can handle the sending of snapshots as well.
We're running this in OpenSolaris 2008.11 (s
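The retention side is a small script run from cron on the receiving box; a minimal sketch, with the dataset name and snapshot count as placeholders:

#!/bin/sh
# Keep only the newest $KEEP snapshots of $DATASET on the backup box,
# destroying the rest oldest-first. Names and counts are placeholders.
DATASET=backup/home
KEEP=30

SNAPS=`zfs list -H -t snapshot -o name -s creation -r $DATASET | grep "^$DATASET@"`
TOTAL=`echo "$SNAPS" | wc -l`
EXCESS=`expr $TOTAL - $KEEP`

if [ $EXCESS -gt 0 ]; then
  echo "$SNAPS" | head -$EXCESS | while read snap; do
    zfs destroy "$snap"
  done
fi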
Tim writes:
> On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson wrote:
>
> >
> > Does creating ZFS pools on multiple partitions on the same physical drive
> > still run into the performance and other issues that putting pools in
> > slices
> > does?
> >
>
>
> Is zfs going to own the whol
On 15 January, 2009 - Jim Klimov sent me these 1,3K bytes:
> Is it possible to create a (degraded) zpool with placeholders specified
> instead
> of actual disks (parity or mirrors)? This is possible in linux mdadm
> ("missing"
> keyword), so I kinda hoped this can be done in Solaris, but didn't
Jim Klimov wrote:
> Is it possible to create a (degraded) zpool with placeholders specified
> instead
> of actual disks (parity or mirrors)? This is possible in linux mdadm
> ("missing"
> keyword), so I kinda hoped this can be done in Solaris, but didn't manage to.
>
> Usecase scenario:
>
> I h
For the sake of curiosity, is it safe to have components of two different ZFS
pools on the same drive, with and without HDD write cache turned on?
How will ZFS itself behave, would it turn on the disk cache if the two imported
pools co-own the drive?
An example is a multi-disk system like mine
Is it possible to create a (degraded) zpool with placeholders specified instead
of actual disks (parity or mirrors)? This is possible in linux mdadm ("missing"
keyword), so I kinda hoped this can be done in Solaris, but didn't manage to.
Usecase scenario:
I have a single server (or home worksta
D'oh - I take that back. Upon re-reading, I expect that you weren't
indicting MLC drives generally, just the JMicron-controlled ones. It looks
like we aren't suffering from those, though.
-Gray
On Thu, Jan 15, 2009 at 11:12 PM, Gray Carper wrote:
> Hey there, Will! Thanks for the quick reply an
Hey there, Will! Thanks for the quick reply and the link.
And: Oops! Yes - the SSD models would probably be useful information. ;> The
32GB SSD is an Intel X-25E (SLC). The 80GB SSDs are Intel X-25M (MLC). If
MLC drives can be naughty, perhaps we should try an additional test: keep
the 80GB SSDs o
You might want to look at AVS for realtime replication
http://www.opensolaris.org/os/project/avs/
However, I have had huge performance hits after enabling that. The
replicated volume runs at only about 10% of the speed of a normal one
On Thu, Jan 15, 2009 at 1:28 PM, Ian Mather wrote:
> Fairly new to ZFS. I
On Thu, Jan 15, 2009 at 02:36, Gray Carper wrote:
> In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD and
> the L2ARC on four 80GB SSDs.
An obvious question: what SSDs are these? Where did you get them?
Many, many consumer-level MLC SSDs have controllers by JMicron (also
known
Fairly new to ZFS. I am looking to replicate data between two thumper boxes.
Found quite a few articles about using zfs incremental snapshot send/receive.
Just a cheeky question to see if anyone has anything working in a live
environment and is happy to share the scripts, save me reinventing th
Thank you!
This is now real open storage discussion!
really goodnight now
cheers,
z
- Original Message -
From: "Tomas Ögren"
To: "JZ"
Cc:
Sent: Thursday, January 15, 2009 5:36 AM
Subject: Re: [zfs-discuss] Swap ZFS pool disks to another host hardware
On 15 January, 2009 - JZ sent
On 15 January, 2009 - JZ sent me these 7,9K bytes:
> [OMG, sorry, I cannot resist]
Please do.
> Hi Nikhi, so you were playing?
Please stop sending random crap to this list. Keep it about ZFS, not
Beer/sake/whatever comes to your mind. It's not a random chat channel.
And stop attaching large .w
[OMG, sorry, I cannot resist]
Hi Nikhi, so you were playing?
another Zhou thing is that -
we like playful chic
goodnight
z
- Original Message -
From: Nikhil
To: Sanjeev
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, January 15, 2009 5:20 AM
Subject: Re: [zfs-discuss] S
Ian Collins wrote:
> satya wrote:
> > Any idea if we can use pax command to backup ZFS acls? will -p option of
> > pax utility do the trick?
> >
> >
> pax should, according to
> http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view
>
> tar and cpio do.
>
> It should be simple enough to tes
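Testing it takes only a minute; a sketch of the round trip I'd do (paths, the ACL entry, and the webservd user are arbitrary examples):

# Create a test file with an extra ACL entry, archive it with pax,
# extract it elsewhere with -p e, and compare the ls -V output.
cd /tank/test
touch aclfile
chmod A+user:webservd:read_data:allow aclfile
ls -V aclfile > /tmp/acl.before

pax -w -f /tmp/acl.pax aclfile
mkdir restored && cd restored
pax -r -p e -f /tmp/acl.pax
ls -V aclfile > /tmp/acl.after

diff /tmp/acl.before /tmp/acl.after   # no output means the ACL survived

The same round trip with tar and cpio would answer the question for those tools too.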
Actually yes. I figured it out while reading other archive posts in
zfs-discuss :-)
Thanks Sanjeev :-)
On Thu, Jan 15, 2009 at 3:29 PM, Sanjeev wrote:
> Nikhil,
>
> Comments inline...
>
> On Thu, Jan 15, 2009 at 02:09:16PM +0530, Nikhil wrote:
> > Hi,
> >
> > I am running a Solaris 10 box on a
satya wrote:
> Any update on star ability to backup ZFS ACLs? Any idea if we can use pax
> command to backup ZFS acls? will -p option of pax utility do the trick?
I am looking for people who like to discuss the archive format for ZFS ACLs and
for extended attribute files for star.
Please choo
Nikhil,
Comments inline...
On Thu, Jan 15, 2009 at 02:09:16PM +0530, Nikhil wrote:
> Hi,
>
> I am running a Solaris 10 box on a v20z with 11/06 release. It has got ZFS
> > pool configured on an S1 storage box with 3 x 146GB disks (it has got a lot of
> data)
> I am planning to upgrade the machine to the
beer and http://en.wikipedia.org/wiki/Code_of_Hammurabi
enlightening?
fulfilling?
best,
z
- Original Message -
From: JZ
To: Nikhil ; zfs-discuss@opensolaris.org
Cc: yunlai...@hotmail.com ; guo_r...@hotmail.com ; "??? ??" ; Lu Bin ;
liaohelen ; Liao, Jane ; ??? ; gmsi...@sina
[still only me speaking? ok, more spam for whoever out there confused...]
http://en.wikipedia.org/wiki/Beer
if you are lazy to read through Sake...
best,
z
- Original Message -
From: JZ
To: Nikhil ; zfs-discuss@opensolaris.org
Sent: Thursday, January 15, 2009 4:18 AM
Sub
>ZFS does turn it off if it doesn't have the whole disk. That's where the
>performance issues come from.
But it doesn't "touch it" so ZFS continues to work if you enable
write caching. And I think we default to "write-cache" enabled for
ATA/IDE disks. (The reason is that they're shipped with
Another Zhou approach is that -
when there is a fight that needs to be fought, a Zhou will fight that fight,
and walking into the battlefield in front of the troops
- kind of like the western way of doing fighting in the earlier days
best,
z
[Sun folks are not working?]
Hi Nikhi,
doing IT so late at this hour?
I had an email that also got blocked by the mail server.
that one talked about metadata too, which might be satisfying, with some warm
sake...
http://en.wikipedia.org/wiki/Sake
The first alcoholic drink in Japan may have been
Hi,
I am running a Solaris 10 box on a v20z with 11/06 release. It has got ZFS
pool configured on an S1 storage box with 3 x 146GB disks (it has got a lot of
data)
I am planning to upgrade the machine to the new hardware with the new
Solaris 10 release of 8/07 and to the new hardware of X2200M2.
I am w
Open Folks,
Did I give you the impression that only Sun folks can speak on the list
discussion?
Or did I give you the impression that you have to make the digital name as
such and such to do this?
No, that was not what I meant, if that would kill the open discussion.
Someone, earlier, asked