On Wed, Feb 2, 2011 at 8:38 PM, Carson Gaspar wrote:
> Works For Me (TM).
>
> c7t0d0 is hanging off an LSI SAS3081E-R (SAS1068E chip) rev B3 MPT rev 105
> Firmware rev 011d (1.29.00.00) (IT FW)
>
> This is a SATA disk - I don't have any SAS disks behind a LSI1068E to test.
When I try to do a
On Wed, Feb 9, 2011 at 12:02 AM, Richard Elling wrote:
> The data below does not show heavy CPU usage. Do you have data that
> does show heavy CPU usage? mpstat would be a good start.
Here is mpstat output during a network copy; I think one of the CPUs
disappeared due to an L2 cache error.
movax
On 2/16/2011 8:08 AM, Richard Elling wrote:
On Feb 16, 2011, at 7:38 AM, white...@gmail.com wrote:
Hi, I have a very limited amount of bandwidth between main office and a colocated rack of
servers in a managed datacenter. My hope is to be able to zfs send/recv small incremental
changes on a n
All of these responses have been very helpful and are much appreciated.
Thank you all.
Mark
On Feb 16, 2011 2:54pm, Erik ABLESON wrote:
Check out :
http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html
It also works to an external hard disk with localho
On 02/16/11 07:38, white...@gmail.com wrote:
Is it possible to use a portable drive to copy the
initial zfs filesystem(s) to the remote location and then make the
subsequent incrementals over the network?
Yes.
> If so, what would I need to do
> to make sure it is an exact copy? Thank you,
Ro
On 16.02.11 16:38, white...@gmail.com wrote:
Hi, I have a very limited amount of bandwidth between main office and
a colocated rack of servers in a managed datacenter. My hope is to be
able to zfs send/recv small incremental changes on a nightly basis as
a secondary offsite backup strategy. M
Sergey,
I think you are saying that you had 4 separate ZFS storage pools on 4
separate disks and one ZFS pool/fs didn't import successfully.
If you created a new storage pool on the disk for the pool that
failed to import, then the data on that disk is no longer available
because it was overw
From what I have read, this is not the best way to do it.
Your best bet is to create a ZFS pool using the external device (or even
better, devices) then zfs send | zfs receive. You can then do the same at
your remote location.
If you just send to a file, you may find it was a wasted trip (or pos
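The seed-then-incremental workflow described above can be sketched as follows. Pool and dataset names (tank, portable, backup, remotehost) are hypothetical, and this is only an outline of the usual pattern, not a tested procedure:

```shell
# 1. Seed: snapshot locally and receive onto a pool created on the external drive.
zfs snapshot -r tank/data@seed
zfs send -R tank/data@seed | zfs receive -u portable/data
zpool export portable            # now safe to unplug and ship the drive

# 2. At the remote site: import the drive and copy into the remote pool.
zpool import portable
zfs send -R portable/data@seed | zfs receive -u backup/data

# 3. Nightly incrementals over the slow link, against the @seed baseline
#    the remote copy already has.
zfs snapshot -r tank/data@nightly1
zfs send -R -i @seed tank/data@nightly1 | ssh remotehost zfs receive -u backup/data
```

Receiving into a real pool on the drive (rather than `zfs send > file`) means the seed copy is checksummed and verifiable before the drive ships.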
On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:
My question is about the initial "seed" of the data. Is it possible
to use a portable drive to copy the initial zfs filesystem(s) to the
remote location and then make the subsequent incrementals over the
network? If so, what would I
On Feb 16, 2011, at 7:38 AM, white...@gmail.com wrote:
> Hi, I have a very limited amount of bandwidth between main office and a
> colocated rack of servers in a managed datacenter. My hope is to be able to
> zfs send/recv small incremental changes on a nightly basis as a secondary
> offsite b
Hello everybody! Please help me!
I have a Solaris 10 x86_64 server with five 40 GB HDDs.
One HDD, holding /root and /usr (and other partitions, UFS filesystem), crashed.
It is dead.
The other four HDDs (ZFS) each held their own pool (zpool create disk1
c0t1d0, etc.).
I installed Solaris 10 x86_64
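Assuming the four data pools themselves were not damaged, a fresh install can usually just re-import them. The pool name follows the poster's example; this is a sketch, not a recovery guarantee:

```shell
# After reinstalling the OS, scan attached disks for importable pools.
zpool import                 # lists pools found on the attached disks
zpool import disk1           # import each listed pool by name
zpool import -f disk1        # -f only if the pool still looks "in use"
                             # by the old, destroyed installation
zpool status disk1           # verify the pool is healthy
```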
Hello all,
I am trying to understand how the allocation of space_map happens.
What I am trying to figure out is how the recursive part is handled. From what
I understand a new allocation (say appending to a file) will cause the space
map to change by appending more allocs that will require extra
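As a rough mental model only, not the actual ZFS implementation, a space map can be pictured as an append-only log of alloc/free records that gets condensed into the current allocation picture when loaded; the class and method names below are invented for illustration:

```python
# Toy model of a space-map-style append-only log. Real ZFS space maps are
# per-metaslab on-disk structures; this only illustrates the append/condense idea.
class SpaceMapLog:
    def __init__(self):
        self.log = []  # append-only list of (op, offset, length) records

    def alloc(self, offset, length):
        # Appending to a file appends ALLOC records instead of rewriting the map.
        self.log.append(("A", offset, length))

    def free(self, offset, length):
        self.log.append(("F", offset, length))

    def condense(self):
        """Replay the log into the currently allocated offsets (unit granularity)."""
        allocated = set()
        for op, off, length in self.log:
            units = range(off, off + length)
            if op == "A":
                allocated.update(units)
            else:
                allocated.difference_update(units)
        return allocated
```

The recursion the poster asks about arises because appending to the log can itself allocate space; the toy model sidesteps that, which is exactly the part ZFS has to handle specially during sync.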
Hi, I have a very limited amount of bandwidth between main office and a
colocated rack of servers in a managed datacenter. My hope is to be able to
zfs send/recv small incremental changes on a nightly basis as a secondary
offsite backup strategy. My question is about the initial "seed" of the
On 16 February, 2011 - Richard Elling sent me these 1.3K bytes:
> On Feb 16, 2011, at 6:05 AM, Eff Norwood wrote:
>
> > I'm preparing to replicate about 200TB of data between two data centers
> > using zfs send. We have ten 10TB zpools that are further broken down into
> > zvols of various size
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eff Norwood
>
> Are there any gotchas that I should be aware of? Also, at what level should I
> be taking the snapshot to do the zfs send? At the primary pool level or at the
> zvol level? Sinc
On Feb 16, 2011, at 6:05 AM, Eff Norwood wrote:
> I'm preparing to replicate about 200TB of data between two data centers using
> zfs send. We have ten 10TB zpools that are further broken down into zvols of
> various sizes in each data center. One DC is primary and the other will be
> the repli
On Feb 15, 2011, at 11:26 PM, Khushil Dep wrote:
> Could you not also pin processes to cores? Preventing switching should help
> too. I've done this for performance reasons before on a 24 core Linux box
>
Yes. More importantly, you could send interrupts to a processor set. There are
many ways to
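On Solaris, the processor-set and interrupt-fencing approach looks roughly like this; the CPU IDs and PID are examples:

```shell
# Create a processor set holding CPUs 1-3; only processes bound to the set
# are scheduled on those CPUs.
psrset -c 1 2 3              # prints the new pset id, e.g. 1
psrset -b 1 $$               # bind this shell (and its children) to pset 1
pbind -b 2 12345             # alternatively, bind a single PID to one CPU
psradm -i 1 2 3              # stop directing device interrupts at those CPUs
psrinfo                      # verify CPU and interrupt state
```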
Hi Fajar,
Thanks for your quick response, just playing it around for a while, it is very
useful to me.
Have a nice day!
-Jeff
On 2011-2-16, at 10:16 PM, Fajar A. Nugraha wrote:
> On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu wrote:
>> Hello All,
>>
>> I'd like to know if there is a utility like `Filef
On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu wrote:
> Hello All,
>
> I'd like to know if there is a utility like `Filefrag' shipped with
> e2fsprogs on Linux, which is used to fetch the extent mapping info of a
> file (especially a sparse file) located on ZFS?
Something like zdb - maybe?
http
I'm preparing to replicate about 200TB of data between two data centers using
zfs send. We have ten 10TB zpools that are further broken down into zvols of
various sizes in each data center. One DC is primary and the other will be the
replication target and there is plenty of bandwidth between th
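For a layout like this, the usual pattern is to snapshot recursively at the pool level so all zvols are captured in one consistent pass, then send a replication stream. Names (tank, tankdr, dr-host) are hypothetical:

```shell
# One recursive snapshot covers every dataset and zvol in the pool.
zfs snapshot -r tank@rep1
# -R sends the whole tree with properties; -u keeps targets unmounted,
# -F rolls the target back to match, -d maps dataset names under tankdr.
zfs send -R tank@rep1 | ssh dr-host zfs receive -Fdu tankdr

# Subsequent passes send only the changes since the previous snapshot.
zfs snapshot -r tank@rep2
zfs send -R -i @rep1 tank@rep2 | ssh dr-host zfs receive -Fdu tankdr
```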
Hello All,
I'd like to know if there is a utility like `Filefrag' shipped with e2fsprogs
on Linux, which is used to fetch the extent mapping info of a file (especially
a sparse file) located on ZFS?
I am working on efficient sparse file detection and backup through
lseek(SEEK_DATA/SEEK_HOLE)
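The lseek(SEEK_DATA/SEEK_HOLE) approach mentioned above can be sketched in Python. `os.SEEK_DATA`/`os.SEEK_HOLE` are exposed on Linux and Solaris; on filesystems without hole support the whole file simply reports as one data run:

```python
import errno
import os

def data_runs(path):
    """Return [(offset, length), ...] data extents found via SEEK_DATA/SEEK_HOLE."""
    runs = []
    with open(path, "rb") as f:
        fd = f.fileno()
        size = os.fstat(fd).st_size
        offset = 0
        while offset < size:
            try:
                data = os.lseek(fd, offset, os.SEEK_DATA)
            except OSError as e:
                if e.errno == errno.ENXIO:   # no data past offset: trailing hole
                    break
                raise
            hole = os.lseek(fd, data, os.SEEK_HOLE)  # end of this data run
            runs.append((data, hole - data))
            offset = hole
    return runs
```

For example, writing 4 KiB, seeking to the 1 MiB mark, and writing 4 KiB more yields two data runs on a hole-aware filesystem, so a backup tool can skip the gap entirely.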