Joseph L. Casale wrote:
I have my own application that uses large circular buffers and a socket
connection between hosts. The buffers keep data flowing during ZFS
writes and the direct connection cuts out ssh.
Application, as in not script (something you can share)?
Not yet!
--
Ian.
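Since the tool itself isn't shared yet, here is a rough stand-in built from stock tools, using mbuffer (which comes up later in this thread) for the large buffer and the direct TCP connection; hostname, port, and sizes are placeholders, not figures from this thread.
On the receiving host:
# mbuffer -I 9090 -m 1G -s 128k | zfs recv -F backup/data
On the sending host:
# zfs send pool/data@snap1 | mbuffer -O recvhost:9090 -m 1G -s 128k
The 1GB buffer on each end keeps the stream moving while zfs recv stalls on its periodic writes, and the raw TCP connection takes ssh (and its cipher overhead) out of the path, which is essentially what the custom application does.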
>I have my own application that uses large circular buffers and a socket
>connection between hosts. The buffers keep data flowing during ZFS
>writes and the direct connection cuts out ssh.
Application, as in not script (something you can share)?
:)
jlc
Joseph L. Casale wrote:
With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
socket connection rather than ssh for full sends and 7-12MB/sec for
incrementals, depending on the data set.
Ian,
What's the syntax you use for this procedure?
I have my own application that uses large circular buffers and a socket connection between hosts. The buffers keep data flowing during ZFS writes and the direct connection cuts out ssh.
>With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
>socket connection rather than ssh for full sends and 7-12MB/sec for
>incrementals, depending on the data set.
Ian,
What's the syntax you use for this procedure?
Paul Kraus wrote:
There are about 3.3 million files / directories in the 'dataset',
files range in size from 1 KB to 100 KB.
pkr...@nyc-sted1:/IDR-test/ppk> time sudo zfs send IDR-test/data...@1250616026 >/dev/null
real    91m19.024s
user    0m0.022s
sys     11m51.422s
pkr...@nyc-sted1:/IDR-tes
On Aug 18, 2009, at 1:16 PM, Paul Kraus wrote:
Is the speed of a 'zfs send' dependent on file size / number of files?
Not directly. It is dependent on the amount of changes per unit time.
We have a system with some large datasets (3.3 TB and about 35 million files) and conventional backups take a long time.
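Since the cost tracks the amount of changed data rather than the number of files, you can measure an incremental stream with no network in the path at all; this is the same /dev/null trick as above, with a byte count added (snapshot names are placeholders):
# zfs send -i pool/fs@snap1 pool/fs@snap2 | wc -c
Dividing that byte count by the wall-clock time of a real transfer tells you whether the send itself or the transport is the limit.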
Thank you for all your replies, I'm collecting my responses in one
message below:
On Tue, Aug 18, 2009 at 7:43 PM, Nicolas Williams wrote:
> On Tue, Aug 18, 2009 at 04:22:19PM -0400, Paul Kraus wrote:
>> We have a system with some large datasets (3.3 TB and about 35
>> million files) and conventional backups take a long time.
On Tue, Aug 18, 2009 at 7:54 PM, Mattias Pantzare wrote:
> On Tue, Aug 18, 2009 at 22:22, Paul Kraus wrote:
>> Posted from the wrong address the first time, sorry.
>>
>> Is the speed of a 'zfs send' dependent on file size / number of files?
>>
>> We have a system with some large datasets (3.3 TB and about 35 million files) and conventional backups take a long time.
On Tue, Aug 18, 2009 at 22:22, Paul Kraus wrote:
> Posted from the wrong address the first time, sorry.
>
> Is the speed of a 'zfs send' dependent on file size / number of files?
>
> We have a system with some large datasets (3.3 TB and about 35
> million files) and conventional backups take a long time.
On Tue, Aug 18, 2009 at 04:22:19PM -0400, Paul Kraus wrote:
> We have a system with some large datasets (3.3 TB and about 35
> million files) and conventional backups take a long time (using
> Netbackup 6.5 a FULL takes between two and three days, differential
> incrementals, even with very
>Is the speed of a 'zfs send' dependent on file size / number of files?
I am going to say no. I am running a backup rig on *far* inferior iron, doing a send/recv over ssh through GigE, and last night's replication gave the following: "received 40.2GB stream in 3498 seconds (11.8MB/sec)".
Au contraire...
From what I have seen, larger file systems and large numbers of files
seem to slow down zfs send/receive, worsening the problem. So it may be
a good idea to partition your file system, subdividing it into smaller
ones, replicating each one separately.
Dirk
On Tue, 26.05.2009
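Acting on that advice means turning one large filesystem into several child datasets and replicating each one on its own; a minimal sketch, with pool, dataset, and host names as placeholders:
# zfs create pool/data/part1
# zfs create pool/data/part2
# for fs in part1 part2; do
>   zfs snapshot pool/data/$fs@today
>   zfs send pool/data/$fs@today | ssh backuphost zfs recv -F backup/$fs
> done
Smaller streams also fail and restart independently, so one interrupted transfer no longer forces a resend of everything.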
I changed to try zfs send of a UFS on a zvol as well:
received 92.9GB stream in 2354 seconds (40.4MB/sec)
Still fast enough to use. I have yet to get around to trying something
considerably larger in size.
Lund
Jorgen Lundman wrote:
So you recommend I also do a speed test on larger volumes?
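For anyone reproducing the UFS-on-zvol test above, a sketch of the Solaris setup (names and sizes are placeholders); snapshotting the zvol lets zfs send carry the whole UFS image as a block stream:
# zfs create -V 100g zpool1/ufsvol
# newfs /dev/zvol/rdsk/zpool1/ufsvol
# mount /dev/zvol/dsk/zpool1/ufsvol /mnt
(populate /mnt with test data, then:)
# zfs snapshot zpool1/ufsvol@speedtest
# zfs send zpool1/ufsvol@speedtest | nc 172.20.12.232 3001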
So you recommend I also do a speed test on larger volumes? The test data I
had on the b114 server was only 90GB. Previous tests included 500G ufs
on zvol etc. It is just that it will take 4 days to send it to the b114
server to start with ;) (from Sol10 servers).
Lund
Dirk Wriedt wrote:
Jorgen,
Jorgen,
what is the size of the sending zfs?
I thought replication speed depends on the size of the sending fs too, not only the size of the snapshot being sent.
Regards
Dirk
--On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman wrote:
Sorry, yes. It is straight;
# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
On Fri, May 22, 2009 at 04:40:43PM -0600, Eric D. Mudama wrote:
> As another datapoint, the 111a opensolaris preview got me ~29MB/s
> through an SSH tunnel with no tuning on a 40GB dataset.
>
> Sender was a Core2Duo E4500 reading from SSDs and receiver was a Xeon
> E5520 writing to a few mirrored
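When ssh stays in the path, the cipher is often what caps throughput on CPUs of this vintage, so a cheaper cipher is the usual first tweak; an example assuming your ssh still offers arcfour (fast, but cryptographically weak, so only for trusted links):
# zfs send pool/fs@snap1 | ssh -c arcfour recvhost zfs recv -F backup/fs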
On Fri, May 22 at 11:05, Robert Milkowski wrote:
btw: caching data for zfs send and zfs recv on the other side could make it
even faster. you could use something like mbuffer with buffers of 1-2GB
for example.
As another datapoint, the 111a opensolaris preview got me ~29MB/s
through an SSH tunnel with no tuning on a 40GB dataset.
Sorry, yes. It is straight;
# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
real    19m48.199s
# /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Sender is osol-b114.
Receiver is Solaris 10 10/08.
Whe
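One detail implicit in those two command lines: the listening side (nc -l -p 3001 ... | zfs recv) has to be running before the sender's nc connects, and plain nc offers no authentication or encryption, so this arrangement only suits a trusted network between the hosts.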
Brent Jones wrote:
On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman wrote:
To finally close my quest. I tested "zfs send" in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Can you give any details about your data set, what you piped zfs
send/receive through (SS
btw: caching data for zfs send and zfs recv on the other side could make it
even faster. you could use something like mbuffer with buffers of 1-2GB
for example.
On Fri, 22 May 2009, Jorgen Lundman wrote:
To finally close my quest. I tested "zfs send" in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman wrote:
>
> To finally close my quest. I tested "zfs send" in osol-b114 version:
>
> received 82.3GB stream in 1195 seconds (70.5MB/sec)
>
> Yeeaahh!
>
> That makes it completely usable! Just need to change our support contract to
> allow us to run b114 and we're set! :)
To finally close my quest. I tested "zfs send" in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Yeeaahh!
That makes it completely usable! Just need to change our support
contract to allow us to run b114 and we're set! :)
Thanks,
Lund
Jorgen Lundman wrote:
We finally managed to upgrade the production x4500s to Sol 10 10/08 (unrelated to this) but with the hope that it would also make "zfs send" usable.
Jorgen Lundman wrote:
We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make "zfs send"
usable.
Exactly how does "build 105" translate to Solaris 10 10/08? My current speed test has sent 34Gb in 24 hours, which isn't great.
There is no easy/obvious mapping of Solaris Nevada builds to Solaris 10 update releases.
We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make "zfs send"
usable.
Exactly how does "build 105" translate to Solaris 10 10/08? My current
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the
n
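(For scale: 34Gb in 24 hours is roughly 0.4MB/sec if that is gigabytes, about 175 times slower than the 70.5MB/sec reported above.)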
Torrey McMahon wrote:
Matthew Ahrens wrote:
I'm only doing an initial investigation now so I have no test data at
this point. The reason I asked, and I should have tacked this on at the
end of the last email, was a blog entry that stated zfs send was slow
http://www.lethargy.org/~jesus/archiv
Matthew Ahrens wrote:
Torrey McMahon wrote:
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a
large data store where they will be taking snapshots every N minutes
or so, sending the difference of the snapshot and previous snapshot
with zfs send -i to a remote host
Torrey McMahon wrote:
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a large
data store where they will be taking snapshots every N minutes or so,
sending the difference of the snapshot and previous snapshot with zfs
send -i to a remote host, and in case of DR fi
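The mechanics of that scheme, sketched as a small periodic job; pool, dataset, and host names are placeholders, error handling is omitted, and it assumes the remote side was seeded with one full send first:
#!/bin/ksh
# Runs every N minutes: snapshot, send only the delta since the
# previous snapshot, and let the remote side roll forward.
FS=pool/data
REMOTE=drhost
PREV=$(zfs list -H -t snapshot -o name -s creation | grep "^$FS@" | tail -1)
NOW=$FS@$(date +%Y%m%d%H%M)
zfs snapshot $NOW
zfs send -i $PREV $NOW | ssh $REMOTE zfs recv -F backup/data
In a DR event the remote host simply starts serving backup/data from the latest snapshot it has received.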