> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mickaël CANÉVET
>
> Unless I misunderstood something, zfs send of a volume that has
> compression activated, uncompresses it. So if I do a zfs send|zfs receive
> from a compressed volume to a com
On Tue, January 24, 2012 13:37, Jim Klimov wrote:
> One more rationale - compatibility, including future-proof
> somewhat (the zfs-send format explicitly does not guarantee
> that it won't change incompatibly). I mean transfer of data
> between systems that do not implement the same set of
> comp
2012-01-24 19:52, Jim Klimov wrote:
2012-01-24 13:05, Mickaël CANÉVET wrote:
Hi,
Unless I misunderstood something, zfs send of a volume that has
compression activated, uncompresses it. So if I do a zfs send|zfs receive
from a compressed volume to a compressed volume, my data are
uncompressed and
On Jan 24, 2012, at 7:52 AM, Jim Klimov wrote:
> 2012-01-24 13:05, Mickaël CANÉVET wrote:
>> Hi,
>>
>> Unless I misunderstood something, zfs send of a volume that has
>> compression activated, uncompresses it. So if I do a zfs send|zfs receive
>> from a compressed volume to a compressed volume, my d
2012-01-24 13:05, Mickaël CANÉVET wrote:
Hi,
Unless I misunderstood something, zfs send of a volume that has
compression activated, uncompresses it. So if I do a zfs send|zfs receive
from a compressed volume to a compressed volume, my data are
uncompressed and compressed again. Right?
Is there a
Hi,
Unless I misunderstood something, zfs send of a volume that has
compression activated, uncompresses it. So if I do a zfs send|zfs receive
from a compressed volume to a compressed volume, my data are
uncompressed and compressed again. Right?
Is there a more effective way to do it (without decom
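A common workaround on these builds, since the stream itself carries uncompressed data, is to compress it in transit instead. A minimal sketch, with placeholder host and dataset names:
# Compress the stream on the wire even though zfs send emits
# uncompressed data; gzip can be swapped for lzop or similar.
zfs send tank/data@snap | gzip -c | \
    ssh remotehost "gunzip -c | zfs recv -F backup/data"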
On Mon, Feb 28, 2011 at 10:38 PM, Moazam Raja wrote:
> We've noticed that on systems with just a handful of filesystems, ZFS
> send (recursive) is quite quick, but on our 1800+ fs box, it's
> horribly slow.
When doing an incremental send, the system has to identify what blocks
have changed, which
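For reference, a recursive incremental send of the kind being timed here looks roughly like this; dataset and snapshot names are placeholders:
# Send only the blocks that changed between the two recursive snapshots.
zfs snapshot -r tank@tuesday
zfs send -R -i tank@monday tank@tuesday | \
    ssh remotehost "zfs recv -F -d backuppool"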
Hi all, I have a test system with a large amount of filesystems which
we take snapshots of and do send/recvs with.
On our test machine, we have 1800+ filesystems and about 5,000
snapshots. The system has 48GB of RAM, and 8 cores (x86). The
filesystem is comprised of 2 regular 1TB drives in a mirror with a
On 2/16/2011 8:08 AM, Richard Elling wrote:
On Feb 16, 2011, at 7:38 AM, white...@gmail.com wrote:
Hi, I have a very limited amount of bandwidth between main office and a colocated rack of
servers in a managed datacenter. My hope is to be able to zfs send/recv small incremental
changes on a n
All of these responses have been very helpful and are much appreciated.
Thank you all.
Mark
On Feb 16, 2011 2:54pm, Erik ABLESON wrote:
Check out :
http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html
It also works to an external hard disk with localho
On 02/16/11 07:38, white...@gmail.com wrote:
Is it possible to use a portable drive to copy the
initial zfs filesystem(s) to the remote location and then make the
subsequent incrementals over the network?
Yes.
> If so, what would I need to do
to make sure it is an exact copy? Thank you,
Ro
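One way the seed-then-incremental workflow is commonly done, sketched with placeholder pool, path, and host names:
# On the main office server: write the initial full stream to a file
# on the portable drive.
zfs snapshot -r tank@seed
zfs send -R tank@seed > /portable/tank-seed.zfs

# At the colo, after attaching the drive:
zfs recv -F -d backuppool < /portable/tank-seed.zfs

# Nightly incrementals over the limited link afterwards:
zfs snapshot -r tank@nightly1
zfs send -R -i tank@seed tank@nightly1 | \
    ssh colohost "zfs recv -F -d backuppool"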
On 16.02.11 16:38, white...@gmail.com wrote:
Hi, I have a very limited amount of bandwidth between main office and
a colocated rack of servers in a managed datacenter. My hope is to be
able to zfs send/recv small incremental changes on a nightly basis as
a secondary offsite backup strategy. M
> Sent: 16 February 2011 16:46
> To: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] ZFS send/recv initial data load
>
> On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:
>
> > My question is about the initial "seed" of the data. Is it possible
>
On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:
My question is about the initial "seed" of the data. Is it possible
to use a portable drive to copy the initial zfs filesystem(s) to the
remote location and then make the subsequent incrementals over the
network? If so, what would I
On Feb 16, 2011, at 7:38 AM, white...@gmail.com wrote:
> Hi, I have a very limited amount of bandwidth between main office and a
> colocated rack of servers in a managed datacenter. My hope is to be able to
> zfs send/recv small incremental changes on a nightly basis as a secondary
> offsite b
Hi, I have a very limited amount of bandwidth between main office and a
colocated rack of servers in a managed datacenter. My hope is to be able to
zfs send/recv small incremental changes on a nightly basis as a secondary
offsite backup strategy. My question is about the initial "seed" of the
thank you.
On 04/10/2010 19:55, Matthew Ahrens wrote:
That's correct.
This behavior is because the send|recv operates on the DMU objects,
whereas the recordsize property is interpreted by the ZPL. The ZPL
checks the recordsize property when a file grows. But the recv
doesn't grow any files,
That's correct.
This behavior is because the send|recv operates on the DMU objects,
whereas the recordsize property is interpreted by the ZPL. The ZPL
checks the recordsize property when a file grows. But the recv
doesn't grow any files, it just dumps data into the underlying
objects.
--matt
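In other words, recordsize on the receiving dataset only affects files written through the ZPL afterwards. A sketch of how the property could be applied after a receive, with placeholder dataset names:
# Data landed by zfs recv keeps the sender's block layout; the property
# only applies to files written afterwards, so rewrite them into a
# dataset that already has the desired recordsize.
zfs create -o recordsize=32K tank/rewritten
cp -rp /tank/received/. /tank/rewritten/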
O
Hi,
I thought that if I use zfs send snap | zfs recv and on the receiving
side the recordsize property is set to a different value, it will be
honored. But it doesn't seem to be the case, at least on snv_130.
$ zfs get recordsize test/m1
NAME     PROPERTY     VALUE    SOURCE
test/m1 recordsi
On 07/14/10 03:55 AM, David Dyer-Bennet wrote:
On Fri, July 9, 2010 16:49, BJ Quinn wrote:
I have a couple of systems running 2009.06 that hang on relatively large
zfs send/recv jobs. With the -v option, I see the snapshots coming
across, and at some point the process just pauses, IO and CP
I was going with the spring release myself, and finally got tired of waiting.
Got to build some new servers.
I don't believe you've missed anything. As I'm sure you know, it was
originally officially 2010.02, then it was officially 2010.03, then it was
rumored to be .04, sort of leaked as .0
On Fri, July 9, 2010 18:42, Giovanni Tirloni wrote:
> On Fri, Jul 9, 2010 at 6:49 PM, BJ Quinn wrote:
>> I have a couple of systems running 2009.06 that hang on relatively large
>> zfs send/recv jobs. With the -v option, I see the snapshots coming
>> across, and at some point the process just pa
On Fri, July 9, 2010 16:49, BJ Quinn wrote:
> I have a couple of systems running 2009.06 that hang on relatively large
> zfs send/recv jobs. With the -v option, I see the snapshots coming
> across, and at some point the process just pauses, IO and CPU usage go to
> zero, and it takes a hard reboo
Actually my current servers are 2008.05, and I noticed the problems I was
having with 2009.06 BEFORE I put those up as the new servers, so my pools are
not too new to revert back to 2008.11, I'd actually be upgrading from 2008.05.
I do not have paid support, but it's just not going to go over we
On 07/13/10 06:48 AM, BJ Quinn wrote:
Yeah, it's just that I don't think I'll be allowed to put up a dev version, but
I would probably get away with putting up 2008.11 if it doesn't have the same
problems with zfs send/recv. Does anyone know?
That would be a silly thing to do. Your pool
Yeah, it's just that I don't think I'll be allowed to put up a dev version, but
I would probably get away with putting up 2008.11 if it doesn't have the same
problems with zfs send/recv. Does anyone know?
On Mon, Jul 12, 2010 at 10:04 AM, BJ Quinn wrote:
> I'm actually only running one at a time. It is recursive / incremental (and
> hundreds of GB), but it's only one at a time. Were there still problems in
> 2009.06 in that scenario?
>
> Does 2008.11 have these problems? 2008.05 didn't, and I'm
I'm actually only running one at a time. It is recursive / incremental (and
hundreds of GB), but it's only one at a time. Were there still problems in
2009.06 in that scenario?
Does 2008.11 have these problems? 2008.05 didn't, and I'm considering moving
back to that rather than using a develo
On Fri, Jul 9, 2010 at 6:49 PM, BJ Quinn wrote:
> I have a couple of systems running 2009.06 that hang on relatively large zfs
> send/recv jobs. With the -v option, I see the snapshots coming across, and
> at some point the process just pauses, IO and CPU usage go to zero, and it
> takes a har
On 07/10/10 09:49 AM, BJ Quinn wrote:
I have a couple of systems running 2009.06 that hang on relatively large zfs
send/recv jobs. With the -v option, I see the snapshots coming across, and at
some point the process just pauses, IO and CPU usage go to zero, and it takes a
hard reboot to get b
I have a couple of systems running 2009.06 that hang on relatively large zfs
send/recv jobs. With the -v option, I see the snapshots coming across, and at
some point the process just pauses, IO and CPU usage go to zero, and it takes a
hard reboot to get back to normal. The same script running
# in localhost
# zfs list | grep data
localpool/data 447G 82.4G 392G /data
localpool/data@now 54.4G - 419G -
# zfs get compressratio localpool/data@now
NAME PROPERTY VALUE SOURCE
localpool/data@now compressratio 1.00x -
On Fri, May 28, 2010 at 10:05 AM, Gregory J. Benscoter
wrote:
> I’m primarily concerned with the possibility of a bit flop. If this
> occurs will the stream be lost? Or will the file that the bit flop occurred
> in be the only degraded file? Lastly how does the reliability of this plan
> compa
On May 28, 2010, at 10:35 AM, Bob Friesenhahn wrote:
> On Fri, 28 May 2010, Gregory J. Benscoter wrote:
>> I’m primarily concerned with the possibility of a bit flop. If this
>> occurs will the stream be lost? Or will the file that the bit flop occurred
>> in be the only degraded file? Lastly
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gregory J. Benscoter
>
> After looking through the archives I haven't been able to assess the
> reliability of a backup procedure which employs zfs send and recv.
If there's data corruption in
On May 28, 2010, at 4:28 PM, Juergen Nickelsen wrote:
> Bob Friesenhahn writes:
>> On Fri, 28 May 2010, Gregory J. Benscoter wrote:
>>>
>>> I’m primarily concerned with the possibility of a bit flop. If
>>> this occurs will the stream be lost? Or will the file that the bit
>>> flop occurred
Bob Friesenhahn writes:
> On Fri, 28 May 2010, Gregory J. Benscoter wrote:
>>
>> I’m primarily concerned with the possibility of a bit flop. If
>> this occurs will the stream be lost? Or will the file that the bit
>> flop occurred in be the only degraded file? Lastly how does the
>> relia
On Fri, 28 May 2010, Gregory J. Benscoter wrote:
I’m primarily concerned with the possibility of a bit flop. If
this occurs will the stream be lost? Or will the file that the bit
flop occurred in be the only degraded file? Lastly how does the
reliability of this plan compare to more tradi
After looking through the archives I haven't been able to assess the
reliability of a backup procedure which employs zfs send and recv. Currently
I'm attempting to create a script that will allow me to write a zfs stream to a
tape via tar like below.
# zfs send -R p...@something
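Since a single flipped bit can invalidate a stored stream, it may be worth verifying a captured stream before trusting the tape copy. A sketch with placeholder names; zstreamdump is only present on builds that ship it:
# Capture the stream to a file and check that it still parses.
zfs send -R pool@backup > /backup/pool-backup.zfs
zfs recv -n -v restorepool < /backup/pool-backup.zfs   # dry run, writes nothing
# If zstreamdump is available, it also walks the stream's own checksums:
zstreamdump -v < /backup/pool-backup.zfs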
On Sat, Feb 06, 2010 at 09:22:57AM -0800, Richard Elling wrote:
> I'm interested in anecdotal evidence which suggests there is a
> problem as it is currently designed.
I like to look at it differently: I'm not sure if there is a
problem. I'd like to have a simple way to discover a problem, using
> > Well, ok, and in my limited knowhow... zfs set checksum=sha256 only
> > covers user scribbled data [POSIX file metadata, file contents, directory
> > structure, ZVOL blocks] and not necessarily any zfs filesystem internals.
>
> metadata is fletcher4 except for the uberblocks which are self-
On Feb 5, 2010, at 10:50 PM, grarpamp wrote:
>>> Perhaps I meant to say that the box itself [cpu/ram/bus/nic/io, except disk]
>>> is assumed to handle data with integrity. So say netcat is used as
>>> transport,
>>> zfs is using sha256 on disk, but only fletcher4 over the wire with
>>> send/recv
>> Perhaps I meant to say that the box itself [cpu/ram/bus/nic/io, except disk]
>> is assumed to handle data with integrity. So say netcat is used as transport,
>> zfs is using sha256 on disk, but only fletcher4 over the wire with send/recv,
>> and your wire takes some undetected/uncorrected hits,
On Feb 5, 2010, at 8:09 PM, grarpamp wrote:
>>> Hmm, is that configurable? Say to match the checksums being
>>> used on the filesystem itself... ie: sha256? It would seem odd to
>>> send with less bits than what is used on disk.
>
>>> Was thinking that plaintext ethernet/wan and even some of the
>> Hmm, is that configurable? Say to match the checksums being
>> used on the filesystem itself... ie: sha256? It would seem odd to
>> send with less bits than what is used on disk.
>> Was thinking that plaintext ethernet/wan and even some of the 'weaker'
>> ssl algorithms
> Do you expect the sam
On Feb 5, 2010, at 7:20 PM, grarpamp wrote:
>> No. Checksums are made on the records, and there could be a different
>> record size for the sending and receiving file systems.
>
> Oh. So there's a zfs read to ram somewhere, which checks the sums on disk.
> And then entirely new stream checksums ar
> No. Checksums are made on the records, and there could be a different
> record size for the sending and receiving file systems.
Oh. So there's a zfs read to ram somewhere, which checks the sums on disk.
And then entirely new stream checksums are made while sending it all off
to the pipe.
I se
On Feb 5, 2010, at 3:11 AM, grarpamp wrote:
> Are the sha256/fletcher[x]/etc checksums sent to the receiver along
> with the other data/metadata?
No. Checksums are made on the records, and there could be a different
record size for the sending and receiving file systems. The stream itself
is check
Are the sha256/fletcher[x]/etc checksums sent to the receiver along
with the other data/metadata? And checked upon receipt of course.
Do they chain all the way back to the uberblock or to some calculated
transfer specific checksum value?
The idea is to carry through the integrity checks wherever po
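Where the transport is not trusted, a simple belt-and-braces check is to take a strong digest of the stream itself at both ends. A sketch with placeholder names, using Solaris digest(1) (sha256sum elsewhere):
# Stage the stream as a file so the same bytes can be hashed on both sides.
zfs send tank/fs@snap > /var/tmp/fs.zfs
digest -a sha256 /var/tmp/fs.zfs
scp /var/tmp/fs.zfs remotehost:/var/tmp/fs.zfs
ssh remotehost "digest -a sha256 /var/tmp/fs.zfs && zfs recv -F backup/fs < /var/tmp/fs.zfs"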
On Sat, 12 Dec 2009, Brent Jones wrote:
There is a little bit of disk activity, maybe a MB/sec on average, and
about 30 iops.
So it seems the hosts are exchanging a lot of data about the snapshot,
but not actually replicating any data for a very long time.
Note that 'zfs send' is a one-way stre
On Sat, Dec 12, 2009 at 8:14 PM, Brent Jones wrote:
> On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones wrote:
>> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
>> wrote:
>>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>>
I've noticed some extreme performance penalties simply by using snv_128
>>
On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones wrote:
> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
> wrote:
>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>
>>> I've noticed some extreme performance penalties simply by using snv_128
>>
>> Does the 'zpool scrub' rate seem similar to before? Do
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
wrote:
> On Sat, 12 Dec 2009, Brent Jones wrote:
>
>> I've noticed some extreme performance penalties simply by using snv_128
>
> Does the 'zpool scrub' rate seem similar to before? Do you notice any read
> performance problems? What happens if yo
On Sat, 12 Dec 2009, Brent Jones wrote:
I've noticed some extreme performance penalties simply by using snv_128
Does the 'zpool scrub' rate seem similar to before? Do you notice any
read performance problems? What happens if you send to /dev/null
rather than via ssh?
Bob
--
Bob Friesenha
I've noticed some extreme performance penalties simply by using snv_128
I take snapshots, and send them over SSH to another server over
Gigabit ethernet.
Before, I would get 20-30MBps, prior to snv_128 (127, and nearly all
previous builds).
However, simply image-updating to snv_128 has caused a m
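Bob's suggestion of taking ssh out of the picture is easy to test. A sketch, with placeholder names, of isolating the send from the transport:
# Measure the raw send rate with no network or ssh involved.
time zfs send tank/fs@snap > /dev/null

# If the send itself is fast, try a lighter transport than ssh,
# e.g. mbuffer if it is installed on both ends:
#   receiver:  mbuffer -I 9090 | zfs recv -F backup/fs
#   sender:    zfs send tank/fs@snap | mbuffer -O receiver:9090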
On Nov 20, 2009, at 1:47 AM, Mart van Santen wrote:
Richard Elling wrote:
On Nov 19, 2009, at 7:39 AM, Mart van Santen wrote:
Hi,
We are using multiple opensolaris 06/09 and solaris 10 servers.
Currently we are 'dumping' (incremental)backups to a backup
server. I wonder if anybody knows wh
Richard Elling wrote:
On Nov 19, 2009, at 7:39 AM, Mart van Santen wrote:
Hi,
We are using multiple opensolaris 06/09 and solaris 10 servers.
Currently we are 'dumping' (incremental)backups to a backup server. I
wonder if anybody knows what happens when I send/recv a zfs volume
from version
On Nov 19, 2009, at 7:39 AM, Mart van Santen wrote:
Hi,
We are using multiple opensolaris 06/09 and solaris 10 servers.
Currently we are 'dumping' (incremental)backups to a backup server.
I wonder if anybody knows what happens when I send/recv a zfs volume
from version 15 to a (backup) sys
Hi,
We are using multiple opensolaris 06/09 and solaris 10 servers.
Currently we are 'dumping' (incremental)backups to a backup server. I
wonder if anybody knows what happens when I send/recv a zfs volume from
version 15 to a (backup) system with version 14. I've the feeling it's
not very wi
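The version mismatch is at least easy to check up front; a sketch with placeholder pool names (streams from a newer filesystem version are generally not receivable on an older one):
# Compare pool and filesystem versions on both ends before sending.
zpool get version sourcepool
zfs get version sourcepool/data
ssh backuphost "zpool get version backuppool; zfs get version backuppool"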
Joseph L. Casale wrote:
I apologize for replying in the middle of this thread, but I never
saw the initial snapshot syntax of mypool2, which needs to be
recursive (zfs snapshot -r mypo...@snap) to snapshot all the
datasets in mypool2. Then, use zfs send -R to pick up and
restore all the dataset p
>I apologize for replying in the middle of this thread, but I never
>saw the initial snapshot syntax of mypool2, which needs to be
>recursive (zfs snapshot -r mypo...@snap) to snapshot all the
>datasets in mypool2. Then, use zfs send -R to pick up and
>restore all the dataset properties.
>
>What wa
original snapshot syntax?
Cindy
- Original Message -
From: Ian Collins
Date: Tuesday, July 28, 2009 5:53 pm
Subject: Re: [zfs-discuss] zfs send/recv syntax
To: "zfs-discuss@opensolaris.org" , "Joseph L.
Casale"
> On Wed 29/07/09 10:49 , "Joseph L. Casal
On Wed 29/07/09 10:49 , "Joseph L. Casale" jcas...@activenetwerx.com sent:
> >Yes, use -R on the sending side and -d on the receiving side.
> I tried that first, going from Solaris 10 to osol 0906:
>
> # zfs send -vR mypo...@snap|ssh j...@catania "pfexec /usr/sbin/zfs recv -dF
> mypool/somenam
Try send/receive to the same host (ssh localhost). I used this when
trying send/receive as it removes ssh between hosts "problems"
The on-disk format of ZFS has changed; there is something about it in
the man pages, from memory, so I don't think you can go S10 ->
OpenSolaris without doing an up
>Yes, use -R on the sending side and -d on the receiving side.
I tried that first, going from Solaris 10 to osol 0906:
# zfs send -vR mypo...@snap |ssh j...@catania "pfexec /usr/sbin/zfs recv -dF
mypool/somename"
didn't create any of the zfs filesystems under mypool2?
Thanks!
jlc
On Wed 29/07/09 10:09 , "Joseph L. Casale" jcas...@activenetwerx.com sent:
> Is it possible to send an entire pool (including all its zfsfilesystems)
> to a zfs filesystem in a different pool on another host? Or must I send each
> zfs filesystem one at a time?
Yes, use -R on the sending side a
Is it possible to send an entire pool (including all its zfs filesystems)
to a zfs filesystem in a different pool on another host? Or must I send each
zfs filesystem one at a time?
Thanks!
jlc
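Putting the -R/-d advice from the replies together, the whole-pool transfer would look roughly like this (names follow the examples later in the thread and are otherwise placeholders):
# Snapshot every dataset in the pool, then send the recursive stream;
# -d on the receive recreates the source layout under the target.
zfs snapshot -r mypool2@snap
zfs send -R mypool2@snap | \
    ssh otherhost "pfexec /usr/sbin/zfs recv -d -F mypool/somename"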
Bill Shannon wrote:
> If I do something like this:
>
> zfs snapshot tank@today
> zfs send tank@today > tank.backup
> sleep 86400
> zfs rename tank@today tank@yesterday
> zfs snapshot tank@today
> zfs send -I tank@yesterday tank@today > tank.incr
>
> Am I g
On Thu, Mar 06, 2008 at 10:34:07PM -0800, Bill Shannon wrote:
> Darren J Moffat wrote:
> > I know this isn't answering the question but rather than using "today"
> > and "yesterday" why not not just use dates ?
>
> Because then I have to compute yesterday's date to do the incremental
> dump.
Not
> > zfs send -i z/[EMAIL PROTECTED] z/[EMAIL PROTECTED] | bzip2 -c |\
> >    ssh host.com "bzcat | zfs recv -v -F -d z"
>
> Since I see 'bzip2' mentioned here (a rather slow compressor), I
> should mention that based on a recommendation from a friend, I gave a
> compressor called 'lzop' (http
On Fri, Mar 07, 2008 at 01:52:45AM -0500, Rob Logan wrote:
> > Because then I have to compute yesterday's date to do the
> > incremental dump.
>
> snaps=15
> today=`date +%j`
> # to change the second day of the year from 002 to 2
> today=`expr $today + 0`
Er, can't this be confused with octal
Randy Bias wrote:
> On Mar 7, 2008, at 8:55 AM, Bob Friesenhahn wrote:
>> On Fri, 7 Mar 2008, Rob Logan wrote:
>> Since I see 'bzip2' mentioned here (a rather slow compressor), I
>> should mention that based on a recommendation from a friend, I gave a
>> compressor called 'lzop' (http://www.lzop.or
On Mar 7, 2008, at 8:55 AM, Bob Friesenhahn wrote:
> On Fri, 7 Mar 2008, Rob Logan wrote:
> Since I see 'bzip2' mentioned here (a rather slow compressor), I
> should mention that based on a recommendation from a friend, I gave a
> compressor called 'lzop' (http://www.lzop.org/) a try due to its
>
On Fri, 7 Mar 2008, Rob Logan wrote:
>
> zfs send -i z/[EMAIL PROTECTED] z/[EMAIL PROTECTED] | bzip2 -c |\
> ssh host.com "bzcat | zfs recv -v -F -d z"
> zfs send -i z/[EMAIL PROTECTED] z/[EMAIL PROTECTED] | bzip2 -c |\
> ssh host.com "bzcat | zfs recv -v -F -d z"
> zfs send -i z/[EMAIL PRO
> Because then I have to compute yesterday's date to do the
incremental dump.
snaps=15
today=`date +%j`
# to change the second day of the year from 002 to 2
today=`expr $today + 0`
# day-of-year of the snapshot that is old enough to destroy
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`
# wrap around at the start of a new year
if [ $yesterday -lt 1 ] ; then
    yesterday=365
fi
if [
Darren J Moffat wrote:
> I know this isn't answering the question but rather than using "today"
> and "yesterday" why not not just use dates ?
Because then I have to compute yesterday's date to do the incremental dump.
I don't suppose I can create symlinks to snapshots in order to give them
mult
Bill Shannon wrote:
> If I do something like this:
>
> zfs snapshot tank@today
> zfs send tank@today > tank.backup
> sleep 86400
> zfs rename tank@today tank@yesterday
> zfs snapshot tank@today
> zfs send -I tank@yesterday tank@today > tank.incr
>
> Am I g
If I do something like this:
zfs snapshot tank@today
zfs send tank@today > tank.backup
sleep 86400
zfs rename tank@today tank@yesterday
zfs snapshot tank@today
zfs send -I tank@yesterday tank@today > tank.incr
Am I going to be able to restore the streams?
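For what it's worth, a minimal restore sketch, assuming the receive accepts the incremental despite the rename on the sending side (which is exactly the point in question here); the pool name is a placeholder:
# Restore the full stream first, then apply the incremental on top.
zfs recv restorepool/tank < tank.backup
zfs recv -F restorepool/tank < tank.incr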
Shannon Fiume wrote:
> Hi,
>
> I want to send pieces of a zfs filesystem to another system. Can zfs
> send pieces of a snapshot? Say I only want to send over /[EMAIL PROTECTED]
> and
> not include /app/conf data while /app/conf is still a part of the
> /[EMAIL PROTECTED] snapshot? I say app/con
Hi,
I want to send pieces of a zfs filesystem to another system. Can zfs
send pieces of a snapshot? Say I only want to send over /[EMAIL PROTECTED] and
not include /app/conf data while /app/conf is still a part of the
/[EMAIL PROTECTED] snapshot? I say app/conf as an example, it could be
webser
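zfs send works on whole snapshots of a dataset, so the usual answer is to split the data to be excluded into its own dataset. A sketch with placeholder names, assuming /app lives in a dataset called tank/app:
# Make conf a separate child dataset (after moving the existing conf
# contents aside and back in), then snapshot only the parent;
# a non-recursive snapshot does not include child datasets.
zfs create tank/app/conf
zfs snapshot tank/app@backup
zfs send tank/app@backup | ssh otherhost "zfs recv -F backup/app"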
On Fri, 1 Jun 2007, Ben Bressler wrote:
> When I do the zfs send | ssh zfs recv part, the file system (folder) is
> getting created, but none of the data that I have in my snapshot is
> sent. I can browse on the source machine to view the snapshot data
> pool/.zfs/snapshot/snap-name and I see the
I'm trying to test an install of ZFS to see if I can backup data from one
machine to another. I'm using Solaris 5.10 on two VMware installs.
When I do the zfs send | ssh zfs recv part, the file system (folder) is getting
created, but none of the data that I have in my snapshot is sent. I can