Using TTL in cassandra-stress

2015-07-13 Thread Tzach Livyatan
How do I set TTL for cassandra-stress inserts, either in the profile yaml
file (better) or in the command line?

Thanks
Tzach
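
[Editor's note] Not an authoritative answer, but one approach that may work without per-insert TTL support: the stress profile's table_definition is ordinary CQL, so a table-level default TTL set there would apply to every generated insert. A sketch (keyspace/table names are placeholders):

```yaml
# Hypothetical cassandra-stress profile fragment. The TTL lives on the
# table itself, so stress-generated inserts inherit it.
keyspace: stresscql
table: ttl_test
table_definition: |
  CREATE TABLE ttl_test (
    key uuid PRIMARY KEY,
    value blob
  ) WITH default_time_to_live = 86400
```

Statements listed in a profile's queries: section are also plain CQL strings, so a USING TTL clause should be embeddable there as well.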


Re: Cassandra OOM on joining existing ring

2015-07-13 Thread Sebastian Estevez
Are you on Azure Premium Storage?
http://www.datastax.com/2015/04/getting-started-with-azure-premium-storage-and-datastax-enterprise-dse

Secondary indexes are built for convenience, not performance.
http://www.datastax.com/resources/data-modeling

What's your compaction strategy? Your nodes have to come up in order for
them to start compacting.
On Jul 13, 2015 1:11 AM, "Kunal Gangakhedkar" 
wrote:

> Hi,
>
> Looks like that is my primary problem - the sstable count for the
> daily_challenges column family is >5k. Azure had scheduled maintenance
> window on Sat. All the VMs got rebooted one by one - including the current
> cassandra one - and it's taking forever to bring cassandra back up online.
>
> Is there any way I can re-organize my existing data? so that I can bring
> down that count?
> I don't want to lose that data.
> If possible, can I do that while cassandra is down? As I mentioned, it's
> taking forever to get the service up - it's stuck in reading those 5k
> sstable (+ another 5k of corresponding secondary index) files. :(
> Oh, did I mention I'm new to cassandra?
>
> Thanks,
> Kunal
>
> Kunal
>
> On 11 July 2015 at 03:29, Sebastian Estevez <
> sebastian.este...@datastax.com> wrote:
>
>> #1
>>
>>> There is one table - daily_challenges - which shows compacted partition
>>> max bytes as ~460M and another one - daily_guest_logins - which shows
>>> compacted partition max bytes as ~36M.
>>
>>
>> 460 MB is high; I like to keep my partitions under 100 MB when possible.
>> I've seen worse, though. The fix is to add something else (maybe month or
>> week or something) into your partition key:
>>
>>  PRIMARY KEY ((segment_type, something_else), date, user_id, sess_id)
>>
>> #2 Looks like your jamm version is 3 per your env.sh, so you're probably
>> okay to copy the env.sh over from the C* 3.0 link I shared once you
>> uncomment and tweak the MAX_HEAP. If there's something wrong, your node
>> won't come up. Tail your logs.
>>
>>
>>
>> All the best,
>>
>>
>> Sebastián Estévez
>>
>> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>>
>> DataStax is the fastest, most scalable distributed database technology,
>> delivering Apache Cassandra to the world’s most innovative enterprises.
>> Datastax is built to be agile, always-on, and predictably scalable to any
>> size. With more than 500 customers in 45 countries, DataStax is the
>> database technology and transactional backbone of choice for the world's
>> most innovative companies such as Netflix, Adobe, Intuit, and eBay.
>>
>> On Fri, Jul 10, 2015 at 2:44 PM, Kunal Gangakhedkar <
>> kgangakhed...@gmail.com> wrote:
>>
>>> And here is my cassandra-env.sh
>>> https://gist.github.com/kunalg/2c092cb2450c62be9a20
>>>
>>> Kunal
>>>
>>> On 11 July 2015 at 00:04, Kunal Gangakhedkar 
>>> wrote:
>>>
 From jhat output, top 10 entries for "Instance Count for All Classes
 (excluding platform)" shows:

 2088223 instances of class org.apache.cassandra.db.BufferCell
 1983245 instances of class
 org.apache.cassandra.db.composites.CompoundSparseCellName
 1885974 instances of class
 org.apache.cassandra.db.composites.CompoundDenseCellName
 63 instances of class
 org.apache.cassandra.io.sstable.IndexHelper$IndexInfo
 503687 instances of class org.apache.cassandra.db.BufferDeletedCell
 378206 instances of class org.apache.cassandra.cql3.ColumnIdentifier
 101800 instances of class org.apache.cassandra.utils.concurrent.Ref
 101800 instances of class
 org.apache.cassandra.utils.concurrent.Ref$State
 90704 instances of class
 org.apache.cassandra.utils.concurrent.Ref$GlobalState
 71123 instances of class org.apache.cassandra.db.BufferDecoratedKey

 At the bottom of the page, it shows:
 Total of 8739510 instances occupying 193607512 bytes.
 JFYI.

 Kunal

 On 10 July 2015 at 23:49, Kunal Gangakhedkar 
 wrote:

> Thanks for quick reply.
>
> 1. I don't know what are the thresholds that I should look for. So, to
> save this back-and-forth, I'm attaching the cfstats output for the 
> keyspace.
>
> There is one table - daily_challenges - which shows compacted
> partition max bytes as ~460M and another one - daily_guest_logins - which
> shows compacted partition max bytes as ~36M.
>
> Can that be a problem?
> Here is the CQL schema for the daily_challenges column family:
>
> CREATE TABLE app_10001.daily_challenges (
> segment_type text,
> date timestamp,
> user_id int,
> sess_id text,
>>>
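
[Editor's note] The partition-key fix quoted above — deriving an extra bucket column from the timestamp so no single partition grows unbounded — can be sketched like this (a hypothetical helper, not from the thread):

```python
from datetime import date

def month_bucket(d: date) -> str:
    """Derive a month-granularity bucket for the partition key,
    e.g. date(2015, 7, 13) -> '2015-07'. Each (segment_type, bucket)
    partition then holds at most one month of rows."""
    return "%04d-%02d" % (d.year, d.month)
```

The application would then include both segment_type and month_bucket(event_date) in the partition key on every write and read.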

Re: Cassandra OOM on joining existing ring

2015-07-13 Thread Anuj Wadehra
We faced a similar issue where we had 60k sstables due to the coldness bug in
2.0.3. We solved it by following the DataStax recommendations for production at
http://docs.datastax.com/en/cassandra/1.2/cassandra/install/installRecommendSettings.html:


Step 1: Add the following line to /etc/sysctl.conf:

vm.max_map_count = 131072

Step 2: To make the changes take effect, reboot the server or run the
following command:

$ sudo sysctl -p

Step 3 (optional): To confirm the limits are applied to the Cassandra process,
run the following command, where <pid> is the process ID of the currently
running Cassandra process:

$ cat /proc/<pid>/limits

You can try the above settings and share your results.
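
[Editor's note] To see why a large sstable count collides with the default vm.max_map_count, a rough back-of-the-envelope sketch (the per-sstable component count is an assumption; real numbers vary by Cassandra version and disk_access_mode):

```python
def estimated_mappings(sstable_count, components_per_sstable=3):
    """Very rough estimate of memory-mapped regions Cassandra may hold:
    each sstable maps several on-disk components (data, index, ...)."""
    return sstable_count * components_per_sstable

LINUX_DEFAULT = 65530      # typical default vm.max_map_count
RECOMMENDED = 131072       # value from the DataStax settings page

# 60k sstables (as in the coldness-bug case above) blow past the default:
print(estimated_mappings(60000) > LINUX_DEFAULT)  # True under these assumptions
```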


Thanks

Anuj


Re: DROP Table

2015-07-13 Thread Saladi Naidu
Sebastian,

Thank you so much for providing the detailed explanation. I still have some
questions, and I need to provide some clarifications:

1. We do not have code that is creating the tables dynamically. All DDL
operations are done through the DataStax DevCenter tool. When you say "schema
to settle", do you mean we provide the proper consistency level? I don't think
there is a provision to do that in the tool. Or should I change the SYSTEM
KEYSPACE definition of replication factor to equal the number of nodes?

2. In the steps described below for correcting this problem - when you say
move data from the old directory to the new, do you mean move the .db files?
That will override the current files, right?

3. Do we have to rename the directory to remove the CFID, i.e. just the column
family name without the CFID? After that, update the System table as well?

Naidu Saladi

From: Sebastian Estevez
To: user@cassandra.apache.org; Saladi Naidu
Sent: Friday, July 10, 2015 5:25 PM
Subject: Re: DROP Table

#1 The cause of this problem is a CREATE TABLE statement collision. Do not
generate tables dynamically from multiple clients, even with IF NOT EXISTS.
First thing you need to do is fix your code so that this does not happen. Just
create your tables manually from cqlsh, allowing time for the schema to settle.

#2 Here's the fix:

1) Change your code to not automatically re-create tables (even with IF NOT
EXISTS).

2) Run a rolling restart to ensure the schema matches across nodes. Run
nodetool describecluster around your cluster. Check that there is only one
schema version.

ON EACH NODE:

3) Check your filesystem and see if you have two directories for the table in
question in the data directory.

IF THERE ARE TWO OR MORE DIRECTORIES:

4) Identify from schema_column_families which cf ID is the "new" one
(currently in use):

cqlsh -e "select * from system.schema_column_families" | grep <table_name>

5) Move the data from the "old" one to the "new" one and remove the old
directory.

6) If there are multiple "old" ones, repeat 5 for every "old" directory.

7) Run nodetool refresh.

IF THERE IS ONLY ONE DIRECTORY:

No further action is needed.
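
[Editor's note] Steps 4-6 above boil down to comparing each table directory's CFID suffix against the active one reported by schema_column_families. A hypothetical helper sketching that check (not a DataStax tool; verify the paths by hand before moving or deleting anything):

```python
import os

def find_stale_table_dirs(keyspace_dir, table_name, active_cfid):
    """Return table directories named '<table>-<cfid>' whose CFID suffix
    is not the active one reported by the schema tables."""
    stale = []
    for entry in os.listdir(keyspace_dir):
        if entry.startswith(table_name + "-") and not entry.endswith(active_cfid):
            stale.append(os.path.join(keyspace_dir, entry))
    return sorted(stale)
```

Anything it returns corresponds to the "old" directories in step 5: move their sstables into the active directory, remove them, then run nodetool refresh.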
All the best,

Sebastián Estévez
Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com



On Fri, Jul 10, 2015 at 12:15 PM, Saladi Naidu  wrote:

My understanding is that the Cassandra file structure follows the naming
convention below:

/cassandra/data/<keyspace_name>/<table_name>-<CFID>/

Whereas our file structure is as below: each table has multiple names, and
when we drop tables and recreate them, these directories remain. Also, when we
dropped the table, one node was down; when it came back, we tried to do a
nodetool repair, and the repair kept failing, referring to the CFID error
listed below.

drwxr-xr-x. 16 cass cass 4096 May 24 06:49 ../
drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09 application_by_user-e0eec95019a211e58b954ffc8e9bfaa6/
drwxr-xr-x.  2 cass cass 4096 Jun 25 10:15 application_info-4dba2bf0054f11e58b954ffc8e9bfaa6/
drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09 application_info-a0ee65d019a311e58b954ffc8e9bfaa6/
drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09 configproperties-228ea2e0c13811e4aa1d4ffc8e9bfaa6/
drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09 user_activation-95d005f019a311e58b954ffc8e9bfaa6/
drwxr-xr-x.  3 cass cass 4096 Jun 25 10:16 user_app_permission-9fddcd62ffbe11e4a25a45259f96ec68/
drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09 user_credential-86cfff1019a311e58b954ffc8e9bfaa6/
drwxr-xr-x.  4 cass cass 4096 Jul  2 11:09 user_info-2fa076221b1011e58b954ffc8e9bfaa6/
drwxr-xr-x.  2 cass cass 4096 Jun 25 10:15 user_info-36028c00054f11e58b954ffc8e9bfaa6/
drwxr-xr-x.  3 cass cass 4096 Jun 25 10:15 user_info-fe1d7b101a5711e58b954ffc8e9bfaa6/
drwxr-xr-x.  4 cass cass 4096 Jun 25 10:16 user_role-9ed0ca30ffbe11e4b71d09335ad2d5a9/

WARN [Thread-2579] 2015-07-02 16:02:27,523 IncomingTcpConnection.java:91 -
UnknownColumnFamilyException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
cfId=218e3c90-1b0e-11e5-a34b-d7c17b3e318a
    at org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164) ~[apache-cassandra-2.1.2.jar:2.1.2]
    at org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97) ~[apache-cassandra-2.1.2.jar:2.1.2]
    at org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322) ~[apache-cassandra-2.1.2.jar:2.1.2]
    at org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302) ~[apache-cassandra-2.1.2.jar:2.1.2]
    at org.apache.cassandra.db.Mutation$MutationSeriali

Re: DROP Table

2015-07-13 Thread Mikhail Strebkov
Hi Saladi,

Recently I faced a similar problem, I had a lot of CFs to fix, so I wrote
this: https://github.com/kluyg/cassandra-schema-fix
I think it can be useful to you.

Kind regards,
Mikhail


Bulk loading performance

2015-07-13 Thread David Haguenauer
Hi,

I have a use case wherein I receive a daily batch of data; it's about
50M--100M records (a record is a list of integers, keyed by a
UUID). The target is a 12-node cluster.

Using a simple-minded approach (24 batched inserts in parallel, using
the Ruby client), while the cluster is being read at a rate of about
150k/s, I get about 15.5k insertions per second. This in itself is
satisfactory, but the concern is that the large amount of writes
causes the read latency to jump up during the insertion, and for a
while after.

I tried using sstableloader instead, and the overall throughput is
similar (I spend 2/3 of the time preparing the SSTables, and 1/3
actually pushing them to nodes), but I believe this still causes a
hike in read latency (after the load is complete).

Is there a set of best practices for this kind of workload? We would
like to avoid interfering with reads as much as possible.

I can of course post more information about our setup and requirements
if this helps answering.

-- 
Thanks,
David Haguenauer
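
[Editor's note] One common way to keep bulk writes from starving reads is to cap the insert rate rather than pushing as fast as the cluster will accept. A minimal, driver-agnostic token-bucket sketch (an illustration, not a tuned recommendation):

```python
import time

class TokenBucket:
    """Cap sustained throughput at `rate` ops/sec, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens replenished per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self, n=1.0):
        """Block until n tokens are available, then consume them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n:
                self.tokens -= n
                return
            time.sleep((n - self.tokens) / self.rate)

# Each insert worker would call bucket.acquire() before issuing a write, so the
# combined write rate stays near the configured ceiling and leaves headroom
# for reads and compaction.
```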


Re: Configuring the java client to retry on write failure.

2015-07-13 Thread Mikhail Strebkov
Hi Kevin,

Here is what we use, works for us in production:
https://gist.github.com/kluyg/46ae3dee9000a358edf9

To unit test it, you'll need to check that your custom retry policy returns
the RetryDecision you want for the inputs.

To verify that it works in production, you can wrap it in a
LoggingRetryPolicy like this:

  .withRetryPolicy(new LoggingRetryPolicy(new MyRetryPolicy(maxRetries =
3)))

and you'll see the retries in the logs.

Kind regards,
Mikhail
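
[Editor's note] The retry policy itself is Java, but the part worth unit testing is the pure decision function: given the failure kind and attempt count, retry or rethrow. That logic can be isolated and tested without a cluster; sketched here in Python for brevity (names hypothetical):

```python
def decide(error_kind, attempt, max_retries):
    """Return 'retry' for transient write failures (timeouts, unavailable
    replicas) while attempts remain; 'rethrow' otherwise."""
    transient = {"write_timeout", "unavailable"}
    if error_kind in transient and attempt < max_retries:
        return "retry"
    return "rethrow"
```

A custom Java-driver RetryPolicy's onWriteTimeout/onUnavailable callbacks would delegate to the equivalent decision, which is what the unit tests then exercise.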

On Sun, Jul 12, 2015 at 10:07 AM, Kevin Burton  wrote:

> I can’t seem to find a decent resource to really explain this…
>
> Our app seems to fail some write requests, a VERY low percentage.  I’d
> like to retry the write requests that fail due to number of replicas not
> being correct.
>
>
> http://docs.datastax.com/en/developer/java-driver/2.0/common/drivers/reference/tuningPolicies_c.html
>
> This is the best resource I can find.
>
> I think the best strategy is to look at DefaultRetryPolicy and then create
> a custom one that keeps retrying on write failures up to say 1 minute.
> Latency isn’t critical for us as this is a batch processing system.
>
> The biggest issue is how to test it?  I could unit test that my methods
> return on the correct inputs but not really in real world situations.
>
> What’s the best way to unit test this?
>
> --
>
> Founder/CEO Spinn3r.com
> Location: *San Francisco, CA*
> blog: http://burtonator.wordpress.com
> … or check out my Google+ profile
> 
>
>


Re: Bulk loading performance

2015-07-13 Thread Graham Sanderson
Ironically, in my experience the fastest ways to get data into C* are
considered "anti-patterns" by most (but I have no problem saturating multiple
gigabit network links when I really feel like inserting fast).

It’s been a while since I tried some of the newer approaches though (my fast 
load code is a few years old).
