Kenneth:
What you said is not wrong.
Vertica and Riak are examples of distributed databases that don't require
hand-holding.
Cassandra is for Java-programmer DIYers, or more often Datastax clients, at
this point.
Thanks, James.
From: Kenneth Brotman
To: user@cassandra.apache.org
Cc: d
Hi Rahul,
I cannot confirm the size wrt Cassandra, but usually in Berkeley DB, for *10
M records* it takes around 120 GB. Any operation takes hardly 2 to 3 ms
when the query is performed on an indexed attribute.
Usually 10 to 12 columns are the OOTB behaviour, but one can configure any
attribute to be inde
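As a rough cross-check of the Berkeley DB figure quoted above (a back-of-the-envelope sketch, not a Cassandra measurement), 120 GB over 10 M records works out to roughly 12-13 KB per record:

```python
# Back-of-the-envelope check of the Berkeley DB figure quoted above:
# 120 GB across 10 M records -> average on-disk footprint per record.
records = 10_000_000
total_bytes = 120 * 1024**3          # 120 GB (using binary GiB here)

per_record_kb = total_bytes / records / 1024
print(f"~{per_record_kb:.1f} KB per record")  # ~12.6 KB per record
```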
Jeff, you helped me figure out what I was missing. It just took me a day to
digest what you wrote. I’m coming over from another type of engineering. I
didn’t know, and it’s not really documented. Cassandra runs in a data center.
Nowadays that means the nodes are going to be in managed contai
Thanks for the response Rahul. I did not understand the “node density” point.
Charu
From: Rahul Singh
Reply-To: "user@cassandra.apache.org"
Date: Monday, February 19, 2018 at 12:32 PM
To: "user@cassandra.apache.org"
Subject: Re: Right sizing Cassandra data nodes
1. I would keep opscenter on d
What is the data size in TB/GB, and what is the operations per second for
read and write?
Cassandra is built for both high volume and high velocity, for reads and writes.
How many of the columns need to be indexed? You may find that a secondary
index is helpful, or look into Elassandra / DSE
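For context, a minimal CQL sketch of the secondary-index option mentioned above; the keyspace, table, and column names are hypothetical:

```cql
-- Hypothetical table: rows looked up by primary key, but sometimes
-- filtered on a non-key attribute.
CREATE TABLE app.users (
    user_id uuid PRIMARY KEY,
    email   text,
    country text
);

-- A secondary index allows filtering on a non-primary-key column.
CREATE INDEX users_country_idx ON app.users (country);

SELECT * FROM app.users WHERE country = 'DE';
```

Secondary indexes are maintained locally on each node, so queries against them can fan out across the cluster; for heavier multi-attribute search workloads, Elassandra or DSE Search (named above) are the usual alternatives.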
1. I would keep opscenter on a different cluster. Why unnecessarily put traffic
and computing for opscenter data on a real business-data cluster?
2. Don’t put more than 1-2 TB per node. Maybe 3 TB. As node density increases,
it creates more replication, read repairs, etc., and memory usage for doing t
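To make the density point concrete, a small sizing sketch (assuming a replication factor of 3 and a 1.5 TB/node cap, in line with the 1-2 TB guidance above; the numbers are illustrative only):

```python
import math

def nodes_needed(raw_data_tb, replication_factor=3, max_tb_per_node=1.5):
    """Minimum node count so that fully replicated data fits under the
    per-node density cap. Illustrative only -- real sizing must also
    budget for compaction headroom, snapshots, and growth."""
    total_tb = raw_data_tb * replication_factor
    return math.ceil(total_tb / max_tb_per_node)

print(nodes_needed(10))   # 10 TB raw at RF=3, 1.5 TB/node -> 20 nodes
```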
Sounds good.
Thanks for the explanation!
On Sun, Feb 18, 2018 at 5:15 PM, Rahul Singh
wrote:
> If you don’t have access to the file, you don’t have access to the file.
> I’ve seen this issue several times. It’s the easiest low-hanging fruit to
> resolve. So figure it out and make sure that it’s C
Hi All,
Looking for some insight into how application data archive and purge is carried
out for a C* database. Are there standard guidelines on calculating the amount of
space that can be used for storing data on a specific node?
Some pointers that I got while researching are:
- Alloca
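One common purge approach is to let Cassandra expire data itself via TTLs rather than issuing explicit deletes; a minimal CQL sketch, with hypothetical keyspace/table names:

```cql
-- Expire each row 90 days after insert (TTL is in seconds; 90 d = 7776000 s).
INSERT INTO app.events (event_id, payload)
VALUES (uuid(), 'example')
USING TTL 7776000;

-- Or set a table-wide default so every write expires automatically.
ALTER TABLE app.events WITH default_time_to_live = 7776000;
```

Note that expired cells become tombstones and disk space is only reclaimed at compaction, so the purge is not immediate.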
PLEASE READ: MAXIMUM TTL EXPIRATION DATE NOTICE (CASSANDRA-14092)
--
The maximum expiration timestamp that can be represented by the storage
engine is 2038-01-19T03:14:06+00:00, which means that inserts with TTL
that expire after thi
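The cutoff in the notice is the classic 32-bit Unix-time limit: per CASSANDRA-14092, the storage engine keeps the expiration (local deletion time) as a signed 32-bit epoch value, capped one second below the signed 32-bit maximum:

```python
from datetime import datetime, timezone

# Storage-engine cap: one below the signed 32-bit maximum (2^31 - 1),
# i.e. Integer.MAX_VALUE - 1 in the Java source.
MAX_DELETION_TIME = 2**31 - 2          # 2147483646

cap = datetime.fromtimestamp(MAX_DELETION_TIME, tz=timezone.utc)
print(cap.isoformat())                  # 2038-01-19T03:14:06+00:00
```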
Well said. Very fair. I wouldn’t mind hearing from others still. You’re a
good guy!
Kenneth Brotman
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Monday, February 19, 2018 9:10 AM
To: cassandra
Cc: Cassandra DEV
Subject: Re: Cassandra Needs to Grow Up by Version Five!
There's a
There's a lot of things below I disagree with, but it's ok. I convinced
myself not to nit-pick every point.
https://issues.apache.org/jira/browse/CASSANDRA-13971 has some of Stefan's
work with cert management
Beyond that, I encourage you to do what Michael suggested: open JIRAs for
things you car
Hi Javier,
Glad to hear it is solved now. Cassandra 3.11.1 should be a more stable
version and 3.11 a better series.
Excuse my misunderstanding; your table seems to be better designed than
I thought.
Welcome to the Apache Cassandra community!
C*heers ;-)
---
Alain Rodriguez -
It can be a minimum of 20 M up to 10 billion entries,
with each entry containing up to 100 columns.
Rajesh
On 19 Feb 2018 9:02 p.m., "Rahul Singh"
wrote:
How much data do you need to store, and what is the frequency of reads and
writes?
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Feb 19, 2018
How much data do you need to store, and what is the frequency of reads and
writes?
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Feb 19, 2018, 3:44 AM -0500, Rajesh Kishore , wrote:
> Hi All,
>
> I am a newbie to Cassandra world, got some understanding of the product.
> I have a appli
Thanks for your help,
I've been biased toward the Cassandra server and forgot about the client completely!
Sent using Zoho Mail
On Mon, 19 Feb 2018 15:21:03 +0330 Lucas Benevides
wrote
Why did you set the number of threads to 1000?
Did it show better performance than threads = auto?
I have used the stress tool in a larger test bed (10 nodes), and my optimal
setup was 24 threads.
To check this you must monitor the stress node, both the CPU and I/O, and
give it a try with fewer t
Hi,
Thank you for your reply.
As I was bothered by this problem, last night I upgraded the cluster to
version 3.11.1 and everything is working now. As far as I can tell the
counter table can be read now. I will be doing more testing today with this
version but it is looking good.
To answer your
Comments inline
>-Original Message-
>From: Jeff Jirsa [mailto:jji...@gmail.com]
>Sent: Sunday, February 18, 2018 10:58 PM
>To: user@cassandra.apache.org
>Cc: d...@cassandra.apache.org
>Subject: Re: Cassandra Needs to Grow Up by Version Five!
>
>Comments inline
>
>
>> On Feb 18, 2018, at
Hi All,
I am a newbie to the Cassandra world and have got some understanding of the product.
I have an application (a kind of datastore) for other applications;
the user queries are not fixed, i.e. the queries can come with any attributes.
In this case, is it recommended to use Cassandra? What benefits w
>
> (2.0 is getting pretty old and isn't supported, you may want to consider
> upgrading; 2.1 would be the smallest change and least risk, but it, too, is
> near end of life)
I would upgrade as well. That said, I think moving from Cassandra 2.0 to Cassandra
2.2 directly is doable smoothly and preferabl
Hello,
This table has 6 partition keys, 4 primary keys and 5 counters.
I think the root issue is this ^. There might be some inefficiency or
issues with counters, but this design makes Cassandra relatively
inefficient in most cases, whether using standard columns or counters.
Cassandra
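For readers following along: in CQL, a counter table may contain only counter columns outside the primary key, which is why key-heavy counter designs look like the sketch below (all names are hypothetical, not the poster's schema):

```cql
-- Counter tables: every non-counter column must be part of the
-- primary key; counter columns can only be incremented/decremented.
CREATE TABLE app.page_stats (
    site   text,
    page   text,
    day    date,
    views  counter,
    clicks counter,
    PRIMARY KEY ((site, page), day)
);

UPDATE app.page_stats SET views = views + 1
WHERE site = 'example.com' AND page = '/home' AND day = '2018-02-19';
```

Counter updates are not idempotent (a retried increment may apply twice), which is part of why heavy counter designs can be inefficient or fragile.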