Thank you, Jeff.
Regards,
Nitan
Cell: 510 449 9629
> On Jan 23, 2019, at 10:13 AM, Jeff Jirsa wrote:
>
>
>
>> On Jan 23, 2019, at 8:00 AM, Nitan Kainth wrote:
>>
>> Hi,
>>
>> Why does nodetool compactionstats not show time remaining when
>> compactionthroughput is set to 0?
>
Because we don't have a good estimate when we're not throttling (it could be
added; it's just not tracked right now).
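The estimate nodetool prints is basically bytes remaining divided by the throttle rate, so with throughput set to 0 (unthrottled) there is no rate to divide by. A minimal sketch of that arithmetic (class and method names here are mine, not Cassandra's):

```java
// Sketch: why a remaining-time estimate needs a throttle rate.
// Names are illustrative, not Cassandra's actual classes.
public class CompactionEta {
    /**
     * Returns estimated seconds remaining, or -1 when throughput
     * is unthrottled (0), i.e. there is no usable rate.
     */
    public static long etaSeconds(long bytesRemaining, long throughputMbPerSec) {
        if (throughputMbPerSec <= 0) {
            return -1; // unthrottled: nothing to divide by, so no estimate
        }
        long bytesPerSec = throughputMbPerSec * 1024L * 1024L;
        return bytesRemaining / bytesPerSec;
    }

    public static void main(String[] args) {
        System.out.println(etaSeconds(160L * 1024 * 1024, 16)); // 10 seconds
        System.out.println(etaSeconds(160L * 1024 * 1024, 0));  // -1 (no estimate)
    }
}
```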
Full repair on TWCS maintains proper bucketing
--
Jeff Jirsa
> On Jan 9, 2018, at 5:36 PM, "wxn...@zjqunshuo.com"
> wrote:
>
> Hi All,
> If using TWCS, will a full repair trigger a major compaction and then compact
> all the sstable files into big ones regardless of the time bucket?
>
> Thanks
Petrus & Kiran,
Thank you for the guide and suggestions. I will have a try.
Cheers,
Simon
From: Petrus Gomes
Date: 2017-07-21 00:45
To: user
Subject: Re: Quick question to config Prometheus to monitor Cassandra cluster
I use the same environment. Here are a few links:
Use this link; it's the best one for connecting Cassandra and Prometheus:
https://www.robustperception.io/monitoring-cassandra-with-prometheus/
JMX agent: https://github.com/nabto/cassandra-prometheus
https://community.grafana.com/t/how-to-connect-prometh
You have to download the Prometheus JMX exporter agent jar, download the
Cassandra yaml for it, and set the JMX port (7199) in that config.
Run the agent on a specific port on all the Cassandra nodes.
After this, go to your Prometheus server and add a scrape config to pull
metrics from all the nodes.
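For reference, a minimal prometheus.yml scrape section for this setup might look like the following; the job name, IPs, and agent port are placeholders and must match whatever port you passed to the javaagent:

```yaml
scrape_configs:
  - job_name: 'cassandra'        # placeholder job name
    static_configs:
      - targets:                 # one entry per Cassandra node
          - '10.0.0.1:7070'      # placeholder IP and agent port
          - '10.0.0.2:7070'
```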
Adding dev only for this thread.
On Wed, Feb 1, 2017 at 4:39 AM, Kant Kodali wrote:
What is the difference between accepting a value and committing a value?
On Wed, Feb 1, 2017 at 4:25 AM, Kant Kodali wrote:
Hi,
Thanks for the response. I finished watching this video, but I still have a
few questions.
1) The speaker seems to suggest that there are different consistency levels
being used in different phases of the Paxos protocol. If so, what is the right
consistency level to set for these phases?
2) Right now, we
Hi,
I believe that this talk from Christopher Batey at the Cassandra Summit
2016 might answer most of your questions around LWT:
https://www.youtube.com/watch?v=wcxQM3ZN20c
He explains a lot of stuff including consistency considerations. My
understanding is that the quorum read can only see the d
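On the earlier accept-vs-commit question: in Paxos an acceptor first promises a ballot, may then accept a proposed value, and only a later commit makes that value visible to reads. A toy single-acceptor sketch of those states (simplified; real LWTs run each phase against a quorum of replicas, and this class is illustrative, not Cassandra's):

```java
// Toy single-acceptor sketch of Paxos phases as used by LWTs.
// Real Paxos runs each phase against a quorum of replicas; this
// only illustrates the accepted vs. committed distinction.
public class PaxosAcceptor {
    private long promisedBallot = -1;
    private long acceptedBallot = -1;
    private String acceptedValue = null;  // accepted but not yet readable
    private String committedValue = null; // visible to reads

    /** Phase 1: promise to ignore lower ballots. */
    public boolean prepare(long ballot) {
        if (ballot <= promisedBallot) return false;
        promisedBallot = ballot;
        return true;
    }

    /** Phase 2: accept a value; it is still invisible to ordinary reads. */
    public boolean accept(long ballot, String value) {
        if (ballot < promisedBallot) return false;
        acceptedBallot = ballot;
        acceptedValue = value;
        return true;
    }

    /** Phase 3: commit makes the accepted value readable. */
    public void commit(long ballot) {
        if (ballot == acceptedBallot && acceptedValue != null) {
            committedValue = acceptedValue;
        }
    }

    public String read() { return committedValue; }
}
```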
I meant disk/CPU/network usage, but I understand what the dynamic snitch does!
On Wed, Oct 19, 2016 at 3:34 AM, Vladimir Yudovin
wrote:
What exactly do you mean by "resource usage"? If you mean "data size on disk" -
no.
If you mean "current CPU usage" - it depends on the query. A modify query will
be sent to all nodes owning the specific partition key.
For read queries see
http://www.datastax.com/dev/blog/dynamic-snitching-in-cassa
The coordinator can optimize latency for a SELECT by requesting data from the
lowest-latency replica using the DynamicSnitch. It's not really load balancing
per se, but it's the closest idea.
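That "pick the lowest-latency replica" idea can be sketched as follows; the real DynamicEndpointSnitch keeps exponentially decaying latency histograms per host, so the simple moving score below is only a stand-in:

```java
import java.util.*;

// Sketch of snitch-style replica ordering by latency score.
// The real DynamicEndpointSnitch uses decaying histograms; this
// stand-in tracks a simple moving score per host.
public class LatencyPicker {
    private final Map<String, Double> scores = new HashMap<>();

    /** Record an observed latency (ms) for a replica. */
    public void record(String host, double latencyMs) {
        scores.merge(host, latencyMs, (old, v) -> 0.8 * old + 0.2 * v);
    }

    /** Return replicas ordered best (lowest score) first. */
    public List<String> order(Collection<String> replicas) {
        List<String> out = new ArrayList<>(replicas);
        out.sort(Comparator.comparingDouble(
                (String h) -> scores.getOrDefault(h, Double.MAX_VALUE)));
        return out;
    }
}
```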
> I had seed nodes ip1,ip2,ip3 as the seeds but what I didn't realize was then
> that these nodes had themselves as seeds. I am assuming that should never be
> done, is that correct.
The only reason nodes listing themselves as seeds can be a pain is during
bootstrap. Seed nodes will not stream data when they join (they skip bootstrap).
Hi,
The seeds are only used when a node joins the cluster. At that moment it
contacts a seed (in the same DC) in order to get some information about the ring.
So the safest approach is to list all your other nodes as seeds, but in
fact you only need one that is up.
if you think that you will never have 3 node do
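As a concrete sketch, a seed list in cassandra.yaml naming two nodes would look like this (the IPs are placeholders):

```yaml
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # placeholder IPs; list one or two nodes per datacenter
      - seeds: "10.0.0.1,10.0.0.2"
```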
Thanks Russell, that's the info I was looking for!
On Sat, Aug 11, 2012 at 11:23 AM, Russell Haering
wrote:
Aaron,
I have not deep-dived into the data files in a while, but this is how I understand it.
http://wiki.apache.org/cassandra/ArchitectureSSTable
There is no need to store the row key each time with the column.
RowKey to columns is a one to many relationship. This would be a
diagram of a physical fil
Your update doesn't go directly to an sstable (which are immutable),
it is first merged to an in-memory table. Eventually the memtable is
flushed to a new sstable.
See http://wiki.apache.org/cassandra/MemtableSSTable
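The write path described above (writes merge into a memtable, which is eventually flushed as an immutable sstable; reads check the memtable first, then sstables newest-first) can be sketched like this; all names are illustrative, not Cassandra's:

```java
import java.util.*;

// Sketch of the memtable -> sstable write path described above.
// Illustrative only: real sstables live on disk, are indexed,
// and get merged by compaction.
public class TinyStore {
    private final SortedMap<String, String> memtable = new TreeMap<>();
    private final List<SortedMap<String, String>> sstables = new ArrayList<>();

    /** Writes merge into the in-memory table; no read-before-write. */
    public void put(String key, String value) {
        memtable.put(key, value);
    }

    /** Flush: freeze the memtable into a new immutable "sstable". */
    public void flush() {
        sstables.add(Collections.unmodifiableSortedMap(new TreeMap<>(memtable)));
        memtable.clear();
    }

    /** Reads check the memtable first, then sstables newest-first. */
    public String get(String key) {
        if (memtable.containsKey(key)) return memtable.get(key);
        for (int i = sstables.size() - 1; i >= 0; i--) {
            String v = sstables.get(i).get(key);
            if (v != null) return v;
        }
        return null;
    }
}
```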
On Sat, Aug 11, 2012 at 11:03 AM, Aaron Turner wrote:
So how does that work? An sstable is for a single CF, but it can and
likely will have multiple rows. There is no read-before-write, and as I
understand it, writes are append operations.
So if you have an sstable with say 26 different rows (A-Z) already in
it with a bunch of columns and you add a new
The row key is stored only once in any sstable file.
That is, in the special case where you get one sstable file per column/value,
you are correct, but normally, I guess, most of us are storing more per key.
Regards,
Terje
On 11 Aug 2012, at 10:34, Aaron Turner wrote:
> Curious, but does cassandra stor
Thanks. got it!
On Sat, Jan 8, 2011 at 9:44 PM, Tyler Hobbs wrote:
A couple of alternatives off the top of my head:
1) A row of supercolumns becomes a row of standard columns with compound
column names.
2) A row of N supercolumns becomes N rows of standard columns (with compound
keys if needed); a separate timeline or index replaces the super column
names.
Ther
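Alternative (1) above, flattening a supercolumn level into standard columns with compound names, can be sketched as follows; the ':' separator is an arbitrary choice (in practice Cassandra's composite column types serve this purpose):

```java
// Sketch of alternative (1): compound column names replacing a
// supercolumn level. The ':' separator is an arbitrary choice;
// real deployments would use a composite column type instead.
public class CompoundNames {
    /** Build a compound column name from supercolumn + subcolumn. */
    public static String compound(String superColumn, String subColumn) {
        return superColumn + ":" + subColumn;
    }

    /** Split a compound name back into its two parts. */
    public static String[] split(String compound) {
        return compound.split(":", 2);
    }
}
```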
Thanks Tyler & Stu,
Tyler, as for alternatives to a large number of subcolumns in a
supercolumn, what do you suggest? Like splitting up a
'supercolumnFamily' into several 'columnfamilies'? What else?
On Sat, Jan 8, 2011 at 2:33 PM, Stu Hood wrote:
> Raj: the super column indexing is a longstandi
Raj: the super column indexing is a longstanding issue that we've been
considering recently, and would like to fix. See
https://issues.apache.org/jira/browse/CASSANDRA-674
On Fri, Jan 7, 2011 at 10:53 PM, Tyler Hobbs wrote:
Not that I'm aware of. There are several other decent alternatives to large
amounts of subcolumns in a supercolumn, so I don't think it's a high
priority.
- Tyler
On Fri, Jan 7, 2011 at 9:59 PM, Rajkumar Gupta wrote:
Hey Tyler,
Is this limitation of supercolumns going to be removed anytime soon?
Raj
On Fri, Jan 7, 2011 at 8:51 PM, Tyler Hobbs wrote:
An important bit to read about supercolumn limitations:
http://www.riptano.com/docs/0.6/data_model/supercolumns#limitations
Don't make supercolumns with a huge number of subcolumns (or a few really
large subcolumns) unless you plan to always read all of them at once.
- Tyler
On Fri, Jan 7, 2011
Thanks to both of you. I can now go ahead a bit more.
Arijit
On 7 January 2011 12:53, Narendra Sharma wrote:
With raw thrift APIs:
1. Fetch a column from a supercolumn (getByteBuffer here is a helper that wraps
a String's bytes in a ByteBuffer; supercolumn and column names are binary in
thrift, so they need the same treatment as the row key):
ColumnPath cp = new ColumnPath("ColumnFamily");
cp.setSuper_column(getByteBuffer("SuperColumnName"));
cp.setColumn(getByteBuffer("ColumnName"));
ColumnOrSuperColumn resp = client.get(getByteBuffer("RowKey"), cp,
ConsistencyLevel.ONE);
Column c = resp.getColumn();
2. Add
On Fri, Jan 7, 2011 at 12:12 PM, Arijit Mukherjee wrote:
Thank you. And is it similar if I want to look up a subcolumn within a
given supercolumn? I mean, I have the supercolumn key and the subcolumn
key - can I fetch that particular subcolumn?
Can you share a small piece of example code for both?
I'm still new to this and trying to figure out the Thrif
On Fri, Jan 7, 2011 at 11:39 AM, Arijit Mukherjee wrote:
> Hi
>
> I've a quick question about supercolumns.
> EventRecord = {
>     eventKey2: {
>         e2-ts1: {set of columns},
>         e2-ts2: {set of columns},
>         ...
>         e2-tsn: {set of columns}
>     }
> }
>
> If I want to