Hi Everyone,
I'm using C* v3.0.9 for a cluster of 3 DCs with RF 3 in each DC. All
read/write queries are set to consistency LOCAL_QUORUM.
The relevant keyspace is built as follows:
CREATE KEYSPACE mykeyspace WITH replication = {'class':
'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3', 'DC3': '3'};
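For reference, the consistency side of that setup can be reproduced in
cqlsh like this (session-scoped; drivers set it per statement or per
session instead):

    -- cqlsh only: applies to the current session
    CONSISTENCY LOCAL_QUORUM;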
Yes, but it is legitimate to supervise and monitor nodes. I only doubt that
cron is the best tool for it.
2017-01-12 7:42 GMT+01:00 Martin Schröder :
> 2017-01-12 6:12 GMT+01:00 Ajay Garg :
> > Sometimes, the cassandra-process gets killed (reason unknown as of now).
>
> That's why you have a cluster of them.
2017-01-12 6:12 GMT+01:00 Ajay Garg :
> Sometimes, the cassandra-process gets killed (reason unknown as of now).
That's why you have a cluster of them.
Best
Martin
Hi Kunal,
Razi's post does give a very lucid description of how cassandra manages the
hard links inside the backup directory.
Where it needs clarification is the following:
--> incremental backup is a system-wide setting, so it's an all-or-nothing
approach (see the sketch below)
--> as multiple people have stated,
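For anyone following along, the system-wide switch lives in cassandra.yaml;
a minimal sketch (the runtime toggles assume a reasonably recent nodetool):

    # cassandra.yaml -- applies to every keyspace/table on the node
    incremental_backups: true

    # runtime equivalents, no restart needed
    nodetool enablebackup
    nodetool statusbackup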
I think you should take a look at supervisord or something similar. It is a
much more reliable solution than using cron.
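A minimal illustrative supervisord entry (paths assumed, adjust to your
install; note Cassandra must run in the foreground with -f so supervisord
can track the process):

    [program:cassandra]
    command=/usr/sbin/cassandra -f
    user=cassandra
    autostart=true
    autorestart=true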
On 12.01.2017 06:12, "Ajay Garg" wrote:
On Wed, Jan 11, 2017 at 8:29 PM, Martin Schröder wrote:
> 2017-01-11 15:42 GMT+01:00 Ajay Garg :
> > Tried everything.
>
> Then try
> service cassandra start
> or
> systemctl start cassandra
Hi Hannu.
On Wed, Jan 11, 2017 at 8:31 PM, Hannu Kröger wrote:
> One possible reason is that the cassandra process gets a different user when
> run differently. Check who owns the data files, and check also what gets
> written into /var/log/cassandra/system.log (or whatever that was).
>
Absolutely
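In case it helps, a concrete way to run that check (default package paths
assumed; adjust to your install):

    # who owns the data and log directories?
    ls -ld /var/lib/cassandra /var/log/cassandra
    # any permission or startup errors in the log?
    sudo tail -n 100 /var/log/cassandra/system.log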
On Wed, Jan 11, 2017 at 8:29 PM, Martin Schröder wrote:
> 2017-01-11 15:42 GMT+01:00 Ajay Garg :
> > Tried everything.
>
> Then try
> service cassandra start
> or
> systemctl start cassandra
>
> You still haven't explained to us why you want to start cassandra every
> minute.
>
Hi Martin.
The objective of non-incremental primary-range repair is to avoid redoing
work, but with incremental repair anticompaction will segregate repaired
data, so no extra work is done on the next repair.
You should run nodetool repair [ks] [table] on all nodes sequentially. The
more often you run it, the smaller each repair will be.
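A minimal sketch of "sequentially on all nodes" (host names and the
keyspace/table are illustrative):

    # run repair node by node, never in parallel
    for host in node1 node2 node3; do
        ssh "$host" nodetool repair mykeyspace mytable
    done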
Hello Kunal,
Caveat: I am not a super-expert on Cassandra, but it helps to explain to
others in order to eventually become an expert, so if my explanation is wrong,
I would hope others would correct me. ☺
The active sstables/data files are all the files located in the directory
for the table
One possible reason is that the cassandra process gets a different user when
run differently. Check who owns the data files, and check also what gets
written into /var/log/cassandra/system.log (or whatever that was).
Hannu
> On 11 Jan 2017, at 16.42, Ajay Garg wrote:
>
> Tried everything.
> Every other cron job/script I try works, just the cassandra-service does
> not.
2017-01-11 15:42 GMT+01:00 Ajay Garg :
> Tried everything.
Then try
service cassandra start
or
systemctl start cassandra
You still haven't explained to us why you want to start cassandra every minute.
Best
Martin
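If the goal is simply auto-restart, a hedged systemd alternative to cron
(this assumes a cassandra unit already exists; the drop-in path is
illustrative):

    # /etc/systemd/system/cassandra.service.d/restart.conf
    [Service]
    Restart=on-failure
    RestartSec=10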
Hi All,
I have been looking for definitive information on this, and either it
doesn't seem to exist, or I cannot find the correct combination of
keywords to find it (entirely possible, maybe even likely).
When setting up multi-rack/multi-DC clusters (currently I am deploying in
AWS across multiAZ/
Tried everything.
Every other cron job/script I try works, just the cassandra-service does
not.
On Wed, Jan 11, 2017 at 8:51 AM, Edward Capriolo
wrote:
>
>
> On Tuesday, January 10, 2017, Jonathan Haddad wrote:
>
>> Last I checked, cron doesn't load the same, full environment you see when
>> you log in.
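An easy way to see that difference yourself (an illustrative diagnostic,
works from any user crontab):

    # dump cron's environment once a minute...
    * * * * * env > /tmp/cron-env.txt
    # ...then compare it with your login shell
    env | diff - /tmp/cron-env.txt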
Nodetool repair always lists lots of data and it never stays repaired, I think.
Cheers
On 01/11/2017 02:15 PM, Hannu Kröger wrote:
> Just to understand:
>
> What exactly is the problem?
>
> Cheers,
> Hannu
>
>> On 11 Jan 2017, at 16.07, Cogumelos Maravilha
>> wrote:
>>
>> Cassandra 3.9.
>>
>> nodetool status
Just to understand:
What exactly is the problem?
Cheers,
Hannu
> On 11 Jan 2017, at 16.07, Cogumelos Maravilha
> wrote:
>
> Cassandra 3.9.
>
> nodetool status
> Datacenter: dc1
> ===============
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> -- Address Load Tokens
Cassandra 3.9.
nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load      Tokens  Owns (effective)  Host ID  Rack
UN  10.0.120.145  1.21 MiB  256     49.5%             da6683cd-c3cf-4c14
Thanks for the reply, Razi.
As I mentioned earlier, we're not currently using snapshots - it's only the
backups that are bothering me right now.
So my next question is pertaining to this statement of yours:
> As far as I am aware, using *rm* is perfectly safe to delete the
> directories for snapshots
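For the backups directory specifically, a hedged cleanup sketch (paths are
illustrative; the files are hard links, so deleting them does not touch the
live sstables):

    # remove incremental-backup links older than 7 days
    find /var/lib/cassandra/data/*/*/backups/ -type f -mtime +7 -delete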
I don't understand why ALTER TYPE was even allowed initially. Apart from
very few corner cases, changing the data type of existing data will lead to
disaster.
On Wed, Jan 11, 2017 at 12:20 PM, Tom van der Woerdt <
tom.vanderwoe...@booking.com> wrote:
> My understanding is that it's safe
My understanding is that it's safe... but considering "alter type" is going
to be removed completely (
https://issues.apache.org/jira/browse/CASSANDRA-12443), maybe not.
As for faster ways to do this: no idea :-(
Tom
On Wed, Jan 11, 2017 at 12:12 PM, Benjamin Roth
wrote:
> But it is safe to change non-primary-key columns from int to varint, right?
But it is safe to change non-primary-key columns from int to varint, right?
2017-01-11 10:09 GMT+01:00 Tom van der Woerdt
:
> Actually, come to think of it, there's a subtle serialization difference
> between varint and int that will break token generation (see bottom of
> mail). I think it's a bug that Cassandra will allow this, so don't do this
> in production.
Wow, okay! Fortunately I did not change the types yet!
So there is no other way than reading the whole table and re-inserting all
data?
Is there a faster way than doing all this with CQL? Like importing existing
SSTables directly into a new CF with the new column types?
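One way to do the full read-and-re-insert with stock tools, as a sketch only
(table and column names are made up; COPY will be slow at billions of rows):

    cqlsh -e "COPY mykeyspace.old_table TO '/tmp/dump.csv';"
    cqlsh -e "CREATE TABLE mykeyspace.new_table (id bigint PRIMARY KEY, val text);"
    cqlsh -e "COPY mykeyspace.new_table FROM '/tmp/dump.csv';"

CSV is untyped, so the old int values parse cleanly as bigint on the way
back in.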
2017-01-11 10:09 GMT+01:00 Tom van der Woerdt :
Actually, come to think of it, there's a subtle serialization difference
between varint and int that will break token generation (see bottom of
mail). I think it's a bug that Cassandra will allow this, so don't do this
in production.
You can think of varint encoding as regular bigints with all the redundant
leading bytes stripped.
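A worked byte-level example of why the tokens change (shown for the integer
1; this is my reading of the two serializers, not authoritative):

    int    -> 00 00 00 01   (fixed 4 bytes)
    varint -> 01            (minimal two's-complement length)

Murmur3Partitioner hashes those serialized bytes, so the same logical key
would map to a different token after an int -> varint change.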
Phew! You saved my life, thanks!
For my understanding:
When creating a new table, is bigint or varint the better choice for storing
(up to) 64-bit ints? Is there a difference in performance?
2017-01-11 9:39 GMT+01:00 Tom van der Woerdt :
> Hi Benjamin,
>
> bigint and int have incompatible serialization types, so that won't work.
Hi Benjamin,
bigint and int have incompatible serialization types, so that won't work.
However, changing to 'varint' will work fine.
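For completeness, the statement in question would look something like this
(keyspace/table/column names are illustrative; note ALTER ... TYPE is
deprecated and slated for removal per CASSANDRA-12443 elsewhere in this
thread):

    -- non-primary-key column only
    ALTER TABLE mykeyspace.mytable ALTER some_int_column TYPE varint;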
Hope that helps.
Tom
On Wed, Jan 11, 2017 at 9:21 AM, Benjamin Roth
wrote:
> Hi there,
>
> Does anyone know if there is a hack to change an "int" to a "bigint" in a
> primary key?
Hi Hannu,
It should be as simple as copying the archived commit logs to the recovery
directory, specifying the point in time you'd like to restore from the logs
via the 'restore_point_in_time' setting, and then starting the
node.
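Concretely, the relevant bits of commitlog_archiving.properties look roughly
like this (paths and timestamp are illustrative; the time format follows the
stock file's documented yyyy:MM:dd HH:mm:ss):

    restore_command=cp -f %from %to
    restore_directories=/var/backups/cassandra/commitlog_archive
    restore_point_in_time=2017:01:10 18:00:00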
On Tue, Jan 10, 2017 at 7:45 PM, Hannu Kröger wrote:
> Hello,
Hi there,
Does anyone know if there is a hack to change an "int" to a "bigint" in a
primary key?
I recognized very late that I took the wrong type, and our production DB
already contains billions of records :(
Is there maybe a hack for it, because int and bigint are similar types, or
does the SSTable serialization differ?