that was).
>
Absolutely nothing gets written to /var/log/cassandra/system.log (when
trying to invoke cassandra via cron).
>
> Hannu
>
>
> On 11 Jan 2017, at 16.42, Ajay Garg wrote:
>
> Tried everything.
> Every other cron job/script I try works, just the cassandra-service does
>
On Wed, Jan 11, 2017 at 8:29 PM, Martin Schröder wrote:
> 2017-01-11 15:42 GMT+01:00 Ajay Garg :
> > Tried everything.
>
> Then try
>     service cassandra start
> or
>     systemctl start cassandra
>
> You still haven't explained to us why you want to start cas
nvironment you see when
>> you log in. Also, why put Cassandra on a cron?
>> On Mon, Jan 9, 2017 at 9:47 PM Bhuvan Rawal wrote:
>>
>>> Hi Ajay,
>>>
>>> Have you had a look at cron logs? - mine is in path /var/log/cron
>>>
Thanks &
Hi All.
Facing a very weird issue, wherein the command

    /etc/init.d/cassandra start

causes cassandra to start when the command is run from the command-line.
However, if I put the above as a cron job

    * * * * * /etc/init.d/cassandra start

cassandra never starts.
I have checked, and "cron" se
Thanks and Regards,
Ajay
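A common culprit here is cron's stripped-down environment: PATH is typically just /usr/bin:/bin and JAVA_HOME is unset, so an init script that works interactively can fail silently under cron. A minimal crontab sketch (the @reboot trigger, log path, and environment values are assumptions for illustration, not taken from the thread):

```
# cron provides almost no environment; set it explicitly
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# start Cassandra once at boot, capturing output so failures are visible
@reboot /etc/init.d/cassandra start >> /tmp/cassandra-cron.log 2>&1
```

Redirecting stdout/stderr to a file is usually the quickest way to see why the service never comes up, since nothing reaches system.log when the script dies early.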
On Mon, Apr 18, 2016 at 9:55 AM, Ajay Garg wrote:
> Also, wondering what is the difference between "all" and "dc" in
> "internode_encryption".
> Perhaps my answer lies in this?
>
> On Mon, Apr 18, 2016 at 9:51 AM, Ajay Garg wr
Also, wondering what is the difference between "all" and "dc" in
"internode_encryption".
Perhaps my answer lies in this?
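For reference, the setting lives under server_encryption_options in cassandra.yaml; a hedged sketch (keystore paths and passwords are placeholders). As I understand it, "dc" encrypts only traffic between data centres, "rack" only between racks, and "all" encrypts traffic between every pair of nodes:

```yaml
# sketch of cassandra.yaml (paths and passwords are placeholders)
server_encryption_options:
    internode_encryption: dc      # one of: none | dc | rack | all
    keystore: /etc/cassandra/conf/.keystore
    keystore_password: <keystore_password>
    truststore: /etc/cassandra/conf/.truststore
    truststore_password: <truststore_password>
```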
On Mon, Apr 18, 2016 at 9:51 AM, Ajay Garg wrote:
> Ok, trying to wake up this thread again.
>
> I went through the following link
backup centre, while DC1 is the
primary-centre connected directly to the application-server. We don't want
to screw things if something goes bad in DC1.
Will be grateful for pointers.
Thanks and Regards,
Ajay
On Sun, Jan 17, 2016 at 9:09 PM, Ajay Garg wrote:
> Hi All.
>
> A gentle
Something like ::
##
class A {

    @Id
    @Column(name = "pojo_key")
    int key;

    @Ttl(10)
    @Column(name = "pojo_temporary_guest")
    String guest;
}
##
When I persist, let's say value "ajay" in guest-field (pojo_
".
Thanks and Regards,
Ajay
On Wed, Jan 6, 2016 at 4:16 PM, Ajay Garg wrote:
> Thanks everyone for the reply.
>
> I actually have a fair bit of questions, but it will be nice if someone
> could please tell me the flow (implementation-wise), as to how node-to-node
> encryptio
.
> - rack: Cassandra encrypts the traffic between the racks.
>
> regards
>
> Neha
>
>
>
> On Wed, Jan 6, 2016 at 12:48 PM, Singh, Abhijeet
> wrote:
>
>> Security is a very wide concept. What exactly do you want to achieve ?
>>
>>
>
Hi All.
We have a 2*2 cluster deployed, but no security as of now.
As a first stage, we wish to implement inter-dc security.
Is it possible to enable security one machine at a time?
For example, let's say the machines are DC1M1, DC1M2, DC2M1, DC2M2.
If I make the changes JUST IN DC2M2 and restar
(didn't really need that), and we have not
observed the error since about an hour or so.
Thanks Eric and Bryan for the help !!!
Thanks and Regards,
Ajay
On Wed, Nov 4, 2015 at 8:51 AM, Ajay Garg wrote:
> Hmm... ok.
>
> Ideally, we require ::
>
> a)
> The intra-DC-node
node timeouts .
>
> On Mon, Nov 2, 2015 at 8:01 PM, Ajay Garg wrote:
>
>> Hi Eric,
>>
>> I am sorry, but I don't understand.
>>
>> If there had been some issue in the configuration, then the
>> consistency-issue would be seen every time (I guess).
>
consistency.
>
> See
> http://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml_ltwt_transaction_c.html
>
> On Mon, Nov 2, 2015 at 1:29 AM Ajay Garg wrote:
>
>> Hi All.
>>
>> I have a 2*2 Network-Topology Replication setup, and I run my application
>> via DataStax-drive
Hi All.
I have a 2*2 Network-Topology Replication setup, and I run my application
via DataStax-driver.
I frequently get the errors of type ::
Cassandra timeout during write query at consistency SERIAL (3 replica were
required but only 0 acknowledged the write)
I have already tried passing a "w
Right now, I have setup "LOCAL QUORUM" as the consistency level in the
driver, but it seems that "SERIAL" is being used during writes, and I
consistently get this error of type ::
Cassandra timeout during write query at consistency SERIAL (3 replica were
required but only 0 acknowledged the write
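One note that may explain the symptom: the driver only uses SERIAL for the Paxos round of lightweight transactions, i.e. statements carrying an IF condition; that round is governed by the serial consistency level, separately from the LOCAL_QUORUM set in the driver. A minimal sketch (keyspace, table, and columns are hypothetical):

```sql
-- A plain write honours the regular consistency level (e.g. LOCAL_QUORUM):
INSERT INTO our_db.users (id, name) VALUES (1, 'ajay');

-- Adding IF NOT EXISTS makes it a lightweight transaction: the Paxos
-- round runs at the serial consistency level, which defaults to SERIAL
-- (a cross-DC quorum); LOCAL_SERIAL keeps that round in the local DC.
INSERT INTO our_db.users (id, name) VALUES (1, 'ajay') IF NOT EXISTS;
```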
can get more
> advanced and handle bleed out later, but you have to think of latencies.
>
> Final point, rely on repairs for your data consistency, hints are great
> and all but repair is how you make sure you're in sync.
>
> On Sun, Oct 25, 2015 at 3:10 AM, Ajay Garg wrote:
>
CAS12 to be up (although the expectation is that the driver must work fine
if ANY of the 4 nodes is up).
Thoughts, experts !? :)
On Sat, Oct 24, 2015 at 9:40 PM, Ajay Garg wrote:
> Ideas please, on what I may be doing wrong?
>
> On Sat, Oct 24, 2015 at 5:48 PM, Ajay Garg wrote:
>
on to analyse that bit...
>
> Regards,
> Vasilis
>
> On Sat, Oct 24, 2015 at 5:09 PM, Ajay Garg wrote:
>
>> Thanks a ton Vasileios !!
>>
>> Just one last question ::
>> Does running "nodetool repair" affect the functionality of cluster for
>>
Ideas please, on what I may be doing wrong?
On Sat, Oct 24, 2015 at 5:48 PM, Ajay Garg wrote:
> Hi All.
>
> I have been doing extensive testing, and replication works fine, even if
> any permutation of CAS11, CAS12, CAS21, CAS22 are downed and brought up.
> Syncing alw
g in the docs just email them (more info
> and contact email here: http://docs.datastax.com/en/ ).
>
> Regards,
> Vasilis
>
> On Sat, Oct 24, 2015 at 1:04 PM, Ajay Garg wrote:
>
>> Thanks Vasileios for the reply !!!
>> That makes sense !!!
>>
>> I will be gr
Hi All.
I have been doing extensive testing, and replication works fine, even if
any permutation of CAS11, CAS12, CAS21, CAS22 are downed and brought up.
Syncing always takes place (obviously, as long as continuous-downtime-value
does not exceed *max_hint_window_in_ms*).
However, things behave
> My understanding is that if a node remains down for more than
> *max_hint_window_in_ms*, then you will need to repair that node.
>
> Thanks,
> Vasilis
>
> On Sat, Oct 24, 2015 at 7:48 AM, Ajay Garg wrote:
>
>> If a node in the cluster goes down and comes up, the data
If a node in the cluster goes down and comes up, the data gets synced up on
this downed node.
Is there a limit on the interval for which the node can remain down? Or will
the data be synced up even if the node remains down for weeks/months/years?
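The knob controlling this lives in cassandra.yaml; a hedged sketch (the value shown is, to my knowledge, the shipped default of 3 hours). A node down longer than this window stops accumulating hints and needs a `nodetool repair` to catch up:

```yaml
# cassandra.yaml: how long coordinators store hints for a dead node,
# in milliseconds (10800000 ms = 3 hours, the usual default)
max_hint_window_in_ms: 10800000
```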
--
Regards,
Ajay
Hi All.
We have a scenario, where the Application-Server (APP), Node-1 (CAS11), and
Node-2 (CAS12) are hosted in DC1.
Node-3 (CAS21) and Node-4 (CAS22) are in DC2.
The intention is that we provide 4-way redundancy to APP, by specifying
CAS11, CAS12, CAS21 and CAS22 as the addresses via Java-Cassa
nce for ANY change you make from the default. As you've experienced
> here, not all settings are intended to work together.
>
> HTH,
> Steve
>
>
>
> On Fri, Oct 23, 2015 at 12:07 PM, Ajay Garg
> wrote:
>
>> Any ideas, please?
>> To repeat, we are using
Any ideas, please?
To repeat, we are using the exact same cassandra-version on all 4 nodes
(2.1.10).
On Fri, Oct 23, 2015 at 9:43 AM, Ajay Garg wrote:
> Hi Michael.
>
> Please find below the contents of cassandra.yaml for CAS11 (the files on
> the rest of the three nodes are also
ression: all
inter_dc_tcp_nodelay: false
What changes need to be made, so that whenever a downed server comes back
up, the missing data comes back over to it?
Thanks and Regards,
Ajay
On Fri, Oct 23, 2015 at 9:05 AM, Michael Shuler
wrote:
> On 10/22/2015 10:
io, after all hints are delivered, both
> CAS11 and CAS12 will have the exact same data.
>
> Cheers!
>
> Carlos Alonso | Software Engineer | @calonso <https://twitter.com/calonso>
>
> On 11 October 2015 at 05:21, Ajay Garg wrote:
>
>> Thanks a ton Anuja for the help !
Thanks a ton Anuja for the help !!!
On Fri, Oct 9, 2015 at 12:38 PM, anuja jain wrote:
> Hi Ajay,
>
>
> On Fri, Oct 9, 2015 at 9:00 AM, Ajay Garg wrote:
>>
> In this case, it will be the responsibility of APP1 to start connection to
> CAS12. On the other hand if yo
On Thu, Oct 8, 2015 at 9:47 AM, Ajay Garg wrote:
> Thanks Eric for the reply.
>
>
> On Thu, Oct 8, 2015 at 1:44 AM, Eric Stevens wrote:
>> If you're at 1 node (N=1) and RF=1 now, and you want to go N=3 RF=3, you
>> ought to be able to increase RF to 3 before bootstra
Thanks Eric for the reply.
On Thu, Oct 8, 2015 at 1:44 AM, Eric Stevens wrote:
> If you're at 1 node (N=1) and RF=1 now, and you want to go N=3 RF=3, you
> ought to be able to increase RF to 3 before bootstrapping your new nodes,
> with no downtime and no loss of data (even temporary). Effectiv
Hi Sean.
Thanks for the reply.
On Wed, Oct 7, 2015 at 10:13 PM, wrote:
> How many nodes are you planning to add?
I guess 2 more.
> How many replicas do you want?
1 (original) + 2 (replicas).
That makes it a total of 3 copies of every row of data.
> In general, there shouldn't be a problem
Hi All.
We have a scenario, where till now we had been using a plain, simple
single node, with the keyspace created using ::
CREATE KEYSPACE our_db WITH replication = {'class': 'SimpleStrategy',
'replication_factor': '1'} AND durable_writes = true;
We now plan to introduce replication (in the
Thanks and Regards,
Ajay
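Since the plan above is to move from SimpleStrategy to a replicated, multi-DC setup, a hedged sketch of the usual keyspace change (DC1/DC2 are placeholder data-centre names; they must match what your snitch reports, and the counts are illustrative):

```sql
-- Switch the keyspace to NetworkTopologyStrategy with 2 copies per DC:
ALTER KEYSPACE our_db
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'DC1': 2, 'DC2': 2};
```

After raising the replication factor, running `nodetool repair our_db` on each node streams the existing data to its new replicas.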
On Tue, Sep 15, 2015 at 12:04 PM, Ajay Garg wrote:
> Hi All.
>
> Taking snapshots sometimes works, sometimes doesn't.
> Following is the stacktrace whenever
, 2015 at 11:34 AM, Neha Dave wrote:
> Haven't used it, but you can try the SSTable Bulk Loader:
>
> http://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsBulkloader_t.html
>
> regards
> Neha
>
> On Tue, Sep 15, 2015 at 11:21 AM, Ajay Garg wrote:
>>
>> Hi All
Hi All.
Taking snapshots sometimes works, sometimes doesn't.
Following is the stacktrace whenever the process fails ::
##
ajay@ajay-HP-15-Notebook-PC:/var/lib/cassandra/data/instamsg$ nodetool -h localhost
Hi All.
We have a schema on one Cassandra-node, and wish to duplicate the
entire schema on another server.
Think of this as 2 clusters, each cluster containing one node.
We have found the way to dump/restore schema-metainfo at ::
https://dzone.com/articles/dumpingloading-schema
And dumping/rest
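For the schema half specifically, a minimal cqlsh-based sketch (hostnames and the keyspace name are placeholders):

```shell
# dump the full DDL for the keyspace from the source node...
cqlsh source-host -e "DESCRIBE KEYSPACE our_db" > our_db_schema.cql
# ...and replay it on the target node
cqlsh target-host -f our_db_schema.cql
```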
hidden problem.
I am heartfully grateful to everyone for bearing with me.
Thanks and Regards,
Ajay
On Tue, Sep 15, 2015 at 10:16 AM, Ajay Garg wrote:
> Hi Jared.
>
> Thanks for your help.
>
> I made the config-changes.
> Also, I changed the seed (right now, we are just
the node bootstrap process a script
> performs the above. The reason that we set seeds back to empty is that we
> don't want nodes coming up/down to cause the config file to change and thus
> cassandra to restart needlessly. So far we haven't had any issues with seeds
> bei
>>
>> On 14 September 2015 at 10:34, Ahmed Eljami
>> wrote:
>>>
>>> In cassandra.yaml:
>>> listen_address: <IP of node>
>>> rpc_address: 0.0.0.0
>>>
>>> broadcast_rpc_address: <IP of node>
>>>
>>> 2015-09-14 11:31 GMT+01:00 Neha Dav
Hi All.
We have set up an Ubuntu-14.04 server, and followed the steps exactly as
per http://wiki.apache.org/cassandra/DebianPackaging
Installation completes fine, Cassandra starts fine, however cqlsh does not work.
We get the error ::
###
Testing simple content, as my previous email bounced :(
--
Regards,
Ajay