Unfortunately, no. I've read that and the solution presented only works in
limited scenarios. Using the post's example, consider the query of "get
all readings for sensor 1". With dynamic columns, the query is just
"select * from data where sensor_id=1". In CQL, not only does this take N
differ
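For reference, the CQL 3 modelling from the post would be roughly this (a
sketch; collected_at and reading are my stand-in names, only sensor_id comes
from the example):
CREATE TABLE data (
    sensor_id int,
    collected_at timestamp,
    reading float,
    PRIMARY KEY (sensor_id, collected_at)
);
SELECT * FROM data WHERE sensor_id = 1;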
On Tue, Aug 26, 2014 at 12:14 PM, Shane Hansen
wrote:
> Does this answer your question Ian?
> http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows
>
If dissembling can be considered to answer a question...
"A common misunderstanding is that CQL does not support dynamic co
Does this answer your question Ian?
http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows
On Tue, Aug 26, 2014 at 1:12 PM, Ian Rose wrote:
> Is it possible in CQL to create a table that supports dynamic column
> names? I am using C* v2.0.9, which I assume implies CQL version 3.
Is it possible in CQL to create a table that supports dynamic column names?
I am using C* v2.0.9, which I assume implies CQL version 3.
This page appears to show that this was supported in CQL 2 with the 'with
comparator' and 'with default_validation' options but that CQL 3 does not
support this:
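For context, the CQL 2 style I'm referring to looked roughly like this (a
sketch from memory; the exact option syntax may differ):
CREATE COLUMNFAMILY data (sensor_id int PRIMARY KEY)
    WITH comparator=timestamp
    AND default_validation=float;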
But if we look at the Thrift world's "batch_mutate", it used to perform all
mutations within a partition key atomically without using CAS, i.e. no extra
penalty.
Does this mean CQL degrades in performance as compared to thrift if we want
to do multiple updates to a partition key atomically?
On Tue, Au
AFAIK, it is not. With CAS it should br
On 26/08/2014 10:21 pm, "Jaydeep Chovatia"
wrote:
> Hi,
>
> I have a question on inserting multiple clustering keys under the same
> partition key.
>
> Ex:
>
> CREATE TABLE Employee (
> deptId int,
> empId int,
> name varchar,
> address varchar,
> salary
On Tue, Aug 26, 2014 at 11:38 AM, Paulo Ricardo Motta Gomes <
paulo.mo...@chaordicsystems.com> wrote:
> What is the solution here? The good old "change to STCS and then back to
> LCS", or is there something less brute force?
>
In theory you could use user-defined compaction via JMX, but I'd probably
Hey folks,
After adding more nodes and moving tokens of "old" nodes to rebalance the
ring, I noticed that the "old" nodes had significantly more data than the
newly bootstrapped nodes, even after cleanup.
I noticed that the old nodes had a much larger number of SSTables on LCS
CFs, and most of them
Computing how much actual memory the memtables cost, including JVM overhead
etc., is expensive (it is measured using https://github.com/jbellis/jamm). So
instead the MemoryMeter thread pool will periodically measure the live size
and compare it to the serialized size, computing the ratio in order to give
an appropriate estimate.
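As a rough worked example (illustrative numbers only): if a memtable's data
serializes to 10 MB but jamm measures 100 MB of live heap for it, then
    liveRatio = live size / serialized size = 100 MB / 10 MB = 10
and the heap cost of a memtable is then estimated as its serialized size
multiplied by 10.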
Hi,
I'm trying to understand what the liveRatio is and whether I have to care
about it. I found some references on the web and, if I understand them
correctly, the liveRatio represents the Memtable size divided by the amount
of data serialized on disk. Is that correct?
When I see the following log, what
Vineet,
One more thing -- you have initial_token and num_tokens both set. If you
are trying to use virtual nodes, you should comment out initial_token as
this setting overrides num_tokens.
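For example, the relevant lines of your cassandra.yaml would then look like
this (a sketch based on the settings you posted):
# initial_token: 0
num_tokens: 256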
Cheers,
On Tue, Aug 26, 2014 at 5:39 AM, Vineet Mishra
wrote:
> Thanks Vivek!
>
> It was indeed a formatting issue
Hi,
I have a question on inserting multiple clustering keys under the same
partition key.
Ex:
CREATE TABLE Employee (
deptId int,
empId int,
name varchar,
address varchar,
salary int,
PRIMARY KEY(deptId, empId)
);
BEGIN UNLOGGED BATCH
INSERT INTO Employee (deptId, empId, name, address,
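Spelled out fully, the batch I mean looks roughly like this (the literal
values are made up for illustration):
BEGIN UNLOGGED BATCH
    INSERT INTO Employee (deptId, empId, name, address, salary)
        VALUES (1, 100, 'alice', 'addr 1', 50000);
    INSERT INTO Employee (deptId, empId, name, address, salary)
        VALUES (1, 101, 'bob', 'addr 2', 60000);
APPLY BATCH;
Both inserts share the partition key deptId = 1, so they go to a single
partition.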
There is a "Bring your own Hadoop" for DSE as well:
http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/byoh/byohIntro.html
You can also run Hadoop against your backups/snapshots:
https://github.com/Netflix/aegisthus
https://github.com/fullcontact/hadoop-sstable
Chris
Hi Clint,
I think I kind of found the reason for my problem, I doubt you have the
exact same problem but here it is:
We're using Zabbix as our monitoring system and it uses /usr/bin/at to
schedule its monitoring runs.
Every time the "at" command adds another scheduled task, it sends a kill
signal to
If you want true integration of Cassandra with Hadoop and Spark then you will
need to use DataStax Enterprise (DSE). There are connectors that will allow
MapReduce over vanilla Cassandra; however, they are just making requests to
Cassandra under the covers, while DSE uses CFS, which is similar to HDFS.
Hello,
I read that Cassandra has had MapReduce integration since early on. There
are instructions on how to use Hadoop or Spark. However, it appears to me
that, according to these instructions, Hadoop and Spark just submit requests
to Cassandra like any other client would. So, I'm not sure
Hi Malay,
Have a look at this video; it will give you very clear instructions on how
you can achieve your output.
https://www.youtube.com/watch?v=Wohi9B-1Omc
Thanks,
Umang Shah
Pentaho BI-ETL Developer
shahuma...@gmail.com
On Tue, Aug 26, 2014 at 12:41 PM, Malay Nilabh wrote:
> Hi
>
>
>
> I want to set up a single-node Cassandra cluster
Hi
I want to set up a single-node Cassandra cluster on my Ubuntu machine, which
has Java 1.7 (Oracle JDK). I have already downloaded the Cassandra 2.0 tar
file, so I need a complete document on setting up a single-node Cassandra
cluster; please guide me through this.
Thanks & Regards
Malay Nilabh
BIDW
Thanks Vivek!
It was indeed a formatting issue in yaml, got it to work!
On Tue, Aug 26, 2014 at 6:06 PM, Vivek Mishra wrote:
> Please read about http://www.yaml.org/start.html.
> Looks like a formatting issue. You might be missing/adding incorrect spaces.
>
> Validate your YAML file. This should help you out
Please read about http://www.yaml.org/start.html.
Looks like a formatting issue. You might be missing/adding incorrect spaces.
Validate your YAML file. This should help you out
http://yamllint.com/
-Vivek
On Tue, Aug 26, 2014 at 4:20 PM, Vineet Mishra
wrote:
> Hi Mark,
>
> Yes I was generating my own cassandra.yaml
Hi Mark,
Yes I was generating my own cassandra.yaml with the configuration mentioned
below,
cluster_name: 'node'
initial_token: 0
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "192.168.1.32"
listen_address: 192.168.1.32
You are missing commitlog_sync in your cassandra.yaml.
Are you generating your own cassandra.yaml or editing the package default?
If you are generating your own, there are several configuration options that
are required; if they are not present, Cassandra will fail to start.
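For example, one valid combination for the missing option is (a sketch; the
bundled default cassandra.yaml shows the alternative batch mode):
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000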
Regards,
Mark
On 26 August
Thanks Mark,
That was indeed a yaml formatting issue.
Moreover, I am getting the underlying error now:
INFO 15:33:43,770 Loading settings from
file:/home/cluster/cassandra/conf/cassandra.yaml
INFO 15:33:44,100 Data files directories: [/var/lib/cassandra/data]
INFO 15:33:44,101 Commit log directory:
>
> Why is there so much OpsCenter work happening?
OpsCenter stores a lot of information regarding all aspects of your
cluster, such as OS, cluster, keyspace and individual table metrics. After
a set period of time these granular data points are rolled up into
aggregates. This is what you are seeing.
It is telling you that your yaml is invalid. From looking at the snippet
you have provided, it looks like seed_provider.parameters is not
correctly indented; it should look something like:
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.32"
Hi All,
I am installing a Cassandra multinode setup on a 4-node CentOS cluster; my
cassandra.yaml looks like so:
cluster_name: 'node'
initial_token: 0
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "192.168.1.32"
listen_address: 192.168.1.32