Hi Mikhail,
it is empty
max_hint_window_in_ms:
Let me check what this value does, to see whether I can find a relation to
the mx4j service not starting.
Thanks,
Xavi
On Thu, Nov 28, 2013 at 2:59 AM, Mikhail Stepura <
mikhail.step...@outlook.com> wrote:
> What’s the value of “max_hint_window_in_ms” i
Nothing needs to happen for the writetime() to be valid. It's basically the
underlying timestamp so it's part of the insert itself.
Now, you don't give a whole lot of detail, so it's hard to guess what could
be the problem. But since you mention that you get 0, not null, I'd suggest
double-checking
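For context, the write timestamp behind writetime() is just microseconds since the Unix epoch, stored with the cell at insert time. A minimal client-side sketch (the helper name is mine, not a driver API):

```python
import time

# Cassandra tags every cell with a write timestamp in microseconds since the
# Unix epoch; writetime(col) simply reads that stored value back. This helper
# (a hypothetical name, not part of any driver) shows how such a timestamp is
# typically generated on the client.
def current_write_timestamp() -> int:
    return int(time.time() * 1_000_000)

ts = current_write_timestamp()
```

A value of 0 would therefore mean the cell was written with an explicit timestamp of zero, not that the timestamp is missing.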
Mikhail,
I tested setting it to the default value 360 and it stopped crashing!
That's awesome! I love this mailing list ;-)
Thanks a lot,
Xavi
On Thu, Nov 28, 2013 at 2:59 AM, Mikhail Stepura <
mikhail.step...@outlook.com> wrote:
> What’s the value of “max_hint_window_in_ms” in your cassand
On Wed, Nov 27, 2013 at 7:06 PM, Jacob Edelstein wrote:
> Hi all,
> We made a decision to use compact storage for a couple of very large
> tables to get maximum storage efficiency. We understood that this would
> limit us to a single non-primary key column. We did not realize at the time
> that w
Hi,
We have a Cassandra cluster of 28 nodes. Each one is an EC2 m1.xlarge based
on the DataStax AMI, with 4 storage volumes in RAID 0.
Here is the ticket we opened with amazon support :
"This raid is created using the datastax public AMI : ami-b2212dc6. Sources
are also available here : https://github.co
hi all;
What is the best way to integrate cassandra pig-extension with oozie?
Can Oozie be configured to use pig-cassandra instead of pig?
Some ideas I am thinking of are:
launching a Shell job that runs ./pig-cassandra script.pig,
or changing environment variable values,
or the original to
If I remember correctly when I configured pig, cassandra, and oozie to work
together, I just used vanilla pig but gave it the jars it needed.
What is the problem you’re experiencing that you are unable to do this?
Jeremy
On 28 Nov 2013, at 12:56, Miguel Angel Martin junquera
wrote:
> hi all;
hi Jeremy,
I have not tried to test it yet; I have only tested Pig examples from the
Oozie project, without Cassandra.
*pig-cassandra* sets the Cassandra Pig libraries (.jar) in the
PIG_CLASSPATH env var and then calls the original shell script *pig* from
PIG_HOME/bin/pig; up to now, I launch pig scrip
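As a rough illustration of what such a wrapper effectively does, here is a sketch of assembling a classpath from a jar directory. The function name and paths are hypothetical, not part of Pig or Cassandra:

```python
import os

# Sketch of what a pig-cassandra wrapper does before invoking vanilla pig:
# collect the Cassandra jars and join them onto PIG_CLASSPATH. The function
# name and directory layout are illustrative assumptions.
def build_pig_classpath(cassandra_lib_dir: str, existing: str = "") -> str:
    jars = sorted(
        os.path.join(cassandra_lib_dir, name)
        for name in os.listdir(cassandra_lib_dir)
        if name.endswith(".jar")
    )
    parts = [p for p in [existing, *jars] if p]
    return os.pathsep.join(parts)
```

The point being made in the thread is that vanilla pig works fine as long as a step like this puts the Cassandra jars on its classpath.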
I believe what I did was: when I set up Oozie with the setup script (where you
specify the version of Hadoop and such), I also added additional jars there,
like the Cassandra jars and some of their dependencies, plus the cassandra.yaml,
cassandra-env.sh and potentially the topology properties file. The
What happens when you don't start the JMX service? That field has a default
in both cassandra.yaml and in Config.java:
https://github.com/apache/cassandra/blob/cassandra-1.2/src/java/org/apache/cassandra/config/Config.java#L43
This may be a bug that could be fixed by simply adding a null check f
I can reproduce this with the mx4j lib loaded by setting the max hint window
to 'empty':
max_hint_window_in_ms:
I guess you could call this a bug, but given that it has 2 defaults, you
have to explicitly set this to empty in the configuration to cause any
exceptions.
On Thu, Nov 28, 2013 at 10:1
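A sketch of the suspected failure mode and the null-check fix, in Python terms. The function name is illustrative; the compiled-in default I am assuming (3 hours = 10800000 ms) is my reading of the 1.2-line Config.java linked above:

```python
# Illustration of the suspected bug: an empty YAML value deserializes to
# null, and code expecting an integer then throws. A null check that falls
# back to the compiled-in default would avoid the crash. Names are
# illustrative, not Cassandra's actual code.
DEFAULT_MAX_HINT_WINDOW_MS = 3 * 3600 * 1000  # 3 hours, per Config.java

def effective_max_hint_window(config: dict) -> int:
    value = config.get("max_hint_window_in_ms")
    if value is None:  # covers both a missing key and an explicit empty value
        return DEFAULT_MAX_HINT_WINDOW_MS
    return value
```

This matches the observation in the thread: the exception only appears when the key is present but explicitly empty, since a missing key already falls through to the defaults.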
Hi Sylvain,
That is correct - we currently have only primary-key columns, and wish to add a
single non-primary key column. Based on the data stored in the table, it is
highly unlikely we will need to add more columns in the future, but we can
always migrate the data then if we need to.
We have
Thanks Rob. Let me add one thing in case someone else finds this thread -
Restarting the nodes did not in and of itself get the schema disagreement
resolved. We had to run the ALTER TABLE command individually on each of the
disagreeing nodes once they came back up.
On Tuesday, November 26, 20
This article[1] cites read-performance gains that can be achieved when
compression is enabled. The more I thought about it, even after reading the
DataStax docs about reads[2], I realized I do not understand how
compression improves read performance. Can someone provide some details on
this?
Is the
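One common intuition, offered as a toy illustration rather than Cassandra's actual mechanics (which use Snappy/LZ4 codecs over fixed-size chunks): a compressed chunk means fewer bytes pulled off disk per read, trading cheap CPU for expensive I/O.

```python
import zlib

# Fewer bytes on disk per chunk means less I/O per read; the CPU cost of
# decompressing is often far cheaper than the disk transfer it avoids.
# zlib stands in for Cassandra's real codecs here.
chunk = b"sensor=42,value=3.14,status=OK;" * 2000  # ~60 KB of repetitive rows
compressed = zlib.compress(chunk)

bytes_saved_per_read = len(chunk) - len(compressed)
```

The gain obviously depends on how compressible the data is; random or already-compressed payloads see little benefit.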
Hector is designed to use Column Families created via the thrift interface,
e.g. using cassandra-cli
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 25/11/2013, at 8:51 pm, Santosh She
>I am a newbie to the Cassandra world. I would like to know if it's possible
> for two different nodes to write to a single Cassandra node
>
Yes.
> Currently, I am getting an IllegalRequestException, what (): Default
> TException on the first system,
>
>
What is the full error stack?
> I hope I get this right :)
Thanks for contributing :)
> a repair will trigger a major compaction on your node which will take up a
> lot of CPU and IO performance. It needs to do this to build up the data
> structure that is used for the repair. After the compaction this is streamed
> to the
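For context, the data structure being built here is a Merkle tree: repair hashes the rows, replicas compare tree hashes, and only mismatched ranges are streamed. A minimal sketch of computing such a tree's root (simplified; not Cassandra's implementation):

```python
import hashlib

# Minimal Merkle-root sketch: hash each row, then pairwise-hash up the tree.
# Two replicas whose roots match know their data agrees without streaming it;
# mismatching subtrees pinpoint the ranges that need repair.
def merkle_root(leaves: list[bytes]) -> bytes:
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Building these hashes over all the data is why the validation pass is so CPU- and IO-heavy.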
I recently created a test database with about 400 million small records. The
disk space consumed was about 30 GB, or 75 bytes per record.
From: onlinespending
Reply-To:
Date: Monday, November 25, 2013 at 2:18 PM
To:
Subject: Inefficiency with large set of small documents?
I'm trying to de
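The arithmetic behind that per-record figure, for anyone checking (decimal GB; with GiB it comes to roughly 80 bytes):

```python
# Back-of-envelope from the numbers above: ~30 GB over ~400 million records.
records = 400_000_000
disk_bytes = 30 * 10**9          # 30 GB, decimal
per_record = disk_bytes / records  # 75.0 bytes per record
```

That overhead per small record includes per-cell timestamps, key and column-name storage, and index/bloom-filter structures, not just the payload.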
Couldn't another reason for doing cleanup sequentially be to avoid data
loss? If data is being streamed from a node during bootstrap and cleanup is
run too soon, couldn't you wind up in a situation with data loss if the new
node being bootstrapped goes down (permanently)?
On Thu, Nov 28, 2013 at
I'm trying to estimate our disk space requirements and I'm wondering about
disk space required for compaction.
My application mostly inserts new data and performs updates to existing data
very infrequently, so there will be very few bytes removed by compaction. It
seems that if a major compaction
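A rough sizing rule of thumb, assuming size-tiered compaction: a major compaction can rewrite every SSTable into one, so the old and new copies briefly coexist on disk, and worst case you need free space roughly equal to the live data set. Sketched as arithmetic (the factor is an assumption to tune, not a Cassandra constant):

```python
# Rule-of-thumb disk sizing for size-tiered compaction: worst case, a major
# compaction needs free space about equal to the live data being rewritten,
# i.e. provision roughly 2x the data size. headroom_factor=1.0 encodes that
# worst case; lower values model smaller, incremental compactions.
def required_disk_gb(live_data_gb: float, headroom_factor: float = 1.0) -> float:
    return live_data_gb * (1 + headroom_factor)
```

Even with few deletes, the space is needed transiently while old and new SSTables overlap, then the old ones are removed.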
Hi Aaron,
Thank you for your suggestion. I have created CF using thrift
interface(Cassandra-cli) and this error is resolved.
From: Aaron Morton [mailto:aa...@thelastpickle.com]
Sent: Friday, November 29, 2013 7:20 AM
To: Cassandra User
Subject: Re: While inserting data into Cassandra using Hecto