While using sstableloader in 2.0.14 we have discovered that setting
thrift_framed_transport_size_in_mb to 16 in cassandra.yaml is not honored.
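For reference, this is the setting in question in cassandra.yaml; a minimal fragment with the 16 MB value from the report:

```yaml
# cassandra.yaml: maximum frame size for the Thrift framed transport, in MB.
# Per the report above, sstableloader in 2.0.x may not pick this value up.
thrift_framed_transport_size_in_mb: 16
```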
Has anybody seen a similar issue?
This is the exception seen:
org.apache.thrift.transport.TTransportException: Frame size (16165888) larger than m
We had this issue when using Hive on Cassandra.
We had to replace the thrift jar with our own patches.
On Fri, Aug 14, 2015 at 5:27 PM, K F wrote:
> While using sstableloader in 2.0.14 we have discovered that setting
> the thrift_framed_transport_size_in_mb to 16 in cassandra.yaml doesn't
> hono
Hi Guys,
We have designed a table to have rows with a large number of columns (more than
250k). One of my colleagues mistakenly ran a select on the table, and that caused
the nodes to go out of memory. I was just wondering if there are ways to
configure Cassandra: 1. To limit the number of columns that can
250k columns? As in, you have a CREATE TABLE statement that would have
over 250k separate, typed fields?
On Fri, Aug 14, 2015 at 11:07 AM Ahmed Ferdous wrote:
> Hi Guys,
>
>
>
> We have designed a table to have rows with large number of columns (more
> than 250k). One of my colleagues, mistak
Having 250k columns is something of an anti-pattern. In this
case you would typically have a few columns and many rows, then just run a
select with a LIMIT clause within your partition.
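As a sketch of the suggested shape (table and column names here are hypothetical), the 250k values become rows under one partition key, and reads are bounded with LIMIT:

```sql
-- Hypothetical model: many rows per partition instead of 250k static columns.
CREATE TABLE readings (
    sensor_id  text,      -- partition key
    reading_ts timestamp, -- clustering column; one row per value
    value      double,
    PRIMARY KEY (sensor_id, reading_ts)
);

-- Read a bounded slice of the partition instead of the whole thing.
SELECT reading_ts, value FROM readings
WHERE sensor_id = 'sensor-42'
LIMIT 1000;
```

The clustering column keeps the rows sorted on disk, so bounded slices are cheap.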
From: Jonathan Haddad
Reply-To:
Date: Friday, August 14, 2015 at 2:16 PM
To: "user@cassandra.
Is it safe to run repairs in parallel on multiple nodes in the same DC at
the same time, or is this discouraged?
I've got a pretty neglected cluster where repairs have not been run for
quite some time, and on average I'm seeing them take about 3.5 days to
complete per node. Just trying to figure out if I
My mistake. I meant column keys. For each row, we have one column, and
for this column we have 250k+ column keys. When we try to load the list
of column keys, Cassandra hits an OOM error.
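One common way to avoid loading all 250k column keys at once is to page through them by clustering column; a sketch with hypothetical table and column names (this assumes col_key is a clustering column):

```sql
-- Page through a wide partition 1000 keys at a time instead of all at once.
-- First page:
SELECT col_key FROM wide_table
WHERE row_key = 'r1'
LIMIT 1000;

-- Next page: restart after the last col_key seen on the previous page.
SELECT col_key FROM wide_table
WHERE row_key = 'r1' AND col_key > 'last-seen-key'
LIMIT 1000;
```

Each page is bounded, so the coordinator never has to materialize the whole row in memory.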
Ahmed
Ahmed Ferdous
Systems Architect
ZE PowerGroup Inc.
Corporate: 604-244-1469
On Fri, Aug 14, 2015 at 11:33 AM, Stan Lemon wrote:
> Is it safe to run repairs in parallel on multiple nodes in the same DC at
> the time or is this discouraged?
>
If you have enough headroom, it's safe. It may impact latency.
It also depends on whether you have vnodes or not. If you don't, an
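For working through a neglected cluster node by node, a minimal shell sketch (hostnames are placeholders; the echo makes it a dry run — drop it to actually execute):

```shell
#!/bin/sh
# Dry run: print the primary-range repair command for each node.
# -pr repairs only each node's primary ranges, so running it on every
# node covers the cluster without repairing the same range repeatedly.
for host in node1 node2 node3; do
  echo "nodetool -h $host repair -pr"
done
```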
Hi all,
We are planning to move C* from EC2 (region A) to a VPC in region B. I will
enumerate our goals so that you guys can advise me keeping in mind the
bigger picture.
Goals:
- Move to a VPC in another region.
- Enable Vnodes.
- Bump up RF to 3.
- Ability to have a spark cluster.
I know this is a L
The EC2 nodes must be in the default VPC.
Create a ring in the VPC in region B. Use VPC peering to connect the
default VPC and the region B VPC.
The new ring should join the existing one. Alter the replication strategy
to NetworkTopologyStrategy so that the data is replicated to the new ring.
Repair the
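The replication change mentioned above could look like the following (keyspace and datacenter names are placeholders; the DC names must match what your snitch reports):

```sql
-- Replicate the keyspace to both datacenters with RF 3 in each.
ALTER KEYSPACE my_keyspace
WITH REPLICATION = {
    'class': 'NetworkTopologyStrategy',
    'region-a-dc': 3,
    'region-b-dc': 3
};
```

After altering, run repair on the new nodes so the existing data actually streams over.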