Having 250k columns in a single table is something of an anti-pattern. In this
case you would typically model the data as a few columns and many rows, then
run a SELECT with a LIMIT clause against your partition.
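For illustration, a rough sketch of that modeling in CQL (the table and column names here are hypothetical, not from the original thread):

```
-- Hypothetical wide-partition model: instead of 250k typed columns,
-- use a clustering column so each "column" becomes a row in the partition.
CREATE TABLE readings (
    sensor_id    text,        -- partition key
    reading_time timestamp,   -- clustering column: many rows per partition
    value        double,
    PRIMARY KEY (sensor_id, reading_time)
);

-- Read a bounded slice of one partition rather than the whole thing:
SELECT reading_time, value
FROM readings
WHERE sensor_id = 'sensor-42'
LIMIT 1000;
```

With this shape, an accidental unbounded read is bounded by the LIMIT (or by driver-side paging), rather than pulling every column of a huge row into memory at once.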

From:  Jonathan Haddad
Reply-To:  <user@cassandra.apache.org>
Date:  Friday, August 14, 2015 at 2:16 PM
To:  "user@cassandra.apache.org"
Subject:  Re: Configuring Cassandra to limit number of columns to read

250k columns? As in, you have a CREATE TABLE statement that would have over
250K separate, typed fields?

On Fri, Aug 14, 2015 at 11:07 AM Ahmed Ferdous <ahmed.ferd...@ze.com> wrote:
Hi Guys,

We have designed a table whose rows have a large number of columns (more than
250k). One of my colleagues mistakenly ran an unbounded select on the table,
and that caused the nodes to run out of memory. I was just wondering if there
are ways to configure Cassandra:

1. To limit the number of columns that can be read.
2. To gracefully reject a read request if it appears to be consuming a lot of
memory.

Otherwise, we are leaving too much open to human mistakes.

Cheers,

Ahmed

Ahmed Ferdous
Systems Architect
Corporate: 604-244-1469     Email: ahmed.ferd...@ze.com


ZE PowerGroup Inc.
130 - 5920 No. Two Road, Richmond, BC, Canada V7C 4R9     Web: www.ze.com
North America: 1-866-944-1469     Europe: 0-800-520-0193     Singapore: 800-130-1609

