Hi Guys,

We have designed a table whose rows contain a very large number of columns (more than 250k). One of my colleagues mistakenly ran an unbounded select on that table, and the read caused the nodes to run out of memory. I was wondering whether there are ways to configure Cassandra to:

1. Limit the number of columns that can be read in a single request.
2. Gracefully reject a read request if it appears to be consuming too much memory.

Otherwise, we are leaving too much open to human error.
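For context, below is a rough sketch of the kind of client-side guard we could put in place in the meantime. This assumes the DataStax Python driver; the keyspace, table, partition key, and the 10,000-cell ceiling are made-up placeholders, not our actual schema:

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

MAX_CELLS = 10000   # hypothetical per-request ceiling, enforced on the client

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('my_keyspace')

# Page through the wide row instead of materialising all of it at once,
# and stop once the ceiling is hit rather than pulling 250k+ cells.
query = SimpleStatement(
    "SELECT column1, value FROM wide_table WHERE key = %s",
    fetch_size=1000,
)

seen = 0
for row in session.execute(query, ('some_partition_key',)):
    seen += 1
    if seen >= MAX_CELLS:
        break

cluster.shutdown()

Of course, that does nothing to stop someone running an unbounded select from cqlsh, which is why we would much prefer a server-side limit if one exists.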

Cheers,

Ahmed




