http://www.thelastpickle.com
On 14/04/2013, at 11:56 AM, Rustam Aliyev wrote:
Just a follow-up on this issue. Due to the cost of shuffle, we decided not to do
it. Recently we added a new node and ended up with a poorly balanced cluster:
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
[...]
Also, nodetool status does not show any information about the joining node. It
appears only when the join has finished (on v1.2.3).
-- Rustam
On 08/04/2013 22:33, Rustam Aliyev wrote:
After 2 days of endless compactions and streaming I had to stop it and cancel
the shuffle. One of the nodes even complained that there's no [...] Shouldn't
it be assigned ranges randomly from all nodes?
Some other notes inline below:
On 08/04/2013 15:00, Eric Evans wrote:
[ Rustam Aliyev ]
Hi,
After upgrading to vnodes I created and enabled the shuffle operation as
suggested. After running for a couple of hours I had to disable it [...]
Hi,
After upgrading to vnodes I created and enabled the shuffle operation as
suggested. After running for a couple of hours I had to disable it because the
nodes were not keeping up with compactions. I repeated this process 3 times
(enable/disable).
I have 5 nodes and each of them had ~35GB. [...] after upgrade to 1.2.3.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 3/04/2013, at 4:09 AM, Rustam Aliyev <rustam.li...@code.az> wrote:
Hi,
I just wanted to share our experience of upgrading 1.0.10 to 1.2.3. It happened
that we upgraded both of our two seed nodes to 1.2.3 first, and after that the
old nodes couldn't communicate with the new ones anymore. The cluster was down
until we upgraded all nodes to 1.2.3. We don't have many nodes [...]
Each storage system has its own purpose. While Cassandra would be good for
metadata, depending on the size of the objects Cassandra may not be the best
fit; you need something more like Amazon S3 for blob storage. Try Ceph RADOS
or OpenStack Object Store, which both provide an S3-compatible API.
Hi Edward,
That's great news!
One thing I'd like to see in the new edition is counters, their known issues
and how to avoid them:
- avoid double counting (don't retry on failure, use write consistency
level ONE, use a dedicated Hector connector? see the sketch below)
- delete counters (tricky; reset to zero?)
- other [...]
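To make the double-counting point concrete, here is a minimal Java sketch
against a hypothetical client interface (the names are mine, not Hector's real
API): increment once at consistency level ONE and deliberately avoid retrying
on timeout, because the original increment may still have been applied.

import java.util.concurrent.TimeoutException;

// Hypothetical client interface, for illustration only.
interface CounterClient {
    void increment(String rowKey, String column, long delta, String consistency)
            throws TimeoutException;
}

class SafeCounterWriter {
    private final CounterClient client;

    SafeCounterWriter(CounterClient client) {
        this.client = client;
    }

    void add(String rowKey, String column, long delta) {
        try {
            // Write once at CL.ONE; counter increments are not idempotent.
            client.increment(rowKey, column, delta, "ONE");
        } catch (TimeoutException e) {
            // Do NOT retry: the timed-out increment may still have been
            // applied server-side, and a retry would count it twice.
            // Record the failure for offline reconciliation instead.
            System.err.println("increment timed out, not retried: "
                    + rowKey + "/" + column + " += " + delta);
        }
    }
}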
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 2/06/2012, at 7:53 AM, Rustam Aliyev wrote:
Hi all,
I have an SCF with ~250K rows. One of these rows is relatively large: it's a
wide row (according to the compaction logs) containing ~100,000 super columns
with an overall size of 1GB. Each super column has an average size of 10K and
~10 sub-columns.
When I'm trying to delete ~90% of the columns in th[...]
No, it's not possible. A counter column family is declared with a counter
validation class that applies to the whole CF, so counters have to live in a
CF of their own, separate from regular columns.
On 15/03/2012 10:53, Tamar Fraenkel wrote:
Watched the video, really good!
One question:
I wonder if it is possible to mix counter columns with regular columns in the
same CF in Cassandra 1.0.7.
[...] to 5.
--
Rustam.
On 12/03/2012 12:46, Vanger wrote:
Cassandra v1.0.8
once again: 4-node cluster, RF = 3.
On 12.03.2012 16:18, Rustam Aliyev wrote:
What version of Cassandra do you have?
On 12/03/2012 11:38, Vanger wrote:
We were aware of compaction overhead, but still don't understand [...]
[...] with such disk space.
Why does a node suddenly need 2x more space for data it already has? Why
doesn't decreasing the token range lead to decreased disk usage?
On 12.03.2012 15:14, Rustam Aliyev wrote:
Hi,
If you use SizeTieredCompactionStrategy, you should have 2x disk space to be
on the safe side: during a compaction the input SSTables stay on disk until
the merged output is fully written, and a major compaction can merge all of a
column family's SSTables at once. So if you want to store 2TB of data, you
need a partition of at least 4TB. LeveledCompactionStrategy is available in
1.x and is supposed to require less free disk space (but comes at the price of
more I/O).
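As a rough illustration (my own sketch with made-up numbers, not from the
thread), peak disk usage during a compaction is the live data plus the merged
output being written, which for a major compaction approaches twice the live
data:

public final class CompactionHeadroom {

    // Peak disk usage while merging the given SSTables into one output:
    // the inputs are still on disk, and the new output can grow to roughly
    // the sum of their sizes before the inputs are deleted.
    static long peakBytes(long liveBytes, long[] inputsBeingCompacted) {
        long output = 0;
        for (long size : inputsBeingCompacted) {
            output += size;
        }
        return liveBytes + output;
    }

    public static void main(String[] args) {
        long tb = 1L << 40;
        // Major compaction: all 2 TB of SSTables are inputs at once -> ~4 TB peak.
        System.out.println(peakBytes(2 * tb, new long[] {2 * tb}) / (double) tb + " TB");
    }
}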
Hi Maxim,
If you need to store blobs, then a BlobStore such as OpenStack Object Store
(aka Swift) would be a better choice.
As far as I know, MogileFS (which is also a sort of BlobStore) has a
scalability bottleneck: MySQL.
There are a few reasons why BlobStores are a better choice. In the
follow[...]
No more RPMs from DataStax?
http://rpm.datastax.com/community/x86_64/
On Mon Feb 13 10:40:13 2012, Sylvain Lebresne wrote:
The Cassandra team is pleased to announce the release of Apache Cassandra
version 0.8.10.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model.
Hi,
I was just about to upgrade to the latest 0.8.x, but noticed that there's no
RPM package for 0.8.9 in the DataStax repo. The latest is 0.8.8.
Any plans to publish an 0.8.9 RPM?
--
Rustam
On 14/12/2011 19:59, Sylvain Lebresne wrote:
The Cassandra team is pleased to announce the release of Apache Cassandra [...]
Great, will try 0.7.1 when it's ready.
(The bug I mentioned was already reported.)
On 19/01/2012 13:15, Andrei Savu wrote:
On Wed, Jan 18, 2012 at 7:58 PM, Rustam Aliyev <rus...@code.az> wrote:
Hi Andrei,
As you know, we are using Whirr for ElasticInbox
(https://github.com/elasticinbox/whirr-elasticinbox). While testing we
encountered a few minor problems and some things which I think could be
improved. Note that we were using 0.6 (there was some strange bug in 0.7,
maybe fixed already).
Althoug[...]
SuperColumns are not deprecated.
On Sat Jan 7 19:51:55 2012, R. Verlangen wrote:
My suggestion is simple: don't use any deprecated stuff out there. In
practically any case there is a good reason why it's deprecated.
[...] wondering, for the sake of argument/discussion, if anyone can come up
with an alternative data model that doesn't use SCs.
-sd
On Fri, Dec 16, 2011 at 11:10 AM, Rustam Aliyev wrote:
Hi Sasha,
Replying to the old thread just for reference. We've released the code we use
to store emails in Cassandra as an open source project:
http://elasticinbox.com/
Hope you find it helpful.
Regards,
Rustam.
On Fri Apr 29 15:20:07 2011, Sasha Dolgy wrote:
Great read, thanks.
[...] list.
Regards,
Rustam.
On 18/11/2011 13:08, Dotan N. wrote:
Thanks!!
--
Dotan, @jondot <http://twitter.com/jondot>
On Fri, Nov 18, 2011 at 2:48 PM, Rustam Aliyev <rus...@code.az> wrote:
It's pleasing to see interest out there. We'll try to do some [...]
[...] to follow you on Twitter if I can.
On 18 November 2011 00:37, Rustam Aliyev <rus...@code.az> wrote:
Hi Dotan,
We have already built something similar and were planning to open source it.
It will be available under http://www.elasticinbox.com/.
We haven't followed IBM's paper exactly; we believe our Cassandra data model
design is more robust. It's written in Java and provides LMTP and REST
interfaces [...]
Hi David,
This is an interesting topic and it would be interesting to hear from someone
who is using it in production.
In particular: how does your FS implementation behave for medium/large files,
e.g. > 1MB?
If you store large files, how large is your store per node and how does it
handle compactions?
On 13/02/2011 13:49, Janne Jalkanen wrote:
Folks,
As it seems that wrapping one's brain around the R+W>N concept is a big hurdle
for a lot of users, I made a simple web page that allows you to try out the
different parameters and see how they affect the system.
http://www.ecyrd.com/cassandracal[...]
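For anyone who wants the arithmetic behind the page, here is a tiny
self-contained Java sketch (mine, not the calculator's code) of why R + W > N
guarantees that a read overlaps the latest acknowledged write:

public final class QuorumOverlap {

    // Two replica subsets of sizes r and w drawn from n replicas can be
    // disjoint only if r + w <= n, so r + w > n forces at least one replica
    // in every read set to also hold the latest acknowledged write.
    static boolean readSeesLatestWrite(int n, int r, int w) {
        return r + w > n;
    }

    public static void main(String[] args) {
        System.out.println(readSeesLatestWrite(3, 2, 2)); // true: QUORUM reads/writes at N=3
        System.out.println(readSeesLatestWrite(3, 1, 1)); // false: ONE/ONE can read stale data
    }
}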
[...] adding this to trunk (and thus moving Hector trunk to Cassandra 0.8.x)
in the next week or two.
On Wed, Jan 19, 2011 at 6:12 PM, Rustam Aliyev <rus...@code.az> wrote:
> Hi,
>
> Does anyone use the CASSANDRA-1072 counters patch with the 0.7 stable
> branch? I need [...]
Thanks,
Rustam Aliyev.
<http://www.linkedin.com/in/aliyev>
[...] recommended maximum size -- it all depends on your access rates.
Anywhere from 10 GB to 1 TB is typical.
- Tyler
On Thu, Dec 9, 2010 at 5:52 PM, Rustam Aliyev <rus...@code.az> wrote:
That depends on your scenario. In the worst case of one big CF, there's not
much [...]
[...] keep in mind that Cassandra performs well with average disks, so you
don't need to spend a lot there. Additionally, most people find that
the replication protects their data enough to allow them to use RAID 0
instead of 1, 10, 5, or 6.
- Tyler
On Thu, Dec 9, 2010 at 12:20 PM, Rustam Aliyev wrote:
Are there any plans to improve this in the future?
For big-data clusters this could be very expensive. Based on your comment, I
will need 200TB of storage for 100TB of data to keep Cassandra running.
--
Rustam.
On 09/12/2010 17:56, Tyler Hobbs wrote:
If you are on 0.6, repair is particularly dangerous [...]