Caused by: java.io.IOException: Channel not open for writing - cannot
extend file to required size
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:851)
at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:192)
You should investigate
In our Twitter-like application users have their own timelines with news from
subscriptions. To populate timelines we're using fanout on write. But we're
forced to trim it to keep free disk space under control.
We use wide rows pattern and trim them with "DELETE by primary key USING
TIMESTAMP". But i
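A minimal sketch of what such a timestamp-bounded trim could look like; the keyspace, table, and `user_id` column names are illustrative assumptions, not from the thread:

```java
// Hypothetical sketch: building a timestamp-bounded trim statement for a
// wide-row timeline. Keyspace/table/column names are assumptions.
public class TimelineTrim {

    // A DELETE ... USING TIMESTAMP <t> writes a tombstone carrying timestamp
    // t, which shadows only cells whose write timestamp is <= t, so newer
    // timeline entries survive the trim.
    static String trimStatement(String keyspace, String table, long cutoffMicros) {
        return String.format(
                "DELETE FROM %s.%s USING TIMESTAMP %d WHERE user_id = ?",
                keyspace, table, cutoffMicros);
    }

    public static void main(String[] args) {
        System.out.println(trimStatement("app", "timelines", 1408341600000000L));
    }
}
```

The disk space is only actually reclaimed once the resulting tombstones age past gc_grace and the affected sstables get compacted, which is what the rest of this thread is about.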
Is that GC_grace 300 days?
Rahul Neelakantan
> On Aug 18, 2014, at 5:51 AM, Dimetrio wrote:
>
> In our Twitter-like application users have their own timelines with news from
> subscriptions. To populate timelines we're using fanout on write. But we're
> forced to trim it to keep free disk space under control.
I was in exactly the same situation; I could only reclaim disk space for
trimmed data this way:
very low gc_grace + size tiered compaction + slice timestamp deletes +
major compaction
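The recipe above can be sketched as the statements it implies; the table name and the one-hour gc_grace value below are assumptions, and lowering gc_grace_seconds is only safe if every replica is repaired within the shortened window:

```java
// Hypothetical sketch of the reclamation recipe from the message above.
// The table name and the 1-hour gc_grace value are illustrative assumptions.
public class ReclaimRecipe {

    // Step 1: lower gc_grace_seconds so tombstones become purgeable sooner.
    // Only safe if repairs complete within the new, shorter window.
    static String lowerGcGrace(String table, int seconds) {
        return "ALTER TABLE " + table + " WITH gc_grace_seconds = " + seconds;
    }

    // Step 2 would be the timestamp-bounded slice DELETEs discussed in this
    // thread; step 3 a forced major compaction, e.g.:
    //   nodetool compact <keyspace> <table>
    // which under size-tiered compaction rewrites the data and drops
    // tombstones that have aged past gc_grace.

    public static void main(String[] args) {
        System.out.println(lowerGcGrace("app.timelines", 3600));
    }
}
```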
2014-08-18 12:06 GMT+02:00 Rahul Neelakantan :
> Is that GC_grace 300 days?
>
> Rahul Neelakantan
>
> > On Aug
Hi,
What is the state of Cassandra Wiki -- http://wiki.apache.org/cassandra ?
I tried to update a few pages, but it looks like pages are immutable. Do I
need to have my Wiki username (OtisGospodnetic) added to some ACL?
Thanks,
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
Hi Otis,
On the front page, https://wiki.apache.org/cassandra/FrontPage there are
instructions on how to get edit permissions:
"If you would like to contribute to this wiki, please send an email to the
mailing list dev.at.cassandra.apache-dot-org with your wiki username and we
will be happy to add you."
Ah, I missed that. Thanks. Email sent.
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Mon, Aug 18, 2014 at 12:19 PM, Mark Reddy wrote:
> Hi Otis,
>
> On the front page, https://wiki.apache.org/cassandra/FrontPage there
In our case a major compaction (using sstableresetlevel) will take 15 days for
15 nodes, plus trimming time. So it turns into never-ending maintenance mode.
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/disk-space-and-tombstones-tp7596356p7596361
No, it is in seconds
What about versioning your timelines?
every time a timeline has more than x columns, you bump its version (which
should be part of its row key) and start writing on that one (though this
will make your app substantially more complex).
AFAIK reclaiming disk space for deleted rows is far easier than rec
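One counter-free variant of the versioning scheme above is time-based bucketing: derive the bucket number from the write time instead of a column count, so no read is needed before a write. A sketch, where the key format and the one-day bucket width are assumptions:

```java
// Hypothetical sketch: time-derived timeline buckets. The bucket number is
// part of the row key, so an entire old bucket can be dropped with a single
// row-level DELETE, and no counter read is needed before writing.
public class TimelineBucket {

    // With bucketMillis = 86_400_000 (one day), each day of writes lands in
    // its own row; trimming becomes "delete whole rows older than N days".
    static String rowKey(String userId, long writeMillis, long bucketMillis) {
        return userId + ":" + (writeMillis / bucketMillis);
    }

    public static void main(String[] args) {
        System.out.println(rowKey("user42", 1408341600000L, 86400000L));
    }
}
```

Reading a timeline then means querying the newest bucket first and stepping back through older buckets until enough entries are collected.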
On Wednesday, August 13, 2014, Robert Coli <> wrote:
> On Wed, Aug 13, 2014 at 5:53 AM, Ruchir Jha > wrote:
>
>> We are adding nodes currently and it seems like compression is falling
>> behind. I judge that by the fact that the new node which has a 4.5T disk
>> fills up to 100% while its bootstr
That scheme assumes we have to read the counter value before writing something
to the timeline. This is what we try to avoid as an anti-pattern.
By the way, is there any difference between slice trimming of one row and
sharding pattern in terms of compaction? AFAIK, delete with timestamp by
primary key
Hi!
I'm bulkloading via streaming from Hadoop to my Cassandra cluster. This
results in a rather large set of relatively small (~1MiB) sstables as
the number of mappers that generate sstables on the hadoop cluster is high.
With SizeTieredCompactionStrategy, the cassandra cluster would quickly
comp
sstableloader just loads given SSTables as they are.
TTLed columns are sent and will be compacted at the destination node eventually.
On Sat, Aug 16, 2014 at 4:28 AM, Erik Forsberg wrote:
> Hi!
>
> If I use sstableloader to load data to a cluster, and the source
> sstables contain some columns wh
On Mon, Aug 18, 2014 at 6:21 AM, Erik Forsberg wrote:
> Is there some configuration knob I can tune to make this happen faster?
> I'm getting a bit confused by the description for min_sstable_size,
> bucket_high, bucket_low etc - and I'm not sure if they apply in this case.
>
You probably don't
On Sun, Aug 17, 2014 at 6:46 AM, Maxime wrote:
> I've been spending the last few days trying to move a cluster on
> DigitalOcean 2GB machines to 4GB machines (same provider). To do so I
> wanted to create the new nodes, bootstrap them, then decommission the old
> ones (one by one seems to be the
On Sat, Aug 16, 2014 at 2:40 AM, Erik Forsberg wrote:
> I will want to
> run the sstableloader on the source cluster, but at the same time, that
> source cluster needs to keep running to serve data to clients.
>
Use the on-node bulkLoad interface, which is designed for this, instead of
sstableloader.
2014-08-18 13:25 GMT+02:00 clslrns :
> That scheme assumes we have to read counter value before write something to
> the timeline. This is what we try to avoid as an anti-pattern.
You can work around the counter read before the write, but I agree that it
would be much better if disk space was reclaimed
We use Cassandra for a multi-tenant application. Each tenant has its own set of
tables and we have 1592 tables in total in our production Cassandra
cluster. It's running well and doesn't have any memory consumption issues,
but the challenge confronting us is the schema change problem. We have such
a large
"That scheme assumes we have to read the counter value before writing something
to the timeline. This is what we try to avoid as an anti-pattern."
Hummm, it looks like there is a need for a tool to take care of the
bucketing switch. I've seen a lot of use cases where people need to do wide
row bucketing
The DataStax java driver has a Row object with getInt, getLong methods…
However, getString only works on string columns.
That's probably reasonable… but if I have a raw Row, how the heck do I
easily print it?
I need a handy way to dump a ResultSet…
--
Founder/CEO Spinn3r.com
Location: *
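The formatting itself is the easy part once the values are out of the driver. A self-contained sketch in which a plain Map stands in for a fetched Row; with the real driver you would iterate the row's column definitions and pull each value out with the type-specific getters (or a generic getter, where the driver version provides one), which is wiring assumed here:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: generic row printing. A plain Map stands in for a
// fetched DataStax Row so the formatting logic runs standalone.
public class RowDump {

    // Joins column name/value pairs into a single "Row[a=1, b=2]" string.
    static String dump(Map<String, Object> row) {
        StringBuilder sb = new StringBuilder("Row[");
        boolean first = true;
        for (Map.Entry<String, Object> e : row.entrySet()) {
            if (!first) sb.append(", ");
            sb.append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("user_id", 42L);
        row.put("name", "otis");
        System.out.println(dump(row)); // Row[user_id=42, name=otis]
    }
}
```

Dumping a whole ResultSet would then just be applying this per row in a loop.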
DuyHai Doan wrote
> it looks like there is a need for a tool to take care of the bucketing
> switch
But I still can't understand why bucketing should be better than `DELETE row
USING TIMESTAMP`. It looks like the only source of truth about this topic is
the source code of Cassandra.
Are you interested in cassandra-stress in particular? Or in any tool
which will allow you to stress test your schema?
I believe Apache Jmeter + CQL plugin may be useful in the latter case.
https://github.com/Mishail/CqlJmeter
-M
On 8/17/14 12:26, Clint Kelly wrote:
Hi all,
Is there a way to