Have you tried nodetool resetlocalschema on the 1.1.5?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/09/2012, at 11:41 PM, Thomas Stets wrote:
> A follow-up:
>
> Currently I'm back on version 1.1.1.
>
> I tried - unsuccessfully - t
Reports is a SuperColumnFamily.
Each report has a unique identifier (report_id), which is the key of the
SuperColumnFamily, and each report is saved in a separate row.
A report consists of report rows (the count may vary between 1 and 50,
but most are small).
Each report row is saved in a separate super column. Hec
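For readers following along, a minimal sketch of how a layout like this could
be declared; the keyspace name, comparators, and host are assumptions, not
from the thread:

# hypothetical cassandra-cli schema matching the description above;
# keyspace name, comparators and host are all assumed
cassandra-cli -h localhost -B <<'EOF'
use reports_ks;
create column family Reports
  with column_type = 'Super'
  and comparator = 'UTF8Type'
  and subcomparator = 'UTF8Type';
EOF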
The commit log is essentially an internal implementation detail. The total
size of the commit log is restricted, and the multiple files used to
represent segments are recycled. So once all the memtables have been flushed
for a segment, it may be overwritten.
To archive the segments see conf/commitlog_archiving.properties.
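The hooks in that file look roughly like this; the /backup destination is an
assumption, and %path/%name/%from/%to are placeholders Cassandra substitutes:

# conf/commitlog_archiving.properties -- a sketch; archive location assumed
archive_command=/bin/ln %path /backup/commitlog/%name
restore_command=/bin/cp -f %from %to
restore_directories=/backup/commitlog
restore_point_in_time=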
On Fri, Sep 21, 2012 at 10:39 AM, aaron morton wrote:
> Have you tried nodetool resetlocalschema on the 1.1.5?
>
Yes, I tried a resetlocalschema, and a repair. Neither changed anything.
BTW, I could find no documentation on what resetlocalschema actually does...
regards, Thomas
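For reference (not from this thread): as I understand it, resetlocalschema
drops the node's local schema and pulls a fresh copy from the other live
nodes. A minimal invocation, host assumed:

nodetool -h localhost resetlocalschema   # re-syncs this node's schema from the cluster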
Hi,
I have enabled the row caches using nodetool setcachecapacity.
But when I look at cfstats I'm not getting any cache stats like:
Row cache capacity:
Row cache size:
These properties are not reflected, nor am I getting any cache hit rates
in OpsCenter.
Do I need to restart the
Hi Aaron,
Thanks for your input.
On Fri, Sep 21, 2012 at 9:56 AM, aaron morton wrote:
> The commit log is essentially an internal implementation detail. The total
> size of the commit log is restricted, and the multiple files used to
> represent segments are recycled. So once all the memtables have been
> flushed for a segment, it may be overwritten.
The cache metrics for Cassandra 1.1 are currently broken in OpsCenter, but
it's something we should be able to fix soon. You can also use nodetool
cfstats to check the cache hit rates.
On Fri, Sep 21, 2012 at 5:34 AM, rohit reddy wrote:
> Hi,
>
> I have enabled the row caches using nodetool setcachecapacity.
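A quick way to pull just those lines out of the per-CF cfstats blocks (the
host flag and label spellings are assumed from the thread above):

nodetool -h localhost cfstats | grep -E 'Row cache (capacity|size|hit rate)'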
Found one more interesting fact.
As I can see in cfstats, compacted row maximum size: 386857368!
On Fri, Sep 21, 2012 at 12:50 PM, Denis Gabaydulin wrote:
> Reports is a SuperColumnFamily.
>
> Each report has a unique identifier (report_id), which is the key of the
> SuperColumnFamily, and each report is saved in a separate row.
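For scale, that cfstats maximum converted to megabytes (plain arithmetic):

echo '386857368 / 1024 / 1024' | bc   # ~368 MB in a single row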
And some stuff from the log:
/var/log/cassandra$ grep "Compacting large" system.log \
    | grep -E "[0-9]+ bytes" -o | cut -d " " -f 1 \
    | awk '{ foo = $1 / 1024 / 1024 ; print foo "MB" }' \
    | sort -nr | head -n 50
3821.55MB
3337.85MB
1221.64MB
1128.67MB
930.666MB
916.4MB
861.114MB
843.325MB
711.813MB
2012/9/20 aaron morton
> Actually, if I use community edition for now, I wouldn't be able to use
> hadoop against data stored in CFS?
>
> AFAIK DSC is a packaged deployment of Apache Cassandra. You should be able
> to use Hadoop against it, in the same way you can use Hadoop against Apache
> Cassandra.
Brisk is no longer actively developed by the original author or Datastax. It
was left up for the community.
https://github.com/steeve/brisk
Has a fork that is supposedly compatible with 1.0 API
You're more than welcome to fork that and make it work with 1.1 :)
DSE != (Cassandra + Brisk)
From: M
Thanks a lot! Things are much clearer now.
2012/9/21 Michael Kjellman
> Brisk is no longer actively developed by the original author or Datastax.
> It was left up for the community.
>
> https://github.com/steeve/brisk
>
> Has a fork that is supposedly compatible with 1.0 API
>
> You're more than welcome to fork that and make it work with 1.1 :)
On Fri, Sep 21, 2012 at 4:31 AM, Ben Hood <0x6e6...@gmail.com> wrote:
> So if I understand you correctly, one shouldn't code against what is
> essentially an internal artefact that could be subject to change as
> the Cassandra code base evolves and furthermore may not contain the
> information an a
I can't seem to get bulk loading to work in newer versions of Hadoop:
since they switched JobContext from a class to an interface, you lose
binary backward compatibility.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found
interface org.apache.hadoop.mapreduce.JobContext, but class was expected
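One way to check which shape of JobContext a given Hadoop jar ships (the jar
name here is only an example):

javap -classpath hadoop-core-1.0.3.jar org.apache.hadoop.mapreduce.JobContext \
  | grep -m1 -E 'public (class|interface)'
# "public class ..." on Hadoop 1.x; "public interface ..." on 0.23/2.x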
> IMHO it's a better design to multiplex the data stream at the application
> level.
+1, agreed.
That is where we ended up. (and Storm is proving to be a solid
framework for that)
-brian
On Fri, Sep 21, 2012 at 4:56 AM, aaron morton wrote:
> The commit log is essentially an internal implementation detail
Sorry for the typo, this is the 2.1 release.
-Vivek
On Sat, Sep 22, 2012 at 6:45 AM, Vivek Mishra wrote:
> Hi All,
>
> We are happy to announce release of Kundera 2.0.7.
>
> Kundera is a JPA 2.0 based, object-datastore mapping library for NoSQL
> datastores. The idea behind Kundera is to make working with NoSQL
> databases drop-dead simple and fun.
> How does approach B work in CQL? Can we read/write JSON
> easily in CQL? Can we extract a field from a JSON document in CQL,
> or would that need to be done via the client code?
Via client code. Support for this is much the same as support for JSON
CLOBs in an RDBMS.
Approach A is better if you ar
Well done, Vivek and team!! This release was much anticipated.
I'll give this a test with Spring Data JPA when I return from vacation.
thanks,
-brian
On Sep 21, 2012, at 9:15 PM, Vivek Mishra wrote:
> Hi All,
>
> We are happy to announce release of Kundera 2.0.7.
>
> Kundera is a JPA 2.0 based, object-datastore mapping library for NoSQL
> datastores.
I swapped in hadoop-core-1.0.3.jar and rebuilt cassandra, without
issues. What problems were you having?
On 09/21/2012 07:40 PM, Juan Valencia wrote:
I can't seem to get bulk loading to work in newer versions of Hadoop:
since they switched JobContext from a class to an interface, you lose
binary backward compatibility.
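A sketch of that kind of swap; the source layout and jar locations are
assumptions, and Cassandra 1.1 builds with ant:

cd ~/src/cassandra
rm build/lib/jars/hadoop-core-*.jar       # remove the old Hadoop jar (location assumed)
cp ~/hadoop-core-1.0.3.jar build/lib/jars/
ant clean jar                             # rebuild Cassandra against the new jar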
Rob,
On Sep 22, 2012, at 0:39, Rob Coli wrote:
> The above gets you most of the way there, but Aaron's point about the
> commitlog not reflecting whether the app met its CL remains true. The
> possibility that Cassandra might coalesce to a value that the
> application does not know was successfu
Brian,
On Sep 22, 2012, at 1:46, "Brian O'Neill" wrote:
>> IMHO it's a better design to multiplex the data stream at the application
>> level.
> +1, agreed.
>
> That is where we ended up. (and Storm is proving to be a solid
> framework for that)
Thanks for the heads up, I'll check it out.
Cheers