You set the consistency level with every request.
Usually a client library will let you set a default one for all write/read
requests.
I don't know if Hector lets you set a default consistency level per CF.
Take a look at the Hector docs or ask on the Hector mailing list.
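For what it's worth, a sketch of setting the defaults (and a per-CF override)
with Hector's ConfigurableConsistencyLevel. The cluster/keyspace/CF names are
made up, and I'm not sure every Hector version has the per-CF setters, so
verify against the docs:

import java.util.HashMap;
import java.util.Map;
import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class ConsistencySetup {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("MyCluster", "localhost:9160");

        ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
        policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.QUORUM); // all writes
        policy.setDefaultReadConsistencyLevel(HConsistencyLevel.QUORUM);  // all reads

        // Per-CF override; this is the part I am not sure every version supports.
        Map<String, HConsistencyLevel> perCfReads = new HashMap<String, HConsistencyLevel>();
        perCfReads.put("MyColumnFamily", HConsistencyLevel.ONE);
        policy.setReadCfConsistencyLevels(perCfReads);

        // The policy is consulted on every request made through this keyspace.
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster, policy);
    }
}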
Shimi
Like everything else in Cassandra, if you need full consistency you need to
make sure that you have the right combination of (write consistency level)
+ (read consistency level).
if
W = write consistency level
R = read consistency level
N = replication factor
then
W + R > N
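For example, with N = 3:
W = QUORUM (2) and R = QUORUM (2): 2 + 2 = 4 > 3, so every read overlaps the latest write.
W = ONE (1) and R = ONE (1): 1 + 1 = 2 <= 3, so a read can hit a replica that missed the write.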
Shimi
Yes. Use get_indexed_slices (http://wiki.apache.org/cassandra/API).
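Something like this (a sketch against the 0.7-era Thrift API; `client` is a
connected Cassandra.Client, and the CF/column/value names are made up):

import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import org.apache.cassandra.thrift.*;

public class IndexScan {
    // Fetches rows matching state=active in chunks of 100.
    static void scan(Cassandra.Client client) throws Exception {
        ColumnParent parent = new ColumnParent("Users");
        SlicePredicate predicate = new SlicePredicate().setSlice_range(
            new SliceRange(ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 1000));

        IndexClause clause = new IndexClause();
        clause.addToExpressions(new IndexExpression(
            ByteBuffer.wrap("state".getBytes()), IndexOperator.EQ,
            ByteBuffer.wrap("active".getBytes())));
        clause.setCount(100); // chunk size

        byte[] start = new byte[0];
        while (true) {
            clause.setStart_key(start);
            List<KeySlice> chunk =
                client.get_indexed_slices(parent, clause, predicate, ConsistencyLevel.ONE);
            for (KeySlice ks : chunk) {
                if (start.length > 0 && Arrays.equals(ks.getKey(), start))
                    continue; // start_key is inclusive: skip the row we already saw
                // ... process ks ...
            }
            if (chunk.size() < clause.getCount())
                break; // short chunk = no more rows
            start = chunk.get(chunk.size() - 1).getKey();
        }
    }
}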
On Tue, Mar 13, 2012 at 2:12 PM, Vivek Mishra wrote:
> Hi,
> Is it possible to iterate and fetch in chunks using the Thrift API by querying
> using "secondary indexes"?
>
> -Vivek
>
me. I a lower level
documentation
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 6/01/2012, at 12:48 AM, Shimi Kiviti wrote:
>
> Is there a doc for using composite columns with thrift?
>
Shimi
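For reference, this is my understanding of the layout CompositeType expects on
the wire; it comes from reading org.apache.cassandra.db.marshal.CompositeType,
not from any official doc, so treat it as an assumption:

import java.nio.ByteBuffer;

public class CompositeBuilder {
    // Packs raw component values into CompositeType's layout:
    // <2-byte big-endian length><component bytes><end-of-component byte>
    // per component. The end-of-component byte is 0 for an exact value;
    // 1 or -1 on the last component of a slice bound control inclusiveness.
    public static ByteBuffer composite(byte[]... components) {
        int size = 0;
        for (byte[] c : components)
            size += 2 + c.length + 1;
        ByteBuffer out = ByteBuffer.allocate(size);
        for (byte[] c : components) {
            out.putShort((short) c.length); // 2-byte big-endian length
            out.put(c);                     // component value
            out.put((byte) 0);              // end-of-component: exact value
        }
        out.flip();
        return out;
    }
}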
The problem doesn't exist after the column family is truncated or
if durable_writes=true
Shimi
On Tue, Oct 11, 2011 at 9:30 PM, Shimi Kiviti wrote:
> I am running an Embedded Cassandra (0.8.7) and
> calling CassandraDaemon.deactivate() after I write rows (at least 1),
> do
I am running an Embedded Cassandra (0.8.7), and
calling CassandraDaemon.deactivate() after I write rows (at least 1)
doesn't shut down Cassandra.
If I run only reads, it does shut down even without
calling CassandraDaemon.deactivate().
Anyone have any idea what can cause this problem?
Shimi
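A minimal sketch of the embedded setup in question (0.8-era API from memory;
assumes cassandra.yaml is reachable, e.g. via -Dcassandra.config=...):

import org.apache.cassandra.thrift.CassandraDaemon;

public class EmbeddedRepro {
    public static void main(String[] args) throws Exception {
        CassandraDaemon daemon = new CassandraDaemon();
        daemon.activate();   // init + start the daemon in-process

        // ... write at least one row over Thrift against localhost ...

        daemon.deactivate(); // after a write the JVM still does not exit;
                             // with reads only it exits even without this call
    }
}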
Modify your Capistrano script to install an init script. If you use Debian
or Red Hat you can copy or modify these:
https://github.com/Shimi/cassandra/blob/trunk/debian/init
https://github.com/Shimi/cassandra/blob/trunk/redhat/cassandra
and set up Capistrano to call /etc/init.d/cassandra stop.
I finally found some time to get back to this issue.
I turned on DEBUG logging on the StorageProxy and it shows that all of these
requests are reads from the other datacenter.
Shimi
On Tue, Apr 12, 2011 at 2:31 PM, aaron morton wrote:
> Something feels odd.
>
> From Peter's nice write
ad the SSTable you load data that is hardly accessed
into the OS cache.
Another thing you should be aware of: if you need to run any of
the nodetool CF tasks and you really need it only for a specific CF, running it
on that specific CF is better and faster.
Shimi
Big sstables mean long compactions, and for a major compaction you will need
free disk space equal to the size of all the sstables (which you should have
anyway).
Shimi
On Sun, May 1, 2011 at 2:03 PM, David Boxenhorn wrote:
> I'm having problems administering my cluster because I have too
You can use memtable_flush_after_mins instead of the cron job.
Shimi
2011/4/19 Héctor Izquierdo Seliva
>
> El mié, 20-04-2011 a las 08:16 +1200, aaron morton escribió:
> > I think there may be an issue here, we are counting the number of columns
> in the operation. When deleting an e
I had the same thing.
Node restart should solve it.
Shimi
On Sun, Apr 17, 2011 at 4:25 PM, Dikang Gu wrote:
> +1.
>
> I also met this problem several days before, and I haven't got a solution
> yet...
>
>
> On Sun, Apr 17, 2011 at 9:17 PM, csharpplusproject <
oking the DynamicSnitch MBean I don't see any problems with any of the
nodes. My guess is that during the reset time there are reads that are sent
to the other data center.
>
> Hope that helps
> Aaron
>
Shimi
>
> On 12 Apr 2011, at 01:28, shimi wrote:
>
> I fina
:
org.apache.cassandra.locator.NetworkTopologyStrategy
strategy_options:
  DC1: 2
  DC2: 2
replication_factor: 4
(DC1 and DC2 are taken from the IPs)
Is anyone familiar with this kind of behavior?
Shimi
; consider rebuilding the index as described in
http://www.mail-archive.com/user@cassandra.apache.org/msg03325.html
Shimi
The bigger the file, the longer it will take for it to be part of a
compaction again.
Compacting a bucket of large files takes longer than compacting a bucket of
small files.
Shimi
On Mon, Apr 4, 2011 at 3:58 PM, aaron morton wrote:
> mmm, interesting. My theory was
>
> t0 - major compac
How did you solve it?
On Sun, Apr 3, 2011 at 7:32 PM, Anurag Gujral wrote:
> Now it is using all three disks. I want to understand why the recommended
> approach is to use
> one single large volume/directory and not multiple ones; can you please
> explain in detail.
> I am using SSDs using thre
ll load it with tens of GB of data (not
an sstable copy) and test the upgrade again.
I made a mistake in not backing up the data files before I upgraded.
Shimi
On Tue, Feb 22, 2011 at 2:24 PM, David Boxenhorn wrote:
> Shimi,
>
> I am getting the same error that you report here.
at org.apache.cassandra.io.sstable.IndexHelper.skipBloomFilter(IndexHelper.java:51)
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:69)
... 19 more
Shimi
The first exception is being thrown on 3 nodes during compaction.
The second exception (Internal error processing get_range_slices) is being
thrown all the time by a fourth node. I disabled gossip and any client
traffic to it and I still get the exceptions.
Is it possible to boot a node with gossip disab
On 10 Feb 2011, at 13:42, Dan Hendry wrote:
Out of curiosity, do you really have on the order of 1,986,622,313 elements
(I believe elements=keys) in the cf?
Dan
No. I was too puzzled by the numbers
On Thu, Feb 10, 2011 at 10:30 AM, aaron morton
wrote:
> Shimi,
> You may be seei
what went wrong?
Shimi
Same here, Hector with Java.
Shimi
On Fri, Jan 14, 2011 at 9:13 PM, Dan Kuebrich wrote:
> We've done hundreds of gigs in and out of cassandra 0.6.8 with pycassa 0.3.
> Working on upgrading to 0.7 and pycassa 1.03.
>
> I don't know if we're using it wrong, but the "
I modified the code to limit the size of the SSTables.
I will be glad if someone can take a look at it
https://github.com/Shimi/cassandra/tree/cassandra-0.6
Shimi
On Fri, Jan 7, 2011 at 2:04 AM, Jonathan Shook wrote:
> I be
I use Capistrano for installs, upgrades, start, stop and restart.
I use it for other projects as well.
It is very useful for automated tasks that need to run on multiple machines.
Shimi
On 2011 1 6 21:38, "B. Todd Burruss" wrote:
has anyone created a maven plugin, like cargo for tomcat, for autom
According to the code it makes sense.
submitMinorIfNeeded() calls doCompaction(), which calls
submitMinorIfNeeded().
With minimumCompactionThreshold = 1, submitMinorIfNeeded() will always run
compaction.
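A self-contained toy model of that cycle (not the real code, just the shape of
the recursion being described):

public class CompactionLoopModel {
    static int minimumCompactionThreshold = 1;
    static int sstables = 1;

    static void submitMinorIfNeeded() {
        if (sstables >= minimumCompactionThreshold)  // always true when threshold is 1
            doCompaction();
    }

    static void doCompaction() {
        sstables = 1;            // a compaction leaves at least one sstable behind
        submitMinorIfNeeded();   // re-check immediately -> recurses forever
    }

    public static void main(String[] args) {
        submitMinorIfNeeded();   // ends in StackOverflowError
    }
}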
Shimi
On Thu, Jan 6, 2011 at 10:26 AM, shimi wrote:
>
>
> On Wed, Jan 5, 2011 at
On Wed, Jan 5, 2011 at 11:31 PM, Jonathan Ellis wrote:
> Pretty sure there's logic in there that says "don't bother compacting
> a single sstable."
No. You can do it.
Based on the log I have a feeling that it triggers an infinite compaction
loop.
> On Wed,
>> > Pretty sure there's logic in there that says "don't bother compacting
>> > a single sstable."
>> >
>> > On Wed, Jan 5, 2011 at 2:26 PM, shimi wrote:
>> >> How is minor compaction triggered? Is it triggered only when a new
>> >> SSTable is added?
>> >>
>> >>
the rest.
Shimi
On Tue, Jan 4, 2011 at 9:56 PM, Peter Schuller
wrote:
> > I don't have a problem with disk space. I have a problem with the data
> > size.
>
> [snip]
>
> > Bottom line is that I want to reduce the number of requests that goes to
> > disk. Since
Yes I am aware of that.
This is the reason I upgraded to 0.6.8.
Still, all the deleted rows in the biggest SSTable will only be removed by a major
compaction.
Shimi
On Tue, Jan 4, 2011 at 6:40 PM, Robert Coli wrote:
> On Tue, Jan 4, 2011 at 4:33 AM, Peter Schuller
> wrote:
> > For som
(I think prior to 0.6.3) there was a case of stuck bootstrap
that required a restart of the new node and of the nodes that were supposed to
stream data to it. As far as I remember this case was resolved; I haven't
seen this problem since then.
Shimi
On Tue, Jan 4, 2011 at 3:01 PM, Ran Tavory
ndra do it for me, but then the data size will get
even bigger and the response time will be worse. I can do it manually, but I
prefer it to happen in the background with less impact on the system.
Shimi
On Tue, Jan 4, 2011 at 2:33 PM, Peter Schuller
wrote:
> > This is what I thought. I w
hurt you.
It might be that the only way to solve this problem is by having at least
two copies of each row in each data center and using the dynamic snitch.
Shimi
On Mon, Jan 3, 2011 at 7:55 PM, Peter Schuller
wrote:
> > Major compaction does it, but only if GCGraceSeconds has elapsed. See:
>
o
add/remove nodes. I do remember that it took a few hours.
The node will join the ring only when it finishes the bootstrap.
Shimi
On Tue, Jan 4, 2011 at 12:28 PM, Ran Tavory wrote:
> I asked the same question on the IRC but no luck there, everyone's asleep
> ;)...
>
> Us
Let's assume I have:
* a single 100GB SSTable file
* min compaction threshold set to 2
If I delete rows which are located in this file, is the only way to "clean"
the deleted rows to insert another 100GB of data or to trigger a
painful major compaction?
Shimi
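(For context on why inserts are the trigger: under 0.6's size-tiered minor
compaction, sstables are grouped into buckets of roughly similar size, if I
remember the code right about 0.5x to 1.5x of the bucket average. A 100GB
sstable sits alone in its bucket no matter how many small sstables you flush,
so with min compaction threshold = 2 nothing touches it short of another
~100GB of data or a major compaction.)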
Writes use append, and sstables are only read after they have been written.
Shimi
I have seen this error in 0.6.x when I was missing the cache directory
configuration.
Maybe you are missing something in your configuration.
Shimi
On Mon, Dec 13, 2010 at 12:45 PM, aaron morton wrote:
> I've seen that before when cassandra.yaml file cannot be found or is
> corrupted
So if I use a different connection (Thrift via Hector), will I get the
same results? It makes sense when you use OPP, and I assume it is the same
with RP. I just wanted to make sure this is the case and that there is no state
being kept.
Shimi
On Sun, Dec 12, 2010 at 8:14 PM, Peter Sch
Is the same connection required when iterating over all the rows with the
Random Partitioner, or is it possible to use a different connection for each
iteration?
Shimi
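For reference, the usual stateless pattern, which is why any connection should
do (a sketch against the 0.7-style Thrift signature; on 0.6 the call also
takes the keyspace as its first argument, and the CF name is made up):

import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import org.apache.cassandra.thrift.*;

public class FullScan {
    static void scan(Cassandra.Client client) throws Exception {
        ColumnParent parent = new ColumnParent("MyCF");
        SlicePredicate predicate = new SlicePredicate().setSlice_range(
            new SliceRange(ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 1000));

        byte[] start = new byte[0];
        while (true) {
            KeyRange range = new KeyRange().setStart_key(start)
                                           .setEnd_key(new byte[0]).setCount(100);
            List<KeySlice> page =
                client.get_range_slices(parent, predicate, range, ConsistencyLevel.ONE);
            for (KeySlice ks : page) {
                if (start.length > 0 && Arrays.equals(ks.getKey(), start))
                    continue; // start_key is inclusive: skip the already-seen row
                // ... process ks; all paging state lives in `start`, not the socket ...
            }
            if (page.size() < 100) break;
            start = page.get(page.size() - 1).getKey();
        }
    }
}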
I was patient (although it is hard when you have millions of requests that
are not served in time). I waited for a long time. There was nothing in
the logs or in JMX.
Shimi
On Mon, Sep 20, 2010 at 6:12 PM, Gary Dusbabek wrote:
> On Mon, Sep 20, 2010 at 09:51, shimi wrote:
> >
X.X.X.X is now part of the cluster
Does anyone have any idea how I can clean up the problematic node?
Does anyone have any idea how I can get rid of the gossip error?
Shimi
des
3. Restart all the nodes
4. If there is data in the bootstrapping node, I delete it before I restart.
Good luck
Shimi
On Sun, Jul 18, 2010 at 12:21 AM, Anthony Molinaro <
antho...@alumni.caltech.edu> wrote:
> So still waiting for any sort of answer on this one. The cluster still
> refus
the MIN free RAM that could be in your system?
Shimi
Do you mean that you don't release the connection back to the pool?
On 2010 7 14 20:51, "Jorge Barrios" wrote:
Thomas, I had a similar problem a few weeks back. I changed my code to make
sure that each thread only creates and uses one Hector connection. It seems
that client sockets are not being
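The usual borrow/release pattern with the old (0.6-era) Hector pool, from
memory, so treat the exact class and method names as assumptions:

import me.prettyprint.cassandra.service.CassandraClient;
import me.prettyprint.cassandra.service.CassandraClientPool;
import me.prettyprint.cassandra.service.CassandraClientPoolFactory;

public class PoolUsage {
    public static void main(String[] args) throws Exception {
        CassandraClientPool pool = CassandraClientPoolFactory.INSTANCE.get();
        CassandraClient client = pool.borrowClient("localhost", 9160);
        try {
            // ... client.getKeyspace("MyKeyspace"), reads/writes ...
        } finally {
            pool.releaseClient(client); // skipping this leaks the socket
        }
    }
}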
gets the same results.
I tried it both on a single node and on a cluster.
I use RP with version 0.6.3 and Hector.
Does anyone know how this can be done?
Shimi