Thanks Jonathan,
It's great that you still manage to help out individual users. I first came
across your blog while looking for a good reusable bloom filter implementation
a while back. Having surveyed every other Java implementation I could find, I
ended up extracting the implementation from
Hi Benjamin, as far as I know neither memcache nor redis supports range
queries on keys, so it would be really hard to read the counts back out of
them and then update the corresponding Cassandra columns.
On Thu, Jul 1, 2010 at 3:57 AM, Benjamin Black wrote:
> ZK is way overkill for counters
I recently learned that when I get a key, I might get a tombstone.
How can I know if a returned key is a tombstone? (I need to ignore them for
my application.)
Tombstones are internal to Cassandra and are never sent to the client.
On Thu, Jul 1, 2010 at 2:20 AM, David Boxenhorn wrote:
> I recently learned that when I get a key, I might get a tombstone.
>
> How can I know if a returned key is a tombstone? (I need to ignore them for
> my application.)
>
Great! Thanks!
On Thu, Jul 1, 2010 at 3:40 PM, Jonathan Ellis wrote:
> Tombstones are internal to Cassandra and are never sent to the client.
>
> On Thu, Jul 1, 2010 at 2:20 AM, David Boxenhorn wrote:
> > I recently learned that when I get a key, I might get a tombstone.
> >
> > How can I know
I understand that tombstones are an internal implementation detail ... yet, the
fact remains in 0.6.2 that a key/col creation followed by a delete of the
key/col will result in the key being returned in a get_range_slices call. If
the CF is flushed and compacted (after GCGraceSeconds), the key will no longer
be returned.
Hi,
Can you share your experience with running Cassandra as a Windows Service?
Thank you,
Viktor
From: Kochheiser,Todd W - TO-DITT1 [mailto:twkochhei...@bpa.gov]
Sent: Thursday, June 10, 2010 8:34 PM
To: 'user@cassandra.apache.org'
Subject: Running Cassandra as a Windows Service
For various rea
Just a short note to add:
If you delete a key, and it has not been removed via a flush and compaction,
it will be returned as a key with no (super)column(s) from a
get_range_slices call.
Should you try to write to the same column family using the same key as a
tombstone, it will be silently ignored.
http://wiki.apache.org/cassandra/FAQ#range_ghosts
On Thu, Jul 1, 2010 at 6:35 AM, Phil Stanhope wrote:
> I understand that tombstones are an internal implementation detail ... yet, the
> fact remains in 0.6.2 that a key/col creation followed by a delete of the
> key/col will result in the key being
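To make the FAQ entry concrete, here is a minimal sketch of skipping range
ghosts on the client side. It assumes the Thrift-generated Python bindings
for Cassandra 0.6 (keyspace passed per call, standard Thrift transport
setup); the keyspace and CF names are placeholders. A ghost comes back as a
KeySlice whose columns list is empty.

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from cassandra import Cassandra
from cassandra.ttypes import (ColumnParent, SlicePredicate, SliceRange,
                              KeyRange, ConsistencyLevel)

# Standard Thrift connection boilerplate for 0.6.
socket = TSocket.TSocket('localhost', 9160)
transport = TTransport.TBufferedTransport(socket)
client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))
transport.open()

parent = ColumnParent(column_family='Standard1')
predicate = SlicePredicate(
    slice_range=SliceRange(start='', finish='', reversed=False, count=100))
key_range = KeyRange(start_key='', end_key='', count=1000)

slices = client.get_range_slices('Keyspace1', parent, predicate,
                                 key_range, ConsistencyLevel.QUORUM)

# Deleted-but-not-yet-compacted rows come back with no columns; skip them.
live_rows = [ks for ks in slices if ks.columns]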
On Thu, Jul 1, 2010 at 6:44 AM, Jools wrote:
> Should you try to write to the same column family using the same key as a
> tombstone, it will be silently ignored.
Only if you perform the write with a lower timestamp than the delete
you previously performed.
--
Jonathan Ellis
Project Chair, Apache Cassandra
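A toy model of the reconciliation rule Jonathan describes (this illustrates
last-write-wins; it is not Cassandra's actual code): a write only replaces a
prior delete if its timestamp is strictly higher than the tombstone's.

def write_survives(write_ts, tombstone_ts):
    # Last-write-wins: the write is visible only if it is newer
    # than the tombstone left by the delete.
    return write_ts > tombstone_ts

assert write_survives(write_ts=1005, tombstone_ts=1000)      # write wins
assert not write_survives(write_ts=995, tombstone_ts=1000)   # silently ignored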
It's happening consistently when I take any node out of rotation.
On Thu, Jul 1, 2010 at 2:24 AM, Jonathan Ellis wrote:
> Presumably the failure detector generated a false positive for a
> second node temporarily
>
> On Wed, Jun 30, 2010 at 10:55 PM, James Golick
> wrote:
> > Oops. I meant to s
Then either you have at least one machine that thinks RF=1 or you found a bug.
On Thu, Jul 1, 2010 at 7:08 AM, James Golick wrote:
> It's happening consistently when I take any node out of rotation.
>
> On Thu, Jul 1, 2010 at 2:24 AM, Jonathan Ellis wrote:
>>
>> Presumably the failure detector g
This problem was solved by forming a 3-node large-instance cluster; the
pause went away. I thought I would try a single-node configuration to test
the intensive inserts, which you would expect to just work (though perhaps
not perform well). It turns out somehow Cassandra likes to have a minimum
amount
Hi Utku,
If I'm not mistaken, I think this case would be a good fit for keeping the
counters in redis. The actual data is (I believe) still being stored in
Cassandra. The counts could be copied out of redis back into Cassandra every
night / hour / minute depending on the user's need, and removed from redis.
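A sketch of the pattern described above, assuming the redis-py client; the
actual Cassandra write is left as a stub since the thread doesn't name a
client library, and the function names are made up for illustration.

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def bump(counter_name):
    # Increments are cheap and atomic in redis.
    r.incr(counter_name)

def flush_to_cassandra(counter_names, write_count):
    # Every night / hour / minute: move counts into Cassandra and reset
    # them in redis. GETSET returns the old value and stores the new one
    # atomically, so no increments are lost between the read and reset.
    for name in counter_names:
        count = int(r.getset(name, 0) or 0)
        if count:
            write_count(name, count)  # stub: your Cassandra write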
Or the same key, in some cases. If you have multiple operations
against the same columns 'at the same time', their ordering may be
indeterminate.
This can happen if the effective resolution of your timestamp is
coarse enough to bracket multiple operations. Milliseconds are not
fine enough in many cases.
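One common remedy for the coarse-resolution problem above is to issue
timestamps in microseconds; a tiny sketch:

import time

def usec_timestamp():
    # Microseconds since the epoch: a thousand times finer than
    # milliseconds, so back-to-back operations from a single client
    # are far less likely to collide on the same timestamp.
    return int(time.time() * 1000000)

Ties are still possible across clients, so operations that must be strictly
ordered should go through a single writer or carry a client-side sequence.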
As a first step, I'd like to reproduce the test from
http://spyced.blogspot.com/2010/01/cassandra-05.html on my current setup.
Can you post the storage-conf.xml that was used so that I can match the
settings as much as possible?
Thanks,
-- Oren
On Jul 1, 2010, at 3:15 AM, Oren Benjamin wrote:
From the stress.py code, it looks like the default storage-conf.xml was used
(at least schema-wise). I'll give that a go for now.
On Jul 1, 2010, at 1:31 PM, Oren Benjamin wrote:
As a first step, I'd like to reproduce the test from
http://spyced.blogspot.com/2010/01/cassandra-05.html on my c
I've been running it in our development & test environments as a Windows
Service without any problem. I have not been too sophisticated in my
configurations, but have been running some simple two node clusters. At this
point nothing has "yet" caused me any concern.
I have been working on a co
Can someone direct me how to resolve this issue in cassandra 0.6.2 version?

./stress.py -o insert -n 1 -y regular -d ec2-174-129-65-118.compute-1.amazonaws.com --threads 5 --keep-going

Created keyspaces. Sleeping 1s for propagation.
Traceback (most recent call last):
  File "./stress.py", line
you're running a 0.7 stress.py against a 0.6 cassandra, that's not going to
work
On Thu, Jul 1, 2010 at 12:16 PM, maneela a wrote:
> Can someone direct me how to resolve this issue in cassandra 0.6.2 version?
>
> ./stress.py -o insert -n 1 -y regular -d
> ec2-174-129-65-118.compute-1.amazona
Thanks Jonathan
--- On Thu, 7/1/10, Jonathan Ellis wrote:
From: Jonathan Ellis
Subject: Re: Cassandra 0.6.2 stress test failing due to setKeyspace issue
To: user@cassandra.apache.org
Date: Thursday, July 1, 2010, 3:32 PM
you're running a 0.7 stress.py against a 0.6 cassandra, that's not going to
work
Hi,
Can someone please shed some light on how I can import data from MySQL into a
Cassandra cluster?
- Is there any tool available?
OR
- Do I have to write my own client using Thrift that will read the export file
(*.sql) and insert the records into the database?
Thanks
raich
(I realize the ability to get/set a count consistently is coming in an
upcoming release)
Can someone give me a high level of the design of the vector map solution?
Is the actual count value stored in the CF row or is it stored separately?
In this video: http://vimeo.com/5185526
Avinash mentions that the previous presenter covered a lot of what he was to
cover. Does anyone have a link to that presentation?
So I'm trying to map how facebook implemented a CF of type Super to index
message terms.
Is this json representation correct?

MessageIndex = {
    userid1 : {
        aloha : { messageIdList: "234,2343234,23423434,234255,345345,2342,532432" },
        clown : { messageIdList: "632, 2342, 23452, 234234, 23423
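As a sketch of the shape the JSON above describes (row key = user id,
supercolumn = term, subcolumn messageIdList = comma-separated message ids),
here is plain Python that builds that structure in memory; the supercolumn
write itself (e.g. via batch_mutate) is omitted, and the tokenizer is a
naive stand-in.

from collections import defaultdict

def build_message_index(messages):
    # messages: iterable of (userid, message_id, text)
    index = defaultdict(lambda: defaultdict(list))
    for userid, message_id, text in messages:
        for term in set(text.lower().split()):  # naive tokenizer
            index[userid][term].append(str(message_id))
    # Collapse each term's ids into the single messageIdList subcolumn.
    return {userid: {term: {'messageIdList': ','.join(ids)}
                     for term, ids in terms.items()}
            for userid, terms in index.items()}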
I think that was Jay's Voldemort talk. Videos linked on
http://blog.oskarsson.nu/2009/06/nosql-debrief.html
On Thu, Jul 1, 2010 at 4:13 PM, S Ahmed wrote:
> In this video: http://vimeo.com/5185526
>
> Avinash mentions that the previous presenter covered a lot of what he was to
> cover. Does any
Thanks Jonathan, we will give it a try.
2010/6/30 Jonathan Ellis
> On Mon, Jun 28, 2010 at 10:11 PM, albert_e wrote:
> > Hi, all
> >
> > We have several nodes in DC1 and DC2 and we want to move all nodes in DC2 to
> > a new DC3; the IPs will also change. The whole process will last about 1-2
throttle your writes when they start timing out
(moved to user@, bcc dev@)
On Thu, Jul 1, 2010 at 9:08 PM, Peng Guo wrote:
> Hi
>
> I am doing a test with 30 nodes.
>
> After I started 300 data-insert processes, I looked at the tpstats:
>
> [ dw-greenplum-4] Pool Name Active Pe
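A sketch of what "throttle your writes" can look like on the client side,
assuming the Thrift TimedOutException from the 0.6 Python bindings; the
do_insert callable is a placeholder for whatever write you are issuing.

import time
from cassandra.ttypes import TimedOutException

def insert_with_backoff(do_insert, max_retries=5):
    # Timeouts are the cluster's overload signal: back off
    # exponentially instead of piling on more writes.
    delay = 0.1
    for _ in range(max_retries):
        try:
            return do_insert()
        except TimedOutException:
            time.sleep(delay)
            delay *= 2
    raise RuntimeError('inserts kept timing out; slow the writers down')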
On Jul 1, 2010, at 1:33 PM, Rana Aich wrote:
> Can someone please shed some light on how I can import data from MySQL
> into a Cassandra cluster?
> - Is there any tool available?
> OR
> - Do I have to write my own client using Thrift that will read the export
> file (*.sql) and insert the
As Paul said, you need to re-build your data in a Cassandra-friendly
manner. Reading SQL files does not seem a very efficient way to do that,
though. Most databases can output much simpler formats, like CSV. But
then, why export at all? If the MySQL instance and the Cassandra
instance are both ad
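For the CSV route suggested above, a minimal sketch: the export command in
the comment is standard MySQL, and the per-row Cassandra write is a stub
since no client library is specified in the thread.

import csv

def load_csv(path, write_row):
    # Stream a MySQL CSV export, e.g. one produced with:
    #   SELECT ... INTO OUTFILE '/tmp/export.csv'
    #   FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    # The first column is used as the row key, the rest become columns.
    with open(path, newline='') as f:
        for row in csv.reader(f):
            key, columns = row[0], row[1:]
            write_row(key, columns)  # stub: your Thrift/client insert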