On 31 May 2011, at 08:56, Amrita Jayakumar wrote:
> So what you are saying is that, if I want to write PHP code to load data from
> a file into Cassandra, I don't have to do any separate installation for
> Thrift?
That's right. Thrift was used to generate the PHP client libraries; you don't
Hi,
I have a cluster with two nodes (node A and node B) and ran a test as
follows:
1). set commitlog sync in batch mode and the sync batch window to 0 ms
2). one client wrote random keys in an infinite loop with consistency level
QUORUM and recorded the keys in a file after the insert() method returned normally
Currently, to prevent a read failure in the case of a single replica failure,
I need to specify CL > ONE when updating a counter, and when such an add
happens,
the wait is longer than for a regular insert:
coordinator ---> leader ---> leader does sstable tally up > leader waits
for at least one replica t
>I went through https://issues.apache.org/jira/browse/CASSANDRA-1072
>and realize that the consistency guarantees of Counters are a bit different
>from those of regular columns,
Not anymore.
> so could you please confirm that the following are true?
>1) comment
>https://issues.apache.org/jira/b
thanks Sylvain, I agree with what you said for the first few paragraphs
Jeremy corrected me just now.
Regarding the last point, you are right in using the term "by operation",
but you should also note that it's a leader in terms of
"data ownership", in the sense that the leader has the authoritative po
What replication factor did you set for the keyspace?
If the RF is 2, your data should be replicated to both nodes. If
the RF is 1, you will lose half of the data when node A is down.
maki
2011/5/31 Preston Chang :
> Hi,
> I have a cluster with two nodes (node A and node B) and make a
Never mind, I see that if the leader/owner dies, the other replicas can simply
use whichever of them has the highest count for the leader's bucket,
though not the authoritative number.
On Tue, May 31, 2011 at 1:21 AM, Yang wrote:
> thanks Sylvain, I agree with what you said for the first few paragraphs
> J
My RF is 2.
When node A is down, the commit log should already be fsynced to disk in my test
scenario, so there should be no NOTFOUND keys, but there are some NOTFOUND keys.
I am puzzled.
2011/5/31 Maki Watanabe
> What replication factor did you set for the keyspace?
> If the RF is 2, your data sho
I see that the problem happens when I try to remove records from my application,
but these records were created earlier from the CLI.
Is there an explanation for this?
From: Peter Schuller
To: user@cassandra.apache.org
Sent: Friday, 27 May 2011, 19:05
Subject: Re:
> we claim that, because of the FIFO assumption, here node 2 has seen
> all messages from coord 10 with a serial <=9, and node 3 has seen
> all messages with serial <=14, so that node 2's history is a prefix of
> that of node 3. i.e. what we read out immediately from node 2
> represents a value
One more thing: if you keep one sub-count for each coordinator,
that won't be fun in a 400-node cluster. Or, to put this another
way, you put on the client the burden of making sure that
each counter only ever uses a reasonably small set of coordinators.
Which can actually be a major hea
hi everyone,
the current nodes i have deployed (4) have all been working fine, with
not a lot of data ... more reads than writes at the moment. as i had
monitoring disabled, when one node's OS killed the cassandra process
due to out of memory problems ... that was fine. 24 hours later,
another n
In your step 5, which node are you connected to?
There is a possibility that during the time node A was shutting down, mutations
which had been started were completed on node B. In that case the client would
have gotten a TimedOutException, meaning the operation did not complete at the
require
Hi,
I am trying to create a MapReduce job that calculates the average of values stored
in Cassandra and writes the result back to Cassandra (using
ColumnFamilyOutputFormat and ColumnFamilyInputFormat). I use the Brisk
distribution of Hadoop, but I don't know if that is somehow related.
My code is here: http://
Suppose I have a cluster with 10 nodes and RF=5.
Will every write succeed if one or two of my nodes are down and I use ConsistencyLevel=ALL? Or will some of the writes
fail?
Hi!
I'm considering setting up a small (4-6 nodes) Cassandra cluster on
machines that each have 3x2TB disks. There's no hardware RAID in the
machine, and if there were, it could only stripe single disks
together, not parts of disks.
I'm planning RF=2 (or higher).
I'm pondering what the best dis
You can write if you use CL=QUORUM, but you can't write with CL=ALL.
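For illustration, a rough sketch of the arithmetic (plain Python, just to show the math):

    rf = 5
    quorum = rf // 2 + 1         # QUORUM needs 3 of the 5 replicas to acknowledge
    live_replicas = rf - 2       # worst case: both down nodes hold replicas of the key
    print(quorum <= live_replicas)   # True  -> QUORUM writes still succeed
    print(rf <= live_replicas)       # False -> ALL fails for any key replicated on a down node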
maki
On 2011/05/31, at 21:03, Flavio Baronti wrote:
> Suppose i have a cluster with 10 nodes and RF=5.
> Will every write succeed if one or two of my nodes are down, and I use
> ConsistencyLevel=ALL? Or will some of the writes f
I'm trying to run a repair on a 0.7.6-2 node. After running the repair command,
this line shows up in cassandra.log, but nothing else. It's been hours.
Nothing is seen in the logs from other servers or with nodetool commands like
netstats or tpstats.
How do I know if the repair is actu
I have log files of the format . I want to load these
files into Cassandra using PHPcassa.
I have installed Cassandra 0.7. Can anyone please guide me through the exact
procedure for how to install PHPcassa and take things forward?
Thanks and Regards,
Amrita
There are roughly two steps in repair:
1) Each node involved (that's the node the repair was started on plus the one
listed in the log) constructs a merkle tree. This amounts to a
specific compaction,
so you can see the progress in JMX->CompactionManager. If, for the nodes
involve
http://thobbs.github.com/phpcassa/installation.html
If you already have the log files, pycassa (Python) may be better
suited and quicker:
http://pycassa.github.com/pycassa/
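For example, a minimal pycassa sketch for loading lines from a log file (the keyspace, column family and column names below are made up; adjust them to your schema):

    import time
    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    logs = pycassa.ColumnFamily(pool, 'Logs')

    with open('app.log') as f:
        for i, line in enumerate(f):
            # one row per log line; pick whatever key scheme suits your queries
            logs.insert('app.log:%d' % i,
                        {'line': line.strip(),
                         'loaded_at': str(int(time.time() * 1e6))})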
On Tue, May 31, 2011 at 4:03 PM, Amrita Jayakumar
wrote:
> I have log files of the format . I want to load these
> f
http://thobbs.github.com/phpcassa/installation.html
They also have a mailing list and an IRC channel.
http://thobbs.github.com/phpcassa/
maki
2011/5/31 Amrita Jayakumar :
> I have log files of the format . I want to load these
> files into cassandra using PHPcassa.
> I have installed Cassandra 7.
You're using a lower timestamp granularity in your app. The standard
is to use microseconds (not milliseconds!) since the epoch. The CLI and
all of the mainstream clients adhere to this.
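For example (Python, just to illustrate the granularity):

    import time
    timestamp = int(time.time() * 1e6)   # microseconds since the epoch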
On Tue, May 31, 2011 at 4:21 AM, karim abbouh wrote:
> i see that the problem happens when i try to remove records
The place to start is with the statistics Cassandra logs after each GC.
On Tue, May 31, 2011 at 5:01 AM, Sasha Dolgy wrote:
> hi everyone,
>
> the current nodes i have deployed (4) have all been working fine, with
> not a lot of data ... more reads than writes at the moment. as i had
> monitorin
Have you read http://wiki.apache.org/cassandra/CassandraHardware ?
On Tue, May 31, 2011 at 7:47 AM, Erik Forsberg wrote:
> Hi!
>
> I'm considering setting up a small (4-6 nodes) Cassandra cluster on
> machines that each have 3x2TB disks. There's no hardware RAID in the
> machine, and if there wer
Thanks for the comments. Responses are inline and marked in blue, for easier
reading.
On Tue, May 31, 2011 at 2:27 AM, Sylvain Lebresne wrote:
>
> Making the FIFO assumption stand in face of node failure is possible,
> but it's a complicated problem by itself. You basically have to make
> sure that whe
I'm wondering how Cassandra implements appending values to fields. Since (so
the docs tell me) there's not really any such thing as an update in
Cassandra, I wonder if it falls into the same trap as MySQL does. With a query
like "update x set y = concat(y, 'a') where id = 1", MySQL re
On Tue, May 31, 2011 at 2:22 PM, Marcus Bointon
wrote:
> I'm wondering how cassandra implements appending values to fields. Since (so
> the docs tell me) there's not really any such thing as an update
> in Cassandra
You've answered your own question.
--
Jonathan Ellis
Project Chair
On Tue, May 31, 2011 at 2:22 PM, Marcus Bointon
wrote:
> mysql reads the entire value of y, appends the data, then writes the whole
> thing back, which unfortunately is an O(n^2) operation.
Actually, this analysis is incorrect. Appending M bytes to N bytes is O(N +
M), which isn't the same as O(N^2) at all.
As Jonathan stated, I believe that the insert is O(N + M), unless there
are some operations that I don't know about.
There are other NoSQL databases that can be used alongside Cassandra as "buffers"
for quick access and modification, and the content can then be dumped
into Cassandra for long-term stor
> 1). set commitlog sync in batch mode and the sync batch window in 0 ms
> 2). one client wrote random keys in infinite loop with consistency level
> QUORUM and record the keys in file after the insert() method return normally
> 3). unplug one server (node A) power cord
> 4). restart the server and
On Tue, May 31, 2011 at 4:57 PM, Victor Kabdebon
wrote:
> As Jonathan stated I believe that the insert is in O(N + M), unless there
> are some operations that I don't know.
>
> There are other NoSQL database that can be used with Cassandra as
> "buffers" for quick access and modification and then
Hi,
I'm running a query on Cassandra like "select count(*) from table where column1 =
v1 and ...", based on a secondary index on column1.
But using get_indexed_slices(), I have to fetch all the rows and count
them, which is not needed.
So a get_indexed_slices count API [1] would be very helpful, but it
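For what it's worth, a rough pycassa sketch of the client-side counting workaround (the keyspace and column family names are made up):

    import pycassa
    from pycassa.index import create_index_clause, create_index_expression

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'table')

    clause = create_index_clause([create_index_expression('column1', 'v1')],
                                 count=100000)
    # every matching row still has to be pulled back just to be counted
    matches = sum(1 for _ in cf.get_indexed_slices(clause))
    print(matches)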
On Tue, May 31, 2011 at 5:03 PM, Dan Kuebrich wrote:
>
>
> On Tue, May 31, 2011 at 4:57 PM, Victor Kabdebon <
> victor.kabde...@gmail.com> wrote:
>
>> As Jonathan stated I believe that the insert is in O(N + M), unless there
>> are some operations that I don't know.
>>
>> There are other NoSQL dat
I have a protocol schema hello.avpr:
{
  "protocol": "hello",
  "types": [
    { "name": "input_msg", "type": "record",
      "fields": [
        { "name": "date", "type": "int" },
        { "name": "msg",  "type": "string" }
      ]
    }
  ]
}
now if I need to serialize the record "input_msg", I'm going to need its
schema,
so I'd need to pass a schema
sorry wrong list ... please ignore
On Tue, May 31, 2011 at 4:26 PM, Yang wrote:
> I have a protocol schema hello.avpr
>
> {
> types: {
>{ name : input_msg , type: record : fields [
> { name: date , type int },
>{ name :msg, type: string}
> ]
> }
>
> }
>
> }
>
>
On 31 May 2011, at 23:03, Dan Kuebrich wrote:
> I think perhaps OP meant O(N * M), where N is number of rows and M is total
> bytes.
That's probably more accurate.
This is what it was doing: Say I repeatedly append 100 bytes to the same 1000
records. First time around that's 100,000 bytes to
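A back-of-the-envelope calculation (Python; the 10 passes are just an assumed figure) shows the difference:

    records = 1000
    append_size = 100      # bytes appended to each record per pass
    passes = 10            # assumed number of append passes

    # rewrite-the-whole-value: pass p rewrites p * append_size bytes per record
    rewrite_total = sum(records * append_size * p for p in range(1, passes + 1))
    # add-a-new-column: each pass writes only the new bytes
    append_total = records * append_size * passes

    print(rewrite_total)   # 5500000 bytes written
    print(append_total)    # 1000000 bytes written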
>at org.apache.log4j.Category.info(Category.java:666)
It seems that your Cassandra can't write its log because the device is full.
Check where your Cassandra log is written to. The log file path is
configured in the log4j.appender.R.File property
in conf/log4j-server.properties.
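If it helps, the default entry in 0.7 usually looks something like this (point it at a volume with free space):

    log4j.appender.R.File=/var/log/cassandra/system.log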
maki
2011/6/1 Bryce Godfrey :
Sounds like Ed is right and you should be doing the append as
add-a-new-column instead of overwrite-existing-column.
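Something along these lines with pycassa, for example (the keyspace and column family names are made up, and the column family is assumed to use a TimeUUIDType comparator):

    import uuid
    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    fragments = pycassa.ColumnFamily(pool, 'Fragments')

    def append(row_key, data):
        # each append writes one small new column instead of rewriting the whole value
        fragments.insert(row_key, {uuid.uuid1(): data})

    def read_value(row_key):
        # reassemble by concatenating the columns, which sort by TimeUUID
        return ''.join(fragments.get(row_key, column_count=10000).values())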
On Tue, May 31, 2011 at 8:36 PM, Marcus Bointon
wrote:
> On 31 May 2011, at 23:03, Dan Kuebrich wrote:
>
>> I think perhaps OP meant O(N * M), where N is number of rows and M is to
That did it. Once I moved the logs over to a folder on the /dev drive and deleted
the old logs directory, it started up.
Thanks!
-Original Message-
From: Maki Watanabe [mailto:watanabe.m...@gmail.com]
Sent: Tuesday, May 31, 2011 6:40 PM
To: user@cassandra.apache.org
Subject: Re: No space left
Hi Maki,
I have extracted phpcassa-0.7.a.4 and now I am trying to
configure the C extension.
For that, as per the link you mentioned, I've installed it. Now what is
left is to add the line
extension=thrift_protocol.so
to the php.ini file. But when I ran locate php.ini I got
Create a file with a phpinfo() call in it.
View it in your browser.
In the first section, see where it is loading php.ini from.
Failing that http://www.php.net/
On Jun 1, 2011 7:04 AM, "Amrita Jayakumar"
wrote:
> Hi Maki,
> I have extracted phpcassa-0.7.a.4 and now i am trying to
> configure the C extension.
> For
On 1 Jun 2011, at 07:03, Amrita Jayakumar wrote:
> into the php.ini file. But wen i fired locate php.ini i got many of them in
> the following locations.
>
> /etc/php5/apache2/php.ini
> /etc/php5/cgi/php.ini
> /etc/php5/cli/php.ini
> /usr/share/doc/php5-common/examples/php.ini-development
> /usr
So should I just create a directory phpcassa in the location /etc/php5/conf.d/,
and in phpcassa just create a file php.ini and include the line
extension=thrift_protocol.so in it?
Thanks and Regards,
Amrita
On Wed, Jun 1, 2011 at 10:44 AM, Marcus Bointon
wrote:
> On 1 Jun 2011, at
On 1 Jun 2011, at 07:21, Amrita Jayakumar wrote:
> so i just should create a directory phpcassa in the location
> /etc/php5/conf.d/ and in phpcassa just create a file php.ini and include
> the line extension=thrift_protocol.so in it
Nearly. Just run this:
echo 'extension=thrift_protocol.
Hi,
I have a column family Users. At present there are two keys in it:
ajayakumar {
=> (column=age, value=22, timestamp=1306820564285000)
=> (column=first, value=amrita, timestamp=1306820515836000)
=> (column=last, value=jayakumar, timestamp=1306820548681000)
On 1 Jun 2011, at 08:12, Amrita Jayakumar wrote:
> I have deployed this code into a php file phcass.php in the ubuntu machine in
> the location /var/www/vishnu/. But nothing happens when i execute the file
> through the browser. Neither can i find the data inserted in the column
> family 'Users
Yeah, I tried restarting... but I see the following:
/etc/init.d/apache2 restart
* Restarting web server apache2
apache2: Could not reliably determine the server's fully qualified domain
name, using 127.0.0.1 for ServerName
... waiting apache2: Could not reliably determine the server's f
I disabled the disk cache of the RAID controller; unfortunately it still lost
some data.
2011/6/1 Peter Schuller
> > 1). set commitlog sync in batch mode and the sync batch window in 0 ms
> > 2). one client wrote random keys in infinite loop with consistency level
> > QUORUM and record the keys in f