We don't use PHP to talk to Cassandra directly, but we do have the front end
communicate with our backend services over Thrift. We've used both the framed
and buffered transports; both required some tweaks. We use the PHP C extension
from the Thrift repo. I have to admit it's pretty crappy; we had to make
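For anyone wiring this up themselves, the transport choice looks roughly like
this with the generated bindings - a Java sketch (the PHP classes mirror it);
the host, port, and framed-vs-buffered choice here are assumptions, not our
exact setup:

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class CassandraConn {
        public static void main(String[] args) throws Exception {
            TSocket socket = new TSocket("localhost", 9160);
            // Framed: wrap the socket. The server must be configured to match
            // (the ThriftFramedTransport option in 0.6's storage-conf.xml).
            TTransport transport = new TFramedTransport(socket);
            // Buffered/unframed: use the socket directly instead of wrapping it.
            Cassandra.Client client =
                new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            // ... issue Thrift calls via `client` ...
            transport.close();
        }
    }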
If Digg uses PHP with Cassandra, can the library really be that old? Or are
they using their own custom PHP Cassandra client? (Probably, but just making
sure.)
On Fri, Apr 16, 2010 at 2:13 PM, Jonathan Ellis wrote:
> On Fri, Apr 16, 2010 at 12:50 PM, Lee Parker wrote:
> > Each time I start it up, it will work fine for about 1 hour and then it
> > will crash the servers.
I did regenerate the Thrift bindings. What I have found in testing is that
the batch_mutate command occasionally sends bad data to Thrift when I try to
insert a set of items with too many columns. I don't know if this is a
problem with PHP or the Thrift PHP library. I have found that a limit of
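Whatever that limit turns out to be, the usual workaround is to cap the batch
size on the client side. A minimal sketch (Java, against the 0.6 batch_mutate
signature; the keyspace name, consistency level, and the limit itself are
placeholders):

    import java.util.*;
    import org.apache.cassandra.thrift.*;

    public class ChunkedMutator {
        static final int BATCH_LIMIT = 50; // placeholder; tune until errors stop

        // Send batch_mutate in chunks so no single Thrift call carries
        // more rows (and therefore columns) than the bindings handle reliably.
        static void insertInChunks(Cassandra.Client client,
                Map<String, Map<String, List<Mutation>>> mutationMap)
                throws Exception {
            Map<String, Map<String, List<Mutation>>> chunk =
                new HashMap<String, Map<String, List<Mutation>>>();
            for (Map.Entry<String, Map<String, List<Mutation>>> row
                    : mutationMap.entrySet()) {
                chunk.put(row.getKey(), row.getValue());
                if (chunk.size() >= BATCH_LIMIT) {
                    client.batch_mutate("Keyspace1", chunk, ConsistencyLevel.ONE);
                    chunk.clear();
                }
            }
            if (!chunk.isEmpty()) {
                client.batch_mutate("Keyspace1", chunk, ConsistencyLevel.ONE);
            }
        }
    }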
On Fri, Apr 16, 2010 at 12:50 PM, Lee Parker wrote:
> This process is running on two clients, each working on a separate part of
> the MySQL data, which totals about 70 GB. Each time I start it up, it will
> work fine for about 1 hour and then it will crash the servers. The error
> message on the servers is usually an out of memory error.
On Fri, Apr 16, 2010 at 2:30 PM, Lee Parker wrote:
> As for the memtable thresholds, when I ran with lower thresholds, the server
> would be thrashing with compaction runs due to the dramatically increased
> number of SSTable files. That was when I was running 0.5.0. Has 0.6.0
> improved compaction?
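For reference, the thresholds in question live in storage-conf.xml in 0.6;
the values below are illustrative, not a recommendation:

    <!-- Serialized data per memtable before a flush is forced. -->
    <MemtableThroughputInMB>64</MemtableThroughputInMB>
    <!-- Column count (in millions) per memtable before a flush is forced. -->
    <MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
    <!-- Flush an idle memtable after this many minutes regardless. -->
    <MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>

Lowering these trades heap usage for more SSTables and more compaction work,
which is exactly the thrashing described above.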
I don't think it is a hardware issue. This is happening on multiple servers
and clients, on EC2 instances and my local development VM. I think you are
right that the timestamp errors are likely being caused by the Thrift PHP
bindings. The frustrating part is that I can't get the error to reproduce
consistently.
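One thing worth ruling out (my speculation, not something confirmed in this
thread): Cassandra timestamps are client-supplied i64 microseconds, and a
32-bit PHP build cannot represent a 64-bit integer natively, which is the
kind of thing that would produce intermittent bad timestamps. For comparison,
the usual Java idiom is just:

    // Client-supplied timestamp: microseconds since the epoch, as a 64-bit long.
    long timestamp = System.currentTimeMillis() * 1000L;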
Two more things you can do:
1) If you're running the updaters in the JVM (it sounded like you were doing
PHP?), then be sure that you're cleaning up the database sessions properly.
Hibernate, in particular, will keep a lot of bookkeeping data around otherwise,
and that can easily overflow your heap.
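The usual shape of that cleanup is a finally block around each unit of work;
a sketch with assumed names:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class ChunkRunner {
        static void runChunk(SessionFactory sessionFactory) {
            Session session = sessionFactory.openSession();
            Transaction tx = null;
            try {
                tx = session.beginTransaction();
                // ... do the updates for this chunk ...
                tx.commit();
            } catch (RuntimeException e) {
                if (tx != null) tx.rollback();
                throw e;
            } finally {
                // Without this, the session's first-level cache keeps every
                // loaded entity in memory for the session's lifetime.
                session.close();
            }
        }
    }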
On Fri, Apr 16, 2010 at 12:50 PM, Lee Parker wrote:
> Each time I start it up, it will
> work fine for about 1 hour and then it will crash the servers. The error
> message on the servers is usually an out of memory error.
Sounds like http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_in
Is crashing really how it should deal with restricted memory? It seems like,
if this is true, either a minimum required memory needs to be defined, or it
should adjust how it uses memory when less is available...
On Fri, Apr 16, 2010 at 11:07 AM, Avinash Lakshman <
avinash.laksh...@gmail.com> wrote:
>
Those memtable thresholds also need looking into. You are using a really
poor hardware configuration - 1.7 GB of RAM is not a configuration worth
experimenting with, IMO. Typical production deployments are running 16 GB of
RAM and quad-core 64-bit machines. It's hard, I would presume, to make any
recommendations
Row caching is not turned on.
Lee Parker
On Fri, Apr 16, 2010 at 12:58 PM, Paul Brown wrote:
>
> On Apr 16, 2010, at 10:50 AM, Lee Parker wrote:
> > [...]
> > I am trying to migrate data from MySQL into the cluster using the
> > following methodology:
> > 1. get 500 rows (12 columns each) from MySQL
On Apr 16, 2010, at 10:50 AM, Lee Parker wrote:
> [...]
> I am trying to migrate data from MySQL into the cluster using the following
> methodology:
> 1. get 500 rows (12 columns each) from MySQL
> 2. build a batch_mutate to insert these rows into one CF (1 row = 1 row)
> 3. build a second batch
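For concreteness, step 2 against the 0.6 Thrift API looks roughly like the
sketch below (Java; the keyspace, CF name, and SourceRow holder are
illustrative assumptions, not Lee's actual code):

    import java.nio.charset.Charset;
    import java.util.*;
    import org.apache.cassandra.thrift.*;

    public class Migrator {
        // Illustrative holder for one source row: a key plus 12 name/value pairs.
        static class SourceRow {
            String key;
            Map<String, String> columns;
        }

        static void insertBatch(Cassandra.Client client, List<SourceRow> rows)
                throws Exception {
            Charset utf8 = Charset.forName("UTF-8");
            long ts = System.currentTimeMillis() * 1000L; // i64 microseconds
            Map<String, Map<String, List<Mutation>>> mutationMap =
                new HashMap<String, Map<String, List<Mutation>>>();

            for (SourceRow r : rows) { // the 500 rows fetched in step 1
                List<Mutation> mutations = new ArrayList<Mutation>();
                for (Map.Entry<String, String> col : r.columns.entrySet()) {
                    Column c = new Column(col.getKey().getBytes(utf8),
                                          col.getValue().getBytes(utf8), ts);
                    ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
                    cosc.setColumn(c);
                    Mutation m = new Mutation();
                    m.setColumn_or_supercolumn(cosc);
                    mutations.add(m);
                }
                Map<String, List<Mutation>> byCf =
                    new HashMap<String, List<Mutation>>();
                byCf.put("MyCF", mutations); // one CF; 1 source row = 1 Cassandra row
                mutationMap.put(r.key, byCf);
            }
            // In 0.6 the keyspace is still a per-call argument.
            client.batch_mutate("Keyspace1", mutationMap, ConsistencyLevel.ONE);
        }
    }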
I am having major issues with stability on my Cassandra nodes. Here is the
setup:
Cassandra cluster - 2 EC2 small instances (1.7 GB RAM, single 32-bit core)
with an EBS volume for the Cassandra SSTables
Cassandra 0.6.0 w/ 1 GB heap space and 128 MB / 1 million memtable thresholds
Clients are also small EC2 web servers
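Rough napkin math on that setup (assuming the often-quoted rule of thumb that
a memtable's in-memory footprint runs several times its serialized size):
128 MB x ~3 overhead is roughly 384 MB per column family approaching flush.
With two CFs being loaded at those thresholds, plus compaction and Thrift
buffers, a 1 GB heap leaves very little headroom - which would line up with
the OOM after about an hour of inserts.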