Hello,
I'm using Cassandra with mx4j. I've been googling for half a day but can't find anything
usable to connect it to Zabbix. I just found Zapcat, but I don't want to make
any changes to the code, and then Munin with plugins
(https://github.com/jamesgolick/cassandra-munin-plugins)... I need something
that's easy to…
…to read info from that HTTP server, then write your own
> Zabbix templates.
>
> 2011/3/8 pob
>
>> Hello,
>>
>> I'm using Cassandra with mx4j. I've been googling for half a day but can't find
>> anything usable to connect it to Zabbix. I just found Zapcat, but I don't
>> want…
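Along the lines of that suggestion, here is a minimal sketch (Python 2, like the pycassa/stress.py tooling elsewhere in these threads) of pulling one value out of the mx4j HTTP adaptor so a Zabbix UserParameter can call the script. The hostname, the port 8081, the MBean ObjectName, the attribute and the XML layout are all assumptions about a typical mx4j setup, not anything confirmed in this thread; look at the adaptor's output in a browser first and adjust.

# Hedged sketch: fetch one JMX attribute via the mx4j HTTP adaptor and print it,
# so Zabbix can wrap this script in a UserParameter. Host, port, ObjectName,
# attribute name and XML layout are assumptions -- adjust to your setup.
import sys
import urllib
import xml.etree.ElementTree as ET

HOST, PORT = "cassandra-node1", 8081                            # assumed adaptor address
OBJECTNAME = "org.apache.cassandra.db:type=CompactionManager"   # example MBean
ATTRIBUTE = "PendingTasks"                                      # example attribute

# 'template=identity' asks the adaptor for raw XML instead of its HTML view.
url = "http://%s:%d/mbean?objectname=%s&template=identity" % (
    HOST, PORT, urllib.quote(OBJECTNAME))

tree = ET.parse(urllib.urlopen(url))
for attr in tree.findall(".//Attribute"):
    if attr.get("name") == ATTRIBUTE:
        print attr.get("value")
        sys.exit(0)
sys.exit(1)  # attribute not found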
Hello,
I set up a cluster with 3 nodes (4 GB RAM, 4 cores, RAID0 each). I experimented with
stress.py to see how fast my inserts are. The results are confusing.
In each case stress.py was inserting 170KB of data:
1)
stress.py was inserting directly to one node (-d Node1), RF=3, CL.ONE
30 inserts in 1296 se…
…some estimation of how big a stream I can write into the cluster, what happens if
I double the nodes of the cluster, and so on.
Thanks for any explanation or hints.
Best,
Peter
2011/3/20 pob
> Hello,
>
> I set up a cluster with 3 nodes (4 GB RAM, 4 cores, RAID0 each). I experimented with
> stress.py to…
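For the kind of estimate asked about above, a back-of-the-envelope sketch like the following may help; every number in it is a placeholder, not a figure from this thread, so plug in the row count, wall-clock time and row size your own stress.py run reports.

# Rough estimate of cluster write bandwidth from a stress.py run.
# All inputs are made-up placeholders -- substitute your own run's numbers.
rows = 30000            # rows inserted (from the stress.py output)
seconds = 1296.0        # wall-clock duration of the run
row_bytes = 170 * 1024  # approximate payload per row
rf = 3                  # replication factor of the keyspace

client_mb_s = rows * row_bytes / seconds / 1e6
# Each row is written RF times, so the cluster absorbs RF x the client volume.
cluster_mb_s = client_mb_s * rf

print "client-visible: %.2f MB/s, cluster-wide writes: %.2f MB/s" % (
    client_mb_s, cluster_mb_s)

To a first approximation, doubling the node count doubles the cluster-wide ceiling while RF, and therefore the per-row replication cost, stays the same.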
Hi,
I'm inserting data from a client node with stress.py into a cluster of 6 nodes.
They are all on a 1Gbps network; the measured maximum real throughput of the
network is 930Mbps.

python stress.py -c 1 -S 17 -d{6nodes} -l3 -e QUORUM --operation=insert -i 1 -n 50 -t100

The problem is stress.py…
You mean more threads in stress.py? The purpose was to figure out what's the
biggest bandwidth that C* can use.
Peter

2011/3/21 Ryan King
> On Mon, Mar 21, 2011 at 4:02 AM, pob wrote:
> > Hi,
> > I'm inserting data from a client node with stress.py into a cluster of 6 nodes.…
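As a hedged sanity check on that goal: with one 17-byte column per insert (-c 1 -S 17), the request rate needed to fill a 930Mbps link is enormous, so per-request overhead is hit long before the wire is. The latency figure below is an assumption, not a measurement from this thread.

# Little's-law style estimate of the concurrency needed to saturate the link
# with tiny QUORUM inserts. The latency is an assumption; substitute the
# average write latency your own stress.py run reports.
link_bit_s = 930e6         # measured usable network throughput
payload_bytes = 17.0       # one 17-byte column per row (-c 1 -S 17)
write_latency_s = 0.004    # assumed average QUORUM write latency (4 ms)

ops_needed = link_bit_s / 8 / payload_bytes      # inserts/s to fill the link
inflight_needed = ops_needed * write_latency_s   # concurrent requests in flight

print "~%.0f inserts/s, i.e. ~%.0f requests in flight" % (ops_needed, inflight_needed)

With payloads that small, larger columns or batched mutations are a more realistic way to probe the bandwidth ceiling than client threads alone.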
Hello,
what kind of bug is it?
If I do nodetool host1 ring, the output is:

Address  Status  State   Load     Owns     Token
                                           141784319550391026443072753096570088105
1.174    Up      Normal  4.14 GB  16.67%   0
1.173    Down    Normal  4.07 GB  16.67%   283568639100782052886145…
>> …you are working at Quorum and your replication factor is less than 5.
>>
>> Aaron
>>
>> On 23/03/2011, at 11:31 PM, pob wrote:
>>
>> > Hello,
>> >
>> > what kind of bug is it?
>> >
>> >
>> > If I do nodetool host1 ring,…
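For reference, the quorum arithmetic behind that remark, as a small sketch; the RF values are illustrative, not this cluster's actual setting.

# QUORUM needs floor(RF/2) + 1 live replicas, so it tolerates RF - quorum of a
# key's replicas being down. With RF < 5 that means at most one.
for rf in (2, 3, 5):
    quorum = rf // 2 + 1
    print "RF=%d: quorum=%d, tolerates %d replica(s) down" % (rf, quorum, rf - quorum)

So with one node Down as in the ring output above, QUORUM still succeeds at RF=3, but at RF=2 it already fails for keys that have a replica on the dead node.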
Hello,
I'm experiencing a really strange problem. I wrote data into a Cassandra
cluster and I'm trying to check whether the data that was inserted and then fetched
is equal to the source data (a file). The code below is the Celery task that does
the comparison with sha1(). The problem is that the Celery worker is returning
since time…
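The task itself did not survive the archive snippet, so the following is only a minimal sketch of that kind of check, assuming pycassa, a made-up keyspace and column family, and that each file was stored as a single column value; it is not the poster's code.

# Hypothetical reconstruction of the described check: hash the source file,
# fetch the stored copy with pycassa, and compare sha1 digests.
# Keyspace, column family, key and column names are made-up examples.
import hashlib
import pycassa

pool = pycassa.ConnectionPool('Keyspace1', server_list=['node1:9160'])
cf = pycassa.ColumnFamily(pool, 'FileContent')

def digests_match(path, row_key, column='data'):
    with open(path, 'rb') as f:
        file_sha1 = hashlib.sha1(f.read()).hexdigest()
    stored = cf.get(row_key, columns=[column])[column]
    return file_sha1 == hashlib.sha1(stored).hexdigest()

print digests_match('/tmp/source.bin', 'some-row-key')

When such a check fails, printing both digests together with the lengths of the two byte strings (essentially what Aaron suggests below with a breakpoint) usually shows whether the mismatch is in the stored data or in how it was read back.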
Hello,
I did the cluster configuration following
http://wiki.apache.org/cassandra/HadoopSupport. When I run
pig example-script.pig -x local
everything is fine and I get correct results.
The problem occurs with -x mapreduce;
I'm getting these errors:
2011-04-20 01:24:21,791 [main] ERROR org.apache.pig.…
…function do?
> Set a breakpoint; what are the two strings you are feeding into the hash
> functions?
>
> Aaron
>
> On 15 Apr 2011, at 03:50, pob wrote:
>
> Hello,
>
> I'm experiencing a really strange problem. I wrote data into a Cassandra
> cluster. I'm try…
…
> * PIG_INITIAL_ADDRESS or cassandra.thrift.address : initial address to
>   connect to
> * PIG_PARTITIONER or cassandra.partitioner.class : cluster partitioner
>
> Hope that helps.
> Aaron
>
>
> On 20 Apr 2011, at 11:28, pob wrote:
>
> Hello,
>
> I did the cluster configuration following
> http://wiki.apache.org/cassandra/HadoopSupport. When I run
> pig example-scri…
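Putting that advice together, a hypothetical hadoop-env.sh excerpt for the 0.7-era contrib/pig setup might look like this; the hostname is a placeholder, 9160 is only the default Thrift port, and RandomPartitioner is an example, so match whatever your cassandra.yaml actually uses.

# Hypothetical hadoop-env.sh excerpt for CassandraStorage (Cassandra 0.7 contrib/pig).
# The hostname is a placeholder; port and partitioner must match cassandra.yaml.
export PIG_INITIAL_ADDRESS=cassandra-node1
export PIG_RPC_PORT=9160
export PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner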
…org.apache.hadoop.mapred.TaskTracker: Received
'KillJobAction' for job: job_201104200331_0002

2011/4/20 pob
> ad 2. It works with -x local, so there can't be an issue with
> Pig -> DB (Cassandra).
>
> I'm using pig-0.8 from the official site + hadoop-0.20.2 from the official site.
>
> thx
2011/4/20 pob
> That's from the jobtracker:
>
> 2011-04-20 03:36:39,519 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing rack-local task task_201104200331_0002_m_00…
> 2011-04-20 03:36:42,521 INFO org.apache.hadoop.mapred.TaskInProgress: Error
> from attempt_2011042…
> >> Did you set PIG_RPC_PORT in your hadoop-env.sh? I was seeing this error
> >> for a while before I added that.
> >>
> >> -Jeffrey
> >>
> >> From: pob [mailto:peterob...@gmail.com]
> >> Sent: Tuesday, April 19, 2011 6:42 PM
> >> To…
My fault,
ignore the last post.

2011/4/20 pob
> Hi,
>
> Everything works fine with Cassandra 0.7.5, but when I tried 0.7.3
> other errors showed up, though the task finished successfully, which is strange.
>
>
> 2011-04-20 11:45:40,674 INFO org.apache.hadoop.mapred.TaskInP…
Hello,
I'm trying this with Pig 0.8 + C* 0.7.5 (branch).
Does anybody have any idea?
Thanks.

x = foreach g2 generate group, data.(size);
dump x;
((drm,0),{(464868)})
((drm,1),{(464868)})
((snezz,0),{(8073),(8073)})

but:

x = foreach g2 generate group, SUM(data.size);

grunt> describe data;
…
…map(PigMapBase.java:53)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
2011/4/24 pob
>
Hello,
I'm experiencing this problem. With cassandra-cli:

get messagesContent['558a512f30a46f55e75e63f2f816f7435283269f92070618ba9213c0bfac730f'];
Returned 33 results.

Within the pycassa code:

server_list=['SERVER:9160',],
prefill=False, pool_size=15, max_overflow=10, max_retries=-1, timeout=5,
pool_…
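Since the pycassa side of the snippet is cut off, the following is only a hypothetical completion of it: the keyspace name is made up, the pool options are the ones shown above, and the get_count() call is just one way to compare what pycassa sees with the 33 results the CLI reports.

# Hypothetical completion of the truncated pycassa snippet above.
import pycassa

pool = pycassa.ConnectionPool(
    'Keyspace1',
    server_list=['SERVER:9160'],
    prefill=False, pool_size=15, max_overflow=10, max_retries=-1, timeout=5)

cf = pycassa.ColumnFamily(pool, 'messagesContent')
key = '558a512f30a46f55e75e63f2f816f7435283269f92070618ba9213c0bfac730f'

# Fetch the row and compare the column count with the CLI's 33 results.
row = cf.get(key, column_count=1000)
print len(row), cf.get_count(key)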