y cassandra is
> hanging indefinitely...
>
> Naresh
>
> On Tue, Aug 13, 2013 at 7:21 PM, Alexis Rodríguez <
> arodrig...@inconcertcc.com> wrote:
Naresh, are you deploying cassandra in windows?
If that is the case you may need to change the data and commitlog
directories in cassandra.yaml. You should also check the log directories.
See section 2.1: http://wiki.apache.org/cassandra/GettingStarted
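For reference, the directory settings in question look roughly like this in cassandra.yaml (the Windows paths are only an example; adjust them to your install):

```yaml
# cassandra.yaml -- example Windows-style paths, adjust to your layout
data_file_directories:
    - C:/cassandra/data
commitlog_directory: C:/cassandra/commitlog
saved_caches_directory: C:/cassandra/saved_caches
```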
On Tue, Aug 13, 2013 at 8:28 AM, Nares
Carlo,
Do you read/write with the consistency levels according to your needs [1]?
Have you tried fetching that data with cassandra-cli, to see whether the
problem happens there too?
[1] http://wiki.apache.org/cassandra/ArchitectureOverview
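One quick way to reason about the consistency levels described in [1]: reads stay consistent with writes when the replica sets overlap, i.e. R + W > RF. A minimal sketch with example numbers (QUORUM reads and writes against RF=3):

```shell
#!/bin/sh
# Example numbers: RF=3, reads and writes both at QUORUM (2 replicas each).
RF=3
R=2   # replicas contacted per read (QUORUM)
W=2   # replicas acknowledged per write (QUORUM)
if [ $((R + W)) -gt "$RF" ]; then
    echo "R+W > RF: every read overlaps the latest acknowledged write"
else
    echo "R+W <= RF: stale reads are possible"
fi
```

With ONE/ONE on RF=3 the check fails, which is the usual cause of "I wrote it but can't read it back" surprises.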
On Wed, Jul 24, 2013 at 5:34 PM, cbert...@libero.it wrote:
> Hi all,
36 AM, Richard Low wrote:
> On 19 July 2013 23:31, Alexis Rodríguez wrote:
Hi guys,
I've read here [1] that you can make a deletion mutation "for" the future.
That mechanism operates as a schedule for deletions, according to the
stackoverflow post. But I've been having problems making it work with my
Thrift C++ client. I believe it's related to this paragraph of the thr
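The effect of a tombstone stamped in the future can be sketched with plain timestamp comparison: Cassandra resolves each column by highest timestamp, so a delete whose timestamp is ahead of "now" masks any later write carrying a lower timestamp. The numbers below are purely illustrative:

```shell
#!/bin/sh
# Illustrative microsecond timestamps: a deletion mutation stamped "for"
# the future wins over any write whose timestamp is still below it.
TOMBSTONE_TS=2000000   # deletion mutation, timestamp set in the future
WRITE_TS=1500000       # subsequent insert, stamped with the current time
if [ "$WRITE_TS" -le "$TOMBSTONE_TS" ]; then
    echo "write is masked: the tombstone's timestamp wins"
else
    echo "write survives"
fi
```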
Shubham,
You are right; my point is that with non-schema-update Thrift calls you can
tune the consistency level used.
bye.
On Wed, Jul 3, 2013 at 10:10 AM, Shubham Mittal wrote:
> hi Alexis,
>
> Even if I create keyspaces, column families using cassandra-cli, the
> column
That libcassandra repo works with cassandra 0.7.x; due to changes in the
thrift interface, we have faced some problems with it in the past.
Maybe you can take a look at my fork of libcassandra,
https://github.com/axs-mvd/libcassandra, which we are using with cassandra 1.1.11.
Besides that, I recommend th
Nicolai,
Perhaps you can check the system.log to see if there are any errors on
compaction. Also, I believe C* 1.2.0 is not a stable version.
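A quick way to scan for compaction errors (the log path varies by install; /var/log/cassandra/system.log is the common default). The snippet fabricates a sample log line so it runs self-contained:

```shell
#!/bin/sh
# Filter for compaction errors the way you would against
# /var/log/cassandra/system.log (path varies by packaging).
log=$(mktemp)
echo "ERROR [CompactionExecutor:12] 2013-05-09 02:43:00,000 CompactionTask.java - example failure" > "$log"
grep -E "ERROR.*Compaction" "$log"
rm -f "$log"
```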
On Thu, May 9, 2013 at 2:43 AM, Nicolai Gylling wrote:
> Hi
>
> I have a 3-node SSD-based cluster, with around 1 TB data, RF:3, C*
> v.1.2.0, vnodes
em, how can I speed that up?
>
> Thanks,
> Jay
>
>
> On Thu, Apr 18, 2013 at 12:07 PM, Jay Svc wrote:
>
>> Looks like the formatting is a bit messed up. Please let me know if you want
>> the same in a clean format.
>>
>> Thanks,
>> Jay
>>
>>
Jay, do you have metrics of disk usage on the disks that contain your data
directories? Compaction operates over those files; maybe your problems are
with those disks and not with the disks that hold the commitlog.
On Thu, Apr 18, 2013 at 1:33 PM, Jay Svc wrote:
> Hi Alexis, Yes compact
Jay,
I believe that compaction occurs on the data directories and not in the
commitlog.
http://wiki.apache.org/cassandra/MemtableSSTable
On Wed, Apr 17, 2013 at 7:58 PM, Jay Svc wrote:
> Hi Alexis,
>
> Thank you for your response.
>
> My commit log is on SSD. which shows me
:D
Jay, check whether your disk(s) utilization allows you to change the
configuration the way Edward suggests. iostat -xkcd 1 will show you how much
of your disk(s) is in use.
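With extended stats, the %util column (the last field of iostat -x output) tells you how busy each device is. Against canned iostat-style output the extraction looks like this; the column layout varies between sysstat versions, so treat the field positions as an assumption. On a live box you would pipe `iostat -xkcd 1` in place of the here-doc:

```shell
#!/bin/sh
# Pull device name and %util (last field) from iostat -x style output.
awk 'NR > 1 { print $1, $NF "%" }' <<'EOF'
Device rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 1.20 0.50 3.40 12.00 96.00 55.38 0.02 4.10 1.30 92.50
EOF
```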
On Wed, Apr 17, 2013 at 5:26 PM, Edward Capriolo wrote:
> three things:
> 1) compaction throughput is fairly low (yaml nod
Adeel,
It may be a problem on the remote node; could you check the system.log?
You might also want to check rpc_timeout_in_ms on both nodes; maybe
increasing this parameter helps.
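rpc_timeout_in_ms lives in cassandra.yaml on each node; the 1.x default is 10000 ms, and the value below is only an example increase:

```yaml
# cassandra.yaml -- raise if cross-node requests time out during repair
rpc_timeout_in_ms: 20000
```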
On Fri, Apr 12, 2013 at 9:17 AM, wrote:
> Hi,
>
> I have started repair on newly added node with -pr a
> (minor or major), long after all its records have been deleted. This causes
> disk usage to rise dramatically. The only way to make the SSTable files
> disappear is to run "nodetool cleanup" (which takes hours to run).
>
> Just a theory so far...
>
Aaron,
It seems that we are in the same situation as Nury, we are storing a lot of
files of ~5MB in a CF.
This happens in a test cluster, with one node using cassandra 1.1.5, we
have commitlog in a different partition than the data directory. Normally
our tests use nearly 13 GB in data, but when
Alain,
Can you post your mdadm --detail /dev/md0 output here, as well as your
iostat -x -d output when that happens? A bad ephemeral drive on EC2 is not
unheard of.
Alexis | @alq | http://datadog.com
P.S. Also, disk utilization is not a reliable metric; iostat's await and
svctm are more useful.
Hi guys!
We are getting the following message in our logs
ERROR [CompactionExecutor:535] 2012-10-31 12:14:14,254 CounterContext.java
(line 381) invalid counter shard detected;
(ea9feac0-ec3b-11e1--fea7847157bf, 1, 60) and
(ea9feac0-ec3b-11e1--fea7847157bf, 1, -60) differ only in count; wi
Thank you guys. It makes sense.
I'll have repair-pr schedule on each node.
On Thu, Oct 18, 2012 at 3:39 AM, aaron morton wrote:
> Without -pr the repair works on all token ranges the node is a replica
> for.
>
> With -pr it only repairs data in the token range it is assigned. In your
> case wh
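The practical consequence of -pr is that the repair has to visit every node to cover the whole ring. A dry-run sketch (hostnames are placeholders, and echo keeps the loop harmless):

```shell
#!/bin/sh
# Dry run: with -pr each node repairs only its primary token range,
# so a scheduled repair must iterate over every node in the ring.
for host in node-00 node-01 node-02; do
    echo nodetool -h "$host" repair -pr
done
```

Dropping the echo turns this into the actual cron job; without -pr, repairing any single replica would cover all three nodes' data for its ranges, at the cost of redundant work.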
Forget it, this was nonsense.
On Mon, Oct 15, 2012 at 10:05 PM, Alexis Midon wrote:
> I see. So if I don't use the '-pr' option, triggering repair on node-00 is
> sufficient to repair the first 3 nodes.
> No need to cron a repair on node-{01,02}.
> correct?
>
>
is responsible for) will get repaired on all
> three nodes.
> Andrey
> On Mon, Oct 15, 2012 at 11:56 AM, Alexis Midon
> wrote:
> >
> > Hi all,
> >
> > I have a 9-node cluster with a replication factor R=3. When I run repair
> -pr
> > on node-00, I see the
still have to trigger a nodetool-repair on node-{01,02}?
Thanks,
Alexis
EC2020-3AEA-1069-A2DD-08002B30309D") = "Doe",
>> ...
>> }]
>>
>>
>> As far as I understand it seems to be the fastest way to retrieve all values
>> of a field in the same order.
>> To update, I don't need to read before writing.
>>
>> Problem: the row will be very large: 300 000 000 columns. I can split
>> it in different rows based on the value of a specific field, for example
>> country.
>>
>> ---
>> Solution 3:
>>
>> Wide Row by field
>>
>> Column Family: customers
>> One row per field: so 300 rows
>> Columns: ID = FieldValue
>>
>> Benefits:
>> The row will be smaller, 1 000 000 columns.
>>
>> Problem:
>> Updates seem more expensive: for every customer to update, I need to update
>> 300 rows.
>>
>> ---
>>
>> Which solution seems to be the right one? Is Cassandra really a good
>> fit for this use case?
>>
>> Thanks
>>
>> Alexis Coudeyras
>>
>> --
>> View this message in context:
>> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Data-Modeling-tp7300846p7300846.html
>> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
>> Nabble.com.
>
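The trade-off between the quoted solutions comes down to simple arithmetic on the numbers given in the question (300 000 000 total columns, 300 fields):

```shell
#!/bin/sh
# Row sizing for the two quoted layouts, using the question's numbers.
TOTAL_COLUMNS=300000000   # one column per customer per field, flattened
FIELDS=300                # solution 3: one row per field
echo "solution 2: one row of $TOTAL_COLUMNS columns"
echo "solution 3: $FIELDS rows of $((TOTAL_COLUMNS / FIELDS)) columns each"
echo "solution 3 write cost: $FIELDS row updates per full customer update"
```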
em, I'll be happy to provide more info.
Alexis Lauthier
From: aaron morton
To: user@cassandra.apache.org
Sent: Tuesday, 17 January 2012, 1:49 AM
Subject: Re: Compressed families not created on new node
eeek, HW errors.
I would guess (that's all it is)
ion the failing node, then
add the new node. And hope the schema will fully replicate. This will leave me
with only one node for a time, and I'm not sure it will play nice with
replication_factor=2.
This feels a lot like jumping out of a plane with an untested parachute. So any
othe
old nodes, I have a lot of I/O errors on the data files for
some (but not all) of the compressed families. It began a few days ago. All
"nodetool repair" calls have been blocking since then.
Any ideas on how I can get the data on the new node, before the old one dies?
Tha
va:603)
at java.lang.Thread.run(Thread.java:722)
How can I get the compressed families on the new node ?
Thanks,
Alexis Lauthier
n
> "carpe diem quam minimum credula postero"
>
For data accessed through a single path, I use the same trick: pickle, bz2
and insert.
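The same serialize-then-compress idea, shown with the bzip2 CLI rather than the Python pickle/bz2 modules the post refers to (a sketch, not the poster's code): what gets inserted into the column is the compressed blob, and reads reverse the pipeline.

```shell
#!/bin/sh
# Round trip: compress the serialized payload, store the .bz2 bytes in the
# column, decompress on read and verify it matches the original.
payload=$(mktemp)
printf 'some repetitive payload %.0s' $(seq 100) > "$payload"
bzip2 -k "$payload"                  # $payload.bz2 holds the bytes to insert
bzcat "$payload.bz2" | cmp -s - "$payload" && echo "round trip ok"
rm -f "$payload" "$payload.bz2"
```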
--
Alexis Lê-Quôc | Datadog, Inc. | @alq
cassandra-people,
I'm trying to measure disk usage by cassandra after inserting some columns
in order to plan disk sizes and configurations for future deploys.
My approach is very straightforward:
clean_data (stop_cassandra && rm -rf
/var/lib/cassandra/{data,commitlog,saved_caches}/*)
perform_in
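That measure-around-a-workload loop can be sketched end to end with du; a throwaway directory stands in for /var/lib/cassandra/data so the sketch is runnable, and dd stands in for the insert workload:

```shell
#!/bin/sh
# Measure on-disk growth around a write workload: size before, size after.
datadir=$(mktemp -d)                 # stand-in for /var/lib/cassandra/data
before=$(du -sk "$datadir" | awk '{print $1}')
dd if=/dev/zero of="$datadir/sstable-stand-in" bs=1024 count=512 2>/dev/null
after=$(du -sk "$datadir" | awk '{print $1}')
echo "grew by $((after - before)) KB"
rm -rf "$datadir"
```

Against a real node you would point du at the configured data_file_directories and run nodetool flush first, so memtable contents are on disk when you measure.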
vice.instance().sendReply(response, id, msg.getFrom());
    }
}
Before I dig deeper in the code, has anybody dealt with this before?
Thanks,
--
Alexis Lê-Quôc
l -host
> 192.168.0.5 ring
> Address Status State Load Owns
> Token
>
> 127605887595351923798765477786913079296
> 192.168.0.253 Up Normal 171.17 MB 25.00%
> 0
> 192.168.0.4 ? Normal 212.11 MB 54.39%
> 92535295865117307932921825928971026432
> 192.168.0.
nderstand given that nodetool ring on
Node 1 yields:
...
Node2 Up Normal 52.04 GB 51.03% token1
and the same command on Node 2 yields:
...
Node1 Up Normal 50.89 GB 23.97% token2
Any light shed on both issues is appreciated.
--
Alexis Lê-Quôc (@datadoghq)
Could this be caused by old hinted handoffs for 2.3.4.193 that were processed
at that time, causing the rest of the nodes to think that the 2.3.4.193 is
still present (albeit down)?
Should cleanup be run periodically? I run repair every few days (my
gcgraceperiod is 10 days).
--
Alexis Lê-Q
5117307932921825928971026432
1.2.3.193 Up Normal 53.73 GB 50.00%
127605887595351923798765477786913079296
1.2.3.252 Up Normal 43.11 GB 12.52%
148904621249875869977532879268261763219
--
Alexis Lê-Quôc