Thanks, guys, for your help. I tried the filtering method and it works
great.
Sincerely,
Pete
On Sat, Aug 27, 2016 at 12:36 AM, Jonathan Haddad wrote:
> Ah, I see what you're looking for. No, my schema wouldn't work for that.
> I had read through your question a little quickly.
>
> In Cassandra 3.5 support was added for more flexible ALLOW FILTERING statements.
Ah, I see what you're looking for. No, my schema wouldn't work for that.
I had read through your question a little quickly.
In Cassandra 3.5 support was added for more flexible ALLOW FILTERING
statements. Here's an example:
CREATE TABLE mytable (
    sensorname text,
    date date,
    time time,
    value double, -- completion assumed, mirroring Peter's fields below
    PRIMARY KEY (sensorname, date, time)
);
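A sketch of the statement this example was presumably building toward, with illustrative sensor name, dates and times:

SELECT * FROM mytable
WHERE sensorname = 'sensor1'
  AND date >= '2016-08-01' AND date <= '2016-08-10'
  AND time >= '08:00:00' AND time <= '09:00:00'
ALLOW FILTERING;

The date slice alone is an ordinary clustering-range query; it is the extra slice on time that requires ALLOW FILTERING, and the filtering only scans within the already-selected partition and date range.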
To do 8-9am on Aug-1 through Aug-10, you’d likely need to do either multiple
queries in parallel (fire off async), or use some clever IN logic.
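A plausible reading of the first option, against the timestamp-keyed table quoted further down (values illustrative): one bounded query per day, fired asynchronously so the ten windows are fetched in parallel.

SELECT * FROM mytable
WHERE sensorname = 'sensor1'
  AND reading_time >= '2016-08-01 08:00:00'
  AND reading_time <  '2016-08-01 09:00:00';
-- ...and likewise for 2016-08-02 through 2016-08-10, issued with the
-- driver's async execution rather than serially

The "clever IN logic" would presumably be an IN list of dates combined with a time slice on the original date+time schema, which newer Cassandra releases accept on clustering columns.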
Or, you’d need to break your table up so the first clustering key is the hour
of the day, and then you could do this:
CREATE TABLE mytable (
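-- plausible completion of the truncated schema: the hour bucket as first
-- clustering key is the stated point; the other column names are assumptions
    sensorname text,
    hour int,
    reading_time timestamp,
    value double,
    PRIMARY KEY (sensorname, hour, reading_time)
);

With the hour leading the clustering order, the 8-9am window across a span of days becomes a single ordinary slice (illustrative values):

SELECT * FROM mytable
WHERE sensorname = 'sensor1'
  AND hour = 8
  AND reading_time >= '2016-08-01'
  AND reading_time <  '2016-08-11';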
I don't believe that would let me query a time-of-day range over a date
range, would it? For example, between 8am and 9am, August 1st through
August 10th.
On Fri, Aug 26, 2016 at 11:52 PM, Jonathan Haddad wrote:
> Use a timestamp instead of 2 separate fields and you can query on the
> range.
>
Use a timestamp instead of 2 separate fields and you can query on the range.
CREATE TABLE mytable (
    sensorname text,
    reading_time timestamp,
    data map<text, text>, -- the map's type parameters were lost; assumed here
    PRIMARY KEY (sensorname, reading_time)
);
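The range query this schema enables, sketched with illustrative values:

SELECT * FROM mytable
WHERE sensorname = 'sensor1'
  AND reading_time >= '2016-08-01 08:00:00'
  AND reading_time <= '2016-08-10 09:00:00';

Note that this selects one continuous span, which is exactly the limitation Peter raises above: it cannot express an 8-9am window on each day separately.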
On Fri, Aug 26, 2016 at 8:17 PM Peter Figliozzi wrote:
> I have data from many sensors as time-series:
I have data from many sensors as time-series:
- Sensor name
- Date
- Time
- Value
I want to query windows of both date and time. For example, 8am - 9am from
Aug. 1st to Aug 10th.
Here's what I did:
CREATE TABLE mykeyspace.mytable (
    sensorname text,
    date date,
    time time,
    value double, -- "value" per the field list above; the type is assumed
    PRIMARY KEY (sensorname, date, time)
);
It's not that your disks are getting full. I suspect you don't have enough
throughput to handle the type of stress compaction and memtable flushing
produce. Blocked flush writers are almost always a disk problem.
Any storage with the words SAN, NAS, NFS or SATA in it is going to make
your life miserable.
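A quick way to check for those blocked flush writers is nodetool tpstats (the pool is named FlushWriter in the 2.x line, MemtableFlushWriter in newer versions):

nodetool tpstats | grep -i flush
# a non-zero "Blocked" count, or a growing "All time blocked" total, for the
# flush writer pool is the disk-throughput symptom described above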
An extract of this conversation should definitely be posted somewhere.
I've read a lot but never learnt all these bits...
On Fri, Aug 26, 2016 at 2:53 PM, Paulo Motta wrote:
> > I must admit that I fail to understand currently how running repair with
> -pr could leave unrepaired data though, even when run on all nodes in all
> DCs, and how that could be specific to incremental repair (and would
> appreciate if someone shared the explanation).
Hi Benedict,
This makes sense now. Thank you very much for your input.
Regards,
Vasilis
On 25 Aug 2016 10:30 am, "Benedict Elliott Smith" wrote:
> You should update from 2.0 to avoid this behaviour, is the simple answer.
> You are correct that when the commit log gets full the memtables are
>
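The "full" condition being described is the commit log space cap; a hypothetical cassandra.yaml excerpt, using the 2.x-era parameter name and an illustrative value:

# cassandra.yaml (2.x-era name; value illustrative)
commitlog_total_space_in_mb: 8192
# when the commit log reaches this cap, the memtables holding the oldest
# dirty data are flushed so their commit log segments can be recycled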
Hi Patrick and thanks for your reply,
We are monitoring disk usage, among other things, and we don't seem to be
running out of space at the moment. We have separate partitions/disks for
commitlog and data. Which one do you suspect, and why?
Regards,
Vasilis
On 25 Aug 2016 4:01 pm, "Patrick McFadin" wrote:
Thi
The default when I wrote it was 0.4, but it was found this did not saturate
flush writers in JBOD configurations. IIRC it now defaults to 1/(1 + #disks),
which is not a terrible default, but obviously comes out much lower if you
have many disks.
This smaller value behaves better for peak performance,
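Plugging numbers into that formula: with 8 JBOD disks the default comes out to 1/(1 + 8) ≈ 0.11, versus the original 0.4. As a hypothetical cassandra.yaml sketch:

# cassandra.yaml (illustrative values)
memtable_flush_writers: 8        # e.g. one per JBOD disk
memtable_cleanup_threshold: 0.11 # explicit value matching the implied
                                 # default of 1/(1 + 8)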
> I must admit that I fail to understand currently how running repair with
-pr could leave unrepaired data though, even when run on all nodes in all
DCs, and how that could be specific to incremental repair (and would
appreciate if someone shared the explanation).
Anti-compaction, which marks tabl
I see. Didn't think about it that way. Thanks for clarifying!
On Fri, Aug 26, 2016 at 2:14 PM, Paulo Motta wrote:
> > What is the underlying reason?
>
> Basically to minimize the amount of anti-compaction needed, since with
> RF=3 you'd need to perform anti-compaction 3 times in a particular node
> to get it fully repaired, while without it you can just repair the full
> node's range in one run.
> What is the underlying reason?
Basically to minimize the amount of anti-compaction needed, since with RF=3
you'd need to perform anti-compaction 3 times in a particular node to get
it fully repaired, while without it you can just repair the full node's
range in one run. Assuming you run repair f
After running some tests I can confirm that using -pr leaves unrepaired
SSTables, while without it all SSTables show as repaired once the repair
is completed.
The purpose of -pr was to lighten the repair process by not repairing
ranges RF times, but just once. With incremental repair though, repaired
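In command form, the two variants being compared (incremental repair is the default from Cassandra 2.2 onward):

# full-range incremental repair on this node, per the advice above
nodetool repair
# primary-range-only repair; with incremental repair this is what was
# observed to leave unrepaired SSTables behind
nodetool repair -pr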
Forgot the most important thing. Logs:
* ERROR: you should investigate.
* WARN: you should have a list of known ones; use-case dependent. Ideally
you change configuration accordingly.
* PoolCleaner (slab or native): a good indication the node is tuned badly
if you see a ton of these. Set memtable_cleanup_threshold
Thomas,
Not all metrics are KPIs; most are only useful when researching a specific
issue or after a use-case-specific threshold has been set.
The main "canaries" I monitor are:
* Pending compactions (dependent on the compaction strategy chosen, but
1000 is a sign of severe issues in all cases)
* drop
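Both of those canaries are visible from nodetool; a quick sketch (the thresholds are the use-case-specific part):

nodetool compactionstats   # "pending tasks" is the pending-compactions count
nodetool tpstats           # dropped message counts are listed at the bottom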
Hi Paulo, could you elaborate on 2?
I didn't know incremental repairs were not compatible with -pr.
What is the underlying reason?
Regards,
Stefano
On Fri, Aug 26, 2016 at 1:25 AM, Paulo Motta wrote:
> 1. Migration procedure is no longer necessary after CASSANDRA-8004, and
> since you never ran
Hello,
I am working on setting up a monitoring tool to monitor Cassandra instances.
Are there any wikis which specify optimum values for each Cassandra KPI?
For instance, I am not sure what value of "Memtable Columns Count" can be
considered "normal", and what value of the same has to be
Hi Christian,
C* 2.2.7 doesn't cause this problem.
I can always reproduce it on some servers and my laptop by using 2.2.6.
I reviewed the source code of 2.2.7; the ReplayPosition update described
above has been fixed there.
Thank you for your cooperation.
yuji
On Thu, Aug 25, 2016 at 11:40 PM, horschi wrote: