Also, AFAIK Visor is very inaccurate on off-heap memory measurements. At
least it used to be.
Ignite is not optimized for count queries. It visits each object to do the
count and doesn't do the count off of an index (or some cached store).
Seems kind of silly, especially if you have a count on indexed fields.
I think query cancellation only works within the API. I think you can
either s
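For anyone who wants to check this on their own data, here is a rough sketch
of running the count and then EXPLAIN to see the plan Ignite picks (the cache
and type names are placeholders, not from any particular setup):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: run a count, then EXPLAIN it to confirm the plan is a scan
// rather than an index-only count. "Person"/"personCache" are placeholders.
static void checkCountPlan(Ignite ignite) {
    IgniteCache<?, ?> cache = ignite.cache("personCache");

    List<List<?>> cnt = cache.query(
        new SqlFieldsQuery("select count(*) from Person")).getAll();
    System.out.println("count = " + cnt.get(0).get(0));

    // EXPLAIN returns the H2 plan Ignite chose for the query.
    List<List<?>> plan = cache.query(
        new SqlFieldsQuery("explain select count(*) from Person")).getAll();
    plan.forEach(row -> System.out.println(row.get(0)));
}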
BUMP. Can anyone verify this? If Ignite cannot scale in this manner, that is
fine; I'd just want to know if what I am seeing makes sense.
I could try a different AWS instance. I'm running these tests on r4.8xlarge
boxes, which are pretty beefy and "EBS optimized". I tried the same tests
using io1 disks at 20,000 IOPS but still had issues.
Dave, with the i3 instances were you using the local SSDs? Or still using
EBS?
I've been having some issues with EC2 too. Are you actually paging from
memory to disk or just using the WAL for durable memory? You can use a
different walMode to make it "better":
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/WALMode.html.
I was playing wi
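For example, switching the mode looks roughly like this (a sketch against the
Ignite 2.x config API; LOG_ONLY is just one of the options):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

// Sketch: move the WAL away from fsync-per-commit. LOG_ONLY fsyncs on
// checkpoint instead of on every commit, which tends to be much friendlier
// to EBS-style disks, at the cost of weaker guarantees on an OS crash.
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setWalMode(WALMode.LOG_ONLY);

cfg.setDataStorageConfiguration(storageCfg);
Ignition.start(cfg);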
Yep. Four nodes running on EC2 with 2 gp2 EBS disks each (one for
persistence and one for the WAL). Running the loader on the first node,
connecting to localhost. The cache in question is using the data region in
the config included at the bottom. Here is the query info:
explain select count(*)
Here is a thread dump from when the client disconnects. Looks like the only
thread doing something interesting is db-checkpoint-thread.
2018-02-21 19:31:36
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.92-b14 mixed mode):
"tcp-disco-client-message-worker-#36" #1128 prio=10 os_prio=0
tid=0
I've been testing Ignite durable memory by trying to load a lot more data
than I have configured for the data region, thereby using disk for a lot of
the data. I was wondering how indexes get persisted to disk in this situation
where more data exists than will fit in memory? Is there a way to config
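For context, the region in my test is set up roughly like this (a sketch of
the Ignite 2.x API; the names and the 4 GB size are just what I happened to
use):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: a persistent data region deliberately smaller than the data set,
// so Ignite has to page to disk. Names and sizes are illustrative only.
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("bigDataRegion");
regionCfg.setPersistenceEnabled(true);
regionCfg.setMaxSize(4L * 1024 * 1024 * 1024); // 4 GB of page memory
storageCfg.setDefaultDataRegionConfiguration(regionCfg);

cfg.setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);
ignite.cluster().active(true); // persistence-enabled clusters start inactive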
I'll give that tweaking a try. It's hard to do a thread dump just when it
freezes; do you think there is harm in doing a thread dump every 10 seconds
or something?
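I could do it in-process instead of shelling out to jstack; something like
this sketch, with the interval picked arbitrarily:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: dump all thread stacks every 10 seconds from inside the JVM.
// ThreadInfo.toString() truncates very deep stacks, but it's enough to
// spot a stuck db-checkpoint-thread, and dumpAllThreads is cheap.
ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
exec.scheduleAtFixedRate(() -> {
    ThreadInfo[] infos = ManagementFactory.getThreadMXBean()
        .dumpAllThreads(true, true); // include locked monitors/synchronizers
    for (ThreadInfo info : infos)
        System.out.print(info);
}, 0, 10, TimeUnit.SECONDS);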
I tried a new setup with more nodes to test out how that affects this
problem (going from 2 to 4). I saw fewer data streamer errors, but it
Should I decrease these? One other thing to note is I'm monitoring GC, and
the GC times do not correlate with these issues (GC times are pretty low
anyway). I honestly think that persisting to disk somehow causes things to
freeze up. Could it be an AWS-related issue? I'm using EBS io1 with 20,000
Okay, I am trying to reproduce. It hasn't gotten stuck yet, but the client
got disconnected and reconnected recently. I don't think it is related to GC
because I am recording GC times and they do not jump up that much. Could
the system get slow under a lot of I/O? I see this in the Ignite log:
[19:13:
I am testing a very large durable cache that mostly cannot fit into memory.
I start loading in a lot of data via a data streamer. At some point the
data becomes too large to fit into memory, so Ignite starts writing a lot to
disk during checkpoints. But at some point after that, the data stream
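The load itself is nothing fancy; roughly this shape (key and value types and
the cache name are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

// Sketch: stream a large number of entries into the cache. Once the data
// region fills up, page replacement kicks in and checkpoints start writing
// heavily to disk, which is the point where things freeze for me.
try (IgniteDataStreamer<Long, byte[]> streamer =
         ignite.dataStreamer("bigCache")) { // placeholder cache name
    streamer.allowOverwrite(false); // the default, and the fastest path

    for (long i = 0; i < 200_000_000L; i++)
        streamer.addData(i, new byte[1024]); // ~1 KB values, illustrative
} // close() flushes any remaining buffered entries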
I can reproduce this pretty easily.
Steps:
1. Run an Ignite server on a macOS laptop
2. Run the loader on the same laptop
3. Loader loads 200,000 or so queryable items with key and value as objects
4. Measure memory using top
5. Use Visor to stop the cache
6. Measure memory using top
On my mac laptop
Still digging in, but something super fishy is going on. In the prod
environment we have 4 nodes with 1 backup set for each cache. Each node has
4 GB max heap. I did the load tests and one node is taking 10 GB. I went
into Visor and deleted all of the caches except the default one. I am still
at 10 GB p
I'm just measuring the memory used by the Ignite PID (using top or whatever
Unix command).
I built a test app trying to reproduce this on my laptop by loading a lot of
data and then destroying it. I repeat this a few times and the process goes
back down to the same memory and works as expected.
That's fair, but why doesn't the process's memory go down by what I expect?
Scenario (sketched in code below):
1. Measure memory of process (A)
2. Load off-heap cache
3. Measure memory of process (B)
4. Destroy cache
5. Measure memory of process (C)
I would assume that C = A, that is, the memory should return back to the
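Here is the same A/B/C measurement in code, reading Ignite's own data-region
page counters next to top (a sketch; cache and region names are made up, and
metrics must be enabled via DataRegionConfiguration.setMetricsEnabled(true)):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

// Sketch of the A/B/C measurement using Ignite's allocated-page counters.
static void printRegionPages(Ignite ignite) {
    for (DataRegionMetrics m : ignite.dataRegionMetrics())
        System.out.println(m.getName() + ": "
            + m.getTotalAllocatedPages() + " pages");
}

static void runScenario(Ignite ignite) {
    printRegionPages(ignite);                              // (A)

    IgniteCache<Long, byte[]> cache = ignite.getOrCreateCache("tempCache");
    for (long i = 0; i < 1_000_000; i++)
        cache.put(i, new byte[512]);
    printRegionPages(ignite);                              // (B)

    cache.destroy();
    printRegionPages(ignite);                              // (C)
    // My understanding: destroyed pages go back to the region's free lists
    // for reuse, but the region itself never shrinks, so the process RSS
    // stays near (B) rather than returning to (A).
}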
I have a setup where sometimes I will create caches from scratch and destroy
old ones. When I call destroy on the cache, I don't think the process
memory goes down by the total memory used by that off-heap cache. How does
destroy trigger a cleanup in off-heap memory? Is there a better way to do
th
What's the best way to get accurate metrics of off-heap memory used? I've
tried a bunch of different ways, such as using JMX, using the API and
iterating through each cache on each node to get offHeapAllocatedSize, using
the REST API, etc. None of the values add up right. I see my Ignite pro
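For reference, the API-based version of what I've been trying (a sketch; both
counters read as zero unless statistics/metrics are enabled in the config):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

// Sketch: the two main places I know to look for off-heap numbers.
// 1. Per-cache metrics (needs CacheConfiguration.setStatisticsEnabled(true)):
for (String name : ignite.cacheNames()) {
    long alloc = ignite.cache(name).metrics().getOffHeapAllocatedSize();
    System.out.println(name + " offHeapAllocatedSize = " + alloc);
}

// 2. Per-region metrics (needs DataRegionConfiguration.setMetricsEnabled(true)):
for (DataRegionMetrics m : ignite.dataRegionMetrics())
    System.out.println(m.getName() + " pages = " + m.getTotalAllocatedPages());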
Actually, I see the bug. Apparently DML statements don't work with fields
that have "names" different from the Java POJO field name? Should I file a
ticket?
I can't seem to get insert or update DML queries to work. I have a simple
cache that I put one object into. I try to update it; the result says 1 row
was updated, but nothing was actually updated in the object.
Pojo:
public class ACLStatus {
    @QuerySqlField(index = true, name = "user_id")
    private int userId; // field name/type are a guess; the post was cut off here
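The update I'm running is shaped like this (a sketch; the cache name is a
placeholder, and the SQL uses the @QuerySqlField alias, which is what seems
to trigger the problem):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: a DML update addressing the field by its @QuerySqlField alias.
// The statement reports 1 row updated, yet the stored object is unchanged.
IgniteCache<Long, ACLStatus> cache = ignite.cache("aclCache"); // placeholder

long updated = (Long)cache.query(
    new SqlFieldsQuery("update ACLStatus set user_id = ? where _key = ?")
        .setArgs(42, 1L))
    .getAll().get(0).get(0);

System.out.println("rows updated: " + updated); // prints 1, but value is stale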
Hmmm, has anyone else noticed that count queries are slow? I guess it's
because they need to go through the entire result set to get the count, but
it feels slower than MySQL or PostgreSQL. Do they do some interesting
optimizations in their DBMSs?
We like to retrieve the count of possible items to allow pagination, but
doing a regular count is slow, so we want to limit it to some large number.
I run the query [1] and it returns 2000, which makes no sense. I am doing a
limit of 1000. What is going on? It doesn't seem to double every result set
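The shape I'm after is a capped count, something like this sketch (the table
name is a placeholder; my unverified guess about the 2000 is that the limit
gets applied per node before the reduce step, which would double it on two
nodes):

import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: count over a limited subquery so the scan stops after at most
// 1000 matching rows. "Item" is an illustrative table name.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "select count(*) from (select _key from Item limit 1000)");

long capped = (Long)cache.query(qry).getAll().get(0).get(0);
System.out.println(capped); // expected to be <= 1000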
Okay. Index hinting would be very useful for our use case, and I can imagine
many others would want it too. Either way, thanks!
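For anyone landing here later: with a version that supports USE INDEX hints,
the usage I had in mind is roughly this sketch (the index name is whatever
you gave the grouped index; "member_priority_idx" is illustrative):

import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: hint the sorted grouped index so the ORDER BY is satisfied by
// index order instead of an explicit sort step.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "select _val from LineItem use index (member_priority_idx) " +
    "where member_id = ? order by member_id, priority asc")
    .setArgs(123);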
OMG YOU ARE THE BEST
This code has been such a pain in the butt to deal with. When do you think
the next release will be (I'm guessing 1.9.0)?
Sorry for the delay; I've been working hard on this. We have changed the data
model a lot, but here it is from the git history:
Here is the class: LineItem - http://pastebin.com/hTuXa5E3
The query is something like select li._val from lineitem li where member_id
= 123 order by member_id, name asc.
Ou
I made a mistake when I said sometimes. It is consistently choosing one
index as of now. It's just that the specified index is not useful for
sorting.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/grouped-index-sort-vs-filter-tp9885p9976.html
Sent from the
It's pretty significant, and it gets more significant with joins. Attached
below is some data from a simple example. When the index is used for
sorting, it is SUPER fast. Just hoping to take advantage of that :(
*Wrong index
[SELECT
LI._VAL AS __C0,
MEMBER_ID AS __C1,
NAME
I have an object with fields member_id, name, and priority. I have created a
grouped index on member_id + name and member_id + priority. I want to run
queries like "select * from table where member_id = 123 order by member_id,
priority asc". The problem is that sometimes the query optimizer choose
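The indexes are declared roughly like this (a sketch of the QueryEntity
wiring; type, field, and cache names follow my model but are illustrative):

import java.util.Arrays;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.cache.QueryIndexType;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: two grouped (composite) sorted indexes on the same leading column.
QueryEntity entity = new QueryEntity(Long.class.getName(), "LineItem");

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("member_id", Integer.class.getName());
fields.put("name", String.class.getName());
fields.put("priority", Integer.class.getName());
entity.setFields(fields);

entity.setIndexes(Arrays.asList(
    new QueryIndex(Arrays.asList("member_id", "name"), QueryIndexType.SORTED),
    new QueryIndex(Arrays.asList("member_id", "priority"), QueryIndexType.SORTED)));

CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("lineItemCache");
ccfg.setQueryEntities(Arrays.asList(entity));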
Sorry I missed this response. The problem was actually that I was not
setting null correctly in binary objects, so the schema kept updating on all
the nodes over and over. Whoops.
Hi vkulichenko, I am having a similar problem with BinaryObjects. I am
building binary objects from DB results or JSON results where columns may
be null. It seems like for binary objects you just don't set the null
columns. When I build objects this way I constantly see communication among
the n
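Concretely, the pattern that avoids the schema churn is to set nullable
columns explicitly with a type, so every object gets the same binary schema
(a sketch; the type and field names are illustrative):

import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

// Sketch: set nullable columns explicitly with a type. Skipping null fields
// produces objects with differing binary schemas, and the resulting schema
// updates ripple across the nodes.
BinaryObjectBuilder builder = ignite.binary().builder("DbRow"); // placeholder

builder.setField("id", 42L, Long.class);
builder.setField("nickname", null, String.class); // null, field stays in schema

BinaryObject row = builder.build();
ignite.cache("rows").withKeepBinary().put(42L, row);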
I meant to say the cache is loaded by
ignite.cache(cacheName).loadCache(null);
Is there a way to disable all of these event notifications going to clients?
I've started noticing a lot of issues with OOM and crashing. After some
debugging, I've discovered the issue:
Server loading millions of rows of binary objects by calling cache.load(),
utilizing a CacheStoreAdapter. Clien