Hi Calvin,
> Can I assume that BinaryMarshaller won't be used for any object embedded
> inside GridCacheQueryResponse?
Yes, because the binary marshaller can fall back to the optimized one, but not vice versa.
> If I am correct, do you have any suggestion on how I can avoid this type
> of issue?
Probably you need t
Hi Akash,
First of all, SQL is not transactional yet; this feature will only be available starting with 2.7 [1]. Your exception might be caused by the query being cancelled or a node being stopped.
[1] https://issues.apache.org/jira/browse/IGNITE-5934
Thanks!
-Dmitry
Hi,
It might be an issue with deactivation. Try updating to 2.6 or wait for 2.7. For now, just skip cluster deactivation. Once you have formed a baseline topology and finished loading data, enable the WAL for all caches. When the WAL is enabled successfully, you can safely stop the nodes.
On next time when all b
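For illustration, a rough sketch of re-enabling the WAL once loading is done (the cache names are placeholders; this assumes Ignite 2.4+ where IgniteCluster.enableWal() is available):

import org.apache.ignite.Ignite;

public class WalToggle {
    // Re-enable WAL for the given caches after the initial data load has finished.
    static void enableWalFor(Ignite ignite, String... cacheNames) {
        for (String name : cacheNames)
            ignite.cluster().enableWal(name); // requires an active cluster with a baseline topology
    }
}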
Hi,
The rules of field naming are defined by the BinaryIdMapper interface. By default the BinaryBasicIdMapper implementation is used, which converts all field names to lower case. So Ignite doesn't support the same field names in different cases, as it will treat them as the same field.
But you can configure
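For illustration, a rough sketch of plugging in a case-preserving id mapper, assuming the BinaryBasicIdMapper(boolean) constructor that disables lower-casing:

import org.apache.ignite.binary.BinaryBasicIdMapper;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BinaryIdMapperConfig {
    public static IgniteConfiguration configure() {
        BinaryConfiguration binaryCfg = new BinaryConfiguration();
        // 'false' keeps the original case instead of lower-casing type and field names.
        binaryCfg.setIdMapper(new BinaryBasicIdMapper(false));

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setBinaryConfiguration(binaryCfg);
        return cfg;
    }
}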
Hi,
TTL fixes are not included in 2.6 as it was an emergency release. You'll
need to wait for 2.7.
https://issues.apache.org/jira/browse/IGNITE-5874
https://issues.apache.org/jira/browse/IGNITE-8503
https://issues.apache.org/jira/browse/IGNITE-8681
https://issues.apache.org/jira/browse/IGNITE-865
Hi,
It is defined by the AffinityFunction [1]. By default there are 1024 partitions; the affinity function automatically calculates which nodes keep the required partitions and minimizes rebalancing when the topology changes (nodes join or leave).
[1]https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affini
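For illustration, a rough sketch of overriding the default partition count via the default affinity function (the cache name and count are placeholders):

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class AffinityConfig {
    public static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        // excludeNeighbors = false, 512 partitions instead of the default 1024.
        ccfg.setAffinity(new RendezvousAffinityFunction(false, 512));
        return ccfg;
    }
}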
Hi Akash,
How do you measure partition distribution? Can you provide the code for that test? I assume that you request partitions before the exchange process has finished. Try adding a 5-second delay after all nodes are started and check again.
Thanks!
-Dmitry
Hi,
I'm not sure that nightly builds are updated regularly, but you could give them a try. The biggest risk is that a nightly build could have some bugs that will be fixed in the release.
Thanks!
-Dmitry
Hi Akash,
1) Actually, exchange is a short process during which nodes remap partitions. But Ignite uses late affinity assignment, which means the affinity distribution is switched only after rebalancing has completed. In other words, after rebalancing it will atomically switch the partition distribution.
But you don't
Hi,
I've opened a ticket for this [1]. It seems a LOCAL cache keeps all entries on-heap. If you use only one node, switch to PARTITIONED; if more than one, use PARTITIONED plus a node filter [2].
[1] https://issues.apache.org/jira/browse/IGNITE-9257
[2]
https://ignite.apache.org/releases/latest/javadoc/or
Hi,
1) You need to add the JetBrains annotations dependency at compile time [1].
2) Imports depend on what you are using :) It's hard to say whether your imports are enough. Add ignite-core to your plugin dependencies.
I don't think there are other examples besides that blog post.
[1] https://mvnrepository.com/arti
Hi,
By default Ignite uses the Rendezvous hashing algorithm [1], and RendezvousAffinityFunction is the implementation responsible for partition distribution [2]. This significantly reduces traffic during partition rebalancing.
[1] https://en.wikipedia.org/wiki/Rendezvous_hashing
[2]
https://ignite
Hi,
Nice work, thank you! I'm sure it will be very useful. Looking forward to your contributions to the Apache Ignite project ;)
Thanks!
-Dmitry
Hi,
Looks like the process was killed by the kernel. Check the logs for the OOM killer:
grep -i 'killed process' /var/log/messages
If the process was killed by Linux, correct your config; you might have set too much memory for Ignite's page memory, so set it to lower values [1].
If not, try to find in logs by PID, maybe it was ki
Hi,
Yes, you're right, it was missed during refactoring. I've created a ticket
[1], you may fix it and contribute to Apache Ignite :)
[1] https://issues.apache.org/jira/browse/IGNITE-9259
Thanks!
-Dmitry
Hi,
It looks like most of the time the transactions in the receiver are waiting for locks. Any lock serializes otherwise parallel code. In your case I don't think it's possible to tune throughput with settings, because ten transactions could be waiting while one finishes. You need to change the algorithm.
Hi,
I think the best way here would be to read items directly from Kafka, process and store them in a cache, and remember the Kafka stream offset in another cache. If a node crashes, your service can resume from the last point (offset).
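For illustration, a rough sketch of the idea; the cache names, key types and helper methods are hypothetical:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class OffsetTracking {
    // Hypothetical helper invoked for every processed Kafka record.
    static void store(Ignite ignite, int partition, long offset, String key, String value) {
        IgniteCache<String, String> data = ignite.getOrCreateCache("data");
        IgniteCache<Integer, Long> offsets = ignite.getOrCreateCache("kafka-offsets");

        data.put(key, value);           // store the processed record
        offsets.put(partition, offset); // remember the last processed offset per partition
    }

    // On restart, read the saved offset and resume consumption from there.
    static Long lastOffset(Ignite ignite, int partition) {
        return ignite.<Integer, Long>getOrCreateCache("kafka-offsets").get(partition);
    }
}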
Thanks!
-Dmitry
Hi,
Could you please explain how you update the database? Do you use a CacheStore with write-through or save manually?
In any case, you can update data with a custom expiry policy:
cache.withExpiryPolicy(policy) [1]
[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#
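For illustration, a rough sketch of a per-operation expiry policy (the cache name and TTL are placeholders):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ModifiedExpiryPolicy;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ExpiryExample {
    static void putWithTtl(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
        // Entries written through this proxy expire 10 minutes after the last update.
        cache.withExpiryPolicy(new ModifiedExpiryPolicy(new Duration(TimeUnit.MINUTES, 10)))
             .put(1, "value");
    }
}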
Hi,
Dynamic schema changes are available only via SQL/JDBC [1].
BTW, caches created via SQL can be accessed from the Java API if you prefix the table name with SQL_PUBLIC_. For example: ignite.cache("SQL_PUBLIC_TABLENAME").
[1] https://apacheignite-sql.readme.io/docs/ddl
Thanks!
-Dmitry
Hi,
Where did you find it? It might be a broken link.
Thanks!
-Dmitry
Hi,
There are no such limitations on peer class loading, but it was designed for, and works with, compute jobs, remote filters, and queries only. All unknown classes from tasks or queries will be deployed in the cluster with their dependencies according to the deployment mode [1]. Actually, with a job Ignite sends deployme
Hi,
Usually it's enough to open the ports for communication and discovery; their default values are 47500 and 47100.
If you run more than one node per machine, you'll need to open port ranges: 47500..47509 and 47100..47109.
You can always configure other values [1, 2]
[1]
https://ignite.apache.org/re
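For illustration, a rough sketch of setting the ports programmatically (the values shown are the defaults):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class PortConfig {
    public static IgniteConfiguration configure() {
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setLocalPort(47500);
        discovery.setLocalPortRange(10); // 47500..47509 for multiple nodes per machine

        TcpCommunicationSpi communication = new TcpCommunicationSpi();
        communication.setLocalPort(47100);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);
        cfg.setCommunicationSpi(communication);
        return cfg;
    }
}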
Hi,
You can, for example, set the SYNC rebalance mode for your replicated cache [1]. In that case all cache operations will be blocked until rebalancing is finished, and when it's done you'll have a fully replicated cache.
But this will block the cache on each topology change.
[1]
https://ignite.apache.or
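For illustration, a rough sketch (the cache name is a placeholder):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class SyncRebalanceConfig {
    public static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("replicatedCache");
        ccfg.setCacheMode(CacheMode.REPLICATED);
        // Cache operations block until rebalancing finishes on each topology change.
        ccfg.setRebalanceMode(CacheRebalanceMode.SYNC);
        return ccfg;
    }
}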
Hi,
A get() operation from a client always goes to the primary node. If you run a compute task on other nodes, where each does a get() for that key, it will read the local value. REPLICATED has many other optimizations, for example for SQL queries.
Thanks!
-Dmitry
Hi,
You will have only one copy of the data; the tables are needed for H2 to work correctly. When you run a query, H2 builds a query plan that is delegated to Ignite, which underneath gets the data from the cache.
Thanks!
Hi,
You need to hash not just the secret key, but "timestamp:secret_key", e.g.
1507726963290:LuM57LVuM3aN4tEjHF6XgkHo0fU=
where the hash is computed from "1507726963290:test".
Thanks!
Ankit,
Thanks for pointing out the mistake in the documentation. I've suggested edits for it.
Thanks!
Hi Franck,
Yes, client-side security is used here; it looks like it was made to allow different clients to connect with different permissions. But it depends on the GridSecurityProcessor. For example, it may have node validation logic that will not accept nodes with an unapproved security processor.
In
Hi,
The @IgniteInstanceResource annotation is the correct and best way to get the Ignite instance in a service.
Thanks!
Hi Raymond,
Could you please attach full log and config for failed node?
Thanks!
-Dmitry
Franck,
You're definitely right, but this is more like client roles than regular security.
By "they have a number of connected clients with actual applications" I meant that the user's application is connected to the grid via clients with their local permissions. But the end user cannot access the grid d
Raymond,
Without logs I only see that deserialization failed for some reason. Actually, I'm more interested in exceptions that come from Ignite's Java part, if any.
Thanks!
-Dmitry
Hi Alisher,
This issue is under active development:
https://issues.apache.org/jira/browse/IGNITE-3478
Thanks!
-Dmitry
Hi Ray,
Could you please attach dstat and GC logs for client and servers for 4 and
12 configurations (put operations would be just enough)?
dstat --top-mem -msgdtc --fs --top-io
Thanks!
-Dmitry
Hi Ray,
I've got the same results in my environment and I'm checking what happens.
Thanks!
-Dmitry
Hi,
In your code Ignite could not inject its instance, because you have two instances of your class: one in Ignite as a service, and the other is the object that processes requests in Jersey. So when you make an HTTP request, it goes to the Jersey instance.
How do you start Ignite? You may get ignite with
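For illustration, a rough sketch of looking up the already-started node from a Jersey resource instead of relying on injection; the resource class, path and cache name are hypothetical, and it assumes the node was started in the same JVM:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

@Path("/cache-size")
public class CacheSizeResource {
    @GET
    public String size() {
        // Looks up the default Ignite instance started in this JVM;
        // use Ignition.ignite("instanceName") if the node has a name.
        Ignite ignite = Ignition.ignite();
        return String.valueOf(ignite.cache("myCache").size());
    }
}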
Hi Ray,
I wasn't able to run the benchmarks quickly, but I've got the following results for atomic put throughput (the numbers are a bit lower than they could be because of profiling):
Throughput Cluster
264930 1c4s
513775 2c4s
968475 4c4s
281425 1c8s
530597
Hi Ray,
I've finally got results of query benchmarks:
4s1c 80725.80 80725.80
4s2c 78797.90 157595.80
4s4c 54029.70 216118.80
8s1c 64185.60 64185.60
8s2c 61058.10 122116.20
8s4c 34792.70 139170.80
The first column is the cluster configuration (in the 8-server variant, 2 nodes per machine), the second is the average thr
Hi,
Try to enable paired connections [1].
[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setUsePairedConnections(boolean)
Thanks!
-Dmitry
Hi again,
This looks quite similar to your issue, and it was fixed in 2.3 [1]. Check
it out.
[1] https://issues.apache.org/jira/browse/IGNITE-6071
Thanks!
-Dmitry
Hi,
How many nodes do you have and how do you measure those 70 ms? Is it the first query or an average time? Please show the EXPLAIN output for your query.
Thanks!
-Dmitry
Hi,
First of all, an Ignite object represents an Ignite node. Each such node may run more than one transaction, and this object is thread-safe. You may start only one transaction per thread, but the Ignite object can be safely shared between your threads.
Each transaction is bound to the thread that it's st
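For illustration, a rough sketch of one transaction per thread on a shared Ignite instance; the cache name and keys are placeholders, and a TRANSACTIONAL cache with both keys present is assumed:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

public class TxPerThread {
    static void transfer(Ignite ignite, int from, int to, double amount) {
        IgniteCache<Integer, Double> accounts = ignite.cache("accounts");
        // The transaction is bound to the current thread and closed with it.
        try (Transaction tx = ignite.transactions().txStart()) {
            accounts.put(from, accounts.get(from) - amount);
            accounts.put(to, accounts.get(to) + amount);
            tx.commit();
        }
    }
}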
Hi,
You have a few options here:
1) Write code that scans all tables in MySQL and loads the data to the grid with IgniteDataStreamer [1] (see the sketch below).
2) Write code that parses a MySQL CSV export and loads it to the grid using IgniteDataStreamer.
3) Use the existing CacheJdbcStore to preload data from MySQL (check out the screen casts [2] th
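For illustration, a rough sketch of option 1; the JDBC URL, table and key/value types are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class MySqlLoader {
    static void load(Ignite ignite) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://host/db", "user", "pwd");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM person");
             IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("personCache")) {
            while (rs.next())
                streamer.addData(rs.getLong("id"), rs.getString("name"));
        } // the streamer flushes the remaining data on close
    }
}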
Hi,
Please attach thread dumps from all cluster nodes.
Thanks!
-Dmitry
Hi,
How many records does your query return without LIMIT? How long does it take to select all records without grouping?
Thanks!
-Dmitry
Hi,
Is it possible that the version of the thin driver is different from the version of the cluster nodes? Does it happen on specific queries, or could it be on any of them?
Thanks!
-Dmitry
Hi,
Currently it's not possible. Why do you need such a possibility?
Thanks!
-Dmitry
Hi Indranil,
These measurements are not fully correct; for example, select count(*) might use only the index, and select * is not actually executed until you iterate over the cursor.
Also, by default a query is not parallelized on a single node, so a scan with grouping runs sequentially in one thread.
Hi,
It looks like the anonymous EntryProcessor captures excess data from its enclosing context. Try making it an inner static class and check the logs for exceptions on all nodes.
Thanks!
Hi,
Is it possible that a firewall configured to block DDoS attacks breaks the connection to the client node? I see two possible cases here:
1) A stop-the-world pause on the client, but then we should see a connection timeout exception;
2) The firewall rejects connections with large traffic, and now you're getting connection refuse
Is there any chance that you're using a Connection in more than one thread? It's not thread-safe for now.
Thanks!
-Dmitry
Hi Dmitriy,
1. You may use a node filter [1], specifically org.apache.ignite.util.AttributeNodeFilter, which can be configured in XML without writing code (see the sketch below).
2. Yes, you can. You need to configure data regions and set the persistenceEnabled flag. After that you may assign caches to those regions [2].
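For illustration, a rough sketch of item 1 in Java (the attribute name, value and cache name are placeholders):

import java.util.Collections;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.util.AttributeNodeFilter;

public class NodeFilterConfig {
    // Node side: mark the nodes that are allowed to keep the cache data.
    public static IgniteConfiguration nodeConfig() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setUserAttributes(Collections.singletonMap("cache.group", "groupA"));
        return cfg;
    }

    // Cache side: only nodes carrying the attribute will hold partitions.
    public static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("filteredCache");
        ccfg.setNodeFilter(new AttributeNodeFilter("cache.group", "groupA"));
        return ccfg;
    }
}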
Sure, I meant you need to create your own inner class:
private static class WorkflowEntryProcessor implements EntryProcessor<Object, Object, Object> {
    @Override
    public Object process(MutableEntry<Object, Object> entry, Object... arguments)
            throws EntryProcessorException {
        System.out.println("EntryProcessor started");
Glad to hear that it was helpful! I wrote the example right in the email, so I didn't have a compiler to check it :)
Thanks!
-Dmitry
Hi,
There are a few options:
1) You need to have backups to survive node loss. [1]
2) You may enable persistence to survive a grid restart and store more data than fits in memory. [2]
3) Check out the nohup command [3]
[1] https://apacheignite.readme.io/docs/primary-and-backup-copies
[2] https://apac
Hi,
It looks like your classes don't exist on all nodes. Please check that all classes you're using in the cache are available on all nodes.
Thanks!
-Dmitry
Hi,
Please attach thread dumps from all nodes taken at the moment of hang.
Thanks!
-Dmitry
Hi,
It's hard to say why it happens. I'm not familiar with MyBatis and don't actually know whether it shares a JDBC connection between threads. It would be great if you could provide a reproducible example that will help to debug the issue.
Thanks!
-Dmitry
Hi,
Discovery events are processed in a single thread, and cache creation uses discovery custom messages. Trying to create a cache in the discovery thread will lead to a deadlock, because the discovery thread will wait inside your lambda instead of processing messages.
To avoid it just start another thread in yo
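For illustration, a rough sketch of offloading cache creation from an event callback to another thread; the event type and cache name are placeholders, and EVT_NODE_JOINED is assumed to be enabled via IgniteConfiguration.setIncludeEventTypes():

import org.apache.ignite.Ignite;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;

public class AsyncCacheCreation {
    static void register(Ignite ignite) {
        ignite.events().localListen((Event evt) -> {
            // Do NOT call getOrCreateCache() here directly: this runs in the discovery thread.
            new Thread(() -> ignite.getOrCreateCache("dynamicCache")).start();
            return true; // keep listening
        }, EventType.EVT_NODE_JOINED);
    }
}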
Hi,
Anonymous and inner classes hold a link to the outer class object and might drag it into the marshaller. When you make it an inner static or separate class, you're explicitly saying that you don't need such links.
In thread dumps you need to look for waiting or blocked threads. In your
case in service node
Hi Svonn,
I'm not sure that I properly understand your issue. Could you please provide the problematic code snippet?
> is the policy also deleting the Map
Yes, if it was stored as a value.
Thanks!
-Dmitry
Hi Ranjit,
Those metrics should be correct; you may also check [1], because Ignite keeps data off-heap anyway. But if on-heap caching is enabled, it also caches entries in the Java heap.
[1] https://apacheignite.readme.io/docs/memory-metrics
Thanks!
-Dmitry
Hi Sharavya,
This exception means that the client node is disconnected from the cluster and is trying to reconnect. You may get the reconnect future from it (IgniteClientDisconnectedException.reconnectFuture().get()) and wait until the client is reconnected.
So it looks like you're trying to create a cache on stoppe
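For illustration, a rough sketch of waiting on the reconnect future (the cache name is a placeholder):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;

public class ReconnectAwait {
    static IgniteCache<Integer, String> createCache(Ignite client) {
        try {
            return client.getOrCreateCache("myCache");
        } catch (IgniteClientDisconnectedException e) {
            // Block until the client node reconnects to the cluster, then retry.
            e.reconnectFuture().get();
            return client.getOrCreateCache("myCache");
        }
    }
}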
Hi Shravya,
To understand what's going on in your cluster I need full logs from all nodes. Please share all the files if possible.
Thanks!
-Dmitry
Hi,
This exception says that the client node was stopped, but by default it should wait for the servers. In other words, it waits for reconnect; in this case it throws IgniteClientDisconnectedException, which contains a future on which you may wait for the reconnect event.
You may locally listen for EventType.EVT_CLI
Hi,
A transaction might not be the optimal solution here, as it is optimistic by default and may throw an optimistic transaction exception. I believe the best solution would be to use an EntryProcessor [1]; it will atomically modify the entry on both TRANSACTIONAL and ATOMIC caches on the affinity data node (that actua
Hi Jet,
Full-text search creates Lucene in-memory indexes, and after a restart they are not available, so you cannot use it with persistence. @QuerySqlField enables database indexes that are able to work with persisted data, and there is probably no way to rebuild them for now.
Thanks!
-Dmitry
Hi,
You may use a filter for that, for example:
ContinuousQuery<Object, Object> qry = new ContinuousQuery<>();
final Set<ClusterNode> nodes = new HashSet<>(client.cluster().forDataNodes("cache")
    .forHost(client.cluster().localNode()).nodes());
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Object, Object>>() {
Hi Prasad,
If you started Ignite with IgniteSpringBean or IgniteSpring, try the @SpringApplicationContextResource [1] annotation. Ignite's resource injector will use the Spring context to set a dependency annotated by it. But I'm not sure that this will work with a CacheStore; it should be rechecked.
[1]
ht
Jet,
Yep, this should work, but meanwhile this ticket remains unresolved [1].
[1] https://issues.apache.org/jira/browse/IGNITE-5371
Thanks!
-Dmitry
Hi Prasad,
This approach will work with multiple keys if they are collocated on the same node and you start/stop the transaction in the same thread/task. There is no other workaround.
Thanks!
-Dmitry
Hi Bryan,
You need to use a StatefulSet [1]; Kubernetes will start the nodes one by one as each comes into a ready state.
[1] https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Thanks!
-Dmitry
Hi Anshu,
This looks like a bug that was fixed in 2.4, try to upgrade [1].
[1] https://ignite.apache.org/download.cgi
Thanks!
-Dmitry
Hi Ray,
If your JVM process consumes more memory than is available, swapping may start and freeze the JVM, and, as a consequence, the node gets thrown out of the cluster. Check your free memory, disable swapping if possible, or increase IgniteConfiguration.failureDetectionTimeout.
To check that guess you may use dst
Duplicates
http://apache-ignite-users.70518.x6.nabble.com/Strange-node-fail-td21078.html.
Hi Naveen,
Unfortunately I'm unable to reproduce that error. Could you please attach a simple code sample/project that fails with the specified exception?
Thanks!
-Dmitry
Hi,
If you have enabled read-through mode for the cache, an entry will be loaded on the next IgniteCache.get() operation, or when IgniteCache.loadCache() is called.
Next time the entry will be evicted according to your eviction policy.
Please note that an entry will not be counted in SQL queries if it was evicte
Hi Ray,
I think the only way to do it is to use IgniteDataFrameSettings.OPTION_CONFIG_FILE and set the path to an XML configuration with all the settings you need. Here is a nice article about this [1]
[1]
https://medium.com/hashmapinc/apache-ignite-using-a-memory-grid-for-distributed-computation-frameworks
Hi Prasad,
This issue could not be completed in 2.5 as it has a low priority. As a workaround, you can wrap your executeEntryProcessorTransaction() method into an affinity run [1], and no additional value transfer will happen.
[1] https://apacheignite.readme.io/docs/collocate-compute-and-d
Hi Christoph,
This metric is not implemented because of its complexity. But you can find out how much space your cache or caches consume with DataRegionMetrics:
DataRegionMetrics drm = ignite.dataRegionMetrics("region_name");
long used = (long)(drm.getPhysicalMemorySize() * drm.getPagesFillFactor());
Hi Dome,
Could you please attach full logs?
Thanks!
-Dmitry
Hi,
Blocked threads only show that there are no tasks to process in the pool. Do you use persistence and/or indexing? Could you please attach your configs and logs from all nodes? Please take a few sequential thread dumps while the throughput is low.
Thanks!
-Dmitry
Hi Praveen,
Stack traces only show that a thread is waiting for a response. To get the full picture, please attach full logs and thread dumps from all nodes, taken at the moment of the hang. I need them from all nodes, because the actual issue happened on a remote node.
Also, according to the last exception, there might be c
Hi,
Ignite does cache queries; that's why the first request runs much longer than the rest.
Thanks!
-Dmitry
Hi Ankit,
No, Ignite uses sun.misc.Unsafe for off-heap memory. Direct memory may be used in the DirectBuffers used for communication. Usually the defaults are quite enough.
Thanks!
-Dmitry
Hi,
Yes, for a complex transaction this workaround will not work. So you need to either wait for the fix or avoid using EntryProcessor for now.
Thanks!
-Dmitry
Hi,
Yes, Ignite will send messages to all nodes, but you may use a filter:
ignite.message(ignite.cluster().forAttribute("topic1", Boolean.TRUE));
In this case messages will be sent to all nodes from the cluster group, in this example only the nodes that have the attribute "topic1" set [1].
[1]
https://igni
Hi,
It's hard to tell what's going wrong from your question.
Please attach full logs and thread dumps from all server nodes.
Thanks!
-Dmitry
Hi,
1. By default, get() will read from backups if the node on which it's invoked is an affinity node. In other words, if the current node has a backup, Ignite prefers to read the local data from the backup rather than requesting the primary node over the network. This can be changed by setting CacheConfiguration.setReadFromBackup(false).
Hi,
Could you please provide a reproducer? I don't get such an exception.
Thanks!
-Dmitry
Hi Mikael,
Please share your Ignite settings and logs.
Thanks!
-Dmitry
Hi,
TcpDiscoveryMulticastIpFinder produces such a big number of connections. I'd recommend switching to TcpDiscoveryVmIpFinder with a static set of addresses.
Thanks!
-Dmitry
Hi,
Make sure that your keys go to a specific partition; only one node keeps that partition at a time (except backups, of course). To do that, you may use the @AffinityKeyMapped annotation [1] (see the sketch below).
Additionally, you can implement your own AffinityFunction that will assign the partitions that you need t
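For illustration, a rough sketch of a key class with an affinity key field (the field names are placeholders; equals()/hashCode() are omitted for brevity):

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class CustomerKey {
    private long customerId;

    // All keys with the same regionId land in the same partition
    // and therefore on the same node (plus backups).
    @AffinityKeyMapped
    private int regionId;

    public CustomerKey(long customerId, int regionId) {
        this.customerId = customerId;
        this.regionId = regionId;
    }
}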
Normally (without @AffinityKeyMapped) Ignite will use the CustomerKey hash code (not the object's hashCode()) to find a partition. Ignite will consult the AffinityFunction: partition() tells it to which partition a key goes, and assignPartitions() finds the concrete node that holds that partition.
On the other han
There are various possible ways, but using one partition per node is definitely a bad idea, because you lose scaling possibilities: if you have 5 partitions and 5 nodes, then a 6th node will be empty.
It's much better if, in the AffinityFunction.partition() method, you calculate the node according to
1. The affinity function knows that, because it does the assignments. The assignPartitions() method returns those assignments. Please read the javadoc [1].
2. I just described how keys could be assigned to a partition. For example:
@Override public int partition(Object key) {
    if (key instanceof Integer)
There is no difference in how you start/stop your node.
On start, the node will examine all connections specified in the address list: it takes one address and port and tries to connect to it. If not successful, it takes another address and port. For instance, if you have the address 1.2.3.4:47500..47509, the node w
Hi,
Ignite keeps Tx cached values on-heap.
Thanks!
-Dmitry
Hi,
What IgniteConfiguration do you use? Could you please share it?
Thanks!
-Dmitry
Hi,
I totally agree with Val that implementing your own AffinityFunction is quite a complex way. The requirement you described is called affinity co-location, as I wrote before.
Let me explain in more detail what to do and what the drawbacks are.
1. Use @AffinityKeyMapped for all your keys. For exa