Hi Jon,
First of all, you don't have to use multicast for discovery. Using static IP
configuration or some other shared IP finder might simplify the setup:
https://apacheignite.readme.io/docs/tcpip-discovery
Second of all, I'm not sure I fully understand what you're trying to
achieve. Are both nod
You should use continuous queries for this:
https://apacheignite.readme.io/docs/continuous-queries. They provide
exactly-once notification semantics with ordering guarantees.
-Val
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
You cannot include JARs inside another JAR; Java will not add them to the
classpath. You should list all required JARs in the '-cp' parameter, or
create an uber-JAR with all the dependencies unpacked into it.
-Val
Your server node is bound to port 47501 rather than 47500 for some reason
(most likely the latter was occupied by some other process). Try adding a
port range to the IP finder on the client side. Replace this line:
ipFinder.setAddresses(Arrays.asList("127.0.0.1", "172.20.98.77"));
with this:
ipFinder.setA
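With a discovery port range, the client-side addresses might look like this (the 47500..47509 range is an assumption):

```java
// Hypothetical completion: add a discovery port range to each address so the
// client can reach a server bound to any port within 47500..47509.
ipFinder.setAddresses(Arrays.asList(
        "127.0.0.1:47500..47509",
        "172.20.98.77:47500..47509"));
```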
Anand,
I don't think it's a version issue. As I mentioned earlier, your server was
bound to 47501 according to your logs. I believe after the restart it bound
to 47500 and your client was able to connect.
-Val
This test stops working though if you replace line 30 with this:
builder.setField("set", Collections.unmodifiableSet(Sets.newHashSet("a",
"b", "c")));
If an unmodifiable set is written, it's then read back as an unmodifiable
set as well, and therefore can't be modified. I believe this is the reason for the
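The behavior can be reproduced with plain JDK collections (a minimal sketch; Guava's Sets.newHashSet is replaced with java.util.HashSet here):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class UnmodifiableSetDemo {
    public static void main(String[] args) {
        // Wrap a mutable set into an unmodifiable view, as in the test case.
        Set<String> set = Collections.unmodifiableSet(
                new HashSet<>(Arrays.asList("a", "b", "c")));
        try {
            set.add("d"); // any mutation attempt on the view fails
        } catch (UnsupportedOperationException e) {
            System.out.println("set is unmodifiable");
        }
    }
}
```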
You can try using UNION for this:
(select * from cache1 where agid = 100 limit 2)
union
(select * from cache1 where agid = 101 limit 2)
https://gist.github.com/vkulichenko/8de603b28aa784ede84150614003e3a6
-Val
Roger,
When exactly do you get this exception and what are the steps to reproduce
it? Do you change the set of values in the enum? Do you restart the cluster
when doing this?
Ideally, it would be great if you provide a reproducer that we can just run
to recreate the problem. That would help to ge
Sounds like you're looking for the MERGE command:
https://apacheignite-sql.readme.io/docs/merge
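A minimal sketch of the syntax, assuming a hypothetical Person table; MERGE inserts the row if the key is absent and overwrites it otherwise:

```sql
-- Hypothetical table and columns; MERGE acts as insert-or-update by key.
MERGE INTO Person (id, name, city) VALUES (1, 'John', 'Denver');
```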
-Val
The error means that there is no such class on the classpath. Please check
whether that's the case.
You can also provide your project and steps to reproduce the issue, and I
will take a look.
-Val
If you're using a database that doesn't have a driver in the Maven Central
repo (SQL Server is one example), you should copy the driver manually into
the 'jdbc-drivers' folder. Please refer to the README.txt file included in
the project for details.
-Val
Hi Dave,
Currently there is no way to change any cache configuration parameters
except the ones you mentioned - turning statistics on/off and modifying the
schema using DDL.
You can't change persistence settings at runtime either, because that would
require changing the data region the cache is assign
The point here is that Ignite stores data in binary format, and
deserialization happens only on the client side. Therefore, you can start
server nodes without deploying the Person and Organization classes there,
and the example will still work. On top of that, you can change the class
definition on client side
You're iterating through some collection of key-value pairs and then using
the value from that pair within the entry processor, which is incorrect. You
should always use the one acquired via entry.getValue() there.
I would actually recommend creating a separate class instead of using a
lambda for en
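A sketch of what such a class might look like, assuming a cache with Integer keys and Long values (requires the JCache API on the classpath; all names here are hypothetical):

```java
import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;

// A named class makes it explicit what state gets serialized, unlike a
// lambda that may capture the surrounding iteration variables.
public class IncrementProcessor implements EntryProcessor<Integer, Long, Long> {
    @Override
    public Long process(MutableEntry<Integer, Long> entry, Object... args) {
        // Always read the current value from the entry itself,
        // never from a key-value pair captured outside the processor.
        Long current = entry.getValue();
        Long updated = (current == null) ? 1L : current + 1;
        entry.setValue(updated);
        return updated;
    }
}
```

It would then be invoked as cache.invoke(key, new IncrementProcessor()).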
Roger,
To be able to change the schema at runtime, you need to make sure there are
no model classes deployed on server nodes and therefore no deserialization
happens. Since you run in embedded mode and have only server nodes, you
actually can't use POJOs in your data models at all. You should
You're using the multicast IP finder, so nodes discover each other via
multicast even if the port ranges do not intersect. Try switching to the
static IP finder:
https://apacheignite.readme.io/docs/tcpip-discovery#section-static-ip-finder
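For reference, a static IP finder section of the Spring configuration might look like this (addresses and port ranges are placeholders):

```xml
<!-- Sketch: static IP finder in place of multicast. -->
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>10.0.0.1:47500..47509</value>
            <value>10.0.0.2:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
```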
-Val
Prasad,
The affinity key is always part of the primary key. For example, in your
case the primary key consists of two fields - id and subscriptionId.
Therefore you can query by the primary key without providing the affinity
key. On the other hand, however, a query can be routed to a single node even if only the affin
There are several ways to do this.
The first one is shown in your first message - a DefaultDataAffinityKey
instance plays the role of the primary key, while the enclosed affinityId
field plays the role of the affinity key, because it's annotated with
@AffinityKeyMapped.
In case you don't have classes at all and therefo
Prasad,
In this case using subscriptionId in the query would be a syntax error,
because the name of the field is affinityId. If you use affinityId, however,
Ignite will route the query to a single node. It knows that it's the
affinity key based on the @AffinityKeyMapped annotation.
-Val
Serialization of the filter must happen, because it's invoked on server
nodes and therefore needs to be instantiated there. However, peer class
loading can be used in production. If you see this callout in the
documentation, you're probably looking at an old version of it.
-Val
Hi Prasad,
1. The first plan is for the map phase (executed on the server side) and the
second one is for the reduce phase (executed on the client side). Merge scan
means that it just merges result sets from all participating server nodes.
Sometimes it can contain additional reduce steps like final groupings, ordering
The Ignite community is helpful even to those who leave :)
Prasad,
1. Yes, you will always see it in the execution plan because there are
always two stages. Of course, if only one server node participates in the
execution, the reduce stage is effectively a no-op.
2. Yes, you need to add the annotation to make sure you can access it from
queries. I would re
Prasad,
To achieve the best performance, the join criteria should match the
collocation criteria. E.g., if you join by userId, then userId should be
used as the affinity key. Please learn more about distributed joins here:
https://apacheignite-sql.readme.io/docs/distributed-joins
-Val
Why not use the SQL API and execute a 'select *', since that's what you're
actually looking for?
-Val
You're confusing the object schema and the SQL schema. By adding those
fields via the binary object builder, you amended the former, but not the
latter. The SQL schema stayed the same, with (a,b) being key fields not
present in the value.
I would not recommend doing this, as you can end up having weird issues (for ex
Hi Alper,
You can create two separate data regions [1], one for the data within the
threshold, another for the data outside it. The latter can have very little
memory allocated and persistence enabled, which would mean that data in
this region is stored only on disk for the most part.
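Such a setup could be sketched like this (region names and the size are assumptions):

```xml
<!-- Sketch of two data regions inside IgniteConfiguration. -->
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="dataRegionConfigurations">
      <list>
        <!-- In-memory region for the data within the threshold. -->
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="hotRegion"/>
        </bean>
        <!-- Small persistent region: most of its data lives on disk. -->
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="coldRegion"/>
          <property name="maxSize" value="67108864"/> <!-- 64 MB -->
          <property name="persistenceEnabled" value="true"/>
        </bean>
      </list>
    </property>
  </bean>
</property>
```

Each cache then picks its region via CacheConfiguration#setDataRegionName.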
-Val
Gordon,
The Ignite thin client uses a request-response model, which is not really
suitable for functionality like this. I would never say never, but I think
it's very unlikely that the thin client will get any feature that implies
pushing updates from server to client (this includes near caches, any type of
Does it work without specifying sessionAffinity?
-Val
Gordon,
Yes, generally we do recommend using the thin client for such applications.
However, it doesn't mean that you can't use a client node in case it's
really needed, although it might require some additional tuning.
Would you mind telling me if you have any other technology in mind? I
highly doubt that
Jeff,
An Ignite configuration is an XML file which can be quite large. What is the
reason for the requirement to specify it inline?
-Val
Gaurav,
Web Console receives updates from the web agent, which periodically polls
the cluster.
-Val
Gordon,
Generally, having CQ on the thin client would definitely be awesome. My
only point is that the thin client has several technical limitations that
would introduce multiple "ifs" into the functionality. What exactly those
ifs are, and whether there is still value with all those ifs, is a big questio
Ignition.start is supposed to start an Ignite instance, so passing a
spring-cache.xml file that doesn't contain any Ignite configuration doesn't
make sense. The SpringCacheManager bean should be part of the Spring
Application Context; it will then be used as an entry point to the Ignite
cluster. It looks
You need to make sure Ignite is started *before* SpringCacheManager.
Basically, you have two options:
1. Start Ignite manually using Ignition.start, and then start Spring app
with SpringCacheManager that references already started Ignite via
'igniteInstanceName' property.
2. Incorporate IgniteConf
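Option 1 could be sketched like this (the instance name is an assumption and must match the name of the Ignite instance started via Ignition.start):

```xml
<!-- Ignite itself is started separately, before this context loads. -->
<bean id="cacheManager" class="org.apache.ignite.cache.spring.SpringCacheManager">
  <property name="igniteInstanceName" value="myGrid"/>
</bean>
```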
Monil,
You can also use IgniteSpringBean to initialize Ignite separately from
SpringCacheManager, but within the Spring context.
So you need to configure the following beans:
1. PercentageDataRespository
2. IgniteConfiguration
3. IgniteSpringBean (IgniteConfiguration is injected via 'configuration'
prope
Ray,
Per my understanding, pushdown filters are propagated to Ignite either way;
it's not related to the "optimization". Optimization affects joins,
groupings, aggregations, etc. So, unless I'm missing something, the
behavior you're looking for is achieved by setting
OPTION_DISABLE_SPARK_SQL_OPTIMI
Can you show the full stack trace? Most likely it's failing on the server
side because you don't actually have the class there.
-Val
If a join is indexed and collocated, it can still be pretty fast. Do you
have a particular query that is slower with the optimization than without?
-Val
Does the configuration file exist on the worker nodes? It looks like Ignite
actually fails to start there for some reason, and then you eventually get
this exception. Are there any other exceptions in the worker/executor logs?
-Val
Ilya,
Do you know the reason for such a limitation? It doesn't sound right to me;
I believe any other marshaller would work just fine with final fields.
-Val
Ray,
This sounds suspicious. Please show your configuration and the execution
plan for the query.
-Val
Probably that's the issue :) In any case, Java serialization successfully
deserializes such objects, so I think it's a bug.
-Val
Prasad,
Since you're using withNoFailover(), failover will never happen and the
task will just fail with an exception on the client side if the primary
node dies. It's up to your code to retry in this case.
When you retry, the task will be mapped to the new primary, which is the
former backup and therefore
Prasad,
When a primary node for a partition dies, the former backup node for this
partition becomes the new primary. Therefore there is no need to wait for
rebalancing in this case; the data is already there. By default the job
will be automatically remapped to that node, but with 'withNoFailover()'
you'll have t
Prasad,
prasadbhalerao1983 wrote
> Are you saying that when a primary node dies the former backup node
> becomes
> new primary for ALL backup partitions present on it and only primary
> partitions are moved in rebalancing process?
Not for all partitions, but only for those for which primary copy
This is a known issue: https://issues.apache.org/jira/browse/IGNITE-9229.
Looks like it didn't make it into 2.7, but hopefully someone in the
community will pick it up and fix it soon.
-Val
James,
That would be enough - everything with the same affinity key will be stored
on the same node. That's assuming, of course, that both caches have equal
affinity configuration (same affinity function, same number of partitions,
etc.).
-Val
Naresh,
This is correct behavior. Creating a new cache triggers a global exchange
process which can't be done concurrently with transactions, so if you
create a cache synchronously within a transaction, you would create a
deadlock. You should create all caches required for a transaction prior to th
getOrCreateCache can cause a deadlock only under certain circumstances, in
which case we throw an exception for this particular reason. The scenarios
you describe should not cause a deadlock; please attach thread dumps from
all the nodes if you believe you have one.
-Val
Yes, in this mode fsync is applied for every update. LOG_ONLY mode writes
updates to FS buffers, but not necessarily to disk itself, so it's
obviously faster. However, in this mode there is a chance of losing some of
the updates in case of power loss (but not in case of node process failur
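For reference, the WAL mode is part of the data storage configuration; a minimal sketch (this property goes inside the IgniteConfiguration bean):

```xml
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- FSYNC forces a disk sync on every update; LOG_ONLY writes to FS buffers. -->
    <property name="walMode" value="FSYNC"/>
  </bean>
</property>
```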
Hi Andrey,
No, you can't index data that is inside a collection. To achieve that, you
should create a separate entry for each element of the collection and save
them in a cache separately.
-Val
Hi,
Yes, you can do this; just provide the field name and its type in the
QueryEntity#fields map. Is there anything in particular that doesn't work?
-Val
Andrey,
You should remove collections from the POJO and create two new types (e.g.
StringValue and IntValue) that reference the parent POJO by ID. Something
like this:
StringValue {
    String val;
    int pojoid;
}
This way you will be able to index StringValue#val and join it with Pojo.
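In SQL terms the lookup could then be expressed as (table and column names are assumptions):

```sql
-- Hypothetical schema: StringValue rows reference the parent Pojo by id.
SELECT p.*
FROM Pojo p
JOIN StringValue s ON s.pojoid = p.id
WHERE s.val = ?;
```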
-Val
Andrey,
I think it's possible in theory, but I doubt it will appear in the product
any time soon. Unless you want to contribute, of course :)
-Val
As Vladimir mentioned, this is one of the features that is simply not
implemented yet. Transactional support for SQL is in development right now.
Hopefully we will have it sometime this year, or perhaps early next year.
-Val
No, reads are not appended to WAL. It is designed for recovery, not for
auditing purposes.
-Val
Looks like you use Scala 2.10 and the corresponding Spark libraries. In
that case you should use the 'ignite-spark_2.10' Ignite module instead of
just 'ignite-spark'.
-Val
Your code looks correct; tx.rollback() will roll back the whole
transaction. Is there anything in particular that is not working?
-Val
Anji,
A daemon node means that most of the components for this node are disabled,
and it is also not publicly visible as part of the topology. The only use
case for it I can think of is some kind of monitoring tool which connects
to the cluster and silently gathers metrics. Visor uses exact thi
Hi Patrick,
Actually, Ignite by itself provides very rich SQL support [1]. So it sounds
like you can achieve your goals even without Spark: save all the data in
the Ignite cluster, probably with persistence enabled [2], and run SQL
against this data. You will be able to create any arbitrary indexes and
Hi,
The cache is destroyed immediately with all its data. However, this does
not deallocate memory. The memory becomes free for other data and other
caches, so Ignite can reuse it, but it is not returned back to the OS.
-Val
Your standalone installation allows you to have multiple accounts. Just
register to create one and use it going forward.
-Val
When you log in and download the agent from the console, it already puts
the correct token into the config.
So the steps are the following:
- Install Web Console
- Sign up on the installed instance to create a new account
- Log in with this new account
- Download the web agent and run it
-Val
Hi,
Do you have indexes configured and, if yes, are they applied properly to
the query? Did you check the execution plan?
It sounds like your query has to scan the whole cache, which gets slower
with backups. Can you provide your full cache configuration and data model?
-Val
Hi Matt,
A CacheStore implementation usually should not be serialized in the first
place (actually, I doubt it would be possible to properly serialize an HTTP
client instance). In CacheConfiguration you provide a Factory instead, so
you can provide your own implementation that will correctly create
Matt, Rick,
Can you show stack traces for these two invocations of the create() method?
It sounds like a bug to me.
-Val
Backup copies are stored along with primary copies in the same storage and
indexed by the same indexes. As a matter of fact, any backup copy can
become a primary copy at any moment due to a topology change. Therefore, if
there is a scan, the amount of data you have to go through doubles when you add
Hi John,
This is an implementation detail. If you're interested in such things
(which is great, btw!), I would recommend addressing the question to the
dev list instead.
-Val
Hi Ranjir,
That's a known problem with embedded mode. Spark can start and stop
executor processes at any moment, and having embedded Ignite server nodes
there means frequent rebalancing and potential data loss. I would recommend
using standalone mode instead.
-Val
EntryProcessor is used in IgniteCache#invoke which is a cache operation that
works with a particular entry. Service Grid can be used to deploy any kind
of services on the grid. These are two different features serving completely
different purposes.
-Val
Discovery events are essential, so they are always enabled, regardless of
configuration.
As for "last disconnection, or first connection", can you please clarify
what you mean by this? What's the use case behind this?
Also take a look at this page about client reconnection:
https://apacheignite.r
You should use disconnect/reconnect events in this case, as described in
the provided doc. They are designed exactly for your scenario.
-Val
Hi Hugh,
Containers mentioned in the list are just the ones that were tested by the
Ignite community. Actually, only standard servlet APIs are used, so it's
supposed to work with any container.
I would recommend trying to set this up and letting us know if you run into
any specific issues.
-Val
Ranjit,
You can use it with or without Yarn. The point is that the Ignite cluster
is independent from the Spark cluster, i.e. it runs in separate processes.
-Val
Hi Frank,
Sorry for the late response. SecurityCredentialsProvider is not used in
Ignite code because Ignite doesn't provide any implementations of
SecurityProcessor out of the box. If you want to make your cluster secure,
you need to implement it and configure it as part of your own custom
plugin. The
Hi John,
1. A partition is basically just a portion of the data mapped to a
particular node. Every key in a cache is assigned to a certain partition.
By default there are 1024 partitions and generally there is no need to
change this value. But if needed, it can be done via
RendezvousAffinityFunction#pa
Hi Chris,
Which version of Guava are you using?
-Val
Saji,
To do a join, the caches must reside in the same cluster. What is the
reason for creating separate clusters in the first place?
-Val
Amit,
1. On all nodes, clients and servers.
2-3. The easiest way is to create a JAR file with these classes and put it
under IGNITE_HOME/libs prior to start. This JAR will be picked up
automatically.
-Val
No, this is not implemented yet. Here is the ticket where you can track the
progress: https://issues.apache.org/jira/browse/IGNITE-3478
-Val
The easiest way is to generate all the required configuration using the Web Console tool:
https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
-Val
You should provide more information. What is your configuration? How do you
use the cluster (data, compute, services, ...)? What is consuming memory?
Did you try to analyze GC logs and heap dumps?
-Val
Hi Chris,
Can you define "prematurely"? What exactly is required for a node to be
ready to execute jobs? From Ignite's perspective, it is ready as long as
it's in the topology at the discovery level, and that's when it starts
receiving requests.
BTW, with AdaptiveLoadBalancingSpi you can implement custom
A
Hi,
You should specify the type name instead of the cache name when creating
the query:
SqlQuery sql = new SqlQuery("person", "id > ? and id < ?");
Chris,
Sorry, I still don't understand. What exactly needs to happen? Is there
anything on the Ignite level that is not initialized when the job arrives
for execution?
-Val
Sumanta,
This can be specified via CacheKeyConfiguration:
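A sketch of such a configuration inside the IgniteConfiguration bean (the key type and affinity field names are assumptions):

```xml
<property name="cacheKeyConfiguration">
  <list>
    <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
      <!-- Key type name and its affinity key field (hypothetical names). -->
      <constructor-arg value="com.example.SubscriptionKey"/>
      <constructor-arg value="subscriptionId"/>
    </bean>
  </list>
</property>
```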
-Val
Amit,
These events appear on the segmented node because it considers the others
to be failed. The segmentation policy for sure affects only the local node,
and I believe your other nodes are up and running (unless, of course, they
had their own problems).
-Val
Hi Kenan,
SQL queries are executed against the data that is in memory. Read-through
semantics work for key-based access only. If you need to query data that is
both in memory and on disk, I'd recommend taking a look at Ignite native
persistence:
https://apacheignite.readme.io/docs/distributed-per
Hi Luqman,
I believe there is no such functionality and I saw you already created a
corresponding thread on dev list. You should receive some responses there
soon.
-Val
Try replacing the lambda with a static class. Most likely it currently
serializes objects that you do not expect to be serialized, like
KafkaFileConsumer and its fields. With a static class you will be able to
make sure that only the required data is included.
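The capture problem can be demonstrated with plain Java serialization (a minimal sketch; HeavyConsumer is a hypothetical stand-in for something like KafkaFileConsumer):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class LambdaCaptureDemo {
    // Stand-in for a heavyweight, non-serializable object.
    static class HeavyConsumer {
        String name = "consumer";

        Runnable badClosure() {
            // The lambda references an instance field, so it captures 'this'
            // (the whole HeavyConsumer) and serialization fails.
            return (Runnable & Serializable) () -> System.out.println(name);
        }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        try {
            serialize(new HeavyConsumer().badClosure());
        } catch (NotSerializableException e) {
            System.out.println("lambda dragged in: " + e.getMessage());
        }
        // A closure with no captured state serializes cleanly,
        // just like a static class holding only the data it needs.
        Runnable good = (Runnable & Serializable) () -> System.out.println("ok");
        serialize(good);
        System.out.println("state-free closure serialized fine");
    }
}
```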
-Val
Krzysztof,
This will not work because the service is invoked on the server side and
the returned future gets serialized and sent to the client. The
deserialized instance is obviously never completed. What are you trying to
achieve?
-Val
Alexey,
Something is wrong, but I don't see any obvious mistakes in your code. Is
it possible to provide a test as a standalone GitHub project so that I can
run it and reproduce the problem?
Is it reproduced on smaller data sets? Or if you load not through Spark,
but just do regular put/putAll ope
This improvement makes sense, but I doubt there are any plans to implement it
at the moment. Feel free to create a ticket in Jira.
-Val
You can use the client-node-based driver [1] to get full failover support.
The thin client currently indeed goes through a single node, and I believe
it makes sense to at least provide an ability to specify multiple addresses
for the connection. I created a ticket for this:
https://issues.apache.org/jira/bro
Alexey,
The point of a service proxy is that you use your own API to do remote
invocations. I don't think adding this generic "reflection-like" API makes
sense here. For asynchronous executions I would just use the Compute Grid.
-Val
The original issue [1] was fixed long ago, but there is another one [2] that
was fixed in 2.4 (so not released yet).
[1] https://issues.apache.org/jira/browse/IGNITE-1026
[2] https://issues.apache.org/jira/browse/IGNITE-6437
-Val
Looks like a regression bug, created a ticket:
https://issues.apache.org/jira/browse/IGNITE-7055
-Val
First of all, you should verify that the nodes discover each other (search
the logs for 'Topology snapshot' lines). Your configuration looks a bit
weird; try providing it like this:
10.
Hi Kenan,
Nothing has changed there; the ticket is still open:
https://issues.apache.org/jira/browse/IGNITE-4555
-Val