Great, please let us know if you face any issues and we will add it to
the documentation.
Evgenii
2018-01-24 23:43 GMT+03:00 bintisepaha:
> Thanks Evgenii. We will let you know how it goes.
>
>
>
>
Hi all,
By the way, I run two nodes on localhost, and the multicastGroup IP and port
are the default settings in example-cache.xml.
Hi Rick,
Did you have any luck resolving this?
Or do you still observe the issue when configuring the ipFinder via API?
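In case it helps, configuring it via the API looks roughly like this (a
minimal sketch; the multicast group value is the default from the example
configs, adjust as needed):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

// Multicast-based IP finder, equivalent to the example-cache.xml defaults.
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setMulticastGroup("228.10.10.157");

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);

Ignite ignite = Ignition.start(cfg);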
On Thu, Jan 25, 2018 at 11:29 AM, wrote:
> Hi all,
>
>
>
> By the way, I run two nodes on localhost, and the multicastGroup IP and
> port are the default settings in example-cache.xml,
Hi all,
I am facing an error while running sqlline.sh.
I am running the following commands:
./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/
!tables
Error Stack trace is as below:
Error: Failed to handle JDBC request because node is stopping.
(state=5,code=0)
java.sql
Hi Andrey,
1. There are no other running nodes besides the two nodes I started.
2. I first started one node (shell script), and then started the other
node (maven.project.java).
I closed the other node (maven.project.java) and the one node was still
running. The program r
Hi Rahul,
Could you please share full logs from Ignite - we need to check its status.
Evgenii
2018-01-25 12:34 GMT+03:00 Rahul Pandey:
> Hi all,
>
> I am facing an error while running sqlline.sh
>
> I am running the following commands:
> ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:t
Rick,
Looks OK.
You ran two nodes, then you killed one, and the other node reported that the
killed node was dropped from the grid.
What is the issue?
On Thu, Jan 25, 2018 at 12:38 PM, wrote:
> Hi Andrey,
>
>
>
> 1. There are no other running nodes besides the two nodes I started.
>
>
>
> 2. If I f
Hi,
I do not know where to find the complete logs for sqlline.
The list of steps I am following is:
1. Starting one Ignite server using the ignite.sh script with no XML
configuration.
I have attached the logs for this step.
2. Starting sqlline with the following command:
./sqlline.sh --color=tru
The default port for the JDBC thin driver is 10800, while this node started on:
Local ports: TCP:10801 TCP:11211 TCP:47101 UDP:47400 TCP:47501
It's possible that you have some problematic node started on port 10800.
OR
As I see, you have Topology snapshot [ver=6024, servers=1, clients=0,
CPUs=64, heap=1.0G
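If the node you want is the one on 10801, connecting with an explicit port
should work:

./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1:10801/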
Hi Denis,
Thank you for the information.
Regards,
Shravya.
I have no idea why I am getting this exception. It occurs when I try to do
cache.put(...):
class org.apache.ignite.binary.BinaryObjectException: Binary type has different
affinity key fields [typeName=no.toyota.gatekeeper.ignite.key.CredentialsKey,
affKeyFieldName1=id, affKeyFieldName2=username
Thanks Evgenii, that was quick.
On Thu, Jan 25, 2018 at 3:50 PM, Evgenii Zhuravlev wrote:
> The default port for the JDBC thin driver is 10800, while this node started on:
> Local ports: TCP:10801 TCP:11211 TCP:47101 UDP:47400 TCP:47501
> It's possible that you have some problematic node started on port 10
Hi,
I am running Ignite servers on a YARN cluster, say on "host1", with the
properties in the attached ignite-cluster.properties. All server nodes are
running on this host.
The servers are running with persistent storage enabled; see the attached
ignite-config.xml configuration file.
When I start a main program from an
Hi,
We are trying to create tables on one particular ClusterGroup with 3 server
nodes: Node1, Node2, Node3. On all these 3 nodes we have set the following
configuration in default-config.xml.
In the client-side code, we are trying to setNodeFilter for the
cac
Hello,
Write synchronization mode is one of the crucial cache configuration settings
and cannot be changed after a cache has been created.
By default, the SQL engine uses the FULL_SYNC synchronization mode [1].
You can choose the required mode by specifying an additional parameter as
follows:
CREATE TA
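For illustration, a complete statement of that shape would look like this
(a sketch; the table and columns are made up):

CREATE TABLE Person (
  id BIGINT PRIMARY KEY,
  name VARCHAR
) WITH "write_synchronization_mode=FULL_ASYNC";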
I simply stopped my cluster and started it again; when I try to activate it,
I get the following:
[12:50:26,308][SEVERE][sys-#38][GridTaskWorker] Failed to obtain remote job
result policy for result from ComputeTask.result(..) method (will fail the
whole task): GridJobResultImpl [job=C4
[r=o.a.i.i
Hi Thomas,
Could you please share a small code snippet of your cache configuration/cache
creation? Do you use DDL for that?
I guess that you need to define the affinity key field using upper case:
public class CredentialsKey {
    @QuerySqlField(index = true)
    @AffinityKeyMapped
    private String USERNAME;
    ...
}
Hi all.
I found that IgniteCache.withAsync() is deprecated in v2.3.0.
How do I execute multiple SqlFieldsQuery instances asynchronously now?
Hello!
It seems that an issue was filed about the problem you are seeing:
https://issues.apache.org/jira/browse/IGNITE-7512
I can see that there's already work underway to fix it.
Regards,
--
Ilya Kasnacheev
2018-01-23 4:53 GMT+03:00 Lucky:
> Sorry, the fid is not UUID in tmpCompanyCuBaseD
Hi Humphrey,
>What will happen if at a later point I want to scale back up to 4
replicas?
>- So what will happen with the data it finds in the existing directory (and
probably are old), how does it handle this?
It depends on how long the node was down. If it was a short period of
time an
Hi Slava,
I did create the cache using DDL.
CREATE TABLE UserCache (
id long,
username varchar,
password varchar,
PRIMARY KEY (username, password)
)
WITH "template=partitioned, affinitykey=username, cache_name=UserCache,
key_type=no.toyota.gatekeeper.ignite.key.Cr
Hi!
I've got some issues I'm struggling to debug properly:
I'm receiving two streams; each has a binary object as the key and a binary
object as the value.
The keys are built with an Extractor (the data coming from Kafka has a String
as the key).
When I'm simply starting my stack, everything runs fine - when I
Hi Shravya,
This exception means that the client node is disconnected from the cluster
and is trying to reconnect. You may obtain the reconnect future from it
(IgniteClientDisconnectedException.reconnectFuture().get()) and wait until
the client is reconnected.
So it looks like you're trying to create a cache on stoppe
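In code, handling it looks roughly like this (a minimal sketch; the cache
name is made up):

try {
    IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
} catch (IgniteClientDisconnectedException e) {
    // Wait until the client reconnects to the cluster, then retry.
    e.reconnectFuture().get();
    IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
}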
Hi Thomas,
The CREATE TABLE statement doesn't use your template because you specified
the predefined one, 'partitioned' (template=partitioned).
When using replicated mode, the following property does not make sense.
Could you please share a full reproducer? I will try it on my side and come
bac
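For reference, a custom template usually has to be registered on the nodes
before DDL can reference it (a sketch; the template name is made up):

// Register a cache configuration template on the node.
CacheConfiguration<Object, Object> tpl = new CacheConfiguration<>("myReplicatedTpl");
tpl.setCacheMode(CacheMode.REPLICATED);
ignite.addCacheConfiguration(tpl);

Then in DDL: CREATE TABLE City (id BIGINT PRIMARY KEY, name VARCHAR) WITH
"template=myReplicatedTpl";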
Hi Svonn,
It would be really helpful if you could prepare a small reproducer (for
example, a Maven project) and upload it to GitHub.
Is that possible?
Thanks!
Hi!
I actually fixed the issue, even though I'm still not 100% sure why it
caused this:
The Kafka connector has a setting called "tasks.max", which I had set to a
number higher than 1.
After setting tasks.max=1, I can process all the data I want without any
issues - I assume it somehow can't use the extra
Hi,
This method is not marked with the IgniteAsyncSupport annotation and
therefore cannot be used with asynchronous mode enabled on the Ignite API.
I mean that the following code throws an IllegalStateException:
IgniteCache asyncCache = cache.withAsync();
QueryCursor cursor = asyncCache.query(sqlFi
Any pointers, please?
Thanks,
Rajes
On Thu, Jan 25, 2018 at 10:07 AM, Rajesh Kishore wrote:
> Hi All,
>
> Wanted to know - does Ignite support unique or non-unique indexes?
> I have a requirement to create a non-unique index on a field / group of fields.
> What's the way to do it?
>
> Also, with the EXPL
Alexey,
I'm wondering if you had a chance to look into this? I'd like to understand
what to expect in terms of node activation time and how it's related to the
data volume.
Thanks!
Andrey
From: Andrey Kornev
Sent: Monday, January 22, 2018 11:36 AM
To: Alexey Gonc
When you create a table via SQL, you already fully describe its schema, so
there is no need for QueryEntity. Can you clarify what you're trying to
achieve?
-Val
Rajesh,
Ignite has only non-unique indexes. For information on how to create them,
please refer to the documentation: https://apacheignite-sql.readme.io/docs.
You can do this either via the cache configuration or using the CREATE INDEX
command, depending on your use case.
As for the logging, here is some i
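For example, a non-unique index via DDL is simply (table and column are
made up):

CREATE INDEX idx_person_name ON Person (name);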
Queries are actually always async, meaning that the query method itself
doesn't return any data: you get a cursor, and the data is then fetched as
you iterate.
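A sketch of what that looks like (the cache and query are made up):

IgniteCache<?, ?> cache = ignite.cache("Person"); // hypothetical cache
SqlFieldsQuery qry = new SqlFieldsQuery("SELECT name FROM Person");
// query() returns immediately; rows are fetched lazily during iteration.
try (QueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor)
        System.out.println(row.get(0));
}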
-Val
Ganesh,
The thin driver uses one of the nodes as a gateway, so once you add a second
node, half of the updates will have to make two network hops instead of one,
so a slowdown is expected. However, it should not get worse when you add a
third, fourth, and so on, node.
The best option for this case would
Hello Apache Ignite Community,
I am currently working with Ignite and Spark; I'm specifically interested in
the Shared RDD functionality. I have a few questions and hope I can find
answers here.
Goal:
I am trying to have a single physical page with multiple sharers (multiple
processes map to the
Umur,
When you talk about "physical page mappings", what exactly are you referring
to? Can you please elaborate a bit more on what you're trying to achieve and
why? What is the issue you're trying to solve?
-Val
Hi Val,
Thanks for the quick response.
I am referring to how virtual and physical memory work.
For more background, when a process is launched, it is allocated a virtual
address space. This virtual memory has a translation to the physical memory
on your computer. The pages a
Hi Thomas,
Looks like you can reproduce the issue with a unit test.
Could you please share it with us?
Thanks,
Mike.
Hi Rahul,
Could you please share a log from a node where the SQL failed? Or, even
better, share logs from all nodes, including client nodes.
Does YARN limit resources like CPU and memory for the Ignite instances, or
can each Ignite instance on the host see and use all CPUs?
Thanks,
Mike.
Val,
I would like to make one correction. Data could also be shared with Linux
shared memory (e.g., shm). It does not have to be through copy-on-write with
read-only mapped pages. A shared dataset in shared memory across different
processes also fits my use case.
Sincerely,
Umur
Hi Stan,
Thank you for the quick reply.
Let me clarify my use case: I want to have expiration for all regular
operations.
Along with that, I want to be able to read some or all entries without
refreshing TTLs, for example, for debugging.
Following your example, I create a view with expiration and
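For what it's worth, here is roughly how I understand that split (a sketch,
assuming the expiry is applied per-operation via withExpiryPolicy rather
than in the cache configuration):

// Regular operations go through a view that sets/refreshes TTLs.
IgniteCache<String, String> expiring = cache.withExpiryPolicy(
    new TouchedExpiryPolicy(new Duration(TimeUnit.MINUTES, 10)));
expiring.put("key", "value");

// Debug reads go through the base cache, which has no expiry policy
// attached, so TTLs are not refreshed.
String value = cache.get("key");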