Hi guys,
I have the following query which is slow. Below is the explain plan. Any clue
what's going wrong?
[WARNING][client-connector-#62][IgniteH2Indexing] Query execution is too long
[time=9649 ms, sql='SELECT
2018-10-31T17:09:25.071Z __Z0.ACCOUNTNUMBER __C0_0,
2018-10-31T17:09:25.071Z _
Hi Team,
We have implemented the Ignite key-value store inside Spark in embedded
mode. When we run the Spark job in static allocation mode it works fine, but in
dynamic allocation mode the job gets stuck while loading the data into the
Ignite cache. What happens in dynamic mode is that nodes get added whenever
ava
Hi Val,
Do you mean use yarn deployment ?
Thanks,
Ranjit
On Wed, Oct 18, 2017 at 10:54 PM, vkulichenko wrote:
> Hi Ranjit,
>
> That's a known problem with embedded mode. Spark can start and stop
> executor processes at any moment of time, and having embedded Ignite server
> nodes there mean
Hi Val,
Could you please confirm.
Thanks,
Ranjit
On Thu, Oct 19, 2017 at 6:24 PM, Ranjit Sahu wrote:
> Hi Val,
>
> Do you mean use yarn deployment ?
>
> Thanks,
> Ranjit
>
> On Wed, Oct 18, 2017 at 10:54 PM, vkulichenko <
> valentin.kuliche...@gmail.com> wrot
Hi Team,
I am getting the attached exception once in a while when starting the
Ignite cluster.
Even though no one else is running Ignite, I sometimes get "address already in
use".
I have specified a range of 10 for both the discovery and communication
ports.
Any idea what I am missing here?
My code bel
For both the discovery as well as the communication SPI, you mean?
On Mon, Oct 23, 2017 at 11:26 PM, slava.koptilin
wrote:
> Hi Ranjit,
>
> It seems that all ports in the range you provided are already bound.
> Perhaps your machine doesn't release ports fast enough.
> Please try to increase the port ra
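For reference, a minimal sketch of setting both port ranges programmatically (a hedged example; the ports are the usual Ignite defaults and the range of 10 mirrors the setup described above):

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi

val discoverySpi = new TcpDiscoverySpi()
discoverySpi.setLocalPort(47500)       // first discovery port to try
discoverySpi.setLocalPortRange(10)     // tries 47500..47509

val communicationSpi = new TcpCommunicationSpi()
communicationSpi.setLocalPort(47100)   // first communication port to try
communicationSpi.setLocalPortRange(10) // tries 47100..47109

val cfg = new IgniteConfiguration()
cfg.setDiscoverySpi(discoverySpi)
cfg.setCommunicationSpi(communicationSpi)

If several nodes can land on the same host, the range has to be at least as large as the number of nodes per host, otherwise the last ones to start will fail with "address already in use".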
Hi ,
Can anyone share their experience of using Ignite in YARN mode and the steps
to set it up on CDH?
Thanks,
Ranjit
Hi,
We are using the Ignite key-value store in Spark embedded mode. As the resources
are managed by YARN, sometimes when we request 30 executors, 20 of them
may land on one server, which means we are starting 20 Ignite server
nodes on one machine. Currently we have set the discovery port range as
Thanks Val. We are not using IgniteRDD; instead we are building the
Ignite cluster within Spark with custom code and using the key-value store.
We are using static IP discovery while doing so.
What we do is read the data in Avro from Hadoop, start the Ignite node
first in the driver, and
Hi ,
I have a 40-node cluster. We are using on-heap caching. I want to figure
out how much memory each cache node is consuming. I was looking at the
metrics where heap and non-heap usage is printed. Do those metrics give me
the correct info? Or are there different APIs in Ignite to print it?
Th
Hi Val,
What are the issues you have discovered with embedded-mode deployment? If we
lose nodes in Spark, the backup/replication should take care of those, is
what I am thinking.
There is an overhead of rebalancing, but how much is that compared to having
a standalone cluster, which we may not always
Hi Guys,
If I set the backup count to 3 and the rebalancing mode to NONE, do you think
there are any issues? I want to avoid the rebalancing of data when a node
crashes and a new one joins, which is slowing down loading the data into the
cache.
Thanks,
Ranjit
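For what it's worth, a minimal sketch of the cache configuration being asked about (a hedged illustration; the cache name is made up). Keep in mind that with rebalance mode NONE a node that joins later receives no data automatically, so the backups only cover copies created at load time:

import org.apache.ignite.cache.CacheRebalanceMode
import org.apache.ignite.configuration.CacheConfiguration

val cacheCfg = new CacheConfiguration[String, String]("companyCache")
cacheCfg.setBackups(3)                             // keep 3 backup copies of each partition
cacheCfg.setRebalanceMode(CacheRebalanceMode.NONE) // never move data on topology changes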
Hi Val,
Not always, but out of 10 runs we see the issue at least once. What's happening
is that when one node crashes/stops, a new node joins. The loading process
restarts, but what used to finish in a few minutes (3-5) goes to 2-3
hours.
Thanks,
Ranjit
On Wed, Jan 31, 2018 at 3:12 AM, vkulichenko
wrot
was thinking
because of rebalancing this is becoming slow. I can look at tuning the
rebalancing too. Let me know if you have any suggestions.
On Thu, Feb 1, 2018 at 1:54 PM, Evgenii Zhuravlev
wrote:
> Ranjit,
>
> How do you load data to the cache?
>
> Evgenii
>
> 2018-02-01 1
task re-starts,
> Looks like you run in embedded mode. To avoid too-frequent node stop
> events, you need to run Ignite in standalone mode; in this case, the node
> will keep running even if your task fails.
>
> Please let me know if I missed something.
>
> Evgenii
>
> 2018-0
Thanks Dmitry.
On Wed, Jan 24, 2018 at 8:27 PM, dkarachentsev
wrote:
> Hi Ranjit,
>
> Those metrics should be correct; you may also check [1], because Ignite
> keeps data off-heap anyway. But if on-heap is enabled, it caches entries in
> the Java heap.
>
> [1] https://apacheignite.readme.io/docs/memo
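As a side note, a small sketch of reading those per-node figures from the cluster metrics API (a hedged example; it assumes an already-started node referenced by `ignite`):

// `ignite` is an org.apache.ignite.Ignite instance started elsewhere.
val nodeMetrics = ignite.cluster().localNode().metrics()
println(s"Heap used:     ${nodeMetrics.getHeapMemoryUsed} bytes")
println(s"Non-heap used: ${nodeMetrics.getNonHeapMemoryUsed} bytes")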
What is the best way to set up a standalone Ignite cluster for Spark? I
think with standalone mode we need to deploy Ignite separately on each
worker node. Can you send me some reference which I can look at? If
we decide to go with standalone, can I still load data from my Spark app
may
Hi Guys,
We are trying to use the Ignite key-value store inside the Spark cluster. What
we are trying to do is:
1) Load the data into a data frame in Spark.
2) While doing transformations on the data frame, start the Ignite cache on
each executor node and load the data.
The issue we are seeing
..
println(companyResults)
companyResults.toVector
}
On Fri, Feb 10, 2017 at 11:00 AM, Jörn Franke wrote:
> Not sure I got the picture of your setup, but the Ignite cache should be
> started independently of the application and not within the application.
>
> Aside from t
Sorry for the long email.
Thanks,
Ranjit
On Fri, Feb 10, 2017 at 11:00 AM, Jörn Franke wrote:
> Not sure I got the picture of your setup, but the Ignite cache should be
> started independently of the application and not within the application.
>
> Aside from that, can you please ela
Hi Val,
Actually the idea was not to build something dedicated like a standalone
cluster for Ignite.
Do you foresee any issues if I run Ignite inside the executor nodes?
So when I start my Spark job, the Ignite cluster will consist of the
executor nodes and we will be querying again from the
Hi Val,
let me explain it again. What I am looking for is to build the cache for each
Spark application and destroy it when the Spark app completes.
Something like what you have with IgniteRDD in embedded mode. I can't use
IgniteRDD as we are getting into a nested-RDD situation with our legacy
code.
I ch
One more point to add: in my case the query will start from the executor node
and not the driver node.
On Tue, Feb 14, 2017 at 2:42 PM, Ranjit Sahu wrote:
> Hi Val,
>
> let me explain it again. What I am looking for is to build the cache for each
> Spark application and destroy it wh
Yep, I will change that so that we can load to basically all nodes. It
will end up again as my initial function: start and load on each executor.
One more thing I noticed is that when I connect as a client and load, it is
150%+ slower. Any pointers here?
We have a shared service in Java which is used fo
Hi,
We are trying to start the Ignite node on the Spark worker nodes. When I try to
start 10 nodes, a few start and a few do not, and they are in a hung state.
The log shows non-heap free -1%; the log is below. Any clue what's
happening and how to fix this?
17/02/16 13:04:54 INFO IgniteKernal%WCA:
Metrics for l
Hi Val,
This got resolved. What was happening is that, as we were trying to start the
nodes from the driver using a map function, all were starting together and
they were not able to find each other.
So what I did is start the Ignite node on the driver first and pass the
IP of the driver to the others so th
Hi,
I am having an issue making all nodes join one topology. What I am trying
to do is start one server node first, and pass the IP address of that to the
other nodes using the VM IP finder and start those.
val ipFinder = new TcpDiscoveryVmIpFinder()
ipFinder.setAddresses(addresses)
When I star
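A minimal sketch of the pattern described in the last two messages (the driver starts a server node first, then executors join by pointing the VM IP finder at the driver's address; `driverHost`, the helper name and the port range are illustrative, not from the original code):

import java.util.Collections
import org.apache.ignite.{Ignite, Ignition}
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

// Called on the driver first, then on each executor with the driver's host.
def startNode(driverHost: String): Ignite = {
  val ipFinder = new TcpDiscoveryVmIpFinder()
  // Every node points at the driver's discovery port range, so they all join one topology.
  ipFinder.setAddresses(Collections.singletonList(s"$driverHost:47500..47509"))

  val discoverySpi = new TcpDiscoverySpi()
  discoverySpi.setIpFinder(ipFinder)

  val cfg = new IgniteConfiguration()
  cfg.setDiscoverySpi(discoverySpi)
  Ignition.start(cfg)
}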
Hi Guys,
As per my understanding, as of today Ignite provides free-text search using a
Lucene index. Is there any plan to support a contains-phrase search? If not,
what is the best way to achieve this? I was trying the SQL LIKE search, but
it's very expensive.
Thanks,
Ranjit
Hi Val,
let me explain a little more.
For my use case I am building a cache.
EntityVO has a field called entityName (String). Suppose I have these
names here:
YAMAHA MOTOR CORP
TOYOTA MOTOR CORP
HONDA MOTOR CORP
YAMAHA MOTOR CORPORATION
With the TextQuery Ignite has, if I search for "YAMAHA
To use Lucene indexing we have to index the field with @QueryTextField, right?
Or the index type @QuerySqlField, does this also use Lucene?
On Fri, Feb 24, 2017 at 2:46 AM, vkulichenko
wrote:
> Ranjit,
>
> This particular search can even be done with SQL (where entityName like
> 'YAMAHA MOTOR%'). As long a
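For reference, a rough sketch of how the two annotations differ on a value class (class and field names are illustrative): @QueryTextField puts the field into the Lucene full-text index that TextQuery searches, while @QuerySqlField only exposes it to SQL (optionally with a sorted index) and does not use Lucene.

import scala.annotation.meta.field
import org.apache.ignite.cache.query.annotations.{QuerySqlField, QueryTextField}

class CompanyVO(
  // SQL column with a sorted index; usable in WHERE / LIKE, no Lucene involved.
  @(QuerySqlField @field)(index = true)
  val companyName: String,

  // Lucene full-text index; this is what TextQuery matches against.
  @(QueryTextField @field)
  val entityName: String
) extends Serializable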
Hi Val,
As per my requirement I have to do a SQL LIKE query with a %text% pattern. The
data I have is almost 45M+ records. When I try the SQL LIKE query I think it
does not use the index, and I am seeing query times of up to 10+ seconds. This
is in a cluster of 40+ nodes. Can anything be improved on this wi
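To make the trade-off concrete, a hedged sketch of the two LIKE variants (the cache and field names follow the earlier examples and are assumptions): a prefix pattern can use a sorted index on the field, whereas a leading-wildcard %text% pattern cannot and ends up scanning the rows, which is consistent with the timings above.

import org.apache.ignite.cache.query.SqlFieldsQuery

// Prefix match: a sorted index on companyName can be used.
val prefixQry = new SqlFieldsQuery(
  "select companyName from CompanyVO where companyName like ?").setArgs("YAMAHA MOTOR%")

// Contains match: the leading wildcard forces a scan over all rows.
val containsQry = new SqlFieldsQuery(
  "select companyName from CompanyVO where companyName like ?").setArgs("%MOTOR%")

val prefixRows = myIgniteCache.query(prefixQry).getAll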
I am getting the below exception on my executor node while doing a query.
This happens when I am using TextQuery.
My code while querying is like this:
val results = myIgniteCache.query(new TextQuery[String,
CompanyVO]("CompanyVO", companyName))
val companyResults = ListBuffer[CompanyVO]()
val ite
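Since the snippet is cut off, here is a hedged completion of how such a TextQuery cursor is usually consumed (variable names follow the snippet above; the try/finally handling is an addition):

import scala.collection.JavaConverters._
import scala.collection.mutable.ListBuffer
import org.apache.ignite.cache.query.TextQuery

val cursor = myIgniteCache.query(
  new TextQuery[String, CompanyVO](classOf[CompanyVO], companyName))
val companyResults = ListBuffer[CompanyVO]()
try {
  // getAll materializes the matching entries; each value is a cached CompanyVO.
  for (entry <- cursor.getAll.asScala)
    companyResults += entry.getValue
} finally {
  cursor.close() // release query resources on the server side
}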
Hi Val,
I think I figured out the problem there. Now I am getting a lot of failures
with query execution. Please find the error stack trace below.
Could you please let me know what is going wrong here?
Thanks,
Ranjit
17/03/01 04:14:00 INFO dao.CompanyDAO: input company name inside Contains
sear
Hi,
Is there a difference in SQL query performance if my cache object is written in
Java vs Scala? I am using annotations to index the fields.
Thanks,
Ranjit
Hi Team,
Is this issue fixed? If yes, in which version? Is there any workaround to
avoid this?
Thanks,
Ranjit
When will that be ?
On Tue, 9 May 2017 at 10:10 PM, Andrey Gura wrote:
> No, it isn't fixed yet. Should be fixed in Ignite 2.1 I hope.
>
> On Tue, May 9, 2017 at 6:52 PM, Ranjit Sahu wrote:
> > Hi Team,
> >
> > Is this issue fixed ? If yes on which versio
Setrakyan
wrote:
> It is not clear to me what this issue is. Ranjit, can you explain why
> this is critical to you?
>
> On Tue, May 9, 2017 at 9:55 AM, Ranjit Sahu wrote:
>
>> When will that be ?
>>
>> On Tue, 9 May 2017 at 10:10 PM, Andrey Gura wrote:
>>