Why would you use Ignite, a horizontally scalable, in-memory database, to 
store only 100 records?

> On 19 Jul 2023, at 04:37, Arunima Barik <arunimabari...@gmail.com> wrote:
> 
> I have a huge dataset and I am keeping a few (say 100) rows in Ignite; the 
> entire dataset remains in Spark.
> 
> When I query Ignite, I want to write an SQL query that performs the same operation. 
> 
> Does option 1 still hold good? 
> 
> On Tue, 18 Jul, 2023, 10:40 pm Stephen Darlington, 
> <stephen.darling...@gridgain.com <mailto:stephen.darling...@gridgain.com>> 
> wrote:
>> “Correct” is hard to quantify without knowing your use case, but option 1 is 
>> probably what you want. Spark pushes down SQL execution to Ignite, so you 
>> get all the distribution, use of indexes, etc. 
>> 
>> > On 14 Jul 2023, at 16:12, Arunima Barik <arunimabari...@gmail.com 
>> > <mailto:arunimabari...@gmail.com>> wrote:
>> > 
>> > Hello team
>> > 
>> > What is the correct way out of these? 
>> > 
>> > 1. Write a spark dataframe to ignite
>> > Read the same back and perform spark.sql() on that
>> > 
>> > 2. Write the spark dataframe to ignite
>> > Connect to server via a thin client
>> > Perform client.sql() 
>> > 
>> > Regards
>> > Arunima
>> 
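For anyone finding this thread later, option 1 looks roughly like the sketch
below. This is a hedged example, not a tested setup: it assumes an Ignite node
is already running, the ignite-spark integration module is on the Spark
classpath, and the config file path and table name are placeholders.

```python
# Sketch of option 1: read an Ignite table through the Ignite Spark data
# source, then query it with spark.sql(). Filters are pushed down to
# Ignite, so its distribution and indexes are used.
# Assumptions (placeholders): the config file path and table name below.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ignite-pushdown-sketch").getOrCreate()

df = (spark.read
      .format("ignite")
      .option("config", "/path/to/ignite-config.xml")  # placeholder path
      .option("table", "MY_TABLE")                     # placeholder table
      .load())

df.createOrReplaceTempView("my_table")
spark.sql("SELECT * FROM my_table WHERE id < 100").show()
```

Option 2 would instead connect a thin client (e.g. pyignite's
`Client.sql()`) directly to the server, bypassing Spark entirely; as noted
above, that forgoes Spark-side planning, so option 1 is usually preferable
when the query originates in Spark.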
