MySQL and PostgreSQL scale to millions of rows. Spark, or any distributed/clustered computing environment, would be inefficient for the data size you mention, because of the overhead of coordinating processes, moving data across the network, and so on.
On Mon, Jul 13, 2015 at 5:34 PM, Sandeep Giri <sand...@knowbigdata.com> wrote:
> Even for 2L records, MySQL will be better.
>
> Regards,
> Sandeep Giri,
> +1-253-397-1945 (US)
> +91-953-899-8962 (IN)
> www.KnowBigData.com
>
> On Fri, Jul 10, 2015 at 9:54 AM, vinod kumar <vinodsachin...@gmail.com> wrote:
>> For records below 50,000, SQL is better, right?
>>
>> On Fri, Jul 10, 2015 at 12:18 AM, ayan guha <guha.a...@gmail.com> wrote:
>>> With your load, either should be fine.
>>>
>>> I would suggest you run a couple of quick prototypes.
>>>
>>> Best,
>>> Ayan
>>>
>>> On Fri, Jul 10, 2015 at 2:06 PM, vinod kumar <vinodsachin...@gmail.com> wrote:
>>>> Ayan,
>>>>
>>>> I want to process data of roughly 50,000 to 2L (200,000) records, in flat files.
>>>>
>>>> Is there any scale threshold for deciding which technology is best: SQL or Spark?
>>>>
>>>> On Thu, Jul 9, 2015 at 9:40 AM, ayan guha <guha.a...@gmail.com> wrote:
>>>>> It depends on the workload. How much data would you want to process?
>>>>>
>>>>> On 9 Jul 2015 22:28, "vinod kumar" <vinodsachin...@gmail.com> wrote:
>>>>>> Hi Everyone,
>>>>>>
>>>>>> I am new to Spark.
>>>>>>
>>>>>> I am using SQL in my application to handle its data, and I am now thinking of moving to Spark.
>>>>>>
>>>>>> Is Spark's data processing speed better than SQL Server's?
>>>>>>
>>>>>> Thanks,
>>>>>> Vinod
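To make the "run a quick prototype" suggestion concrete: below is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a single-node SQL engine (an assumption for illustration; the thread itself discusses MySQL and SQL Server). It loads 2L (200,000) rows and times a full-table aggregation:

```python
import sqlite3
import time

# In-memory single-node SQL database standing in for MySQL / SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val REAL)")

# Load 2L (200,000) rows, the upper end of the size discussed in the thread.
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    ((i, i * 0.5) for i in range(200000)),
)

# Time a full-table aggregation over all 200,000 rows.
start = time.perf_counter()
total = conn.execute("SELECT SUM(val) FROM t").fetchone()[0]
elapsed = time.perf_counter() - start

print(f"sum = {total}, query took {elapsed:.4f}s")
```

On a typical laptop a scan-and-aggregate over 200,000 rows finishes in milliseconds, far below the per-job startup and coordination overhead of a Spark cluster, which is consistent with the advice given at the top of the thread.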