Hi Kunal,
It won't. Hive is not built for fast web access, but rather for heavy
analytics.
Working with SQL Server will give you very good results; when you start
working with millions of records and hit a limit, try exploring NoSQL
solutions like Cassandra, Couchbase, etc.
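If it helps, here is a minimal JDBC sketch of the kind of indexed point
lookup SQL Server serves well at web latencies (the connection string,
the "users" table, and the credentials are made-up placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UserLookup {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; adjust host, db, credentials.
        String url = "jdbc:sqlserver://localhost:1433;"
                + "databaseName=appdb;user=app;password=secret";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name, email FROM users WHERE id = ?")) {
            ps.setLong(1, 42L); // indexed point lookup, millisecond latency
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println(rs.getString("name") + " <"
                            + rs.getString("email") + ">");
                }
            }
        }
    }
}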
Noam.
We had a case of retrieving a record bigger than the GC limit, for
example a column of Array or Map type with 1M cells.
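A back-of-the-envelope sketch of why a single cell like that hurts the
heap (the ~64 bytes/entry figure is a rough JVM assumption, not a Hive
measurement):

import java.util.ArrayList;
import java.util.List;

public class BigCellEstimate {
    public static void main(String[] args) {
        int cells = 1_000_000;
        // Rough per-entry cost for a List<String> cell: object header,
        // reference, and char storage for a short string.
        long perEntryBytes = 64;
        System.out.printf("~%d MB for one %d-cell column value%n",
                cells * perEntryBytes / (1024 * 1024), cells);

        // Materializing it makes the cost concrete
        // (may need -Xmx raised to run).
        List<String> cell = new ArrayList<>(cells);
        for (int i = 0; i < cells; i++) {
            cell.add("value-" + i);
        }
        System.out.println("materialized entries: " + cell.size());
    }
}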
On Wed, Aug 19, 2015 at 9:35 PM, Sanjeev Verma wrote:
> Can somebody give me some pointers to look into?
Hi,
Have you looked at the counters on the Hadoop side? It's possible you
are dealing with a bad join that multiplies rows; if you see a huge
number of input/output records in the map/reduce phases that keeps
increasing, that's probably the case.
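For example, with the mapreduce API you can compare the input/output
record counters once the job finishes; a minimal sketch (mapper,
reducer, and input/output setup elided, names illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

public class CounterCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "counter-check");
        // ... set mapper/reducer/input/output paths here ...
        job.waitForCompletion(true);

        Counters counters = job.getCounters();
        long mapIn = counters
                .findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
        long mapOut = counters
                .findCounter(TaskCounter.MAP_OUTPUT_RECORDS).getValue();
        long redOut = counters
                .findCounter(TaskCounter.REDUCE_OUTPUT_RECORDS).getValue();

        // A bad join shows up as output records vastly exceeding input.
        System.out.printf("map in=%d out=%d reduce out=%d (ratio %.1fx)%n",
                mapIn, mapOut, redOut,
                mapIn == 0 ? 0.0 : (double) mapOut / mapIn);
    }
}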
Another thing I would try is to divide the job into smaller stages.