implementation. Spark will try each class specified until one of them
returns the resource information for that resource. It tries the discovery
script last if none of the plugins return information for that resource.
(Since: 3.0.0)
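The lookup order described above (plugins first, in the order given; the discovery script as a last resort) can be sketched as follows. This is an illustrative model only, not Spark's actual ResourceDiscoveryPlugin interface; a "plugin" here is just a callable from a resource name to optional resource info:

```python
# Illustrative model (not Spark's real API) of the lookup order described
# above: try each plugin class in order, and fall back to the discovery
# script only if no plugin returns information for the resource.
def discover(plugins, script, resource):
    """plugins: list of callables, resource name -> info dict or None."""
    for plugin in plugins:
        info = plugin(resource)
        if info is not None:
            return info  # first plugin that knows the resource wins
    return script(resource)  # discovery script is tried last
```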
--
Best Regards,
Ayan Guha
> Or other alternative approaches can be done by reading
> HBase tables into an RDD and saving the RDD to Hive.
>
> Thanks.
>
>
> On Thu, Jan 5, 2017 at 2:02 AM, ayan guha wrote:
>
>> Hi Chetan
>>
>> What do you mean by incremental load from HBase? There
>>>>> *Approach 2:*
>>>>>
>>>>> Run a scheduled Spark job - read from HBase, do the transformations, and
>>>>> maintain a flag column at the HBase level.
>>>>>
>>>>> In both approaches above, I need to maintain column-level flags, such as
>>>>> 0 = default, 1 = sent, 2 = sent and acknowledged. So next time the producer
>>>>> will take another batch of 1000 rows where the flag is 0 or 1.
>>>>>
>>>>> I am looking for best practice approach with any distributed tool.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> - Chetan Khatri
>>>>>
>>>>
>>>>
>>>
>>
>
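The flag scheme described above (0 = default, 1 = sent, 2 = sent and acknowledged; batches drawn from rows with flag 0 or 1) can be sketched independently of HBase. A minimal Python model of the bookkeeping, with hypothetical names:

```python
# Hypothetical model of the flag-column bookkeeping from the thread above.
# Flags: 0 = default (unsent), 1 = sent, 2 = sent and acknowledged.
DEFAULT, SENT, ACKED = 0, 1, 2

def next_batch(rows, batch_size=1000):
    """Pick the next rows to (re)send: flag 0 or 1, up to batch_size."""
    return [key for key, flag in rows.items() if flag in (DEFAULT, SENT)][:batch_size]

def mark_sent(rows, keys):
    """Producer has sent these rows: 0 -> 1."""
    for key in keys:
        if rows[key] == DEFAULT:
            rows[key] = SENT

def mark_acked(rows, keys):
    """Consumer acknowledged these rows: 1 -> 2."""
    for key in keys:
        if rows[key] == SENT:
            rows[key] = ACKED
```

Once every row reaches flag 2, `next_batch` returns nothing, which is exactly the stopping condition the scheduled job needs.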
--
Best Regards,
Ayan Guha
> ...FOLLOWING  // no need to specify
>
> If we go with option 2, we should throw exceptions if users specify
> multiple from's or to's. A variant of option 2 is to require explicit
> specification of begin/end even in the case of an unbounded boundary, e.g.:
>
> Window.rowsFromBeginning().rowsTo(-3)
> or
> Window.rowsFromUnboundedPreceding().rowsTo(-3)
>
>
>
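For illustration, the builder-style variant quoted above (explicit begin and end, with an error when a boundary is specified twice) could look roughly like this. The names mirror the proposal in the quoted email and are not an actual Spark API:

```python
# Sketch of the explicit from/to window-frame builder discussed above.
# Not Spark's real API; names follow the proposal in the quoted email.
UNBOUNDED = object()  # sentinel for an unbounded preceding boundary

class Window:
    def __init__(self):
        self._from = None
        self._to = None

    @classmethod
    def rows_from(cls, start):
        w = cls()
        w._from = start
        return w

    @classmethod
    def rows_from_unbounded_preceding(cls):
        # explicit spelling of the unbounded case, per the variant above
        return cls.rows_from(UNBOUNDED)

    def rows_to(self, end):
        if self._to is not None:
            # reject duplicate boundary specification, as suggested above
            raise ValueError("frame end specified more than once")
        self._to = end
        return self
```

Making both boundaries explicit keeps the frame unambiguous at the cost of slightly longer call chains, which is the trade-off the quoted options are weighing.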
--
Best Regards,
Ayan Guha