BTW,
if I disable swap altogether, what will happen in the above situation?
Currently it starts swapping at 90%.
Sincerely
On Tue, Oct 8, 2013 at 7:09 AM, prakash kadel wrote:
> thanks,
>
> yup, it seems so. I have 48 GB of memory. I see it swaps at that point.
>
> btw, why
> ...14.30 sys=3.74, real=88.77 secs]
>
> Is suspicious. Are you swapping?
>
> J-D
>
>
> On Mon, Oct 7, 2013 at 8:34 AM, prakash kadel wrote:
>
> > Also,
> > why is the CMS not kicking in early? I have set -XX:+UseCMSInitiatingOccupancyOnly.
Hello,
I am getting this YADE all the time
HBASE_HEAPSIZE=8000
Settings: -ea -XX:+UseConcMarkSweepGC -XX:MaxGCPauseMillis=200
-XX:+HeapDumpOnOutOfMemoryError -XX:+CMSIncrementalMode -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly
-XX:NewSize=256m -XX:Ma
Also,
why is the CMS not kicking in early? I have set -XX:+UseCMSInitiatingOccupancyOnly.
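For reference, a quick way to check from inside the JVM whether the concurrent collector has run at all (just a sketch using the standard java.lang.management API; the collector names in the comment are what ParNew/CMS usually report, which I have not verified here):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCheck {
    // Print one line per collector so it is visible whether the old-gen (CMS)
    // collector has started at all and how much time it has spent.
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // With -XX:+UseParNewGC and CMS the beans are usually named
            // "ParNew" and "ConcurrentMarkSweep".
            System.out.println(gc.getName()
                    + " collections=" + gc.getCollectionCount()
                    + " timeMs=" + gc.getCollectionTime());
        }
    }
}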
Sincerely,
Prakash
On Tue, Oct 8, 2013 at 12:32 AM, prakash kadel wrote:
> Hello,
> I am getting this YADE all the time
>
> HBASE_HEAPSIZE=8000
>
> Settings: -ea -XX:+
hi everyone,
I am quite new to hbase and java. I have a few questions.
1. on the web ui for hbase I have the following entry in the region server list:
mining,60020,13711358624   Fri Jun 14 00:04:22 GMT 2013   requestsPerSecond=0,
numberOfOnlineRegions=106, usedHeapMB=5577, maxHeapMB=7933
when the hbase is
Thank you very much.
I will try Snappy compression with data_block_encoding.
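Roughly what I have in mind (a sketch against the 0.94-era client API; the "docs" table and "d" family names are placeholders for my real schema, and I still need to confirm the snappy native libraries are installed on the region servers):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.io.hfile.Compression;

public class CreateDocsTable {
    public static void main(String[] args) throws Exception {
        HColumnDescriptor family = new HColumnDescriptor("d");
        family.setCompressionType(Compression.Algorithm.SNAPPY);   // block compression on disk
        family.setDataBlockEncoding(DataBlockEncoding.PREFIX);     // prefix-encode keys inside blocks
        HTableDescriptor table = new HTableDescriptor("docs");
        table.addFamily(family);
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        admin.createTable(table);
    }
}

For the existing table I would apply the same family settings with an alter instead of recreating it.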
On Wed, Apr 3, 2013 at 11:21 PM, Kevin O'dell wrote:
> Prakash,
>
> Yes, I would recommend Snappy Compression.
>
> On Wed, Apr 3, 2013 at 10:18 AM, Prakash Kadel
> wrote:
> >
Thanks,
is there any specific compression that is recommended for the use case I
have?
Since my values are all null, will compression help?
I am thinking of using PREFIX data_block_encoding.
Sincerely,
Prakash Kadel
On Apr 3, 2013, at 10:55 PM, Ted Yu wrote:
> You should use d
i?
I want to have faster reads.
Please suggest.
Sincerely,
Prakash Kadel
hi everyone,
when I launch my mapreduce jobs to increment counters in hbase, I sometimes
have maps with multiple attempts like:
attempt_201303251722_0161_m_74_0
attempt_201303251722_0161_m_74_1
if there are multiple attempts running and the first one completes
successfully,
different docs and do the increments.
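One thing I plan to try, on the guess that the duplicate attempts come from speculative execution (the property names below are the old mapred-era ones):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class IncrementJobSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Turn off speculative execution so two attempts of the same map are
        // never running in parallel; HBase increments are not idempotent, so a
        // duplicate attempt can double-count. (A retry after a real failure can
        // still re-run a task, so this alone is not a complete fix.)
        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        Job job = new Job(conf, "doc-counter-increments");   // job name is made up
        // ... mapper, input and output setup as in the existing job ...
        job.waitForCompletion(true);
    }
}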
Sincerely,
Prakash Kadel
On Feb 20, 2013, at 11:14 PM, Michel Segel wrote:
>
> What happens when you have a poem like Mary had a little lamb?
>
> Did you turn off the WAL on both table inserts, or just the index?
>
> If you want t
and
>> iterate on the existing array. Write your own Split without going into
>> using String functions which goes through encoding (expensive). Just find
>> your delimiter by byte comparison.
>> 5. Enable BloomFilters on doc table. It should speed up the checkAndPut.
> As an example look at the poem/rhyme 'Mary had a little lamb'.
> Then check your word count.
>
> Sent from a remote device. Please excuse any typos...
>
> Mike Segel
>
> On Feb 18, 2013, at 7:21 AM, prakash kadel wrote:
>
>> Thank you guys for your replies
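Regarding the byte-comparison split suggested above, this is roughly how I understand it (a sketch; the single-byte delimiter is an assumption on my part):

import java.util.ArrayList;
import java.util.List;

public class ByteSplit {
    // Split a raw value on a single-byte delimiter without ever decoding it to
    // a String, so no per-token character-encoding work is done.
    public static List<byte[]> split(byte[] value, byte delimiter) {
        List<byte[]> tokens = new ArrayList<byte[]>();
        int start = 0;
        for (int i = 0; i < value.length; i++) {
            if (value[i] == delimiter) {
                tokens.add(copy(value, start, i));
                start = i + 1;
            }
        }
        tokens.add(copy(value, start, value.length));   // trailing token
        return tokens;
    }

    private static byte[] copy(byte[] src, int from, int to) {
        byte[] out = new byte[to - from];
        System.arraycopy(src, from, out, 0, out.length);
        return out;
    }
}

For point 5, I take it that means turning on a ROW (or ROWCOL) bloom filter on the doc table's column family, so the read side of checkAndPut can skip store files that cannot contain the row.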
        table_idx.increment(inc);
      }
    }
  } finally {
    table_idx.close();   // hand the table back to the pool
  }
  return true;
}

// coprocessor lifecycle hook (assuming the standard CoprocessorEnvironment parameter)
public void stop(CoprocessorEnvironment env) {
  pool.close();
}
I am a newbie to HBase. I am not sure this is the way
It's a local read. I just check the last param of postCheckAndPut, which indicates whether
the Put succeeded. In case the Put succeeds, I insert a row into another table.
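The relevant part of what I do looks roughly like this (a simplified sketch; the index table name, column names and row-key scheme are placeholders, 'pool' is an HTablePool created in start(), and the method is called from my postCheckAndPut hook with the hook's last boolean parameter):

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class IndexOnSuccess {
    // Insert one row into the index table, but only if the checkAndPut on the
    // doc table actually went through.
    public static void indexIfPutSucceeded(HTablePool pool, byte[] docRow,
                                           boolean success) throws IOException {
        if (!success) {
            return;                                   // nothing was written, so no index row
        }
        HTableInterface idx = pool.getTable("doc_index");
        try {
            Put p = new Put(docRow);                  // index row key: same as doc row (simplified)
            p.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes(1L));
            idx.put(p);
        } finally {
            idx.close();                              // hand the table back to the pool
        }
    }
}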
Sincerely,
Prakash Kadel
On Feb 18, 2013, at 2:52 PM, Wei Tan wrote:
> Is your CheckAndPut involving a local or remote READ? Due to
is there a way to do async writes with coprocessors triggered by Put operations?
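To illustrate what I mean, roughly (only a sketch of the idea, not something I have running; the single background thread and the index table name are assumptions):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;

public class AsyncIndexWriter {
    private final HTablePool pool;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public AsyncIndexWriter(HTablePool pool) {
        this.pool = pool;
    }

    // Hand the index Put off to a background thread so the coprocessor hook
    // (e.g. postCheckAndPut) can return without waiting on the second table.
    public void submit(final Put indexPut) {
        executor.submit(new Runnable() {
            public void run() {
                HTableInterface idx = pool.getTable("doc_index");   // placeholder name
                try {
                    idx.put(indexPut);
                } catch (Exception e) {
                    // a real version would need retry / error handling here
                } finally {
                    try { idx.close(); } catch (Exception ignored) { }
                }
            }
        });
    }

    public void shutdown() {
        executor.shutdown();   // would be called from the coprocessor's stop()
    }
}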
thanks
Sincerely,
Prakash Kadel
On Feb 18, 2013, at 10:31 AM, Michael Segel wrote:
> Hmmm. Can you have async writes using a coprocessor?
>
>
> On Feb 17, 2013, at 7:17 PM, lars hofhansl wrote:
one more question.
even if the coprocessors are making insertions to a different region, since I use
"postCheckAndPut", shouldn't the performance slowdown be small?
thanks
Sincerely,
Prakash Kadel
On Feb 18, 2013, at 10:17 AM, lars hofhansl wrote:
> Index maintenance will alw
coprocessor was to make the insertion code cleaner
and the index creation more efficient.
Sincerely,
Prakash Kadel
On Feb 18, 2013, at 10:17 AM, lars hofhansl wrote:
> Index maintenance will always be slower. An interesting comparison would be
> to also update your indexes from the M/R and see w
thank you lars,
That is my guess too. I am confused; isn't that something that cannot be
controlled? Is this approach of creating some kind of index wrong?
Sincerely,
Prakash Kadel
On Feb 18, 2013, at 10:07 AM, lars hofhansl wrote:
> Presumably the coprocessor issues Puts to another reg
Forgot to mention. I am using 0.92.
Sincerely,
Prakash
On Feb 18, 2013, at 9:48 AM, Prakash Kadel wrote:
> hi,
> I am trying to insert a few million documents into hbase with mapreduce. To
> enable quick search of the docs I want to have some indexes, so I tried to use
> the coprocesso
hi,
I am trying to insert a few million documents into hbase with mapreduce. To
enable quick search of the docs I want to have some indexes, so I tried to use the
coprocessors, but they are slowing down my inserts. Aren't the coprocessors
supposed to not increase the latency?
my settings:
3 regio