> previously loaded this coprocessor via the shell, without having
> to restart HBase. However, after some experimentation I have not found any
> way to reload a new version of the coprocessor without restarting HBase.
> Is there presently any mechanism for doing so?
>
> Regards,
> A
> cache combined, as this does not leave much room for
> HBase itself. What version is this on?
> On Aug 19, 2012 7:51 AM, "Sever Fundatureanu"
> wrote:
>
>> Hello,
>>
>> I have a RegionServer which OOMEd. The memstore is set to 0.4 and the
>> blockCache to [...]
I don't see why this would cause an OOME, if
the memstore is empty. Can someone please explain this?
You can find the last lines of .out and .log files here:
http://pastebin.com/NjV7Wrjj
http://pastebin.com/Px4McVDt
Thank you in advance,
--
Sever Fundatureanu
Vrije Universiteit Amsterdam
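[Editor's note: a sketch of the arithmetic behind the reply above. This is illustrative Python, not HBase's validation code; HBase of this era refuses to start when the memstore and block-cache fractions together exceed roughly 0.8 of the heap, and the 0.35 value below is a hypothetical block-cache setting, since the thread only states memstore = 0.4.]

```python
# Sanity-check heap fractions the way HBase (0.9x era) does: the memstore
# upper limit plus the block cache fraction must leave headroom (~20% of
# heap) for everything else the RegionServer allocates.
def check_heap_fractions(memstore_frac, block_cache_frac, ceiling=0.8):
    """Return True if the two caches leave enough heap for HBase itself."""
    return memstore_frac + block_cache_frac <= ceiling

print(check_heap_fractions(0.4, 0.35))  # True: within the ceiling
print(check_heap_fractions(0.4, 0.50))  # False: no room left for HBase itself
```

If the check fails, either fraction has to come down before heap sizing alone can explain or prevent an OOME.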
fs.default.name
hdfs://fs0.cm.cluster:8020
As you can see they point to the same namenode. So I really don't
understand why the above check fails.
Regards,
Sever
On Fri, Jul 27, 2012 at 1:17 PM, Sever Fundatureanu
wrote:
> Hi Anil,
>
> I am using HBase 0.94.0 with Hadoop 1.0.
try to check
> the same.
>
> Best Regards,
> Anil
>
> On Jul 26, 2012, at 3:46 PM, Sever Fundatureanu
> wrote:
>
On Thu, Jul 26, 2012 at 6:47 PM, Sateesh Lakkarsu wrote:
>>
>>
>> For the bulkloading process, the HBase documentation mentions that in
>> a 2nd stage "the appropriate Region Server adopts the HFile, moving it
>> into its storage directory and making the data available to clients."
>> But from my
Thank you for the responses. I am still not sure on the answer to my
second question.
>> 2. How is fault tolerance handled for an RS with a coprocessor loaded? Will
>> other servers load that coprocessor if the original RS crashes? If yes,
>> will the HLog be replayed with the coprocessor already loaded?
Hello,
For the bulkloading process, the HBase documentation mentions that in
a 2nd stage "the appropriate Region Server adopts the HFile, moving it
into its storage directory and making the data available to clients."
But from my experience the files also remain in the original location
from where they were loaded.
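[Editor's note: in the normal case, where the staging directory and the table live on the same filesystem, that second stage is (as far as the 0.9x code paths suggest) an HDFS rename rather than a copy, so the file should disappear from its source directory; files left behind usually point at a copy path (e.g. crossing filesystems) or retried loads. A local-filesystem sketch of the move semantics, plain Python rather than HBase code:]

```python
import os
import tempfile

# Bulk-load "adoption" as a rename (move), not a copy: after the move the
# file exists only under the store directory. Local-FS analogy only.
src_dir = tempfile.mkdtemp()     # stands in for the bulk-load staging dir
store_dir = tempfile.mkdtemp()   # stands in for the region's store dir

hfile = os.path.join(src_dir, "hfile_0")
with open(hfile, "wb") as f:
    f.write(b"fake-hfile-bytes")

os.rename(hfile, os.path.join(store_dir, "hfile_0"))  # atomic move on one FS

print(os.path.exists(hfile))                               # False: gone from source
print(os.path.exists(os.path.join(store_dir, "hfile_0")))  # True: adopted by the store
```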
> > > system. It needs refinement, of course,
> > > but I have my own work to do, so I decided to open it for everyone to
> > > access and modify; any improvements or ideas would be great.
> > >
> > > https://github.com/danix800/hbase-indexed
> > >
> > > --
> > >
> > > Best Regards!
> > >
> > > Fei Ding
> > > fding.chu...@gmail.com
> > >
> >
>
>
>
> --
>
> Best Regards!
>
> Fei Ding
> fding.chu...@gmail.com
>
--
Sever Fundatureanu
Vrije Universiteit Amsterdam
E-mail: fundatureanu.se...@gmail.com
wrote:
> On Sat, Jul 7, 2012 at 2:58 AM, Sever Fundatureanu <
> fundatureanu.se...@gmail.com> wrote:
>
> > Also does anybody know what is the flow in the system when a coprocessor
> > from one RS make a Put call with a row key which falls on another RS?
> I.e.
> > do th
already loaded?
Thanks in advance for the responses.
--
Sever Fundatureanu
Vrije Universiteit Amsterdam
E-mail: fundatureanu.se...@gmail.com
nal Exception, and then throw an IOException?
> (as indicated by the method signature).
> What happens exactly on the RegionServer when an IOException is thrown from
> the Coprocessor?
>
> Thank you in advance.
>
> Martin Alig
>
--
Sever Fundatureanu
Vrije Universiteit Amsterdam
E-mail: fundatureanu.se...@gmail.com
generating HFiles). What are you hoping to gain from a
> coprocessor implementation vs the 6 MR jobs? Have you pre-split your
> tables? Can the RegionServer(s) handle all the concurrent mappers?
>
> -n
>
> On Mon, Jul 2, 2012 at 11:58 AM, Sever Fundatureanu <
> fundatureanu.s
leftovers from HDFS blocks not being fully occupied.
Thanks,
Sever
On Tue, Jul 3, 2012 at 2:29 PM, Stack wrote:
> On Tue, Jul 3, 2012 at 2:17 PM, Sever Fundatureanu
> wrote:
> > Right, forgot about the timestamps. These should be a long value each,
> > so 8 bytes. The ve
1:48 PM, Michael Segel wrote:
> Timestamps on the cells themselves?
> # Versions?
>
> On Jul 3, 2012, at 4:54 AM, Sever Fundatureanu wrote:
>
> > Hello,
> >
> > I have a simple table with 1.5 billion rows and one column family 'F'.
> > Each row
takes up ~82GB. This is after
running major compactions a couple of times. Can someone explain where this
difference might come from?
Regards,
--
Sever Fundatureanu
Vrije Universiteit Amsterdam
E-mail: fundatureanu.se...@gmail.com
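[Editor's note: a rough size check for the question above. This is a sketch that assumes the uncompressed 0.9x-era KeyValue on-disk layout, empty qualifiers and values, and a single version per cell; none of those assumptions are confirmed by the thread.]

```python
# Back-of-the-envelope KeyValue accounting for 1.5 billion rows with
# 32-byte row keys (four 8-byte ids) and a 1-byte family 'F'.
ROWS      = 1_500_000_000
ROW_KEY   = 4 * 8   # four 8-byte ids
FAMILY    = 1       # 'F'
QUALIFIER = 0       # assumed empty
VALUE     = 0       # assumed empty

# KeyValue layout: 4B key length + 4B value length + 2B row length + row
#                  + 1B family length + family + qualifier + 8B timestamp
#                  + 1B key type + value
per_cell = 4 + 4 + 2 + ROW_KEY + 1 + FAMILY + QUALIFIER + 8 + 1 + VALUE
total_gib = ROWS * per_cell / 2**30

print(per_cell)              # 53 bytes per cell
print(round(total_gib, 1))   # 74.0 GiB for the raw cells alone
```

Under those assumptions the raw cells account for roughly 74 GiB before HFile block indexes and metadata, so ~82 GB on disk is not wildly out of line; per-cell overhead, not slack space, dominates when values are tiny.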
e with 120s, so I really recommend the first option.
>
> JM
>
> 2012/7/2, Sever Fundatureanu :
> > Can someone please help me with this?
> >
> > Thanks,
> > Sever
> >
> > On Tue, Jun 26, 2012 at 8:14 PM, Sever Fundatureanu <
> > fundatureanu.se.
Can someone please help me with this?
Thanks,
Sever
On Tue, Jun 26, 2012 at 8:14 PM, Sever Fundatureanu <
fundatureanu.se...@gmail.com> wrote:
> My keys are built of 4 8-byte Ids. I am currently doing the load with MR
> but I get a timeout when doing the loadIncrementalFiles call:
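[Editor's note: a sketch of the key layout being described. The `make_row_key` helper is hypothetical; the thread only says the keys are four 8-byte ids. Packing big-endian keeps HBase's lexicographic byte ordering consistent with the numeric ordering of the ids.]

```python
import struct

# Pack four 64-bit ids into one 32-byte row key. Big-endian (">") means
# comparing the raw bytes gives the same order as comparing the numbers,
# which is what HBase's byte-wise key comparison needs.
def make_row_key(id1, id2, id3, id4):
    return struct.pack(">QQQQ", id1, id2, id3, id4)

k1 = make_row_key(1, 2, 3, 4)
k2 = make_row_key(1, 2, 3, 5)
print(len(k1))   # 32
print(k1 < k2)   # True: byte order matches numeric order
```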
because we know Region Server 1 will have
> his region. We will know that by using his id to figure that out upfront.
> Just trying to minimize the latency further. ( Of course I understand that
> if nodes are down, there will be ways to route the traffic to another host
> to hand
umn-family_performance_options
>
> ~ Minh
>
> On Wed, Jun 27, 2012 at 7:18 AM, Sever Fundatureanu
> wrote:
> > Hello,
> >
> > I initially created a table without the IN_MEMORY option enabled and
> loaded
> > some data into it. Then I disabled it, modified
on servers has
increased. However for some queries I am still getting faster responses
only the 2nd time, as if I'm hitting the cache. Can someone tell me when
the table content gets transferred into RAM if the IN_MEMORY option is
enabled as above?
Thank you,
--
Sever Fundatureanu
Vrije Universiteit Amsterdam
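[Editor's note: the symptom described above, fast only the second time, matches how IN_MEMORY behaves: blocks enter the block cache lazily, on first read, and the flag only raises their eviction priority; nothing is preloaded when the table is enabled. A toy model of that lazy behaviour, not HBase's LruBlockCache implementation:]

```python
# Blocks are cached on first read; IN_MEMORY would only change how long
# they survive eviction, not when they are loaded.
cache = {}

def read_block(block_id, load_from_disk):
    """Return (value, hit): a miss loads from disk and caches the block."""
    if block_id in cache:
        return cache[block_id], True
    value = load_from_disk(block_id)
    cache[block_id] = value
    return value, False

_, hit1 = read_block("b0", lambda b: f"data-{b}")  # first query: cache miss
_, hit2 = read_block("b0", lambda b: f"data-{b}")  # second query: cache hit
print(hit1, hit2)  # False True
```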
period?
Thank you,
On Tue, Jun 26, 2012 at 7:05 PM, Andrew Purtell wrote:
> On Tue, Jun 26, 2012 at 9:56 AM, Sever Fundatureanu
> wrote:
> > I have to bulkload 6 tables which contain the same information but with a
> > different order to cover all possible access patterns.
6
separate jobs?
Regards,
--
Sever Fundatureanu
Vrije Universiteit Amsterdam