I have set dfs.datanode.max.xcievers=4096 and have swapping turned off.
Regionserver Heap = 24 GB
Datanode Heap = 1 GB
On Fri, May 11, 2012 at 9:55 AM, sulabh choudhury wrote:
> I have spent a lot of time trying to find a solution to this issue, but
> have had no luck. I think this is because…
While monitoring JMX attributes via JConsole I observed that there are some
VM arguments being reported multiple times, see example below.
VM arguments: -Xmx32768m -ea -XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
-ea -XX:+UseConcMarkSweepGC…
You have to define at least one column family name when creating a table:
hbase(main):010:0> create 'test', {NAME => 'columnFamily'}
On Sun, May 1, 2011 at 12:40 AM, Priya A wrote:
>
> I'm new to HBase.
>
> There are some errors as follows while creating an HBase table and trying
> to disp…
Cool, that resolved the issue.
On Thu, Apr 28, 2011 at 4:32 PM, Stack wrote:
> You need to fix "Could not resolve the DNS name of db2.dev.abc.net:60020"
> St.Ack
>
> On Thu, Apr 28, 2011 at 3:46 PM, sulabh choudhury
> wrote:
> > Thanks for the response.
failed on local exception: java.io.IOException:
> > > Connection reset by peer
> > >   at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
> > >   at org.apache.hadoop.ipc.Client.call(Client.java:743)
> > >   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> > >   at $Proxy4.getProtocolVersion(Unknown Source)
> > >   at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> > >   at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
> > >   at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
> > >   ...etc
> > >
> > > I am pretty sure I am missing some configuration pieces needed to
> > > achieve the same. What could those be?
> > >
> >
> >
> >
> > --
> > Harsh J
> >
>
--
Thanks and Regards,
Sulabh Choudhury
I am trying to run an M/R job remotely against an HBase table.
I have added conf.set("fs.default.name","hdfs://10.0.0.3:6") to the
code, so now it does reach the cluster, where I see the error:
WARN org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 6:
readAndProcess threw exception j…
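For a remote client, setting fs.default.name alone is usually not enough: the client also has to know where HBase's ZooKeeper quorum lives, or it fails with connection errors like the ones in this thread. A minimal sketch of the client-side hbase-site.xml, where the host 10.0.0.3 and the default ZooKeeper port 2181 are assumptions for illustration, not values confirmed by the thread:

```xml
<!-- Hedged sketch: client-side hbase-site.xml for a remote M/R job.
     The host and port below are assumed defaults, not from this thread. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.0.0.3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

The client discovers the master and region servers through ZooKeeper, so a wrong or missing quorum setting typically shows up as ConnectionLoss or IPC-level exceptions rather than a clear "wrong address" message.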
…alias my_data. Backend error: Unable to recreate
> exception from backend error:
> org.apache.hadoop.hbase.ZooKeeperConnectionException:
> org.apache.hadoop.hbase.ZooKeeperConnectionException:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
> at org.apache.pig.PigServer.openIterator(PigServer.java:742)
> at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:612)
> at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:303)
> at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
>
> Thanks for helping
>
>
> Byambajargal
>
>
--
Thanks and Regards,
Sulabh Choudhury
…the "max versions" property on the column
> family, and that will be enforced whenever a major compaction occurs [1].
>
> Jesse
>
> 1) http://www.outerthought.org/blog/465-ot.html
>
>
> On Tue, Mar 29, 2011 at 12:43 PM, sulabh choudhury wrote:
>
>> I just realized…
I just realized that using the increment function creates another version,
with a new timestamp.
Is there a way to reuse the previous TS, thereby overwriting the value?
On Tue, Mar 29, 2011 at 9:38 AM, sulabh choudhury wrote:
> Thanks Jesse. Changing the 10 to 10L made it work.
>
Thanks Jesse. Changing the 10 to 10L made it work.
On Tue, Mar 29, 2011 at 8:59 AM, Jesse Hutton wrote:
> Hi,
>
> It looks like the problem is that the initial value you're inserting in the
> column is an int, while HTable#incrementColumnValue() expects a long.
> Instead of:
>
>
>> I enter data by…
>
> I have tried implementing the Increment function, but I was getting the
> same error.
> On Tue, Mar 29, 2011 at 8:22 AM, sulabh choudhury
> wrote:
> > Hi,
> >
> > Unable to use the Increment function, can anybody suggest what am I doing
> >
Hi,
I am unable to use the Increment function. Can anybody suggest what I am
doing wrong?
I enter data by:
theput.add(Bytes.toBytes("uid"), Bytes.toBytes("1"), 130108782L + t,
Bytes.toBytes(10))
Now when I try to increment the value I have tried:
mytable.incrementColumnValue(Bytes.toBytes("r…
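The fix discussed upthread (changing 10 to 10L) comes down to encoding width: incrementColumnValue treats the stored cell as an 8-byte big-endian long, while Bytes.toBytes(10) writes only 4 bytes. A minimal sketch of the difference using plain java.nio, with no HBase dependency — the toBytes helpers below only simulate what Bytes.toBytes does for int and long:

```java
import java.nio.ByteBuffer;

public class WidthDemo {
    // Simulates org.apache.hadoop.hbase.util.Bytes.toBytes(int): 4 bytes
    static byte[] toBytes(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    // Simulates Bytes.toBytes(long): 8 bytes
    static byte[] toBytes(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    public static void main(String[] args) {
        byte[] asInt = toBytes(10);    // what Bytes.toBytes(10) stores
        byte[] asLong = toBytes(10L);  // what incrementColumnValue expects
        System.out.println(asInt.length);   // 4
        System.out.println(asLong.length);  // 8
        // A 4-byte cell cannot be decoded as a long, hence the error
        // until the initial value is written with 10L.
    }
}
```

So the original Put stored a 4-byte value, and the increment call failed when it tried to read that cell back as an 8-byte long.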