I am seeing extremely slow performance from Spark 1.2.1 (MapR 4) on Hadoop
2.5.1 (YARN) against Hive external tables on s3n. I am running a 'select
count(*) from s3_table' query on the nodes using Hive 0.13 and Spark SQL
1.2.1.
The cluster is 5 EC2 c3.2xlarge nodes running MapR 4.0.2 M3.
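For reference, the shape of the command under test, as a sketch — the table name comes from the thread, but the CLI flags are assumptions (Spark 1.2.x ships a spark-sql shell when built with Hive support):

```shell
# Run the count through Spark SQL on YARN; slow s3n scans are often
# dominated by per-split S3 metadata calls, since count(*) forces a
# full read of every object backing the external table.
spark-sql --master yarn-client -e "select count(*) from s3_table"

# The same query through Hive 0.13 for comparison:
hive -e "select count(*) from s3_table"
```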
The t
This is the catalog table, which stores information about all the other
tables. It is normal for reads on this table to be high while a cluster is
recovering, as all the table information is loaded into the region servers.
http://hbase.apache.org/book/arch.catalog.html
-Pere
On Mon, Dec 15, 2014 at 12:21 P
Jiganesh,
Yes, they are compatible. I have found that you may need to set up the configs
on Hue yourself, as it is not automatic.
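The configs in question live in Hue's hue.ini; a minimal sketch, assuming the HBase Thrift 1 server is running (the host:port and the "Cluster" label are placeholders):

```ini
# Hue's HBase Browser talks to HBase via the Thrift 1 server,
# which must be started separately: hbase thrift start
[hbase]
  # Comma-separated (NiceName|host:port) pairs; the label is arbitrary
  hbase_clusters=(Cluster|thrift-host:9090)
```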
-Pere
On Mon, Dec 1, 2014 at 3:15 PM, Jignesh Patel
wrote:
> Can we use Apache HBase/Hadoop with Hue?
>
> On Thu, Nov 6, 2014 at 12:41 AM, Dima Spivak wrote:
>
> > Yep,
Hi there,
Recently I have been experiencing instability when scanning our HBase cluster.
The table we are trying to scan is 1.5B records (~1TB); we have a 12GB heap
and 17 servers. Our GC options are as follows:
-Xmx12g -XX:+UseConcMarkSweepGC -XX:OnOutOfMemoryError="kill -9 %p"
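As an aside, these flags usually belong in hbase-env.sh; a sketch of one way to write them there (the space inside the OnOutOfMemoryError value is a common quoting gotcha, and -Xmx should appear only once):

```shell
# hbase-env.sh (sketch): single -Xmx, CMS collector, and a quoted
# OnOutOfMemoryError handler so the space in "kill -9 %p" survives.
export HBASE_REGIONSERVER_OPTS="-Xmx12g -XX:+UseConcMarkSweepGC -XX:OnOutOfMemoryError='kill -9 %p'"
```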
The err
I think it may be a thrift issue, have you tried playing with the connection
queues?
set hbase.thrift.maxQueuedRequests to 0
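That setting goes in hbase-site.xml on the Thrift server host; a sketch (restart the Thrift server after changing it):

```xml
<!-- With persistent client connections, requests parked in the Thrift
     thread pool's queue may never get served, so disable the queue. -->
<property>
  <name>hbase.thrift.maxQueuedRequests</name>
  <value>0</value>
</property>
```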
From Varun Sharma:
"If you are opening persistent connections (connections that never close), you
should probably set the queue size to 0. Because those connections will
a
seems yet again that it
is a bad idea to have one drive per machine, I will eventually migrate these
instances to I2
Regards,
Pere
On Nov 6, 2014, at 4:20 PM, Pere Kyle wrote:
> So I have another symptom which is quite odd. When trying to take a snapshot
> of the table with no writes
or as an example for adding tracing to
> your application.
>
> On Thursday, November 6, 2014, Pere Kyle wrote:
Bryan,
Thanks again for the incredibly useful reply.
I have confirmed that the callQueueLen is in fact 0, with a max value of 2 in
the last week (in Ganglia).
hbase.hstore.compaction.max was set to 15 on the nodes, up from a previous 7.
Freezes (laggy responses) on the cluster are frequent and af
ot we use
> i2.4xlarge, but we also have about 250 of them. I'd just recommend trying
> different setups, even the c3 level would be great if you can shrink your
> disk size at all (compression and data block encodings).
>
> On Thu, Nov 6, 2014 at 2:31 PM, Pere Kyle wrote:
>
>&
actions running throughout the day.
>
> On Thu, Nov 6, 2014 at 2:14 PM, Pere Kyle wrote:
>
>> Thanks again for your help!
>>
>> I do not see a single entry in my logs for memstore pressure/global heap.
>> I do see tons of logs from the periodicFlusher:
>> http://pa
e/regionserver/MemStoreFlusher.java
> and you will find all the logs.
>
> Cheers
>
> On Thu, Nov 6, 2014 at 10:05 AM, Pere Kyle wrote:
>
>> So I have set the heap to 12Gb and the memstore limit to upperLimit .5
>> lowerLimit .45. I am not seeing any changes i
>
> Cheers
>
> On Wed, Nov 5, 2014 at 11:20 PM, Pere Kyle wrote:
>
>> Watching closely a region server in action. It seems that the memstores
>> are being flushed at around 2MB on the regions. This would seem to
>> indicate that there is not enough heap for the
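The figures quoted elsewhere in this thread make that arithmetic easy to check; a back-of-the-envelope sketch, assuming writes spread evenly over the ~1024 regions mentioned later (real clusters skew):

```python
# Rough memstore arithmetic from figures quoted in this thread:
# 12 GB heap, upperLimit 0.5, ~1024 regions across 15 region servers.
HEAP_BYTES = 12 * 1024**3      # -Xmx12g
UPPER_LIMIT = 0.5              # hbase.regionserver.global.memstore.upperLimit
REGIONS = 1024
SERVERS = 15

regions_per_server = REGIONS / SERVERS               # ~68
global_memstore = HEAP_BYTES * UPPER_LIMIT           # 6 GiB per server
share_per_region = global_memstore / regions_per_server

print(f"regions per server : {regions_per_server:.0f}")
print(f"memstore per region: {share_per_region / 1024**2:.0f} MiB")
# ~90 MiB of memstore is available per region, yet flushes happen at
# ~2 MB -- far below heap pressure, which points at the time-based
# periodicFlusher instead (tiny flushes -> many small HFiles ->
# constant compaction load).
```

If the per-region share came out near 2 MB the flushes would indicate genuine heap pressure; since it doesn't, the periodicFlusher log spam seen earlier in the thread is the more likely culprit.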
', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
>
> In 0.94.18 there is no online merge, so you have to use another method to
> merge the small regions.
>
> Cheers
>
> On Wed, Nov 5, 2014 at 10:14 PM, Pere Kyle wrote:
>
>>
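The "other method" available in 0.94 is the offline Merge utility; a command sketch (table and region names are placeholders, and HBase must be shut down while it runs):

```shell
# Offline merge of two adjacent regions (0.94-era tool; run with the
# cluster stopped, then restart HBase and verify with hbck).
hbase org.apache.hadoop.hbase.util.Merge <table_name> <region_name_1> <region_name_2>
```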
wrote:
>
>> Can you provide a bit more information (such as HBase release) ?
>>
>> If you pastebin one of the region servers' log, that would help us
>> determine the cause.
>>
>> Cheers
>>
>>
> On Wed, Nov 5, 2014 at 9:29 PM, Pere Kyle wrote:
Hello,
Recently our cluster, which had been running fine for 2 weeks, split into 1024
regions at 1GB per region; after this split the cluster is unusable. Using the
performance benchmark I was getting a little better than 100 w/s, whereas
before it was 5000 w/s. There are 15 nodes of m2.2xlarge wit
Hi,
I am implementing disaster recovery for our HBase cluster and had one quick
question about import/export with the s3n file system.
I know that ExportTable can be given a start time and an end time, enabling
incremental backups. My question is how to properly store these incremental
backups on s3
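One workable layout, as a sketch: give each run its own time-stamped prefix so the increments can be replayed in order with Import later (bucket, table name, and dates are placeholders; times are epoch millis):

```shell
# Incremental Export to s3n: <table> <outputdir> <versions> <start> <end>
hbase org.apache.hadoop.hbase.mapreduce.Export \
  my_table s3n://my-backup-bucket/my_table/2014-11-03 \
  1 1414972800000 1415059200000
```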
Nishanth,
In my experience the only way I have been able to clear the dead region
servers is to restart the master daemon.
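For reference, a sketch of that restart on a stock Apache install (script paths vary; MapR and CDH wrap this in their own service commands):

```shell
# Restart only the master daemon; region servers keep running.
./bin/hbase-daemon.sh stop master
./bin/hbase-daemon.sh start master
```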
-Pere
On Mon, Nov 3, 2014 at 9:49 AM, Nishanth S wrote:
> Hey folks,
>
> How do I remove a dead region server? I manually failed over the HBase
> master but this is still