Mich Talebzadeh wrote:

OK to your point below:

"... We are going to deploy 20 physical Linux servers for use as an
on-premise Spark & HDFS on Kubernetes cluster ..."

Kubernetes is really a cloud-native technology. However, the cloud-native
concept does not exclude the use of on-premises infrastructure in cases
[...] mesh structure to integrate these microservices together, including
on-premise and in cloud?

Now you have 20 tin boxes on-prem that you want to deploy for building your
Spark & HDFS stack on top of them. You will gain benefit from Kubernetes
and your microservices by [...]

The original question was:

We are going to deploy 20 physical Linux servers for use as an on-premise
Spark & HDFS on Kubernetes cluster. My question is: within this
architecture, is it best to have the pods run directly on bare metal, under
VMs or system containers like LXC, and/or under an on-premise instance of
somet[...]
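Whatever the worker nodes turn out to be (bare metal, VMs or LXC), the Spark
side mostly just needs to reach the Kubernetes API server. As a hedged
illustration only (it assumes Spark 2.3+ with native Kubernetes support; the
API-server address, container image and namespace are invented placeholders,
not values from this thread), a driver can be pointed at such a cluster
roughly like this:

import org.apache.spark.sql.SparkSession

// Sketch only: the endpoint, image and namespace below are made-up placeholders.
// Assumes the driver runs in client mode with network access to the cluster.
val spark = SparkSession.builder()
  .appName("spark-on-k8s-sketch")
  .master("k8s://https://kube-apiserver.example.com:6443")   // Kubernetes API server
  .config("spark.kubernetes.container.image", "example/spark:3.5.0")
  .config("spark.kubernetes.namespace", "spark-jobs")
  .config("spark.executor.instances", "4")
  .getOrCreate()

// Whether the nodes backing the executor pods are bare metal, VMs or LXC
// is largely transparent at this level.
spark.range(1000000L).selectExpr("sum(id)").show()
spark.stop()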
Hi everybody.
I'm totally new to Spark and there is one thing I have not managed to find
out. I have a full Ambari install with HBase, Hadoop and Spark. My code
reads and writes to HDFS via HBase, so, as I understand it, all the stored
data is in byte format in HDFS. Now, I know that it's possible [...]
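The message is cut off above, but on the point that HBase keeps everything
in HDFS as raw bytes: here is a small, non-authoritative sketch of the usual
round trip through HBase's Bytes utility (it assumes hbase-common is on the
classpath; the values are invented for illustration):

import org.apache.hadoop.hbase.util.Bytes

// HBase persists row keys, qualifiers and cell values as plain byte arrays,
// so typed values have to be encoded and decoded explicitly.
val rowKey: Array[Byte]  = Bytes.toBytes("user:42")   // String -> bytes
val amount: Array[Byte]  = Bytes.toBytes(123.45)      // Double -> bytes
val counter: Array[Byte] = Bytes.toBytes(7L)          // Long   -> bytes

// Decoding goes back through the matching Bytes.toXxx call.
val key    = Bytes.toString(rowKey)                   // "user:42"
val value  = Bytes.toDouble(amount)                   // 123.45
val visits = Bytes.toLong(counter)                    // 7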
Spark will execute as a client for HDFS. In other words, it will contact the
Hadoop master for the HDFS cluster (the NameNode), which will return the
block information, and the data will then be fetched from the DataNodes.
Date: Tue, 19 Apr 2016 14:00:31 +0530
Subject: Spark + HDFS
From: chaturvedich...@gmail.c
When I use Spark and HDFS on two different clusters, how do the Spark
workers know which block of data is available on which HDFS node? Which
component basically caters to this? Can someone throw some light on this?
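A minimal sketch of the client-side view described in the reply above: the
Spark job only needs the NameNode address of the (possibly separate) HDFS
cluster, and the block-to-DataNode mapping is resolved for it by the HDFS
client library (the host name, port and path below are placeholders, not
taken from this thread):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("remote-hdfs-read-sketch")
  .getOrCreate()

// The fully qualified URI points at the NameNode of the other cluster.
// The HDFS client asks that NameNode for block locations, then streams each
// block directly from the DataNodes that hold it; Spark also uses those
// locations as locality hints when scheduling tasks.
val lines = spark.sparkContext.textFile(
  "hdfs://namenode.other-cluster.example.com:8020/data/events/*.log")

println(s"line count = ${lines.count()}")
spark.stop()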
Hello,
Spark collects HDFS read/write metrics per application/job; see the details at
http://spark.apache.org/docs/latest/monitoring.html.
I have connected the Spark metrics to Graphite and am displaying nice graphs
in Grafana.
BR,
Arek
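As an illustration of the setup Arek describes, the metrics system can be
pointed at a Graphite endpoint via conf/metrics.properties or, in newer
Spark releases, via equivalent spark.metrics.conf.* settings. A hedged
sketch of the latter (the Graphite host, port and prefix are placeholders):

import org.apache.spark.sql.SparkSession

// Sketch only: graphite.example.com, port 2003 and the prefix are made up.
// The same keys, minus the "spark.metrics.conf." prefix, can live in
// conf/metrics.properties instead.
val spark = SparkSession.builder()
  .appName("graphite-metrics-sketch")
  .config("spark.metrics.conf.*.sink.graphite.class",
          "org.apache.spark.metrics.sink.GraphiteSink")
  .config("spark.metrics.conf.*.sink.graphite.host", "graphite.example.com")
  .config("spark.metrics.conf.*.sink.graphite.port", "2003")
  .config("spark.metrics.conf.*.sink.graphite.period", "10")
  .config("spark.metrics.conf.*.sink.graphite.unit", "seconds")
  .config("spark.metrics.conf.*.sink.graphite.prefix", "spark_jobs")
  .getOrCreate()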
On Thu, Dec 31, 2015 at 2:00 PM, Steve Loughran wrote:
>
> On 30 Dec 2015, at 13:19, alvarobrandon wrote:
>
> Hello:
>
> Is there any way of monitoring the number of bytes or blocks read and
> written by a Spark application? I'm running Spark with YARN and I want to
> measure how I/O intensive a set of applications are. The closest thing I
> have seen is [...]
>
> Thanks in advance
> Best Regards.
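To make the pointer to per-application metrics concrete, here is one hedged
way to total the bytes read and written by a job from inside the application
itself, using a SparkListener over the task metrics (the class name and path
are mine, not from the thread; listener events arrive asynchronously, so the
totals may lag slightly behind job completion):

import java.util.concurrent.atomic.LongAdder

import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}
import org.apache.spark.sql.SparkSession

// Accumulates bytes read/written across all finished tasks of this application.
class IoVolumeListener extends SparkListener {
  val bytesRead    = new LongAdder
  val bytesWritten = new LongAdder

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    val m = taskEnd.taskMetrics
    if (m != null) {
      bytesRead.add(m.inputMetrics.bytesRead)        // e.g. HDFS reads
      bytesWritten.add(m.outputMetrics.bytesWritten) // e.g. HDFS writes
    }
  }
}

object IoVolumeExample {
  def main(args: Array[String]): Unit = {
    val spark    = SparkSession.builder().appName("io-volume-sketch").getOrCreate()
    val listener = new IoVolumeListener
    spark.sparkContext.addSparkListener(listener)

    // Placeholder path; any HDFS read or write is counted the same way.
    spark.read.textFile("hdfs:///tmp/example-input").count()

    println(s"bytes read = ${listener.bytesRead.sum()}, " +
            s"bytes written = ${listener.bytesWritten.sum()}")
    spark.stop()
  }
}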