None of the monkeys here noticed that?
Ever heard of a toggle switch? Look behind you.
Look at the light switch. That is a toggle switch: ON/OFF.
Jane thorpe
janethor...@aol.com
Smith
Sent: Wednesday, April 15, 2020 2:23 PM
To: jane thorpe
Cc: dh.lo...@gmail.com; user@spark.apache.org; janethor...@aol.com;
em...@yeikel.com
Subject: Re: Going it alone.
The Web UI only shows:
"The Storage Memory column shows the amount of memory used and reserved for caching data."
The Web UI does not show the values of Xmx, Xms or Xss, so
you are never going to know the cause of an
OutOfMemoryError or StackOverflowError.
The visual tool is as useless as it
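For reference, those JVM sizes are set through Spark configuration rather than read off the UI. Below is a minimal PySpark sketch; the values and the app name are arbitrary placeholders, not recommendations.

from pyspark.sql import SparkSession

# Arbitrary example values for the JVM sizes the Web UI does not display.
# Note: driver memory only takes effect if set before the driver JVM starts,
# e.g. via spark-submit --driver-memory 4g.
spark = (SparkSession.builder
         .appName("memory-settings-demo")
         .config("spark.driver.memory", "4g")                  # driver -Xmx
         .config("spark.executor.memory", "4g")                # executor -Xmx
         .config("spark.executor.extraJavaOptions", "-Xss4m")  # executor thread stack (-Xss)
         .getOrCreate())

print(spark.sparkContext.getConf().get("spark.executor.memory"))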
I don't feel as if anything were implied when you were asked for use cases or what problem you are solving. You were asked to identify some use cases, of which you don't appear to have any.
On Tue, Apr 14, 2020 at 4:49 PM jane thorpe wrote:
That's what I want to know: use cases.
I am looking for direction, as I described, and I want to know whether Spark is headed in my direction.
You are implying Spark could be.
So tell me about the USE CASES and I'll do the rest.
On Tuesday, 14 April 2020 yeikel valdes wrote:
It depends on y
Hi,
I consider myself to be quite good at software development, especially using frameworks.
I like to get my hands dirty. I have spent the last few months understanding
modern frameworks and architectures.
I am looking to invest my energy in a product where I don't have to rely on the
Here is another tool I use, a logic analyser (7:55):
https://youtu.be/LnzuMJLZRdU
You could take some suggestions for improving query performance:
https://dzone.com/articles/why-you-should-not-use-select-in-sql-query-1
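The article's advice carries over to Spark SQL: project only the columns you need. A minimal sketch, where the "events" table and its column names are made up for illustration:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("select-columns-demo").getOrCreate()

# Pulls every column, including ones the job never uses.
everything = spark.sql("SELECT * FROM events")

# Restricting the projection lets Spark prune columns at the source,
# especially with columnar formats such as Parquet.
needed = spark.table("events").select("event_time", "user_id")
needed.explain()  # the physical plan shows the reduced column set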
Jane thorpe
janethor...@aol.com
-Original Message-
From: jane
h the control flow within an application.
These types of visualizations are useful, and AppOptics has them, but they can
be difficult to understand for those of us without a PhD."
Especially helpful if you want to understand through visualisation and you do not have a PhD.
Jane thorpe
Thank you once again, sir, for clarifying WEKA and its scope of use cases.
jane thorpe
janethor...@aol.com
-Original Message-
From: Teemu Heikkilä
To: jane thorpe
CC: user
Sent: Sun, 12 Apr 2020 22:33
Subject: Re: covid 19 Data [DISCUSSION]
Hi Jane!
The data you pointed there is coup
Hi,
Three weeks ago a PhD guy proposed to start a project to use Apache Spark
to help the WHO with predictive analysis using COVID-19 data.
I have located the daily updated data.
It can be found here:
https://github.com/CSSEGISandData/COVID-19
I was wondering if Apache Spark is up to the job of
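As a starting point, here is a minimal PySpark sketch of loading one of the daily report CSVs from that repository; the local file path and the column names are assumptions for illustration, not a tested pipeline.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("covid19-csv-demo").getOrCreate()

# Assumes one of the daily report CSVs has been cloned or downloaded locally;
# the path below is hypothetical.
path = "COVID-19/csse_covid_19_data/csse_covid_19_daily_reports/04-14-2020.csv"

df = (spark.read
      .option("header", "true")       # the reports ship with a header row
      .option("inferSchema", "true")  # let Spark infer numeric columns
      .csv(path))

df.printSchema()
# Column names such as "Country_Region" and "Confirmed" are assumed from the
# repository's report format; adjust to the actual schema.
df.groupBy("Country_Region").sum("Confirmed").show(10)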
hi,
A PhD guy proposed to start a project for the WHO
accumulated
jane thorpe
janethor...@aol.com
You seem to be implying the error is intermittent.
You seem to be implying data is being ingested via JDBC. So the connection has proven itself to be working, unless no data is arriving from the JDBC channel at all. If no data is arriving, then one could say it could be the JDBC.
If the e
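For completeness, a minimal sketch of a JDBC ingest in PySpark that makes it easy to see whether any rows are arriving at all; the URL, table and credentials are placeholders, not the original poster's setup.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-ingest-demo").getOrCreate()

# Placeholder connection details; the JDBC driver jar must be on the
# classpath (e.g. passed with --jars or spark.jars at submit time).
df = (spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/mydb")
      .option("dbtable", "public.events")
      .option("user", "spark_user")
      .option("password", "secret")
      .load())

# A quick check that rows are actually coming over the connection.
print(df.count())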
rict emailing rules.
Do you think email rules are far more important than programming rules and guidelines?
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/clickstream/PageViewStream.scala
On Mon, 6 Apr 2020, 07:04 jane thorpe wrote:
Hi Som,
Did you know that the simple demo program of reading characters from a file didn't work?
Who wrote that simple hello-world-type little program?
jane thorpe
janethor...@aol.com
-Original Message-
From: jane thorpe
To: somplasticllc ; user
Sent: Fri, 3 Apr 2020 2:44
Subjec
spark
# work around
sc.setJobGroup("a", "b")  # sc is the SparkContext provided by the pyspark shell
tempc = sc.parallelize([38.4, 19.2, 13.8, 9.6])    # temperatures in Celsius
tempf = tempc.map(lambda x: (float(9)/5)*x + 32)   # convert each value to Fahrenheit
tempf.collect()
OUTPUT :
[101.12, 66.56, 56.84, 49.28]
calculator result = 55.04
Is the answer correct when x = 12.8?
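A quick check of the arithmetic: 12.8 °C does convert to 55.04 °F, so the calculator result is right for x = 12.8, but 12.8 is not among the inputs above; the closest value in the list is 13.8, which maps to 56.84.

# Plain-Python check of the same formula used in the map() above.
def c_to_f(c):
    return (9 / 5) * c + 32

print(c_to_f(12.8))  # 55.04 -> matches the calculator result
print(c_to_f(13.8))  # 56.84 -> the third element of the collected output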
jane thorpe
janethor...@aol.com
0.1:9000/hdfs/spark/examples/README.txt MapPartitionsRDD[91] at textFile at <console>:27
counts: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[94] at reduceByKey at <console>:30
scala> :quit
jane thorpe
janethor...@aol.com
-Original Message-
From: Som Lima
CC: user
Sent: Tue, 31 Mar 2020
hi,
Are there setup instructions on the website for
spark-3.0.0-preview2-bin-hadoop2.7? I can run the same program for HDFS format:
val textFile = sc.textFile("hdfs://...")
val counts = textFile.flatMap(line => line.split(" "))
                     .map(word => (word, 1))
                     .reduceByKey(_ + _)
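For comparison, a minimal PySpark sketch of the same word count; the HDFS path is a placeholder, and the session setup stands in for the pre-defined sc of the shells above.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-demo").getOrCreate()
sc = spark.sparkContext

# Placeholder HDFS location; substitute the real namenode address and file.
text_file = sc.textFile("hdfs://namenode:9000/path/to/input.txt")

counts = (text_file
          .flatMap(lambda line: line.split(" "))  # split each line into words
          .map(lambda word: (word, 1))            # pair each word with 1
          .reduceByKey(lambda a, b: a + b))       # sum counts per word

print(counts.take(10))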