ted correctly, if you're joining then overwrite otherwise only
> append as it removes dups.
>
> I think, in this scenario, just change it to write.mode('overwrite') because
> you're already reading the old data and your job would be done.
>
>
> On Sat 2 Ju
:
> Benjamin,
>
> The append will append the "new" data to the existing data without removing
> the duplicates. You would need to overwrite the file every time if you need
> unique values.
>
> Thanks,
> Jayadeep
>
> On Fri, Jun 1, 2018 at 9:31 PM Benjamin Kim wrote:
I have a situation where I am trying to add only new rows to an existing data set
that lives in S3 as gzipped parquet files, looping and appending for each hour
of the day. First, I create a DF from the existing data, then I use a query to
create another DF with the data that is new. Here is the code:
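A minimal sketch of the pattern described above (read the existing data, keep only
the rows whose keys are not already present, then append them); the s3a paths and
the "id" key column are hypothetical placeholders, not the original snippet:

val existingDf = spark.read.parquet("s3a://bucket/dataset/")            // current data set
val incomingDf = spark.read.parquet("s3a://bucket/staging/hour=01/")    // this hour's data

// left_anti keeps only incoming rows whose key does not exist in the existing data
val newRowsDf = incomingDf.join(existingDf, Seq("id"), "left_anti")
newRowsDf.write.mode("append").parquet("s3a://bucket/dataset/")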
To add, we have a CDH 5.12 cluster with Spark 2.2 in our data center.
On Mon, Nov 13, 2017 at 3:15 PM Benjamin Kim wrote:
> Does anyone know if there is a connector for AWS Kinesis that can be used
> as a source for Structured Streaming?
>
> Thanks.
>
>
I have a question about this. The documentation compares the concept
to BigQuery. Does this mean that we will no longer need to deal
with instances and just pay for execution duration and the amount of data
processed? I’m just curious about how this will be priced.
Also, when will it be ready?
Does anyone know if there is a connector for AWS Kinesis that can be used
as a source for Structured Streaming?
Thanks.
With AWS having Glue and GCE having Dataprep, is Databricks coming out with
an equivalent or better? I know that Serverless is a new offering, but will
it go farther with automatic data schema discovery, profiling, metadata
storage, change triggering, joining, transform suggestions, etc.?
Just curious.
Has anyone seen AWS Glue? I was wondering if there is something similar going
to be built into Spark Structured Streaming? I like the Data Catalog idea to
store and track any data source/destination. It profiles the data to derive the
schema and data types. Also, it does some sort of automated s
Hi Bo,
+1 for your project. I come from the world of data warehouses, ETL, and
reporting analytics. There are many individuals who do not know or want to do
any coding. They are content with ANSI SQL and stick to it. ETL workflows are
also done without any coding using a drag-and-drop user interface.
I’m curious about if and when Spark SQL will ever remove its dependency on the Hive
Metastore. Now that Spark 2.1’s SparkSession has superseded the need for
HiveContext, are there plans for Spark to replace the Hive Metastore
service with its own “SparkSchema” service backed by PostgreSQL, MySQL, etc.?
code which needs you to update it once
> again in 6 months because newer versions of SPARK now find it deprecated.
>
>
> Regards,
> Gourav Sengupta
>
>
>
> On Fri, Feb 24, 2017 at 7:18 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Hi Gourav,
>
o Spark 2.0/2.1.
>
> And besides that, would you not want to work on a platform which is at least
> 10 times faster? What would that be?
>
> Regards,
> Gourav Sengupta
>
> On Thu, Feb 23, 2017 at 6:23 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> We are t
can be
> hidden and read from Input Params.
>
> Thanks,
> Aakash.
>
>
> On 23-Feb-2017 11:54 PM, "Benjamin Kim" <bbuil...@gmail.com> wrote:
> We are trying to use Spark 1.6 within CDH 5.7.1 to retrieve a 1.3GB Parquet
> file from AWS S
We are trying to use Spark 1.6 within CDH 5.7.1 to retrieve a 1.3GB Parquet
file from AWS S3. We can read the schema and show some data when the file is
loaded into a DataFrame, but when we try to do some operations, such as count,
we get this error below.
com.cloudera.com.amazonaws.AmazonClien
ur vendor should use the Parquet internal compression and not take a
> Parquet file and gzip it.
>
>> On 13 Feb 2017, at 18:48, Benjamin Kim wrote:
>>
>> We are receiving files from an outside vendor who creates a Parquet data
>> file and Gzips it before delivery.
We are receiving files from an outside vendor who creates a Parquet data file
and Gzips it before delivery. Does anyone know how to Gunzip the file in Spark
and inject the Parquet data into a DataFrame? I thought using sc.textFile or
sc.wholeTextFiles would automatically Gunzip the file, but I’m
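Parquet's own compression is internal to the file, so an externally gzipped
*.parquet.gz cannot be read directly; one workaround is to strip the gzip wrapper
first and then hand the plain Parquet file to Spark. A minimal sketch, assuming the
vendor file has already been copied to a hypothetical local path:

import java.io.{FileInputStream, FileOutputStream}
import java.util.zip.GZIPInputStream

def gunzip(src: String, dst: String): Unit = {
  val in  = new GZIPInputStream(new FileInputStream(src))
  val out = new FileOutputStream(dst)
  val buf = new Array[Byte](8192)
  Iterator.continually(in.read(buf)).takeWhile(_ != -1).foreach(n => out.write(buf, 0, n))
  in.close(); out.close()
}

gunzip("/tmp/vendor_file.parquet.gz", "/tmp/vendor_file.parquet")
val df = sqlContext.read.parquet("file:///tmp/vendor_file.parquet")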
Has anyone got some advice on how to remove the reliance on HDFS for storing
persistent data? We have an on-premise Spark cluster. It seems like a waste of
resources to keep adding nodes because of a lack of storage space only. I would
rather add more powerful nodes due to the lack of processing power.
I'm still getting the same error. Can you think of anything
> else?
>
> Cheers,
> Ben
>
>
>> On Feb 2, 2017, at 11:06 AM, Asher Krim <ak...@hubspot.com> wrote:
>>
>> Ben,
>>
>> That looks like a scala version mismatch. Have you checked your dep tree?
did you see only scala 2.10.5 being pulled in?
>
> On Fri, Feb 3, 2017 at 12:33 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Asher,
>
> It’s still the same. Do you have any other ideas?
>
> Cheers,
> Ben
>
>
>> On Feb 3, 2017, at 8:16 AM, Asher Krim wrote:
> If you're seeing this locally, you might want to
> check which version of the scala sdk your IDE is using
>
> Asher Krim
> Senior Software Engineer
>
>
> On Thu, Feb 2, 2017 at 5:43 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Hi Asher,
>
> I modified the pom to be the same Spark (1.6.0), HBas
> If you're seeing this locally, you might want to
> check which version of the scala sdk your IDE is using
>
> Asher Krim
> Senior Software Engineer
>
> On Thu, Feb 2, 2017 at 5:43 PM, Benjamin Kim wrote:
>
> Hi Asher,
>
> I modified the pom to be the same Spark (1.6.0),
Asher Krim wrote:
>
> Ben,
>
> That looks like a scala version mismatch. Have you checked your dep tree?
>
> Asher Krim
> Senior Software Engineer
>
>
> On Thu, Feb 2, 2017 at 1:28 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Elek,
>
>
ltSource.createRelation(HBaseRelation.scala:51)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
If you can please help, I would be grateful.
Cheers,
Ben
> O
Elek,
If I cannot use the HBase Spark module, then I’ll give it a try.
Thanks,
Ben
> On Jan 31, 2017, at 1:02 PM, Marton, Elek wrote:
>
>
> I tested this one with hbase 1.2.4:
>
> https://github.com/hortonworks-spark/shc
>
> Marton
>
> On 01/31/2017 09:17 P
Does anyone know how to backport the HBase Spark module to HBase 1.2.0? I tried
to build it from source, but I cannot get it to work.
Thanks,
Ben
This might be useful.
Thanks!
2016-12-23 7:01 GMT+09:00 Benjamin Kim :
Has anyone tried to merge *.gz.parquet files before? I'm trying to merge
them into 1 file after they are output from Spark. Doing a coalesce(1) on
the Spark cluster will not work. It just does not have the resources to do
it. I'm
wse/PARQUET-460>
>
> It seems parquet-tools allows merge small Parquet files into one.
>
>
> Also, I believe there are command-line tools in Kite -
> https://github.com/kite-sdk/kite <https://github.com/kite-sdk/kite>
>
> This might be useful.
>
>
> Th
Has anyone tried to merge *.gz.parquet files before? I'm trying to merge them
into 1 file after they are output from Spark. Doing a coalesce(1) on the Spark
cluster will not work. It just does not have the resources to do it. I'm trying
to do it using the commandline and not use Spark. I will us
eed. But as it states, deeper integration with Scala is yet to be
> developed.
> Any thoughts on how to use TensorFlow with Scala? Need to write wrappers, I
> think.
>
>
> On Oct 19, 2016 7:56 AM, "Benjamin Kim" <bbuil...@gmail.com> wrote:
> On
Has anyone worked with AWS Kinesis and retrieved data from it using Spark
Streaming? I am having issues where it’s returning no data. I can connect to
the Kinesis stream and describe using Spark. Is there something I’m missing?
Are there specific IAM security settings needed? I just simply follo
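For the DStream-based path, the spark-streaming-kinesis-asl receiver is the usual
route; a minimal sketch with hypothetical stream and app names is below. Note that
the underlying Kinesis Client Library also needs IAM rights on DynamoDB (it creates
a lease table named after the app) and CloudWatch, not just on the stream itself;
missing those permissions is a common reason for getting no records back.

import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kinesis.KinesisUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(10))
val stream = KinesisUtils.createStream(
  ssc, "my-kinesis-app", "my-stream", "https://kinesis.us-east-1.amazonaws.com",
  "us-east-1", InitialPositionInStream.LATEST, Seconds(10),
  StorageLevel.MEMORY_AND_DISK_2)

stream.map(bytes => new String(bytes, "UTF-8")).print()
ssc.start()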
On that note, here is an article that Databricks made regarding using
Tensorflow in conjunction with Spark.
https://databricks.com/blog/2016/01/25/deep-learning-with-apache-spark-and-tensorflow.html
Cheers,
Ben
> On Oct 19, 2016, at 3:09 AM, Gourav Sengupta wrote:
>
> while using Deep Lea
> table cache and expose it through the thriftserver. But you have to implement
> the loading logic, it can be very simple to very complex depending on your
> needs.
>
>
> 2016-10-17 19:48 GMT+02:00 Benjamin Kim <bbuil...@gmail.com>:
> Is this techniq
terface into the big data world
> revolves around the JDBC/ODBC interface. So if you don’t have that piece as
> part of your solution, you’re DOA with respect to Tableau.
>
> Have you considered Drill as your JDBC connection point? (YAAP: Yet another
> Apache project)
>
Is there only one process adding rows? because this seems a little risky if
> you have multiple threads doing that…
>
>> On Oct 8, 2016, at 1:43 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> Mich,
>>
>> After much searching, I
ll provide an in-memory cache for interactive analytics. You
> can put full tables in-memory with Hive using the Ignite HDFS in-memory solution.
> All this only makes sense if you do not use MR as the engine, and use the right
> input format (ORC, Parquet) and a recent Hive version.
>
>
aming specifics, there are at least 4 or 5 different implementations
> of HBASE sources, each at a varying level of development and with different
> requirements (HBASE release version, Kerberos support, etc.)
>
>
> _
> From: Benjamin Kim <bbuil...@gmail.com>
experience with this!
>
>
> _____
> From: Benjamin Kim <bbuil...@gmail.com>
> Sent: Saturday, October 8, 2016 11:00 AM
> Subject: Re: Spark SQL Thriftserver with HBase
> To: Felix Cheung <felixcheun...@hotmail.com>
> Cc: m
Thrift Server (with USING,
> http://spark.apache.org/docs/latest/sql-programming-guide.html#tab_sql_10).
>
>
> _
> From: Benjamin Kim <bbuil...@gmail.com>
>
>
> On 8 Octo
book.html#spark>
>
> And if you search you should find several alternative approaches.
>
>
>
>
>
> On Fri, Oct 7, 2016 at 7:56 AM -0700, "Benjamin Kim" <bbuil...@gmail.com> wrote:
>
> Does anyone know if Spark can work with HBase tab
I have a table with data already in it that has primary keys generated by the
function monotonicallyIncreasingId. Now, I want to insert more data into it
with primary keys that will auto-increment from where the existing data left
off. How would I do this? There is no argument I can pass into the function.
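A sketch of one way to offset the generated ids past the existing maximum; the
DataFrame and column names are hypothetical, and it assumes "id" is a LongType
column. The generated keys stay unique and increasing but are not dense, so there
will be gaps.

import org.apache.spark.sql.functions.{max, monotonicallyIncreasingId}

val offset = existingDf.agg(max("id")).head.getLong(0)
val newWithIds = newDf.withColumn("id", monotonicallyIncreasingId() + offset + 1)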
Does anyone know if Spark can work with HBase tables using Spark SQL? I know in
Hive we are able to create tables on top of an underlying HBase table that can
be accessed using MapReduce jobs. Can the same be done using HiveContext or
SQLContext? We are trying to setup a way to GET and POST data
On Oct 6, 2016, at 4:27 PM, Benjamin Kim wrote:
>>
>> Has anyone tried to integrate Spark with a server farm of RESTful API
>> endpoints or even HTTP web-servers for that matter? I know it’s typically
>> done using a web farm as the presentation interface, then data flows thro
Has anyone tried to integrate Spark with a server farm of RESTful API endpoints
or even HTTP web-servers for that matter? I know it’s typically done using a
web farm as the presentation interface, then data flows through a
firewall/router to direct calls to a JDBC listener that will SELECT, INSE
I got this email a while back in regards to this.
Dear Spark users and developers,
I have released version 1.0.0 of scalable-deeplearning package. This package is
based on the implementation of artificial neural networks in Spark ML. It is
intended for new Spark deep learning features that wer
> That sounds interesting, would love to learn more about it.
>
> Mitch: looks good. Lastly I would suggest you to think if you really need
> multiple column families.
>
> On 4 Oct 2016 02:57, "Benjamin Kim" <bbuil...@gmail.com> wrote:
> Lately, I
COLUMN+CELL
> Tesco PLC    column=stock_daily:close, timestamp=1475447365118, value=325.25
> Tesco PLC    column=stock_daily:high, timestamp=1475447365118, value=332.00
> Tesc
> On 1 October 2016 at 23:39, Benjamin Kim wrote:
Mich,
I know up until CDH 5.4 we had to add the HTrace jar to the classpath to make
it work using the command below. But after upgrading to CDH 5.7, it became
unnecessary.
echo "/opt/cloudera/parcels/CDH/jars/htrace-core-3.2.0-incubating.jar" >>
/etc/spark/conf/classpath.txt
Hope this helps.
Thanks,
Ben
> On Sep 16, 2016, at 3:29 PM, Nikolay Zhebet wrote:
>
> Hi! Can you split the init code from the current command? I think that is the main
> problem in your code.
>
> On 16 Sep 2016 at 8:26 PM, "Benjamin Kim" <bbuil...@gmail.com> wrote:
Has anyone using Spark 1.6.2 encountered very slow responses from pulling data
from PostgreSQL using JDBC? I can get to the table and see the schema, but when
I do a show, it takes very long or keeps timing out.
The code is simple.
val jdbcDF = sqlContext.read.format("jdbc").options(
  Map("url"     -> "jdbc:postgresql://host:5432/db",   // placeholder connection values
      "dbtable" -> "schema.tablename",
      "driver"  -> "org.postgresql.Driver")).load()
> tables which "point to" any other DB. i know Oracle provides there own Serde
> for hive. Not sure about PG though.
>
> Once tables are created in hive, STS will automatically see it.
>
> On Wed, Sep 14, 2016 at 11:08 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
Has anyone created tables using Spark SQL that directly connect to a JDBC data
source such as PostgreSQL? I would like to use Spark SQL Thriftserver to access
and query remote PostgreSQL tables. In this way, we can centralize data access
to Spark SQL tables along with PostgreSQL making it very c
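A sketch of what that registration might look like through the data source DDL
referenced in the Spark SQL guide (the USING ... OPTIONS form), so the Thriftserver
can see the table; all connection values are placeholders:

sqlContext.sql("""
  CREATE TABLE pg_customers
  USING org.apache.spark.sql.jdbc
  OPTIONS (
    url "jdbc:postgresql://pg-host:5432/warehouse",
    dbtable "public.customers",
    driver "org.postgresql.Driver",
    user "spark",
    password "********"
  )
""")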
Does anyone have any thoughts about using Spark SQL Thriftserver in Spark 1.6.2
instead of HiveServer2? We are considering abandoning HiveServer2 for it. Some
advice and gotcha’s would be nice to know.
Thanks,
Ben
We use Graphite/Grafana for custom metrics. We found Spark’s metrics not to be
customizable. So, we write directly using Graphite’s API, which was very easy
to do using Java’s socket library in Scala. It works great for us, and we are
going one step further using Sensu to alert us if there is an issue.
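For reference, Graphite's plaintext protocol is just "<metric.path> <value>
<epoch-seconds>\n" sent to the carbon port; a minimal sketch of the kind of helper
described above, with hypothetical host and metric names:

import java.io.PrintWriter
import java.net.Socket

def sendToGraphite(host: String, port: Int, metric: String, value: Double): Unit = {
  val socket = new Socket(host, port)
  val out = new PrintWriter(socket.getOutputStream, true)
  out.println(s"$metric $value ${System.currentTimeMillis / 1000}")
  out.close()
  socket.close()
}

sendToGraphite("graphite.internal", 2003, "spark.jobs.myjob.records_processed", 12345d)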
> On 3 September 2016 at 20:31, Benjamin Kim <bbuil...@gmail.com> wrote:
I was wondering if anyone has tried to create Spark SQL tables on top of HBase
tables so that data in HBase can be accessed using Spark Thriftserver with SQL
statements? This is similar what can be done using Hive.
Thanks,
Ben
I am trying to implement checkpointing in my streaming application but I am
getting a not serializable error. Has anyone encountered this? I am deploying
this job in YARN clustered mode.
Here is a snippet of the main parts of the code.
object S3EventIngestion {
//create and setup streaming
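Checkpointing serializes the whole DStream graph, so any non-serializable object
referenced inside the streaming closures will trigger this error. A minimal sketch
of the usual getOrCreate setup, with hypothetical paths and names:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

def createContext(checkpointDir: String): StreamingContext = {
  val conf = new SparkConf().setAppName("S3EventIngestion")
  val ssc = new StreamingContext(conf, Seconds(60))
  // define all sources and transformations here, inside the factory
  ssc.checkpoint(checkpointDir)
  ssc
}

val checkpointDir = "s3a://bucket/checkpoints"
val ssc = StreamingContext.getOrCreate(checkpointDir, () => createContext(checkpointDir))
ssc.start()
ssc.awaitTermination()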
I would like to know if anyone has tried using the hbase-spark module? I tried
to follow the examples in conjunction with CDH 5.8.0. I cannot find the
HBaseTableCatalog class in the module or in any of the Spark jars. Can someone
help?
Thanks,
Ben
It is included in Cloudera’s CDH 5.8.
> On Jul 22, 2016, at 6:13 PM, Mail.com wrote:
>
> Hbase Spark module will be available with Hbase 2.0. Is that out yet?
>
>> On Jul 22, 2016, at 8:50 PM, Def_Os wrote:
>>
>> So it appears it should be possible to use HBase's new hbase-spark module, if
>>
From what I read, there are no more separate contexts:
"SparkContext, SQLContext, HiveContext merged into SparkSession"
I have not tested it yet, so I don’t know for sure if it’s true.
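For what it's worth, in 2.0 the unified entry point looks like this (a small
illustration; the older handles remain reachable from it for compatibility):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("example")
  .enableHiveSupport()   // covers what HiveContext used to provide
  .getOrCreate()

val sc = spark.sparkContext                     // the SparkContext is still there
val df = spark.read.json("path/to/file.json")   // hypothetical path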
Cheers,
Ben
> On Jul 18, 2016, at 8:37 AM, Koert Kuipers wrote:
>
> in my codebase i would like to gradually transition t
It takes me to the directories instead of the webpage.
> On Jul 13, 2016, at 11:45 AM, manish ranjan wrote:
>
> working for me. What do you mean 'as supposed to'?
>
> ~Manish
>
>
>
> On Wed, Jul 13, 2016 at 11:45 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
Has anyone noticed that the spark.apache.org is not working as supposed to?
-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
frequencyCol 'retweets', timeSeriesColumn
> 'tweetTime' )"
> where 'tweetStreamTable' is created using the 'create stream table ...' SQL
> syntax.
>
>
> -
> Jags
> SnappyData blog <http://www.snappydata.io/blog>
> Download binary, source <https://github.com/SnappyDataInc/snappydata>
>
>
> On Wed, Jul 6, 2016 at 12:49 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
> I recently got a sales email from Sna
I recently got a sales email from SnappyData, and after reading the
documentation about what they offer, it sounds very similar to what Structured
Streaming will offer w/o the underlying in-memory, spill-to-disk, CRUD
compliant data storage in SnappyData. I was wondering if Structured Streaming
I was wondering if anyone, who is a Spark Scala developer, would be willing to
continue the work done for the Kudu connector?
https://github.com/apache/incubator-kudu/tree/master/java/kudu-spark/src/main/scala/org/kududb/spark/kudu
I have been testing and using Kudu for the past month and compar
Has anyone implemented a way to track the performance of a data model? We
currently have an algorithm to do record linkage and spit out statistics of
matches, non-matches, and/or partial matches with reason codes of why we didn’t
match accurately. In this way, we will know if something goes wrong.
Has anyone run into this requirement?
We have a need to track data integrity and model quality metrics of outcomes so
that we can both gauge if the data is healthy coming in and the models run
against them are still performing and not giving faulty results. A nice to have
would be to graph these metrics.
browser_major_version string
> browser_minor_version string
> os_family string
> os_name string
> os_version string
> os_major_version    string
> os_minor_version    string
> # Partition Information
> # col_name
>
> Dr Mich Talebzadeh
>
> LinkedIn
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
> http://talebzadehmich.wordpress.com
`os_name` string COMMENT '',
`os_version` string COMMENT '',
`os_major_version` string COMMENT '',
Does anyone know how to save data in a DataFrame to a table partitioned using
an existing column reformatted into a derived column?
val partitionedDf = df.withColumn("dt",
  concat(substring($"timestamp", 1, 10), lit(" "), substring($"timestamp", 12, 2), lit(":00")))
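A sketch of one way to write the derived column out as the partition key once it
exists on the DataFrame; the output path and table name are hypothetical:

partitionedDf.write
  .mode("append")
  .partitionBy("dt")
  .parquet("/data/events")

// or, for a Hive-managed table:
// partitionedDf.write.mode("append").partitionBy("dt").saveAsTable("events")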
Ben
> On May 21, 2016, at 4:18 AM, Ted Yu wrote:
>
> Maybe more than one version of jets3t-xx.jar was on the classpath.
>
> FYI
>
> On Fri, May 20, 2016 at 8:31 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> I am trying to stream files from an S3 buck
could be wrong.
Thanks,
Ben
I am trying to stream files from an S3 bucket using CDH 5.7.0’s version of
Spark 1.6.0. It seems not to work. I keep getting this error.
Exception in thread "JobGenerator" java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    org/apache/hadoop/fs/s3native/Jets3tNat
I have a curiosity question. These forever/unlimited DataFrames/DataSets will
persist and be query capable. I still am foggy about how this data will be
stored. As far as I know, memory is finite. Will the data be spilled to disk
and be retrievable if the query spans data not in memory? Is Tachy
> On Sun, May 15, 2016 at 11:58 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Hi Ofir,
>
> I just recently saw the webinar with Reynold Xin. He mentioned the Spark
Hi Ofir,
I just recently saw the webinar with Reynold Xin. He mentioned the Spark
Session unification efforts, but I don’t remember the DataSet for Structured
Streaming aka Continuous Applications as he put it. He did mention streaming or
unlimited DataFrames for Structured Streaming so one can
> Cheers
>
> On Apr 27, 2016, at 10:31 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>
>> Hi Ted,
>>
>> Do you know when the release will be? I also see some documentation for
>> usage of the hbase-spark module at the hbase website. But, I d
Next Thursday is Databricks' webinar on Spark 2.0. If you are attending, I bet
many are going to ask when the release will be. Last time they did this, Spark
1.6 came out not too long afterward.
> On Apr 28, 2016, at 5:21 AM, Sean Owen wrote:
>
> I don't know if anyone has begun a firm discuss
Can someone explain to me how the new Structured Streaming works in the
upcoming Spark 2.0+? I’m a little hazy on how data will be stored and referenced
if it can be queried and/or batch processed directly from streams, and whether the
data will be append-only or there will be some sort of upsert capability.
?
Thanks,
Ben
> On Apr 21, 2016, at 6:56 AM, Ted Yu wrote:
>
> The hbase-spark module in Apache HBase (coming with hbase 2.0 release) can do
> this.
>
> On Thu, Apr 21, 2016 at 6:52 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Has anyone found an easy way
Hi Benjamin,
> Yes it should work.
>
> Let me know if you need further assistance; I might be able to get the code
> I've used for that project.
>
> Thank you.
> Daniel
>
> On 24 Apr 2016, at 17:35, Benjamin Kim <bbuil...@gmail.com> wrote:
>
I have data in a DataFrame loaded from a CSV file. I need to load this data
into HBase using an RDD formatted in a certain way.
val rdd = sc.parallelize(
  Array(key1,
    (ColumnFamily, ColumnName1, Value1),
    (ColumnFamily, ColumnName2, Value2),
    ...))  // pseudocode: a row key followed by its (family, qualifier, value) cells
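A sketch of one way to reshape DataFrame rows into a (rowKey, cells) layout like
the pseudocode above; the column and family names are hypothetical:

val hbaseReadyRdd = df.rdd.map { row =>
  val rowKey = row.getAs[String]("id")
  (rowKey, Seq(
    ("cf", "name",  row.getAs[String]("name")),
    ("cf", "email", row.getAs[String]("email"))))
}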
able with the HBase storage handler and
> HiveContext, but it failed due to a bug.
>
> I was able to persist the DF to HBase using Apache Phoenix, which was pretty
> simple.
>
> Thank you.
> Daniel
>
> On 21 Apr 2016, at 16:52, Benjamin Kim <bbuil...@gmail.com> wrote:
On Thu, Apr 21, 2016 at 6:52 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Has anyone found an easy way to save a DataFrame into HBase?
>
> Thanks,
> Ben
>
>
Has anyone found an easy way to save a DataFrame into HBase?
Thanks,
Ben
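One option mentioned elsewhere in these threads is the Hortonworks shc connector
(hortonworks-spark/shc); a hedged sketch of its write path, with a hypothetical
table and columns, following that project's README:

import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

val catalog = """{
  "table":{"namespace":"default", "name":"my_table"},
  "rowkey":"key",
  "columns":{
    "id":{"cf":"rowkey", "col":"key", "type":"string"},
    "name":{"cf":"cf1", "col":"name", "type":"string"}
  }
}"""

df.write
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()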
I see that the new CDH 5.7 has been release with the HBase Spark module
built-in. I was wondering if I could just download it and use the hbase-spark
jar file for CDH 5.5. Has anyone tried this yet?
Thanks,
Ben
d
> I create it based on the JSON structure below, especially the nested elements.
>
> Thanks,
> Ben
>
>
>> On Apr 14, 2016, at 3:46 PM, Holden Karau <hol...@pigscanfly.ca> wrote:
>>
>> You could certainly use RDDs for that, you might
Could you try the code below?
>
> val csvRDD = ...your processing for the csv rdd...
> val df = new CsvParser().csvRdd(sqlContext, csvRDD, useHeader = true)
>
> Thanks!
>
> On 16 Apr 2016 1:35 a.m., "Benjamin Kim" <bbuil...@gmail.com> wrote:
> Hi Hyukjin,
Holden Karau wrote:
>
> You could certainly use RDDs for that, you might also find using Dataset
> selecting the fields you need to construct the URL to fetch and then using
> the map function to be easier.
>
> On Thu, Apr 14, 2016 at 12:01 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> https://github.com/databricks/spark-csv/blob/master/src/main/scala/com/databricks/spark/csv/CsvParser.scala#L150
>
>
> Thanks!
>
> On 2 Apr 2016 2:47
I was wondering what would be the best way to use JSON in Spark/Scala. I need to
lookup values of fields in a collection of records to form a URL and download
that file at that location. I was thinking an RDD would be perfect for this. I
just want to hear from others who might have more experience
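A sketch of the lookup idea, along the lines of the select-then-map suggestion in
the replies; the input path, field names, and URL pattern are all hypothetical:

val records = sqlContext.read.json("s3a://bucket/records/")
val urls = records.select("host", "path", "file_id").rdd.map { r =>
  val host = r.getAs[String]("host")
  val path = r.getAs[String]("path")
  val id   = r.getAs[String]("file_id")
  s"https://$host/$path/$id"
}
// each URL could then be fetched inside mapPartitions, e.g. with scala.io.Source.fromURL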
"true") // Automatically infer data types
.load("s3://" + bucket + "/" + key)
//save to hbase
})
ssc.checkpoint(checkpointDirectory) // set checkpoint directory
ssc
}
Thanks,
Ben
> On Apr 9, 2016, at 6:12 PM, Benjamin Kim wrote:
>