Is it just a typo in the email or are you missing a space after your
--master argument?
The logs here actually don't say much beyond "something went wrong". It
seems fairly low-level, like the gateway process failed or didn't start,
rather than a problem with the program itself. It's hard to say more unless
we can see more of the logs.
I don't believe UDAFs are available in PySpark; this came up on the
developer list while I was asking what features people were missing in
PySpark - see
http://apache-spark-developers-list.1001551.n3.nabble.com/Python-Spark-Improvements-forked-from-Spark-Improvement-Proposals-td19422.html
Hi all,
I am trying to read data from Couchbase using Spark 2.0.0. I need to fetch
the complete data from a bucket as an RDD. How can I solve this? Does Spark
2.0.0 support Couchbase? Please help.
Thanks
On Sun, Oct 16, 2016 at 10:51 AM, Devi P.V wrote:
> Hi all,
> I am trying to read data from Couchbase using Spark 2.0.0. I need to fetch
> the complete data from a bucket as an RDD. How can I solve this? Does Spark
> 2.0.0 support Couchbase? Please help.
>
> Thanks
>
https://github.com/couchbase/couchbase-
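In case it helps, here is a rough sketch of pulling a whole bucket back as
an RDD with the Couchbase Spark connector. Treat the details as
assumptions: the bucket name "myBucket", the node address, and the exact
connector API should be checked against the connector repo linked above.

  import com.couchbase.client.java.query.N1qlQuery
  import com.couchbase.spark._ // adds couchbase* methods to SparkContext
  import org.apache.spark.sql.SparkSession

  // Point the session at your cluster and bucket ("myBucket" is a placeholder).
  val spark = SparkSession.builder()
    .appName("couchbase-read")
    .config("spark.couchbase.nodes", "127.0.0.1")
    .config("spark.couchbase.bucket.myBucket", "") // bucket password, if any
    .getOrCreate()

  // Fetch every document in the bucket via a N1QL query; each row comes
  // back as a CouchbaseQueryRow inside an ordinary RDD.
  val rdd = spark.sparkContext
    .couchbaseQuery(N1qlQuery.simple("SELECT META().id, * FROM `myBucket`"))

  rdd.take(10).foreach(println)

Note that a N1QL primary index has to exist on the bucket for a query like
this to run.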
Thanks for the info, Holden.
So it seems both the JIRA and the comment on the developer list are over a
year old. More surprising, the JIRA has no assignee. Any particular reason
for the lack of activity in this area?
Is writing Scala/Java the only workaround for this? I hear a lot of people
say
The comment on the developer list is from earlier this week. I'm not sure
why UDAF support hasn't made the hop to Python - while I work a fair amount
on PySpark, it's mostly in core & ML and not a lot with SQL, so there could
be good reasons I'm just not familiar with. We can try pinging Davies or
Mi
OK, I misread the year on the dev list. Can you comment on workarounds?
(I.e., the question about whether Scala/Java are the only option.)
On Sun, Oct 16, 2016 at 12:09 PM, Holden Karau wrote:
> The comment on the developer list is from earlier this week. I'm not sure
> why UDAF support hasn't made the hop to Python
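For reference, the usual workaround is to write the aggregate in Scala (or
Java) against Spark 2.0's UserDefinedAggregateFunction API and register it
by name, so it is callable from SQL. A minimal sketch - the class name
MySum, the function name "my_sum", and the table/column names are made up
for illustration:

  import org.apache.spark.sql.Row
  import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
  import org.apache.spark.sql.types._

  // A trivial sum-of-doubles UDAF, just to show the shape of the API.
  class MySum extends UserDefinedAggregateFunction {
    // One double input column.
    def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
    // Intermediate buffer: the running total.
    def bufferSchema: StructType = StructType(StructField("total", DoubleType) :: Nil)
    def dataType: DataType = DoubleType
    def deterministic: Boolean = true

    def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = 0.0
    def update(buffer: MutableAggregationBuffer, input: Row): Unit =
      if (!input.isNullAt(0)) buffer(0) = buffer.getDouble(0) + input.getDouble(0)
    def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
      buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
    def evaluate(buffer: Row): Any = buffer.getDouble(0)
  }

  // Register on the session; then it can be called by name from SQL:
  spark.udf.register("my_sum", new MySum)
  spark.sql("SELECT ticker, my_sum(price) FROM trades GROUP BY ticker")

Once registered on the session, the function is reachable from any SQL
string issued against that session, which is the usual bridge when the
driver program itself is PySpark.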
Hi,
I have trade data stored in an HBase table. Data arrives in CSV format on
HDFS and is then loaded into HBase via a periodic load with
org.apache.hadoop.hbase.mapreduce.ImportTsv.
The HBase table has one column family, "trade_info", and three columns:
ticker, timecreated, price.
The RowKey is a UUID.
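For a table laid out like this, a common way to get it into Spark as an
RDD is a full scan through TableInputFormat. A sketch, assuming the table
is named "trades" - adjust names and add filtering as needed:

  import org.apache.hadoop.hbase.HBaseConfiguration
  import org.apache.hadoop.hbase.client.Result
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable
  import org.apache.hadoop.hbase.mapreduce.TableInputFormat
  import org.apache.hadoop.hbase.util.Bytes
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("hbase-trades").getOrCreate()

  // Standard Hadoop-input-format route: scan the whole table as
  // (rowkey, Result) pairs.
  val conf = HBaseConfiguration.create()
  conf.set(TableInputFormat.INPUT_TABLE, "trades") // hypothetical table name

  val rdd = spark.sparkContext.newAPIHadoopRDD(
    conf,
    classOf[TableInputFormat],
    classOf[ImmutableBytesWritable],
    classOf[Result])

  // Pull out one column, e.g. price, guarding against missing cells.
  val prices = rdd.map { case (_, result) =>
    Option(result.getValue(Bytes.toBytes("trade_info"), Bytes.toBytes("price")))
      .map(Bytes.toString)
  }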
A single JSON object would mean that, for most parsers, it needs to fit in
memory when reading or writing.
On Oct 15, 2016 11:09, "codlife" <1004910...@qq.com> wrote:
> Hi:
>    I have a doubt about the design of spark.read.json: why is the expected
> input not a standard JSON file? Can anyone tell me the internal reason?
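Concretely, spark.read.json expects "JSON lines": one complete,
self-contained JSON record per line, which lets Spark split the file and
parse records in parallel. A small sketch with a made-up people.json:

  // people.json - one record per line, not one big array or object:
  //   {"name": "alice", "age": 30}
  //   {"name": "bob", "age": 25}
  val df = spark.read.json("people.json")
  df.show()

  // Spark 2.0 has no multi-line JSON option, so one large multi-line
  // document has to be handled by hand, e.g. starting from
  // spark.sparkContext.wholeTextFiles(...), which loads each whole file
  // into memory - exactly the cost mentioned above.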
Hello, everyone.
As I mentioned in the title, I wonder whether Spark is the right tool for
updating a data frame repeatedly until there is no more data to update.
For example:
while (there was an update) {
  update data frame A
}
If it is the right tool, then what is the best practice for this kind of
work?
If my understanding of your query is correct:
In Spark, DataFrames are immutable; you can't update a DataFrame in place.
You have to create a new DataFrame to apply each update to the current one.
Thanks,
Divya
On 17 October 2016 at 09:50, Mungeol Heo wrote:
> Hello, everyone.
>
> As I mentioned in the title,
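The immutable-DataFrame version of that loop just rebinds a variable to a
new DataFrame on each pass. A sketch with a made-up update rule (bump
values below 5) standing in for the real logic:

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.functions._

  var df: DataFrame = spark.range(10).toDF("value")
  var changed = true
  while (changed) {
    // Derive a *new* DataFrame instead of mutating the old one.
    val updated = df.withColumn("value",
      when(col("value") < 5, col("value") + 1).otherwise(col("value")))
    // Stop once a pass no longer changes anything.
    changed = updated.except(df).count() > 0
    df = updated
  }

With many iterations the lineage grows, so caching or checkpointing df
every few passes is usually worth it.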
Hi,
I want to configure my Hive to use Spark 2 as its execution engine.
According to Hive's instructions, Spark should be built *without* Hadoop
and without Hive. I could build it myself, but for some reason I hope I
can use an official binary build.
So I want to ask if the official Spark binary build labeled "with