Can the Spark gurus explain why I get that ClassNotFound exception?
Need any more information, please let me know.
Much thanks,
Conor
package com.example.spark.streaming.reporting.live.jobs
import java.util.Date
import scala.Array.canBuildFrom
import scala.collection.mutable.MutableList
import org.apa
(Spark does not provide storage natively, so you would need to use an external data store).
You can also use Spark in combination with MemSQL, either row store or
column store, using the MemSQL Spark Connector.
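(As a rough illustration only, and not the MemSQL Spark Connector API itself: the sketch below goes through Spark's generic JDBC data source against MemSQL's MySQL-compatible endpoint; the host, database, table and credentials are made up.)

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object MemSqlJdbcSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("memsql-jdbc-sketch"))
    val sqlContext = new SQLContext(sc)

    // Read a MemSQL table through Spark's generic JDBC source
    // (MemSQL speaks the MySQL wire protocol). All options are placeholders.
    val events = sqlContext.read.format("jdbc").options(Map(
      "url"      -> "jdbc:mysql://memsql-host:3306/analytics",
      "dbtable"  -> "events",
      "user"     -> "spark",
      "password" -> "secret")).load()

    events.registerTempTable("events")
    sqlContext.sql("SELECT COUNT(*) FROM events").show()

    sc.stop()
  }
}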
Thanks,
Conor
On Thu, May 28, 2015 at 10:36 PM, Ashish Mukherjee <
ashish.mukher...@gmail.com> wrote:
val broadcastValuesFromStream = activitiesByVisitKey.map(activity =>
  hashMapBroadcast.value("1"))
// should print (1, 1000) after 2 minutes when updated
broadcastValuesFromStream.print()
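(For reference, a minimal self-contained sketch of the pattern above, reading a broadcast map from inside a DStream transformation; the socket source and the exact stream and variable names are assumptions based on the snippet.)

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BroadcastInStreamSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("broadcast-in-stream"))
    val ssc = new StreamingContext(sc, Seconds(10))

    // Broadcast a lookup map to the executors; "1" -> 1000 mirrors the
    // (1, 1000) output expected in the snippet above.
    val hashMapBroadcast = sc.broadcast(Map("1" -> 1000))

    // Stand-in source; in the thread this is a stream keyed by visit.
    val activitiesByVisitKey = ssc.socketTextStream("localhost", 9999)

    // Look up the broadcast value for every record and print it each batch.
    val broadcastValuesFromStream =
      activitiesByVisitKey.map(activity => ("1", hashMapBroadcast.value("1")))
    broadcastValuesFromStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}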
Regards,
Conor
On Fri, Jul 3, 2015 at 4:24 PM, Raghavendra Pandey <
raghavendra.pan...@gmail.com> wrote:
> You can
Just bumping the issue I am having; can anyone provide some direction? I
have been stuck on this for a while now.
Thanks,
Conor
On Fri, Dec 11, 2015 at 5:10 PM, Conor Fennell wrote:
> Hi,
>
> I have a memory leak in the Spark driver which is not in the heap or
> the non-heap.
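(For context, a minimal sketch of how heap and non-heap usage can be sampled inside the driver with the JMX MemoryMXBean, which is one way to see that a growing process size is not reflected in either area; the logging interval is arbitrary.)

import java.lang.management.ManagementFactory

object DriverMemorySamplerSketch {
  def main(args: Array[String]): Unit = {
    val memory = ManagementFactory.getMemoryMXBean

    // Log JVM-reported heap and non-heap usage once a minute; if the process
    // RSS keeps growing while both stay flat, the leak is outside the
    // JVM-managed areas (for example native allocations).
    while (true) {
      val heap = memory.getHeapMemoryUsage
      val nonHeap = memory.getNonHeapMemoryUsage
      println(s"heap used=${heap.getUsed} committed=${heap.getCommitted} " +
        s"nonHeap used=${nonHeap.getUsed} committed=${nonHeap.getCommitted}")
      Thread.sleep(60000)
    }
  }
}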
'${range._1}','${range._2}',${range._3.getTime()},${range._4},${range._5},${range._6})")
})
session.close()
cluster.close()
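(A minimal sketch of the same kind of insert with the DataStax Java driver, using a prepared statement instead of string interpolation; the keyspace, table and column names are invented for the example.)

import com.datastax.driver.core.Cluster

object CassandraInsertSketch {
  def main(args: Array[String]): Unit = {
    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    val session = cluster.connect("reporting")

    // Prepared once, bound per row; avoids building CQL through interpolation.
    val insert = session.prepare(
      "INSERT INTO ranges (id, name, start_time, a, b, c) VALUES (?, ?, ?, ?, ?, ?)")

    val range = ("id-1", "range-1", new java.util.Date(), 1, 2, 3)
    session.execute(insert.bind(
      range._1, range._2, range._3,
      Int.box(range._4), Int.box(range._5), Int.box(range._6)))

    session.close()
    cluster.close()
  }
}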
Thanks,
Conor
On Mon, Dec 14, 2015 at 2:12 PM, Singh, Abhijeet
wrote:
> Hi Conor,
>
> What do you mean when you say leak
and they never throw these
exceptions.
I would be very appreciative for any direction and I can happily provide
more detail.
Thanks,
Conor
15/10/21 23:30:31 INFO consumer.SimpleConsumer: Reconnect due to
socket error: java.nio.channels.ClosedChannelException
15/10/21 23:31:01 ERROR
and they never throw these exceptions.
I would be very appreciative for any direction and I can happily
provide more detail.
Thanks,
Conor
15/10/22 13:30:00 INFO spark.CacheManager: Partition rdd_1528_0 not
found, computing it
15/10/22 13:30:00 INFO kafka.KafkaRDD: Computing topic events
I assume this isn't the intended behavior for mode = 'DROPMALFORMED'. It
appears that with Databricks lazy execution, the mode = 'DROPMALFORMED'
option is somehow ignored.
I've included the CSV file I have been using
(It
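(A minimal sketch of reading with the spark-csv package and mode DROPMALFORMED; the file path is a placeholder. Because execution is lazy, malformed rows are only dropped when an action such as show() or count() forces the scan.)

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object DropMalformedSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("drop-malformed-sketch"))
    val sqlContext = new SQLContext(sc)

    // Malformed lines should be silently dropped rather than failing the job.
    val df = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .option("mode", "DROPMALFORMED")
      .load("/path/to/input.csv")

    df.show()
    println(s"rows after dropping malformed lines: ${df.count()}")
  }
}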
Can the Spark gurus explain why I get that ClassNotFound exception?
Need any more information, please let me know.
Much thanks,
Conor
package com.example.spark.streaming.reporting.live.jobs
> import java.util.Date
> import scala.Array.canBuildFrom
> import scala.collection.mutable.MutableList
> import
and stopped passing in a bucket array to the ActiveJourney class.
Instead I hard-code all the time buckets I need in the ActiveJourney
class; this approach works and recovers from checkpointing, but it is
not extensible.
Can the Spark gurus explain why I get that ClassNotFound exception?
Need any more information, please let me know.
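(A minimal sketch of the checkpoint-recovery pattern involved, StreamingContext.getOrCreate with a factory function; the checkpoint directory and the stand-in source are placeholders, and the bucket/ActiveJourney classes from the thread are not reproduced here. Anything referenced by the recovered DStream graph has to be serializable and on the classpath, otherwise recovery fails with errors such as ClassNotFoundException.)

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointRecoverySketch {
  val checkpointDir = "hdfs:///tmp/reporting-checkpoint" // placeholder path

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("active-journey-sketch")
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)

    // Stand-in source and trivial computation; the real job builds its
    // time buckets here before the context is checkpointed.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.count().print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}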
I am looking for that build too.
-Conor
On Mon, Apr 20, 2015 at 9:18 AM, Marius Soutier wrote:
> Same problem here...
>
> > On 20.04.2015, at 09:59, Zsolt Tóth wrote:
> >
> > Hi all,
> >
> > it looks like the 1.2.2 pre-built version for hadoop2.4 is not a
We set the spark.cleaner.ttl to some reasonable time and also
set spark.streaming.unpersist=true.
Those together cleaned up the shuffle files for us.
-Conor
On Tue, Apr 21, 2015 at 11:27 AM, González Salgado, Miquel <
miquel.gonza...@tecsidel.es> wrote:
> thank you Luis,
>
> I have tried without the window oper
Hi,
We set the spark.cleaner.ttl to some reasonable time and also
set spark.streaming.unpersist=true.
Those together cleaned up the shuffle files for us.
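(A minimal sketch of setting those two properties on the SparkConf; the TTL value is only an example.)

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ShuffleCleanupConfSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("shuffle-cleanup-sketch")
      // Periodic cleanup of old metadata and shuffle data (Spark 1.x setting, in seconds).
      .set("spark.cleaner.ttl", "3600")
      // Let Spark Streaming unpersist generated RDDs once they are no longer needed.
      .set("spark.streaming.unpersist", "true")

    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.socketTextStream("localhost", 9999).count().print()
    ssc.start()
    ssc.awaitTermination()
  }
}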
-Conor
On Tue, Apr 21, 2015 at 8:18 AM, N B wrote:
> We already do have a cron job in place to clean just the shuffle files.
> H
The memory leak could be related to this
<https://issues.apache.org/jira/browse/SPARK-5967> defect, which was
resolved in Spark 1.2.2 and 1.3.0.
That issue was also caused by a HashMap.
-Conor
On Wed, Apr 29, 2015 at 12:01 PM, Sean Owen wrote:
> Please use user@, not dev@
>
>