Unable To Find Proto Buffer Class Error With RDD

2016-05-03 Thread kyle
Hi,

I ran into an issue when using protocol buffers in a Spark RDD.
I googled this and it seems to be a known compatibility issue.
Has anyone run into the same issue before and found a solution?


A detailed description can be found at this link:
https://qnalist.com/questions/5156782/unable-to-find-proto-buffer-class-error-with-rdd-protobuf


Thanks,
Kyle
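
For readers who hit the same error: it is typically reported when protobuf-generated classes go through Spark's default Java serialization. A hedged sketch of the commonly suggested workaround, switching to Kryo and registering the generated classes with the chill-protobuf serializer, is below. MyMessage is a hypothetical generated class, and this may not cover every variant of the compatibility issue.

```scala
import com.esotericsoftware.kryo.Kryo
import com.twitter.chill.protobuf.ProtobufSerializer
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical protobuf-generated class; substitute your own message type.
// import com.example.protos.MyMessage

class ProtoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    // Register each protobuf-generated class with chill-protobuf's serializer.
    // kryo.register(classOf[MyMessage], new ProtobufSerializer())
  }
}

object ProtoApp {
  // Switch Spark to Kryo and point it at the registrator above.
  val conf = new SparkConf()
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .set("spark.kryo.registrator", classOf[ProtoRegistrator].getName)
}
```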


Re: [VOTE] Release Apache Spark 2.0.1 (RC4)

2016-09-29 Thread Kyle Kelley
+1

On Thu, Sep 29, 2016 at 4:27 PM, Yin Huai  wrote:

> +1
>
> On Thu, Sep 29, 2016 at 4:07 PM, Luciano Resende 
> wrote:
>
>> +1 (non-binding)
>>
>> On Wed, Sep 28, 2016 at 7:14 PM, Reynold Xin  wrote:
>>
>>> Please vote on releasing the following candidate as Apache Spark version
>>> 2.0.1. The vote is open until Sat, Oct 1, 2016 at 20:00 PDT and passes if a
>>> majority of at least 3 +1 PMC votes are cast.
>>>
>>> [ ] +1 Release this package as Apache Spark 2.0.1
>>> [ ] -1 Do not release this package because ...
>>>
>>>
>>> The tag to be voted on is v2.0.1-rc4 (933d2c1ea4e5f5c4ec8d375b5ccaa4577ba4be38)
>>>
>>> This release candidate resolves 301 issues:
>>> https://s.apache.org/spark-2.0.1-jira
>>>
>>> The release files, including signatures, digests, etc. can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-2.0.1-rc4-bin/
>>>
>>> Release artifacts are signed with the following key:
>>> https://people.apache.org/keys/committer/pwendell.asc
>>>
>>> The staging repository for this release can be found at:
>>> https://repository.apache.org/content/repositories/orgapachespark-1203/
>>>
>>> The documentation corresponding to this release can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-2.0.1-rc4-docs/
>>>
>>>
>>> Q: How can I help test this release?
>>> A: If you are a Spark user, you can help us test this release by taking
>>> an existing Spark workload and running on this release candidate, then
>>> reporting any regressions from 2.0.0.
>>>
>>> Q: What justifies a -1 vote for this release?
>>> A: This is a maintenance release in the 2.0.x series.  Bugs already
>>> present in 2.0.0, missing features, or bugs related to new features will
>>> not necessarily block this release.
>>>
>>> Q: What fix version should I use for patches merging into branch-2.0
>>> from now on?
>>> A: Please mark the fix version as 2.0.2, rather than 2.0.1. If a new RC
>>> (i.e. RC5) is cut, I will change the fix version of those patches to 2.0.1.
>>>
>>>
>>>
>>
>>
>> --
>> Luciano Resende
>> http://twitter.com/lresende1975
>> http://lresende.blogspot.com/
>>
>
>


-- 
Kyle Kelley (@rgbkrk <https://twitter.com/rgbkrk>; lambdaops.com)
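
As a practical footnote to the "How can I help test this release?" answer above: one hedged way to run an existing workload against the candidate is to compile it against the staging repository listed in the vote email (the binary convenience packages are under the -bin URL above). A minimal sbt sketch, assuming a standard sbt project:

```scala
// build.sbt: resolve the 2.0.1 RC artifacts from the staging repository,
// rebuild an existing job against them, run it, and compare with 2.0.0.
resolvers += "spark-2.0.1-rc4-staging" at
  "https://repository.apache.org/content/repositories/orgapachespark-1203/"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.1" % "provided",
  "org.apache.spark" %% "spark-sql"  % "2.0.1" % "provided"
)
```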


Re: unsubscribe

2016-12-27 Thread Kyle Kelley
You are now in position 238 for unsubscription. If you wish for your
unsubscription to occur immediately, please email
dev-unsubscr...@spark.apache.org

Best wishes.

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: Unsubscribe

2017-01-05 Thread Kyle Kelley
You are now in position 368 for unsubscription. If you wish for your
unsubscription to occur immediately, please email
dev-unsubscr...@spark.apache.org


Re: Implementing TinkerPop on top of GraphX

2014-11-06 Thread Kyle Ellrott
I've taken a crack at implementing the TinkerPop Blueprints API in GraphX (
https://github.com/kellrott/sparkgraph ). I've also implemented portions of
the Gremlin traversal language and a Parquet-based graph store.
I've been working on finalizing some code details and putting together
better code examples and documentation before telling people
about it.
But if you want to start looking at the code, I can answer any questions
you have. And if you would like to contribute, I would really appreciate
the help.

Kyle
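
For readers who want a feel for the approach before digging into the repository: below is a purely illustrative sketch, not the sparkgraph project's actual code, of what a Blueprints-flavoured wrapper over a GraphX graph can look like. The whole graph lives in RDDs and Blueprints-style accessors are answered with RDD operations; the attribute types are assumptions.

```scala
import org.apache.spark.graphx.{Edge, Graph, VertexId}
import org.apache.spark.rdd.RDD

// Hypothetical property-bag attribute types; the real project's types may differ.
case class VertexProps(props: Map[String, String])
case class EdgeProps(label: String, props: Map[String, String])

// A minimal Blueprints-flavoured wrapper around a GraphX graph.
class SparkGraphWrapper(val graph: Graph[VertexProps, EdgeProps]) {

  // Blueprints-style getVertex(id): a lookup against the vertex RDD.
  def getVertex(id: VertexId): Option[VertexProps] =
    graph.vertices.lookup(id).headOption

  // Blueprints-style "edges with a given label": a filter over the edge RDD.
  def edgesByLabel(label: String): RDD[Edge[EdgeProps]] =
    graph.edges.filter(_.attr.label == label)
}
```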


On Thu, Nov 6, 2014 at 11:42 AM, Reynold Xin  wrote:

> cc Matthias
>
> In the past we talked with Matthias and there were some discussions about
> this.
>
> On Thu, Nov 6, 2014 at 11:34 AM, York, Brennon <
> brennon.y...@capitalone.com>
> wrote:
>
> > All, was wondering if there had been any discussion around this topic yet?
> > TinkerPop <https://github.com/tinkerpop> is a great abstraction for graph
> > databases, has been implemented across various graph database backends, and
> > is gaining traction. Has anyone thought about integrating the TinkerPop
> > framework with GraphX to enable GraphX as another backend? Not sure if
> > this has been brought up or not, but would certainly volunteer to
> > spearhead this effort if the community thinks it to be a good idea!
> >
> > As an aside, wasn't sure if this discussion should happen on the board
> > here or on JIRA, but I made a ticket as well for reference:
> > https://issues.apache.org/jira/browse/SPARK-4279
>


Re: Implementing TinkerPop on top of GraphX

2014-11-06 Thread Kyle Ellrott
I still have to dig into the Tinkerpop3 internals (I started my work long
before it had been released), but I can say that getting the Tinkerpop2
Gremlin pipeline to work on GraphX was a bit of a hack. The
whole Tinkerpop2 Gremlin design was based around streaming pipes of
data, rather than large distributed map-reduce operations. I had to hack
the pipes to aggregate all of the data and pass a single object wrapping
the GraphX RDDs down the pipes in a single go, rather than streaming it
element by element.
Just based on their description, Tinkerpop3 may be more amenable to the
Spark platform.

Kyle
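
To make the "single object wrapping the GraphX RDDs" hack a bit more concrete, here is an illustrative sketch (the names are invented, this is not the sparkgraph code): each pipe becomes a whole-RDD transformation over a bundle object rather than a per-element streaming step.

```scala
import org.apache.spark.graphx.Graph
import org.apache.spark.rdd.RDD

// One object carries the distributed graph plus the current traversal frontier;
// it is handed down the pipeline in a single go.
case class GraphBundle[VD, ED](graph: Graph[VD, ED], selected: RDD[Long])

// A Gremlin-ish "out" pipe expressed as RDD joins rather than element streaming.
def outPipe[VD, ED](b: GraphBundle[VD, ED]): GraphBundle[VD, ED] = {
  val bySrc = b.graph.edges.map(e => (e.srcId, e.dstId))
  val next = b.selected.map(id => (id, ()))
    .join(bySrc)        // keep edges whose source vertex is in the frontier
    .map(_._2._2)       // step to the destination vertices
    .distinct()
  b.copy(selected = next)
}
```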


On Thu, Nov 6, 2014 at 11:55 AM, Kushal Datta 
wrote:

> What do you guys think about the Tinkerpop3 Gremlin interface?
> It has MapReduce to run Gremlin operators in a distributed manner and
> Giraph to execute vertex programs.
>
> Tinkerpop3 is better suited for GraphX.
>


Re: Implementing TinkerPop on top of GraphX

2014-11-06 Thread Kyle Ellrott
I think I've already done most of the work for the OLTP objects (Graph,
Element, Vertex, Edge, Properties) when implementing Tinkerpop2. Singleton
write operations, like addVertex/deleteEdge, were cached locally until a
read operation was requested; then the set of build operations were
parallelized into an RDD and merged with the existing graph.
It's not efficient for large numbers of operations, but it passes unit tests
and works for small graph tweaking.

The OLAP stuff looks completely new, but considering they have a Giraph
implementation, it should be pretty straightforward.

Kyle
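
An illustrative sketch of the "cache single writes locally, then parallelize and merge on the next read" approach described above; the names and types are assumptions, not the sparkgraph implementation (deletes would be buffered and applied analogously).

```scala
import scala.collection.mutable.ArrayBuffer
import scala.reflect.ClassTag

import org.apache.spark.SparkContext
import org.apache.spark.graphx.{Edge, Graph, VertexId}

class BufferedGraph[VD: ClassTag, ED: ClassTag](
    sc: SparkContext, private var graph: Graph[VD, ED]) {

  // Single-element writes only touch local buffers.
  private val pendingVertices = ArrayBuffer[(VertexId, VD)]()
  private val pendingEdges = ArrayBuffer[Edge[ED]]()

  def addVertex(id: VertexId, attr: VD): Unit = pendingVertices += ((id, attr))
  def addEdge(edge: Edge[ED]): Unit = pendingEdges += edge

  // Any read first folds the buffered writes into the distributed graph.
  def snapshot(): Graph[VD, ED] = synchronized {
    if (pendingVertices.nonEmpty || pendingEdges.nonEmpty) {
      graph = Graph(
        graph.vertices.union(sc.parallelize(pendingVertices.toList)),
        graph.edges.union(sc.parallelize(pendingEdges.toList)))
      pendingVertices.clear()
      pendingEdges.clear()
    }
    graph
  }
}
```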


On Thu, Nov 6, 2014 at 3:25 PM, York, Brennon 
wrote:

> This was my thought exactly with the TinkerPop3 release. Looks like, to
> move this forward, we’d need to implement gremlin-core per <
> http://www.tinkerpop.com/docs/3.0.0.M1/#_implementing_gremlin_core>. The
> real question lies in whether GraphX can only support the OLTP
> functionality, or if we can bake into it the OLAP requirements as well. At
> a first glance I believe we could create an entire OLAP system. If so, I
> believe we could do this in a set of parallel subtasks, those being the
> implementation of each of the individual APIs (Structure, Process, and, if
> OLAP, GraphComputer) necessary for gremlin-core. Thoughts?
>

Re: Implementing TinkerPop on top of GraphX

2014-11-06 Thread Kyle Ellrott
I think it's best to look to an existing standard rather than try to make your
own. Of course, small additions would need to be added to make it valuable
for the Spark community, like a method similar to Gremlin's 'table'
function that produces an RDD instead.
But there may be a lot of extra code and data structures that would need to
be added to make it work, and those may not be directly applicable to all
GraphX users. I think it would be best run as a separate module/project
that builds directly on top of GraphX.

Kyle
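
A hedged sketch of the kind of small Spark-specific addition mentioned above, a Gremlin "table"-like step that emits an RDD of rows instead of an in-memory table (the signature and column-extractor design are invented for illustration):

```scala
import org.apache.spark.graphx.Graph
import org.apache.spark.rdd.RDD

// Project selected vertex attributes into an RDD of named columns.
def table[VD, ED](graph: Graph[VD, ED])
                 (columns: (String, VD => Any)*): RDD[Map[String, Any]] =
  graph.vertices.map { case (_, attr) =>
    columns.map { case (name, extract) => name -> extract(attr) }.toMap
  }

// Usage idea, assuming a Person(name: String, age: Int) vertex attribute:
//   table(graph)("name" -> ((p: Person) => p.name), "age" -> ((p: Person) => p.age))
```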



On Thu, Nov 6, 2014 at 4:39 PM, York, Brennon 
wrote:

> My personal 2c is that, since GraphX is just beginning to provide a full
> featured graph API, I think it would be better to align with the TinkerPop
> group rather than roll our own. In my mind the benefits outweigh the
> detriments as follows:
>
> Benefits:
> * GraphX gains the ability to become another core tenant within the
> TinkerPop community allowing a more diverse group of users into the Spark
> ecosystem.
> * TinkerPop can continue to maintain and own a solid / feature-rich graph
> API that has already been accepted by a wide audience, relieving the
> pressure of “one off” API additions from the GraphX team.
> * GraphX can demonstrate its ability to be a key player in the GraphDB
> space sitting inline with other major distributions (Neo4j, Titan, etc.).
> * Allows for the abstract graph traversal logic (query API) to be owned
> and maintained by a group already proven on the topic.
>
> Drawbacks:
> * GraphX doesn’t own the API for its graph query capability. This could be
> seen as good or bad, but it might make GraphX-specific implementation
> additions more tricky (possibly). Also, GraphX will need to maintain the
> features described within the TinkerPop API as that might change in the
> future.
>
> From: Kushal Datta 
> Date: Thursday, November 6, 2014 at 4:00 PM
> To: "York, Brennon" 
> Cc: Kyle Ellrott , Reynold Xin ,
> "dev@spark.apache.org" , Matthias Broecheler <
> matth...@thinkaurelius.com>
>
> Subject: Re: Implementing TinkerPop on top of GraphX
>
> Before we dive into the implementation details, what are the high level
> thoughts on Gremlin/GraphX? Scala already provides the procedural way to
> query graphs in GraphX today. So, today I can run
> g.vertices().filter().join() queries as OLAP in GraphX just like Tinkerpop3
> Gremlin, of course sans the useful operators that Gremlin offers such as
> outE, inE, loop, as, dedup, etc. In that case is mapping Gremlin operators
> to GraphX APIs a better approach or should we extend the existing set of
> transformations/actions that GraphX already offers with the useful
> operators from Gremlin? For example, we could add as(), loop() and dedup()
> methods in VertexRDD and EdgeRDD.
>
> Either way we get a desperately needed graph query interface in GraphX.
>
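
The second option Kushal raises above, extending what GraphX already offers with Gremlin-style operators, could be prototyped without modifying GraphX at all, for example through an implicit class. This is a hypothetical sketch, not a concrete proposal from the thread:

```scala
import scala.reflect.ClassTag

import org.apache.spark.graphx.{VertexId, VertexRDD}
import org.apache.spark.rdd.RDD

object GremlinOps {
  // Bolts a Gremlin-flavoured dedup() step onto VertexRDD from the outside.
  implicit class GremlinVertexOps[VD: ClassTag](vertices: VertexRDD[VD]) {
    // Keep one vertex per distinct attribute value (relies on VD's equality).
    def dedupByAttr(): RDD[(VertexId, VD)] =
      vertices.map { case (id, attr) => (attr, (id, attr)) }
        .reduceByKey((first, _) => first)
        .values
  }
}
```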

Re: Implementing TinkerPop on top of GraphX

2014-11-07 Thread Kyle Ellrott
Who here would be interested in helping to work on an implementation of the
Tinkerpop3 Gremlin API for Spark? Is this something that should continue in
the Spark discussion group, or should it migrate to the Gremlin message
group?

Reynold is right that there will be inherent mismatches in the APIs, and
there will need to be some discussions with the GraphX group about the best
way to go. One example would be edge ids. GraphX has vertex ids, but no
explicit edge ids, while Gremlin has both. Edge ids could be put into the
attr field, but then that means the user would have to explicitly subclass
their edge attribute from an edge-attribute interface. Is that worth doing,
versus adding an id to everyone's edges?

Kyle
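
To make the trade-off above concrete, the "id in the attr field" option would look roughly like this; the sketch is hypothetical and Relationship is an invented example attribute.

```scala
import org.apache.spark.graphx.Edge

// The interface every edge attribute would have to implement to carry an id.
trait HasEdgeId {
  def edgeId: Long
}

// A user's edge attribute, explicitly extended to expose an id.
case class Relationship(edgeId: Long, label: String, weight: Double)
  extends HasEdgeId

// The Gremlin layer could then require ED <: HasEdgeId to recover edge ids.
def idOf[ED <: HasEdgeId](edge: Edge[ED]): Long = edge.attr.edgeId
```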


On Thu, Nov 6, 2014 at 7:24 PM, Reynold Xin  wrote:

> Some form of graph querying support would be great to have. This can be a
> great community project hosted outside of Spark initially, both due to the
> maturity of the component itself as well as the maturity of query language
> standards (there isn't really a dominant standard for graph ql).
>
> One thing is that GraphX API will need to evolve and probably need to
> provide more primitives in order to support the new ql implementation.
> There might also be inherent mismatches in the way the external API is
> defined vs what GraphX can support. We should discuss those on a
> case-by-case basis.
>

Re: Implementing TinkerPop on top of GraphX

2014-11-18 Thread Kyle Ellrott
The new Tinkerpop3 API was different enough from V2 that it was worth
starting a new implementation rather than trying to completely refactor my
old code.
I've started a new project: https://github.com/kellrott/spark-gremlin which
compiles and runs the first set of unit tests (which it completely fails).
Most of the classes are structured in the same way they are in the Giraph
implementation. There isn't much actual GraphX code in the project yet,
just a framework to start working in.
Hopefully this will keep the conversation going.

Kyle

On Fri, Nov 7, 2014 at 11:17 AM, Kushal Datta 
wrote:

> I think if we are going to use GraphX as the query engine in Tinkerpop3,
> then the Tinkerpop3 community is the right platform to further the
> discussion.
>
> The reason I asked the question about improving APIs in GraphX is that it is not
> only Gremlin; any graph DSL can exploit the GraphX APIs. Cypher has some
> good subgraph-matching query interfaces which I believe can be distributed
> using the GraphX APIs.
>
> An edge ID is an internal attribute of the edge generated automatically,
> mostly hidden from the user. That's why adding it as an edge property might
> not be a good idea. There are several little differences like this. E.g. in
> the Tinkerpop3 Gremlin implementation for Giraph, only vertex programs are
> executed in Giraph directly. The side-effect operators are mapped to
> Map-Reduce functions. In the implementation we are talking about, all of
> these operations can be done within GraphX. I will be interested to
> co-develop the query engine.
>
> @Reynold, I agree. And as I said earlier, the APIs should be designed in
> such a way that they can be used in any graph DSL.
>

Re: Implementing TinkerPop on top of GraphX

2015-01-16 Thread Kyle Ellrott
Looking at https://github.com/kdatta/tinkerpop3/compare/graphx-gremlin I
only see a maven build file. Do you have some source code some place else?

I've worked on a Spark-based implementation (
https://github.com/kellrott/spark-gremlin ), but it's not done and I've been
tied up on other projects.
It also looks like Tinkerpop3 is a bit of a moving target. I had targeted the
work done for gremlin-giraph (
http://www.tinkerpop.com/docs/3.0.0.M5/#giraph-gremlin ) that was part of
the M5 release, as a base model for implementation. But that appears to
have been refactored into gremlin-hadoop (
http://www.tinkerpop.com/docs/3.0.0.M6/#hadoop-gremlin ) in the M6 release.
I need to assess how much this changes the code.

Most of the code that needs to change from Giraph to Spark will simply be
replacing classes with Spark-derived ones. The main place where the
logic will need to change is in the 'GraphComputer' class (
https://github.com/tinkerpop/tinkerpop3/blob/master/hadoop-gremlin/src/main/java/com/tinkerpop/gremlin/hadoop/process/computer/giraph/GiraphGraphComputer.java
) which is created by the Graph when the 'compute' method is called (
https://github.com/tinkerpop/tinkerpop3/blob/master/hadoop-gremlin/src/main/java/com/tinkerpop/gremlin/hadoop/structure/HadoopGraph.java#L135
).


Kyle
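
A very rough sketch of where that Spark-specific logic could live. The class and method names below are hypothetical and deliberately do not claim to match the actual TinkerPop3 GraphComputer interface; the point is only that the submit/compute step would hand the graph to GraphX's Pregel loop instead of launching a Giraph job.

```scala
import scala.reflect.ClassTag

import org.apache.spark.graphx.{EdgeTriplet, Graph, VertexId}

class SparkGraphComputerSketch[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]) {

  // Stand-in for the submit/compute entry point: run a vertex program via Pregel.
  def runVertexProgram[A: ClassTag](
      initialMsg: A,
      vprog: (VertexId, VD, A) => VD,
      sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId, A)],
      mergeMsg: (A, A) => A): Graph[VD, ED] =
    graph.pregel(initialMsg)(vprog, sendMsg, mergeMsg)
}
```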



On Fri, Jan 16, 2015 at 1:01 PM, Kushal Datta 
wrote:

> Hi David,
>
>
> Yes, we are still headed in that direction.
> Please take a look at the repo I sent earlier.
> I think that's a good starting point.
>
> Thanks,
> -Kushal.
>
> On Thu, Jan 15, 2015 at 8:31 AM, David Robinson 
> wrote:
>
> > I am new to Spark and GraphX; however, I use Tinkerpop-backed graphs and
> > think the idea of using Tinkerpop as the API for GraphX is a great idea and
> > hope you are still headed in that direction. I noticed that Tinkerpop 3 is
> > moving into the Apache family:
> > http://wiki.apache.org/incubator/TinkerPopProposal which might alleviate
> > concerns about having an API definition "outside" of Spark.
> >
> > Thanks,
>


[mllib] State of Multi-Model training

2014-09-16 Thread Kyle Ellrott
I'm curious about the state of development of multi-model learning in MLlib
(training sets of models during the same training session, rather than one
at a time). The JIRA lists it as in progress, targeting Spark 1.2.0 (
https://issues.apache.org/jira/browse/SPARK-1486 ). But there haven't been
any notes on it in over a month.
I submitted a pull request for a possible method to do this work a little
over two months ago (https://github.com/apache/spark/pull/1292), but
haven't received any feedback on the patch yet.
Is anybody else working on multi-model training?

Kyle


Re: [mllib] State of Multi-Model training

2014-09-16 Thread Kyle Ellrott
I'd be interested in helping to test your code as soon as it's available.
The version I wrote used a paired RDD and combined by key; it worked best
when it used a custom partitioner that put all of a model's samples in the
same partition. Running things in batched matrices would probably speed
things up greatly. You probably won't need my training code, but I did write
some code related to calculating binary classification metrics (
https://github.com/apache/spark/pull/1292/files#diff-6) and AUC (
https://github.com/apache/spark/pull/1292/files#diff-5) for multiple models
that you might be able to use.

Kyle
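
An illustrative sketch of the paired-RDD-plus-custom-partitioner idea described above (the names are invented, not the pull request's code): key every sample by its model id, partition so that one model's samples land in one partition, then combine per key.

```scala
import org.apache.spark.Partitioner
import org.apache.spark.rdd.RDD

// Hypothetical sample type, for illustration only.
case class Sample(label: Double, features: Array[Double])

// Send every sample keyed by the same model id to the same partition.
class ModelPartitioner(numModels: Int) extends Partitioner {
  override def numPartitions: Int = numModels
  override def getPartition(key: Any): Int = key.asInstanceOf[Int] % numModels
}

// Group the training data per model so each model can be fit within one partition.
def groupByModel(samples: RDD[(Int, Sample)],
                 numModels: Int): RDD[(Int, Iterable[Sample])] =
  samples.partitionBy(new ModelPartitioner(numModels)).groupByKey()
```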


On Tue, Sep 16, 2014 at 4:09 PM, Burak Yavuz  wrote:

> Hi Kyle,
>
> I'm actively working on it now. It's pretty close to completion, I'm just
> trying to figure out bottlenecks and optimize as much as possible.
> As Phase 1, I implemented multi model training on Gradient Descent.
> Instead of performing Vector-Vector operations on rows (examples) and
> weights,
> I've batched them into matrices so that we can use Level 3 BLAS to speed
> things up. I've also added support for Sparse Matrices (
> https://github.com/apache/spark/pull/2294) as making use of sparsity will
> allow you to train more models at once.
>
> Best,
> Burak
>
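
Burak's point above about batching rows and weight vectors into matrices so that Level 3 BLAS can be used is easy to illustrate with Breeze. This is only a toy least-squares example, not MLlib's implementation:

```scala
import breeze.linalg.{DenseMatrix, DenseVector}

// Stack the weight vectors of k models into a d x k matrix so one
// matrix-matrix multiply (gemm) scores a batch of n examples against all
// k models at once.
def batchedGradients(
    features: DenseMatrix[Double],  // n x d batch of examples
    labels: DenseVector[Double],    // n labels
    weights: DenseMatrix[Double]    // d x k, one column of weights per model
): DenseMatrix[Double] = {
  val margins = features * weights       // n x k, a single gemm for all models
  val errors = margins.copy
  for (j <- 0 until errors.cols)         // least-squares residual per model
    errors(::, j) :-= labels
  (features.t * errors) * (1.0 / features.rows)  // d x k gradients, a second gemm
}
```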


Re: [mllib] State of Multi-Model training

2014-09-17 Thread Kyle Ellrott
This sounds like a pretty major rewrite of the system. Is it going to live
in a different repo during development? Or will we be able to track
progress in the main Spark repo?

Kyle

On Tue, Sep 16, 2014 at 10:22 PM, Burak Yavuz  wrote:

> Hi Kyle,
>
> Thank you for the code examples. We may be able to use some of the ideas
> there. I think initially the goal is to have the optimizers ready (SGD,
> LBFGS),
> and then the evaluation metrics will come next. It might take some time,
> however, as MLlib is going to have a significant API "face-lift" (e.g.
> https://issues.apache.org/jira/browse/SPARK-3530). Evaluation metrics
> will be significant in the new "pipeline"s and the ability to evaluate
> multiple models
> efficiently is very important. We encourage you to read through the design
> docs, and we would appreciate any feedback from you and the rest of the
> community!
>
> Best,
> Burak
>