Please add me to spark-dev mailing list.

  File "/base/data/home/apps/s~spark-prs/live.412416057856832734/lib/jira/client.py", line 472
  File "/base/data/home/apps/s~spark-prs/live.412416057856832734/lib/jira/client.py", line 2133, in server_info
    j = self._get_json('serverInfo')
  File "/base/data/home/apps/s~spark-prs/live.412416057856832734/lib/jira/client.py", line 2549, in _get_json
    r = self._session.get(url, params=params)
  File "/base/data/home/apps/s~spark-prs/live.412416057856832734/lib/jira/resilientsession.py", line 151
    return self.__verb('GET', url, **kwargs)
  File "/base/data/home/apps/s~spark-prs/live.412416057856832734/lib/jira/resilientsession.py", line 147, in __verb
    raise_on_error(response, verb=verb, **kwargs)
  File "/base/data/home/apps/s~spark-prs/live.412416057856832734/lib/jira/resilientsession.py", line 57, in raise_on_error
    r.status_code, error, r.url, request=request, response=r, **kwargs)
JIRAError: JiraError HTTP 403 url: https://issues.apache.org/jira/rest/api/2/serverInfo
text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp
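For context: the trace above is the `jira` Python package surfacing JIRA's CAPTCHA lockout as an HTTP 403. A minimal sketch of how a caller could detect that condition — the credentials and variable names here are placeholders, not the actual spark-prs code:

```python
# Hedged sketch: detect JIRA's CAPTCHA lockout when using the `jira` package.
# Server/credentials below are placeholders, not the spark-prs configuration.
from jira import JIRA
from jira.exceptions import JIRAError

try:
    client = JIRA(server="https://issues.apache.org/jira",
                  basic_auth=("some-user", "some-password"))  # hypothetical
    client.server_info()  # the same call that raises HTTP 403 in the trace above
except JIRAError as e:
    if e.status_code == 403 and "CAPTCHA_CHALLENGE" in (e.text or ""):
        # JIRA has locked the account behind a CAPTCHA after repeated failed
        # logins; a human must log in once via the browser to clear it.
        print("CAPTCHA lockout; clear it by logging in at "
              "https://issues.apache.org/jira/login.jsp")
    else:
        raise
```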
Can anyone take a look at this one? OPEN-status JIRAs are rapidly
increasing (from around 2400 to 2600).
On Fri, Apr 19, 2019 at 8:05 PM, Hyukjin Kwon wrote:
> Hi all,
>
> Looks like 'spark/dev/github_jira_sync.py' is not running correctly somewhere.
> Usually the JIRA's status should be updated to "IN PROGRESS" when
> somebody opens a PR against a JIRA.
Hi all,
Looks like 'spark/dev/github_jira_sync.py' is not running correctly somewhere.
Usually the JIRA's status should be updated to "IN PROGRESS" when
somebody opens a PR against a JIRA.
Looks like now it only leaves a link and does not change the JIRA's status.
Can someone else take a look?
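For anyone poking at this, a rough sketch of the transition the sync script is expected to perform, using the same `jira` package. The issue key, credentials, and transition name are assumptions for illustration, not the actual github_jira_sync.py code:

```python
# Hedged sketch of the expected behavior: flip a JIRA to "In Progress"
# when a PR is opened, instead of only posting the PR link.
from jira import JIRA

client = JIRA(server="https://issues.apache.org/jira",
              basic_auth=("bot-user", "bot-password"))  # hypothetical
issue = client.issue("SPARK-00000")  # hypothetical issue key

# Look up the workflow transition that leads to the IN PROGRESS status.
for transition in client.transitions(issue):
    if transition["name"].lower() == "start progress":  # name is workflow-dependent
        client.transition_issue(issue, transition["id"])
        break
```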
Definitely the part on the PR. Thanks!
From: shane knapp
Sent: Thursday, March 28, 2019 11:19 AM
To: dev; Stavros Kontopoulos
Subject: [k8s][jenkins] spark dev tool docs now have k8s+minikube instructions!
https://spark.apache.org/developer-tools.html
search for "Testing K8S".
this is pretty much how i build and test PRs locally... the commands there
are lifted straight from the k8s integration test jenkins build, so they
might require a little tweaking to better suit your laptop/server.
k8s is g
Sorry for the spam, used the wrong email address.
On Wed, 22 Mar 2017 at 12:01 Yash Sharma wrote:
> subscribe to spark dev list
>
subscribe to spark dev list
Thanks a lot for the guidelines.
I could successfully configure and debug
On Wed, Aug 24, 2016 at 7:05 PM, Jacek Laskowski wrote:
> On Wed, Aug 24, 2016 at 2:32 PM, Steve Loughran
> wrote:
>
> > no reason; the key thing is : not in cluster mode, as there your work
> happens elsewhere
>
> Right! Anything but cluster mode should make it easy (that leaves us with local).
On Wed, Aug 24, 2016 at 2:32 PM, Steve Loughran wrote:
> no reason; the key thing is : not in cluster mode, as there your work happens
> elsewhere
Right! Anything but cluster mode should make it easy (that leaves us
with local).
Jacek
--
> On 24 Aug 2016, at 11:38, Jacek Laskowski wrote:
>
> On Wed, Aug 24, 2016 at 11:13 AM, Steve Loughran
> wrote:
>
>> I'd recommend
>
> ...which I mostly agree to with some exceptions :)
>
>> -start spark standalone from there
>
> Why spark standalone since the OP asked about "learning how query execution flow occurs in Spark SQL"?
On Wed, Aug 24, 2016 at 11:13 AM, Steve Loughran wrote:
> I'd recommend
...which I mostly agree to with some exceptions :)
> -start spark standalone from there
Why spark standalone since the OP asked about "learning how query
execution flow occurs in Spark SQL"? How about spark-shell in local mode?
On 24 Aug 2016, at 07:10, Nishadi Kirielle <ndime...@gmail.com> wrote:
Hi,
I'm engaged in learning how query execution flow occurs in Spark SQL. In order
to understand the query execution flow, I'm attempting to run an example in
debug mode with IntelliJ IDEA. It would be great if anyone can help me with
debug configurations.
Hi,
I'm engaged in learning how query execution flow occurs in Spark SQL. In
order to understand the query execution flow, I'm attempting to run an
example in debug mode with IntelliJ IDEA. It would be great if anyone can
help me with debug configurations.
Thanks & Regards
Nishadi
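One low-setup way to see the query execution flow before reaching for a debugger is to have Spark print its plans. A minimal PySpark sketch (the query itself is made up for illustration):

```python
# A tiny query whose full execution flow Spark will print:
# parsed -> analyzed -> optimized logical plan -> physical plan.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("plans").getOrCreate()
df = spark.range(100).selectExpr("id", "id % 10 AS bucket").groupBy("bucket").count()
df.explain(True)  # True prints all plan stages, not just the physical plan
spark.stop()
```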
On Tue, Jun 21,
You can read this documentation to get started with the setup
https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-IntelliJ
There was a pyspark setup discussion on SO over here
http://stackoverflow.com/questions/33478218/write-and-run-pyspark-in-intellij-i
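The gist of those links, as a hedged sketch: before an IDE can run PySpark scripts with a plain Python interpreter, Spark's Python sources and the bundled Py4J need to be on `sys.path`. The install path below is a placeholder:

```python
# Hedged sketch of the path setup an IDE run configuration needs for PySpark;
# /path/to/spark is a placeholder for a real Spark install.
import glob
import os
import sys

spark_home = os.environ.setdefault("SPARK_HOME", "/path/to/spark")
sys.path.insert(0, os.path.join(spark_home, "python"))
# Py4J ships inside Spark; the version in the zip name varies by release.
sys.path.insert(0, glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*-src.zip"))[0])

import pyspark  # should now resolve inside the IDE
```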
Hi all,
I am interested in figuring out how pyspark works at core/internal level.
And would like to understand the code flow as well.
For that I need to run a simple example in debug mode so that I can trace
the data flow for pyspark.
Can anyone please guide me on how to set up my development environment?
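A minimal, self-contained job one could run under an IDE debugger for that kind of tracing, as a sketch (nothing here is specific to any one setup):

```python
# Hedged sketch: a tiny local PySpark job to step through in a debugger.
# Breakpoints in driver-side pyspark code (e.g. pyspark/rdd.py) fire in this
# process; the lambda itself runs later in separate Python worker processes.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("trace").getOrCreate()
rdd = spark.sparkContext.parallelize(range(10))
doubled = rdd.map(lambda x: x * 2)  # step into map() to see the pipelined RDD build up
print(doubled.collect())            # collect() is where the job is actually submitted
spark.stop()
```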
>> To: "Sean Owen"
>> Cc: "dev" , "jay vyas" ,
>> "Paolo Platter"
>> , "Nicholas Chammas"
>> , "Will Benton"
>> Sent: Wednesday, January 21, 2015 2:09:35 AM
>> Subject: Re: Standardized Spark dev environment
- Original Message -
> From: "Patrick Wendell"
> To: "Sean Owen"
> Cc: "dev" , "jay vyas" ,
> "Paolo Platter"
> , "Nicholas Chammas" ,
> "Will Benton"
> Sent: Wednesday, January 21, 2015 2:09:35 AM
Sure, can Jenkins use this new image too? If not then it doesn't help with
reproducing Jenkins failures, most of which even Jenkins can't reproduce.
But if it does and it can be used for builds then that does seem like it is
reducing rather than increasing environment configurations which is good.
> If the goal is a reproducible test environment then I think that is what
> Jenkins is. Granted you can only ask it for a test. But presumably you get
> the same result if you start from the same VM image as Jenkins and run the
> same steps.
But the issue is when users can't reproduce Jenkins failures.
> > I suggest looking at the sequenceiq/spark Docker images; they are very
> > active in that field.
> >
> > Paolo
> >
> > Sent from my Windows Phone
> >
> > From: jay vyas <jayunit100.apa...@gmail.com>
> >
> To: Nicholas Chammas <nicholas.cham...@gmail.com>
> Cc: Will Benton <wi...@redhat.com>; Spark dev
> list <dev@spark.apache.org>
> Subject: Re: Standardized Spark dev environment
>
> I can comment on both... hi will and nate :)
>
> 1) Will's Dockerfile solution
4:45
To: Nicholas Chammas <nicholas.cham...@gmail.com>
Cc: Will Benton <wi...@redhat.com>; Spark dev
list <dev@spark.apache.org>
Subject: Re: Standardized Spark dev environment
I can comment on both... hi will and nate :)
1) Will's Dockerfile solution is the mo
> > dependencies for the current Spark master, but it
> > would be trivial to do so:
> >
> > http://chapeau.freevariable.com/2014/08/jvm-test-docker.html
> >
> >
> > best,
> > wb
> >
> >
> > - Original Message -
> > > From:
>
> best,
> wb
>
>
> - Original Message -
> > From: "Nicholas Chammas"
> > To: "Spark dev list"
> > Sent: Tuesday, January 20, 2015 6:13:31 PM
> > Subject: Standardized Spark dev environment
> >
> > What do y'all
- Original Message -
> From: "Nicholas Chammas"
> To: "Spark dev list"
> Sent: Tuesday, January 20, 2015 6:13:31 PM
> Subject: Standardized Spark dev environment
>
> What do y'all think of creating a standardized Spark development
> environment, perhaps encoded as a Vagrantfile, and publishing it under `dev/`?
Palo Alto office on Jan
27th if any folks are interested.
Nate
-Original Message-
From: Sean Owen [mailto:so...@cloudera.com]
Sent: Tuesday, January 20, 2015 5:09 PM
To: Nicholas Chammas
Cc: dev
Subject: Re: Standardized Spark dev environment
My concern would mostly be maintenance. It adds to an already very complex build.
My concern would mostly be maintenance. It adds to an already very complex
build. It only assists developers who are a small audience. What does this
provide, concretely?
On Jan 21, 2015 12:14 AM, "Nicholas Chammas"
wrote:
> What do y'all think of creating a standardized Spark development
> environment, perhaps encoded as a Vagrantfile, and publishing it under `dev/`?
How many profiles (hadoop / hive / scala) would this development environment
support?
Cheers
On Tue, Jan 20, 2015 at 4:13 PM, Nicholas Chammas <
nicholas.cham...@gmail.com> wrote:
> What do y'all think of creating a standardized Spark development
> environment, perhaps encoded as a Vagrantfile, and publishing it under `dev/`?
Great suggestion.
On Jan 20, 2015 7:14 PM, "Nicholas Chammas"
wrote:
> What do y'all think of creating a standardized Spark development
> environment, perhaps encoded as a Vagrantfile, and publishing it under
> `dev/`?
>
> The goal would be to make it easier for new developers to get started with
> all the right configs and tools pre-installed.
What do y'all think of creating a standardized Spark development
environment, perhaps encoded as a Vagrantfile, and publishing it under
`dev/`?
The goal would be to make it easier for new developers to get started with
all the right configs and tools pre-installed.
If we use something like Vagrant
Hi Harikrishna,
A good place to start is taking a look at the wiki page on contributing:
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
-Sandy
On Fri, Dec 19, 2014 at 2:43 PM, Harikrishna Kamepalli <
harikrishna.kamepa...@gmail.com> wrote:
>
> I am interested in contributing to Spark.
I am interested in contributing to Spark.
Hi Saurabh,
A good way to start is to use Spark with your applications, file
issues you find, and maybe provide patches for those or for
existing ones.
Please take a look at Spark's how to contribute page [1] to help you
get started.
Hope this helps.
- Henry
[1] https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
How can I become a Spark contributor?
What's a good path I can follow to go from newbie to active code
contributor for Spark?
Regards
- Saurabh