On Sat, 2016-11-19 at 20:48, Harish Kumar <harish.kuma...@gmail.com> wrote:
> Thank you. I agree on Python, but my question was: did they update the
> pyjulia library for the latest Julia version? We tried with 0.4.3, which
> failed 6 months back, so we reverted to 0.3.4. Or does this library remain
> the same for all Julia versions?
>
> Any suggestion on this?

They are testing against the latest release, i.e. 0.5:
https://github.com/JuliaPy/pyjulia/blob/master/.travis.yml

You should try it and file an issue if it doesn't work.  Six months is a
long time at the current Julia development pace.

>
> On Sat, Nov 19, 2016 at 7:38 PM, Mauro <mauro...@runbox.com> wrote:
>
>> On Sat, 2016-11-19 at 18:36, Harish Kumar <harish.kuma...@gmail.com>
>> wrote:
>> > Will it support Python 3.4? I am calling this from the pyjulia interface
>>
>> https://github.com/JuliaPy/pyjulia says that it is tested against 3.5,
>> but it doesn't say that 3.4 is not supported.  So you should try.
>>
>> > On Nov 19, 2016 4:58 PM, "Mauro" <mauro...@runbox.com> wrote:
>> >
>> >> Julia 0.3.12, that's a stone-age version of Julia.  You should move to
>> >> 0.5!
>> >>
>> >> On Sat, 2016-11-19 at 16:42, Harish Kumar <harish.kuma...@gmail.com>
>> >> wrote:
>> >> > I am using Version 0.3.12, calling from Python (pyjulia). I do an LME
>> >> > fit with 2.8 M rows and 60-70 variables. It is taking 2 hours just to
>> >> > model (+ data transfer time). Any tips?
>> >> >       using MixedModels
>> >> >       modelREML = lmm({formula}, dataset)
>> >> >       reml!(modelREML, true)
>> >> >       lmeModel = fit(modelREML)
>> >> >       fixedDF = DataFrame(fixedEffVar = coeftable(lmeModel).rownms,
>> >> >                           estimate = coeftable(lmeModel).mat[:,1],
>> >> >                           stdError = coeftable(lmeModel).mat[:,2],
>> >> >                           zVal = coeftable(lmeModel).mat[:,3])
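
One thing that usually helps with the data-transfer part: instead of pushing
2.8 M rows through pyjulia, load the table on the Julia side and only send the
small coefficient table back to Python.  A rough, untested sketch using the
same MixedModels calls as in the snippet above (the file path, formula and
column names are placeholders, and newer MixedModels releases may spell the
API differently):

    using DataFrames, MixedModels

    # Load the data inside Julia instead of transferring it from Python.
    dataset = readtable("data.csv")                    # placeholder path

    # Same calls as in the quoted snippet; the formula is a placeholder.
    modelREML = lmm(y ~ 1 + x1 + (1 | subj), dataset)
    reml!(modelREML, true)
    lmeModel = fit(modelREML)

    # Only this small table needs to cross the Python/Julia boundary.
    ct = coeftable(lmeModel)
    fixedDF = DataFrame(fixedEffVar = ct.rownms,
                        estimate    = ct.mat[:, 1],
                        stdError    = ct.mat[:, 2],
                        zVal        = ct.mat[:, 3])
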
>> >> >
>> >> > On Tuesday, February 23, 2016 at 9:16:47 AM UTC-6, Stefan Karpinski
>> >> > wrote:
>> >> >>
>> >> >> I'm glad that particular slow case got faster! If you want to submit
>> >> >> some reduced version of it as a performance test, we could still
>> >> >> include it in our perf suite. And of course, if you find that anything
>> >> >> else has ever slowed down, please don't hesitate to file an issue.
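
If you do end up submitting a reduced case, a self-contained script along
these lines is usually enough.  This is only a sketch, untested, and it
assumes BenchmarkTools.jl (as far as I know the regression-tracking
infrastructure mentioned further down builds on it); the loop is just a
stand-in for whatever code actually regressed:

    using BenchmarkTools

    function slow_case(n)
        s = 0.0
        for i in 1:n
            s += sin(i) / i     # stand-in workload
        end
        return s
    end

    # In the REPL this prints timing and allocation statistics.
    @benchmark slow_case(10^5)
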
>> >> >>
>> >> >> On Tue, Feb 23, 2016 at 9:55 AM, Jonathan Goldfarb <jgol...@gmail.com>
>> >> >> wrote:
>> >> >>
>> >> >>> Yes, understood about the difficulty of keeping track of regressions.
>> >> >>> I was originally going to send a message reporting up to 2x longer
>> >> >>> test times for the same code on Travis, but it appears that something
>> >> >>> has changed in the nightly build available to CI that now gives
>> >> >>> significantly faster builds, even though the previous poor
>> >> >>> performance had been dependable... Evidently that build is not as
>> >> >>> up-to-date as I thought. Our code is currently not open source, but
>> >> >>> it should be soon, after which I can share an example.
>> >> >>>
>> >> >>> Thanks for your comments, and thanks again for your work on Julia.
>> >> >>>
>> >> >>> -Max
>> >> >>>
>> >> >>>
>> >> >>> On Monday, February 22, 2016 at 11:12:58 AM UTC-5, Stefan Karpinski
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> Yes, ideally code should not get slower with new releases –
>> >> >>>> unfortunately, keeping track of performance regressions can be a bit
>> >> >>>> of a game of whack-a-mole. Having examples of code whose speed has
>> >> >>>> regressed is very helpful. Thanks to Jarrett Revels' excellent work,
>> >> >>>> we now have some great performance regression tracking
>> >> >>>> infrastructure, but of course we always need more things to test!
>> >> >>>>
>> >> >>>> On Mon, Feb 22, 2016 at 9:58 AM, Milan Bouchet-Valat <nali...@club.fr>
>> >> >>>> wrote:
>> >> >>>>
>> >> >>>>> On Monday, February 22, 2016 at 06:27 -0800, Jonathan Goldfarb
>> >> >>>>> wrote:
>> >> >>>>> > I've really been enjoying writing Julia code as a user, and
>> >> >>>>> > following the language as it develops, but I have noticed that
>> >> >>>>> > over time, previously fast code sometimes gets slower, and
>> >> >>>>> > (impressively) previously slow code will sometimes get faster,
>> >> >>>>> > with updates to the Julia codebase.
>> >> >>>>> Code is not supposed to get slower with newer releases. If this
>> >> >>>>> happens, please report the problem here or on GitHub (if possible
>> >> >>>>> with a reproducible example). This will be very helpful for
>> >> >>>>> avoiding regressions.
>> >> >>>>>
>> >> >>>>> > No complaint here in general; I really appreciate the work all of
>> >> >>>>> > the core and package developers do, and variation in performance
>> >> >>>>> > of different code is to be expected.
>> >> >>>>> > My question is this: has anyone in the Julia community thought
>> >> >>>>> > about updated performance tips for writing high-performance code?
>> >> >>>>> > Obviously, using the profiler, along with many of the tips at
>> >> >>>>> > https://github.com/JuliaLang/julia/commits/master/doc/manual/performance-tips.rst,
>> >> >>>>> > still apply, but I am wondering more about general/structural
>> >> >>>>> > ideas to keep in mind in Julia v0.4, as well as guidance on how
>> >> >>>>> > best to take advantage of recent changes on master. I know that
>> >> >>>>> > document hasn't been stagnant in any sense, but relatively "big
>> >> >>>>> > picture" guidance may have shifted. In any case, I'd be happy to
>> >> >>>>> > help make some updates in a PR if there's anything we come up
>> >> >>>>> > with.
>> >> >>>>> I've just skimmed through this page, and I don't think any of the
>> >> >>>>> advice given there is outdated. What's new in master is that
>> >> >>>>> anonymous functions (and therefore map) are now fast, but that
>> >> >>>>> wasn't previously mentioned in the tips as a performance issue
>> >> >>>>> anyway.
>> >> >>>>>
>> >> >>>>> The only small sentence which should likely be removed is "for
>> >> >>>>> example, currently it’s not possible to infer the return type of an
>> >> >>>>> anonymous function". Type inference seems to work fine now on
>> >> >>>>> master with anonymous functions. I'll let others confirm this.
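
For what it's worth, a quick (untested) way to check that inference claim on
a recent 0.5/master build, using only Base tools:

    # Anonymous functions: on 0.5/master their return type should be inferred.
    f = x -> 2x + 1
    @code_warntype f(1)            # return type should show as Int64, not Any

    # map with an anonymous function is no longer a performance trap.
    v = map(x -> x^2, rand(10^4))
    typeof(v)                      # Array{Float64,1}: element type was inferred
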
>> >> >>>>>
>> >> >>>>> Anyway, do you have any specific points in mind?
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> Regards
>> >> >>>>>
>> >> >>>>
>> >> >>>>
>> >> >>
>> >>
>>
