> Just to correct a misconception earlier in the thread, the issue with
> Skylake has nothing to do with Flint. That is an issue in MPIR which is a
> community supported project. No one is currently paid to work on MPIR.
>
Sorry about that; got my projects crossed.
Best,
Travis
We are faster for sparse multivariate multiplication on multiple cores too.
We just haven't blogged about it yet. :-)
But you are right, Giac is years ahead at this point. We do not envision
adding multivariate factorisation within the scope of the OpenDreamKit
funded project that is allowing us [...]
Bill Hart's blog is, as I expected, thorough and informative.
It does not make for an entirely fair comparison to show
timings for systems that restrict the exponents of polynomials
to different lengths. That is, there are problems that
can be done very simply in a system with 64-bit exponents
but [...]
> Also, giac functionality is already in Sage; flint would need a new release
> and a Sage upgrade.
>
> Something that will happen once Bill Hart gets a patch into Flint so it
> will build on Skylake.
IMO, we should have all implementations of multivariate polynomials
available through Sage. Hope[...]
Sure giac would be good as well!
Also, giac functionality is already in Sage; flint would need a new release
and a Sage upgrade.
On Tuesday, September 5, 2017 at 7:54:10 AM UTC+2, parisse wrote:
>
> And why not giac? flint is a little faster for basic multivariate
> polynomial arithmetic on 1 thread, but giac is multithreaded and has [...]
And why not giac? flint is a little faster for basic multivariate
polynomial arithmetic on 1 thread, but giac is multithreaded and has more
advanced fast functionality such as gcd, factorization, Groebner bases, and
rational univariate representation.
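(As a side note for anyone who wants to poke at that functionality from Sage
right now: the lines below are only an illustrative sketch using the optional
pexpect giac interface, assuming a giac binary is installed; the polynomials
are made-up examples, not anything benchmarked in this thread.)

sage: # illustrative only: pass expressions to giac through Sage's pexpect interface
sage: giac('gcd((x+y+1)^2*(x-y), (x+y+1)*(x+2*y))')
sage: giac('factor((x+y+1)^3*(x-y))')
sage: giac('gbasis([x^2+y^2-1, x*y-1], [x,y])')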
Anyway, we should definitely get some multivariate polynomial arithmetic
over Z based on flint and keep Singular for Groebner bases and the other
things it is meant for.
On Monday, September 4, 2017 at 7:13:59 AM UTC+2, parisse wrote:
>
> On Sunday, September 3, 2017 at 4:06:46 PM UTC+2, rjf wrote:
>> [...]
On Sunday, September 3, 2017 at 4:06:46 PM UTC+2, rjf wrote:
>
> I was doing timing on the same task and found that one system
> (used for celestial mechanics) was spectacularly fast on a test just like
> this one.
> One reason was that it first changed f*(f+1) to
>
> f^2 + f
> and was clever in [...]
I was doing timing on the same task and found that one system
(used for celestial mechanics) was spectacularly fast on a test just like
this one.
One reason was that it first changed f*(f+1) to
f^2 + f
and was clever in computing f^2. You should be clever
at this too.
Anyway, be careful when you [...]
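(To make rjf's rewrite concrete, here is a sketch of the two forms in a Sage
session; since f*(f+1) = f^2 + f identically, the results agree, and whether
the square-plus-add form actually wins depends entirely on how good the
backend's squaring routine is, so no timings are claimed.)

sage: R.<x,y,z,t> = ZZ[]
sage: f = (1 + x + y + z + t)^30
sage: %time p1 = f*(f + 1)    # general product
sage: %time p2 = f^2 + f      # square plus one addition
sage: p1 == p2
True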
Hey Simon,
> > From Ulrich's timings, it seems like we are still losing quite a lot in
> > converting to/from singular.
>
> Is it really the *conversion*? I wouldn't be surprised if that example
> would take a long time in Singular without a conversion.
It does take some time, but far less than [...]
FYI, this test takes a few seconds with the following giac script (6.2s on
my Mac with 1 thread):
threads:=1; n:=30;
f := symb2poly((1 + x + y + z + t)^n,[x,y,z,t]):;
time(p:=f*(f+1));
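(For completeness, something like the following should run the same
computation from a Sage prompt via the optional pexpect giac interface,
assuming giac is installed; the interface adds its own overhead, so don't
expect the 6.2s figure to be reproduced exactly.)

sage: giac.eval('threads:=1; n:=30;')
sage: giac.eval('f := symb2poly((1 + x + y + z + t)^n,[x,y,z,t]):;')
sage: giac.eval('time(p:=f*(f+1));')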
Hi Travis,
On 2017-09-02, Travis Scrimshaw wrote:
> sage: R.<x,y,z,t> = ZZ[]
> sage: %time f = (1+x+y+z+t)^30
> CPU times: user 232 ms, sys: 0 ns, total: 232 ms
> Wall time: 241 ms
> sage: g = f+1
> sage: %time temp = f * g
> CPU times: user 16min 34s, sys: 8 ms, total: 16min 34s
> Wall time: 16min 34s
>