The documentation should be expanded with more examples. Many of the linear
algebra functions work for arbitrary input types, so if you construct a
matrix with rational or integer inputs, many of the functions will still
work. We don't have much support in base for fancy math on such
matrices.
It's not on purpose. It is just that it hasn't been implemented yet. It
would be great if you could open a pull request with such a method.
You might also want to define a special type for C+λI so that you can
avoid creating a new matrix, but it is probably better to experiment with
such a type
The plan is that the transpose function will return this eventually and
within the 1.0 time frame but it's not done yet. It will probably not be
a PermutedDimsArray though because it wouldn't do the right thing for the
conjugate transpose of a complex matrix.
On Wednesday, October 12, 2016 at 5
There appears to be a problem with rfft.
julia> @code_typed rfft(randn(10))
1-element Array{Any,1}:
:($(Expr(:lambda, Any[:X],
Any[Any[],Any[Any[:X,Array{Float64,1},0]],Any[]], :(begin # fftw.jl, line
639:
return rfft(X::Array{Float64,1},$(Expr(:new, UnitRange{Int64}, 1,
:(((top(getfi
Done in https://github.com/JuliaLang/julia/issues/9772
2015-01-14 10:19 GMT-05:00 Andreas Noack :
> There appears to be a problem with rfft.
>
> julia> @code_typed rfft(randn(10))
>
> 1-element Array{Any,1}:
>
> :($(Expr(:lambda, Any[:X],
> Any[Any[],Any[Any[
The problem is that the shapes of L and U depend on the input matrix and it
need not be square. I'm very tempted to allow Triangular to be rectangular
so actually the triangular matrix would be more a trapezoid. I think it
could work but would require some effort. It would also make it
possible
By the way, the same thing is true for R in the QR.
2015-01-15 9:39 GMT-05:00 Andreas Noack :
> The problem is that the shapes of L and U depend on the input matrix and
> it need not be square. I'm very tempted to allow Triangular to be
> rectangular so actually the triangular m
What do you get from cond(m)?
On Thursday, January 15, 2015, ShaoWei Teo wrote:
> Hi group,
>
> I am using Julia v0.3.4. I would like to find the inverse of matrices,
> lets say the 15X15 matrix:-
> m =
> 2.69E-05-2.25E-05 1.25E-05-2.30E-05 -1.09E-05
> 9.02E-06
Rounding can make a numerically singular matrix regular so that is probably
what we are seeing here. Your initial covariance matrix is actually
singular, but when copied from the mail or read from a file it becomes
non-singular. You can try the following
julia> A = randn(15, 14); B = A*A'; # B is singular
Try `make -C deps distclean-arpack` and then run the tests again. If it
still fails, please provide the output from versioninfo().
2015-01-18 13:49 GMT-05:00 Comer Duncan :
> Today I am building julia on my debian wheezy machine. On my first build
> all went apparently ok except for the failure
>
> "...but after a few months, everyone agreed that it was really annoying
> so we changed it back."
Not everyone. Everyone in the room I'm sitting in was against the reversal.
2015-01-23 18:07 GMT-05:00 Stefan Karpinski :
> This kind of thing is a balancing act. All of these behaviors are real
Use
p = sortperm(values)
values = values[p]
vectors = vectors[:,p]
but the values should already be sorted by the solver if your problem is
Hermitian.
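The reordering above can be sketched end-to-end; a minimal example (the eigen-decomposition results are made up for illustration, standing in for what an eigensolver such as the 0.3-era `eig` would return):

```julia
# Hypothetical eigenvalues/eigenvectors; column i of `vectors` belongs to
# values[i]. sortperm gives the permutation that sorts the eigenvalues,
# and applying it to the columns keeps each vector paired with its value.
values = [3.0, 1.0, 2.0]
vectors = [1.0 0.0 0.0;
           0.0 1.0 0.0;
           0.0 0.0 1.0]
p = sortperm(values)             # -> [2, 3, 1]
values = values[p]               # sorted eigenvalues
vectors = vectors[:, p]          # matching eigenvector columns
```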
2015-01-24 19:55 GMT-05:00 :
> Anything that will sort matrices? Maybe I'm not asking the right question:
> I want to sort some eigenvalues, and
You can use eigs. Usually you only ask for a few of the values; in
theory you could get all of them, but it could take some time to compute
them.
2015-01-26 9:40 GMT-05:00 Andrei Berceanu :
> Is there any Julia function for computing the eigenvalues of a large,
> sparse, hermitian matrix M?
a lot of things besides the eigenvalues.
>>
>> On Monday, January 26, 2015 at 3:43:01 PM UTC+1, Andreas Noack wrote:
>>>
>>> You can use eigs. Usually, you only ask for a few of the values, but in
>>> theory, you could get all of them, but it could take some time to co
ll. What would be very
> useful for me is the ability to get eigenvalues within a certain interval,
> emin to emax. I dont see this in the capabilities of eigs.
>
> //A
>
> On Monday, January 26, 2015 at 4:21:58 PM UTC+1, Andreas Noack wrote:
>>
>> Yes. There is s
.87693
It also works for the vectors.
2015-01-26 11:54 GMT-05:00 Andrei Berceanu :
> The matrix is 1681x1681 with 8240 non-zero entries (i.e. 0.29% non-zero).
> I'm not sure how this relates to your second comment though :)
>
> On Monday, January 26, 2015 at 5:48:37 PM UTC+1, And
I'm not aware of any public Julia repositories containing such code. In
general, I don't think much time series econometrics code has been written
in Julia yet, so there is a good opportunity for you to contribute here.
2015-01-27 0:17 GMT-05:00 Jung Soo Park :
>
> Is there any way I can have the co
This is an issue with the underlying divide and conquer algorithm used by
the LAPACK routine we are calling. It has been reported at Numpy's list as
well.
I think we should have an option for choosing the algorithm in svdfact,
such that you can easily switch to a QR based solver when the DnC solver
This is cheaper: rand(4,4) - 5*I, because 5I is a UniformScaling, a
special type that doesn't store all the elements.
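A small sketch of the difference (assuming a recent Julia where `I` comes from the LinearAlgebra stdlib; in the 0.3 era it was exported from Base):

```julia
using LinearAlgebra

# 5I is a UniformScaling: one scalar standing in for 5 times the identity,
# so A - 5I adjusts the diagonal without materializing a dense identity.
A = rand(4, 4)
B = A - 5*I
# only the diagonal entries of B differ from A
```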
2015-02-01 11:03 GMT-05:00 paul analyst :
> I have something like this:
>
> rand(4,4).-eye(4,4)*5
>
> Question: how can I subtract the value (5) from the diagonal more simply,
> without multiplying matrices?
julia> @time for i = 1:10^6
           A - 5.0*I
       end
elapsed time: 0.053749097 seconds (167 MB allocated, 5.04% gc time in 8
pauses with 0 full sweep)
2015-02-01 11:49 GMT-05:00 paul analyst :
> Thx, nice. Is the fast way in Julia ?
> Paul
>
> W dniu niedziela, 1 lutego 2015 17:18:03 UTC+1 użytko
I was about to write that as well. Arithmetic on BigFloats and BigInts
should be quite a bit faster with the new GC.
The idea of mutable BigInts and BigFloats has been discussed in a couple of places
https://groups.google.com/forum/#!msg/julia-dev/uqp7LziUEfY/3Klx_KFy0mkJ
https://github.com/JuliaLang/julia
This issue has been reported a couple of times over the last couple of days
and there are some explanations in those issues, so please have a look
at them, but in short: yes, it is too old.
2015-02-05 19:23 GMT-05:00 Peter Simon :
> Trying to build Julia 0.4 master on CentOS 6.4 with gcc 4.4.7.
I also think it is important internally in the sparse code, but others can
answer that better than I can. However, I don't think it is necessary to
export those functions.
2015-02-06 10:47 GMT-05:00 Jake Bolewski :
> This method is used a lot in the sparse matrix code, have you looked
> at the
Simon Danisch in
https://groups.google.com/forum/#!topic/julia-users/BYRAeQJuvTw
2015-02-07 14:26 GMT-05:00 Stefan Karpinski :
> There was a thread at some point where someone posted a plot comparing
> lines of code versus performance for a bunch of benchmarks (probably our
> microbenchmarks), wh
It is difficult to conclude anything when we only see part of your problem,
but from what you write, I wouldn't say the noise in the summation s1 + s2
+ s3 + s4 + s5 is the problem. With floating point arithmetic, it is
unavoidable to get small (or larger) errors from addition and subtraction.
I'd rather say that
Try readdlm("g.txt"). In most cases it will be sufficient. For details,
take a look at the documentation for that function.
2015-02-14 6:14 GMT-05:00 :
> Hi
> I'm new to Julia and stuck on the following
>
> I have a text file called "g.txt" with the following saved to it
> 1,2,
You could use `AbstractVecOrMat` and `idx=1` as a default value, but you
could also use `sub` or `slice` to avoid copying the matrix columns since

julia> A = randn(2,2)
2x2 Array{Float64,2}:
 -0.124018   1.3846
 -0.259116  -1.19279

julia> isa(sub(A, :,1), AbstractVector{Float64})
I'm unable to reproduce this on 0.3.5, 0.3.7pre or master. How did you
construct A?
2015-02-18 9:17 GMT-05:00 Tamas Papp :
> Is this a bug, or am I missing something obvious?
>
> julia> versioninfo()
> Julia Version 0.3.5
> Platform Info:
> System: Linux (x86_64-linux-gnu)
> CPU: Intel(R) Cor
Maybe related https://github.com/JuliaLang/julia/issues/8869
2015-02-18 15:41 GMT-05:00 Ivar Nesje :
> Also log will throw a DomainErrror in Julia if you give it a negative
> argument. That is another check and might prevent some optimizations.
>
> On Wednesday, February 18, 2015 at 21:36:12 UTC+1, Ste
gh, so it would certainly be interesting to hear
> what system this is.
>
> On Wed, Feb 18, 2015 at 3:57 PM, Andreas Noack <
> andreasnoackjen...@gmail.com> wrote:
>
>> Maybe related https://github.com/JuliaLang/julia/issues/8869
>>
>> 2015-02-18 15:41 GMT-0
@everywhere srand(seed) would give reproducibility, but it would probably
not be a good idea since the exact same random variates would be generated
on each process. Maybe something like
for p in workers()
    @spawnat p srand(seed + p)
end
However, our RNG gives no guarantees about independence of the
Would that be to get the exact same variates as the serial execution would
create?
2015-02-26 15:04 GMT-05:00 Steve Kay :
> Thanks for the comments. - nice to know it's not my usual programming
> inadequacies. I like the
>
> for p in workers()
> @spawnat p srand(seed + p)
> end
>
> idea. It would
I'd like to have something like this.
2015-02-27 15:02 GMT-05:00 Jutho :
> Or in this particular case, maybe there should be some functionality like
> that in Base, or at least in Base.LinAlg, where it is often necessary to mix
> complex variables and real variables of the same type used to build
> the real type. Maybe something like realtype, or typereal if we want
> to go with the other type... functions.
>
> On Friday, February 27, 2015 at 21:18:34 UTC+1, Andreas Noack wrote:
>>
>> I'd like to have something like this.
>>
>> 2015-02-27 15:02 GMT-05:00 Jutho :
>>
Sent from my iPhone
>
> On February 27, 2015 at 21:27, Andreas Noack
> wrote:
>
> I think it is fine that the type of the argument determines the behavior
> here. Having "type" in the name would be a bit like having
> `fabs(x::Float64)`.
>
>
I don't see an obvious reason for this so please try to post this as an
issue on GLM.jl.
2015-02-27 10:47 GMT-05:00 Andrew Newman :
> Hi Julia-users,
>
> I am trying to run a few simple regressions on simulated data. I had no
> problem with a logit and was able to run it using glm and subsequent
Steve, I don't think that method works. The mapping between the argument to
srand and the internal state of the MT is quite complicated. We are calling
a seed function in the library we are using that maps an integer to a state
vector so srand(1) and srand(2) end up as two quite different streams.
I don't think it is possible right now. We have been discussing more
flexible solutions, but so far nothing has been done.
2015-03-04 9:22 GMT-05:00 Simone Ulzega :
> Is it possible to construct a DArray with unevenly distributed chunks?
> For example, I want to create a distributed array with si
I hope so. It is something we really want to do but I cannot promise when
we'll do it.
2015-03-04 17:01 GMT-05:00 Simone Ulzega :
> Thank you Andreas. Is there any plan to implement something more flexible
> in the near future?
>
>
> On Wednesday, March 4, 2015 at 10:09:
Hi Weijian
This is great functionality. It seems that you are using MAT.jl to read
in the sparse matrices. You could consider using the MatrixMarket
reader in Base, e.g.
A = sparse(Base.SparseMatrix.CHOLMOD.Sparse("matrix.mtx"))
It will also have the benefit of using the Symmetric matrix type
0.4. I will change to this reader when Julia
> v0.4 is released.
>
> Best,
>
> Weijian
>
>
> On Friday, 6 March 2015 20:35:43 UTC, Andreas Noack wrote:
>>
>> Hi Weijian
>>
>> This is a great functionality. It seems that you are using MAT.jl to read
>>
It sometimes helps to run Pkg.update(); Pkg.build("IJulia").
2015-03-08 15:15 GMT-04:00 Ariel Keselman :
> hi,
>
> I updated to current Julia nightly, and it is now causing the IJulia
> kernel to crash. Some investigation lead to unmatched
> convert(::Type{Ptr{Uint8}}, ::ASCIIString) etc. in ZMQ.
It is more helpful if you can provide a self-contained example that we can
run. However, I think you've been bitten by our white space concatenation.
When you define f, the second element is written
-w^2*sin(x)-u*w^2*cos(x) -2γ*y
but I think that is getting parsed as
hvcat(-w^2*sin(x)-u*w
@elapsed is what you are looking for
2015-03-11 7:43 GMT-04:00 Patrick Kofod Mogensen :
> I am testing the run times of two different algorithms, solving the same
> problem. I know there is the @time macro, but I cannot seem to wrap my head
> around how I should save the printed times. Any clever
You can get around this by specifying that the matrix is symmetric. This
can be done with
cholfact(Symmetric(A, :L))
which then bypasses the test for symmetry and cholfact only looks at the
lower triangle.
However, the error you got is not consistent with the way cholfact works
for dense matrices
Good to hear that. I've filed an issue to figure out how to make this
more consistent:
https://github.com/JuliaLang/julia/issues/10520
2015-03-14 21:06 GMT-04:00 Kristoffer Carlsson :
>
>
> On Sunday, March 15, 2015 at 1:02:11 AM UTC+1, Andreas Noack wrote:
>>
>>
On 0.3.x there is a very expensive error bounds calculation in the
triangular solve, which is the reason for the surprisingly slow calculation.
This is not acceptable, so we have removed the error bounds
calculation in 0.4. On my machine I get
julia> @time L\B;
elapsed time: 2.535437796 seco
I've tried to make the package that Jiahao mentioned usable. I think it
works, but it probably still has some rough edges. You can find it here
https://github.com/andreasnoack/TSVD.jl
and there is a help entry for tsvd that explains the arguments.
For a 2000x2000 dense non-symmetric complex matrix
new_array = vcat(data...)
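A minimal illustration of the splatting call (the nested array below stands in for a `pmap` result):

```julia
# vcat(data...) passes each inner array as a separate argument,
# flattening the array of arrays into one vector.
data = [[1, 2], [3], [4, 5, 6]]
new_array = vcat(data...)        # [1, 2, 3, 4, 5, 6]
```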
2015-03-17 20:59 GMT-04:00 Christopher Fisher :
>
>
> Hi all-
>
> pmap outputs the results as an array of arrays and I am trying to find a
> flexible way to change it into a one dimensional array. I can hardcode the
> results as new_array = vcat(data[1],data[2],data[3],d
It has caused a lot of frustration. See #9118. I think the easiest right
now is
for p in procs()
@spawnat p blas_set_num_threads(k)
end
2015-03-17 23:19 GMT-04:00 Sheehan Olver :
> Hi,
>
> I've created the following to test the performance of parallel processing
> on our departments server. But
Distributed reduce is already implemented, so maybe this is slightly simpler
with e.g. sum(A::DArray) = reduce(Base.AddFun(), A)
2015-03-26 8:41 GMT-04:00 Jameson Nash :
> `eval` (typically) isn't allowed to handle `import` and `export`
> statements. those must be written explicitly
>
> On Thu, Mar
I think that countmap in StatsBase does that
On Thursday, March 26, 2015, DumpsterDoofus <
peter.richter@gmail.com>:
> In Mathematica, there is a function called Tally which takes a list and
> returns a list of the unique elements of the input list, along with their
> multiplicities.
I think that you could use sub(A,:,2:2:4) for BLAS, but not sub(A,:,[2,4])
because the indexing has to be with ranges for BLAS to be able to extract
the right elements of the matrix.
2015-03-29 17:34 GMT-04:00 Dominique Orban :
> Sorry if this is another [:] kind of question, but I can't seem to
1.0, A[:,idx], B[:,idx], 1.0, C1);
>
> does what it's supposed to do, it just doesn't update C (which is awkward
> for a ! function, though I realize there's something else going on here).
>
>
> On Sunday, March 29, 2015 at 5:56:27 PM UTC-4, Andreas Noack wrote:
>
:23 GMT-04:00 Dominique Orban :
> Unfortunately, my idx will be computed on the fly and there's zero chance
> that it would be a range. Is there a plan to support more general indexing
> in subarrays and/or ArrayView?
>
>
> On Sunday, March 29, 2015 at 6:13:16 PM UTC-4, A
Which pca?
2015-04-06 6:53 GMT-07:00 Steven Sagaert :
> does pca() center the input & output data or do you have to do that
> yourself?
>
There is no pca in Julia Base
2015-04-06 9:16 GMT-07:00 Steven Sagaert :
> the one from the standard lib
>
> On Monday, April 6, 2015 at 4:01:00 PM UTC+2, Andreas Noack wrote:
>>
>> Which pca?
>>
>> 2015-04-06 6:53 GMT-07:00 Steven Sagaert :
>>
>>>
enderson wrote:
>>>>>
>>>>> In setting up the livestream a new link was created:
>>>>>
>>>>> https://www.youtube.com/watch?v=WrFURbHwwrs
>>>>>
>>>>> Videos are archived here:
>>>>>
>>>>
The notebook is now available from
http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb
Note that it is based on master, so some parts of the code might fail on
the Julia release version.
2015-04-11 15:21 GMT-04:00 Andreas Noack :
> I've been in transit back to Boston and the
You are reading the docs for the development version of Julia. The svds
function has been added recently so it is not available in version 0.3.
You can either try the development version of Julia or
https://github.com/andreasnoack/TSVD.jl
which works with 0.3, but it hasn't been tested thoroughly
This has been fixed on master. I've just backported the fix to the release
branch so it should be okay in 0.3.8.
2015-04-16 4:05 GMT-04:00 Rasmus Brandt :
> Hey everyone,
>
> I just stumbled over this behaviour in Julia-0.3.7, which seems a bit
> unintuitive to me:
>
> julia> pinv(0)
> Inf
>
> ju
We mainly use SymTridiagonal for eigenvalue problems, and therefore it is
not necessary to allow for complex matrices because the Hermitian eigenvalue
problem can be reduced to a real symmetric problem. It might be easier to
specify the problem in Hermitian form, so we might change this. What is
your
I'm not sure what the best solution is here because I don't fully
understand your objective. If A has low rank then the solution is not
unique and if it has almost low rank, the solution is very ill conditioned.
A solution could be our new "shift" argument to our complex Cholesky
factorization. This w
The reason is that it is not exported so you can either use the full path
bar(Base.LinAlg.CHOLMOD.CholmodFactor{Float64,Int64}) = 1
or "use" the type first
using Base.LinAlg.CHOLMOD.CholmodFactor
bar(CholmodFactor{Float64,Int64}) = 1
2015-04-21 10:27 GMT-04:00 andreas :
>
> Hi everybody,
>
> I
Hi Michela
It is easier to help if your example is complete such that it can just be
pasted into the terminal. The variable gmax is not defined in your example,
but I guess it is equal to length(SUC_C). It is also useful to provide the
exact error message.
That said, I think the root of the problem
This problem is quite common in the LinAlg code. We have two types of
definitions to handle the conversion of the element types. First, an idea
due to Jeff as implemented in
https://github.com/JuliaLang/julia/blob/237cdab7100b29a6769313391b1f8d2563ada06e/base/linalg/triangular.jl#L20
and
https://g
Try running it twice. It spends time compiling the function the first time.
I get
julia> include("../../Downloads/test.jl")
elapsed time: 0.666072953 seconds (42 MB allocated, 1.12% gc time in 2
pauses with 0 full sweep)
julia> @time test()
elapsed time: 0.014324694 seconds (25 MB allocated, 28.8
Hi Valentin
There are a couple of simple examples. At least this
http://acooke.org/cute/FiniteFiel1.html
and I did one in this
http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb
notebook. The arithmetic definitions are simpler for GF(2), but they should
be simple modifications
toff (in
> the snippet above, 5e-5). Is it possible to pass a parameter to obtain all
> eigenvalues and eigenvectors above certain threshold? Or should I simply
> identify one eigenvalue at a time and deflate A by removing the
> corresponding eigenvector?
>
> Thanks, Mladen
>
>
>
If B and D are different then why is it not okay to calculate x = C\D and
then B'x afterwards?
2015-04-27 15:24 GMT-04:00 matt :
> I would like to compute multiple quadratic forms B'*C^(-1)*D, where B, C,
> and D are sparse, and C is always the same symmetric positive matrix. For
> fast computati
Ds, and I have to
> compute the quadratic form for a number of different combinations. I can
> probably do things the way you suggested, but I would lose the nice
> symmetry that I have in my current R code. (I am trying to switch to Julia
> for speed.)
>
>
> On Monday, April 27,
I like the idea of something like factorize(MyType,...), but it is not
without problems for generic programming. Right now cholfact(Matrix) and
cholfact(SparseMatrixCSC) return different types, i.e. LinAlg.Cholesky and
SparseMatrix.CHOLMOD.Factor. The reason is that internally they are very
different
As I'm writing this, I'm running Julia on a pretty new 90 node cluster. I
don't know if that counts as medium size cluster, but recently it was
reported on the mailing list that Julia was running on
http://www.top500.org/system/178451
which I think counts as a supercomputer.
2015-04-28 19:58 GMT
Calculating the covariance requires two sequences of data points. Either
from two vectors or between the columns of a matrix. The mean is different
as it requires one sequence. What did you expect to get from the covariance
function of a vector? The variance?
2015-05-08 16:01 GMT-04:00 JPi :
> He
the covariance function.
2015-05-08 17:05 GMT-04:00 JPi :
> Yes, the variance.
>
> But that doesn't explain why you can't get the covariance matrix of an
> array of vectors.
>
> On Friday, May 8, 2015 at 4:51:13 PM UTC-4, Andreas Noack wrote:
>>
>> Calculati
1. In Julia fft(A) is the 2d DFT of A. You can get MATLAB's behavior with
fft(A, 1)
2. I might not understand what you are trying to do, but it appears to me
that you can just apply the DFT to the full vector and then sample the
elements of the vector.
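Point 1 can be checked directly; a sketch (on current Julia `fft` lives in the FFTW package, while in the 0.3/0.4 era discussed here it was in Base):

```julia
using FFTW

# fft(A) on a matrix is the full 2-d DFT; fft(A, 1) transforms along
# dimension 1 only, i.e. each column independently, matching MATLAB.
A = rand(4, 3)
F2 = fft(A)        # 2-d transform
F1 = fft(A, 1)     # column-wise transform
# each column of F1 equals the 1-d DFT of the corresponding column of A
```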
2015-05-10 22:32 GMT-04:00 Edward Chen :
>
In 0.3 the sparse LDLt and Cholesky factorizations are both in the
cholfact function. If the matrix is symmetric but not positive definite,
the result of cholfact will be an LDLt factorization. In 0.4 the
factorizations have been split into cholfact and ldltfact.
On Wednesday, May 27, 2015 at
ed with this error.
>
>
> On Wednesday, May 27, 2015 at 2:22:30 PM UTC-3, Andreas Noack wrote:
>>
>> In 0.3 the sparse LDLt and Cholesky factorizations are both in the
>> cholfact function. If the matrix is symmetric, but not positive definite
>> the result of
RROR: CHOLMOD not defined
>
>
>
> On Wednesday, May 27, 2015 at 3:25:46 PM UTC-3, Eduardo Lenz wrote:
>>
>> Funny... I dont have CHOLMOD installed...but I am using the official
>> windows installer.. I will try to make a fresh install.
>>
>> Thanks Andreas
nd why I dont have
> CHOLMOD avaliable in a regular
> windows install.
>
> Thanks for your help Andreas !
>
>
> On Wednesday, May 27, 2015 at 5:37:53 PM UTC-3, Andreas Noack wrote:
>>
>> You are using 0.3.8 and not 0.4. Have you tried cholfact(A)?
h for your time and knowledge
>
>
>
> On Wednesday, May 27, 2015 at 5:54:20 PM UTC-3, Andreas Noack wrote:
>
>> As I wrote in the first reply: in 0.3 the cholfact function returns the
>> LDLt when the matrix is symmetric but not positive definite, e.g.
>> julia&g
You can use try/catch, e.g.
julia> try
           cholfact(A)
       catch e
           e.info
       end
1
In 0.3, you can construct a Triangular matrix with Triangular(B, :L) and in
0.4 with LowerTriangular(B).
On Wednesday, May 27, 2015 at 18:12:51 UTC-4, Roy Wang wrote:
>
> Is there an easy way t
I think the chosen matrix has very good convergence properties for
iterative methods, but I agree that iterative methods are very useful to
have in Julia. There are already quite a few implementations in
https://github.com/JuliaLang/IterativeSolvers.jl
I'm not sure if these methods cover the one
The convert methods for Date.Period, Complex and Rational are inferred to
give Any. The problem in Period is because of the use of the value method
in line 4 of periods.jl. It extracts a field from an abstract type so even
though all subtypes in base have the specified field and have it defined
wrote:
>>
>> This is very interesting !
>>
>> So UMFPACK is more robust and this is why I am not having any issues with
>> the same matrix.
>>
>> Thanks.
>>
>>
>>
>> On Wednesday, May 27, 2015 at 6:15:57 PM UTC-3, Andreas Noack wrote:
>>
Hi Jared
The short answer is yes. Different algorithms are used in `svdvals` and
`svd`/`svdfact`. In both cases we are using the divide and conquer routine
xGESDD from LAPACK, but internally the routine uses two different
algorithms depending on whether the vectors are requested or
No such BLAS routine exists, but for larger matrices the calculation will
be dominated by the final matrix-matrix product anyway.
On Tuesday, July 7, 2015 at 18:24:34 UTC-4, Matthieu wrote:
>
> Thanks, this is what I currently do :)
>
> However, I'd like to find a solution that is both memory
You could, but unless the matrices are small, it would be slower because it
wouldn't use optimized matrix multiplication.
2015-07-08 10:36 GMT-04:00 Josh Langsfeld :
> Maybe I'm missing something obvious, but couldn't you easily write your
> own 'cross' function that uses a couple nested for-loop
?
>
> On Wed, Jul 8, 2015 at 10:39 AM, Andreas Noack <
> andreasnoackjen...@gmail.com> wrote:
>
>> You could, but unless the matrices are small, it would be slower because
>> it wouldn't use optimized matrix multiplication.
>>
>> 2015-07-08 10:36 GM
The OpenBLAS framework is described in
http://dl.acm.org/citation.cfm?id=2503219 and builds on top of GotoBLAS,
http://dl.acm.org/citation.cfm?id=1377607
Jutho: Part of the conclusion from your link is that you cannot write a
fast matrix multiplication in C, but that you'll have to write it in
assembly
Hi Ivan
This is fixed on 0.4 but needs a backport to 0.3. I'll take a look.
On Tuesday, July 7, 2015 at 08:41:37 UTC-4, Ivan Slapnicar wrote:
>
> In [1]:
>
> Z=givens(1.0,2.0,1,3,3)
>
> Z, Z', transpose(Z)
>
> Out[1]:
>
> (
> 3x3 Givens{Float64}:
> 0.447214 0.0 0.894427
> 0.0 1.0
You can use sub or slice for this, e.g. W = slice(Wall, 2:size(Wall, 1),
2:size(Wall, 2))
On Wednesday, July 15, 2015 at 06:14:14 UTC-4, Ferran Mazzanti wrote:
>
> Hi folks,
>
> I have a little mess with the way arrays are being handled in Julia. I
> come from C and fortran95 and I know I can d
For the example you describe, you can simply use the tools in base, but
unfortunately I don't think our reader can handle continental style decimal
commas yet. However, that is easy to search/replace with a dot. Something
like
like
cov(diff(log(readdlm("prices.csv", ';'
should then do the job.
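A sketch of that pipeline with the file contents inlined (the prices are made up; a real run would read "prices.csv" instead, and on current Julia `readdlm` and `cov` need `DelimitedFiles`/`Statistics`, which were in Base in the 0.3 era):

```julia
using DelimitedFiles, Statistics

# Replace continental decimal commas with dots, then parse with ';' as
# the field separator and take the covariance of the log returns.
raw = "100,0;50,0\n101,0;49,5\n103,0;50,5\n"   # made-up prices, two assets
clean = replace(raw, "," => ".")
prices = readdlm(IOBuffer(clean), ';')
returns = diff(log.(prices); dims=1)   # columnwise log returns
C = cov(returns)                       # 2x2 covariance matrix
```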
Take a look in StatsBase.jl
On Sunday, August 9, 2015 at 8:17:02 AM UTC-4, paul analyst wrote:
>
>
>
> Is there an autocorrelation function in Julia? Paul
>
I think you are right that we should simply remove the mean keyword
argument from cov and cor. If users want the efficient versions with
user-provided means, they can use corm and covm. Right now they are not
exported, but we could consider doing it, although I'm in doubt whether it
is really necessary
>> if ar < 0.4
>>
>> zta = zta/1.1;
>>
>> end
>>
>> if ar > 0.6
>>
>> zta = zta*1.1;
>>
>> end
>>
>>
>> A = (In - rho*W);
>>
>>
>> Basically, I'm wondering if there is any way to make the "A" and "B"
>> matrices sparse and possibly make it run faster, especially in the
>> log(det(A)) terms. Currently, 110 draws (with 10 burn) takes approximately 8
>> seconds on my 64 bit, core i7 laptop. The computational speed decreases with
>> the sample size n because the weight matrices are treated as full.
>>
>>
>> Any help would be greatly appreciated and if anyone is interested in running
>> the code, the Distributions and Distance packages must be included and
>> initiated first.
>>
>>
>> Regards,
>>
>> Don
>>
>>
>>
>>
--
Best regards,
Andreas Noack Jensen
e a view for informing
> Julia about this.
>
> Thank you.
>
is not). Here is my versioninfo() if that helps:
>>
>> julia> versioninfo()
>>
>> Julia Version 0.3.0-prerelease+3868
>>
>> Commit e7a9a7d* (2014-06-24 19:39 UTC)
>>
>> Platform Info:
>>
>> System: Darwin (x86_64-apple-darwin13.2.0)
>>
>> CPU: Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz
>>
>> WORD_SIZE: 64
>>
>> BLAS: libopenblas (USE64BITINT NO_AFFINITY)
>>
>> LAPACK: libopenblas
>>
>> LIBM: libopenlibm
>>
>>
(commit 79e4771).
>
> Is there documentation on the right way to parallelize simulations in
> Julia? If not, should my next step be to carefully read the "parallel
> computing" documentation?
>
>>
>> I have several other regularized regression methods implemented in Julia
so maybe time to collect them together into a lib. Anyone knows if there is
something already out there (I have seen GLMNET.jl which wraps the fortran
LASSO code)?
>>
>> Cheers,
>>
>> Robert Feldt