Dear Doug,

Could you post a short code example showing how you want to use the 
LinearOperator, so that I know what exactly is not working?

Regarding Trilinos + LA::dist::Vector: there is an open PR (
https://github.com/dealii/dealii/pull/9925) which adds the instantiations 
(I hope I did not miss any).

Regarding GPU: currently there is only support for matrix-free algorithms, 
and none for matrix-based ones. These GPU implementations currently use 
LA::dist::Vector.

Personally, I always use LA::dist::Vector (not just in the matrix-free 
code but also) in combination with TrilinosWrappers::SparseMatrix. It works 
very well! I have no experience with how well it works with PETSc.
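For reference, a minimal sketch of the combination I mean. Names like 
`locally_owned_dofs` and `locally_relevant_dofs` are assumptions about your 
program (you would get them from your DoFHandler), and the matrix setup is 
omitted; this is not a complete program, just the mix of types:

```cpp
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

using namespace dealii;

void apply_matrix(const TrilinosWrappers::SparseMatrix &system_matrix,
                  const IndexSet  &locally_owned_dofs,
                  const IndexSet  &locally_relevant_dofs,
                  const MPI_Comm   mpi_communicator)
{
  // deal.II-native parallel vectors, used directly with the
  // Trilinos matrix -- no TrilinosWrappers::MPI::Vector needed.
  LinearAlgebra::distributed::Vector<double> src, dst;
  src.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
  dst.reinit(locally_owned_dofs, mpi_communicator);

  // ... fill src ...

  // Matrix-vector product mixing the two backends:
  system_matrix.vmult(dst, src);
}
```

This is the pattern I use in my own codes; the vmult interface is what 
makes it work, independently of LinearOperator.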

Peter

On Thursday, 23 April 2020 04:00:10 UTC+2, Doug Shi-Dong wrote:
>
> Hello,
>
> I have been using the LinearAlgebra::distributed::Vector class for MPI 
> parallelization, since the way it works is closer to what I had worked 
> with before and it seemed more flexible.
>
> However, for parallelization, I have to use either a Trilinos or a PETSc 
> matrix, since the native deal.II SparseMatrix is serial only (correct me if 
> I'm wrong). It seems like I can do matrix-vector multiplications just fine 
> between LA::dist::Vector and the wrapped matrices. However, when it gets to 
> LinearOperator, it looks like a TrilinosWrappers::SparseMatrix wrapped 
> within a LinearOperator only works with a TrilinosWrappers::MPI::Vector, 
> and the same holds for PETSc.
>
> I am wondering what the community is using as their go-to parallel 
> matrices and vectors, and whether you've been mixing them, e.g. matrix-free 
> with Trilinos/PETSc vectors, or PETSc matrices with LA::dist::Vector. From 
> what I've seen in some tutorials, there is a way to code things up such 
> that either the Trilinos or the PETSc wrappers can be used interchangeably, 
> but LA::dist::Vector does not seem to be nicely interchangeable with the 
> Trilinos/PETSc ones. 
>
> I was kind of hoping to be able to use LA::dist::Vector for everything; am 
> I expecting too much from it? Maybe I just need to fix the LinearOperator 
> implementation to mix and match the data structures? If I do commit to 
> Trilinos matrices/vectors, will I have trouble doing some matrix-free or 
> GPU stuff in the far future?
>
> Best regards,
>
> Doug
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en