On 11/10/2016 04:12 PM, 'Joaquin M Valencia Bravo' via deal.II User
Group wrote:
bash:
/home/jomivalen/intel/compilers_and_libraries/linux/bin/compilervars.sh:
No such file or directory
You reference this file in your ~/.bashrc script. Just remove the line
in that script that references the 'c
Thanks to all for your suggestions and explanations.
I already installed and configured deal.II + Trilinos and passed the 5 steps
after running make test. The way I did that is:
1. I deleted the intel compiler directory.
2. I followed the same steps as in my message above.
My question now is if
On 11/10/2016 03:34 PM, Hamed Babaei wrote:
After two months of struggling with the parallel code, I finally found
the bug. I had made a stupid mistake, initializing the temporary
distributed solution inside the Newton loop
Outstanding. I'm glad to hear!
The pessimistic view about softwa
Hi All,
After two months of struggling with the parallel code, I finally found the
bug. I had made a stupid mistake, initializing the temporary distributed
solution inside the Newton loop
I have no words to thank all of you, dear friends, Wolfgang and Daniel in
particular, for your incredib
Dear Julian,
In general, it's not easy to provide the functionality you're looking
for, because not every degree of freedom is associated with a support point
(e.g. the FE_DGPMonomial element), let alone with a vertex (e.g. an FE_Q of
polynomial order 2 has DoFs with support points at face and
Hi all,
It seems that before the first call to solve (in the zeroth Newton
iteration) very few system matrix components are zero, but after the
first solve in the first Newton iteration most of the system_matrix
components are zero, except for the diagonal components and a few
off-diagonal ones
On 11/09/2016 05:35 PM, 'Joaquin M Valencia Bravo' via deal.II User
Group wrote:
jomivalen@Nalia ~/cfem/trilinos/lib $ ldd libepetra.so.12.6.2 | grep mpi
libmpi_cxx.so.1 => /usr/lib/libmpi_cxx.so.1 (0x7fb4ac8f7000)
libmpi.so.1 => /usr/lib/libmpi.so.1 (0x7fb4ac576000)
jomivalen@N
Dear Daniel,
Then I would expect that the solver should behave the same for both
> matrices. Are you still running into the same problems using just 4 cells
> with your parallel code?
>
Yes, the parallel code is not solved by SolverCG+SSOR, even with only 4 cells.
It is really weird to me that de
ok. Something like this:
typename DoFHandler<dim>::active_cell_iterator
  cell = dof_handler.begin_active(),
  endc = dof_handler.end();
for (; cell != endc; ++cell)
  {
    for (unsigned int vertex = 0;
         vertex < GeometryInfo<dim>::vertices_per_cell; ++vertex)
      {
        const Point<dim> p = cell->vertex(vertex); // coordinates of this vertex
      }
  }
Thank you!
...
Dear Joaquin,
It's been a while since I've compiled Trilinos manually, but if nothing's
changed since version 11.4, then you should be able to specify which MPI
Trilinos should be compiled against using the following cmake parameters:
-D TPL_ENABLE_MPI:BOOL=ON \
> -D MPI_BASE_DIR:PATH=$DIR_BASE/Op
Dear Julian,
This question has been asked in the past and there are already some threads
which might be of help to you:
https://groups.google.com/forum/#!searchin/dealii/dof$20coordinates%7Csort:relevance
Best,
Deepak
On Thu, Nov 10, 2016 at 2:41 PM, Julian Dorn
wrote:
> Dear all,
>
> if I hav
Dear all,
if I have
FE_Q<2> fe(1); // a Q1 finite element in 2D
DoFHandler<2> dof_handler;
how do I get the coordinates of the DoFs (for Q1 these will be exactly the
coordinates of the quads' vertices)?
Thank you in advance!
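A minimal sketch of one common approach to this question, assuming deal.II's DoFTools API and an already distributed DoFHandler (the function name and fixed dimension here are illustrative, not from the thread):

```cpp
#include <deal.II/base/point.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/mapping_q1.h>
#include <vector>

using namespace dealii;

// Fill one Point<2> per global DoF with its support-point coordinates.
// For a Q1 element these coincide with the vertices of the mesh.
std::vector<Point<2>> dof_coordinates(const DoFHandler<2> &dof_handler)
{
  std::vector<Point<2>> support_points(dof_handler.n_dofs());
  DoFTools::map_dofs_to_support_points(MappingQ1<2>(),
                                       dof_handler,
                                       support_points);
  return support_points;
}
```

Note that, as discussed elsewhere in this thread, this only works for elements whose DoFs actually have support points (which you can query with fe.has_support_points()).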
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options
Hamed,
Using the print function of the SparseMatrix class (thanks to Daniel for
> letting me know about that) I printed the elements of system_matrix for both
> the sequential and parallel codes.
> I reduced the problem to only 54 DoFs. It seems that both system matrices
> are symmetric and identical except
Hello Bruno,
Apologies for the delayed response.
On Tuesday, November 1, 2016 at 6:06:16 PM UTC+5:30, Bruno Turcksin wrote:
> Have you tried to increase the number of Newton iterations? The Newton
> solver is pretty basic (it doesn't do any line search) so it might be
> the reason it doesn't