On 8/19/22 14:25, Uclus Heis wrote:
"That said, from your code, it looks like all processes are opening the same
file and writing to it. Nothing good will come of this. There is of course
also the issue that importing all vector elements to one process cannot scale
to large numbers of process
Dear Wolfgang,
Thank you very much for your answer. Regarding what you mentioned:
"*That said, from your code, it looks like all processes are opening the
same*
*file and writing to it. Nothing good will come of this. There is of
coursealso the issue that importing all vector elements to one
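As a rough illustration of the alternative hinted at above, each process could write only its locally owned entries to a file of its own instead of all ranks sharing one file. This is only a sketch: the helper name write_local_part, the file naming scheme, and the surrounding objects (locally_owned_dofs, mpi_communicator, the vector type) are my assumptions, not code from the thread.

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>

#include <fstream>
#include <sstream>

// Hypothetical helper: every MPI rank writes only its locally owned
// entries to its own file, so no two ranks ever touch the same file.
template <typename VectorType>
void write_local_part(const dealii::IndexSet &locally_owned_dofs,
                      const VectorType       &solution,
                      const unsigned int      frequency_index,
                      const MPI_Comm          mpi_communicator)
{
  const unsigned int rank =
    dealii::Utilities::MPI::this_mpi_process(mpi_communicator);

  std::ostringstream filename;
  filename << "solution_f" << frequency_index << "_rank" << rank << ".txt";

  std::ofstream out(filename.str());
  for (const auto i : locally_owned_dofs)
    out << i << " " << solution(i) << "\n";
}

Each rank then produces something like solution_f0_rank3.txt, and the pieces can be concatenated or post-processed afterwards without any single rank ever holding the complete vector.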
On 8/19/22 13:29, Raghunandan Pratoori wrote:
I am trying to run a simulation with grid refine factor 7. I know this is
significantly large and any improper code will raise memory issues. I am in
fact getting memory issues after completion of first time step and I am not
able to pinpoint whe
Hello team,
I am trying to run a simulation with grid refine factor 7. I know this is
significantly large and any improper code will raise memory issues. I am in
fact getting memory issues after completion of first time step and I am not
able to pinpoint where I probably am making a mistake. T
" When you solve a new linear system with
the matrix, that linear system knows nothing about what happened when you
first built the matrix and the original right hand side."
Yes, but I have to call constraints.distribute_local_to_global(...)
also when building the new linear system. But I observe
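For reference, a sketch of the per-cell call being discussed, using the usual step-40-style names (cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs are assumed to exist in the assembly loop):

// Copies the local contributions into the global system while
// eliminating constrained degrees of freedom on the fly; for
// inhomogeneous constraints it also adjusts the right hand side.
constraints.distribute_local_to_global(cell_matrix,
                                       cell_rhs,
                                       local_dof_indices,
                                       system_matrix,
                                       system_rhs);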
On 8/19/22 05:46, Simon Wiesheier wrote:
This system boils down to a 2x2 system for x1 and x2 with x0=0.
This is exactly what I want to compute, but without having -c*K10 subtracted.
(Because the new rhs comes from a different problem and has nothing to do with
the constrained dofs - I just
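To pin down the notation, here is a small example of my own, assuming a 3x3 system K x = f with the inhomogeneous constraint x0 = c; eliminating x0 leaves the 2x2 system

\begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} f_1 - c\,K_{10} \\ f_2 - c\,K_{20} \end{pmatrix}

The -c*K10 and -c*K20 terms are the corrections in question; the point above is how to solve the same 2x2 block with a right hand side that does not carry them.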
On 8/19/22 03:25, Uclus Heis wrote:
The way of extracting and exporting the solution with
testvec = locally_relevant_solution is a bad practice? I am saving the
locally relevant solution from many different processes in one single file for
a given frequency. I am afraid that there is no s
Syed,
parallel::distributed::Triangulation is just not implemented for dim==1, so
you can't run step-40 for the one-dimensional case.
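For the 1d case one possible workaround (an untested sketch, not part of step-40) is to fall back to an ordinary, replicated Triangulation<1>:

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>

int main()
{
  // Untested sketch: parallel::distributed::Triangulation cannot be used
  // for dim==1, so build a plain (replicated) Triangulation instead.
  dealii::Triangulation<1> triangulation;
  dealii::GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);
}

Every MPI process then holds the full mesh, which is usually affordable for one-dimensional problems.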
Best,
Daniel
On Fri, Aug 19, 2022 at 7:07 AM syed ansari wrote:
> Dear all,
> I was trying to run step-40 in 1 dimension and encountered the
> error
" But the easier approach may be to use the same 20x20 matrix and just copy
the
new rhs you want to solve with into a vector with size 20, leaving the
entries
of the rhs vector that correspond to constrained DoFs zero (or, in fact,
whatever you want -- the value there doesn't matter). By zeroing ou
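A sketch of that recipe (the solver choice, tolerances, and the surrounding objects system_matrix, constraints and solution are assumptions on my part):

#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/vector.h>

// Reuse the already-assembled (condensed) matrix; only the rhs changes.
dealii::Vector<double> new_rhs(system_matrix.m());   // here: size 20
// ... fill new_rhs for the unconstrained DoFs from the new problem ...
for (unsigned int i = 0; i < new_rhs.size(); ++i)
  if (constraints.is_constrained(i))
    new_rhs(i) = 0.;   // value at constrained DoFs does not matter

dealii::SolverControl                    control(1000, 1e-12);
dealii::SolverCG<dealii::Vector<double>> solver(control);
solver.solve(system_matrix, solution, new_rhs,
             dealii::PreconditionIdentity());

// Finally overwrite the constrained DoFs with their prescribed values.
constraints.distribute(solution);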
Dear all,
I was trying to run step-40 in 1 dimension and encountered the
error corresponding to MeshSmoothing in the constructor. The details of the
error are as follows:
An error occurred in line <3455> of file
in function
d
Dear all,
after some time I came back to this problem again. I would kindly ask for
some guidance to see if I can understand and solve the issue.
I am using a parallel::distributed::Triangulation with MPI. I call the
function solve() in a loop for different frequencies and want to export the
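Combined with a per-rank output helper such as the one sketched further above, the loop could look roughly like this (assemble_system, solve, frequencies and the remaining names are again assumptions about the surrounding program):

// Untested sketch of the frequency sweep described above.
for (unsigned int f = 0; f < frequencies.size(); ++f)
  {
    assemble_system(frequencies[f]);
    solve();
    write_local_part(locally_owned_dofs,
                     locally_relevant_solution,
                     f,
                     mpi_communicator);   // hypothetical helper from above
  }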