Satish and Matt,
What is the syntax for multiple libraries? I have this now but need to add
more files.
Thanks,
Mark
'--with-blaslapack-lib=/opt/intel/compilers_and_libraries_2018.1.163/linux/mkl/lib/intel64/libmkl_intel_thread.a',
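For reference, PETSc's configure generally takes a bracketed, comma-separated
list for multiple library files; a sketch along these lines (the MKL library
set shown is only an assumption, not the truncated reply below):
'--with-blaslapack-lib=[/opt/intel/compilers_and_libraries_2018.1.163/linux/mkl/lib/intel64/libmkl_intel_lp64.a,/opt/intel/compilers_and_libraries_2018.1.163/linux/mkl/lib/intel64/libmkl_intel_thread.a,/opt/intel/compilers_and_libraries_2018.1.163/linux/mkl/lib/intel64/libmkl_core.a]',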
On Mon, Jul 2, 2018 at 6:24 PM Satish Balay wrote:
> On Mon, 2 Jul 2018, Mark Adams wrote:
>
> > Satish and Matt,
> > What is the syntax for multiple libraries? I have this now but need to add
> > more files.
> > Thanks,
> > Mark
> >
Well this does work without valgrind.
On Tue, Jul 3, 2018 at 6:36 AM Mark Adams wrote:
> I built a 32 bit integer version and now it dies in PetscInit. Ugh.
>
> ==3965== Conditional jump or move depends on uninitialised value(s)
> ==3965==at 0x27AFFD8F: _int_free (malloc.c:39
SNES ex56 seems to have regressed:
04:35 nid02516 master *= ~/petsc_install/petsc/src/snes/examples/tutorials$
make
PETSC_DIR=/global/homes/m/madams/petsc_install/petsc-cori-knl-opt-intel-omp
PETSC_ARCH="" runex56
srun -n 1 ./ex56 -cells 2,2,1 -max_conv_its 3 -petscspace_order 2
-snes_max_it 2 -ks
GAMG drills into AIJ data structures and will need to be fixed up to work
with MKL matrices, I guess, but it is failing now from a logic error.
This example works with one processor but fails with 2 (appended). The code
looks like this:
ierr =
PetscObjectBaseTypeCompare((PetscObject)Amat,MATS
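A hedged reconstruction of the kind of type check being discussed (this is not
the actual GAMG/ex56 source; completing the truncated compare with MATSEQAIJ
is only an assumption):

#include <petscmat.h>

/* Sketch: check the base type before drilling into (Seq)AIJ internals.
   MATSEQAIJ as the compare target is an assumed completion of the line above. */
static PetscErrorCode UseAIJInternals(Mat Amat)
{
  PetscBool      isAIJ;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscObjectBaseTypeCompare((PetscObject)Amat,MATSEQAIJ,&isAIJ);CHKERRQ(ierr);
  if (isAIJ) {
    /* safe to reach into the SeqAIJ data structures here */
  }
  PetscFunctionReturn(0);
}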
ked at some point in the past. In that case, we should fix
> it in 'maint' even if some new feature in 'master' has also fixed it.
> Note that Mark's prompt says he's on 'master', not 'maint'.
>
Yea, let me double check this.
>
I am having to fix AIJ methods that don't get the type from the args to set
the created matrix type, so as to keep the MKL typing.
I am not sure how to fix this because I don't know how to say 'make MPI from
SEQ' in this method:
PetscErrorCode MatCreateMPIAIJWithSeqAIJ(MPI_Comm comm,Mat A,Mat B,const
Pe
tter way to do it. I'll try it. I've been trying to
keep the MPI matrix as an MKL matrix but I guess that is not necessary.
>
> --Richard
>
> On Tue, Jul 3, 2018 at 6:28 AM, Mark Adams wrote:
>
>> GAMG drills into AIJ data structures and will need to be fixe
l keep that in mind. I pushed a branch with fixes to get my gamg
tests working (ksp ex56 primarily). I'm going to make a new branch and
try "-mat_seqaij_type
seqaijmkl".
Thanks,
Mark
> --Richard
>
> On Tue, Jul 3, 2018 at 4:31 PM, Mark Adams wrote:
>
>> I have ha
Sorry, this error was with MKL matrices. It is still an error inside of
Plex. My non-Plex tests (KSP) now work in a branch (pushed).
On Tue, Jul 3, 2018 at 7:24 PM Mark Adams wrote:
>
>
> On Tue, Jul 3, 2018 at 6:26 PM Jed Brown wrote:
>
>> "Smith, Barry F." writes
snes/examples/tutorials/ex56.c
On Wed, Jul 4, 2018 at 9:57 AM Matthew Knepley wrote:
> On Wed, Jul 4, 2018 at 8:47 AM Mark Adams wrote:
>
>> Sorry, this error was with MKL matrices. It is still an error inside of
>> Plex. My non-Plex tests (KSP) now work in a branch (pushed).
"-mat_seqaij_type seqaijmkl" just worked.
On Wed, Jul 4, 2018 at 9:44 AM Mark Adams wrote:
>
>
> On Wed, Jul 4, 2018 at 3:01 AM Richard Tran Mills wrote:
>
>> Hi Mark,
>>
>> I'd never looked at the code for MatCreateMPIAIJWithSeqAIJ(), but it
>&
>
>
> Please share the results of your experiments that prove OpenMP does not
> improve performance for Mark’s users.
>
This obviously does not "prove" anything but my users use OpenMP primarily
because they do not distribute their mesh metadata. They cannot replicate
the mesh on every core, on
On Wed, Jul 4, 2018 at 1:09 PM Smith, Barry F. wrote:
>
>
> > On Jul 4, 2018, at 9:40 AM, Mark Adams wrote:
> >
> > "-mat_seqaij_type seqaijmkl" just worked.
>
> Mark,
>
> Please clarify. Does this mean you can use -mat_seqaij_type
>
On Thu, Jul 5, 2018 at 12:41 PM Tobin Isaac wrote:
> On Thu, Jul 05, 2018 at 09:28:16AM -0400, Mark Adams wrote:
> > >
> > >
> > > Please share the results of your experiments that prove OpenMP does not
> > > improve performance for Mark’s users.
>
, I don't think it will be a problem in the near term ... and I don't
understand why there is no equivalent of MatMultSymbolic (setting up
communication maps?). How does it function without this? Please clarify!
Thanks,
>
> --Richard
>
>
>>
>>>
>>>
>
On Fri, Jul 6, 2018 at 6:20 PM Matthew Knepley wrote:
> On Fri, Jul 6, 2018 at 3:07 PM Richard Tran Mills wrote:
>
>> True, Barry. But, unfortunately, I think Jed's argument has something to
>> it because the hybrid MPI + OpenMP model has become so popular. I know of a
>> few codes where adoptin
F. wrote:
>
>I know Mark Adams has tried recently; without limited success.
>
>As always the big problem is facilities removing accounts, such as
> Satish's, so testing gets difficult.
>
>But yes, we want to support Titan so have users send
> configure.log
load-hypre',
'--download-metis',
'--with-hwloc=0',
'--download-parmetis',
'--download-cmake',
'--with-cc=mpicc',
#'--with-fc=0',
#'--with-clib-autodetect=0',
#'--with-cxx=mpiCC'
I agree with Matt's comment and let me add (somewhat redundantly)
> This isn't how you'd write MPI, is it? No, you'd figure out how to
> decompose your data properly to exploit locality and then implement an
> algorithm that minimizes communication and synchronization. Do that with
> OpenMP.
>
> If PETSc was an application, it could do whatever it wanted, but it's
> not. If PETSc is a library that intends to meet the needs of HPC
> applications, it needs to support the programming models the applications
> are using.
>
To repeat, PETSc supports threads, currently with MKL kernels and t
On Mon, Jul 9, 2018 at 7:19 PM Jeff Hammond wrote:
>
>
> On Mon, Jul 9, 2018 at 7:38 AM, Mark Adams wrote:
>
>> I agree with Matt's comment and let me add (somewhat redundantly)
>>
>>
>>> This isn't how you'd write MPI, is it? No, you
>
> I don't know if Chris has ever lived there. And he's great, but GFDL is
> an application, not a library.
>
GFDL is a lab next door to PPPL.
On Thu, Jul 26, 2018 at 12:16 PM Fande Kong wrote:
>
>
> On Thu, Jul 26, 2018 at 9:51 AM, Junchao Zhang
> wrote:
>
>> Hi, Pierre,
>> From your log_view files, I see you did strong scaling. You used 4X
>> more cores, but the execution time only dropped from 3.9143e+04
>> to 1.6910e+04.
>> Fro
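(For reference, 3.9143e+04 / 1.6910e+04 is a speedup of about 2.3x on 4x the
cores, i.e. roughly 58% parallel efficiency.)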
>
>
> > Well, you could use _threshold to do more aggressive coarsening, but not
> > for thinning out
> > the interpolation.
>
> Increasing the threshold results in slower coarsening.
>
> Note that square_graph 10 is very unusual.
>
>
Actually this is not crazy. For linear tets it is the way to go
On Thu, Jul 26, 2018 at 11:04 AM Pierre Jolivet
wrote:
>
>
> > On 26 Jul 2018, at 4:24 PM, Karl Rupp wrote:
> >
> > Hi Pierre,
> >
> >> I’m using GAMG on a shifted Laplacian with these options:
> >> -st_fieldsplit_pressure_ksp_type preonly
> >> -st_fieldsplit_pressure_pc_composite_type additive
>
>
>>
>>> -pc_gamg_threshold[] - Before aggregating the graph
>>> GAMG will remove small values from the graph on each level
>>> -pc_gamg_threshold_scale - Scaling of threshold on
>>> each coarser grid if not specified
>>>
>>
>> Nope. Totally different things.
>>
>
> Well, you could use _thresh
>
>
>> Increasing the threshold results in slower coarsening.
>>
>
> Hmm, I think we have to change the webpage then:
>
>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCGAMGSetThreshold.html
>
> I read it the opposite way.
>
I will make this clear. The doc just says what it doe
This is a complex-shifted Laplacian (the parabolic, definite Helmholtz case,
I assume).
GAMG's default parameters can do strange things on a mass matrix (the limit
of this shift, except for the complex part).
Please run with -info and send me the output (big) or just grep on GAMG
(small). This will give detai
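A hedged example of the kind of invocation meant here, with the executable
name only a placeholder:
./your_app -info 2>&1 | grep GAMG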
>
>
> Do you mean the coarse grids? GAMG reduces active processors (and
> repartitions the coarse grids if you ask it to) like telescope.
>
>
> No I was talking about the fine grid. If I do this (telescope then GAMG),
>
What does Telescope do on the fine grids? Is this a structured grid and
Telesc
>
>
> What does Telescope do on the fine grids? Is this a structured grid and
> Telescope does geometric MG?
>
>
> No, it’s a MatMPIAIJ. Telescope just concatenates local matrices together,
> makes some processes idle (throughout the complete setup + solve), while the
> others do the work.
>
So you
On Fri, Jul 27, 2018 at 11:12 AM Jed Brown wrote:
> Pierre Jolivet writes:
>
> > Everything is fine with GAMG I think, please find the (trimmed)
> -eps_view attached. The problem is that, correct me if I’m wrong, there is
> no easy way to redistribute data efficiently from within PETSc when usin
>
> Everything is fine with GAMG I think, please find the (trimmed) -eps_view
> attached.
>
This looks fine. The eigen estimates are pretty low, but I don't know what
they really are.
> The problem is that, correct me if I’m wrong, there is no easy way to
> redistribute data efficiently from wit
On Wed, Sep 19, 2018 at 7:44 PM Smith, Barry F. wrote:
>
> Look at the code in KSPSolve_Chebyshev().
>
> Problem 1) VERY MAJOR
>
> Once you start running the eigenestimates it always runs them, this is
> because the routine begins with
>
> if (cheb->kspest) {
>
>but once cheb->kspes
FYI, I've built Barry's updates on SUMMIT and tested them on SUMMITDEV. I
can't run on SUMMIT now. It has been merged into the master branch.
This is how you run the cuda tests (in the PETSc root directory):
make -f gmakefile.test test globsearch="snes*tutorials*ex19*cuda*"
Mark
On Thu, Sep 20, 20
On Sun, Oct 28, 2018 at 4:54 PM Smith, Barry F. wrote:
>
>Moved a question not needed in the public discussions to petsc-dev to
> ask Mark.
>
>
>Mark,
>
> PCGAMGSetCoarseEqLim - Set maximum number of equations on coarsest
> grid
>Is there a way to set the minimum number of equat
Defined make macro "NPMAX" to "128"
>
> then somehow this gets passed down to the AMGX cmake. Perhaps you could
> try less greedy numbers.
>
> Executing: /usr/bin/gmake -j59 -l166.4
>
> Perhaps try
>
> --with-make-np=20
>
> --with-m
>
>
>
> --with-make-np=20
>
This (with 16) seems to work. That is what I use on this machine.
Thanks,
>
> --with-make-load=20
>
> Barry
>
>
>
> > On Dec 3, 2019, at 4:17 PM, Mark Adams wrote:
> >
> > That might have been from an old .n
I have a branch on Bitbucket but it's not on GitLab. How do I get it?
Thanks,
Mark
Never mind, I figured it out.
On Tue, Dec 10, 2019 at 8:19 AM Mark Adams wrote:
> I have a branch on Bitbucket but it's not on GitLab. How do I get it?
> Thanks,
> Mark
>
nu-cuda/externalpackages/git.amgx/../../thrust
>> -I/autofs/nccs-svm1_home1/adams/petsc/arch-summit-opt64-gnu-cuda/externalpackages/git..amgx/base/include
>> -I/sw/summit/cuda/10.1.168/include
>> -I/autofs/nccs-svm1_home1/adams/petsc/arch-summit-opt64-gnu-cuda/externalpackages
;
> On Fri, Dec 13, 2019 at 11:29 AM Mark Adams wrote:
>
>> There is a MR winding its way in. I think it is ready to go. There is
>> confusion with branches but it is something like
>> barry/12-01-19-pc-feature-amgx.
>>
>> It is a PC, but it is so fragile. I could
On Fri, Feb 21, 2020 at 4:51 PM Junchao Zhang via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> Hello,
>
> I want to evaluate MatMult on GPU. I took a 2M x 2M matrix and ran with 6
> mpi ranks and 6 GPUs. It took about 0.9 seconds. A kernel launch or a
> stream synchronization took about 10us.
>
Maybe the announcement is somewhere but it's not on the first Google hit (
https://www.mcs.anl.gov/petsc/). The logo should get on there also.
(Patrick Farrell did not know about it)
Thanks,
Mark
cs.anl.gov/petsc/meetings/ is
> supposed to have this info? Karl?
>
> Wrt logo - perhaps this is what you are referring to?
>
> https://gitlab.com/petsc/petsc/-/merge_requests/2501
That does the trick. Yes.
>
>
> Satish
>
> And perhaps On Sat, 22 Feb 2020, Mark A
ust a matter of a few days. :-)
>
> Best regards,
> Karli
>
> On 2/22/20 9:11 PM, Mark Adams wrote:
> > Maybe the announcement is somewhere but it's not on the first Google hit
> > (https://www.mcs.anl.gov/petsc/). The logo should get on there also.
> >
> >
One more thought: maybe just put a permanent link on the main page to the
meeting page, e.g., 'Join us at our annual users meeting this summer'.
On Sun, Feb 23, 2020 at 12:32 AM Mark Adams wrote:
> Cool, One of our friends did not know about it because it was not on the
> web page.
a few days. :-)
>
> Best regards,
> Karli
>
>
>
> On 2/23/20 6:32 AM, Mark Adams wrote:
> > Cool, One of our friends did not know about it because it was not on the
> > web page.
> >
> > No big deal, but it might be nice to put the place and date as
I am trying to debug an application code that works with v3.7 but fails
with master. The code works for "normal" solvers but for a solver that uses
FieldSplit it fails. It looks like vectors are not getting created from
MatCreateVecs with a matrix that is a MatNest (I can't run the code).
I have p
se.
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391
>
> On Mar 16, 2020, at 7:54 PM, Mark Adams wrote:
>
> I am trying to debug an application code that works with v3.7 but fails
> with master. The code works for
On Mon, Mar 16, 2020 at 10:04 PM Satish Balay wrote:
> Wrt fortran I/O - you can try adding calls to flush() after that.
> Similarly C has fflush()
>
This is in zmatnestf.c, so it is C code implementing the Fortran stubs; custom
in this case.
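A minimal, self-contained sketch of the flush call being suggested (this is
not the actual zmatnestf.c code; the message text is made up):

#include <stdio.h>

int main(void)
{
  printf("debug: reached the stub\n");
  fflush(stdout); /* force buffered C stdio output to appear immediately */
  return 0;
}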
>
> Satish
>
>
> On Mon, 16 Ma
,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391
>
> On Mar 16, 2020, at 9:26 PM, Mark Adams wrote:
>
>
>
> On Mon, Mar 16, 2020 at 9:50 PM Jacob Faibussowitsch
> wrote:
>
>> Hello Mark,
>>
>> If you are us
e a print but nothing immediately outputs to
> the screen. We do not explicitly flush, so it could
> be that this weird OS is buffering in a way you do not expect.
>
> Matt
>
> On Mon, Mar 16, 2020 at 11:02 PM Mark Adams wrote:
>
>> Thanks, I have verified that PetscP
Our code broke in moving from v3.7 to current. The problem seems to be in
MatCreateVecs
Our code has:
Vec::XVec
Vec::BVec
this%xVec2 = PETSC_NULL_VEC
this%bVec2 = PETSC_NULL_VEC
call MatCreateVecs(solver%KKTmat,solver%xVec2,solver%bVec2,ierr)
Petsc code:
PETSC_EXTERN void PETSC_STDCALL m
>
> Passing NULL to MatCreateVecs() means that you do not want a vector out:
>
>
Yes, I want both vectors out, and we pass it Vecs that have been initialized
with PETSC_NULL_VEC.
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateVecs.html
>
> I am guessing that this was b
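For reference, a hedged sketch of the C-side calling convention under
discussion (the function and variable names are illustrative): pass the
address of each Vec you want created, or NULL for one you do not want.

#include <petscmat.h>

static PetscErrorCode CreateSolverVecs(Mat A, Vec *x, Vec *b)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreateVecs(A, x, b);CHKERRQ(ierr); /* creates both x and b */
  /* MatCreateVecs(A, x, NULL) would create only x */
  PetscFunctionReturn(0);
}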
On Tue, Mar 17, 2020 at 2:25 PM Matthew Knepley wrote:
> On Tue, Mar 17, 2020 at 2:16 PM Mark Adams wrote:
>
>>
>>
>>> Passing NULL to MatCreateVecs() means that you do not want a vector out:
>>>
>>>
>> Yes, I want both vectors out an
that.
(Did SNESSetFunction require a non-null vector since v3.7?)
Thanks,
Mark
On Tue, Mar 17, 2020 at 2:52 PM Matthew Knepley wrote:
> On Tue, Mar 17, 2020 at 2:48 PM Mark Adams wrote:
>
>>
>>
>> On Tue, Mar 17, 2020 at 2:25 PM Matthew Knepley
>> wrote:
>>
-2 )
> (gdb) p A
> $5 = ( v = -2 )
> (gdb) p ksp
> $6 = ( v = -2 )
> (gdb) p pc
> $7 = ( v = -2 )
> (gdb) c
> Continuing.
>
> Breakpoint 3, MAIN__ () at ex1f.F90:52
> 52        call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr)
> (gdb) p x
> $8 = ( v =
eturn arguments are not going to work
in Fortran and we need to work around it.
On Tue, Mar 17, 2020 at 3:09 PM Mark Adams wrote:
> I see what is happening. I think. We have:
>
> call MatCreateVecs(solver%KKTmat,solver%xVec2,solver%bVec2,ierr)
> call VecDuplicate(solver%bVec2,
tscErrorCode ierr
> > 35        PetscInt i,n,col(3),its,i1,i2,i3
> > 36        PetscBool flg
> > 37        PetscMPIInt size
> > (gdb) p x
> > $2 = ( v = -2 )
> > (gdb) p b
> > $3 = ( v = -2 )
> > (gdb) p u
> > $4 = ( v = -2 )
&
Are there any plans/interest in assembling AIJ matrices on GPUs?
I have a somewhat strangely balanced FE operator and I have each thread
block create one element matrix, one thread per integration point with an
expensive computation at each integration point. Then I assemble the
matrices on the CPU
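A hedged sketch of the CPU-side assembly step described above: element
matrices computed elsewhere (e.g. on the GPU) are inserted with MatSetValues.
The function, array layout, and names are only an illustration, not the
actual application code.

#include <petscmat.h>

static PetscErrorCode AssembleFromElementMatrices(Mat A, PetscInt nelems, PetscInt nen,
                                                  const PetscInt *rows, const PetscScalar *elemMats)
{
  PetscInt       e;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  for (e = 0; e < nelems; ++e) {
    /* one nen x nen element matrix, with its global row/column indices */
    ierr = MatSetValues(A, nen, rows + e*nen, nen, rows + e*nen,
                        elemMats + e*nen*nen, ADD_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}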
fusing the element construction with assembly? That
seems reasonable now that I think about it.
It's the same coloring, right?
And, do we have an appropriate coloring method now?
Thanks,
Mark
> but lots of people
> will need AIJ so it's certainly worthwhile.
>
> Mark Adams
On Sun, Mar 29, 2020 at 7:51 PM Jed Brown wrote:
> Mark Adams writes:
>
> > On Sun, Mar 29, 2020 at 6:20 PM Jed Brown wrote:
> >
> >> We are interested in assembling point-block diagonals for use in
> >> pbjacobi smoothing. This is much simpler than AIJ,
>>
Note, when I rebased I got one conflict, in a header where I added stuff at
the end and master did also so I just kept them both and continued.
Other than that it was a clean rebase.
On Thu, Apr 2, 2020 at 10:15 PM Mark Adams wrote:
>
>
> On Thu, Apr 2, 2020 at 9:32 PM Satish Bal
On Thu, Apr 2, 2020 at 10:33 PM Satish Balay wrote:
> Is this branch mark/feature-xgc-interface-rebase pushed?
yes.
> I had done a build with the current state of it - and the build went
> through fine. This was on linux.
>
> Will try OSX now.
>
> Satish
>
> On
" to merge the remote branch into yours)
Untracked files:
(use "git add ..." to include in what will be committed)
out2.txt
src/dm/impls/plex/examples/tutorials/Landau/
On Thu, Apr 2, 2020 at 10:45 PM Mark Adams wrote:
>
>
> On Thu, Apr 2, 2020 at 10:33 PM
should I
just start over? I thought I started with a clean version of my code from
the repo.
Thanks,
> So likely something in your rebased sources is causing bfort to fail..
>
> Satish
>
> On Thu, 2 Apr 2020, Mark Adams wrote:
>
> > I see:
> > 22:46 128 mark/feature-xgc-
he primary issue: configure, the primary error is not logged in
> configure.log. Running manually - I get:
>
> bfort terminating at 664: Exceeded output count limit!
>
> $ grep '/*@' *.c |wc -l
> 600
>
> This single dir creates 600 man pages? Perhaps its time to re
I am in no hurry to rebase.
This branch added just a few fortran stubs, so Plex is clearly about to
bust. (Matt has a branch with more Fortran stubs)
I can wait until it's in master.
Thanks,
Mark
On Fri, Apr 3, 2020 at 12:06 PM Satish Balay via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:
> On Fri,
I use grep.
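(For example, something along the lines of "./ex19 -help | grep -i gamg"; the
executable and the pattern are just placeholders.)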
On Thu, Apr 9, 2020 at 9:11 PM Jacob Faibussowitsch
wrote:
> Hello All,
>
> Is there any built-in way to filter -help (like -info)? Standard PETSc
> -help dumps an ungodly amount of stuff and if using SLEPc it spits out 8x
> as much.
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jaco
Should I do an MR to fix this?
make.log
include petsclog.h. Don't know why OMP triggered the
> error.
> --Junchao Zhang
>
>
> On Mon, Apr 13, 2020 at 9:59 AM Mark Adams wrote:
>
>> Should I do an MR to fix this?
>>
>
BTW, I can build on SUMMIT with logging and OMP, apparently. I also seem to
be able to build with debugging, both of which are not allowed according to
the docs. I am puzzled.
On Mon, Apr 13, 2020 at 12:05 PM Mark Adams wrote:
> I think the problem is that you have to turn off logging w
-log=0 with --with-threadsafety
***
On Mon, Apr 13, 2020 at 2:54 PM Junchao Zhang
wrote:
>
>
>
> On Mon, Apr 13, 2020 at 12:06 PM Mark Adams wrote:
>
>> BTW, I can build on SUMMIT with logging and OM
tey.
> I have https://gitlab.com/petsc/petsc/-/merge_requests/2714 that should
> fix your original compilation errors.
>
> --Junchao Zhang
>
> On Mon, Apr 13, 2020 at 2:07 PM Mark Adams wrote:
>
>> https://www.mcs.anl.gov/petsc/miscellaneous/petscthreads.html
>>
wrote:
> Probably matrix assembly on GPU is more important. Do you have an example
> for me to play with, to see what GPU interface we should have?
> --Junchao Zhang
>
> On Mon, Apr 13, 2020 at 5:44 PM Mark Adams wrote:
>
>> I was looking into assembling matrices with thre
On Thu, Apr 16, 2020 at 9:31 AM Matthew Knepley wrote:
> On Thu, Apr 16, 2020 at 8:42 AM Mark Adams wrote:
>
>> Yea, GPU assembly would be great. I was figuring OMP might be simpler.
>>
>> As far as the interface, I am flexible, the simplest way to do it would
>>
On Thu, Apr 16, 2020 at 10:18 AM Matthew Knepley wrote:
> On Thu, Apr 16, 2020 at 10:11 AM Mark Adams wrote:
>
>> On Thu, Apr 16, 2020 at 9:31 AM Matthew Knepley
>> wrote:
>>
>>> On Thu, Apr 16, 2020 at 8:42 AM Mark Adams wrote:
>>>
>>>>
I would like to modify SuperLU_dist but if I change the source and
configure it says no need to reconfigure, use --force. I use --force and it
seems to clobber my changes. Can I tell configure to build, but not
download, SuperLU?
rst, commit your changes to the superlu_dist branch, then rerun
>> configure with
>>
>> —download-superlu_dist-commit=HEAD
>>
>>
>> > On Apr 20, 2020, at 12:50 AM, Mark Adams wrote:
>> >
>> > I would like to modify SuperLU_dist but if I chang
Also, we have PRNTlevel>=2 in SuperLU_dist. This is causing a lot of
output. It's not clear where that is set (it's a #define)
On Sun, Apr 19, 2020 at 9:28 PM Mark Adams wrote:
> Sherry, I found the problem.
>
> I added this print statement to dDestroy_LU
>
>
level=1
-DPROFlevel=0', but I think it is set at >= 2. I have manually disabled the
print statements (~ 5 places).
Thanks,
Mark
> Sherry
>
>
> On Sun, Apr 19, 2020 at 6:32 PM Mark Adams wrote:
>
>> Also, we have PRNTlevel>=2 in SuperLU_dist. This is causing a lot
e print statements (~ 5 places).
>
> Thanks,
> Mark
>
>
>> Sherry
>>
>>
>> On Sun, Apr 19, 2020 at 6:32 PM Mark Adams wrote:
>>
>>> Also, we have PRNTlevel>=2 in SuperLU_dist. This is causing a lot of
>>> output. It's n
It sucks that MKL fails for empty matrices.
I don't know of any reason that we care how many columns are in a
matrix with no rows. I see no reason not to let it stay the way it is, that
is with the number of columns that it should have if it had rows.
I would vote for just doing what you need to do to
My pipeline is failing on ksp/ex71.c and it seems to be picking up an "alt"
version of the output. I tried REPLACE=1 and both output files seemed to
change. What is going on with these "alt" output files?
total number of mallocs used during MatSetValues calls=0
# < inner preconditioner:
# 94,95c66,67
# < type: ml
# < type is MULTIPLICATIVE, levels=3 cycles=v
# ---
# > type: gamg
# > type is MULTIPLICATIVE, levels=2 cycles=v
# 97a70,
>
>
>
> alt files are to mask diffs that come up due to differences in OS, CPU,
> compilers [and versions] etc - that produce numerical differences. So the
> solver type should not change from regular to alt file. [unless its a
> default - and this changes on certain builds]
>
> Or if the alt file
I am getting these diff failures in the pipeline. I am guessing that we
started throwing diffs for integer diffs recently?
DMSwarm seems to pick a different size 'atomic' for different sized
integers.
not ok diff-dm_impls_swarm_tutorials-ex1_0 # Error code: 1
# 24c24
# < [ 1]DMSwarm_rank
requires: !pgf90_compiler
>
> Satish
>
> On Tue, 7 Jul 2020, Mark Adams wrote:
>
> > DMSwarm has custom fortran wrappers in ftn-custom and f90-custom. I am
> > finding that the f90 one has junk in the string length with PGI compilers
> > (see attached). The o
_FORTRAN_CHARLEN_T lenN)
>
> Satish
>
>
> On Tue, 7 Jul 2020, Mark Adams wrote:
>
> > Thanks, I got this error:
> >
> > PETSC_EXTERN void dmswarmgetfield_(DM *dm, char *name, PetscInt
> *blocksize,
> > PetscDataType *type, F90Array1d *ptr, int *ie
I just started getting this error and I have no idea what it is from. Any
ideas?
08:52 knepley/feature-swarm-fortran= ~/Codes/petsc$ make -f gmakefile test
search='dm_impls_swarm_tutorials-ex1f90_0'
Using MAKEFLAGS: search=dm_impls_swarm_tutorials-ex1f90_0
FC arch-macosx-gnu-g/tests/dm/impl
>
>
> I suspect you have upgraded gfortran and now have gfortran version 10,
09:35 knepley/feature-swarm-fortran= ~/Codes/petsc$ mpif90 --version
GNU Fortran (Homebrew GCC 10.1.0) 10.1.0
> which is pickier about argument matching.
>
> If you add -fallow-argument-mismatch to the fortran flags, d
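(One hedged way to do that is presumably via the configure option
'--FFLAGS=-fallow-argument-mismatch'; the exact placement is an assumption.)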
REPLACE=1 is doing something funny:
11:12 knepley/feature-swarm-fortran *= ~/Codes/petsc$ make -f gmakefile
test search='dm_impls_swarm_tutorials-ex1f90_0' REPLACE=1
Using MAKEFLAGS: REPLACE=1 search=dm_impls_swarm_tutorials-ex1f90_0
TEST
arch-macosx-gnu-g/tests/counts/dm_impls_swarm_tutor
I am getting pipeline errors for adams/feature-dmplex-snes-landau on
Windows-uni. mswin-gnu is fine.
There are lots of errors, here is the start. I've tried to match
mat/viennacl, which has .cxx files, w/o luck.
Any ideas?
Thanks,
Mark
CXX
arch-ci-mswin-uni/obj/dm/impls/plex/landau/kokk
s in the pipeline. I don't have this machine.
>
> Barry
>
>
> > On Jul 12, 2020, at 11:51 AM, Mark Adams wrote:
> >
> > I am getting pipeline errors for adams/feature-dmplex-snes-landau on
> Windows-uni. mswin-gnu is fine.
> >
> > There are lots of
> How is kokkos enabled? Is there no --download-kokkos [or equivalent]
> needed in configure for kokkos?
> >
> > Note --with-cxx=0 - likely g++ is somehow getting picked up here [ with
> /usr/include/stdio.h] - resulting in failure
> >
> > Satish
> >
> > O
h] - resulting in failure
>
> Satish
>
> On Sun, 12 Jul 2020, Mark Adams wrote:
>
> > I am getting pipeline errors for adams/feature-dmplex-snes-landau on
> > Windows-uni. mswin-gnu is fine.
> >
> > There are lots of errors, here is the start. I've
I have a wip C++ test that just calls PetscInitialize and Finalize, and I
am getting some sort of error in the include file:
https://gitlab.com/petsc/petsc/-/jobs/635768809
All I do is:
#include
Any ideas?
Thanks,
Mark
#requiresdefine is not meant for example makefile. You can use:
>
> build: requires: in /*TEST section
>
> Satish
>
>
> On Mon, 13 Jul 2020, Mark Adams wrote:
>
> > I have a wip C++ test that just calls PetscInitialize and Finalize, and I
> > am getting some s
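A hedged sketch of what that /*TEST block convention looks like in a test
source file (the requires value and suffix are illustrative):

/*TEST
  build:
    requires: cxx
  test:
    suffix: 0
TEST*/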
I am getting this error on two pipeline tests. They don't seem to have
anything to do with my branch. I have gotten these several times today.
/home/petsc/builds/KFnbdjNX/0/petsc/petsc/arch-ci-linux-cuda-single-cxx/lib/libpetscdm.so:
undefined reference to `DMPlexSNESComputeJacobianFEM'
/home/pets