Okay, I'm solidly impressed now.
I know it says all over the place that because ParaView uses Python, you can
use Python. But for some reason, I always just looked at it as using the
grammar/structure of Python to interface with VTK and ParaView; for some
reason it takes me a while to recognize that I can use everything that comes
with Python. Not sure why, since I use Python every day for a lot of things.
I also get stuck on what is a transparent helper function and what is an
opaque interface to some VTK C++ function. For example, it didn't occur to me
that the coprocessor.WriteData() function just loops over a list of writers,
formats the file names, and updates the filters. I thought it was
something in C++ doing things that weren't exposed to the Python interface.
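For concreteness, here is a rough pure-Python sketch of the behavior described above: loop over the registered writers, format each file name from the time step, and update the writers that are due. FakeWriter and write_data are illustrative stand-ins, not ParaView's actual implementation.

```python
class FakeWriter:
    """Stand-in for a ParaView writer proxy; records what would be written."""
    def __init__(self, filename_template, frequency):
        self.template = filename_template
        self.frequency = frequency
        self.written = []

    def UpdatePipeline(self, filename):
        # A real writer would execute its pipeline and write the file here.
        self.written.append(filename)

def write_data(writers, timestep):
    """Loop over writers, format the file name, update the ones due this step."""
    for w in writers:
        if timestep % w.frequency == 0:
            w.UpdatePipeline(w.template % timestep)

writer = FakeWriter('output_%06d.vtm', 2)
write_data([writer], 4)   # due this step: writes output_000004.vtm
write_data([writer], 5)   # not due: nothing written
```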
Now that I'm slowly recognizing those things, though, Catalyst is immensely more
useful than the way I was using it before. I was sort of pigeonholed into
thinking it had to look identical to the output from the generator and that the
only thing to tweak was the pipeline.
I've got it set up now to track volume-integrated variables every fixed unit of
time and also to check whether different features appear in the solution. If they
appear, it writes some different data files and images. I've attached the
processing method just to close the loop and provide an example of how to do
the things I couldn't figure out (and didn't think were possible). I know
there are several things that could probably be done better in it, but it works
for me at the moment.
Thanks again!
Tim
________________________________
From: Andy Bauer <[email protected]>
Sent: Tuesday, February 7, 2017 11:03 AM
To: Gallagher, Timothy P
Cc: [email protected]
Subject: Re: [Paraview] Pipeline update with Catalyst
Indeed it is possible to write whenever you choose instead of just based on
time step number. If you skip RegisterWriter() when making a writer in the
Pipeline object, and instead set the file name and write it manually, you will
get what you want. It would look something like the attached file.
Note that I've set the outputfrequency to 1 here since I want
DoCoProcessing() to be called every time step. You can put more logic into
RequestDataDescription() if you don't want to create your grid and have
DoCoProcessing() called every time step.
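The attached file isn't reproduced in the archive, but the manual approach might be sketched like this. StubWriter, manual_write, and the flameSurface file-name template are illustrative stand-ins so the snippet runs outside ParaView, not actual ParaView API:

```python
class StubWriter:
    """Stand-in for an unregistered Catalyst writer proxy."""
    FileName = None
    def UpdatePipeline(self, time):
        # A real writer would execute and write FileName at this point.
        self.updated_at = time

def manual_write(writer, template, timestep, time):
    """Set the file name by hand and force the writer branch to execute."""
    writer.FileName = template % timestep
    writer.UpdatePipeline(time)   # demand-driven: nothing is written until this
    return writer.FileName

w = StubWriter()
name = manual_write(w, 'flameSurface_%07i.vtm', 42, 0.5)
```

Because the writer was never registered, coprocessor.WriteData() ignores it, so the script alone decides when a file appears.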
Thanks for asking detailed questions! There is a whole lot of flexibility
provided by using the Python scripts in Catalyst but it can take a bit of
expertise to get exactly what you want. I am glad that you are taking advantage
of this.
Cheers,
Andy
On Tue, Feb 7, 2017 at 10:33 AM, Gallagher, Timothy P
<[email protected]<mailto:[email protected]>> wrote:
Along those lines, is it possible to turn on/off specific data writers based on
something other than time step number?
I have the data collection based on something from IntegrateVariables (it did
require a broadcast from root), but I am creating multiple surfaces and would
like to only save the data files when the surface appears.
Is there a way to turn on/off the writers through the coprocessor.Pipeline
object, or maybe through arguments to WriteData?
As always, thanks for your help.
Tim
________________________________
From: Andy Bauer <[email protected]>
Sent: Tuesday, February 7, 2017 10:05 AM
To: Gallagher, Timothy P
Cc: [email protected]<mailto:[email protected]>
Subject: Re: [Paraview] Pipeline update with Catalyst
The pipeline update mechanism is often called a lazy update scheme, meaning it
only does the requested work and no more. If the UpdateProducers() method
automatically updated all of the filters, a lot of unneeded work would be done.
Think about a Catalyst script that branches into 3 separate outputs with a lot
of work done in each branch. If only one output is wanted during a time step,
updating everything could mean a lot of extra, unneeded work.
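As a toy illustration (plain Python, not ParaView code) of that lazy scheme: each node executes only when an update is requested of it, pulling the request up its own chain, so an unrequested branch does no work at all.

```python
class Node:
    """Toy demand-driven filter: executes only when an update is requested."""
    def __init__(self, name, upstream=None, log=None):
        self.name = name
        self.upstream = upstream
        self.log = log if log is not None else []

    def UpdatePipeline(self):
        if self.upstream is not None:
            self.upstream.UpdatePipeline()   # pull the request upstream first
        self.log.append(self.name)           # record that this filter executed

log = []
source  = Node('source', log=log)
contour = Node('contour', upstream=source, log=log)
slices  = Node('slice',   upstream=source, log=log)   # second branch, never requested

contour.UpdatePipeline()   # only 'source' and 'contour' run; 'slice' never does
```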
As for parallel communication in Catalyst, again it's an efficiency thing:
if we can avoid communication, we may as well. In interactive
mode (i.e. through the GUI) it's useful to know things like the total number of
points and cells, bounds, and field ranges, which requires communication. In
Catalyst this information isn't usually needed, so that kind of global
communication is skipped. For filters like the
Integrate Attributes filter, you should get the same behavior in the GUI as in
Catalyst with regard to parallel output, since the filter itself does not know
how ParaView is running. Some filters (e.g. Resample With Dataset), though, only
have non-empty results on process 0, so it is definitely safest to check filters
like Integrate Attributes in parallel.
Cheers,
Andy
On Tue, Feb 7, 2017 at 7:51 AM, Gallagher, Timothy P
<[email protected]<mailto:[email protected]>> wrote:
Thanks Andy,
I had a feeling the pipeline had to be manually updated. I thought maybe the
UpdateProducers also updated the pipeline. Thanks for clarifying that.
I also have the same concern about the value stored in each block's CellData
field. I couldn't find any documentation that indicates how the
IntegrateVariables filter distributes the data. But, looking at the block VTK
files that are written out, they all contain the same value of Area so I think
a global reduction is done in the filter and all blocks have the total values.
I could be wrong though.
Thanks again,
Tim
________________________________
From: Andy Bauer <[email protected]>
Sent: Tuesday, February 7, 2017 7:44 AM
To: Gallagher, Timothy P
Cc: [email protected]<mailto:[email protected]>
Subject: Re: [Paraview] Pipeline update with Catalyst
Hi Tim,
The short answer is that you need to call
coprocessor.Pipeline.flameArea.UpdatePipeline() before checking the area.
You also may need to use MPI to do a global sum or broadcast of the value,
since I think that value will either have only local values or perhaps the
proper global value only on process 0. You should have access to mpi4py in the
Python script in order to do that.
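That broadcast pattern might look like the following sketch. It assumes mpi4py's Bcast with a NumPy buffer, as Andy suggests is available inside the Catalyst script; SerialComm is a no-op stand-in so the snippet runs outside MPI, and compute_on_root is a hypothetical callback name.

```python
import numpy as np

def broadcast_scalar(comm, compute_on_root):
    """Rank 0 evaluates the scalar; every rank then receives it via Bcast."""
    value = np.zeros(1)
    if comm.Get_rank() == 0:
        value[0] = compute_on_root()   # e.g. read 'Area' from the filter output
    comm.Bcast(value, root=0)          # in-place broadcast of the NumPy buffer
    return float(value[0])

class SerialComm:
    """No-op stand-in so the sketch runs without MPI installed."""
    def Get_rank(self):
        return 0
    def Bcast(self, buf, root=0):
        pass

# With mpi4py this would be: from mpi4py import MPI; comm = MPI.COMM_WORLD
area = broadcast_scalar(SerialComm(), lambda: 3.5)
```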
The long answer is that ParaView and VTK use a demand-driven pipeline design,
meaning that until something explicitly tells a pipeline to update, it won't.
For the Catalyst scripts this is done in the coprocessor.WriteData(),
coprocessor.WriteImages() and coprocessor.DoLiveVisualization() calls, for each
output that is required (e.g. for a writer that is supposed to output at that
time step, but not for writers that are not). You can manually force a filter
to update with the UpdatePipeline() method, which updates that filter along
with everything it depends on. It won't update any filters that don't affect
the requested information, which means that for the flameArea filter it won't
touch any writers or image output, which is what you want.
There is some information on Catalyst script details at
https://blog.kitware.com/anatomy-of-a-paraview-catalyst-python-script/ but I
don't think that contains this information.
Cheers,
Andy
On Tue, Feb 7, 2017 at 7:14 AM, Gallagher, Timothy P
<[email protected]<mailto:[email protected]>> wrote:
Hello again,
I am working on a pipeline using Catalyst that writes data only when features
are detected. The idea is to have a 3D contour generated in the pipeline, and
when it is big enough, start recording data. There is a long lead-up to when
the features appear, and then they disappear rapidly, so I would like to only
collect data when the features are present.
To that end, my DoCoProcessing() function has something that checks the 'Area'
value in the CellData of an IntegrateVariables filter. The full function is
below. However, this doesn't ever write any images or data. It also doesn't
throw any errors, so I have a feeling the pipeline isn't actually
evaluated/updated after the call to UpdateProducers.
So, my question: at what point in the DoCoProcessing function is the pipeline
actually evaluated? Do all the filters execute when the UpdateProducers
function is called? Or do they only update when the outputs to which they are
connected are invoked, i.e. in WriteData and WriteImages?
Thanks,
Tim
def DoCoProcessing(datadescription):
    "Callback to do co-processing for current timestep"
    global coprocessor
    global lastTime
    global deltaT

    # Update the coprocessor by providing it the newly generated simulation data.
    # If the pipeline hasn't been setup yet, this will setup the pipeline.
    coprocessor.UpdateProducers(datadescription)

    curTime = datadescription.GetTime()
    if curTime >= lastTime + deltaT:
        lastTime = curTime
        if coprocessor.Pipeline.flameArea.CellData['Area'] > 1e-9:
            # Write output data, if appropriate.
            coprocessor.WriteData(datadescription)

            # Write image capture (Last arg: rescale lookup table), if appropriate.
            coprocessor.WriteImages(datadescription, rescale_lookuptable=False)

    # Live Visualization, if enabled.
    coprocessor.DoLiveVisualization(datadescription, "localhost", 22222)
_______________________________________________
Powered by www.kitware.com
Visit other Kitware open-source projects at
http://www.kitware.com/opensource/opensource.html
Please keep messages on-topic and check the ParaView Wiki at:
http://paraview.org/Wiki/ParaView
Search the list archives at: http://markmail.org/search/?q=ParaView
Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview
# ------------------------ Processing method ------------------------

def WriteDataFile(comm, object, filename, curTime, CellData=True):
    object.UpdatePipeline()
    if CellData:
        data = object.CellData
    else:
        data = object.PointData

    if comm.Get_rank() == 0:
        with open(filename, 'a+') as fid:
            # Rewind the file to see if it is empty or not
            fid.seek(0)
            if len(fid.read(1)) == 0:
                # The file is empty, we need to create the header
                outputStr = '# Time '
                for var in data.keys():
                    outputStr += '%s ' % var
                outputStr += '\n'
                fid.write(outputStr)

            outputStr = '%15.7e ' % curTime
            for var in data.keys():
                value = data[var].GetRange()[0]
                outputStr += '%15.7e ' % value
            outputStr += '\n'
            fid.write(outputStr)

def DoCoProcessing(datadescription):
    "Callback to do co-processing for current timestep"
    global coprocessor
    global lastTime
    global deltaT
    global comm
    global paths

    # Update the coprocessor by providing it the newly generated simulation data.
    # If the pipeline hasn't been setup yet, this will setup the pipeline.
    coprocessor.UpdateProducers(datadescription)

    curTime = datadescription.GetTime()
    timestep = datadescription.GetTimeStep()
    if curTime >= lastTime + deltaT:
        lastTime = curTime

        coprocessor.Pipeline.volumeIntegrals.UpdatePipeline()
        WriteDataFile(comm, coprocessor.Pipeline.volumeIntegrals,
                      paths['outdir'] + '/post/volumeIntegrals.dat',
                      curTime,
                      CellData=True)

    # Check if we need to write images and the first flameArea datasets to disk
    coprocessor.Pipeline.flameArea.UpdatePipeline()
    area = np.zeros(1)
    if comm.Get_rank() == 0:
        try:
            area[0] = coprocessor.Pipeline.flameArea.CellData['Area'].GetRange()[0]
        except AttributeError:
            area[0] = 0.0
    else:
        area[0] = 0.0
    comm.Bcast(area, root=0)

    if area[0] > 1e-9:
        # Write image capture (Last arg: rescale lookup table), if appropriate.
        coprocessor.WriteImages(datadescription, rescale_lookuptable=False)

        # Write out the surface and the area
        coprocessor.Pipeline.flameSurfaceWriter.FileName = paths['outdir'] + '/post/flameSurface_%07i.vtm' % timestep
        coprocessor.Pipeline.flameSurfaceWriter.UpdatePipeline(curTime)
        WriteDataFile(comm, coprocessor.Pipeline.flameArea,
                      paths['outdir'] + '/post/flameArea.dat',
                      curTime,
                      CellData=False)

    # Check for the second flame surface
    coprocessor.Pipeline.flameArea2.UpdatePipeline()
    area = np.zeros(1)
    if comm.Get_rank() == 0:
        try:
            area[0] = coprocessor.Pipeline.flameArea2.CellData['Area'].GetRange()[0]
        except AttributeError:
            area[0] = 0.0
    else:
        area[0] = 0.0
    comm.Bcast(area, root=0)

    if area[0] > 1e-9:
        # Write image capture (Last arg: rescale lookup table), if appropriate.
        coprocessor.WriteImages(datadescription, rescale_lookuptable=False)

        coprocessor.Pipeline.flameSurface2Writer.FileName = paths['outdir'] + '/post/flameSurface2_%07i.vtm' % timestep
        coprocessor.Pipeline.flameSurface2Writer.UpdatePipeline(curTime)
        WriteDataFile(comm, coprocessor.Pipeline.flameArea2,
                      paths['outdir'] + '/post/flameArea2.dat',
                      curTime,
                      CellData=False)