"Jason Dunham" <[EMAIL PROTECTED]> wrote:

>Other synchronous nodes may block the entire thread, preventing any
>other LV tasks in that thread from executing.  Remember that we LV
>programmers can choose "execution systems" but we don't have direct
>control over threads. We are also limited in that parallel code on the
>same diagram is always going to end up in the same execution system,
>which probably puts it in the same thread. The term we LV programmers
>use for these kinds of nodes is "pure evil".
>
>There are plenty of nodes in LV which appear synchronous to LV
>programmers (nearly all the wait functions, queue functions, etc), but
>don't block the LV execution system from  executing parallel tasks in
>the same VI.  That's what we want from our DAQ and GPIB calls too, but
>we're at the mercy of the NI engineers who create the LabVIEW node.  I'd
>guess that these kinds of nodes have to be internally asynchronous, so
>they don't block the thread, but must contain code to wait until the
>spawned thread finishes its task so that they appear synchronous to us.

Well, there are several forms of parallelism in LabVIEW. One is indeed
the LabVIEW task scheduler itself. It can only be used by LabVIEW nodes
themselves and has been in LabVIEW since at least 3.0, probably already
since 2.0. Most nodes in LabVIEW are synchronous with respect to this
scheduler, with the exception of the Wait and similar functions and some
other low-level nodes such as VISA Read/Write, TCP Read/Write, the
obsolete Device nodes, etc. Those nodes internally use callbacks and
occurrences to allow asynchronous operation, and they also interface
with the scheduler to tell it that they are waiting for something and
that other work should get priority.
The Call Library Node (CLN) and CIN cannot really make use of such
things. Well, they could, but NI would have to document a very
complicated API for interfacing with that scheduler, which I'm sure has
a very delicate mechanism that is easily affected by new features in
LabVIEW. Second, the C code would need to take very specific
precautions, which would be very difficult to document completely. As
such, I'm sure NI does not have the slightest interest in exposing such
an interface, even for their own external code drivers, as it would make
such a driver far too dependent on the actual LabVIEW version.

So CLNs and CINs are synchronous in that they block the calling thread
in LabVIEW. This is almost unavoidable, as LabVIEW has no way of
knowing what external code is doing while it executes. For all LabVIEW
knows, it could be doing horrible things to the stack and only restoring
it properly just before returning. If LabVIEW tried to preempt such
external code, even a BSOD could be a possibility.
The big difference a CLN or CIN can make is this: if the code is safely
written to be reentrant, you can configure the corresponding LabVIEW
node to call it reentrantly. That matters in LabVIEW, because reentrant
external code is called in the current execution system, while
non-reentrant external code is called in the UI execution system.

Before LabVIEW 7.0 this still had a limitation, as LabVIEW by default
allocated only one thread per execution system. That was acceptable
because you only blocked one of the many execution systems, but you had
to be careful when assigning execution systems to VIs to account for any
external code that might block such an execution system for some time.
In LabVIEW 7 this has been increased to 4 threads, I believe (except for
the UI system, which still uses only one thread, to make non-reentrant
code safe to run there). Of course, non-reentrant external code will use
up the single thread available to the UI system and will also compete
with the actual UI drawing itself, indirectly blocking many other things
in LabVIEW, since LabVIEW sometimes has to wait for the UI thread to
finish before it can continue executing the diagram code.

So in LabVIEW 7, even though a reentrant CLN blocks the calling thread,
LabVIEW still has more threads left in that execution system to execute
other, non-dataflow-dependent parts of the diagram. I have to admit I'm
very impressed by the almost seamless way LabVIEW makes multithreading
simply work.

There is a VI in vi.lib/utilities/sysinfo.llb to change the number of
threads allocated to an execution system. This supposedly also works in
LabVIEW 6.x, but there may be issues with multiple threads in one
execution system not working as smoothly as in LabVIEW 7.0.

>If the DAQmx nodes in LV don't have this capability, then I'm not sure
>DAQmx the Great Leap Forward which I've been led to believe.  We can get
>the behavior we want by polling the status of AI Read, so why change to
>DAQmx if we still have to implement this workaround to get decent
>multitasking from our computers.

The big leap forward in DAQmx is that the entire underlying DAQ
framework is reentrant, which was not the case for NI-DAQ. Even though I
think the CLNs calling NI-DAQ were mostly configured to be reentrant,
the underlying API largely was not, and therefore the intermediate
lvdaq.dll used semaphores to block entire resources while they were in
use. DAQmx is a new framework, supposedly redesigned from the ground up,
and blocking most probably only happens at the lowest level, for the
time an external resource (hardware registers, etc.) is accessed, not
for an entire high-level call.

Rolf Kalbermatter
CIT Engineering Nederland BV    tel: +31 (070) 415 9190
Treubstraat 7H                  fax: +31 (070) 415 9191
2288 EG Rijswijk        http://www.citengineering.com
Netherlands             mailto:[EMAIL PROTECTED]
 


