That's funny, because I recently experienced the CPU slowdown as well when
reading fixed-size blocks from a continuous acquisition in DAQmx.
Perhaps the CPU is not totally blocked, but the read definitely slows it
down.  Perhaps there is a bug in DAQmx that NI is not aware of.  I am
pretty sure I set up multithreading properly, but I'm not positive.

I fixed the problem by using the old method: I check the number of
samples available using a property node, calculate how many milliseconds
it will be until my data is ready, use a millisecond delay to wait until
the data is ready, then read the block of data.  My CPU usage went from a
high percentage to a very low percentage when I added this to the
code.
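The wait calculation above is simple enough to sketch outside LabVIEW. Here is a
minimal Python version (the function name and parameters are illustrative, not
DAQmx API calls):

```python
def ms_until_ready(samples_needed, samples_available, sample_rate_hz):
    """Milliseconds until enough samples have accumulated in the buffer."""
    remaining = samples_needed - samples_available
    if remaining <= 0:
        return 0  # data is already ready; read immediately
    return int(remaining / sample_rate_hz * 1000)

# e.g. need 1000 samples, 250 already buffered, acquiring at 10 kS/s:
# 750 remaining samples / 10000 S/s = 75 ms to sleep before reading
```

Sleeping for that interval instead of sitting in a blocking read is what moves
the waiting out of the driver call and drops the CPU usage.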

Bruce

------------------------------------------
Bruce Ammons
Ammons Engineering
www.ammonsengineering.com
(810) 687-4288 Phone
(810) 687-6202 Fax



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Wednesday, May 19, 2004 8:59 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: DAQmx, etc.






Hi,

I'd like to clarify a few things about the "blocking" behavior of the
DAQmx Read VIs (the same rules also apply to the rest of DAQmx).

- One of the main problems with the blocking behavior in Traditional DAQ
was that you could not do other operations while you were waiting in the
AI Read.vi for samples to become available. This was due to the
single-threaded model of the Traditional DAQ driver and the fact that
Traditional DAQ used CINs (though they were replaced by DLLs in NI-DAQ
6.x). DAQmx is fully multithreaded and does not have this limitation. For
example, you can run two copies of your "cont&graph.vi" example
simultaneously (on different devices) without problems. Alternatively, if
you have two parallel while loops in a VI with a Read.vi in one of them,
the other while loop will not be slowed down while the Read VI is waiting
for samples to become available.
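As a rough analogy in a text language (generic Python threading here, standing
in for the multithreaded driver, not the DAQmx implementation itself): a
blocking wait in one thread does not stall a parallel thread, which is the
behavior the two parallel while loops get.

```python
import threading
import time

results = []

def blocking_read():
    # stands in for a Read VI waiting on samples to become available
    time.sleep(0.2)
    results.append("read done")

def other_loop():
    # stands in for the second while loop; it keeps iterating freely
    for i in range(5):
        results.append(f"loop tick {i}")
        time.sleep(0.01)

t1 = threading.Thread(target=blocking_read)
t2 = threading.Thread(target=other_loop)
t1.start(); t2.start()
t1.join(); t2.join()

# all five loop ticks complete well before the 200 ms blocking read returns
```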

- Scott's "non-blocking AI Read vi" in Traditional DAQ works by waiting
for the requested number of samples to become available before calling
the Read.vi.  DAQmx supports this feature natively when "-1" is wired
into the "number of samples per channel" parameter on the DAQmx Read VIs
for continuous acquisitions. Here is the online documentation for this
parameter:
 

 number of samples per channel specifies the number of samples to read.
 If you leave this input unwired or set it to -1, NI-DAQmx determines how
 many samples to read based on whether the task acquires samples
 continuously or acquires a finite number of samples.

 If the task acquires samples continuously and you set this input to -1,
 this VI reads all the samples currently available in the buffer.

 If the task acquires a finite number of samples and you set this input
 to -1, the VI waits for the task to acquire all requested samples, then
 reads those samples. If you set the Read All Available Data property to
 TRUE, the VI reads the samples currently available in the buffer and
 does not wait for the task to acquire all requested samples.
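The two "-1" behaviors can be captured in a small toy model (plain Python, a
simulation of the read semantics described above, not the DAQmx API):

```python
def samples_returned(requested, available, continuous, read_all_available=False):
    """Toy model of the 'number of samples per channel' semantics.

    Returns how many samples a read would return right now; None models
    a read that blocks until the requested samples have been acquired.
    """
    if requested == -1:
        if continuous or read_all_available:
            return available  # return whatever is in the buffer right now
        return None           # finite task: block until all samples acquired
    # explicit count: block until that many samples are available
    return requested if available >= requested else None
```

For a continuous task with -1 wired in, the read never waits, which is why it
behaves like the "non-blocking" approach rather than spinning in the driver.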

 





Regards,
Rajesh Vaidya

Measurements Infrastructure Group
National Instruments




------------------------------------------------------------------------

Subject: Re: DAQmx, etc.
From: "Scott Hannahs" <[EMAIL PROTECTED]>
Date: Tue, 18 May 2004 15:16:16 -0400

At 10:24 -0700 5/18/04, tim wen wrote:
>In LV6 the 'AI read.vi' hogs the CPU waiting for the number of samples
>requested to be available.  Someone (sorry for not remembering the name)
>came up with a 'non-blocking AI read.vi' which I have been using happily.
>I noticed the new DAQmx 'cont&graph.vi' is hogging the CPU also.  Is
>there a cure for it (I think it makes DLL calls)?

One version is at <http://sthmac.magnet.fsu.edu/labview> in the VI
library. I think there are a number of these around.  I have not updated
it for DAQmx since it is not available for my development platform. :-(

I don't know if it would be a simple modification to make it work with
DAQmx.  It is a fairly simple concept and not too complicated code.


>Another question: I've been using LV2-style globals to pass data between
>parallel loops and am wondering if a queue is a better way to go?
Probably.  If you are just passing data in one direction it can work
well.  With a LV2-style global you can build internal processing and
value manipulation into the global (i.e., an intelligent global).
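By way of analogy in a text language (Python's queue module here, standing in
for a LabVIEW queue, not anything DAQmx-specific): a queue gives one-directional
producer/consumer data passing where the consumer blocks on arrival instead of
polling a global.

```python
import queue
import threading

q = queue.Queue()  # thread-safe FIFO, analogous to a LabVIEW queue reference

def producer():
    # stands in for the acquisition loop pushing data
    for i in range(5):
        q.put(i)
    q.put(None)  # sentinel: no more data

received = []

def consumer():
    # stands in for the processing loop; get() blocks until data arrives
    while True:
        item = q.get()
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(received)  # [0, 1, 2, 3, 4]
```

The trade-off Scott notes still applies: a queue is a pipe, while an
LV2-style (functional) global can also encapsulate processing of the value.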

-Scott




