Again, thanks for your response. That certainly is helpful and makes a lot of sense, but there is one thing I don't understand, unless I'm reading "when it starts up" too literally. I would think that if it really happens at driver start time (i.e. net start npf), then the drift would be cumulative and restarting my application would not cause it to reset. Given the behavior that I am seeing, it would only make sense if it were to happen at pcap_open() time.
Thanks very much for your help.
Mike
| "David Barnish"
<[EMAIL PROTECTED]>
04/21/2005 12:05 PM
|
|
So, over a 24-hour period, your time deltas drift by 4600 milliseconds, which is about 3.2 milliseconds per minute.
I think you are seeing this because you are comparing apples and oranges: the time stamp from the packet itself against a time stamp you get from the system via your call to GetSystemTimeAsFileTime().
The driver captures the packet and sets the time stamp in the packet's header by calling the kernel function KeQueryPerformanceCounter(). This function returns the current number of ticks since the system was booted. A "tick" is a unit tied to the CPU's clock. If you pass this function a pointer to a large integer, it also returns the number of ticks per second (the timer frequency). Knowing the ticks per second and the current tick count, you can compute elapsed time; but unless you re-read the frequency each time you call KeQueryPerformanceCounter(), you will always have to contend with a slight amount of drift in times calculated this way. Because of performance concerns in the driver, it retrieves the frequency only once, when it starts up. When a packet arrives, it calls KeQueryPerformanceCounter() with a NULL pointer, so it gets back only the current tick count, and it uses the timer frequency it cached at startup to compute the packet's time stamp.
The help page for this function states that it should not be called too frequently and should not be used to calculate actual time. Here is what they say:
------------------------------------
KeQueryPerformanceCounter returns the performance counter value in units of ticks.
Comments
KeQueryPerformanceCounter always returns a 64-bit integer representing the current number of ticks. Accumulating the count begins when the system is booted.
The resolution of the timer used to accumulate the current count can be obtained by specifying a pointer value for PerformanceFrequency. The value returned is the number of ticks per second.
KeQueryPerformanceCounter is intended for time-stamping packets or for computing performance and capacity measurements. It is not intended for measuring elapsed time, for computing stalls or waits, or for iterations.
Use this routine as infrequently as possible. Depending on the platform, KeQueryPerformanceCounter can disable system-wide interrupts for a minimal interval. Consequently, calling this routine frequently, as in an iteration, defeats its purpose of returning very fine-grained, running time-stamp information. Calling this routine too frequently can degrade I/O performance for the calling driver and for the system as a whole.
--------------------------------------------
In your program, you are using the system function GetSystemTimeAsFileTime() to get your time value. I don't know how this function works under the covers, but I suspect it is more accurate as a time-difference source. So, at millisecond granularity, the comparison is flawed in exactly the way you observed: we would expect deltas that drift (increase) over time. Time, especially at this granularity or finer, is a constant problem that developers must contend with. I'm sure you can find many good articles on the subject with a little searching; perhaps you'll find an answer that helps you achieve what you are trying to do.
Hope this helps.
Thank you,
David Barnish
"God does not play dice!"
-- Albert Einstein
"Not only does God play dice with
the Universe - he sometimes casts them where they can't be seen."
-- Stephen Hawking
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 20, 2005 6:03 PM
To: [email protected]
Subject: RE: [WinPcap-users] capture delay based on timestamps increases?
Good questions.
1.) It's MPEG video, so it has continuity counters that help you detect if packets are missing. I can watch it on devices that are very sensitive to missing data, and those devices show no problems. The real application (that this sample is emulating) actually writes those packets to disk for later playout.
2.) I've only run the sample program for the last several hours, but the real program exhibiting the problem has been run at least overnight and up to 24+ hours at a time.
3.) During an overnight period, the delay has increased from approx. 400ms to 5000ms.
Thanks for responding.
Mike
| "David Barnish"
<[EMAIL PROTECTED]>
04/20/2005 06:46 PM
|
|
Mike,
A few questions:
1. How do you know that you are keeping up with the bitstream? Your app is getting packets from the driver's buffer. Could you be falling behind while the buffer fills up?
2. How long have you run your program before restarting?
3. What was the largest delta you saw at this time?
Thank you,
David Barnish
"God does not play dice!"
-- Albert Einstein
"Not only does God play dice with
the Universe - he sometimes casts them where they can't be seen."
-- Stephen Hawking
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 20, 2005 5:24 PM
To: [email protected]
Subject: [WinPcap-users] capture delay based on timestamps increases?
Hello,
Below is a sample program adapted from the print packets sample in the documentation. Instead of printing the packets, for each packet it calculates the difference between the current time and the timestamp in the pcap packet header, and puts those deltas into 500ms buckets. After so many samples it prints those buckets and the maximum delay value to a file. My problem is that when capturing, for example, from a constant bitrate 4Mbps stream, that maximum delta value keeps getting larger and larger over time, even though the program is keeping up with the input data. If I restart the application, the delay goes back to the minimum value.
Can anyone help explain this increasing delay in timestamps, please?
Thanks very much.
Mike Maffa
[EMAIL PROTECTED]
The sample was built with VS.NET 2003 using WinPcap 3.1-Beta4.
-----------------------------------------------------------------------
#include "stdafx.h"
#define HAVE_REMOTE
#include "pcap.h"
__int64 m_i64_500OrLess = 0;
__int64 m_i64_500To1000 = 0;
__int64 m_i64_1000To1500 = 0;
__int64 m_i64_1500To2000 = 0;
__int64 m_i64_2000To2500 = 0;
__int64 m_i64_2500To3000 = 0;
__int64 m_i64_3000OrMore = 0;
__int64 m_i64TotalSamples = 0;
__int64 m_i64Max = 0;
__int64 m_i64Min = 0x7FFFFFFFFFFFFFFFi64; // start high so the first sample sets the real minimum
void AddDeltaTimeStat(struct timeval ts);
int _tmain(int argc, _TCHAR* argv[])
{
pcap_if_t *alldevs;
pcap_if_t *d;
int inum;
int i=0;
pcap_t *adhandle;
int res;
char errbuf[PCAP_ERRBUF_SIZE];
// struct tm *ltime;
// char timestr[16];
struct pcap_pkthdr *header;
const u_char *pkt_data;
    /* Retrieve the device list on the local machine */
    if (pcap_findalldevs_ex(PCAP_SRC_IF_STRING, NULL, &alldevs, errbuf) == -1)
    {
        fprintf(stderr, "Error in pcap_findalldevs: %s\n", errbuf);
        exit(1);
    }
    /* Print the list */
    for (d = alldevs; d; d = d->next)
    {
        printf("%d. %s", ++i, d->name);
        if (d->description)
            printf(" (%s)\n", d->description);
        else
            printf(" (No description available)\n");
    }
    if (i == 0)
    {
        printf("\nNo interfaces found! Make sure WinPcap is installed.\n");
        return -1;
    }
printf("Enter the interface number (1-%d):",i);
scanf("%d", &inum);
    if (inum < 1 || inum > i)
    {
        printf("\nInterface number out of range.\n");
        /* Free the device list */
        pcap_freealldevs(alldevs);
        return -1;
    }
    /* Jump to the selected adapter */
    for (d = alldevs, i = 0; i < inum - 1; d = d->next, i++);
    /* Open the device */
    if ((adhandle = pcap_open(d->name, // name of the device
                    65536,             // portion of the packet to capture;
                                       // 65536 guarantees that the whole packet
                                       // will be captured on all link layers
                    PCAP_OPENFLAG_PROMISCUOUS, // promiscuous mode
                    1000,              // read timeout
                    NULL,              // authentication on the remote machine
                    errbuf             // error buffer
                    )) == NULL)
    {
        fprintf(stderr, "\nUnable to open the adapter. %s is not supported by WinPcap\n", d->name);
        /* Free the device list */
        pcap_freealldevs(alldevs);
        return -1;
    }
    // compile the filter
    struct bpf_program fcode;
    unsigned int uiNetMask = 0xffffff;
    char szFilter[] = "ip and udp and dst host 238.5.5.21 and dst port 258";
    if (pcap_compile(adhandle, &fcode, szFilter, 1, uiNetMask) < 0)
    {
        _tprintf(_T("Unable to compile the packet filter. Check the syntax.\n"));
    }
    // set the filter
    if (pcap_setfilter(adhandle, &fcode) < 0)
    {
        _tprintf(_T("Unable to set the packet filter, error from pcap_setfilter()."));
    }
printf("\nlistening on %s...\n", d->description);
    /* At this point, we don't need the device list any more. Free it */
    pcap_freealldevs(alldevs);
    /* Retrieve the packets */
    while ((res = pcap_next_ex(adhandle, &header, &pkt_data)) >= 0)
    {
        if (res == 0)
            /* Timeout elapsed */
            continue;
        AddDeltaTimeStat(header->ts);
        /* convert the timestamp to readable format */
        //ltime = localtime(&header->ts.tv_sec);
        //strftime(timestr, sizeof timestr, "%H:%M:%S", ltime);
        //printf("%s,%.6d len:%d\n", timestr, header->ts.tv_usec, header->len);
    }
    if (res == -1)
    {
        printf("Error reading the packets: %s\n", pcap_geterr(adhandle));
        return -1;
    }
    return 0;
}
void AddDeltaTimeStat(struct timeval ts)
{
#define EPOCHFILETIME (116444736000000000i64)
    LARGE_INTEGER liNow;
    FILETIME ftNow;
    __int64 i64Now;
    struct timeval tvNow;
    GetSystemTimeAsFileTime(&ftNow);
    liNow.LowPart = ftNow.dwLowDateTime;
    liNow.HighPart = ftNow.dwHighDateTime;
    i64Now = liNow.QuadPart; /* in 100-nanosecond intervals */
    i64Now -= EPOCHFILETIME; /* offset to the Epoch time */
    i64Now /= 10;            /* in microseconds */
    tvNow.tv_sec = (long)(i64Now / 1000000);
    tvNow.tv_usec = (long)(i64Now % 1000000);
    __int64 i64TsInusec = Int32x32To64(ts.tv_sec, 1000000) + ts.tv_usec;
    __int64 i64DiffInusec = 0;
    if ((i64Now - i64TsInusec) > 0)
    {
        i64DiffInusec = i64Now - i64TsInusec; // in microseconds
    }
    __int64 i64DiffInms = i64DiffInusec / 1000; // in milliseconds
    if (i64DiffInms <= m_i64Min)
    {
        m_i64Min = i64DiffInms;
    }
    if (i64DiffInms >= m_i64Max)
    {
        m_i64Max = i64DiffInms;
    }
    // bucketize
    if (i64DiffInms <= 500)
    {
        ++m_i64_500OrLess;
    }
    else if ((i64DiffInms > 500) && (i64DiffInms <= 1000))
    {
        ++m_i64_500To1000;
    }
    else if ((i64DiffInms > 1000) && (i64DiffInms <= 1500))
    {
        ++m_i64_1000To1500;
    }
    else if ((i64DiffInms > 1500) && (i64DiffInms <= 2000))
    {
        ++m_i64_1500To2000;
    }
    else if ((i64DiffInms > 2000) && (i64DiffInms <= 2500))
    {
        ++m_i64_2000To2500;
    }
    else if ((i64DiffInms > 2500) && (i64DiffInms <= 3000))
    {
        ++m_i64_2500To3000;
    }
    else
    {
        ++m_i64_3000OrMore;
    }
    ++m_i64TotalSamples;
    if ((m_i64TotalSamples % 200000) == 0)
    {
        FILE *fp = _tfopen(_T("c:\\stats.log"), _T("a+"));
        if (fp != NULL)
        {
            TCHAR szTimeBuf[MAX_PATH];
            FILETIME LocalFileTime;
            SYSTEMTIME LocalTime;
            FileTimeToLocalFileTime(&ftNow, &LocalFileTime);
            FileTimeToSystemTime(&LocalFileTime, &LocalTime);
            _stprintf(szTimeBuf, _T("%02d/%02d/%04d %02d:%02d:%02d.%03d"),
                      LocalTime.wMonth, LocalTime.wDay, LocalTime.wYear,
                      LocalTime.wHour, LocalTime.wMinute, LocalTime.wSecond,
                      LocalTime.wMilliseconds);
            _ftprintf(fp, _T("\n%s\n"), szTimeBuf);
            _ftprintf(fp, _T("\t 500ms or less   : %I64d\n"), m_i64_500OrLess);
            _ftprintf(fp, _T("\t 500ms to 1000ms : %I64d\n"), m_i64_500To1000);
            _ftprintf(fp, _T("\t 1000ms to 1500ms: %I64d\n"), m_i64_1000To1500);
            _ftprintf(fp, _T("\t 1500ms to 2000ms: %I64d\n"), m_i64_1500To2000);
            _ftprintf(fp, _T("\t 2000ms to 2500ms: %I64d\n"), m_i64_2000To2500);
            _ftprintf(fp, _T("\t 2500ms to 3000ms: %I64d\n"), m_i64_2500To3000);
            _ftprintf(fp, _T("\t 3000ms or more  : %I64d\n"), m_i64_3000OrMore);
            _ftprintf(fp, _T("\t Min = %I64d, Max = %I64d\n"), m_i64Min, m_i64Max);
            fflush(fp);
            fclose(fp);
        }
    }
    return;
}
