Hello Chas,

Thank you for your good suggestions.
I have tidied up everything you said and written the code as follows.
Is it right?

while (<$sock>)
{
    my ($key, $value) = split;
    my $timestamp = time();
    push @records, { time   => $timestamp,
                     key    => $key,
                     amount => $value,
                   };

    # drop records older than five minutes (the oldest are at the front)
    shift @records while $records[0]{time} < time() - 5*60;

    # total per key over the remaining five-minute window
    my %sum;
    for my $rec (@records) {
        $sum{$rec->{key}} += $rec->{amount};
    }

    for (keys %sum)
    {
        do_something() if $sum{$_} > LIMIT;
    }
}
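
To make it easier to try out, here is a fuller, self-contained sketch of
what I mean. LIMIT, do_something() and the socket setup below are only
placeholders I invented for illustration; my real definitions live
elsewhere.

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

use constant LIMIT => 1000;    # placeholder threshold, not my real value

# placeholder action; the real do_something() is defined elsewhere
sub do_something {
    my ($key, $total) = @_;
    print "$key exceeded the limit with $total\n";
}

# placeholder connection; in my real code $sock is passed in
my $sock = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 9000,
    Proto    => 'tcp',
) or die "cannot connect: $!";

my @records;
while (<$sock>)
{
    my ($key, $value) = split;
    next unless defined $value;    # skip malformed lines

    push @records, { time => time(), key => $key, amount => $value };

    # keep only records from the last five minutes
    shift @records while @records and $records[0]{time} < time() - 5*60;

    # total per key over the five-minute window
    my %sum;
    $sum{ $_->{key} } += $_->{amount} for @records;

    for my $name (keys %sum)
    {
        do_something($name, $sum{$name}) if $sum{$name} > LIMIT;
    }
}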


-----Original Message-----
>From: Chas Owens <[EMAIL PROTECTED]>
>Sent: Jan 27, 2006 11:14 PM
>To: Jeff Pang <[EMAIL PROTECTED]>
>Cc: beginners@perl.org
>Subject: Re: count in continuous time piece
>
>On 1/27/06, Jeff Pang <[EMAIL PROTECTED]> wrote:
>> I'm still confused about this problem. Maybe I have not described it
>> clearly.
>> For example, some items arrive continuously over time:
>>
>> 00:00:01  itemA  200
>> 00:00:02  itemB  100
>> 00:00:03  itemC  150
>> 00:00:04  itemD  300
>> 00:00:05  itemE   250
>> ...
>> (the items appear in 'name => value' style, and most of the items'
>> names differ from one another)
>>
>> Every 5 minutes, I do the following:
>>
>> {
>>      sleep 5*60;
>>      my %hash = calculate_from_the_items();  # for each item, sum all the
>> historical records into a hash value keyed by that item's name
>>      for (keys %hash) {
>>          do_something() if $hash{$_} > LIMIT;
>>      }
>>      clear_the_items_to_null();  # clear all the items' records
>> }
>>
>> But this is a very simple approach, and I'm not satisfied with it.
>> I want to get this result as each item arrives:
>>
>> {
>>     my $sock=shift;
>>     while(<$sock>)
>>     {
>>          my $isTrue = do_judgement();  # judge whether it has reached
>> some limit in the last 5 minutes relative to the current time
>>          if ($isTrue){
>>             do_something();
>>          }
>>          clear();  # clear this item's historical records older than
>> 5 minutes relative to the current time
>>     }
>> }
>>
>> How can I do this? Any suggestion is welcome. Thanks.
>
>In your first example you are creating an aggregate based on a key.
>In your second example you want a similar aggregate, but you don't
>want to count items that are older than 5 minutes.  This means you
>will need to keep the items separate until the check, and you will
>need to store a timestamp (to know how old each item is).  Assuming
>you cannot change what is coming in from the socket, you will need to
>attach the timestamp at which each item was read (as opposed to when
>it was created).  The ideal pattern for this is an array of either
>arrays or hashes, depending on your preference.  You will need to read
>in all available records from the socket, attaching the time each item
>was read.  At the end you should have a data structure that looks like
>this:
>
>my @records = (
>    {
>        time     => 1138374300,
>        key      => 'joe',
>        amount => 20
>    },
>    {
>        time     => 1138374362,
>        key      => 'mary',
>        amount => 10
>    },
>    {
>        time     => 1138374462,
>        key      => 'joe',
>        amount => 5
>    },
>    {
>        time     => 1138374500,
>        key      => 'joe',
>        amount => 1
>    },
>    {
>        time     => 1138374530,
>        key      => 'mary',
>        amount => 20
>    },
>);
>
>Assuming it is now 1138374600, the first record needs to be expunged.
>This could be achieved by filtering the array like this (warning:
>untested code):
>
>@records = grep { $_->{time} >= time() - 5*60 } @records;
>
>You can then loop over the remaining records, creating the aggregate like this:
>
>my %sum;
>for my $rec (@records) {
>    $sum{$rec->{key}} += $rec->{amount};
>}


--
http://home.earthlink.net/~pangj/
