On 6/29/05, Rudi Starcevic <[EMAIL PROTECTED]> wrote:
> Hi,
>
> >I do my batch processing daily using a python script I've written. I
> >found that trying to do it with pl/pgsql took more than 24 hours to
> >process 24 hours worth of logs. I then used C# and in-memory hash
> >tables to drop the time to 2 hours, but I couldn't get mono installed
> >on some of my ol[...]
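
For what it's worth, the hash-table trick described above is roughly the
following (a minimal sketch, assuming Apache common/combined log format;
the log file name and field position are just placeholders):

from collections import defaultdict

def count_hits(log_path):
    # One pass over the raw log, tallying per-URL hits in an
    # in-memory hash table instead of one DB statement per line.
    counts = defaultdict(int)
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) > 6:            # skip malformed lines
                counts[fields[6]] += 1     # request path in common log format
    return counts

# Only the aggregated totals ever need to touch the database:
for url, hits in count_hits("access.log").items():
    print(url, hits)

The point is that the aggregation happens in process memory, so a day's
worth of log lines collapses into a few thousand per-URL totals before
the database is involved at all.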
On 6/28/05, Billy extyeightysix <[EMAIL PROTECTED]> wrote:
> Hola folks,
>
> I have a web statistics Pg database (user agent, urls, referrer, etc)
> that is part of an online web survey system. All of the data derived
> from analyzing web server logs is stored in one large table with each
> record [...] The bottleneck in the
> whole process is actually counting each data point (how many times a
> url was visited, or how many times a url referred the user to the
> website). So more specifically I am wondering if there is a way to store
> and retrieve the data such that it speeds up the counting of these data
> points.
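
If the counting at query time is the expensive part, one common approach
is to pre-aggregate into a summary table during the batch run, so reports
read stored totals instead of scanning the raw table. A rough sketch
using psycopg2 (the table and column names here are made up for
illustration):

import psycopg2

# Hypothetical schema: raw_hits(url text, referrer text, visited_at timestamp)
# plus a summary table url_counts(url text primary key, hits bigint).
conn = psycopg2.connect("dbname=webstats")
cur = conn.cursor()

# Rebuild the summary once per batch run: a single GROUP BY over the
# raw table, so report queries read precomputed totals instead of
# re-counting millions of rows.
cur.execute("TRUNCATE url_counts")
cur.execute("""
    INSERT INTO url_counts (url, hits)
    SELECT url, count(*) FROM raw_hits GROUP BY url
""")
conn.commit()

# Report-time lookup is now a single indexed fetch.
cur.execute("SELECT hits FROM url_counts WHERE url = %s", ("/index.html",))
print(cur.fetchone())

The trade-off is that the counts are only as fresh as the last batch run,
but for daily log analysis that is usually exactly the cadence you want.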