Greg Sabino Mullane wrote:
>
>>> Granted, I use a few MySQL features for this; I'm not sure if LIMIT
>>> exists in postgresql, and I'm fairly sure that the SQL_CALC_FOUND_ROWS
>>> directive (which will return the total rows in a select statement
>>> regardless of the LIMIT directives) doesn't...
Right - that was the line I was trying to find earlier. So much for my
theory about ignoring the LIMITs :-(
All I can think of to explain the speedup that people (including myself)
tend to see anyway is the indexes being cached in the key_buffer the
second+ times around.
Issac
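For readers who haven't met it, here is a minimal sketch of the
SQL_CALC_FOUND_ROWS idiom discussed above, in Perl/DBI; the DSN,
credentials, and table/column names are placeholders:

    use DBI;

    # Placeholder connection details; assumes a MySQL 4.x+ server.
    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                           { RaiseError => 1 });

    # SQL_CALC_FOUND_ROWS tells MySQL to compute the full match count
    # even though LIMIT trims what is actually returned.
    my $page = $dbh->selectall_arrayref(
        "SELECT SQL_CALC_FOUND_ROWS *
           FROM mytable
          WHERE something = 'complex'
          ORDER BY id
          LIMIT 0, 30");

    # FOUND_ROWS() must be read on the same connection, right after
    # the query above, before anything else runs on it.
    my ($total) = $dbh->selectrow_array("SELECT FOUND_ROWS()");
    printf "showing %d of %d rows\n", scalar @$page, $total;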
Jeff wrote:
If anything, it really doesn't make sense to cache something in the
query cache with limits intact; LIMIT is just a modifier which trims the
result set on the server side. Since LIMIT doesn't actually affect the
result set, per se, it doesn't make sense for the query cache to pay
attention to it at all.
Original Message
From: Perrin Harkins <[EMAIL PROTECTED]>
Subject: Re: Database transaction across multiple web requests
Date: Mon Apr 03 2006 03:55:54
How do you know it isn't just the operating system caching the disk
fetches? The docs certainly make it sound like a query cached with
LIMIT will not be useful when different LIMIT values are applied.
Because I run it from the command line, and Windows ain't that smart.
Trust me. :)
- Perrin
I missed part of the initial discussion, so sorry if this isn't quite
what you're looking for, but if you're talking about the query cache
(the one that caches the exact query and the exact results), this
output can be useful:
mysql> show status like "Q%";
[two-column Variable_name / Value table of the Qcache_* counters]
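A sketch of how those counters can settle the caching question, assuming
the query cache is enabled (table name hypothetical): run the same
statement twice and watch Qcache_hits.

    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                           { RaiseError => 1 });

    sub qcache_hits {
        my ($dbh) = @_;
        # SHOW STATUS rows come back as (Variable_name, Value) pairs.
        my (undef, $value) =
            $dbh->selectrow_array("SHOW STATUS LIKE 'Qcache_hits'");
        return $value;
    }

    my $sql    = "SELECT * FROM mytable WHERE something = 'complex' LIMIT 0, 30";
    my $before = qcache_hits($dbh);
    $dbh->selectall_arrayref($sql);   # first run: populates the cache
    $dbh->selectall_arrayref($sql);   # second run: should be a cache hit
    my $after  = qcache_hits($dbh);
    print "Qcache_hits went from $before to $after\n";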
On Mon, 2006-04-03 at 10:28 +0800, Foo Ji-Haw wrote:
> Perrin Harkins wrote:
> > Jeff wrote:
> >> Your application simply uses approach (b) and MySQL does the rest
> >> automatically. So if you
> > Have you tried this? I was under the impression that MySQL would just
> > stop when it finds enough rows to satisfy LIMIT, so it wouldn't
> > cache the whole result set.
Perrin Harkins wrote:
Jeff wrote:
Your application simply uses approach (b) and MySQL does the rest
automatically. So if you
Have you tried this? I was under the impression that MySQL would just
stop when it finds enough rows to satisfy LIMIT, so it wouldn't cache
the whole result set.
I have
Frank Wiles wrote:
> On Fri, 31 Mar 2006 14:47:37 +0200
> Issac Goldstand <[EMAIL PROTECTED]> wrote:
>
>> Granted, I use a few MySQL features for this; I'm not sure if LIMIT
>> exists in postgresql, and I'm fairly sure that the SQL_CALC_FOUND_ROWS
>> directive (which will return the total rows in a select statement
>> regardless of the LIMIT directives) doesn't.
Original Message
From: Enno <[EMAIL PROTECTED]>
To: Perrin Harkins <[EMAIL PROTECTED]>
Cc: Jeff <[EMAIL PROTECTED]>, modperl@perl.apache.org
Subject: Re: Database transaction across multiple web requests
Date: Fri Mar 31 2006 15:38:47
On Fri, 31 Mar 2006, Perrin Harkins wrote:
On Friday 31 March 2006 02:10 pm, Jonathan Vanasco wrote:
> On Mar 31, 2006, at 2:00 PM, Frank Wiles wrote:
> > Yup, you're right I just double checked and there isn't a LIMIT/
> > OFFSET
> > in the SQL standard. For some reason... probably because it's so
> > freaking useful, I assumed it was. :)
Perrin Harkins wrote:
I typically use a combination, retrieving all of the IDs in order in
one shot and saving that in a cache, and then grabbing the details of
the 10 records needed for the current page as I go. This is described
somewhat here:
http://perl.apache.org/docs/tutorials/apps/scale
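A minimal sketch of that combination, assuming some shared cache (here
just a hash) and hypothetical table and column names:

    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                           { RaiseError => 1 });
    my %cache;            # stand-in for a real shared cache with expiry

    # One shot: fetch and cache only the ordered IDs for this query.
    my $ids = $cache{'complex-search'} ||= $dbh->selectcol_arrayref(
        "SELECT id FROM mytable WHERE something = 'complex' ORDER BY id");

    # Per page: pull full rows for just the 10 IDs on the current page.
    my $page     = 2;                                  # zero-based
    my @page_ids = grep { defined }
                   @{$ids}[ $page * 10 .. $page * 10 + 9 ];
    die "no such page\n" unless @page_ids;
    my $in       = join ', ', ('?') x @page_ids;
    my $rows     = $dbh->selectall_arrayref(
        "SELECT * FROM mytable WHERE id IN ($in)", undef, @page_ids);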
On Mar 31, 2006, at 2:00 PM, Frank Wiles wrote:
Yup, you're right I just double checked and there isn't a LIMIT/
OFFSET
in the SQL standard. For some reason... probably because it's so
freaking useful, I assumed it was. :)
I was pretty floored when I found out myself. I *think* mor
On Fri, 31 Mar 2006 13:43:59 -0500
Perrin Harkins <[EMAIL PROTECTED]> wrote:
> On Fri, 2006-03-31 at 14:47 +0200, Issac Goldstand wrote:
> > I also tend to go with this variant of B. I keep $start_from and
> > $max_results in the session (or the query string).
>
> Please keep it in the query string!
On Fri, 31 Mar 2006 13:07:03 -0500
Jonathan Vanasco <[EMAIL PROTECTED]> wrote:
>
> On Mar 31, 2006, at 12:56 PM, [EMAIL PROTECTED] wrote:
> > PostgreSQL uses LIMIT 10 OFFSET 20 where mysql uses LIMIT 20,10
> > (notice
> > the swap). I don't know if it is "standard", but if it is, vendors
> > aren't following the standard.
On Fri, 2006-03-31 at 14:47 +0200, Issac Goldstand wrote:
> I also tend to go with this variant of B. I keep $start_from and
> $max_results in the session (or the query string).
Please keep it in the query string! One of my pet peeves is
applications that go crazy when I open multiple browser windows.
On Fri, 2006-03-31 at 09:34 -0600, JupiterHost.Net wrote:
> a) If the connection is persistent and you can pass a hash around then
> do the query once, store the data in the hash keyed on the session.
You'd be better off storing a query result in a way that is keyed on the
query, not the person who asked for it.
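One way to key on the query, sketched with hypothetical names: derive
the cache key from the SQL text plus its bind values, so every user
paging through the same search shares one cached ID list.

    use DBI;
    use Digest::MD5 qw(md5_hex);

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                           { RaiseError => 1 });
    my %cache;   # stand-in for a real shared cache

    my $sql  = "SELECT id FROM mytable WHERE something = ? ORDER BY id";
    my @bind = ('complex');

    # Same query + same parameters => same key, whoever is asking.
    my $key = md5_hex(join "\0", $sql, @bind);
    $cache{$key} ||= $dbh->selectcol_arrayref($sql, undef, @bind);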
Well, standard is good, but sometimes KISS (Keep It Small and Simple) is
much better. I think that was the MySQL credo in the past, and hopefully
they continue with it, but looking at the changes between 4.0 and 4.1
they seem to be leaving this road.
Tom
Jonathan Vanasco wrote:
> Just to add to what others have written-
>
> if you're writing to anything based on this, you'll need some sort of
> internal marker to tell if your data has been changed (I don't think
> you're writing, but you should just be aware of this)
That's why we database vendo
On Mar 31, 2006, at 12:56 PM, [EMAIL PROTECTED] wrote:
PostgreSQL uses LIMIT 10 OFFSET 20 where mysql uses LIMIT 20,10
(notice
the swap). I don't know if it is "standard", but if it is, vendors
aren't following the standard.
There's no SQL standard for Limit and Offset.
But in general, My
>> Granted, I use a few MySQL features for this; I'm not sure if LIMIT
>> exists in postgresql, and ...
>
> PostgreSQL has LIMIT... it's a SQL standard.
PostgreSQL uses LIMIT 10 OFFSET 20 where mysql uses LIMIT 20,10 (notice
the swap). I don't know if it is "standard", but if it is, vendors
aren't following the standard.
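If memory serves, MySQL also accepts the PostgreSQL-style spelling for
compatibility, so portable code can stick to one form; both of these
should fetch rows 21 through 30:

    SELECT * FROM mytable ORDER BY id LIMIT 20, 10;        -- MySQL form
    SELECT * FROM mytable ORDER BY id LIMIT 10 OFFSET 20;  -- both accept this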
Just to add to what others have written-
if you're writing to anything based on this, you'll need some sort of
internal marker to tell if your data has been changed (I don't think
you're writing, but you should just be aware of this)
often I do one of two things:
a- have a 'checkout' table
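A sketch of one such internal marker, using a numeric version column
rather than a checkout table (all names hypothetical):

    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                           { RaiseError => 1 });
    my $id  = 42;   # hypothetical row ID

    # Read the row, remembering the version it had.
    my ($payload, $version) = $dbh->selectrow_array(
        "SELECT payload, version FROM mytable WHERE id = ?", undef, $id);

    # ... the user edits $payload across one or more requests ...

    # Write back only if nobody else bumped the version in the meantime.
    my $updated = $dbh->do(
        "UPDATE mytable SET payload = ?, version = version + 1
          WHERE id = ? AND version = ?", undef, $payload, $id, $version);
    die "row was changed by someone else\n" if $updated == 0;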
On Fri, 31 Mar 2006 10:34:58 -0500
Sean Davis <[EMAIL PROTECTED]> wrote:
>
>
>
> On 3/31/06 10:01 AM, "Frank Wiles" <[EMAIL PROTECTED]> wrote:
>
> > On Fri, 31 Mar 2006 14:47:37 +0200
> > Issac Goldstand <[EMAIL PROTECTED]> wrote:
> >
> >> Granted, I use a few MySQL features for this; I'm not
On 3/31/06 10:01 AM, "Frank Wiles" <[EMAIL PROTECTED]> wrote:
> On Fri, 31 Mar 2006 14:47:37 +0200
> Issac Goldstand <[EMAIL PROTECTED]> wrote:
>
>> Granted, I use a few MySQL features for this; I'm not sure if LIMIT
>> exists in postgresql, and I'm fairly sure that the SQL_CALC_FOUND_ROWS
>>
Tomas Zerolo wrote:
Howdy,
Howdy to you :)
this is not strictly a modperl question, but I'm sure some of you are
downright experts on this.
We have a modperl2 application with a database backend (PostgreSQL,
DBI). For session management we use Apache::Session.
Now the need has arisen to
On Fri, 31 Mar 2006 14:42:40 +0100
Jeff <[EMAIL PROTECTED]> wrote:
> Original Message
> From: [EMAIL PROTECTED] (Tomas Zerolo)
> To: modperl@perl.apache.org
> Subject: Database transaction across multiple web requests
> Date: 31/3/2006 12:59
On Fri, 31 Mar 2006 14:47:37 +0200
Issac Goldstand <[EMAIL PROTECTED]> wrote:
> Granted, I use a few MySQL features for this; I'm not sure if LIMIT
> exists in postgresql, and I'm fairly sure that the SQL_CALC_FOUND_ROWS
> directive (which will return the total rows in a select statement
> regardless of the LIMIT directives) doesn't.
Tomas Zerolo wrote:
(b) Repeat the query (making sure there is a sort criterion) at each
page request, starting at a variable offset and limiting the
result set
(c) Do the whole query at once, putting the results in some kind
of array.
I typically use a combination, retrieving all of the IDs in order in
one shot and saving that in a cache, and then grabbing the details of
the 10 records needed for the current page.
On Fri, 31 Mar 2006, Perrin Harkins wrote:
> Jeff wrote:
> > Your application simply uses approach (b) and MySQL does the rest
> > automatically. So if you
> >
> > SELECT * FROM mytable WHERE something='complex' LIMIT 0,30;
> >
> > and then on another page / connection:
> >
> > SELECT * FROM mytable WHERE something='complex' LIMIT 30,30;
Jeff wrote:
Your application simply uses approach (b) and MySQL does the rest
automatically. So if you
SELECT * FROM mytable WHERE something='complex' LIMIT 0,30;
and then on another page / connection:
SELECT * FROM mytable WHERE something='complex' LIMIT 30,30;
and then...
SELECT * FROM mytable WHERE something='complex' LIMIT 60,30;
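One caveat on that approach: as quoted, the SELECTs have no ORDER BY,
and without a deterministic sort the pages can overlap or skip rows.
For stable paging, each query would need something like:

    SELECT * FROM mytable WHERE something='complex' ORDER BY id LIMIT 60,30;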
Original Message
From: [EMAIL PROTECTED] (Tomas Zerolo)
To: modperl@perl.apache.org
Subject: Database transaction across multiple web requests
Date: 31/3/2006 12:59
(a) Create a database cursor and page through it
(b) Repeat the query (making sure there is a sort criterion) at each
page request, starting at a variable offset and limiting the result set
Hi there,
I also tend to go with this variant of B. I keep $start_from and
$max_results in the session (or the query string). This gives me the
option of allowing a flexible number of results per page, flexible
breadcrumbs for navigating the search results, etc. If it's critical
that it's a sin
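A sketch of that variant with the paging state in the query string
(CGI.pm used for brevity; parameter names follow the post, and the
connection details and table name are hypothetical):

    use CGI;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                           { RaiseError => 1 });
    my $q   = CGI->new;

    # Paging state lives in the query string, not the session.
    my $start = $q->param('start_from') || 0;
    my $max   = $q->param('max_results') || 10;

    # Never trust user input: force sane integers before they are
    # interpolated into the LIMIT clause.
    $start = 0  unless $start =~ /^\d+$/;
    $max   = 10 unless $max   =~ /^\d+$/ && $max <= 100;

    my $rows = $dbh->selectall_arrayref(
        "SELECT * FROM mytable WHERE something = 'complex'
          ORDER BY id LIMIT $start, $max");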
On Fri, 2006-03-31 at 07:21 -0500, Sean Davis wrote:
> On 3/31/06 6:59 AM, "Tomas Zerolo" <[EMAIL PROTECTED]> wrote:
>
> > (b) Repeat the query (making sure there is a sort criterion) at each
> > page request, starting at a variable offset and limiting the
> > result set
> Also, you migh
On 3/31/06 6:59 AM, "Tomas Zerolo" <[EMAIL PROTECTED]> wrote:
> Howdy,
>
> this is not strictly a modperl question, but I'm sure some of you are
> downright experts on this.
>
> We have a modperl2 application with a database backend (PostgreSQL,
> DBI). For session management we use Apache::Session.
Hi,
add a) You can't do that because you are not able to keep connections
open between requests => find the appropriate connection
add b) That sounds like the best option; at least this is the one I
chose the last time. You can do a LIMIT $start, $end (at least in
MySQL). A potenti
> (a) Create a database cursor and page through it
> (b) Repeat the query (making sure there is a sort criterion) at each
> page request, starting at a variable offset and limiting the
> result set
> (c) Do the whole query at once, putting the results in some kind
> of array.
>
> same
Howdy,
this is not strictly a modperl question, but I'm sure some of you are
downright experts on this.
We have a modperl2 application with a database backend (PostgreSQL,
DBI). For session management we use Apache::Session.
Now the need has arisen to serve requests yielding many records as
answers