On Thu, Feb 4, 2010 at 4:09 AM, Glenn Maynard wrote:
> But I'll be dropping this db into 8.4 soon to see if it helps
> anything, and I'll check again (and if it's still slow I'll post more
> details). It's been a while and I might just have been doing
> something wrong.
Windowing doesn't want to
On Thu, Feb 4, 2010 at 12:40 AM, wrote:
> I could be wrong in this (if so I know I'll be corrected :-)
>
> but Postgres doesn't need to lock anything for what you are describing.
>
> instead there will be multiple versions of the 'b1' row, one version will be
> deleted, one version that will be kept.
On Mon, Feb 1, 2010 at 7:53 PM, Matt White wrote:
> I have a relatively straightforward query that by itself isn't that
> slow, but we have to run it up to 40 times on one webpage load, so it
> needs to run much faster than it does. Here it is:
>
> SELECT COUNT(*) FROM users, user_groups
> WHERE
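(The WHERE clause is cut off above.) The usual fix for a count that has to run 40 times per page load is to collapse it into one grouped query. A sketch only, since the join columns (u.id, ug.user_id, ug.group_id) are assumed here:

-- $1 would be the array of the 40 group ids the page needs
SELECT ug.group_id, count(*)
FROM users u
JOIN user_groups ug ON ug.user_id = u.id
WHERE ug.group_id = ANY ($1)
GROUP BY ug.group_id;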
On Thu, Feb 4, 2010 at 3:24 AM, Glenn Maynard wrote:
> On Wed, Feb 3, 2010 at 10:05 PM, Robert Haas wrote:
>> Rewriting it as a join will likely be faster still:
>>
>> SELECT r.id FROM stomp_steps s, stomp_round r WHERE (s.id IS NULL OR
>> r.steps_id = s.id) AND ($1 IS NULL OR r.user_card_id = $1) ORDER BY
>> r.score DESC LIMIT $2
Jochen Erwied wrote:
> Maybe using 'numeric(19)' instead of bigint is an alternative. I actually
> don't know how these numbers are stored internally (some kind of BCD, or as
> base-100?), but IMHO they should be faster than strings, although not as
> fast as 'native' types.
It's base 10000 in the current implementation.
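A cheap way to see the storage cost yourself (pg_column_size is a stock function, nothing assumed here):

SELECT pg_column_size(9223372036854775807::bigint)  AS bigint_bytes,   -- always 8
       pg_column_size(9999999999999999999::numeric) AS numeric_bytes;  -- variable-length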
Thursday, February 4, 2010, 7:51:37 PM you wrote:
> it is an identifier and is always a number and is used in grouping and
> querying. I thought I would lose performance if it is text vs an
> integer/double field.
Maybe using 'numeric(19)' instead of bigint is an alternative. I actually
don't know how these numbers are stored internally (some kind of BCD, or as
base-100?), but IMHO they should be faster than strings, although not as
fast as 'native' types.
On Thu, Feb 4, 2010 at 10:43 AM, Craig James wrote:
> Tory M Blue wrote:
>>
>> I have a column that is a bigint that needs to store integers up to 19
>> digits long. For the most part this works but we sometimes have
>> numbers that are greater than 9223372036854775807.
>> ...
>> I was thinking of changing this to a real or double precision field,
>> but read in the docs that the values are approximate.
Tory M Blue wrote:
> I looked into the numeric data type, but the docs say that it can be slow.
It is slower than values that fit in a single CPU register, sure. Is it
slow enough that you can't use it? That's a different question. I'd
give it a try -- maybe it's not all that slow.
--
Alvaro Herrera
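A quick way to "give it a try", as a rough psql sketch (timings will vary with hardware):

-- a few million rows of the same values as bigint and as numeric
CREATE TEMP TABLE t AS
  SELECT g::bigint AS b, g::numeric AS n
  FROM generate_series(1, 5000000) g;
\timing on
SELECT sum(b) FROM t;  -- bigint input
SELECT sum(n) FROM t;  -- numeric input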
Tory M Blue wrote:
I have a column that is a bigint that needs to store integers up to 19
digits long. For the most part this works but we sometimes have
numbers that are greater than 9223372036854775807.
...
I was thinking of changing this to a real or double precision field,
but read in the docs that the values are approximate.
Greetings,
I have a column that is a bigint that needs to store integers up to 19
digits long. For the most part this works but we sometimes have
numbers that are greater than 9223372036854775807.
I was thinking of changing this to a real or double precision field,
but read in the docs that the values are approximate.
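To make the overflow concrete, a minimal sketch (the table name is made up; numeric(19,0) is the alternative discussed downthread):

CREATE TEMP TABLE ids (id bigint);
INSERT INTO ids VALUES (9223372036854775807);  -- ok: bigint max
INSERT INTO ids VALUES (9223372036854775808);  -- ERROR: out of range for bigint
ALTER TABLE ids ALTER COLUMN id TYPE numeric(19,0);
INSERT INTO ids VALUES (9223372036854775808);  -- ok now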
On Thu, 4 Feb 2010, Amitabh Kant wrote:
On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras wrote:
If you can, add another 2 drives in RAID 1 and move+symlink the pg_xlog
directory to the new array.
Can't do anything about this server now, but would surely keep in mind
before upgrading other servers.
On Thu, 2010-01-07 at 13:38 +0100, Lefteris wrote:
> Reported query times are (in sec):
> MonetDB 7.9s
> InfoBright 12.13s
> LucidDB 54.8s
It needs to be pointed out that those databases are specifically
optimised for Data Warehousing, whereas Postgres core is optimised for
concurrent write workloads.
On Thu, Feb 4, 2010 at 3:29 AM, Greg Smith wrote:
> Robert Haas wrote:
>> On Wed, Feb 3, 2010 at 10:10 AM, Amitabh Kant wrote:
>>> work_mem = 160MB # pg_generate_conf wizard 2010-02-03
>>
>> Overall these settings look sane, but this one looks like an
>> exception. That is an enormous
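For context: work_mem is allotted per sort or hash operation, per backend, which is why 160MB raises eyebrows. The effect is easy to watch with stock commands:

SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT g FROM generate_series(1, 1000000) g ORDER BY g;
-- expect something like: Sort Method: external merge  Disk: ...
SET work_mem = '160MB';
EXPLAIN ANALYZE SELECT g FROM generate_series(1, 1000000) g ORDER BY g;
-- expect something like: Sort Method: quicksort  Memory: ...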
On Thu, Feb 4, 2010 at 3:10 PM, Ivan Voras wrote:
> On 4 February 2010 10:02, Amitabh Kant wrote:
> > On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras wrote:
> >>
> >> On 02/03/10 16:10, Amitabh Kant wrote:
> >>>
> >>> Hello
> >>>
> >>> I have a server dedicated for Postgres with the following specs
On 4 February 2010 10:02, Amitabh Kant wrote:
> On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras wrote:
>>
>> On 02/03/10 16:10, Amitabh Kant wrote:
>>>
>>> Hello
>>>
>>> I have a server dedicated for Postgres with the following specs:
>>>
>>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @ 2.33GHz
On Thu, Feb 4, 2010 at 12:11 AM, Andy Colson wrote:
> On 2/3/2010 9:10 AM, Amitabh Kant wrote:
>
>> Hello
>>
>> I have a server dedicated for Postgres with the following specs:
>>
>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @
>> 2.33GHz
>> OS: FreeBSD 8.0
>>
>> It runs
2010/2/4 Grzegorz Jaśkiewicz:
> isn't that possible with window functions and cte ?
> rank, and limit ?
It is, but again I tried that when I originally designed this and I
think it ended up using seq scans, or at least being slow for some
reason or other.
But I'll be dropping this db into 8.4 soon to see if it helps
anything, and I'll check again (and if it's still slow I'll post more
details). It's been a while and I might just have been doing
something wrong.
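For reference, the window-function version being alluded to would look roughly like this on 8.4. A sketch: the table and columns come from the query upthread, and $1/$2 are assumed to be the user filter and the per-group cutoff:

WITH ranked AS (
  SELECT r.id,
         rank() OVER (PARTITION BY r.steps_id
                      ORDER BY r.score DESC) AS rnk
  FROM stomp_round r
  WHERE $1 IS NULL OR r.user_card_id = $1
)
SELECT id FROM ranked WHERE rnk <= $2;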
On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras wrote:
> On 02/03/10 16:10, Amitabh Kant wrote:
>
>> Hello
>>
>> I have a server dedicated for Postgres with the following specs:
>>
>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @
>> 2.33GHz
>> OS: FreeBSD 8.0
>>
>
> If you re
On Wed, Feb 3, 2010 at 9:49 PM, Reid Thompson wrote:
> On Wed, 2010-02-03 at 20:42 +0530, Amitabh Kant wrote:
> > Forgot to add that I am using Postgres 8.4.2 from the default ports of
> > FreeBSD.
>
> start with this page
> http://www.postgresql.org/docs/8.4/static/kernel-resources.html
>
>
I som
isn't that possible with window functions and cte ?
rank, and limit ?
On Wed, Feb 3, 2010 at 10:05 PM, Robert Haas wrote:
> Rewriting it as a join will likely be faster still:
>
> SELECT r.id FROM stomp_steps s, stomp_round r WHERE (s.id IS NULL OR
> r.steps_id = s.id) AND ($1 IS NULL OR r.user_card_id = $1) ORDER BY
> r.score DESC LIMIT $2
That's not the same; this returns one global top-N, not the top N per group.