On Fri, 18 Apr 2008, Tom Lane wrote:
Yeah, it's starting to be obvious that we'd better not ignore sysbench
as "not our problem". Do you have any roadmap on what needs to be done
to it?
Just dug into this code again for a minute and it goes something like
this:
1) Wrap the write statement
Greg Smith <[EMAIL PROTECTED]> writes:
> This is what happens when the kernel developers are using results from a
> MySQL tool to optimize things I guess. It seems I have a lot of work
> ahead of me here to nail down and report what's going on here.
Yeah, it's starting to be obvious that we'd better not ignore sysbench
as "not our problem". Do you have any roadmap on what needs to be done
to it?
Tom Lane wrote:
Craig Ringer <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
Craig Ringer <[EMAIL PROTECTED]> writes:
Speaking of this sort of support tool, what I personally often wish for
is unique error message identifiers that can be looked up (say, with a
web form) or a way to un/re-translate localized messages.
Craig Ringer <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Craig Ringer <[EMAIL PROTECTED]> writes:
>>> Speaking of this sort of support tool, what I personally often wish for
>>> is unique error message identifiers that can be looked up (say, with a
>>> web form) or a way to un/re-translate localized messages.
I am trying to get a distinct set of rows from 2 tables.
After looking at someone else's query I noticed they were doing a group by
to obtain the unique list.
After comparing on multiple machines with several tables, it seems using
group by to obtain a distinct list is substantially faster than using
select distinct.
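For anyone reading this thread later, a minimal sketch of the two
equivalent forms (table and column names here are made up). In releases
before 8.4, SELECT DISTINCT was always implemented with a sort, while
GROUP BY could be planned as a HashAggregate, which is the usual reason
the GROUP BY form wins:

```sql
-- Hypothetical table t1(a, b); both queries return the same unique rows.

-- Sort-based in pre-8.4 PostgreSQL:
SELECT DISTINCT a, b FROM t1;

-- Eligible for a HashAggregate plan, often faster on large inputs:
SELECT a, b FROM t1 GROUP BY a, b;
```

Comparing EXPLAIN output for the two forms will show whether the planner
actually chose a HashAggregate for the GROUP BY version.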
Tom Lane wrote:
Craig Ringer <[EMAIL PROTECTED]> writes:
Speaking of this sort of support tool, what I personally often wish for
is unique error message identifiers that can be looked up (say, with a
web form) or a way to un/re-translate localized messages.
The VERBOSE option already gives an exact pointer
On Thu, 17 Apr 2008, [EMAIL PROTECTED] wrote:
report this to the kernel list so that they know, and be ready to test fixes.
Don't worry, I'm on that. I'm already having enough problems with
database performance under Linux, if they start killing results on the
easy benchmarks I'll really be
On Thu, 17 Apr 2008, Greg Smith wrote:
On Thu, 17 Apr 2008, Matthew wrote:
The last message in the thread says that 2.6.25-rc6 has the problem nailed.
That was a month ago. So I guess, upgrade to 2.6.25, which was released
today.
Ah, even more support for me to distrust everything I read.
On Thu, 17 Apr 2008, Matthew wrote:
The last message in the thread says that 2.6.25-rc6 has the problem nailed.
That was a month ago. So I guess, upgrade to 2.6.25, which was released
today.
Ah, even more support for me to distrust everything I read. The change
has flattened out things, so
- why am I still getting a seq scan ?
You'll seq scan tmp1 obviously, and also the other table since you fetch a
very large part of it in the process.
It's the only way to do this query since there is no WHERE to restrict the
number of rows and the DISTINCT applies on columns from both tables.
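A hedged illustration of how to confirm that reasoning: the table name
playback_fragment and the join column below are guesses, since the
posted query is cut off, but the shape of the check is the same either
way.

```sql
-- Hypothetical sketch: with no selective WHERE clause, reading most of
-- both tables via sequential scans is the cheapest plan, so a seq scan
-- here is expected rather than a problem.
EXPLAIN ANALYZE
SELECT DISTINCT tmp1.affiliate_id, pf.segment_id
FROM aff_id_tmp1 tmp1
JOIN playback_fragment pf
  ON pf.playback_device_id = tmp1.playback_device_id;
```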
Craig Ringer <[EMAIL PROTECTED]> writes:
> Speaking of this sort of support tool, what I personally often wish for
> is unique error message identifiers that can be looked up (say, with a
> web form) or a way to un/re-translate localized messages.
The VERBOSE option already gives an exact pointer
Greg Smith wrote:
You also didn't mention what disk controller you have, or how much write
cache it has (if any).
8.3.1, Controller is
http://www.infortrend.com/main/2_product/es_a08(12)f-g2422.asp with 2GB
cache (writeback was enabled).
That's almost turning the background writer off. I
kevin kempter wrote:
Hi List;
I have a large table (playback_device) with 6 million rows in it. The
aff_id_tmp1 table has 600,000 rows.
I also have this query:
select distinct
tmp1.affiliate_id,
tmp1.name,
tmp1.description,
tmp1.create_dt,
tmp1.playback_device_id,
pf.segment_id
from
aff_id_tmp1 tmp1,
On Thu, Apr 17, 2008 at 11:24 AM, kevin kempter
<[EMAIL PROTECTED]> wrote:
> Hi List;
>
> I have a large table (playback_device) with 6 million rows in it. The
> aff_id_tmp1 table has 600,000 rows.
> - why am I still getting a seq scan ?
>
You're selecting almost all the rows in the product of aff
Hi List;
I have a large table (playback_device) with 6 million rows in it. The
aff_id_tmp1 table has 600,000 rows.
I also have this query:
select distinct
tmp1.affiliate_id,
tmp1.name,
tmp1.description,
tmp1.create_dt,
tmp1.playback_device_id,
pf.segment_id
from
aff_id_tmp1 tmp1,
playback_fragm
On Mon, 31 Mar 2008, James Mansion wrote:
I have a question about file writes, particularly on POSIX.
In other reading I just came across this informative article on this
issue, which amusingly was written the same day you asked about this:
http://jeffr-tech.livejournal.com/20707.html
On Thu, 17 Apr 2008, Jeffrey Baker wrote:
This has been discussed recently on linux-kernel.
Excellent pointer, here's direct to the interesting link there:
http://marc.info/?l=linux-kernel&m=120574906013029&w=2
Ingo's test system has 16 cores and dives hard at >32 clients; my 4-core
system
On Thu, 17 Apr 2008, Jeffrey Baker wrote:
On Thu, Apr 17, 2008 at 12:58 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
So in the case of this simple benchmark, I see an enormous performance
regression from the newest Linux kernel compared to a much older one.
This has been discussed recently on linux-kernel.
Craig Ringer wrote:
> Simon Riggs wrote:
>
>> Obfuscating the names would make the code harder to understand, true,
>> but only if the code is written in English (or your language-of-choice).
>> It wouldn't damage our ability to read other language code at all.
>
> Speaking of this sort of support tool, what I personally often wish for
On Thu, Apr 17, 2008 at 12:58 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> So in the case of this simple benchmark, I see an enormous performance
> regression from the newest Linux kernel compared to a much older one.
This has been discussed recently on linux-kernel. It's definitely a
regression.
Simon Riggs wrote:
Obfuscating the names would make the code harder to understand, true,
but only if the code is written in English (or your language-of-choice).
It wouldn't damage our ability to read other language code at all.
Speaking of this sort of support tool, what I personally often wish for
On Thu, 2008-04-17 at 12:41 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Thu, 2008-04-17 at 12:12 -0400, Tom Lane wrote:
> >> Aren't these suggestions mutually contradictory?
>
> > No, they're orthogonal. The pretty printer would get the indenting and
> > line feeds correct, the obfuscator would replace actual names with "A",
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Thu, 2008-04-17 at 12:12 -0400, Tom Lane wrote:
>> Aren't these suggestions mutually contradictory?
> No, they're orthogonal. The pretty printer would get the indenting and
> line feeds correct, the obfuscator would replace actual names with "A",
> "B"
On Thu, 2008-04-17 at 12:12 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > I think it would help if there was some way to prepare functions to
> > allow them to be posted and understood more easily. These would help:
>
> > * a name obfuscator, so people can post functions without revealing
Simon Riggs <[EMAIL PROTECTED]> writes:
> I think it would help if there was some way to prepare functions to
> allow them to be posted and understood more easily. These would help:
> * a name obfuscator, so people can post functions without revealing
> inner workings of their company and potentia
On Wed, 2008-04-16 at 11:09 -0400, Tom Lane wrote:
> "Gavin M. Roy" <[EMAIL PROTECTED]> writes:
> > In 8.3.0, I'm seeing some oddities with SQL functions which I thought were
> > immune to the planner data restrictions of plpgsql functions and the sort.
>
> Without a specific example this discussi
On Thu, 17 Apr 2008, Greg Smith wrote:
So in the case of this simple benchmark, I see an enormous performance
regression from the newest Linux kernel compared to a much older one. I need
to do some version bisection to nail it down for sure, but my guess is it's
the change to the Completely Fair Scheduler (CFS).
Pavan Deolasee wrote:
On Thu, Apr 17, 2008 at 2:57 PM, Gunther Mayer
<[EMAIL PROTECTED]> wrote:
You see, all updates change most of the data fields but never ever touch
the time field. Assuming correct and efficient behaviour of postgresql it
should then also never touch the time index and incur zero overhead
On 17.04.2008, at 14:03, sathiya psql wrote:
Hi,
I need to change the name of a constraint.
Or do I need to drop the constraint and create a constraint with a
new name? What is the performance impact of this, since I am going
to make these constraint changes on a table which has 10 million rows?
Hi,
I need to change the name of a constraint.
What will be the best way to do this?
I am using Postgres 8.1.
Is it possible to rename a constraint (like renaming a column)? I
don't know how to do this.
Or do I need to drop the constraint and create a constraint with a
new name, how
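PostgreSQL 8.1 has no ALTER TABLE ... RENAME CONSTRAINT (that syntax
only arrived much later, in 9.2), so the usual route is drop-and-recreate
inside one transaction. A sketch with made-up table and constraint
names; note that re-adding a CHECK or FOREIGN KEY constraint re-scans
the table to validate existing rows, so on a 10-million-row table expect
one full pass, but no table rewrite:

```sql
BEGIN;
ALTER TABLE big_table DROP CONSTRAINT old_chk;
-- Validation scans every existing row, but does not rewrite the table:
ALTER TABLE big_table ADD CONSTRAINT new_chk CHECK (amount >= 0);
COMMIT;
```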
On Thu, Apr 17, 2008 at 2:57 PM, Gunther Mayer
<[EMAIL PROTECTED]> wrote:
>
>
> You see, all updates change most of the data fields but never ever touch
> the time field. Assuming correct and efficient behaviour of postgresql it
> should then also never touch the time index and incur zero overhead
Gunther Mayer wrote:
You see, all updates change most of the data fields but never ever touch
the time field. Assuming correct and efficient behaviour of postgresql
it should then also never touch the time index and incur zero overhead
in its presence, but is this really the case? If it somehow
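Some context for the question above: every UPDATE in PostgreSQL writes a
new row version, and before 8.3 that meant a new entry in every index on
the table whether or not the indexed column changed. 8.3's HOT
(heap-only tuple) updates skip the indexes entirely when no indexed
column changed and the new row version fits on the same heap page. A
hedged sketch (the index name, fillfactor value, and id literal are
illustrative):

```sql
CREATE INDEX accounting_time_idx ON accounting (time);

-- Leave free space on each heap page so new row versions fit in place:
ALTER TABLE accounting SET (fillfactor = 80);

-- "time" is untouched, so on 8.3 this can be a HOT update that adds
-- no entry to accounting_time_idx:
UPDATE accounting SET state = 2 WHERE id = 'abc123';
```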
Hi there,
I have a table which looks similar to:
CREATE TABLE accounting
(
  id text NOT NULL,
  time timestamp with time zone,
  data1 int,
  data2 int,
  data3 int,
  data4 int,
  data5 int,
  data6 int,
  data7 int,
  data8 int,
  state int,
  CONSTRAINT accounting_pkey PRIMARY KEY (id)
);
The table has
Adrian Moisey wrote:
Hi
INFO: "blahxxx": scanned 27 of 27 pages, containing 1272 live rows
and 0 dead rows; 1272 rows in sample, 1272 estimated total rows
This is a small table that takes up 27 pages and it scanned all of
them. You have 1272 rows in it and none of them are dead (i.e.
deleted/updated
Hi
INFO: "blahxxx": scanned 27 of 27 pages, containing 1272 live rows
and 0 dead rows; 1272 rows in sample, 1272 estimated total rows
This is a small table that takes up 27 pages and it scanned all of them.
You have 1272 rows in it and none of them are dead (i.e. deleted/updated
but still t
Hi
You are tracking ~ 4.6 million pages and have space to track ~ 15.5
million, so that's fine. You are right up against your limit of
relations (tables, indexes etc) being tracked though - 1200. You'll
probably want to increase max_fsm_relations - see manual for details
(server configuration
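For reference, the relevant postgresql.conf knobs in the 8.x era (both
were removed in 8.4, when the free space map moved to disk). The values
below are illustrative, not a recommendation for the poster's system:

```
max_fsm_relations = 2000     # raise above the ~1200 ceiling seen above
max_fsm_pages = 16000000     # each slot costs ~6 bytes of shared memory
# Both settings require a server restart to take effect.
```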
Adrian Moisey wrote:
Hi
Running VACUUM VERBOSE will give you a detailed view of space usage of
each individual table.
I did that.
Not too sure what I'm looking for, can someone tell me what this means:
INFO: "blahxxx": scanned 27 of 27 pages, containing 1272 live rows and
0 dead rows; 127
This week I've finished building and installing OSes on some new hardware
at home. I have a pretty standard validation routine I go through to make
sure PostgreSQL performance is good on any new system I work with. Found
a really strange behavior this time around that seems related to changes