Cool - seems like the posters caught that "auto memory pick" problem before
you posted, but you got the 16GB/8k parts right.
Now we're looking at realistic numbers: 790 seeks/second and 244MB/s
sequential write, but only 144MB/s sequential read, perhaps 60% of what it
should be.
Seems like a prett
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:
>> This is only valid if the pre-allocation is also fsync'd *and* fsync
>> ensures that both the metadata and file data are on disk. Anyone
>> actually checked that? :)
> fsync() does
Hi all,
I have PostgreSQL 8.1.4 running on a P4 2.8 GHz, 512 MB RAM, with Linux
(Fedora Core 3).
The SQL commands below have a performance difference that I think is not
acceptable (1035.427 ms vs 7.209 ms), since the tables aren't that big
(contrato has 1907 rows and prog has 40).
On Tue, 15 Aug 2006 [EMAIL PROTECTED] wrote:
This is also wrong. fsck is needed because the file system is broken.
nope, the file system *may* be broken. the dirty flag simply indicates
that the filesystem needs to be checked to find out whether or not it is
broken.
Ah, but if we knew it wasn'
On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:
> This is only valid if the pre-allocation is also fsync'd *and* fsync
> ensures that both the metadata and file data are on disk. Anyone
> actually checked that? :)
fsync() does that, yes. fdatasync() (if it exists), OTOH, doesn't sync
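The fsync()/fdatasync() distinction being debated here can be sketched in a few lines. This is an illustrative sketch, not PostgreSQL's actual WAL code: fsync(2) flushes both file data and metadata (size, block allocation), while fdatasync(2), where it exists, may skip metadata not needed to read the data back.

```python
import os
import tempfile

def durable_write(path, payload):
    """Write payload and force both the file data and its metadata to disk.

    os.fsync() wraps fsync(2), which flushes the file's data *and* its
    metadata -- the property under discussion above. fdatasync(2), where
    available, may skip metadata that isn't needed to retrieve the data.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, payload)
        os.fsync(fd)          # data + metadata
        # os.fdatasync(fd)    # data only (POSIX; not available everywhere)
    finally:
        os.close(fd)

# Pre-allocating a WAL-style segment (hypothetical path and size):
path = os.path.join(tempfile.mkdtemp(), "xlog-segment")
durable_write(path, b"\0" * 8192)
print(os.path.getsize(path))   # 8192
```

The point of the thread: if the segment is pre-allocated and fsync'd like this, later in-place writes change no filesystem structure.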
On Tue, Aug 15, 2006 at 05:38:43PM -0400, [EMAIL PROTECTED] wrote:
> I didn't know that the xlog segment only uses pre-allocated space. I
> ignore mtime/atime as they don't count as file system structure
> changes to me. It's updating a field in place. No change to the structure.
>
> With the pre-
On Tue, Aug 15, 2006 at 04:58:59PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 03:39:51PM -0400, [EMAIL PROTECTED] wrote:
> >No. This is not true. Updating the file system structure (inodes, indirect
> >blocks) touches a separate part of the disk than the actual data. If
> >the file syste
Sebastián Baioni <[EMAIL PROTECTED]> writes:
> -> GroupAggregate  (cost=0.00..37348395.05 rows=3951 width=25) (actual time=130.000..2629617.000 rows=254576 loops=1)
>    -> Index Scan using uesapt001 on APORTES  (cost=0.0
On Tue, Aug 15, 2006 at 03:39:51PM -0400, [EMAIL PROTECTED] wrote:
No. This is not true. Updating the file system structure (inodes, indirect
blocks) touches a separate part of the disk than the actual data. If
the file system structure is modified, say, to extend a file to allow
it to contain mo
On Aug 15, 2006, at 1:53 PM, Sebastián Baioni wrote:
  -> Index Scan using uesapt001 on APORTES  (cost=0.00..37301678.64 rows=9339331 width=25) (actual time=110.000..2520690.000 rows=9335892 loops=1)
It's taking 2520 seconds to scan an index with 9M rows, which sou
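A quick sanity check of the plan numbers quoted above (assuming the actual-time figures are in milliseconds, as EXPLAIN ANALYZE reports them):

```python
# Back-of-the-envelope check on the EXPLAIN ANALYZE output quoted above:
# total actual time divided by rows returned gives the per-row cost.
actual_ms = 2520690.000   # upper actual-time bound from the plan, in ms
rows = 9335892            # rows returned by the index scan

ms_per_row = actual_ms / rows
print(f"{ms_per_row:.2f} ms/row")          # ~0.27 ms per row
print(f"{1000 / ms_per_row:.0f} rows/sec")
```

Roughly 0.27 ms per row is consistent with close to one random I/O per index entry, i.e. an uncached, badly-clustered index.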
On Tue, Aug 15, 2006 at 02:15:05PM -0500, Jim C. Nasby wrote:
Now, if
fsync'ing a file also ensures that all the metadata is written, then
we're probably fine...
...and it does. Unclean shutdowns cause problems in general because
filesystems operate asynchronously. postgres (and other similar
On Aug 15, 2006, at 4:21 PM, Bucky Jordan wrote:
... from Vivek...
which is an issue with freebsd and bonnie++ since it doesn't know
that freebsd can use large files natively (ie, no large file hacks
necessary). the freebsd port of bonnie takes care of this, if you
use that instead of compiling
[EMAIL PROTECTED] writes:
> WAL file is never appended - only re-written?
> If so, then I'm wrong, and ext2 is fine. The requirement is that no
> file system structures change as a result of any writes that
> PostgreSQL does. If no file system structures change, then I take
> everything back as un
On Tue, Aug 15, 2006 at 04:05:17PM -0400, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > I've been worrying about this myself, and my current conclusion is that
> > ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which
> > could lead to the need to trash the xlog.
> > Even ext3
Luke,
For some reason it looks like bonnie is picking a 300M file.
> bonnie++ -d bonnie
Version 1.03       --Sequential Output--  --Sequential Input-  --Random-
                   -Per Chr- --Block-- -Rewrite-  -Per Chr- --Block--  --Seeks--
Machine      Size  K/sec %CP  K/sec %CP  K/sec %C
[EMAIL PROTECTED] writes:
> I've been worrying about this myself, and my current conclusion is that
> ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which
> could lead to the need to trash the xlog.
> Even ext3 in writeback mode allows for the indirect blocks to be updated
> w
On Tue, Aug 15, 2006 at 02:15:05PM -0500, Jim C. Nasby wrote:
> So what causes files to get 'lost' and get stuck in lost+found?
> AFAIK that's because the file was written before the metadata. Now, if
> fsync'ing a file also ensures that all the metadata is written, then
> we're probably fine... if
On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
> >>>Are 'we' sure that such a setup can't lose any data?
> >>Yes. If you check the archives, you can even find the last time this was
> >>discussed...
> >I looked la
On Tue, 15 Aug 2006 12:47:54 -0600
"Carl Youngblood" <[EMAIL PROTECTED]> wrote:
> I tried setting it to 2GB and postgres wouldn't start. Didn't
> investigate in much greater detail as to why it wouldn't start, but
> after switching it back to 1GB it started fine.
>
> On 8/15/06, Jim C. Nasby <[E
On Tue, Aug 15, 2006 at 12:47:54PM -0600, Carl Youngblood wrote:
> I tried setting it to 2GB and postgres wouldn't start. Didn't
> investigate in much greater detail as to why it wouldn't start, but
> after switching it back to 1GB it started fine.
Most likely because you didn't set the kernel's
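The sentence above is cut off, but it presumably refers to the kernel's shared memory limit (SHMMAX on Linux): a shmget() request larger than SHMMAX fails, and PostgreSQL won't start. A minimal sketch of the check, with the Linux-specific /proc path flagged as an assumption:

```python
import os

def fits_in_shmmax(requested_bytes, shmmax_bytes):
    """True if a single shared memory segment of requested_bytes can be
    created under the given SHMMAX limit (shmget fails with EINVAL otherwise)."""
    return requested_bytes <= shmmax_bytes

def current_shmmax():
    """Read the live Linux limit. Assumption: /proc is mounted (Linux only);
    the limit is raised with e.g. `sysctl -w kernel.shmmax=...`."""
    with open("/proc/sys/kernel/shmmax") as f:
        return int(f.read())

# 2GB of shared_buffers needs SHMMAX of at least ~2GB (plus some overhead):
two_gb = 2 * 1024**3
print(fits_in_shmmax(two_gb, 1 * 1024**3))   # False: matches the failure above
print(fits_in_shmmax(two_gb, 4 * 1024**3))   # True once the limit is raised
```

This would explain why 2GB refused to start while 1GB came up fine.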
On Aug 15, 2006, at 2:50 PM, Luke Lonergan wrote:
I don't know why I missed this the first time - you need to let bonnie++
pick the file size - it needs to be 2x memory or the results you get will
not be accurate.
which is an issue with freebsd and bonnie++ since it doesn't know
that freebsd can use large files natively (ie, no large file hacks
necessary). the freebsd port of bonnie takes care of this, if you
use that instead of compiling
On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
> >On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
> >>On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
> >>>Are 'we' sure that such a setup can't lose any data?
On Tue, Aug 15, 2006 at 02:33:27PM -0400, [EMAIL PROTECTED] wrote:
On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
>Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find
Hello Jim, we can't use the WHERE cuiT='12345678901' in the subquery
because we need max(cuiL) independently of that cuiT:
cuiT  cuiL  PERI    FAMI
1     a     200608  0
1     a     200601  2
1     b     200607  3
1     c     200605  4
2     a     200605  9
2     c     200604  4
2     b     200608  1
We need:
whe
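The query itself is cut off above, so this is only an illustration of one plausible reading of the requirement (latest PERI and largest FAMI per cuiT/cuiL pair), run against the sample rows using sqlite3 as a stand-in; the thread's actual target is PostgreSQL 8.1.

```python
import sqlite3

# In-memory stand-in for the APORTES sample rows shown above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE APORTES (cuiT TEXT, cuiL TEXT, PERI TEXT, FAMI INTEGER)")
rows = [("1", "a", "200608", 0), ("1", "a", "200601", 2), ("1", "b", "200607", 3),
        ("1", "c", "200605", 4), ("2", "a", "200605", 9), ("2", "c", "200604", 4),
        ("2", "b", "200608", 1)]
conn.executemany("INSERT INTO APORTES VALUES (?,?,?,?)", rows)

# One plausible reading: per (cuiT, cuiL), the latest PERI and largest FAMI.
for r in conn.execute("""SELECT cuiT, cuiL, MAX(PERI), MAX(FAMI)
                         FROM APORTES GROUP BY cuiT, cuiL
                         ORDER BY cuiT, cuiL"""):
    print(r)
```

For cuiT=1, cuiL=a this yields PERI 200608 and FAMI 2, which matches the "independent of that cuiT" grouping described above.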
Bucky,
I don't know why I missed this the first time - you need to let bonnie++
pick the file size - it needs to be 2x memory or the results you get will
not be accurate.
In this case you've got a 1GB file, which nicely fits in RAM.
- Luke
On 8/15/06 6:56 AM, "Bucky Jordan" <[EMAIL PROTECTED]>
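Luke's rule of thumb above (test file = 2x physical RAM, so the page cache can't absorb the workload) is trivial to compute. A sketch; the -s flag taking megabytes is an assumption based on common bonnie++ 1.03 usage, and the mount point is hypothetical:

```python
def bonnie_size_mb(ram_mb):
    """The rule above: the bonnie++ test file must be 2x physical RAM
    so results reflect the disk, not the OS page cache."""
    return 2 * ram_mb

ram_mb = 1024   # example: the 1GB-RAM case criticized above
size = bonnie_size_mb(ram_mb)
# -s takes megabytes in bonnie++ 1.03 (assumption); -d is the test directory.
print(f"bonnie++ -d /mnt/test -s {size}")   # bonnie++ -d /mnt/test -s 2048
```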
By the way, can you please post a link to that thread?
On 8/15/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
See the recent thread about how old rules of thumb for shared_buffers
are now completely bunk. With 4G of memory, setting shared_buffers to 2G
could easily be reasonable. The OP really need
I tried setting it to 2GB and postgres wouldn't start. Didn't
investigate in much greater detail as to why it wouldn't start, but
after switching it back to 1GB it started fine.
On 8/15/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
See the recent thread about how old rules of thumb for shared_buffers
On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:
> On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
> >Are 'we' sure that such a setup can't lose any data?
> Yes. If you check the archives, you can even find the last time this was
> discussed...
I looked last night (coi
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
> On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:
> > On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
> > >Wow, interesting. IIRC, XFS is lower performing than ext3,
> > For xlog, maybe. For data, no. Both ar
On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:
Are 'we' sure that such a setup can't lose any data?
Yes. If you check the archives, you can even find the last time this was
discussed...
The bottom line is that the only reason you need a metadata journalling
filesystem is to s
On Tue, Aug 15, 2006 at 11:25:24AM -0500, Jim C. Nasby wrote:
Well, if the controller is caching with a BBU, I'm not sure that order
matters anymore, because the controller should be able to re-order at
will. Theoretically. :) But this is why having some actual data posted
somewhere would be grea
On Tue, Aug 15, 2006 at 03:43:29PM +, Sebastián Baioni wrote:
> Hi Nark, thanks for your answer.
>
> It's expected to return 1,720 rows (of 80,471 that match with condition WHERE
> T.cuiT='12345678901')
>
> We have indexes by :
> uesapt000: cuiT, cuiL, PERI;
> uesapt001: cuiL, PERI;
> uesapt002: cuiT, PERI;
On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:
> On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:
> >Wow, interesting. IIRC, XFS is lower performing than ext3,
>
> For xlog, maybe. For data, no. Both are definitely slower than ext2 for
> xlog, which is another reason
On Mon, Aug 14, 2006 at 01:03:41PM -0400, Michael Stone wrote:
> On Mon, Aug 14, 2006 at 10:38:41AM -0500, Jim C. Nasby wrote:
> >Got any data to back that up?
>
> yes. that I'm willing to dig out? no. :)
Well, I'm not digging hard numbers out either, so that's fair. :) But it
would be very handy
On Thu, Aug 10, 2006 at 10:20:45AM +0100, Sue Fitt wrote:
> Well they don't necessarily have the same value!
>
> It's a dictionary with cross-referenced words, e.g. 'bring' and
> 'brought' are both headwords in the dictionary, but 'brought' is
> cross-referenced to 'bring'. So, the table stores
Hi Nark, thanks for your answer.
It's expected to return 1,720 rows (of 80,471 that match with condition WHERE
T.cuiT='12345678901')
We have indexes by :
uesapt000: cuiT, cuiL, PERI;
uesapt001: cuiL, PERI;
uesapt002: cuiT, PERI;
We usually make a vacuum analyze and reindex of every table, and we
Can you provide an EXPLAIN ANALYZE of the query in PG? Have you
analyzed the PG database? How many rows is this query expected to
return? Which version of PG are you running? What indexes have you
defined?
-- Mark
On Tue, 2006-08-15 at 14:38 +, Sebastián Baioni wrote:
> Hello, I'm migrating
On Thu, Aug 10, 2006 at 09:30:35AM +0200, Michal Taborsky - Internet Mall wrote:
> Tom Lane napsal(a):
> >Michal Taborsky - Internet Mall <[EMAIL PROTECTED]> writes:
> >>SELECT product.product_id
> >> FROM action
> >> JOIN product ON (product.product_id=action.product_id)
> >> WHERE action.sho
On Thu, Aug 10, 2006 at 10:23:55AM +0100, Richard Huxton wrote:
> Carl Youngblood wrote:
> >- I noticed that there are six different postmaster daemons running.
> >Only one of them is taking up a lot of RAM (1076m virtual and 584m
> >resident). The second one is using 181m resident while the other
Hello, I'm migrating from MS SQL Server to PostgreSQL 8.1 and I have a serious problem:
Table: APORTES - Rows: 9,000,000 (9 million)
* cuiT (char 11)
* cuiL (char 11)
* PERI (char 6)
* FAMI (numeric 6)
I need all the cuiLs whose max(PERI) are from a cuiT, and the Max(FAMI) of those cuiLs, so the sentence is
...
I see you are running bonnie++ version 1.93c. The numbers it reports are
very different from version 1.03a, which is the one everyone runs - can
you post your 1.03a numbers from bonnie++?
...
Luke,
Thanks for the pointer. Here's the 1.03 numbers, but at the moment I'm
only able to run them on