* Andy <[EMAIL PROTECTED]> wrote:
>I have the following problem: I have a client to which we send every
>night a "dump" of the database that contains only their data. It is a
>stupid solution but I chose it because I couldn't find anything better.
>The target machine runs Windows 2003.
On Oct 11, 2005, at 10:54 AM, Claus Guttesen wrote:
> Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on
> amd64 (or both)?
It applies to FreeBSD >= 5.0.
However, I have not been able to get a real answer from the FreeBSD
hacker community on what the max buffer space usage will be.
Well - to each his own I guess - we did extensive testing on 1.4, and
it refused to allocate much past 1gig on both Linux x86/x86-64 and
Windows.
Alex
On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
> but I was more thinking 1.4 which many folks are still using.
> > > Apparently this formula is no longer relevant on the FreeBSD systems as
> > > it can cache up to almost all the available RAM. With 4GB of RAM, one
> > > could specify most of the RAM as being available for caching, assuming
> > > that nothing but PostgreSQL runs on the server -- certainly 1/
Alex Turner wrote:
> Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
> but I was more thinking 1.4, which many folks are still using.
The 1.4.x JVMs will also work just fine with much more than 1GB of
memory. Perhaps you'd like to try again?
-- Alan
On Tue, 2005-10-11 at 16:54 +0200, Claus Guttesen wrote:
> > > I have a postgresql 7.4.8-server with 4 GB ram.
> > > #effective_cache_size = 1000  # typically 8KB each
> > >
> > > This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> > > changed it to:
> > >
> > > effective_cache_size = 27462  # typically 8KB each
Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
but I was more thinking 1.4 which many folks are still using.
Alex
On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> Realise also that unless you are running the 1.5 x86-64 build, java
> will not use more than 1Gig.
> > I have a postgresql 7.4.8-server with 4 GB ram.
> > #effective_cache_size = 1000  # typically 8KB each
> >
> > This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> > changed it to:
> >
> > effective_cache_size = 27462  # typically 8KB each
>
> Apparently this formula is no longer relevant on the FreeBSD systems as
> it can cache up to almost all the available RAM.
Alex Turner wrote:
Realise also that unless you are running the 1.5 x86-64 build, java
will not use more than 1Gig, and if the app server requests more than
1gig, Java will die (I've been there) with an out of memory error, even
though there is plenty of free mem available. This can easily be
caused by a lazy GC thread.
Oops folks,
Indeed there were 2 important indexes missing. Now it runs about 10 times
faster. Sorry for the trouble caused :) and thanks for the help.
Hash IN Join (cost=3307.49..7689.47 rows=30250 width=6) (actual
time=227.666..813.786 rows=56374 loops=1)
Hash Cond: ("outer".id_order = "inner"
"Andy" <[EMAIL PROTECTED]> writes:
> EXPLAIN ANALYZE
> DELETE FROM report WHERE id_order IN
> ...
> Hash IN Join (cost=3532.83..8182.33 rows=32042 width=6) (actual
> time=923.456..2457.323 rows=59557 loops=1)
> ...
> Total runtime: 456718.658 ms
So the runtime is all in the delete triggers.
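To make the arithmetic behind that conclusion explicit: the Hash IN Join itself
accounts for only about 2.5 seconds of the 457-second total, so roughly 454
seconds are spent in the AFTER triggers that implement the foreign-key checks
and cascades:

-- time not accounted for by the plan itself (values taken from the
-- EXPLAIN ANALYZE output quoted above)
SELECT 456718.658 - 2457.323 AS ms_spent_in_triggers;   -- about 454261 ms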
On Tue, 2005-10-11 at 09:41 +0200, Claus Guttesen wrote:
> I have a postgresql 7.4.8-server with 4 GB ram.
>
> > #effective_cache_size = 1000  # typically 8KB each
>
> This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> changed it to:
>
> effective_cache_size = 27462
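For reference, redoing the arithmetic on that value (nothing FreeBSD-specific
here, just the 8 kB page size from the comment): 27462 pages is roughly 214 MB,
a small fraction of the 4 GB of RAM, which is why the hibufspace formula is
questioned elsewhere in this thread:

-- 27462 pages * 8 kB per page, expressed in MB
SELECT 27462 * 8192 / (1024 * 1024) AS effective_cache_mb;   -- about 214 MB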
We run the DB on a Linux system. The client has a Windows system. The
application is almost the same (so the database structure is 80% the same).
The difference is that the client does not need all the tables.
So, in the remaining tables there is a lot of extra data that doesn't
belong to this client.
On Tue, Oct 11, 2005 at 10:47:03AM +0300, Andy wrote:
> So, I have a replica with only the tables that I need to send, then I
> make a copy of this replica, and from this copy I delete all the data
> that is not needed.
>
> How can I speed up this DELETE procedure? It is really slow.
On 10/11/05 8:05 AM, "Andy" <[EMAIL PROTECTED]> wrote:
>> Do you have foreign key relationships that must be followed for cascade
>> delete? If so, make sure that you have indices on them.
> Yes I have such things. Indexes are on these fields. To be honest this
> delete is taking the longest time, but it involves about 10 tables.
Are you running any type
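A sketch of the kind of index being suggested there; the table and column names
below are hypothetical, since the actual schema isn't shown in the thread, but
the point is that for a cascading delete the index belongs on the referencing
column of each child table, not only on the table being deleted from:

-- Hypothetical child table "invoice" referencing report(id) with
-- ON DELETE CASCADE; the cascade trigger looks up child rows by id_report.
CREATE INDEX invoice_id_report_idx ON invoice (id_report);
-- Without such an index, every deleted "report" row forces a sequential
-- scan of "invoice"; with it, the trigger can use an index scan, which
-- matches the later report in the thread that adding two missing indexes
-- made the delete about 10 times faster.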
On 10/11/05 3:47 AM, "Andy" <[EMAIL PROTECTED]> wrote:
> Hi to all,
>
> I have the following problem: I have a client to which we send every night a
> "dump" of the database that contains only their data. It is a
> stupid solution but I chose it because I couldn't find anything better.
Hi to all,
I have the following problem: I have a client to
which we send every night a "dump" of the database that contains only
their data. It is a stupid solution but I chose it because I
couldn't find anything better. The target machine runs Windows 2003.
So, I have a replica with only the tables that I need to send, then I make
a copy of this replica, and from this copy I delete all the data that is
not needed.
I have a postgresql 7.4.8-server with 4 GB ram.
> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1000 # min 100, ~50 bytes each
If you do a vacuum verbose (when it's convenient) the last couple of
lines will tell you something like this:
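As a sketch of what to run and what to look for, assuming stock 7.4 behaviour
(the exact figures in your output will of course differ): if the free-space-map
summary says more pages are needed than max_fsm_pages allows, or the relation
count is close to max_fsm_relations, raise those settings and restart.

-- Run against the whole database (as a superuser); the INFO/DETAIL lines at
-- the very end summarise free-space-map usage: how many relations and pages
-- are tracked versus the configured max_fsm_relations / max_fsm_pages.
VACUUM VERBOSE;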
On Mon, 2005-10-10 at 14:57 +0200, Stef wrote:
> Is there any way to achieve better compression?
You can use XML schema-aware compression techniques, but PostgreSQL
doesn't know about those. You have to do it yourself, or translate the
XML into an infoset-preserving form that will still allow XPath queries.