Erik Jones wrote:
P.S. To whoever said that Gentoo is for single users running cutting
edge software: poppycock.
That was me. Andrus said in a former post on this thread:
>> I have ... no experiences on Linux.
I stand by my assertion that his company should not be running Gentoo in
a pr
Adam Seering wrote:
Hi,
I'm new here, apologies if this is the wrong list...
I'm playing around with storing custom preprocessed data structures
within Large Objects. I'd like to be able to write a custom function
that will, within a query, let me select out particular bytestrings fr
Hello Scott, thanks for your answer. I've just noticed that my first message
lacked some important info.
First, this is an accounting software, and there's only one database. Almost
all of the options (buttons, generally ) are stored in a set of tables,
beside the database privileges needed to wor
Hi,
I'm new here, apologies if this is the wrong list...
I'm playing around with storing custom preprocessed data structures
within Large Objects. I'd like to be able to write a custom function
that will, within a query, let me select out particular bytestrings from
the middle of a L
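A minimal sketch (not the poster's function) of pulling a byte range out of an
existing large object with the built-in server-side LO calls; the OID 12345 is
made up, 262144 is the INV_READ open mode, and the descriptor is assumed to be
0 because it is the first one opened in this transaction:
BEGIN;
SELECT lo_open(12345, 262144);   -- open for reading, returns a descriptor
SELECT lo_lseek(0, 100, 0);      -- seek descriptor 0 to byte offset 100
SELECT loread(0, 16);            -- read 16 bytes from that position (bytea)
COMMIT;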
On Fri, Nov 14, 2008 at 10:57 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Yeah, but note that the planner knows darn well that this will be an
> expensive query --- 493427.14 cost units estimated to fetch 2 rows!
>
> My interpretation is that the condition on user_id is horribly
> nonselective (at
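A hedged follow-up sketch, not something from the thread: if url_encrypted is
highly selective on its own, an index that leads with it (alone or paired with
user_id) spares the executor from wading through every row of a very common
user_id. The index names are invented:
CREATE INDEX idx_books_url_encrypted ON books (url_encrypted);
-- or, to cover both predicates in one index:
CREATE INDEX idx_books_userid_url ON books (user_id, url_encrypted);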
On Thu, Nov 13, 2008 at 5:30 PM, Fernando Moreno <[EMAIL PROTECTED]> wrote:
> Hi, I'm working on a little backup utility for a desktop application. It's
> going to execute pg_dumpall (-r) and pg_dump, but first I have to deal with
> the permissions needed to do that:
>
> 1. Users (pgsql roles) enab
Hi Ho!
--- On Thu, 11/13/08, Brent Wood <[EMAIL PROTECTED]> wrote:
> You need to use a self relation, not a group by, as no data
> are being aggregated into a new single value, which is what
> the group by achieves.
>
> This joins a table to itself, so that columns in it can be
> replicated. The
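A small illustration of the self-join Brent describes; the table and column
names are hypothetical. Each row is paired with another row of the same table
so both rows' columns appear side by side, with nothing aggregated:
SELECT a.id, a.value AS this_value, b.value AS related_value
FROM mytable a
JOIN mytable b ON b.parent_id = a.id;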
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> On Thu, Nov 13, 2008 at 2:04 PM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
>> explain analyze SELECT alias, id, title, private_key, aliasEntered
>> FROM books
>> WHERE user_id = 'MYUSER' AND url_encrypted =
>> 'bed59c2f2d92e306d4481a276bd8a78c4b0532d4'
"Dan Armbrust" <[EMAIL PROTECTED]> writes:
> Why did those particular tables and indexes take _so_ long to vacuum?
> Perhaps we have a disk level IO problem on this system?
FWIW, I agree with Scott that you seem to have an overstressed I/O
system. It's hard to tell why from here.
> Can someone t
Hi, I'm working on a little backup utility for a desktop application. It's
going to execute pg_dumpall (-r) and pg_dump, but first I have to deal with
the permissions needed to do that:
1. Users (pgsql roles) enabled to backup would be superusers all the time.
This sounds insecure.
2. Users will
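One hedged preliminary check such a utility could run before shelling out to
pg_dumpall -r (dumping roles reads pg_authid, which normally requires
superuser rights):
SELECT rolsuper FROM pg_roles WHERE rolname = current_user;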
On Thu, Nov 13, 2008 at 4:08 PM, Dan Armbrust
<[EMAIL PROTECTED]> wrote:
> I have a system backed by a PostgreSQL DB at a customer site that
> mysteriously slowed way down - and couldn't keep up with the load for
> no apparent reason.
>
> I had them run a vacuum analyze verbose on my database, and
What's the simplest way to grant read-only access to an existing
database? One approach I guess would be to create a user who has
SELECT but not INSERT etc privileges. But it appears that GRANT SELECT
does not work at the schema or database level. This means I'd not only
have to create hundreds of
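A hedged sketch for the 8.x situation where there is no database-wide GRANT
SELECT: generate one GRANT per table and feed the output back to psql. The
role name readonly_user is made up:
SELECT 'GRANT SELECT ON ' || quote_ident(schemaname) || '.'
       || quote_ident(tablename) || ' TO readonly_user;'
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');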
Phoenix Kiula escribió:
> >>
> >> Index Scan using new_idx_books_userid on books (cost=0.00..493427.14
> >> rows=2 width=31) (actual time=0.428..8400.299 rows=1 loops=1)
> >> Index C
I have a system backed by a PostgreSQL DB at a customer site that
mysteriously slowed way down - and couldn't keep up with the load for
no apparent reason.
I had them run a vacuum analyze verbose on my database, and had these
lines come back which made me suspicious:
INFO: index "ix_cpe_ispid" n
On Thursday 13 November 2008, "glok_twen" <[EMAIL PROTECTED]> wrote:
> INFO: vacuuming "public.monthly_res_01"
>
> ERROR: could not read block 43775860 of relation 1663/11511/24873: read
> only 4096 of 8192 bytes
>
> ERROR: could not read block 43775860 of relation 1663/11511/24873: read
> only
Thanks Scott. Responses below.
>>
>> (1) The culprit SELECT sql is (note that "MYUSER" in this example can
>> be an IP address) --
>
> So, it can be, but might not be? Darn, If it was always an ip I'd
> suggest changing types.
>
Yes, it can either be a registered USER ID or an IP address. I
I have a big table - about 450GB each of data and index.
I keep getting a hint saying the database needs a vacuum:
WARNING: database "postgres" must be vacuumed within 10970738 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"postgres".
however when I run
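A sketch of what is usually checked next (assuming a plain wraparound
warning): see how close each database is to the transaction-ID limit, then run
an ordinary database-wide VACUUM in the one the warning names; VACUUM FULL is
not needed to advance the limit.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
-- then, connected to the database the warning names:
VACUUM;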
On Thu, Nov 13, 2008 at 1:09 PM, David Wilson <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 13, 2008 at 2:03 PM, <[EMAIL PROTECTED]> wrote:
>> I have several tables that when I run VACUUM FULL on, they are under 200k,
>> but after a day of records getting added they grow to 10 to 20 megabytes.
>> They
On Thu, Nov 13, 2008 at 2:04 PM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 14, 2008 at 3:10 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>> On Thu, Nov 13, 2008 at 11:59 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
>>> Hi.
>>>
>>> I had tweaked my PG 8.2.6 with the very kind help of th
On Fri, Nov 14, 2008 at 3:10 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 13, 2008 at 11:59 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
>> Hi.
>>
>> I had tweaked my PG 8.2.6 with the very kind help of this list a
>> couple years ago. It has been working fine, until recently. Not sur
What is the best way to control fast growth in my database?
Actually my postgresql.conf has this:
# - Checkpoints -
checkpoint_segments = 15    # in logfile segments, min 1, 16MB each
checkpoint_timeout = 5min # range 30s-1h
#checkpoint_warning = 30s # 0 is
On Thu, Nov 13, 2008 at 7:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
>
> Other alternatives worth reading about:
>RESET work_mem
>SET LOCAL work_mem
>
nice, thanks :)
--
GJ
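Illustrating the two alternatives Tom points at (the 256MB figure is only an
example): SET LOCAL confines the change to the current transaction, RESET puts
back the configured default.
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the memory-hungry report here ...
COMMIT;
-- or, in a plain session:
SET work_mem = '256MB';
-- ... queries ...
RESET work_mem;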
On Thu, Nov 13, 2008 at 2:03 PM, <[EMAIL PROTECTED]> wrote:
> I have several tables that when I run VACUUM FULL on, they are under 200k,
> but after a day of records getting added they grow to 10 to 20 megabytes.
> They get new inserts and a small number of deletes and updates.
>
> seq_scan
Sam Mason <[EMAIL PROTECTED]> writes:
> On Thu, Nov 13, 2008 at 02:59:34PM +0000, Grzegorz Jaśkiewicz wrote:
>> so how do I change it back to default (without knowing what the previous val
>> was). I suppose having it in a transaction won't do :P
> If by "default" you mean whatever was in the con
Ok, I work at a hosting company that runs Gentoo as its main host
operating system, so here's the skinny on the current status of
PostgreSQL under portage.
Up until just a few months ago the needed packages were called libpq
and postgresql (go figure). These were *not* slotted and so did n
That is the expected behavior. Postgres doesn't give back disk space, much as
Java doesn't give back memory. It keeps a map of where the free space is so it
can use it again.
It does all this so it doesn't have to lock the table to compact it when
VACUUMing. VACUUM FULL does lock the table to compact it.
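A minimal illustration (the table name is hypothetical): plain VACUUM records
dead space for reuse without locking the table or shrinking the file, while
VACUUM FULL compacts and returns disk but takes an exclusive lock.
VACUUM ANALYZE mytable;
VACUUM FULL mytable;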
On Thu, Nov 13, 2008 at 02:03:22PM -0500, [EMAIL PROTECTED] wrote:
> I have several tables that when I run VACUUM FULL on, they are under 200k,
> but after a day of records getting added they grow to 10 to 20 megabytes.
> They get new inserts and a small number of deletes and updates.
>
> A norma
I am somewhat new to Postgresql and am trying to figure out if I have a
problem here.
I have several tables that when I run VACUUM FULL on, they are under 200k,
but after a day of records getting added they grow to 10 to 20 megabytes.
They get new inserts and a small number of deletes and update
On Thu, Nov 13, 2008 at 02:59:34PM +0000, Grzegorz Jaśkiewicz wrote:
> On Wed, Nov 12, 2008 at 11:08 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> > Generally, if it's only a report or two that
> > need a lot more working memory for sorts, you can do this at the beginning
> > of them instead:
> >
>
changing it; I've applied a patch for that. I'm still not quite
convinced that Ivan isn't seeing some other issue though.
Thank you
In the meantime, I noticed something odd while experimenting with your
test case: when running with default maintenance_work_mem = 16MB,
there is a slowdown of 3
On Thu, Nov 13, 2008 at 11:59 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> Hi.
>
> I had tweaked my PG 8.2.6 with the very kind help of this list a
> couple years ago. It has been working fine, until recently. Not sure
> if it is after the update to 8.3 or because my DB has been growing,
> but th
Hi.
I had tweaked my PG 8.2.6 with the very kind help of this list a
couple years ago. It has been working fine, until recently. Not sure
if it is after the update to 8.3 or because my DB has been growing,
but the db is very slow now and the cache doesn't seem enough.
~ > free -m
total used free
2008/11/13 Scott Marlowe <[EMAIL PROTECTED]>:
> On Thu, Nov 13, 2008 at 7:59 AM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]>
> wrote:
>>
>>
>> On Wed, Nov 12, 2008 at 11:08 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
>>>
>>> Be advised that the work_mem setting (and its deprecated alias sort_mem)
>>> ar
On Thu, Nov 13, 2008 at 7:59 AM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]> wrote:
>
>
> On Wed, Nov 12, 2008 at 11:08 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
>>
>> Be advised that the work_mem setting (and its deprecated alias sort_mem)
>> are on a per-client basis. So if you have a bunch of peopl
On Thu, Nov 13, 2008 at 10:18:56AM -0500, Kevin Duffy wrote:
> This worked where E: is on the database server
> copy imagineoptions
>from 'E:\\ORGDAT~1\\data\\xxxPositions\\20081112_Options.csv'
> DELIMITERS ',' CSV ;
>
>
> This does not work fileprint-01 is a different server.
> copy ima
On Thu, Nov 13, 2008 at 01:56:11PM +0100, Markus Wollny wrote:
> Sam Mason wrote:
> > You may have some luck with increasing the statistics target on the
> > entry_id and last_updated columns and re-ANALYZING the table. Then
> > again, the fact that it thinks it's only going to get a single row
>
Christian Schröder wrote:
we have a PostgreSQL 8.2 database on an xfs filesystem. By mistake the
first blocks of this filesystem were overwritten. An xfs_repair
reconstructed the superblock and also found many orphaned files and
directories. Actually, all we have on the filesystem now is in
"l
On Wed, Nov 12, 2008 at 11:08 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> Be advised that the work_mem setting (and its deprecated alias sort_mem)
> are on a per-client basis. So if you have a bunch of people running reports
> with that setting, you might discover your server running out of memor
I am trying to develop a trigger that will post a new account into a
table in another db sing dblink that is part of the egroupware web app
that uses tripledes as the algorithm. I can't seem to find a combination
for gen_salt that produces the correct crypt password, however, my
knowledge in this a
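A hedged pgcrypto sketch, assuming the contrib module is installed: crypt()
with an extended-DES salt. Whether any of the supported salt types ('des',
'xdes', 'md5', 'bf') matches what eGroupware stores as "tripledes" is exactly
the open question here.
SELECT crypt('secret', gen_salt('xdes'));
-- a stored hash can later be verified with: crypt('secret', stored_hash) = stored_hash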
Teodor Sigaev <[EMAIL PROTECTED]> writes:
>> We could extend IndexBuildHeapScan's API to support that, but I'm
>> not quite convinced that this is the issue.
> That extension might be useful for bitmap indexes too, to simplify the
> index-creation process.
Maybe, but in any case the measurable GIN s
This worked where E: is on the database server
copy imagineoptions
from 'E:\\ORGDAT~1\\data\\xxxPositions\\20081112_Options.csv'
DELIMITERS ',' CSV ;
This does not work fileprint-01 is a different server.
copy imagineoptions
from
\\fileprint-01\Company\xxx\Benchmarking\xxxPositions\2008
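The likely reason, hedged: a server-side COPY resolves its path on the
database server, under the account the PostgreSQL service runs as, so a share
on fileprint-01 must be reachable by that account. A client-side alternative
is psql's \copy, which reads the file where psql runs and streams it to the
server (the directory below is a placeholder):
\copy imagineoptions from 'C:/local/copy/20081112_Options.csv' with delimiter ',' csv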
On Thu, 13 Nov 2008 09:11:05 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> Teodor Sigaev <[EMAIL PROTECTED]> writes:
> >> Yeah, I'm not convinced either. Still, Teodor's theory should
> >> be easily testable: set synchronize_seqscans to FALSE and see
> >> if the problem goes away.
>
> > Test suit
Where can I get ready binaries which can be copied and run on Gentoo? Are
there any how-to pages or tips about compiling PostgreSQL from source on
Gentoo?
This would do more harm than good so *don't*.
How can compiling PostgreSQL 8.3.x from source on Gentoo do more harm than
good?
I expec
On Wed, 12 Nov 2008 21:33:26 -0500, Greg Smith wrote:
> You need to get one of the experimental builds that include slotted
> support. Take a look at
> http://overlays.gentoo.org/proj/postgresql/wiki/ExperimentalFAQ and the
> page linked to by the blog article you mentioned at
> http://dev-zero.c
No more messages please..
Holger Hoffstaette escribió:
On Thu, 13 Nov 2008 11:41:35 +0200, Andrus wrote:
Greg,
You need to get one of the experimental builds that include slotted
support. Take a look at
http://overlays.gentoo.org/proj/postgresql/wiki/ExperimentalFAQ and the
page linked to
Teodor Sigaev <[EMAIL PROTECTED]> writes:
>> Yeah, I'm not convinced either. Still, Teodor's theory should be easily
>> testable: set synchronize_seqscans to FALSE and see if the problem goes
>> away.
> Test suite to reproduce the problem:
I don't doubt that you're describing a real effect, I'm j
On Thu, 13 Nov 2008 11:41:35 +0200, Andrus wrote:
> Greg,
>
>> You need to get one of the experimental builds that include slotted
>> support. Take a look at
>> http://overlays.gentoo.org/proj/postgresql/wiki/ExperimentalFAQ and the
>> page linked to by the blog article you mentioned at
>> http:
We could extend IndexBuildHeapScan's API to support that, but I'm
not quite convinced that this is the issue.
That extension might be useful for bitmap indexes too, to simplify the
index-creation process.
--
Teodor Sigaev E-mail: [EMAIL PROTECTED]
Yeah, I'm not convinced either. Still, Teodor's theory should be easily
testable: set synchronize_seqscans to FALSE and see if the problem goes
away.
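The experiment Tom suggests can be run per session before repeating the index
build:
SET synchronize_seqscans = off;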
Test suite to reproduce the problem:
DROP TABLE IF EXISTS foo;
DROP TABLE IF EXISTS footmp;
CREATE OR REPLACE FUNCTION ge
On Wed, 12 Nov 2008 19:47:15 -0500, Guy Rouillier wrote:
> To answer your question directly, you won't find a prepackaged solution to
> running simultaneous version of PG (or any other software package) on
> Gentoo. That's not how Gentoo is designed to be used. Having said that,
You are contrad
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> wrote:
>> I see. So this could explain Ivan's issue if his table contains
>> large numbers of repeated GIN keys. Ivan, is that what your data
>> looks like?
> Well if by GIN keys you mean lexemes it could be. But I
"Tony Fernandez" <[EMAIL PROTECTED]> writes:
> I am getting the following error:
> :14: PGRES_FATAL_ERROR load '$libdir/xxid'; - ERROR:
> incompatible library "/usr/lib/pgsql/xxid.so": missing magic block
You need a version of xxid.so that matches your server version, on
each server.
It might w
brian <[EMAIL PROTECTED]> writes:
> Yes, this one got me, also. Strangely, you need to do:
> select iterate('{1,2}');
In reasonably modern versions of PG you could use an array constructor:
select iterate(array[1,2,3]);
regards, tom lane
--
Sent via pgsql-general mailin
Hi!
Sam Mason wrote:
> You may have some luck with increasing the statistics target on the
> entry_id and last_updated columns and re-ANALYZING the table. Then
> again, the fact that it thinks it's only going to get a single row
> back when it searches for the entity_id suggests that it's all a b
Hi folks,
I was wondering about pg_lesslog integration in our architecture based on WAL
shipping, using pg_standby (thanks Simon) as the recovery tool. The main target
is to reduce WAL file accumulation on the "master" host (thus consuming disk
space) in case of slave unavailability.
As far as I've read the
Hello,
When changing a view in my mod_perl (mason) application I typically get
this error if I don't restart apache:
"DBD::Pg::st execute failed: ERROR: cached plan must not change result
type"
Is there a way to avoid having to restart apache?
Thanks,
--
http://www.critikart.net
--
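One hedged thought, assuming 8.3 and persistent connections held open by
mod_perl: the cached plans live in server-side prepared statements, so clearing
session state between requests (as connection poolers do) avoids the error,
provided DBD::Pg re-prepares its statements afterwards. Telling DBD::Pg to skip
server-side prepares for the affected statements (pg_server_prepare => 0) is
another option.
DISCARD ALL;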
On Wed, 12 Nov 2008 15:18:05 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> > So, in that case process can insert about 1000 ItemPointers per
> > one data tree lookup, in opposite case it does 1000 lookups in
> > data tree.
> I see. So this could explain Ivan's issue if his table contains
> large n
--- On Wed, 12/11/08, Tony Fernandez <[EMAIL PROTECTED]> wrote:
> Date: Wednesday, 12 November, 2008, 10:52 PM
> Hello lists,
>
>
>
> I am trying to run Slony on a Master Postgres 8.1.11
> replicating to a
> Slave same version and 2nd Slave Postgres 8.3.4.
>
> I am getting the following err
On Wed, Nov 12, 2008 at 04:15:23PM +0100, Markus Wollny wrote:
> I've got this simple query
>
> SELECT image_id
> FROM image_relation
> WHERE entity_id = 69560::integer
> ORDER BY last_updated DESC
> LIMIT 1;
>
> which currently runs for something around 600ms. Here's the explain analyze
> o
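A hedged sketch of Sam's earlier suggestion: raise the per-column statistics
targets (the value 1000 is only an example) and re-analyze, so the planner's
single-row estimate for entity_id becomes more realistic.
ALTER TABLE image_relation ALTER COLUMN entity_id SET STATISTICS 1000;
ALTER TABLE image_relation ALTER COLUMN last_updated SET STATISTICS 1000;
ANALYZE image_relation;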
Greg,
You need to get one of the experimental builds that include slotted
support. Take a look at
http://overlays.gentoo.org/proj/postgresql/wiki/ExperimentalFAQ and the
page linked to by the blog article you mentioned at
http://dev-zero.ch/blog/archives/6-PostgreSQL-Status.html
Thank you.
Thanks for your help,
but I think that is not what I want. :D
If you use sum like this, it must be with GROUP BY;
what I want is to do the sum over all rows, not grouped.
thanks
regards,
On Thu, 2008-11-13 at 19:27 +1100, Russell Smith wrote:
> Yi Zhao wrote:
> > I want to select some column(a, b) from the table with
Hi!
In preparation for my upcoming upgrade to PostgreSQL 8.3.5, I have taken the
opportunity to try this scenario on a test machine with the latest PostgreSQL
version. Unfortunately the result remains the same, though this database has
been just reloaded from a dump and vacuum analyzed. select
Yi Zhao wrote:
> I want to select some column(a, b) from the table with the specified
> condition, so, i can do like this:
> select a, b from mytable where id = (select id from temptable where
> tname = 'df' ) and stype = 'def' and range = 'afk'
>
How about:
SELECT a, b, count(1), sum(c) FROM
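A hedged way to get the per-row columns plus a total over all matching rows
without GROUP BY, written as a scalar subquery (names follow the earlier
message; the extra lookup against temptable is omitted for brevity):
SELECT a, b,
       (SELECT sum(c) FROM mytable
         WHERE stype = 'def' AND range = 'afk') AS total_c
FROM mytable
WHERE stype = 'def' AND range = 'afk';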