On 5/29/2007 10:19 AM, Ed L. wrote:
> On Wednesday 23 May 2007 1:04 pm, George Pavlov wrote:
> FWIW, I've also been seeing this sort of query log corruption for
> as long as I can remember, 7.1 through 8.2, HPUX (parisc, ia64),
> Linux on intel, amd...
Do you have any tricks for
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> "George Pavlov" <[EMAIL PROTECTED]> writes:
> > On 5/29/2007 10:19 AM, Ed L. wrote:
> >> FWIW, I've also been seeing this sort of query log corruption for
> >> as long as I can remember, 7.1 through
From: Tom Lane [mailto:[EMAIL PROTECTED]
> "George Pavlov" <[EMAIL PROTECTED]> writes:
> > ... Also redirect_stderr = on.
>
> Hm. Well, that's the bit that ought to get you into the PIPE_BUF
> exception. There's been some speculation that a change
> From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
>
> In those rare cases wouldn't it make more sense to just set
> enable_seqscan to off; run query; set enable_seqscan to on;
1. these cases are not that rare (to me);
2. setting enable_seqscan (in JDBC, say) from the application makes the
whole
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> "George Pavlov" <[EMAIL PROTECTED]> writes:
> > I am curious what could make the PA query to ignore the
> index. What are
> > the specific stats that are being used to make this decision?
>
> you don
> From: Tom Lane
> "George Pavlov" <[EMAIL PROTECTED]> writes:
> >> From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
> >> In those rare cases wouldn't it make more sense to just set
> >> enable_seqscan to off; run query; set enable_seqscan
cesses are writing
at the same time uninterrupted.
Anything else I can do to diagnose?
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of George Pavlov
> Sent: Saturday, June 02, 2007 11:33 AM
> To: Tom Lane
> Cc: Ed L.; pgsql-general@
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> What are the total lengths of the log entries in which you see the
> failure? (The "length" here includes all the lines belonging to a
> single logical entry, eg, ERROR, DETAIL, HINT.)
It is very hard to tease these apart because now that I look at it
> With DROP CASCADE, he can get rid of
> everything within
> the schema at a blow, so this is really pretty close to the same
> functionality.
but beware of cross-schema dependencies! e.g., a DROP SCHEMA CASCADE of
schema X containing a table that has a column defined using a domain from
schema Y w
What's the plan for releasing the next 8.1? There hasn't been a release
since April and there have been fixes. (I personally am particularly
interested in "implement chunking protocol for writes to the syslogger
pipe" because without it over 2/3 of attempts at query analysis fail for
me).
George
as everyone has pointed out, it does not seem like the best table design,
and querying for these fields in the normal course of business does not
seem that great, but if you wanted to audit tables like these once in a
while you could easily do it using your favorite scripting language or SQL
itself. here
i am trying to create a temp table inside a plpgsql function (i just
need a temporary place to hold data, but it is too complex for any other
data structure). unfortunately if i call the function again within the
same session the temp table still exists and the function fails. if i
drop the temp ta
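A common workaround (a sketch only, with hypothetical names): because 8.1-era plpgsql caches plans that reference the temp table's OID, create, use, and drop the table via EXECUTE so the statements are re-planned on each call. Note that DROP TABLE IF EXISTS requires 8.2; on 8.1 you would trap the undefined_table error in an EXCEPTION block instead.

```sql
create or replace function use_scratch() returns void as $$
begin
  -- drop a leftover copy from a previous call in this session
  -- (on 8.1, replace this with an EXCEPTION block catching undefined_table)
  execute 'drop table if exists scratch';
  execute 'create temp table scratch (id int, payload text)';
  -- fill and use scratch via EXECUTE as well, so plans are not cached
  execute 'insert into scratch select 1, ''example''';
end;
$$ language plpgsql;
```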
select
to_char(ts, 'MM/DD/') as "day",
str,
proc,
sum(case when z!=0 then 1 end) as good,
sum(case when z =0 then 1 end) as bad
from foobar
where str != 9
group by 1,2,3
order by 1
;
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf
> From: David Fetter [mailto:[EMAIL PROTECTED]
> On Tue, Sep 11, 2007 at 08:55:53AM -0700, George Pavlov wrote:
> > sum(case when z!=0 then 1 end) as good,
>
> This case statement returns true when z factorial is zero, so I'd
> recommend the SQL standard <>
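For what it's worth, the aggregate reads unambiguously with the standard operator; a sketch, assuming the table and columns from the query above (and assuming the truncated to_char format was meant to carry a year):

```sql
select
to_char(ts, 'MM/DD/YYYY') as "day",
str,
proc,
sum(case when z <> 0 then 1 end) as good,
sum(case when z = 0 then 1 end) as bad
from foobar
where str <> 9
group by 1, 2, 3
order by 1;
```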
> I am trying to create an expression which
> - always yield true if the incomming array is NULL
> - yields true if a given value is in the array, otherwise yields false
>
> I thought this should work:
> Select 'target'=ANY(COALESCE('{indata1, indata2}','{target}'))
>
> but I get an ERROR: op AN
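The usual fix is to make both COALESCE arguments properly typed arrays, so the right-hand side of ANY is unambiguous; a sketch, assuming a text[] value named indata (hypothetical):

```sql
-- true when indata is NULL (COALESCE falls back to a one-element array
-- containing the target), otherwise true only if 'target' is an element
select 'target' = any (coalesce(indata, array['target']));
```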
I started seeing some negative durations in my production query logs --
a "-" sign preceding the duration number, e.g.:
% grep 'duration: -' postgresql-Wed-*
postgresql-Wed-09.log:2008-01-02 08:56:33 PST [11705]: [538-1] LOG:
duration: -268280.421 ms
postgresql-Wed-15.log:2008-01-02 15:01:01 PST [
I am on PG 8.1.10 on RedHat Linux. Some page cache/query performance
questions:
Is there a way to tell whether a query is satisfied from memory cache or
from disk. The only way I know of is based on the time the query takes
(reported by EXPLAIN ANALYZE) -- comparing the first-time performance
(aft
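One indirect measure is the per-table block hit statistics the server keeps (these count hits in PostgreSQL's own shared buffers, not the OS page cache, and on 8.1 require stats_block_level = on); for example:

```sql
-- blocks found in shared_buffers vs. blocks requested from the kernel
select relname, heap_blks_read, heap_blks_hit,
idx_blks_read, idx_blks_hit
from pg_statio_user_tables
order by heap_blks_read + heap_blks_hit desc;
```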
> From: Greg Smith [mailto:[EMAIL PROTECTED]
> Sent: Monday, May 19, 2008 9:03 PM
>
> So, yes, in 8.3 it's possible that you can have sequential
> scans of large
> tables or the VACUUM data pass through the buffer cache, but
> not remain in
> it afterwards. I didn't think George would ever r
This is minor, but I am just curious about the reasons for the asymmetry
between ALTER and DROP with respect to tables vs. views.
* ALTER TABLE seems to work on both tables and views (even though ALTER
VIEW exists, albeit with limited operation support).
* DROP TABLE works only on tables, not on view
> >> In the second place, the reason most of our messages don't already
> >> contain schema names is that in the past we've judged it would be
> >> mostly clutter; and given the infrequency of complaints I see no
> >> reason to change that opinion.
>
> > I tend to disagree. We can run a poll in a
instead of redefining the table (and ending up with two tables pointing
to the same sequence) you could also just call nextval() on the target
sequence when inserting into your temp table -- pretty much the same
thing but seems a bit cleaner.
insert into adresses_temp
select
userid,
addressid,
Why not just try it! Answer: all -t switches after the first one are
ignored. (And, no, "pg_restore --help" does not mention that). However
with -l and -L, you have a much more powerful mechanism for specifying
exactly which objects you want restored.
> -Original Message-
> From: pgsql-ge
What do you all use for query log analysis for Postgres? I feel (and
hope) that there must be something I am missing.
I have tried PQA (http://pqa.projects.postgresql.org/) and it is very
problematic, at least with the kind of application we have. Some of the
problems:
* not aware of prepared sta
> Look into pgfouine on pgFoundry.
> http://pgfoundry.org/projects/pgfouine/
thanks! definitely much better, but still not entirely believable, at
least on first try. e.g. i have a query with 4 conditions in the ORDER
BY. pgfouine reports show the query as having the last 3 of those
repeated 18 t
after an attempt at stress-testing my app i started seeing some
corrupted-looking entries in the postgresql query log. for example:
...
2006-10-10 21:33:24 PDT [31175]: [509-1] LOG: statem2006-10-10 21:33:24
PDT [31166]: [4206-1] LOG: duration: 0.197 ms
ent: EXECUTE [PREPARE: SELECTP.IS_D
> What PG version is this, on what operating system? Do you have
> redirect_stderr enabled?
Sorry, I should have included that:
PostgreSQL 8.1.3 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
3.4.4 20050721 (Red Hat 3.4.4-2)
And yes, redirect_stderr = on. I have no definitive way of repr
Is there any special meaning to preceding a datatype (or at least some
datatypes) in a table or function definition by underscore that is a
synonym for an array? I can't see it documented anywhere. Below are some
examples. The other question is why "_int4" parses to int[], but "_int"
does not, etc.
> Yep, the array type is represented internally by prefixing an
> underscore. It's mentioned somewhere in the docs, but you may as well
> ignore it.
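The mapping is easy to see from psql; a small sketch:

```sql
-- the underscore-prefixed internal name resolves to the array type
select '_int4'::regtype;   -- integer[]
create table t (a _int4);  -- column a has type integer[]
-- "_int" does not resolve because the catalog's base type is named
-- int4; "int" is only a grammar-level alias, not a pg_type entry
```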
Hmm, I am not sure I particularly like this behavior or the "ignore it"
advice. Suppose someone makes a typo in his/her table definition: meant
to c
> On Mon, Oct 16, 2006 at 01:16:34PM -0700, George Pavlov wrote:
> > Hmm, I am not sure I particularly like this behavior or the
> > "ignore it"
> > advice. Suppose someone makes a typo in his/her table
> > definition: meant
> > to create an int4
> Hmm. If the messages are less than PIPE_BUF bytes long (4096 bytes on
> Linux) then the writes are supposed to be atomic. Can you
> check whether
> the interspersal cases involve messages whose total length (all lines)
> exceeds 4K?
Tom,
Some of them involve long messages (>4K), but there ar
several ways to do it, here's one:
check (coalesce(a,b,c) is not null)
if you want one or the other to be present, but not both see this
thread:
http://archives.postgresql.org/pgsql-general/2006-09/msg00207.php
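Spelled out as a table definition (hypothetical columns), the "at least one of these is non-null" constraint looks like:

```sql
create table t (
a int,
b int,
c int,
check (coalesce(a, b, c) is not null)
);
```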
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED
I have seen other posts on this log message before, but it is not clear to
me how/if they apply to me. Opinions appreciated. I am on 8.1.3 on Linux.
I have a
log entry like this:
2006-11-08 12:38:34 PST [3739]: [3-1] LOG: could not truncate directory
"pg_multixact/members": apparent wraparound
Nothing troub
> During crash recovery?
no crashes, just normal DB operation...
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
> "George Pavlov" <[EMAIL PROTECTED]> writes:
> >> During crash recovery?
> > no crashes, just normal DB operation...
>
> Hmm ... what is in pg_multixact/members/ again?
Now there is only a file named 0010, whose date changes about
every 4-5 mi
> For larger tables, you may have to resort to a
> union:
>
>select * from foo where name != 'Other' order by name
>union
>select * from foo where name = 'Other'
Alas, this suggestion is wrong on two counts: (a) UNION expects a single
ORDER BY that applies to the whole recordset and
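A single-scan alternative is to sort on a boolean expression (a sketch; it relies on PostgreSQL sorting false before true):

```sql
-- non-'Other' rows sort first (false), 'Other' rows come last (true)
select * from foo
order by (name = 'Other'), name;
```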
First of all explicitly inserting IDs into your serial column sounds
like a bad idea (in the general case).
Unfortunately, I don't think inheritance can help you with this. Key
quote from the docs: "A serious limitation of the inheritance feature is
that indexes (including unique constraints) and
try this approach:
SELECT
c.id,
c.firstname,
c.lastname,
a.latest_billdate
FROM
customers c
INNER JOIN -- or LEFT if you want the NULLs
(
SELECT
customerid,
max(billdate) as latest_billdate
FROM
ar
GROUP BY
customerid
) a
USING
(customerid)
WHERE
c.status = 'new';
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of George Pavlov
> Sent: Thursday, February 01, 2007 8:53 AM
> To: Demel, Jeff; pgs
Currently "ALTER TABLE ... RENAME TO ..." results in all views that
refer to the table being rewritten with the new table name. This is a
good thing in the general case, but there are also situations where it
is not (e.g. temporarily renaming tables for data reorg reasons). I
can't seem to find a c
a (possibly slightly more user-friendly) alternative to the catalog
table is pg_dump, e.g.:
pg_dump -d your_db_name -t your_table -s | grep 'CREATE INDEX'
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Dmitry Koterov
> Sent: Tuesday, March 2
We have a nightly restart of one PG database. Today it failed and I
can't seem to understand why and how to prevent this in the future (nor
can I reproduce the problem).
We have a line in a shell script that calls "/etc/init.d/postgresql
restart". In the shell script's log from this invocation I h
where num_prods > num_open_issues + num_provisioned + num_canceled
if those columns are nullable (which they don't seem to be) you'd have
to convert the NULLs (i.e. coalesce(num_canceled,0) )
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Ketema
> > So it looks like the STOPPING of the service actually
> > succeeded, albeit
> > it took a while (more than the usual sessions open?). The
> > STARTING is
> > the one that actually failed (is that because the STOP was still in
> > process?). The question is why -- in a RESTART situation
> > w
> Well, that makes sense: if the shutdown took more than a
> minute then the
> "stop" script action would give up waiting, and then the
> "start" action
> would see the postmaster running and go away happy. (It's a bit odd
> that "service start" actions are supposed to treat "already running"
>
no and no. a script looping through all tables (using
information_schema/pg_catalog info) is not that difficult to write
though.
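As a starting point, the catalogs give you the table list to loop over; a sketch:

```sql
-- feed this list to a script that runs a count/audit query per table
select table_schema, table_name
from information_schema.tables
where table_type = 'BASE TABLE'
and table_schema not in ('pg_catalog', 'information_schema');
```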
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Robert James
> Sent: Thursday, May 17, 2007 4:23 PM
> To: pgsql-general@
s).
Thanks,
George
> -Original Message-
> From: George Pavlov
> Sent: Wednesday, October 18, 2006 1:23 PM
> To: Tom Lane
> Cc: [EMAIL PROTECTED]
> Subject: RE: [GENERAL] query log corrupted-looking entries
>
> > the behavior. It'd be interesting to verify whe
seems hard to enforce integrity in your model. how are you going to
ensure that the user's city-state-country combo is a valid one? (well, you
can, but it is a pain). ask yourself: can a city be in more than one
country? probably not (even if the name is the same it is not the same
city!). can a state
I am trying to figure out how the distribution of data affects index
usage by the query because I am seeing some behavior that does not seem
optimal to my uneducated eye.
I am on PG 8.1.8. I have two tables foo and foo_detail, both have been
vacuum analyzed recently. Both have a property_id colum
I have installed PG on Windows XP successfully several times, but now I
have one machine on which the install
fails with an error message: "Failed to set permissions on the installed
files. Please see the logfile at ..." . The log file in question does
not look very useful. It has lines of the type
use this:
$ psql -Uyer_user -dyer_database -fyer_copy_script.sql <
yer_data_file.csv
where yer_copy_script.sql is:
-- -
drop table foo;
create table foo (
c01 varchar,
c02 varchar,
c03 varchar,
c04 varchar,
c05 varchar,
c06 varchar,
c07 varchar,
c08 varchar,
I need to load CSV files that have quotes in data fields that I want to
map to NULLs in the destination table. So if I see ...,"",... that needs
to be mapped to a NULL (in an INTEGER field in this particular case).
Are there any COPY command options that can do that? It seems that PgSQL
COPY expect
> > I need to load CSV files that have quotes in data fields
> > that I want to
> > map to NULLs in the destination table. So if I see
> > ...,"",... that needs
> > to be mapped to a NULL (in an INTEGER field in this
> > particular case).
> > Are there any COPY command options that can do that?
they have the same kind of page setup for pg Admin:
http://pgsql.navicat.com/PG_Admin/index.php
this one renders...
both pages seem to tell robots not to cache them, so I can't view a
cached copy on Google.
> Quotes tend to imply a text field.
I think you meant to say "quotes imply a non-null text field". And, yes,
I am quite aware of that. The point of the thread was to see if there is
any way of avoiding/overriding that assumption.
> Assuming
> you don't want to write a short Perl script to pre-
Not sure what the correct forum for pgxml/xml2 questions is. I was
wondering what is the definition of "valid" that the xml_valid(text)
function that is part of that module uses? It seems different from the
W3C definition of "valid" XML (is there an implicit DTD?) Maybe it is
more akin to "well-for
> I'm sure that this has been asked before but I can't find any
> reference to it in google, and the search facility on
> postgresql.org is currently down.
http://groups.google.com/groups?q=group%3Apgsql.*
provides the same with a slight delay but arguably a better user
interface.
> I have a l
Looking at the information_schema.columns view, I have been wondering why
it only shows the column_default for columns in tables owned by the
current user. This makes things a bit misleading. I am thinking at least
superusers should be able to see it?
This is what I am talking about:
CREATE OR REPLA
Is there any requirement that you have to use REPLACE? Another (possibly
better) way to do the same is by doing:
select translate(your_string,'aeiou','') from your_table;
In my experience that performs much better than regexp. I just ran it on
a table of about 100K random U.S. addresses and TRA
Is there a way to export the output of an arbitrary SQL query to a
quoted CSV file?
It seems that the COPY command can do that for regular tables (COPY foo
TO STDOUT WITH CSV QUOTE AS '"') but not for a query result. I could
create temp tables in a script but it seems that there should be a
simpl
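For what it's worth, 8.2 added COPY over an arbitrary query, which removes the need for the temp table; a sketch with hypothetical table and columns:

```sql
copy (select a, b from foo where a > 0)
to stdout with csv quote as '"';
```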
Here is something that seems anomalous to me: when I set a boolean field
to FALSE performance is much better than when I set it to TRUE. Any
reason for FALSE to be favored over TRUE?
Some details:
vacuum analyze my_table;
update my_table set is_foo=FALSE where some_id = 47;
--142 rows affected, 8
6 4:00 PM
> To: George Pavlov
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] assymetry updating a boolean (=FALSE
> faster than =TRUE)
>
> "George Pavlov" <[EMAIL PROTECTED]> writes:
> > Here is something that seems anomalous to me: when I s
> Wondering how to invoke a application like diff from plpgsql? Thanks!
And don't forget that you are working with a database. Most diff-ing
uses can probably be handled by constructs like EXCEPT and INTERSECT to
say nothing of OUTER JOINs. Also, IS DISTINCT FROM is your friend if you
want a compa
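For instance, a symmetric difference of two tables with identical column lists (hypothetical names) can be written as:

```sql
-- rows in t1 but not t2, plus rows in t2 but not t1
(select * from t1 except select * from t2)
union all
(select * from t2 except select * from t1);
```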
I would like to analyze server stats offline, so I attempt to pg_dump my
production database and then pg_restore it into another database. In the
process all stats seem to be reset (they are not completely zeroed). So
in production I have a table with the following stats (from
pg_stat_all_tables as
do it?
--
George Pavlov
http://mynewplace.com
415.348.2010 desk
415.235.3180 mobile