On 04/25/2011 02:13 PM, Seb wrote:
A query such as:
SELECT 'a' || 'b' || NULL;
returns a NULL. How can I ensure I get 'ab' in the result? I'm trying
to concatenate columns and one of them might have NULL values, which I
simply want to ignore.
SELECT 'a' || 'b' || coalesce(NULL, '');
--
S
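Since `||` propagates NULL, the reliable pattern is to wrap each column that might be NULL rather than a literal. A sketch, assuming a hypothetical `people` table:

```sql
-- Wrap every nullable column; coalesce turns NULL into ''.
SELECT coalesce(first_name, '') || coalesce(middle_name, '') || coalesce(last_name, '')
FROM people;

-- PostgreSQL 9.1 and later also offer concat(), which ignores NULL arguments:
SELECT concat(first_name, middle_name, last_name) FROM people;
```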
On 11/16/2010 07:25 PM, Tom Lane wrote:
2. What's proposed as the new feature is that specific non-system
tables can be marked as unlogged, meaning that WAL entries won't
be made for changes in those tables' contents (nor their indexes'
contents). So we can't guarantee that the contents of such
On 11/16/2010 05:55 PM, Josh Berkus wrote:
If you do wish to have the data tossed out for no good reason every so
often, then there ought to be a separate attribute to control that. I'm
really having trouble seeing how such behavior would be desirable enough
to ever have the server do it for y
On 11/16/2010 05:33 PM, Josh Berkus wrote:
With the current patches, the data survives a restart just fine.
Per -hackers, that's not guaranteed.
Which is fine. If you choose to set a table to nonlogged, that implies
that you accept the risk of corrupted data, or that you don't "get it",
On 11/16/2010 05:15 PM, Tom Lane wrote:
Keep in mind that these tables are *not* going to survive any type of
backend crash. Maybe my perceptions are colored because I deal with
Postgres bugs all the time, but I think of backend crashes as pretty
common, certainly much more common than an OS-lev
On 11/16/2010 03:24 PM, Karsten Hilbert wrote:
PostgreSQL 9.1 is likely to have, as a feature, the ability to create
tables which are "unlogged", meaning that they are not added to the
transaction log, and will be truncated (emptied) on database restart.
Such tables are intended for highly volati
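For reference, the syntax that eventually shipped in 9.1 is a single keyword at creation time; a sketch with an illustrative table:

```sql
-- No WAL is written for this table, so writes are cheaper, but its
-- contents are truncated after a crash and it is not replicated.
CREATE UNLOGGED TABLE session_cache (
    session_id text PRIMARY KEY,
    payload    text,
    updated_at timestamptz DEFAULT now()
);
```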
Jonathan Vanasco wrote:
it would be that, but with greatest
Not if qty_requested_available needs to be <= qty_available...
-Glen
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Tom Lane wrote:
Jonathan Vanasco writes:
I'm trying to write a bit of logic as 1 query, but I can't seem to do
it under 2 queries.
Uh, why can't you just push that CASE expression into the sub-select?
You might have to qualify qty_requested here to make sure it comes from
cart_item, if th
Richard Huxton wrote:
Glen Parker wrote:
Richard Huxton wrote:
#3 won't work unless the other transactions have all committed by the
time you do the check. It is guaranteed to fail at some point.
If it's in an AFTER INSERT/UPDATE trigger then whatever transaction
takes you beyo
Richard Huxton wrote:
Shakil Shaikh wrote:
Hi,
Consider the following scenario:
CREATE FUNCTION test(name)
select into cnt count(id) from items where owner = name;
--suppose someone inserts during this point? then next check will
succeed when it should not.
if (cnt < 10) then
insert into
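One way to close the window Shakil describes is to serialize the count check and the insert on a common lock. A sketch, assuming hypothetical `owners` and `items` tables:

```sql
CREATE OR REPLACE FUNCTION add_item(p_owner text, p_item text) RETURNS void AS $$
DECLARE
    cnt integer;
BEGIN
    -- Lock the owner row so concurrent calls for the same owner queue up;
    -- the count below then reflects all committed inserts.
    PERFORM 1 FROM owners WHERE name = p_owner FOR UPDATE;
    SELECT count(*) INTO cnt FROM items WHERE owner = p_owner;
    IF cnt < 10 THEN
        INSERT INTO items(owner, item) VALUES (p_owner, p_item);
    END IF;
END;
$$ LANGUAGE plpgsql;
```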
Jeremy Harris wrote:
Simon Riggs wrote:
The idea of auto rebuilding indexes following recovery has already been
proposed, so is under consideration. It hasn't been proposed in relation
to the use case you mention, so that is new.
If we did as you suggest then it would speed up the base backup b
Tom Lane wrote:
Glen Parker writes:
Mainly because the idea doesn't seem to make sense unless that's part
of the package. If you don't cut index changes out of the WAL load
then the savings on the base backup alone aren't going to be all that
exciting when you consider th
Scott Marlowe wrote:
That's two people now who have called the idea "silly" without even a hint
of a supporting argument. Why would it be "silly" to improve the
performance of a highly valuable tool set without compromising its utility?
Because it's the size of the WAL files that kills most pe
Tom Lane wrote:
Glen Parker writes:
We have yet to recover from a PG disaster. We back up every night, and
never use the back ups for anything. To me, it seems perfectly
reasonable to get a quicker back up every night, with the remote
possibility of ever having to pay the price for it
Tom Lane wrote:
Glen Parker writes:
Tom Lane wrote:
... AFAICS what
Glen is proposing is to not WAL-log index changes, and with that any
crash no matter how minor would have to invalidate indexes.
Nooo...! This has nothing to do with WAL logging index changes.
How so? In any PITR-based
Joshua D. Drake wrote:
1. You could put all your indexes into a table space, this would allow
you to "try" different things with the indexes.
Most of them are, but I still have to back them up in order to have a
valid backup, because the PITR code would choke if any are missing.
2. Even tho
Tom Lane wrote:
Alvaro Herrera writes:
FWIW I don't think this idea is silly at all. It's so not-silly, in
fact, that we already have some access methods that do this if an index
cannot be recovered (I think at least GiST does it).
Well, there's a difference between "rebuild the index when i
Scott Marlowe wrote:
pg_dump is a perfectly acceptable backup tool, as is PITR. They have
different ways of operating based on what you need. Trying to make
PITR act more like pg_dump seems kind of silly to me.
pg_dump is not acceptable to us because of the potential to lose many
hours of
Grzegorz Jaśkiewicz wrote:
On Wed, Mar 11, 2009 at 3:03 PM, Joshua D. Drake wrote:
Any chance of something like this being done in the future?
I am going to go out on a limb here and say, "no".
That would probably be possible, by placing all indices in a separate
directory in data, but...
I am wondering the feasibility of having PG continue to work even if
non-essential indexes are gone or corrupt. I brought this basic concept
up at some point in the past, but now I have a different motivation, so
I want to strike up discussion about it again. This time around, I
simply don't
I am wondering the feasibility of having PG continue to work even if
non-essential indexes are gone or corrupt. I brought this basic concept
up at some point in the past, but now I have a different motivation, so
I want to strike up discussion about it again. This time around, I
simply don't wan
Tom Lane wrote:
Glen Parker writes:
For some reason I got into my head the notion that a sequence could be
dropped even if reference by a field default. This seems pretty silly now.
It was true once upon a time, but we have an enforcement mechanism now
to catch that. Did you find the
Tom Lane wrote:
What's the whole contents of the rows with refobjid matching the
sequence's OID? In particular, classid::regclass would tell you
where to look for the dependent object.
oms=# select * from pg_depend where objid = 1011680210;
classid | objid| objsubid | refclassid | refob
I can't drop a sequence. I get the error "cannot drop sequence
because other objects depend on it". I've tried to use
the pg_depend table to find out what object(s) depend on it, and have
found nothing.
I tried drop cascade in a transaction, hoping to see a list of objects
dropped in the c
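The usual way to make pg_depend readable is to cast its OID columns to the reg* types; a sketch (the sequence name is illustrative):

```sql
-- 'a' = auto dependency (e.g. an owned column default), 'i' = internal,
-- 'n' = normal. classid::regclass names the catalog holding the dependent.
SELECT classid::regclass AS dependent_catalog,
       objid,
       objsubid,
       deptype
FROM pg_depend
WHERE refobjid = 'my_seq'::regclass;
```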
Hi all!
I have set up a couple of log tables in our database to use table
partitioning. So far it looks to be working pretty well, but I ran into
a query plan issue that doesn't make sense to me at all.
We have a table "omslog", and a set of partition tables
"omslog_part_", where th
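For context, the usual 8.x partitioning recipe relies on inheritance plus CHECK constraints so the planner can prune partitions; a sketch assuming a `logged_at` timestamp column:

```sql
-- Each partition carries a CHECK constraint describing its range.
CREATE TABLE omslog_part_200808 (
    CHECK (logged_at >= '2008-08-01' AND logged_at < '2008-09-01')
) INHERITS (omslog);

SET constraint_exclusion = on;
-- EXPLAIN should now show only the matching partition being scanned.
EXPLAIN SELECT * FROM omslog WHERE logged_at = '2008-08-15';
```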
Greg Smith wrote:
On Wed, 20 Aug 2008, Glen Parker wrote:
The database will continue accumulating WAL segments it can't recycle if
the archiver keeps failing, which can cause the size of the pg_xlog
directory (often mounted into a separate, smaller partition or disk) to
increase dramati
Greg Smith wrote:
2) If there is a problem with the network drive, the slow/blocked
network write is not happening where the database is waiting for it.
This is not really a consideration, is it? I was under the impression
that no other database activity blocks waiting for the archiver. That
Rob Adams wrote:
I'm setting up WAL archiving on a Windows machine & need to copy the WAL
files to a network drive.
Is it best to give the 'postgres' user network access & archive the WAL
files directly to the network drive? Or archive the WAL files to a local
folder and then use a scheduled
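Either way, archive_command is where the copy happens; a local-first sketch for Windows (paths are illustrative):

```
# postgresql.conf -- archive to a local folder first; ship to the network
# drive with a separate scheduled task, so a network outage cannot stall
# WAL archiving itself.
archive_command = 'copy "%p" "C:\\wal_archive\\%f"'
```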
Hi all,
Is there a way in Postgres to re-define, at the per user level, the
default permission set applied to a table as it is created?
Thanks!
-Glen
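For the record, later releases (PostgreSQL 9.0+) answered exactly this; a sketch with illustrative role and schema names:

```sql
-- Tables created later by app_owner in this schema automatically
-- receive these grants.
ALTER DEFAULT PRIVILEGES FOR ROLE app_owner IN SCHEMA public
    GRANT SELECT, INSERT ON TABLES TO app_readers;
```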
Jaime Casanova wrote:
> On Fri, Aug 1, 2008 at 3:36 PM, Alex Vinogradovs
> <[EMAIL PROTECTED]> wrote:
>> It's not that I expect a lot of improvement by having non-WAL
>> indexing, it just sounds logical to me to have that, since
>> index can be re-created fast enough during recovery,
>
> and why y
Richard P. Welty wrote:
but what are the consequences of backing up a WAL file
if the archive process (probably scp in this case) is running
when the backup copy is made? the whole thing won't make it onto
tape, are there any downsides to running a recover with
an incomplete WAL file?
The WAL f
Yes, it can be done "semi-easily". Here's an example:
Ah!! Thank you. Now that someone else has done the leg work it'll be
better than "semi-easy" for me :D
-Glen
---(end of broadcast)---
TIP 6: explain analyze is your friend
Joshua D. Drake wrote:
Glen Parker wrote:
2. Autovacuum should effectively do this, assuming the clustered
table isn't being updated.
These are heavily updated tables. Plain vacuum isn't enough, and the
autovacuum facility isn't functional enough for me yet.
Can you el
Richard Huxton wrote:
Perhaps "VACUUM unclustered" or something?
A couple of thoughts:
1. You shouldn't routinely be running VACUUM FULL on a recent installation.
In my experience, some tables still must be VACUUM FULL'd from time to
time. I switched to clustering because it's much more ef
I think I know the answer to this, but...
Is there a semi-easy way vacuum all tables in a database *except* those
that are clustered?
(VACUUM [full]; CLUSTER;) is a huge waste of time and resources when you
have clustered tables. If there isn't a way to do this, can we consider
it a feature
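One semi-easy route is to generate the command list from the catalogs, skipping any table with a clustered index, and feed the output back to psql; a sketch:

```sql
-- Emits one VACUUM statement per ordinary table that has no index
-- marked as clustered (pg_index.indisclustered).
SELECT 'VACUUM ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (SELECT 1 FROM pg_index i
                  WHERE i.indrelid = c.oid AND i.indisclustered);
```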
Ben wrote:
What I read was that you have no use for NULLs, and that they're
equivalent to zero or an empty string or some other known value. Sorry
if I misunderstood that.
Equivalent, yes, because NULL doesn't usually mean UNKNOWN in this
system, just NOT ENTERED. I do still have use for NUL
It cannot already do what I want, unless you blatantly ignore what I
wrote. Putting coalesce() calls *everywhere* counts as more work, don't
you agree?
-Glen
Ben wrote:
But, why do you need an extension when the existing system can already
do what you want?
That's absolutely correct. What I want is a totally non standard
*optional* extension, recognizing that many, even if not most,
applications could benefit from it. I think there's a clean way to do it.
I would never ask for such a thing if I thought it would affect an out
of the box installa
Ben wrote:
I'm sorry maybe I missed something, but if you don't need NULLs and feel
they just add extra work, why don't you just declare all your columns to
be not null and have them default to zero or an empty string?
Because I DO need NULLS for non text fields, and I still want NULL to
comp
By the same token, some applications have no use whatsoever for the
distinction between NULL and ''. In that case, the distinction just
adds work.
I would love to see different ways to handle NULL implemented by the
server. For what I do, NULL could always compare equal to zero and ''.
I h
Tarring hot database files still gives me the willies. But then, I
wear belt and suspenders.
I understand. A list of "file changed while we read it" errors is just
a little unnerving at first!
I did quite a few end to end backup/PITR tests, and no matter what I did
to the DB during backup,
Open-database file-level backups might work with PITR, but I
wouldn't trust it.
IME, it does work, and very well. Inconsistencies in the heap files are
trumped by the WAL archive during recovery.
-Glen
Alvaro Herrera wrote:
No, we currently don't expose the number of dead tuples which autovacuum
uses.
Patch submitted :-)
Joris Dobbelsteen wrote:
Alvaro Herrera wrote:
[snip]
I would go one step further and suggest that when the
maintenance window system gets completed we give it a default
setup of midnight to 6AM or something like that.
Do not do that...
I agree. There is nothing very special about that
Brandon Aiken wrote:
You're saying that the dirtyness of a table is proportional to when you
plan on vacuuming it next.
The dirtiness of a table should most certainly have an effect on when it
gets vacuumed in relation to other tables. If dirtiness could be rated,
then the list of vacuumable
Alvaro Herrera wrote:
Glen Parker wrote:
I'd also like to use some of this information to issue reindex and
cluster commands only when they're needed. In fact, on days when I
cluster, there's no need whatsoever to also vacuum those tables. This
is something that autovacuum
The idea that has been discussed in the past is the concept of
maintenance windows, that is for any given period of time, you can set
different vacuum thresholds. So at night you might make the thresholds
very low so that nearly everything gets vacuumed but during the day you
might only vacuum
I wonder if the simple solution is to just have a cron script modify
postgresql.conf and pg_ctl reload. That seems very flexible, or have
two postgresql.conf files and move them into place via cron.
I'd still prefer to vacuum on demand actually. Rather than hope that
autovacuum hit all the na
Matthew O'Connor wrote:
No, how dirty a table is isn't subjective; what is subjective is the
question "Does it need to be vacuumed?". A table that is 1% dirty (to use
your term) probably doesn't *need* to be vacuumed, but you might choose
to vacuum it anyway; at least you might at night when the system
Alvaro Herrera wrote:
4) Expose all information used by autovacuum to form its decisions.
You could argue that this is already there, although not easy to get at
I suppose. But all table threshold settings are available either in the
pg_autovacuum relation or the defaults via GUC variables, th
Erik Jones wrote:
Matthew O'Connor wrote:
Glen Parker wrote:
If it isn't there somewhere already, I would ask to add:
Expose a very easy way to discover autovacuum's opinion about a
particular table, for example "table_needs_vacuum(oid)", ignoring any
time constra
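A rough approximation of that "opinion" can be computed from the stats collector (8.3+, where n_dead_tup is exposed), using the stock thresholds (autovacuum_vacuum_threshold = 50, scale factor = 0.2) and ignoring per-table pg_autovacuum overrides; a sketch:

```sql
SELECT s.relname,
       s.n_dead_tup,
       50 + 0.2 * c.reltuples                  AS vacuum_threshold,
       s.n_dead_tup > 50 + 0.2 * c.reltuples   AS needs_vacuum
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid;
```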
1) Allow multiple "autovacuum workers": Currently Autovacuum is only
capable of ordering one vacuum command at a time, for most work loads
this is sufficient but falls down when a hot (very actively updated
table) goes unvacuumed for a long period of time because a large table
is currently bei
Tom Lane wrote:
Glen Parker <[EMAIL PROTECTED]> writes:
I am still trying to roll my own auto vacuum thingy.
Um, is this purely for hack value?
Don't be silly ;-)
Honestly I sort of thought the problem was fairly obvious.
What is it that you find inadequate
about regular auto
Alvaro Herrera wrote:
Glen Parker wrote:
The trouble now is, I don't see how to reset the statistics. My
assumption was that vacuum did it, but that appears to be false. How
does autovacuum do it? Can I do it with SQL?
Huh, reset what statistics? Autovacuum does not reset any
Hi all,
I am still trying to roll my own auto vacuum thingy. The goal is to
vacuum on demand in one step just like the old days, but not hit the
tables that never change (we have a lot). The idea now is to use a
combination of SQL and shell scripts to duplicate some of what auto
vacuum does
Vlad wrote:
which means that earlier suggestion you've made - to host temp dir on
a separate hd linked to the temp dir under the data tree, it's not quite a
straightforward solution. I assume that link will be erased by postgres
every time it finishes operating with the temp directory.
Why would the postmaster
I am continuing to refine my PITR backup script, and there are some
directories I do not know what to do with. First off, here are the
directories/files I am currently ignoring...
pg_xlog (ours is a symlink, so I ignore it altogether)
pg_log/*
base/*/pgsql_tmp/*
postmaster.opts
postmaster.pid
Tom Lane wrote:
Temp files are created in $PGDATA/base/yourdb/pgsql_tmp/. If you have a
mind to, you can replace that subdirectory with a symlink to a (suitably
secure) directory elsewhere.
Tom, is it safe to assume that the contents of the temp directory do not
need to persist across Postgre
dev wrote:
Hello friends,
we have some strange problem, postmaster (pg 8.1 /win32)
suddenly shutdown because of "no reason".
The interesting thing is that this occurs always at
almost same time (between 0.00 and 0.30h), in that time period is
running system backup (Veristas backupexec agent) - st
You can probably make this work if you don't issue any CREATE TABLESPACE
commands while PITR logging is active, but you'll want to test your
procedures pretty carefully.
That's what I thought, and after your message, I went ahead with it and
had no problems. Thx, Tom.
-Glen
Gurus,
I hope I can make this clear somehow... Anyway... This all involves PG
8.1.4 on a 64-bit FC5 box.
Select version() says "PostgreSQL 8.1.4 on x86_64-redhat-linux-gnu,
compiled by GCC x86_64-redhat-linux-gcc (GCC) 4.1.0 20060304 (Red Hat
4.1.0-3)".
I guess the best question I can se
Martijn van Oosterhout wrote:
On Fri, Dec 08, 2006 at 12:49:30PM -0800, Glen Parker wrote:
I'd like to see a general way to take indexes off line without actually
losing their definitions. For example, something like "ALTER TABLE [EN
| DIS] ABLE INDEXES", "ALTER INDEX [
restore as low as possible.
So, it would be great if pg_dump could accept some sort of argument to
make it simply not dump certain types of objects. Indexes, views,
functions, etc.
Thx for considering :-)
-Glen Parker
indexes, unless the index
was disabled explicitly? And then, refuse to rebuild/re-enable the
index as normal if non-unique values found?
Thx for considering :-)
-Glen Parker
Scott Marlowe wrote:
On Mon, 2006-12-04 at 09:56, Schwenker, Stephen wrote:
I'm using version 7.4.2 on solaris.
A few points:
4: Look at migrating to 8.1 or even 8.2 (due out real soon now). There
have been a lot of advances in pg since 7.4, and the upgrade is pretty
painless as long as the
Jeff Davis wrote:
On Mon, 2006-11-27 at 19:13 +0200, Andrus wrote:
I need to create nightly backups to separate computer over internet from 8.1
server
I tried to run pg_dump to make backup using command
"..\pg_dump\pg_dump.exe" -i -Z9 -b -v -f "C:\061127 mybackup.backup" -F c -h
81.50.12.18 -
Hi all, I apologize for asking a Linux question here in the postgres
list, but I figure somebody here must know the answer, or a better place
to ask it.
I am building an x86_64 postgres server (see, it isn't JUST about Linux
:-) with SATA drives and I want to use software RAID for my postgres
Matthew T. O'Connor wrote:
Glen Parker wrote:
I would like a way to run the autovacuum daemon on demand
periodically. Every night at 2 AM, for example.
Anybody know if this is possible? If not, it's a feature request :-)
Autovacuum can be enabled / disabled on the fly usi
Richard Broersma Jr wrote:
Cron yes, vacuumdb no. I would like to AUTO vacuum periodically.
Just curious, what is auto_vacuum going to give that vacuumdb wont?
Hmm, I thought this would be obvious. Vacuumdb just issues "VACUUM"
commands.
I would like to get the functionality of autova
Joshua D. Drake wrote:
On Thu, 2006-11-09 at 17:00 -0800, Glen Parker wrote:
I would like a way to run the autovacuum daemon on demand periodically.
Every night at 2 AM, for example.
Anybody know if this is possible? If not, it's a feature request :-)
use vacuumdb and cron.
Cro
I would like a way to run the autovacuum daemon on demand periodically.
Every night at 2 AM, for example.
Anybody know if this is possible? If not, it's a feature request :-)
-Glen
Wes Sheldahl wrote:
if you did have a cron
job run to check, you would probably just have it set a boolean field on
expired records or something of that sort, and run it a little after
midnight, at the start of each day, assuming durations were always being
measured in days.
Exactly. You fl
louis gonzales wrote:
Fine so let's say when the instructor creates the user profile, this can
trigger the creation of a sequence, say, but IF the instructor doesn't
visit that student's profile for a month, I want something that,
independent of any other action, will be either incrementing or
louis gonzales wrote:
Hey Brian,
Yeah I had considered this, using cron, I just feel like that is too dirty.
Actually I didn't see Andreas' post, can someone forward that?
I'm running this application on Solaris 9. Ultimately what I want to
know is, is there something that is internal to post
Tom Lane wrote:
Glen Parker <[EMAIL PROTECTED]> writes:
How much of a TOAST'd field is actually stored in the main heap table?
Is there a way to configure that amount?
A pushed-out-of-line value is replaced by a 20-byte pointer structure.
There's no such thing as partially o
Tom Lane wrote:
Glen Parker <[EMAIL PROTECTED]> writes:
Why would very large text values affect the speed of a seq scan that
does not actually evaluate those values?
I'd actually suppose it's the smaller values (up to a few hundred bytes)
that impact this the most. Really w
very large text values affect the speed of a seq scan that
does not actually evaluate those values?
Oh, PG 8.1.3 on FC 5, 64-bit.
TIA
-Glen Parker
My experience has been that RAID5 + Postgres = counterproductive. I've
been happier with a 4-way software mirror (Linux).
In the past, Postgres has been very drive seek intensive, because of the
way index scans were performed. RAID5 does *NOTHING* to help this. Raw
disk throughput is essen
Tom Lane wrote:
What ever happened to grouped heap reads, i.e. building a list of tuples
from the index, sorting in heap order, then reading the heap in a batch?
Done in 8.1. I'm uncertain whether Scott knows about that ...
That's GREAT news! Is that the "Bitmap Scan" item in the what's n
Tom Lane wrote:
David Scott <[EMAIL PROTECTED]> writes:
Is the additional overhead of keeping full tuple visibility
information inside of the index so odious to the Postgres community as
to prevent a patch with this solution from being applied back to the
head?
This has been discussed an
We're still on 7.4 (sorry, probly should have mentioned that). Does
that documentation apply to the 7.4 series as well?
Maybe an easier question is, can we expect a TOAST performance increase
when upgrading to 8.1?
Thx again...
-Glen
Tom Lane wrote:
Glen Parker <[EMAIL PROTECTED]
We are having some performance issues we think may be related to large
text field values, and many records. So, a couple questions...
1) Are text values always stored in the companion text file, or only if
they exceed a certain size? What size?
2) When the query executer reads tuples during
hat all about? Has anybody seen this before?
Is there an SQL server that might actually recognize that?
(I didn't think version information would be too important for this, but
I can gather all that info if it's really needed...)
TIA
Glen Parker
[EMAIL PROTECTED]
> Glen - can you confirm if this happens only on terminal server in
> "Application Mode" or if it also affects "Remote Administration Mode"?
Can't confirm, I can only tell you that this particular one is in
application mode.
Glen
Did you do an explain? I'd guess the index isn't being used because of
skewed statistics.
> i have a btree index on col1 in table1. The column has either values
> 1,2,3, or 4. 4 does not appear that much in the table (only 5 times).
> there are about 20 million rows in the table. when i do a "
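If the planner won't use the full index for the rare value, a partial index is a cheap hedge; a sketch using the names from the question:

```sql
-- With ~20 million rows and only a handful where col1 = 4, this tiny
-- index makes the rare value cheap to find regardless of the statistics.
CREATE INDEX table1_col1_rare ON table1 (col1) WHERE col1 = 4;

-- Verify what the planner actually chose:
EXPLAIN ANALYZE SELECT * FROM table1 WHERE col1 = 4;
```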
Since pg_dump will be allowing multiple -t parameters for 8.0, here
is a related feature request.
A similar option (allowing multiples also) to EXCLUDE tables, so we can do a
dump of the entire database minus a few tables.
Glen Parker
[EMAIL PROTECTED]
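That exclusion option did land later, in 8.2, as -T/--exclude-table, repeatable like -t; a sketch (table names are illustrative, and pattern support varies by version):

```shell
# Dump everything except the big log tables.
pg_dump -T big_log_table -T 'audit_*' mydb > mydb_minus_logs.sql
```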
oat, field2 decimal);
CREATE TABLE DB=# insert into test2 values(1,1); INSERT
3229 1 DB=# update test2 set field1=field2, field2=field1; UPDATE 1
So the coercions work fine most of the time. I still say
it's broken :-)
Glen Parker
$ where
these queries are concerned. I'm considering Postgres to be broken. Am I
wrong? Is there an actual reason for this behavior?
Thx again,
Glen Parker
[EMAIL PROTECTED]
My favorite part of the article was way way down at the end, where it
briefly lists a few areas where postgres is still superior, minor little
details such as full joins, sub-selects, views, unions, triggers,
constraints, cursors... silly stuff like that :-)
Glen
hat can be run by a privileged user. In my experience, with a properly
designed timeout system, stale locks rarely get in the way; with reliable
client-side software, they don't even occur very often.
Glen Parker
[EMAIL PROTECTED]
ide it, but if it is not
overridden it remains valid for the owner for one week, after which time it
is removed altogether.
Stuff like that is, IMHO, far beyond the scope of an SQL server.
Glen Parker
[EMAIL PROTECTED]
> I prefer Jan's solution: on first attempt to change, acquire a user