Thanks for the suggestion of casting the hex string to uuid. That works.
I tried the standard syntax using "CAST":
VALUES ( CAST( $1 AS uuid) )
--Basil Bourque
> How about:
> CREATE OR REPLACE FUNCTION public.uuid_write_(character varying)
> RETURNS boolean
> LANGUAGE plpgsql
> AS $function$
>
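For anyone searching the archives later, here is a minimal sketch of the whole pattern being discussed; the table and column names (my_table, uuid_col) are placeholders of mine, not from the original function:

CREATE OR REPLACE FUNCTION public.uuid_write_(character varying)
RETURNS boolean
LANGUAGE plpgsql
AS $function$
BEGIN
    -- The explicit cast converts the hex-string argument to the uuid type.
    INSERT INTO my_table ( uuid_col )
    VALUES ( CAST( $1 AS uuid ) );
    RETURN true;
END;
$function$;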
On 15/04/2011 8:07 AM, Adrian Klaver wrote:
"EXTENDED allows both compression and out-of-line storage. This is the
default for most TOAST-able data types. Compression will be attempted
first, then out-of-line storage if the row is still too big."
Good point. I was unclear; thanks for pointi
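For reference, per-column storage can be inspected and changed like this (a sketch with placeholder names mytable/payload; the attstorage codes are 'p' plain, 'e' external, 'm' main, 'x' extended):

SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'mytable'::regclass AND attnum > 0;

ALTER TABLE mytable ALTER COLUMN payload SET STORAGE EXTENDED;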
On Thu, Apr 14, 2011 at 6:46 PM, mark wrote:
>
>
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
>> ow...@postgresql.org] On Behalf Of Yang Zhang
>> Sent: Thursday, April 14, 2011 6:51 PM
>> To: Adrian Klaver
>> Cc: pgsql-general@postgresql.org; Cra
On Thursday, April 14, 2011 7:46:34 pm Yang Zhang wrote:
> On Thu, Apr 14, 2011 at 7:42 PM, Adrian Klaver
wrote:
> > Granted not all data types are TOASTable. Are you looking for something
> > more aggressive than that?
>
> Yes.
>
> http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-spa
>> Ok I removed the geometry column from the cursor query within the function
>> and the session still runs out of memory. I'm still seeing the same error
>> message as well:
>> PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440
>> used
>> ExecutorState: 122880 to
On Thu, Apr 14, 2011 at 7:42 PM, Adrian Klaver wrote:
> On Thursday, April 14, 2011 5:51:21 pm Yang Zhang wrote:
>> >
>> > adrian.kla...@gmail.com
>>
>> Already know about TOAST. I could've been clearer, but that's not the
>> same as the block-/page-level compression I was referring to.
On Thursday, April 14, 2011 5:51:21 pm Yang Zhang wrote:
> >
> > adrian.kla...@gmail.com
>
> Already know about TOAST. I could've been clearer, but that's not the
> same as the block-/page-level compression I was referring to.
I am obviously missing something. The TOAST mechanism is designed t
Hi folks,
We have a current DB on PG 8.4.7 on Ubuntu Server 10.04 LTS.
This DB used to only have one schema and that schema was replaced
on a regular schedule using a pg_dump/pg_restore process. The old
schema was renamed to another name and the incoming schema and data
replaced it in the DB. If
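For readers skimming the archives, the rotation described above looks roughly like this; schema and database names are placeholders, not the originals:

ALTER SCHEMA app RENAME TO app_old;
-- then restore the incoming copy of the schema, e.g.
--   pg_restore --dbname=mydb --schema=app latest.dump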
Aleksey Tsalolikhin writes:
> Hi. I use the following query (from
> http://wiki.postgresql.org/wiki/Lock_Monitoring)
> to monitor locks; and I've got an ExclusiveLock that does not have a relation
> name associated with it. What is locked with the Exclusive Lock in this case,
> please? (it's be
On Thursday, April 14, 2011 6:43:21 pm Basil Bourque wrote:
> If I pass the hex string representation of a UUID to a PL/pgSQL function as
> a varchar, that value cannot be used directly when writing to a row whose
> column data type is "uuid", in Postgres 9.0.x. Normally Postgres
> automatically co
Whoops… Typo in the Subject line. Should have been "UUID cannot" rather than
"UUID can".
UUID cannot be written to row when passed as parameter/argument to a PL/pgSQL
function. Feature or bug?
^^^
--Basil Bourque
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Yang Zhang
> Sent: Thursday, April 14, 2011 6:51 PM
> To: Adrian Klaver
> Cc: pgsql-general@postgresql.org; Craig Ringer
> Subject: Re: [GENERAL] Compression
>
> On
If I pass the hex string representation of a UUID to a PL/pgSQL function as a
varchar, that value cannot be used directly when writing to a row whose column
data type is "uuid", in Postgres 9.0.x. Normally Postgres automatically
converts a hex string to a 128-bit UUID value and back again.
Is
Hi. I use the following query (from
http://wiki.postgresql.org/wiki/Lock_Monitoring)
to monitor locks; and I've got an ExclusiveLock that does not have a relation
name associated with it. What is locked with the Exclusive Lock in this case,
please? (it's between "d" and "e" tables below)
psql -
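A hedged guess pending the full output: a lock row with no relation is usually a transaction's lock on its own transactionid or virtualxid rather than on a table. Something like this lists them:

SELECT locktype, relation::regclass, virtualxid, transactionid, mode, granted, pid
FROM pg_locks
WHERE relation IS NULL;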
On Thu, Apr 14, 2011 at 5:07 PM, Adrian Klaver wrote:
> On Thursday, April 14, 2011 4:50:44 pm Craig Ringer wrote:
>> On 15/04/2011 7:01 AM, Yang Zhang wrote:
>> > Is there any effort to add compression into PG, a la MySQL's
>> > row_format=compressed or HBase's LZO block compression?
>>
On Thursday, April 14, 2011 4:50:44 pm Craig Ringer wrote:
> On 15/04/2011 7:01 AM, Yang Zhang wrote:
> > Is there any effort to add compression into PG, a la MySQL's
> > row_format=compressed or HBase's LZO block compression?
>
> There's no row compression, but as mentioned by others there is
> o
It is not easy to follow...but I'll toss out some thoughts anyway.
I would generally not de-normalize the data model in order to make
referential integrity easier. If your "requirements" are that complex then
writing a wrapper around the insert/update layer for the tables in question
is a better
On 04/14/11 4:28 PM, Perry Smith wrote:
I hope this is reasonably easy to follow. I'm looking forward to your thoughts
and comments.
at least on first glance, that looks like object oriented methodology,
not relational.
On 15/04/2011 7:01 AM, Yang Zhang wrote:
Is there any effort to add compression into PG, a la MySQL's
row_format=compressed or HBase's LZO block compression?
There's no row compression, but as mentioned by others there is
out-of-line compression of large values using TOAST.
Row compression w
On Wed, 2011-04-13 at 14:42 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > 2011/4/13 Tom Lane :
> >> Short answer is to test the case you have in mind and see.
>
> > That's the long answer, not least because the absence of a failure in
> > a test is not conclusive proof that it won't fail at so
Based upon my reading of wikipedia, the solution I think I want to implement is
not in 2nd normal form. I'm wondering if anyone on this list has suggestions,
etc.
I have a table called containers where object A will contain object B. There
is a middle column that will describe the type of ass
On Thursday, April 14, 2011 4:01:54 pm Yang Zhang wrote:
> Is there any effort to add compression into PG, a la MySQL's
> row_format=compressed or HBase's LZO block compression?
TOAST?
http://www.postgresql.org/docs/9.0/interactive/storage-toast.html
--
Adrian Klaver
adrian.kla...@gmail.com
--
Is there any effort to add compression into PG, a la MySQL's
row_format=compressed or HBase's LZO block compression?
check out this link. I think it will be what you are looking for.
http://errorbank.blogspot.com/2011/03/list-all-foreign-keys-references-for.html
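The usual information_schema approach is roughly the sketch below; note that constraint names are not guaranteed unique across tables, so the joins may need to include the table name as well:

SELECT tc.table_name, kcu.column_name,
       ccu.table_name  AS foreign_table_name,
       ccu.column_name AS foreign_column_name,
       tc.constraint_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON kcu.constraint_name = tc.constraint_name
 AND kcu.constraint_schema = tc.constraint_schema
JOIN information_schema.constraint_column_usage ccu
  ON ccu.constraint_name = tc.constraint_name
 AND ccu.constraint_schema = tc.constraint_schema
WHERE tc.constraint_type = 'FOREIGN KEY';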
Wim Bertels writes:
> is there a function one could use to filter out the specific tokes?
In 8.4 and up, pg_get_keywords() might help.
regards, tom lane
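For example (pg_get_keywords() returns word, catcode and catdesc; catcode 'R' means reserved, 'U' unreserved, 'C' can be a column name, 'T' can be a type or function name):

SELECT word, catcode, catdesc
FROM pg_get_keywords()
WHERE catcode = 'R';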
On Thu, Apr 14, 2011 at 1:14 PM, Greg Smith wrote:
> And the idea that a UPS is sufficient to protect against that even happening
> is
> wildly optimistic.
Note that the real danger in relying on a UPS is that most power
conditioning / UPS setups tend to fail in total, not in parts. The
two t
Everything was fine, the reordered script fixed everything. Thanks all.
Regards,
Carl
2011/4/14 Carl von Clausewitz
> OK, thanks for the information. I made the mistake; I will change the
> script, but first I will try what Vidhya suggested. Let me see what
> happens.
>
> Regards,
> Carl
>
> 201
On Thu, Apr 14, 2011 at 03:00:07PM -0400, Jerry Sievers wrote:
- David Kerr writes:
-
- > Howdy,
- >
- > Is there a doc somewhere that has a formula for how much memory PG
- > backend process will use?
- >
- > I'm looking to get something like total_mem = max_connections * (
- > work_mem + temp_
Henry C. wrote:
I believe this perception that SSDs are less "safe" than failure-prone
mechanical hard drives will eventually change.
Only because the manufacturers are starting to care about write
durability enough to include the right hardware for it. Many of them
are less safe right no
David Kerr writes:
> Howdy,
>
> Is there a doc somewhere that has a formula for how much memory PG
> backend process will use?
>
> I'm looking to get something like total_mem = max_connections * (
> work_mem + temp_buffers ) // I know it's more complicated than that,
> which is why I'm asking =)
On Thu, Apr 14, 2011 at 12:27:34PM -0600, Scott Marlowe wrote:
> > That's what a UPS and genset are for. Who writes critical stuff to *any*
> > drive without power backup?
>
> Because power supply systems with UPS never fail.
Right, there's obviously a trade-off here. Some of this has to do
wit
On 14.4.2011 10:01, Craig Ringer wrote:
> That's a limitation of streaming replication. It's a lot like the issue
> Oracle has with running out of undo or redo log space. Essentially, my
> understanding is that the hot standby server cannot replay WAL archives
> to keep up with the master's ch
Howdy,
Is there a doc somewhere that has a formula for how much memory PG backend
process will use?
I'm looking to get something like total_mem = max_connections * ( work_mem +
temp_buffers )
// I know it's more complicated than that, which is why I'm asking =)
Something similar to Table 17-2
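As a rough illustration of that formula (and it is only a rough upper bound, since a single complex query can use several multiples of work_mem, and shared_buffers and maintenance_work_mem sit on top of it): with the stock defaults of max_connections = 100, work_mem = 1MB and temp_buffers = 8MB, it works out to about 900 MB beyond shared_buffers. The current values can be pulled with:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_connections', 'work_mem', 'temp_buffers',
               'maintenance_work_mem', 'shared_buffers');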
On Thu, Apr 14, 2011 at 3:40 AM, Henry C. wrote:
> On Thu, April 14, 2011 10:51, Craig Ringer wrote:
>> On 14/04/2011 4:35 PM, Henry C. wrote:
>>> There is no going back. Hint: don't use cheap SSDs - cough up and use
>>> Intel.
>>
>> The server-grade SLC stuff with a supercap, I hope,
Hello,
according to
http://www.postgresql.org/docs/8.4/interactive/sql-syntax-lexical.htm
A token can be a key word, an identifier, a quoted identifier, a literal
(or constant), or a special character symbol.
I suppose these different tokens are used by the internal parser?
So my question is,
is
After a glowing review at AnandTech (including DB benchmarks!) I decided to
spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost about $300
with shipping, etc and at this point, won't be putting any
Considering that I sprang for 96 GB of ECC RAM last spring for around $5000,
eve
OK, thanks for the information. I made the mistake; I will change the script,
but first I will try what Vidhya suggested. Let me see what happens.
Regards,
Carl
2011/4/14 Tom Lane
> Carl von Clausewitz writes:
> > Maintenance:
> > #!/bin/sh
> > date >> /var/log/postgresql_maintenance.log
> > /u
Carl von Clausewitz writes:
> Maintenance:
> #!/bin/sh
> date >> /var/log/postgresql_maintenance.log
> /usr/local/bin/reindexdb --all --username=cvc >>
> /var/log/postgresql_maintenance.log
> echo "Reindex done" >> /var/log/postgresql_maintenance.log
> /usr/local/bin/vacuumdb --all --full --analyz
Hi,
see the two scripts attached. First one is the postgres_maintenance.sh, and
the second is the postgres_backup.sh. I've attached it, and copied, because
of the antivirus filters :-)
regards,
Carl
Maintenance:
#!/bin/sh
date >> /var/log/postgresql_maintenance.log
/usr/local/bin/reindexdb --all
Gipsz Jakab writes:
> Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
> maintenance script has started (vacuumdb --all --full --analyze), and
> stopped with this error:
> sqlstate=23505ERROR: duplicate key value violates unique constraint
> "pg_index_indexrelid_index"
> sqlsta
On April 14, 2011 08:10:47 am Dan Biagini wrote:
> I suspect that it may have occurred during a filesystem level backup
> (ie pg_start_backup(), tar -czf..., pg_stop_backup()), as I performed
> a backup and moved the database to a different system. After
> restoring the files and starting postgre
I have a 9.0.1 database with two corrupted tables (one table has 20
rows, the other 140). The tables *seem* fine for read/select
operations, but updating certain rows in the table produce error
messages:
update media set updated_at = now() at time zone 'UTC';
ERROR: could not read block 2 in fil
In article <20110413163120.gu24...@shinkuro.com>,
Andrew Sullivan writes:
> On Wed, Apr 13, 2011 at 09:21:20AM -0700, Gauthier, Dave wrote:
>> Is there a way to add a default value definition to an existing column?
>> Something like an "alter table... alter column... default 'foo'".
> ALTER TA
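(For the archives, the completed form of that command is presumably the standard one below, with placeholder names:)

ALTER TABLE mytable ALTER COLUMN mycolumn SET DEFAULT 'foo';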
At 06:07 PM 4/14/2011, Radosław Smogura wrote:
One thing you should care about is so-called
write endurance - the number of writes to one memory
region before it is destroyed - if your SSD
drive does not have transparent allocation, then
you may destroy it really fast, because write of
ea
Tom Lane-2 wrote:
>
>
> I don't think it's a leak, exactly: it's just that the "relcache" entry
> for each one of these views occupies about 100K. A backend that touches
> N of the views is going to need about N*100K in relcache space. I can't
> get terribly excited about that. Trying to redu
Merlin Moncure-2 wrote:
>
>
> ... I've coded a
> lot of multi schema designs and they tend to either go the one
> session/schema route or the connection pooling route. Either way,
> cache memory usage tends to work itself out pretty well (it's never
> been a problem for me before at least). I
Gipsz,
We got this error too. What we did was run VACUUM ANALYZE VERBOSE and
after that reindexed the DB, and we didn't see the error cropping up again.
Regards
Vidhya
On Thu, Apr 14, 2011 at 5:26 PM, Gipsz Jakab wrote:
> Dear List,
>
> Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
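A sketch of the steps Vidhya describes, with a placeholder database name (REINDEX DATABASE must be run while connected to that database):

VACUUM ANALYZE VERBOSE;
REINDEX DATABASE mydb;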
On Thu, Apr 14, 2011 at 12:19 AM, Benjamin Smith
wrote:
> I was wondering if anybody here could comment on the benefits of SSD in
> similar, high-demand rich schema situations?
>
>
For the last several months, I've been using Texas Memory Systems RamSAN 620
drives on my main DB servers. Having ne
Ok, thanks, I'll try at night.
Regards,
Carl
2011/4/14 Vidhya Bondre
> Gipsz,
>
> We got this error too. What we did was run VACUUM ANALYZE VERBOSE and
> after that reindexed the DB, and we didn't see the error cropping up again.
>
> Regards
> Vidhya
>
> On Thu, Apr 14, 2011 at 5:26 PM, Gipsz Jakab wrote
Dear List,
Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
maintenance script has started (vacuumdb --all --full --analyze), and
stopped with this error:
sqlstate=23505ERROR: duplicate key value violates unique constraint
"pg_index_indexrelid_index"
sqlstate=23505DETAIL: Key
On 14/04/2011 5:40 PM, Henry C. wrote:
The server-grade SLC stuff with a supercap, I hope, not the scary
consumer-oriented MLC "pray you weren't writing anything during power-loss"
devices?
That's what a UPS and genset are for. Who writes critical stuff to *any*
drive without power backup?
> I believe this perception that SSDs are less "safe" than failure-prone
> mechanical hard drives will eventually change.
By "safe" I mean they won't corrupt data in case of crash of the machine.
> On 14/04/2011 2:15 AM, Henry C. wrote:
> Nope, it's working as designed I'm afraid.
>
> There are params you can tune to control how far slaves are allowed to
> get behind the master before cancelling queries...
Thanks Craig - this dawned on me eventually.
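For anyone hitting the same cancellations, the relevant knobs in 9.0 look roughly like this in postgresql.conf on the standby (values are illustrative, not recommendations):

max_standby_streaming_delay = 300s    # how long WAL replay may wait for conflicting queries
max_standby_archive_delay = 300s      # the same, when replaying from archived WAL
# On the master, vacuum_defer_cleanup_age can also reduce such conflicts.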
On Thu, 14 Apr 2011 11:46:12 +0200, Henry C. wrote:
On Thu, April 14, 2011 11:30, Leonardo Francalanci wrote:
have a look at
http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
It looks like those are "safe" to use with a db, and aren't that
expensive.
Th
On 14/04/2011 11:40, Henry C. wrote:
You have a valid point about using SLC if that's what you need though.
However, MLC works just fine provided you stick them into RAID1. In fact, we
use a bunch of them in RAID0 on top of RAID1.
AFAIK, you won't have TRIM support on RAID-arrayed SSDs.
Tha
On Thu, April 14, 2011 11:30, Leonardo Francalanci wrote:
> have a look at
> http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
>
> It looks like those are "safe" to use with a db, and aren't that expensive.
The new SSDs look great. From our experience,
On 14/04/2011 10:54, John R Pierce wrote:
On 04/14/11 1:35 AM, Henry C. wrote:
Hint: don't use cheap SSDs - cough up and use Intel.
aren't most of the Intel SSD's still MLC, and still have performance and
reliability issues with sustained small block random writes such as are
generated by
On Thu, April 14, 2011 10:51, Craig Ringer wrote:
> On 14/04/2011 4:35 PM, Henry C. wrote:
>> There is no going back. Hint: don't use cheap SSDs - cough up and use
>> Intel.
>
> The server-grade SLC stuff with a supercap, I hope, not the scary
> consumer-oriented MLC "pray you weren't writ
have a look at
http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
It looks like those are "safe" to use with a db, and aren't that expensive.
On Thu, 14 Apr 2011 16:57:01 +0800, Craig Ringer wrote:
On 14/04/2011 2:06 PM, Radosław Smogura wrote:
Hello,
I have small crash reporting code, which I use during mmap-ing
database. After
last merge with master I got
TRAP: FailedAssertion("!(slot > 0 && slot <=
PMSignalState->num_child_flag
On 14/04/2011 2:06 PM, Radosław Smogura wrote:
Hello,
I have small crash reporting code, which I use during mmap-ing database. After
last merge with master I got
TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)",
File: "pmsignal.c", Line: 227)
LOG: server process (PI
On 04/14/11 1:35 AM, Henry C. wrote:
Hint: don't use cheap SSDs - cough up and use Intel.
aren't most of the Intel SSD's still MLC, and still have performance and
reliability issues with sustained small block random writes such as are
generated by database servers? the enterprise grade SLC
On 14/04/2011 4:35 PM, Henry C. wrote:
There is no going back. Hint: don't use cheap SSDs - cough up and use Intel.
The server-grade SLC stuff with a supercap, I hope, not the scary
consumer-oriented MLC "pray you weren't writing anything during
power-loss" devices?
--
Craig Ringer
Tech-
Hi David,
I had just figured out the sub-query requirement when you replied. So
now I've got this working:
SELECT * FROM (
SELECT testname,os,arch,build_type,branch,current_status,last_update,rank()
OVER
(PARTITION BY testname,os,arch,branch,build_type ORDER BY last_update
DESC) AS myrank
FROM myt
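For completeness, the whole pattern looks like this (the table name is a placeholder, since the original is cut off above):

SELECT *
FROM (
    SELECT testname, os, arch, build_type, branch, current_status, last_update,
           rank() OVER (PARTITION BY testname, os, arch, branch, build_type
                        ORDER BY last_update DESC) AS myrank
    FROM mytests
) ranked
WHERE myrank = 1;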
Hi!
Windows 7x64, PG9.0, PGAdmin 1.12.1.
First I sent this problem to PGADMIN support list, you can see it, but
I repeat the description.
http://archives.postgresql.org/pgadmin-support/2011-04/msg00012.php
I tried to fill up a text field with all ANSI characters
chr(1)..chr(255). Except 0 becaus
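One way to build such a test string server-side on 9.0 (skipping chr(0), since NUL is not allowed in text values) would be something like:

SELECT string_agg(chr(i), '' ORDER BY i) AS all_chars
FROM generate_series(1, 255) AS s(i);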
On Thu, April 14, 2011 06:19, Benjamin Smith wrote:
> The speed benefits of SSDs as benchmarked would seem incredible. Can anybody
> comment on SSD benefits and problems in real life use?
>
> I maintain some 100 databases on 3 servers, with 32 GB of RAM each and an
> extremely rich, complex schema
> However, a SELECT eventually fails with "canceling statement due to conflict
> with recovery".
>
> Where else can I check, or what else can I do to determine what the problem
> is?
...or maybe there _is_ no problem.
select count(*) from big_table; -- will fail because it's long-lived and rows
a
The postmaster.pid file shows the pid of the postmaster. The file
shouldn't exist when the postmaster isn't running, so it should be
safe to delete. Its presence does indicate that postgres was
improperly shut down, though.
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development
On 04/14/11 1:01 AM, Daron Ryan wrote:
Problem: Cannot start Postgres
Platform: Postgres 8.3, Windows Vista Home
Error Message: pg_ctl: invalid data in PID file "C:/Program
Files/PostgreSQL/8.3/data/postmaster.pid"
I run postgres on Windows Vista Home Edition. It normally runs as a
service b
On 14/04/2011 2:15 AM, Henry C. wrote:
Greets,
Pg 9.0.3
This must be due to my own misconfiguration, so apologies if I'm not seeing
the obvious - I've noticed that my slave seems to be stuck in a permanent
startup/recovery state.
That's what warm- and hot-standby slaves are. They're continuou
Problem: Cannot start Postgres
Platform: Postgres 8.3, Windows Vista Home
Error Message: pg_ctl: invalid data in PID file "C:/Program
Files/PostgreSQL/8.3/data/postmaster.pid"
I run postgres on Windows Vista Home Edition. It normally runs as a
service but is not starting. I created a command
On Wed, April 13, 2011 20:15, Henry C. wrote:
> If I try and execute a long-lived SQL query on the slave, it eventually fails
> with "canceling statement due to conflict with recovery". Replication is
> definitely working (DML actions are propagated to the slave), but something
> is amiss.
Let m