> I just have to read more on how to get it out relative to a different
> time zone than it went in. I'll find it.
Sounds like a job for SELECT ... AT TIME ZONE ...;
Karsten
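A minimal sketch of the AT TIME ZONE idiom Karsten is pointing at (table and column names here are hypothetical, not from the original thread):

```sql
-- Store in timestamptz; render in a zone of your choice on the way out.
SELECT ts AT TIME ZONE 'America/New_York' AS local_time
FROM my_table;

-- A literal works too: interpret a naive timestamp as UTC,
-- then display it in another zone.
SELECT (TIMESTAMP '2009-07-20 12:00:00' AT TIME ZONE 'UTC')
       AT TIME ZONE 'Europe/Berlin';
```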
Phoenix Kiula wrote:
> Really, PG absolutely needs a way to upgrade the database without so
> much data related downtime and all these silly woes. Several competing
> database systems are a cinch to upgrade.
I'd call it data corruption, not a silly woe.
I know that Oracle for example would not ma
Craig Ringer wrote:
> > The drawback is that some of the side effects of the INSERT occur
> > before the constraint check fails, so it seems to me that I still need
> > to perform the select.
>
> If you really can't afford the INSERT side effects and can't redesign
> your code to be tolerant of th
* Craig Ringer:
> The test program, attached, demonstrates what I should've known in the
> first place. In SERIALIZABLE isolation, the above is *guaranteed* to
> fail every time there's conflict, because concurrent transactions cannot
> see changes committed by the others. So is a SELECT test then
On 19 Jul 2009, at 12:39, Pedro Doria Meunier wrote:
Hi All,
I *really* need some help with this one...
I have a table of ~18M rows with a 'timestamp with time zone' column. It's
indexed thus:
CREATE INDEX my_table_timestamp_idx
ON my_table
USING btree
(zulu_timestamp);
whenev
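The question is cut off above, but it presumably concerns getting the planner to use that index. A hedged sketch of the kind of range predicate that can use a btree index on a timestamptz column (the bounds are only illustrative):

```sql
-- A sargable range predicate can use the btree index on zulu_timestamp.
SELECT *
FROM my_table
WHERE zulu_timestamp >= '2009-07-01 00:00:00+00'
  AND zulu_timestamp <  '2009-07-02 00:00:00+00';
```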
Thank you Alban for your considerations.
The issue has already been marked as [SOLVED] following Sam's suggestion.
And FYI the issue *was* urgent and the wolf *was* biting my leg! :]
BR,
Pedro Doria Meunier
GSM: +351 96 17 20 188
Skype: pdoriam
On Mon, Jul 20, 2009 at 10:32:15AM +0800, Phoenix Kiula wrote:
> Thanks Martin. I tried searching through the archives and could only
> come with something like this:
>
> http://docs.moodle.org/en/UTF-8_PostgreSQL
>
> But this only has the usual iconv stuff suggested.
>
> Could you please suggest s
* Albe Laurenz:
> The original question asked was "how can I tell an error that is caused
> by incomplete isolation from another error?"
>
> If you have a code segment like
> SELECT COUNT(id) INTO i2 FROM a WHERE id = i;
> IF i2 = 0 THEN
>     INSERT INTO a (id) VALUES (i);
> END IF;
>
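One way to make the check-then-insert pattern quoted above race-safe is to attempt the INSERT and catch the duplicate-key error. A minimal PL/pgSQL fragment, assuming a unique constraint on a.id (a sketch, not the original poster's code):

```sql
-- PL/pgSQL fragment; assumes a unique constraint on a.id.
BEGIN
    INSERT INTO a (id) VALUES (i);
EXCEPTION WHEN unique_violation THEN
    -- Another transaction inserted the same id first; nothing to do.
    NULL;
END;
```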
Hello,
I recently upgraded to version 8.4 and now full text search with the
Russian configuration is not working:
template1=# create database test encoding='win1251';
test=# create table test ("test" varchar(255));
test=# insert into test values ('тест');
test=# select * from test where to_ts
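The last statement is cut off mid-word; a typical form of the query it presumably intends, with the text search configuration named explicitly (a sketch, not the poster's exact SQL):

```sql
SELECT *
FROM test
WHERE to_tsvector('russian', test) @@ to_tsquery('russian', 'тест');
```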
On Jul 20, 9:29 am, dmag...@gmail.com (Chris) wrote:
> > I have tried RubyRep, Bucardo, but none of these have a stable rpm
> > release for the CentOS version of the Linux operating system.
Just to mention this: rubyrep comes with a bundled package.
http://www.rubyrep.org/installation.html
Just u
Yes, I had done UNION. UNION ALL achieves the expected plan and speed! Thank
you!
BTW, this is interesting, because there are only about 5 or 6 rows max
returned from both queries - but I guess the planner expects more and hence
changes the plan to remove duplicates.
On Sun, Jul 19, 2009 at 9:05 P
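Robert's observation can be reproduced in miniature: UNION must deduplicate (typically a sort or hash step in the plan), while UNION ALL just concatenates. A hedged illustration, reusing the dict/word names from the thread (the predicates are made up):

```sql
-- UNION inserts a deduplication step (e.g. HashAggregate or Sort + Unique):
EXPLAIN SELECT word FROM dict WHERE word LIKE 'a%'
UNION   SELECT word FROM dict WHERE word LIKE 'b%';

-- UNION ALL appends the two result sets with no extra step:
EXPLAIN SELECT word FROM dict WHERE word LIKE 'a%'
UNION ALL SELECT word FROM dict WHERE word LIKE 'b%';
```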
Two small suggestions that might make it easier for newcomers to take
advantage of the wonderful database:
1. Googling Postgres docs produces links for all different versions. This
is because incoming links are to different versions. Besides being
confusing, it pushes the pages lower in Google, a
On Sunday 19 July 2009 10:59:24 pm Dennis Gearon wrote:
> Hey Tom,
> I was trying to use 'US/Pacific-New' as my long, unabbreviated
> timezone and it wasn't working. I thought postgres wasn't accepting the
> unabbreviated, geopolitical, daylight savings time, time zones. Turns out,
> the serve
I have two queries which should be equivalent. The planner plans them
differently, although they both take about the same time. Can someone
explain why?
select word from dict
where
word in
(select substr('moon', 0, generate_series(3,length('moon'
select * from dict
inner join (select substr
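Both queries are truncated above, but the subselect they share builds prefixes of a word. Expanded on its own it looks something like this (the bounds are guesses from the fragment):

```sql
-- One prefix of 'moon' per series value. Note that substr with a 0 start
-- yields one character fewer than the length argument suggests, since
-- positions before 1 are counted but empty.
SELECT substr('moon', 0, n) AS prefix
FROM generate_series(3, length('moon')) AS n;
```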
On Mon, 20 Jul 2009 09:34:00 -0400 Robert James wrote:
> Two small suggestions that might make it easier for newcomers to
> take advantage of the wonderful database:
> 1. Googling Postgres docs produces links for all different
> versions. This is because incoming links are to different
> version
On Mon, Jul 20, 2009 at 2:22 PM, Robert James wrote:
> BTW, this is interesting, because there are only about 5 or 6 rows max
> returned from both queries - but I guess the planner expects more and hence
> changes the plan to remove duplicates.
If you sent the plans for the various attempts we mi
On Mon, Jul 20, 2009 at 2:37 PM, Robert James wrote:
> I have two queries which should be equivalent. The Planner plans them
> differently, although they are both about the same time. Can someone
> explain why?
Uhm, please post the two plans and the server version. I know you've
posted them bef
On Mon, Jul 20, 2009 at 2:37 PM, Robert James wrote:
> I have two queries which should be equivalent. The Planner plans them
> differently, although they are both about the same time. Can someone
> explain why?
> select word from dict
> where
> word in
> (select substr('moon', 0, generate_serie
Robert James wrote:
> Two small suggestions that might make it easier for newcomers to take
> advantage of the wonderful database:
> 1. Googling Postgres docs produces links for all different versions. This
> is because incoming links are to different versions. Besides being
> confusing, it pu
On Mon, Jul 20, 2009 at 5:14 PM, Alvaro Herrera wrote:
> Robert James wrote:
>> Two small suggestions that might make it easier for newcomers to take
>> advantage of the wonderful database:
>> 1. Googling Postgres docs produces links for all different versions. This
>> is because incoming links
On Mon, Jul 20, 2009 at 11:14:27AM -0400, Alvaro Herrera wrote:
> Robert James wrote:
> > Could the website offer a link to the 'current' version, whichever it is.
> > Eg instead of just :
> > http://www.postgresql.org/docs/8.1/static/creating-cluster.html
> > Have:
> > http://www.postgresql.or
On Mon, Jul 13, 2009 at 3:53 PM, Dan Armbrust wrote:
>> So this thought leads to a couple of other things Dan could test.
>> First, see if turning off full_page_writes makes the hiccup go away.
>> If so, we know the problem is in this area (though still not exactly
>> which reason); if not we need
Hello,
I'm having a problem with conversion while using the function query_to_xml.
query_to_xml('select Nazev as "TITLE", Datum as "DATE", Autor_Akce as "meta
rel=''action_author''",
gave me
http://www.w3.org/2001/XMLSchema-instance";>
Test
2009-07-20
test
How can I transcode this well gen
>> In my experience, I've found that enabling full logging for a short time
>> (perhaps a few hours) gathers enough data to run through tools like
>> pgFouine
In response to "Greg Sabino Mullane" :
> >> In my experience, I've found that enabling full logging for a short time
> >> (perhaps a few hours) gathers en
On Mon, 2009-07-20 at 13:24 -0400, Bill Moran wrote:
> In response to "Greg Sabino Mullane" :
> > >> In my experience, I've found that enabling
On Jul 20, 2009, at 10:24 AM, Bill Moran wrote:
In response to "Greg Sabino Mullane" :
In my experience, I've found that enabling full logging for a short time
(perhaps a few hours) gathers enough data to run through tools like
pgFouine
In response to "Joshua D. Drake" :
> On Mon, 2009-07-20 at 13:24 -0400, Bill Moran wrote:
> > In response to "Greg Sabino Mullane" :
On Mon, 2009-07-20 at 13:46 -0400, Bill Moran wrote:
> In response to "Joshua D. Drake" :
> > It depends on the system. I have seen even big systems take a huge hit
> > by full logging due to transactional velocity.
>
> Perhaps I'm just in a foul mood today, but I feel like people are picking
Hey all,
I'm looking at Windows, and a couple of quick Google searches didn't give me
anything.
If I set my log_min_duration_statement < 16 ms, I get one of 2 values
for my pgbench runs (pretty quick statements): 0, or 16 ms (sometimes
15.999). If I benchmark some other way (via my app) things l
I'm stumped-- at least for an easy way to do this.
When referencing the uniqueidentifier data type in PostgreSQL 8.1
(now uuid, because of an UPDATE to the pg_ tables) in function definitions in
schemas not in the search path, one must reference the data type as
"public.uniqueidentifier" (ahem.
Hey,
I'm writing a backup script. Right now, I only have one database on my
postgresql server. I'm deciding if I should use pg_dump or pg_dumpall when
backing up the server. As far as I can tell, pg_dumpall cannot compress the
dumps automatically and it only dumps data in the standard SQL text fil
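For a single database, pg_dump's custom format compresses as it writes; pg_dumpall output can simply be piped through gzip. A sketch of both (database and file names are placeholders):

```shell
# Custom format: compressed, and allows selective restore via pg_restore.
pg_dump -Fc mydb > mydb.dump

# pg_dumpall emits plain SQL only, but compresses fine through a pipe;
# --globals-only captures the roles and tablespaces that pg_dump skips.
pg_dumpall --globals-only | gzip > globals.sql.gz
```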
I would like some help in reading the postgres logs.
Here is a snippet of the log.
Auto commit seems to be set to false.
But still the logs shows "CommitTransactionCommand" in debug mode.
The same order number is given for multiple clients.
Please see "CommitTransactionCommand" below for both "sele
Hi --
> I'm writing a backup script. Right now, I only have one database on my
> postgresql server. I'm
> deciding if I should use pg_dump or pg_dumpall when backing up the server. As
> far as I can tell,
> pg_dumpall cannot compress the dumps automatically and it only dumps data in
> the stan
What's the generally accepted method for killing processes that went 'all
wacky' in postgres?
I think I've seen in this group that kill -INT would be the way to go.
I'm playing around with different options for a median function. This one got
out of hand and was taking too long, so I wanted to
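For reference, the usual SQL-level interfaces for this in the 8.x era (the pid value below is only illustrative):

```sql
-- Find the misbehaving backend:
SELECT procpid, current_query FROM pg_stat_activity;

-- Ask it to cancel its current query (equivalent to kill -INT):
SELECT pg_cancel_backend(12345);
```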
On Mon, Jul 20, 2009 at 4:23 PM, APseudoUtopia wrote:
> Hey,
>
> I'm writing a backup script. Right now, I only have one database on my
> postgresql server. I'm deciding if I should use pg_dump or pg_dumpall when
> backing up the server. As far as I can tell, pg_dumpall cannot compress the
> dumps
Here is a snippet of the postgres log.
Auto commit seems to be set to false.
But still the logs shows "CommitTransactionCommand" in debug mode.
The same order number is given for multiple clients.
Please see "CommitTransactionCommand" below for both "select ...for
update" and "update..." SQLs and l
APseudoUtopia wrote:
I'm writing a backup script. Right now, I only have one database on my
postgresql server. I'm deciding if I should use pg_dump or pg_dumpall
when backing up the server. As far as I can tell, pg_dumpall cannot
compress the dumps automatically and it only dumps data in the
s
On Mon, Jul 20, 2009 at 4:44 PM, David Kerr wrote:
> What's the generally accepted method for killing processes that went 'all
> wacky' in postgres?
>
> I think i've seen in this group that kill -INT would be the way to go.
>
> I'm playing around with different options for a median function. this
APseudoUtopia wrote:
Hey,
I'm writing a backup script. Right now, I only have one database on my
postgresql server. I'm deciding if I should use pg_dump or pg_dumpall
when backing up the server. As far as I can tell, pg_dumpall cannot
compress the dumps automatically and it only dumps data in
On Mon, Jul 20, 2009 at 6:48 PM, Scott Marlowe wrote:
> On Mon, Jul 20, 2009 at 4:44 PM, David Kerr wrote:
>> What's the generally accepted method for killing processes that went 'all
>> wacky' in postgres?
>>
>> I think i've seen in this group that kill -INT would be the way to go.
>>
>> I'm play
Scott Marlowe writes:
> On Mon, Jul 20, 2009 at 4:44 PM, David Kerr wrote:
>> I'm playing around with different options for a median function. this one
>> got out of hand
>> and was taking too long, so i wanted to kill it:
>>
>> test=# select array_median(array(select t1 from test2 order by 1));
On Mon, Jul 20, 2009 at 07:18:07PM -0400, Merlin Moncure wrote:
- On Mon, Jul 20, 2009 at 6:48 PM, Scott Marlowe wrote:
- > On Mon, Jul 20, 2009 at 4:44 PM, David Kerr wrote:
- > What's most likely happening is that it's stuck in a tight loop that
- > doesn't check for interrupts, so it just keeps
John R Pierce wrote:
APseudoUtopia wrote:
...
I was leaning towards pg_dumpall, but then I realized that it only
dumps in the standard SQL text file format, and it cannot be
compressed automatically.
pg_dumpall | gzip > dumpfile.sql.gz
That deals with compression. But if you want t
Raji Sridar (raji) wrote:
> I would like some help in reading the postgres logs.
> Here is a snippet of the log.
> Auto commit seems to be set to false.
> But still the logs shows "CommitTransactionCommand" in debug mode.
> The same order number is given for multiple clients.
> Please see "CommitTr
On Mon, Jul 20, 2009 at 2:04 PM, Peter Eisentraut wrote:
> On Monday 20 July 2009 06:45:40 Phoenix Kiula wrote:
>> explain analyze select * from sites where user_id = 'phoenix' order by
>> id desc limit 10;
>>
>> QUERY PLAN
>>
> "0x80" makes me think of the following:
> The data originate from a Windows system, where 0x80 is a Euro
> sign. Somehow these were imported into PostgreSQL without the
> appropriate translation into UTF-8 (how I do not know).
>
> I wonder: why do you spend so much time complaining instead of
> s
I once talked to a company that made a custom version of Postgres. It split
tables up on columns and also by rows, and had some other custom features. It
was enormously faster from what I gathered.
I could have sworn it began with the letter 'T', but maybe not. I don't see
anything like that on th
On Jul 20, 2009, at 6:56 PM, Dennis Gearon wrote:
I once talked to a company that made a custom version of Postgres.
It split tables up on columns and also by rows, had some other
custom features. It was enormously faster from what I gathered.
I could have sworn it began with the letter
On Jul 20, 2009, at 8:56 PM, Dennis Gearon wrote:
I once talked to a company that made a custom version of Postgres.
It split tables up on columns and also by rows, had some other
custom features. It was enormously faster from what I gathered.
I could have sworn it began with the letter
David Kerr writes:
> But I don't see any coded loop or way for me to insert a signal check.
> (I'm not much of a programmer.) The function was just:
> CREATE OR REPLACE FUNCTION array_median(anyarray)
> RETURNS anyelement AS
> $$
> SELECT CASE
> WH
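The quoted function body is cut off above; for reference, one common shape of an array median in plain SQL (a generic sketch under the same signature, not David's actual code):

```sql
-- A hypothetical median: the middle element of a pre-sorted array.
-- For an even count this picks the lower middle; a true average of the
-- two middle elements would require a numeric element type.
CREATE OR REPLACE FUNCTION array_median(anyarray)
RETURNS anyelement AS $$
    SELECT $1[(array_upper($1, 1) + 1) / 2];
$$ LANGUAGE sql;
```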
Because of the three-day break, my response is late.
Valgrind is a great tool, but you must learn how to identify false
positives and tell the difference between a leak that matters (say 1kb
allocated and not freed in a loop that runs once per second) and a leak
that doesn't.
I get the memory
Because of the three-day break, my response is late.
> 8.1.8 is pretty old.
> Also you'll have better luck getting help if you actually include the
> output from Valgrind.
The output from Valgrind was not stored. From now on, I will run it again and
get the result from Valgrind.
PS: the memory leak
On Tue, 2009-07-21 at 13:53 +0900, tanjunhua wrote:
> I get the memory leak scenario not only from Valgrind, but also from the
> output of top command.
> At first I think the memory leak occur when I disconnect database by
> Valgrind, then I write a test sample that just connect and disconnect