On 16/06/10 Tom Lane did say:
> > "row exclusive mode" does not block itself.
>
> Specifically, see the table in
> http://www.postgresql.org/docs/8.3/static/explicit-locking.html#LOCKING-TABLES
>
> Exclusive lock modes are those that conflict with themselves; to wit,
> SHARE ROW EXCLUSIVE mode o
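A minimal sketch of the distinction, assuming a table named t (the table name
is made up): ROW EXCLUSIVE, the lock taken by INSERT/UPDATE/DELETE, does not
conflict with itself, whereas SHARE ROW EXCLUSIVE does, so the second session
below blocks until the first commits or rolls back.

  -- session 1
  BEGIN;
  LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE;

  -- session 2: blocks, because SHARE ROW EXCLUSIVE conflicts with itself
  BEGIN;
  LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE;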
On Thu, 2010-06-17 at 20:56 -0400, Ozz Nixon wrote:
> I found a page:
>
> http://www.postgresql.org/docs/8.3/static/uuid-ossp.html
>
> Is that the only way to get UUID from postgresql?
>
> * We are in the midst of porting a replication product to support postgresql,
> which relies on UUID
I found a page:
http://www.postgresql.org/docs/8.3/static/uuid-ossp.html
Is that the only way to get UUID from postgresql?
* We are in the midst of porting a replication product to support postgresql,
which relies on UUID/GUID for uniqueness instead of Sequences and Site IDs.
Thanks,
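For what it's worth, on 8.3 the uuid-ossp contrib module is loaded by running
the uuid-ossp.sql script shipped in contrib into the target database (the path
below is an assumption and varies by installation). After that, generating
values is a plain function call; a minimal sketch, with a made-up table:

  -- load the contrib functions into the current database
  \i /usr/share/postgresql/8.3/contrib/uuid-ossp.sql

  -- generate a random (version 4) UUID
  SELECT uuid_generate_v4();

  -- or use it as the default for a key column
  CREATE TABLE sites (
      site_id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
      name    text
  );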
Sorry, never mind: the second query returns the correct results.
It first resolves the inner query to a single date, and then gets all
records matching that date.
On Jun 17, 6:25 am, pythonista wrote:
> Hi, I'm new:
>
> This table is called rssitem. Given the fields:
>
> str_key(varchar) se
We're using PostgreSQL to back tickets in CAS, http://www.jasig.org/cas,
in a large university environment where the ephemeral nature of CAS
tickets places a high concurrent read/write workload on the database.
We recently switched platforms from Oracle to PG and saw a dramatic
increase in dead
Sweet! I knew there was a way to do this; I'll mess around with it
more in a bit.
On Thu, Jun 17, 2010 at 10:38 AM, Francisco Figueiredo Jr.
wrote:
> I would go with 2).
>
> Npgsql supports multiple resultsets. You can pass queries separated by
> semicolons ';' or you can use a procedure call wh
On Thu, Jun 17, 2010 at 4:51 PM, Felde Norbert wrote:
> The first error message was what I got after postgres crashed and I
> tried to make a dump, run vacuum or tried something else.
> The second message I got when I tried to repair the problem, so it
> does not matter because I did something wro
The first error message was what I got after postgres crashed and I
tried to make a dump, run vacuum or tried something else.
The second message I got when I tried to repair the problem, so it
does not matter because I did something wrong, I see.
If I could choose I would use a Linux server too, bu
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Merlin Moncure
> Sent: Thursday, June 17, 2010 12:56 PM
> To: Tom Lane
> Cc: Felde Norbert; pgsql-general@postgresql.org; Scott Marlowe
> Subject: Re: [GENERAL] postgr
On Thu, Jun 17, 2010 at 2:53 PM, Tom Lane wrote:
> You weren't too specific about how you got into this state, but I
> suppose that it must have been a system crash or power failure. Even
> then, you would not have gotten burnt if the filesystem and hardware
> did what they're supposed to do. I
"Joshua D. Drake" writes:
> On Thu, 2010-06-17 at 19:27 +, Dann Corbit wrote:
>>> (Perhaps more to the point, if they don't have problems, it's likely
>>> because they tell their customers how to configure Windows boxes safely
>>> before the fact. And people who are spending the money for an
On Thu, 2010-06-17 at 19:27 +, Dann Corbit wrote:
> > -Original Message-
> > From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> > Sent: Thursday, June 17, 2010 12:20 PM
> > To: Dann Corbit
> > Cc: Felde Norbert; pgsql-general@postgresql.org; Scott Marlowe
> > Subject: Re: [GENERAL] postgres c
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Thursday, June 17, 2010 12:20 PM
> To: Dann Corbit
> Cc: Felde Norbert; pgsql-general@postgresql.org; Scott Marlowe
> Subject: Re: [GENERAL] postgres crash SOS
>
> Dann Corbit writes:
> >> (Personally I'd never run
On Thu, Jun 17, 2010 at 1:19 PM, Tom Lane wrote:
> (Perhaps more to the point, if they don't have problems, it's likely
> because they tell their customers how to configure Windows boxes safely
> before the fact. And people who are spending the money for an Oracle
> license will heed that advice.
Dann Corbit writes:
>> (Personally I'd never run a database I cared about on Windows.)
> Somehow, I doubt that Windows is to blame. For instance, Oracle and
> SQL*Server seem to run fine on Windows without this sort of problem.
Really? Are you front-line support for either, so that you can sa
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Tom Lane
> Sent: Thursday, June 17, 2010 11:54 AM
> To: Felde Norbert
> Cc: pgsql-general@postgresql.org; Scott Marlowe
> Subject: Re: [GENERAL] postgres crash SOS
>
Felde Norbert writes:
> The message is the same for the original pg_clog/0003, the 0003
> containing binary 0 and after pg_resetxlog:
> pg_dump: Error message from server: ERROR: could not access status of
> transaction 3974799
> DETAIL: Could not read from file "pg_clog/0003" at offset 204800:
Hi all
I've been working with XML storage in Pg and was puzzled by the fact
that Pg appears to refuse to store a document with a DOCTYPE declaration
- it was interpreting it as a regular element and rejecting it.
This turns out to be because Pg parses XML as a fragment (i.e. option
CONTENT) when ca
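A small sketch of the workaround being discussed: the xmloption setting
controls whether xml input is parsed as CONTENT (a fragment) or as DOCUMENT,
and only the latter should accept a DOCTYPE. The example document below is
made up.

  -- session-wide: make casts to xml parse input as a full document
  SET XML OPTION DOCUMENT;
  SELECT '<!DOCTYPE note SYSTEM "note.dtd"><note>hi</note>'::xml;

  -- or per value, without changing the session setting
  SELECT xmlparse(document '<!DOCTYPE note SYSTEM "note.dtd"><note>hi</note>');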
I would go with 2).
Npgsql supports multiple resultsets. You can pass queries separated by
semicolons ';' or you can use a procedure call which returns a setof
refcursor.
Either way, you will need to call NextResult on your DataReader, just
like with SQL Server.
You can check our user manual: htt
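On the server side, the setof refcursor variant mentioned above can look
roughly like this (a sketch only; the function, cursor, and table names are
made up):

  CREATE OR REPLACE FUNCTION get_two_sets() RETURNS SETOF refcursor AS $$
  DECLARE
      c1 refcursor := 'cur_customers';
      c2 refcursor := 'cur_orders';
  BEGIN
      OPEN c1 FOR SELECT * FROM customers;
      RETURN NEXT c1;
      OPEN c2 FOR SELECT * FROM orders;
      RETURN NEXT c2;
  END;
  $$ LANGUAGE plpgsql;

Call it inside a transaction so the cursors stay open while the client reads
each resultset.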
Evan Carroll writes:
> Why is it that the "DELIMITER" and "NULL" options are under [WITH] but
> not under [CSV]? This seems at least slightly awkward.
They're not specific to CSV.
regards, tom lane
Evan Carroll writes:
> While asking questions about the grammar of COPY, why do we have a
> bracket of `[ [ WITH ] ...` rather than `[ WITH ]` ...
Somebody left out the closing brackets in the documentation, evidently.
Will fix.
regards, tom lane
Why is it that the "DELIMITER" and "NULL" options are under [WITH] but
not under [CSV]? This seems at least slightly awkward.
Command: COPY
Description: copy data between a file and a table
Syntax:
COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ]
[ BI
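For reference, an invocation that combines the WITH-style options with CSV
mode might look like this (table name and path are made up):

  COPY mytable (id, name)
  FROM '/tmp/data.csv'
  WITH DELIMITER ',' NULL '' CSV HEADER;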
While asking questions about the grammar of COPY, why do we have a
bracket of `[ [ WITH ] ...` rather than `[ WITH ]` ...
COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ]   (why is there an opening bracket here that is never closed?)
[
Jacek Trzmiel writes:
> I have a problem with a growing toast table in pgsql 8.3.5. Probably
> raising max_fsm_pages and running autovacuum more aggressively will fix
> it, but during investigation I've got this puzzling output from VACUUM
> ANALYZE VERBOSE:
>> INFO: "pg_toast_16505": found 7563
Hi,
I've got a production pg-server V8.4 on SuSE and my dev-server on WinXP
localhost.
The two servers produce different results in text sorting.
The local server sorts as expected, but the production db produces an
unexpected order.
They have identical data. That the testbox has daily backups of t
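The first thing worth comparing in a case like this is the locale each
cluster was initialized with, since text ordering follows lc_collate and the
two platforms have almost certainly been initdb'd differently. A quick check
on both servers:

  SHOW lc_collate;
  SHOW lc_ctype;
  SELECT name, setting FROM pg_settings WHERE name LIKE 'lc%';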
On Thu, Jun 17, 2010 at 10:35 AM, Merlin Moncure wrote:
> On Thu, Jun 17, 2010 at 4:47 AM, Felde Norbert wrote:
>> Before I began I made a filesystem level backup after I stopped the
>> postgres service.
>> I have the original 0003 file; its size is 204800. The size of the
>> other files in this
On Wed, Jun 16, 2010 at 10:42 PM, zhong ming wu wrote:
> Dear List
>
> Where can I find the mapping of oid to pg data types mentioned in the
> libpq documentation?
> Why is such information not in the documentation? Is it considered
> general knowledge?
curious: what do you need the oids for?
built in
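If the goal is just to turn the OIDs that libpq reports (e.g. from PQftype)
into type names, the mapping lives in the pg_type system catalog and can be
queried directly; the built-in constants are also defined in
src/include/catalog/pg_type.h in the source tree. A sketch:

  -- name of a specific OID returned by PQftype()
  SELECT oid, typname FROM pg_type WHERE oid = 23;   -- 23 = int4

  -- or dump the whole mapping
  SELECT oid, typname FROM pg_type ORDER BY oid;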
On Thu, Jun 17, 2010 at 4:47 AM, Felde Norbert wrote:
> Before I began I made a filesystem level backup after I stopped the
> postgres service.
> I have the original 0003 file; its size is 204800. The size of the
> other files in this dir is 262144.
hm...any indication of why the file is small? r
On Wednesday 16 June 2010 7:42:02 pm zhong ming wu wrote:
> Dear List
>
> Where can I find the mapping of oid to pg data types mentioned in the
> libpq documentation?
> Why is such information not in the documentation? Is it considered
> general knowledge?
>
> Thanks
>
> Mr Wi
I think you are looking for
>I'm trying to make a query that, given N and a date, gives me the interval of
>N hours with the max(sum(...)).
select sum(i) as s,
       timestamp '2010-06-16 00:00:00'
         + (extract(hour from d)::integer / 3) * 3 * interval '1 hour' as sumd
from p
where d >= '2010-06-16' and d < '2010-06-17'
group by extract(hour from d)::integer / 3;
Regarding the RAM thing, all the recipes will be in the database but
when the engine loads, it will create a graph in memory of all the
recipes. They'll actually be linked by similarity: recipe A will have
an array of pointers to other recipes that someone who likes recipe A
might also like. Thou
Keeping anything in RAM means your data will be lost forever when the session
is lost (then again, if the data is garbage to begin with, then throwing away
garbage is no problem).
If your data is NOT garbage, then you're going to want to persist the data.
Look at Oracle/MySQL and yes, even SQL-se
Hi all – I apologize for the misuse of this alias, but I figured I
have the audience that might be interested in helping me out with a
rather difficult coding problem. In fact, if I can find someone who’s
looking for some part-time work and is interested in a bit of extra
cash, then it’ll be worth
I have found an initial solution. For an interval N of 3 hours the query can
be:
select distinct poi,
       first_value(start_date) OVER w as start_date,
       first_value(end_date) OVER w as end_date,
       first_value(the_sum) OVER w
from
  (select poi, t1.ts as start_date,
          t1.ts + '3 hour'::interval as end_dat
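For completeness, a self-contained variant of the same idea using fixed,
clock-aligned buckets rather than the sliding windows above: group the rows
into N-hour buckets, sum per bucket, and keep the largest. Table and column
names (p, d, i) follow the earlier query; treat this as a sketch, not a
drop-in answer.

  -- N = 3 hours, restricted to a single day
  SELECT date_trunc('day', d)
           + (extract(hour FROM d)::integer / 3) * 3 * interval '1 hour' AS bucket_start,
         sum(i) AS total
  FROM p
  WHERE d >= timestamp '2010-06-16 00:00:00'
    AND d <  timestamp '2010-06-17 00:00:00'
  GROUP BY 1
  ORDER BY total DESC
  LIMIT 1;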
On Wed, Jun 16, 2010 at 1:15 PM, Craig Ringer
wrote:
> On 16/06/10 16:56, Allan Kamau wrote:
>
>> The function I have mentioned above seems to "hang" indefinitely (now
>> 12 hours and counting) while it took only 2 secs for the many times it
>> was previously invoked from the client application (wit
2010/6/16 Filip Rembiałkowski :
>
> 2010/6/15 Allan Kamau
>>
>> I do have a PL/SQL function that gets called many times, but
>> with different parameter values each time. Most
>> invocations of this function run in a couple of seconds, however some
>> invocations of the same f
Before I began I made a filesystem level backup after I stopped the
postgres service.
I have the original 0003 file; its size is 204800. The size of the
other files in this dir is 262144.
I corrected the permissions of the whole data dir, but the error
message is the same.
The exact messages:
The
sunpeng writes:
> hi, I wrote a function in the postgresql source code; how do I register
> this function?
See src/include/catalog/pg_proc.h
But you should *really* consider making it a loadable module. That's the
way it makes sense for any code you want to add to the server unless
you're preparing a
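The loadable-module route comes down to compiling the function into a shared
library (typically with PGXS) and registering it with CREATE FUNCTION,
instead of editing pg_proc.h. Roughly, with made-up names:

  -- after building my_module.so and installing it into $libdir
  CREATE FUNCTION add_one(integer) RETURNS integer
      AS 'my_module', 'add_one'
      LANGUAGE C STRICT;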
Hi,
I have a problem with a growing toast table in pgsql 8.3.5. Probably
raising max_fsm_pages and running autovacuum more aggressively will fix
it, but during investigation I've got this puzzling output from VACUUM
ANALYZE VERBOSE:
INFO: "pg_toast_16505": found 756385 removable, 23003704 nonr
On 16 June 2010 19:07, Bryan Montgomery wrote:
> If I do select * from Table_A a, Table_X x where a.id = x.id
> and it only returns 1 or a few rows, should it use an index if I have an
> index built on Table_A using id?
It should, because it scans Table_A too.
> Or do I need an index individually
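A quick way to check, with made-up index and table names: build the index,
update statistics, and look at the plan.

  CREATE INDEX table_a_id_idx ON Table_A (id);
  ANALYZE Table_A;
  EXPLAIN ANALYZE
  SELECT * FROM Table_A a, Table_X x WHERE a.id = x.id;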