On Mon, 2007-10-22 at 01:23 -0400, Guy Rouillier wrote:
> Ow Mun Heng wrote:
> > On Sun, 2007-10-21 at 22:54 -0400, Guy Rouillier wrote:
> >> The current app uses stored procedures
> >> for all inserts, and PG didn't do well with that approach; substituting
Hi,
Is there a method to obtain the query's runtime from any of the pg_*
tables?
Right now, I'm only seeing the backend_start_time (from
pg_stat_activity), and I'm also not sure how to interpret it.
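For what it's worth, pg_stat_activity also carries a query_start column (populated when command-string stats collection is enabled on the 8.x releases), so an approximate per-query runtime can be computed; a sketch:

```sql
-- Approximate runtime of currently executing queries (8.x column names)
SELECT procpid,
       now() - query_start AS runtime,
       current_query
FROM   pg_stat_activity
WHERE  current_query <> '<IDLE>'
ORDER  BY runtime DESC;
```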
---(end of broadcast)---
TIP 1: if posting/rea
On Wed, 2007-10-24 at 07:57 +0200, Stefan Schwarzer wrote:
> Hi there,
>
>
> I read dozens of times the "TIP 2: Don't 'kill -9' the postmaster"...
>
>
> Now, what am I supposed to do if I launched a query which takes ages,
> and which I want to interrupt?
look for the query's procpid and th
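Following the advice above, a hedged sketch for the 8.x releases (where the pid column in pg_stat_activity is procpid; the 12345 below is a placeholder pid):

```sql
-- Identify the long-running backend...
SELECT procpid, current_query, query_start FROM pg_stat_activity;

-- ...then cancel just its current query; the connection itself survives.
SELECT pg_cancel_backend(12345);
```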
On Tue, 2007-10-23 at 09:28 -0500, Erik Jones wrote:
> > Ow Mun Heng wrote:
> >> Hi,
> >>
> >> Is there a method to obtain the query's runtime from any of the pg_*
> >> tables?
> query_start does, however, give you the time that the query star
On Mon, 2007-10-22 at 08:20 -0400, Bill Moran wrote:
> In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:
>
> > Ow Mun Heng wrote:
> > > I'm wondering if what I'm doing is redundant.
> > >
> > > I have a primary key on columns
PL/pgSQL function "query_time2" line 3 at SQL statement
>
> Il Friday 26 October 2007 08:24:46 Ow Mun Heng ha scritto:
> > Hi,
> >
> > After Erik Jones gave me the idea for this, I started to become lazy to
> > have to type this into the sql everytime I want
Hi,
After Erik Jones gave me the idea for this, I got too lazy to keep typing
this into the SQL every time I want to see how long a query is taking, so
I thought I'd create a function to do just that. I ended up with:
CREATE OR REPLACE FUNCTION query_time()
RETURNS SETOF que
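The definition is truncated above; a minimal sketch of what such a function could look like (assuming it simply wraps pg_stat_activity rather than declaring its own composite type):

```sql
CREATE OR REPLACE FUNCTION query_time()
RETURNS SETOF pg_stat_activity AS $$
    SELECT *
    FROM   pg_stat_activity
    WHERE  current_query <> '<IDLE>'
    ORDER  BY query_start;
$$ LANGUAGE sql;

-- Usage: SELECT now() - query_start AS runtime, current_query FROM query_time();
```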
It's not an optimal setup but since I only have 3x500G drives to play
with, I can't build a Raid10, so I'm going for Raid5 to test out
capability before I decide on Raid5 vs Raid1 tradeoff. (Raid1 = No Fault
tolerance since 3 drives)
Anyway.. I'm trying to figure out the chunk size for the raid. I
CREATE ROLE db_users
NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE;
ALTER ROLE db_users SET search_path=mydb, public;
When I create a login,
CREATE ROLE oli LOGIN
ENCRYPTED PASSWORD 'mXXX'
NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE;
ALTER ROLE oliver SET search_path=xmms; < I have to set
On Thu, 2007-11-01 at 00:02 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > Why is this such or is this an intended feature?
>
> No, there is no intention that ALTER ROLE SET values be inherited ---
> what you get is just what is attached to the role
Hi All,
Thanks to Dave Page's instructions, I'm now a pretty happy camper with
pgadmin's pgagent. It provides a graphical interface for the non-*nix
inclined people. (Otherwise a sh script would have done nicely.)
Anyway.. I've got a request and I'm not sure if this has been talked
about or (
I just ran a vacuum verbose on the entire DB and this came out.
number of page slots needed (274144) exceeds max_fsm_pages (153600)
Hence, I've changed the max to 400,000 (pulled straight out of the
air). How does one calculate the number needed, anyway?
Another question is, based on w
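There isn't really a formula to apply by hand: the tail of a database-wide VACUUM VERBOSE (as quoted above) prints both the page slots needed and the current limit, so the usual practice is to set the limit comfortably above the reported need and re-check after the next vacuum. A sketch of the relevant postgresql.conf lines (values illustrative only; both settings require a server restart on 8.x):

```
# postgresql.conf
max_fsm_pages = 400000      # must exceed the reported "page slots needed"
max_fsm_relations = 1000    # one entry per table and index tracked
```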
On Thu, 2007-11-01 at 20:56 -0400, Bill Moran wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> >
> > I just ran a vacuum verbose on the entire DB and this came out.
> >
> > number of page slots needed (274144) exceeds max_fsm_pages (153600)
> >
>
On Thu, 2007-11-01 at 21:22 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > OK.. Vacuum verbose took 2 hours.. Vacuum full will likely take 2x that
> > I presume.
>
> Probably a lot more, and it'll bloat your indexes while it's at it.
>
On Thu, 2007-11-08 at 12:02 -0500, Bill Moran wrote:
> Frequently, when people ask for help because they've exceed max_fsm*,
> it's because they're not paying attention to their systems, and therefore
> the problem has been occurring for a while before it got so bad that
> they couldn't ignore it.
I'm trying to do some periodic updates from another DB and would like to
know the # of updates/inserts/deletes from that job.
I usually write a function that uses the GET DIAGNOSTICS ... = ROW_COUNT
construct, which tells me how many rows were affected by the query.
Now, for this case, I'm not
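For context, the construct in question is PL/pgSQL's GET DIAGNOSTICS; a minimal sketch, with made-up table and column names:

```sql
CREATE OR REPLACE FUNCTION sync_from_staging() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    UPDATE target_tbl t
    SET    val  = s.val
    FROM   staging_tbl s
    WHERE  s.id = t.id;

    -- ROW_COUNT reflects the immediately preceding SQL statement
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```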
On Fri, 2007-11-16 at 09:00 +0800, Ow Mun Heng wrote:
> I'm trying to do some periodic updates from another DB and would like to
> know the # of updates/inserts/deletes from that job.
>
> I usually write a function which gets/uses the GETS DIAGNOSTIC ROW COUNT
> parameter wh
On Fri, 2007-11-16 at 10:22 +0100, Martijn van Oosterhout wrote:
> On Fri, Nov 16, 2007 at 09:00:46AM +0800, Ow Mun Heng wrote:
> > I usually write a function which gets/uses the GETS DIAGNOSTIC ROW COUNT
> > parameter which will tell me how many rows were affected by the query.
On Wed, 2007-11-14 at 00:02 -0800, Willem Buitendyk wrote:
> Perfect - that appears to be exactly what I was looking for.
> Reg Me Please wrote:
> > Il Wednesday 14 November 2007 05:50:36 Willem Buitendyk ha scritto:
> >
> >> Will Postgresql ever implement an ignore on error feature when bulk
On Fri, 2007-11-16 at 09:28 -0600, Erik Jones wrote:
> On Nov 16, 2007, at 3:26 AM, Ow Mun Heng wrote:
>
> >
> > On Fri, 2007-11-16 at 10:22 +0100, Martijn van Oosterhout wrote:
> >> On Fri, Nov 16, 2007 at 09:00:46AM +0800, Ow Mun Heng wrote:
> >>> I usual
On Tue, 2007-11-13 at 09:49 -0500, Bill Moran wrote:
> In response to Ow Mun Heng <[EMAIL PROTECTED]>:
> > How does one monitor it closely anyway? the warning comes when one does
> > a vacuum verbose and with autovacuum turned on, I don't even see it
> > anywhere.
On Mon, 2007-11-19 at 08:24 -0500, Bill Moran wrote:
> In response to Ow Mun Heng <[EMAIL PROTECTED]>:
> >
> > Even with the regular vacuuming and even a vacuum full ( on my test DB)
> > I still see that perhaps something is wrong (from the below)
> >
> >
On Wed, 2007-11-28 at 18:30 -0500, Tom Lane wrote:
> For btree indexes, there is a temporary copy of the index data, which
> will go wherever you have arranged for temp files to go. (I think that
> easy user control of this may be new for 8.3, though.)
In 8.2.5 is there a way to control this? I
On Thu, 2007-11-29 at 21:22 -0800, Postgres User wrote:
> I have a large function that's doing a number of calcs. The final
> return value is wrong for a simple reason: any division statement
> where the numerator is less than the denominator is returning a zero.
>
> Each of these statements re
I was browsing the net yesterday after reading through the thread on PG
vs Informix, and I was curious about Ingres.
Ingres is also an open source RDBMS (and data warehousing platform) and I'm
wondering if anyone here has anything to say about it. They also offer
community editions but I've not gone to see
On Fri, 2007-11-30 at 01:59 -0500, Robert Treat wrote:
> On Thursday 29 November 2007 21:12, Ow Mun Heng wrote:
> > Ingress is also an open source RDBM (and DataWarehouseing) and I'm
> > wondering if anyone here has anything to say about it. They also offer
> > commu
On Tue, 2007-12-04 at 14:21 -0500, Gauthier, Dave wrote:
> Is there something like a freeware windows client app that does DBA
> stuff for a remote server? Sort of like TOAD for Oracle?
pgadmin3?
and please don't hijack threads
I've got a desired output which looks something like this..
 vdt | count
-----+-------
   1 |   514
   2 |    27
   3 |    15
   4 |
   5 |    12
   6 |    15
the query in psql is something like this..
select vdt, count(*) from footable
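The statement is cut off above; its complete form is presumably the usual aggregate. Note that a plain GROUP BY only emits rows for vdt values that actually occur, so to also get the empty row for vdt = 4 shown in the desired output, one common trick (sketched here with an assumed range of 1-6) is to left-join against generate_series:

```sql
-- Plain form: no row at all for vdt values with no matches
SELECT vdt, count(*)
FROM   footable
GROUP  BY vdt
ORDER  BY vdt;

-- With gaps filled in: one row per vdt in the assumed range 1..6
SELECT s.vdt, count(f.vdt) AS count
FROM   generate_series(1, 6) AS s(vdt)
LEFT   JOIN footable f ON f.vdt = s.vdt
GROUP  BY s.vdt
ORDER  BY s.vdt;
```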
On Thu, 2007-12-06 at 23:06 -0500, Rodrigo De León wrote:
> On Dec 6, 2007 10:44 PM, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > I've got a desired output which looks something like this..
> >
> > vdt| count
> > +---
> >
I'm using CentOS 5 as the OS, so there's no fancy DTrace to look at
which processes are causing my disks to thrash.
I have 4 disks in the box. (all ide, 7200rpm)
1 OS disk [hda]
2 raided (RAID 1) disks [hdb/hdc]
1 pg_xlog disk [hdd] (and also used as an alternate tablespace for
temp/in-transit files) vi
I'm starting to perform some basic housekeeping to try to trim some big
tables (~200 million rows - ~50GB+indexes) into separate partitions (via
inheritance).
The current issue which prompted me to do such housekeeping is due to
long database wide vacuum time. (it went from 2 hours to 4 hours to 7
On Fri, 2007-12-14 at 01:54 -0500, Tom Lane wrote:
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > there are a few things that I can think of that can can cause postgres
> > to cause i/o on a drive other than the data drive:
> > * logging (eliminate this by moving logs temporarily)
I'll have to
On Fri, 2007-12-14 at 09:35 +0100, Harald Armin Massa wrote:
> Ow Mun Heng,
>
> The current issue which prompted me to do such housekeeping is
> due to
> long database wide vacuum time. (it went from 2 hours to 4
> hours to 7
> hours)
On Sun, 2007-12-16 at 16:11 -0800, Joshua D. Drake wrote:
> On Sun, 16 Dec 2007 17:55:55 -0600
> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>
> > On Dec 14, 2007 1:33 AM, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > > I kept looking at th
On Tue, 2007-12-18 at 20:12 -0700, Gregory Williamson wrote:
> Kevin H. wrote on Tue 12/18/2007 7:26 PM
> > This is aimed at everyone in this community who contributes to the
> > Postgres project, but especially at the core folks who continually
> make this community great through energy, time, mo
Hi,
is there a simple way for me to back up all the functions which I've
written for a server?
Thanks
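Assuming the goal is just the DDL, a schema-only dump captures every CREATE FUNCTION (along with tables, views, etc.); a sketch, with mydb as a placeholder database name:

```
pg_dump -s mydb > schema.sql
```

There's no pg_dump switch that selects only functions, so the usual advice is to keep the function source files under version control and treat the schema dump as a backstop.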
On Mon, 2007-11-26 at 12:39 -0500, Chris Browne wrote:
> [EMAIL PROTECTED] ("Jeff Larsen") writes:
> Unfortunately, the only way to make things deterministic (or to get
> from "near real time" to "*GUARANTEED* real time") is to jump to
> synchronous replication, which is not much different from 2P
On Wed, 2007-11-21 at 15:33 -0500, Andrew Sullivan wrote:
> On Wed, Nov 21, 2007 at 12:20:51PM -0800, Garber, Mikhail wrote:
>
> > In the high-availabilty situation with a warm standby, is it possible (or
> > planned) to be able to make standby readable?
>
> Yes, but it won't happen for 8.3. It
{Resend, as I don't see it on the list after 4 hours}
I'm just getting my feet wet with Slony, and during the setup of the slave
I did a dump and restore of the master DB to the slave DB.
However during the startup of slony, I noticed that it issues a truncate
command to the (to be) replicated table.
On Thu, 2008-01-03 at 19:17 -0500, Geoffrey wrote:
> Ow Mun Heng wrote:
> > However during the startup of slony, I noticed that it issues a truncate
> > command to the (to be) replicated table. Hence, this means that there's
> > no such need for me to do a dum
I'm just getting my feet wet with Slony, and during the setup of the slave
I did a dump and restore of the master DB to the slave DB.
However, during the startup of Slony, I noticed that it issues a truncate
command to the (to-be) replicated table. Hence, this means that there's
no such need for me
On Fri, 2007-11-30 at 09:33 -0500, Andrew Sullivan wrote:
> On Fri, Nov 30, 2007 at 01:22:31PM -, Greg Sabino Mullane wrote:
> > or a scapegoat. Please don't perpetuate this urban myth. No companies are
> > suing Oracle and Microsoft because of their products, and companies have
> > no expec
On Mon, 2008-01-07 at 13:13 -0600, Scott Marlowe wrote:
> On Jan 7, 2008 12:57 PM, Devrim GÜNDÜZ <[EMAIL PROTECTED]> wrote:
> >
> > I want to announce PostgreSQL RPM Buildfarm today.
>
> This is very very very cool! Thanks you guys.
Ultra cool, even. No more mucking about for RPMs, and finally a ptop
On Mon, 2008-01-07 at 21:54 -0800, Devrim GÜNDÜZ wrote:
> Hi,
>
> On Tue, 2008-01-08 at 13:34 +0800, Ow Mun Heng wrote:
> > finally a ptop in RPM form.
>
> I packaged it last month, but did not announce it to public.:)
Hmm.. can I find it in the same location as the pgpool
On Wed, 2008-01-09 at 08:41 +0200, Sim Zacks wrote:
> Another way of doing this, without dblink, is using an unsecured language
> (plpython, for example) is to connect to the sql server using odbc and then
> putting the data into your postgresql.
I use perl DBI to connect to both PG and MSSQL.
On Tue, 2008-01-08 at 23:37 -0700, Guido Neitzer wrote:
> On 08.01.2008, at 23:20, Joshua D. Drake wrote:
> Like, I have a situation where I need multi-master just for
> availability. Two small servers are good enough for that. But
> unfortunately with PostgreSQL the whole setup is a major pa
On Tue, 2008-01-08 at 23:16 -0800, Joshua D. Drake wrote:
> Ow Mun Heng wrote:
> > On Wed, 2008-01-09 at 08:41 +0200, Sim Zacks wrote:
> >> Another way of doing this, without dblink, is using an unsecured language
> >> (plpython, for example) is to connect to th
On Tue, 2008-01-08 at 23:05 -0800, Joshua D. Drake wrote:
> Sim Zacks wrote:
>
> >
> > The reason companies go with the closed source, expensive solutions is
> > because they are better products.
>
> Sometimes, sometimes not. It depends on your needs.
This is total FUD. Everything has a plac
On Wed, 2008-01-09 at 00:21 -0700, Guido Neitzer wrote:
> On 09.01.2008, at 00:08, Joshua D. Drake wrote:
> > Great! I was just trying to show you that there was a JDBC layer
> > available for multi-mastering with PostgreSQL.
>
> When I find some time, I might dig a bit deeper in the Sequoia s
On Wed, 2008-01-09 at 00:24 -0700, Guido Neitzer wrote:
> On 09.01.2008, at 00:14, Ow Mun Heng wrote:
>
> >> Like, I have a situation where I need multi-master just for
> >> availability. Two small servers are good enough for that. But
> >> unfortunately wit
Just wondering about my 'perceived' feeling that, since implementing Slony
for master/slave replication of select tables, my master database's
performance has been getting slower.
I'm constantly seeing a very high amount of IO wait. ~40-80 according to
vmstat 1
and according to atop. (hdb/hdc = raid1 mirror)
I'm currently seeing more and more problems with vacuum as the DB size
gets bigger and bigger. (~220GB+)
Bear in mind that I'm working on a fairly big DB with unfairly sized
hardware (Celeron 1.7G, 2x500G Raid1 dbspace1, 1x500Gb dbspace2, 1x80G
system, 768MB Ram, 2G Swap on dspace2)
IO is main bo
On Fri, 2008-01-18 at 14:57 -0500, Chris Browne wrote:
> [EMAIL PROTECTED] (Ow Mun Heng) writes:
> > Just wondering if my 'Perceived' feeling that since implementing slony
> > for master/slave replication of select tables, my master database
> > performance
On Mon, 2008-01-28 at 22:17 +, Jeremy Harris wrote:
> We have one problematic table, which has a steady stream of entries
> and a weekly mass-delete of ancient history. The "bloat" query from
> Greg Sabino Mullane (thanks to Greg Smith for pointing it out) returns:
>
> schemaname | tablenam
On Mon, 2008-01-28 at 20:57 -0500, Greg Smith wrote:
> On Tue, 29 Jan 2008, Ow Mun Heng wrote:
>
> > Can you let me know what is the sql used to generate such a nice summary
> > of the tables?
>
> Might as well dupe the old text; this went out to the performance list:
This is OT for this list and I don't have access to I-net (only email)
and I'm not subscribed to the Slony list.
I need to add a few additional columns to an existing replicated
set/table. I know that I can't just add the columns normally but have to
go through slonik's EXECUTE SCRIPT (/usr/bin/sl
On Tue, 2008-01-29 at 19:16 +, Dave Page wrote:
> On Jan 29, 2008 6:16 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>
> > I try to be reasonable (no laughing people :)).
>
> Oh it's hard, so very, very hard!
>
But seriously, I've ranted on this some time ago (and you can tell that
I'm ab
On Wed, 2008-01-30 at 09:14 +0100, Eugenio Tacchini wrote:
> Hello,
> I'm writing a function in PL/pgSQL and I would like to know if there
> is a method to get the number of records in a result set, after a
> select query, without executing the same query using COUNT(*).
not sure what exactly
On Wed, 2008-01-30 at 20:47 +0900, Jason Topaz wrote:
> I don't disagree with your point that it's not robust with examples of
> "exactly how a particular problem can be solved". But I think there are
> enough, and more importantly, I don't think problem-solving is an
> important focus for a man
On Wed, 2008-01-30 at 20:14 -0600, Josh Trutwin wrote:
> On Wed, 30 Jan 2008 13:20:58 -0500
> Tom Hart <[EMAIL PROTECTED]> wrote:
>
> > I have 4 years of mySQL experience (I know, I'm sorry)
>
> Why is this something to apologize for? I used to use MySQL for
> everything and now use PostgreSQL
Hi,
I finally figured out (I think) why analyzing some specific tables is
taking so freaking long: 12 million rows, ~11GB table.
I had some of the columns with the statistics target set to 1000. (This
was previously because I was trying to optimise some things to make things
faster.) When t
On Wed, 2008-03-12 at 21:33 -0500, Adam Rich wrote:
> select c.relname, a.attname, attstattarget
> from pg_catalog.pg_attribute a, pg_catalog.pg_class c,
> pg_catalog.pg_namespace n
> where a.attrelid = c.oid and c.relnamespace=n.oid
> and n.nspname = 'public' and a.attnum > 0
Funny, that does not
On Wed, 2008-03-12 at 21:40 -0700, Scott Marlowe wrote:
> On Wed, Mar 12, 2008 at 8:45 PM, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > On Wed, 2008-03-12 at 21:33 -0500, Adam Rich wrote:
> > > select c.relname, a.attname, attstattarget
> > > from pg_catalog.pg_a
query is something like this
Select *
from v_test
where acode Like 'PC%'
and rev = '0Q'
and hcm = '1'
and mcm = 'K'
where acode, rev, hcm and mcm are all indexed.
Currently this query is only using rev and mcm for the BitmapAnd.
It then does a bi
On Fri, 2008-03-14 at 00:50 -0600, Scott Marlowe wrote:
> On Fri, Mar 14, 2008 at 12:28 AM, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > query is something like this
> >
> > Select *
> > from v_test
> > where acode Like 'PC%
On Fri, 2008-03-14 at 07:53 +0100, A. Kretschmer wrote:
> am Fri, dem 14.03.2008, um 14:28:15 +0800 mailte Ow Mun Heng folgendes:
> > query is something like this
> >
> > Select *
> > from v_test
> > where acode Like 'PC%'
> >
On Fri, 2008-03-14 at 08:26 +0100, A. Kretschmer wrote:
> am Fri, dem 14.03.2008, um 15:06:56 +0800 mailte Ow Mun Heng folgendes:
> >
> > On Fri, 2008-03-14 at 07:53 +0100, A. Kretschmer wrote:
> > > am Fri, dem 14.03.2008, um 14:28:15 +0800 mailte Ow Mun Heng fol
This question, I think is directed at Devrim, but if anyone else can
answer it would be great as well.
I saw on the site that slony1 packages are available. However, I can't
find them in the yum archives. This is for CentOS 5.
Does anyone know?
muchas gracias.
--
Sent via pgsql-g
explain select * from d_trr where revision like '^B2.%.SX'
--where ast_revision = 'B2.M.SX'
Seq Scan on d_trr (cost=0.00..2268460.98 rows=1 width=16)
Filter: ((revision)::text ~~ '^B2.%.SX'::text)
show lc_collate;
en_US.UTF-8
Is this handled by tsearch2? Or do I need to do the locale
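Two separate things seem to be going on above: '^' is a regular-expression anchor, not LIKE syntax (in LIKE, '%' is the wildcard and '.' matches literally, so the intended pattern is presumably 'B2.%.SX'), and with lc_collate = en_US.UTF-8 a plain btree index can't be used for LIKE prefix searches anyway. A hedged sketch of the usual remedy (index name assumed):

```sql
-- A varchar_pattern_ops index supports LIKE 'prefix%' searches
-- even under a non-C locale
CREATE INDEX d_trr_revision_like_idx
    ON d_trr (revision varchar_pattern_ops);

EXPLAIN SELECT * FROM d_trr WHERE revision LIKE 'B2.%.SX';
```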
On Wed, 2008-06-25 at 14:58 +1000, Klint Gore wrote:
> Ow Mun Heng wrote:
> > explain select * from d_trr where revision like '^B2.%.SX'
> > --where ast_revision = 'B2.M.SX'
> >
> > Seq Scan on d_trr (cost=0.00..2268460.98 rows=1 width=16)
>
On Wed, 2008-06-25 at 17:00 +1000, Klint Gore wrote:
> Ow Mun Heng wrote:
> > On Wed, 2008-06-25 at 14:58 +1000, Klint Gore wrote:
> > > Ow Mun Heng wrote:
> > > > explain select * from d_trr where revision like '^B2.%.SX'
> > > > --where ast_rev
What can I do about this plan?
HashAggregate (cost=8035443.21..8035445.17 rows=157 width=24)
-> Nested Loop (cost=37680.95..7890528.72 rows=28982898 width=24)  <-- suspect
Join Filter: ((a.test_run_start_date_time >= date.start_time) AND
(a.test_run_start_date_time <= date.end_ti
On Wed, 2009-06-03 at 01:28 -0600, Scott Marlowe wrote:
> On Wed, Jun 3, 2009 at 12:32 AM, Ow Mun Heng wrote:
> > HashAggregate (cost=8035443.21..8035445.17 rows=157 width=24)
> > -> Nested Loop (cost=37680.95..7890528.72 rows=28982898 width=24) <<<<<
I think I'm doing this wrongly. Before I go out re-invent the wheel, I
thought I'll just check w/ the list. (I previously got the idea from IRC)
Table "Master"
--> Table "Child1"
--> Table "Child2"
--> Table "Child2"
Table "Update" --> Table to update come key items from source table.
The
-Original Message-
From: gsst...@gmail.com [mailto:gsst...@gmail.com] On Behalf Of Greg Stark
>> On Tue, Jul 21, 2009 at 11:25 PM, Ow Mun Heng wrote:
>> RETURN NULL;
> From the docs:
> "It can return NULL to skip the operation for the current row."
> --
> From: Sim Zacks [mailto:s...@compulab.co.il]
>-Original Message-
>From: gsst...@gmail.com [mailto:gsst...@gmail.com] On Behalf Of Greg Stark
>
>On Tue, Jul 21, 2009 at 11:25 PM, Ow Mun Heng wrote:
> RETURN NULL;
>
>Just make your trigger return N
While doing
# VACUUM VERBOSE ANALYZE d_trr_dfh;
INFO: vacuuming "xmms.d_trr_dfh"
ERROR: xlog flush request 21F/9F57DF88 is not satisfied --- flushed only to
21F/924CE76C
CONTEXT: writing block 2919652 of relation 17461/17462/17668
I see this in the logs
user= CONTEXT: writing block
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
"Ow Mun Heng" writes:
>> As background, this is a new box mirrored from a separate box via rsync.
>> I've basically copied/rsync the entire postgresql server and data files
>>over
>&g
I'm starting to use DBLink / DBI-Link and one of the "bad" things is that
the password is out in the clear.
What can I do to prevent it from being such? How do I protect it from
'innocent' users?
-Original Message-
From: Tommy Gildseth [mailto:tommy.gilds...@usit.uio.no]
Ow Mun Heng wrote:
>> I'm starting to use DBLink / DBI-Link and one of the "bad" things is that
>> the password is out in the clear.
>> What can I do to prevent it from bei
-Original Message-
From: Magnus Hagander [mailto:mag...@hagander.net]
On Wed, Aug 12, 2009 at 09:30, Ow Mun Heng wrote:
>>
>> From: Tommy Gildseth [mailto:tommy.gilds...@usit.uio.no]
>>
>> Ow Mun Heng wrote:
>>>> I'm starting to use DBLink
-Original Message-
>From: Magnus Hagander [mailto:mag...@hagander.net]
>No, we're talking about operating system user here, not postgres user.
>So the owner of the database object is irrelevant - only the user that
>the backend process is executing as.
Got it.. Thanks for the tip.
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Christophe Pettus
>On Aug 11, 2009, at 12:19 AM, Jan Verheyden wrote:
>> I was looking in what way it's possible to alert via mail when some
>> conditions are true in a d
Hi Guys,
Searching the net didn't give me many clues as to how to convert a decimal
number into binary.
Eg:
I have a datatype in the DB which needs to be converted.
DEC = 192
BINARY = 11000000
DEC = 197
BINARY = 11000101
Which I then need to break down into pairs to do calculations on
11 : 0
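Assuming the conversion is wanted inside the database, casting an integer to a fixed-width bit string does it, and substring() can then peel off the pairs:

```sql
SELECT 192::bit(8);                          -- 11000000
SELECT 197::bit(8);                          -- 11000101
SELECT substring(197::bit(8) FROM 1 FOR 2);  -- 11, the first pair
```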
I was playing around with dbi-link, hoping to get it connected to a teradata
database. However, before I dive into that, I figured that I might as well
try it out first on a PG Database (on another server)
So, I installed dbi-link fine.
I did a select on a 30GB table and it froze the Originating
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Ow Mun Heng wrote:
>> I was playing around with dbi-link, hoping to get it connected to a
>teradata
>> database. However, before I dive into that, I figured that I might as
>well
>> try it out fi
Hi All,
Does anyone here have a Teradata box? Are you able to connect to it from
within postgresql?
I would like to pull 1 or 2 tables from the box (sync) and was wondering if
there's any way to do that w/o using dbi-link.
I actually am trying dbi-link but it seems as though it doesn't support
DBD::Ter
I've got a largeish table which according to pg_size_pretty, has an on-disk
size of ~22GB Table size and 12GB index size, approx 55million rows.
When I truncate the table (I've moved the data somewhere else), I see that
I only gain back ~7GB of filesystem space.
What gives?
Is there any way in which a trigger can return the # of rows affected by the
insert / delete ?
Master
→ slave_1
→ slave_2
Trigger is against master which will, based on the conditions re-direct the
data into the relevant slave_X partitions.
I think this post basically is what I am seeing.
http:
Is there any way in which a trigger can return the # of rows affected by the
insert / delete ?
Master
--> slave_1
--> slave_2
Trigger is against master which will, based on the conditions re-direct the
data into the relevant slave_X partitions.
I think this post basically is what I am
-Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> Is there any way in which a trigger can return the # of rows affected by
> the
> insert / delete ?
>
> Master
> ---> slave_1
> ---> slave_2
>
> Trigger is against master which will, based on the cond
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> "htop" is really nice too. http://htop.sourceforge.net/
> (disclaimer - I did not write it)
I like atop better
Sorry guys, I know this is very off-track for this list, but google hasn't
been of much help. This is my raid array on which my PG data resides.
I have a 4 disk Raid10 array running on linux MD raid.
sda / sdb / sdc / sdd
One fine day, 2 of the drives just suddenly decide to die on me. (sda and
right sibling's left-link doesn't match: block 121425 links to 124561
instead of expected 121828 in index
Oct 20 22:21:29 hmweb5 postgres[8795]: [3-2] "d_trh_trr_water_eval_pkey"
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has comman
[resend w/ plain text only - Sorry]
right sibling's left-link doesn't match: block 121425 links to 124561
instead of expected 121828 in index
Oct 20 22:21:29 hmweb5 postgres[8795]: [3-2] "d_trh_trr_water_eval_pkey"
WARNING: terminating connection because of crash of another server process
DETA
-Original Message-
From: Greg Smith [mailto:gsm...@gregsmith.com]
On Wed, 21 Oct 2009, Scott Marlowe wrote:
>> Actually, later models of linux have a direct RAID-10 level built in.
>> I haven't used it. Not sure how it would look in /proc/mdstat either.
>I think I actively block memor
Are there any quick hacks to do this quickly? There are around 20-30
million rows of data.
I want to change a column type from varchar(4) to varchar(5), or should I
just use text instead?
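For background: on the 8.x releases, an ALTER ... TYPE from varchar(4) to varchar(5) rewrites the entire table, which is what makes it painful at 20-30 million rows, whereas text carries no length check at all. A sketch of the straightforward route (table and column names are placeholders):

```sql
-- Either widen the limit (rewrites the table on 8.x)...
ALTER TABLE big_tbl ALTER COLUMN code TYPE varchar(5);

-- ...or drop the limit for good:
ALTER TABLE big_tbl ALTER COLUMN code TYPE text;
```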
I'm going to alter a bunch of table columns' data types, and I'm being
forced to drop a view which depends on the column.
eg: ALTER TABLE xs.d_trh ALTER m_dcm TYPE character varying;
ERROR: cannot alter type of a column used by a view or rule
DETAIL: rule _RETURN on view v_hpp depends on colu
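That error is by design: a view stores a parsed copy of its query, pinned to the column's type. The usual workaround is to capture the view definition (pg_get_viewdef can print it), then drop, alter, and recreate inside one transaction; the view body below is a placeholder:

```sql
BEGIN;
DROP VIEW v_hpp;
ALTER TABLE xs.d_trh ALTER m_dcm TYPE character varying;
CREATE VIEW v_hpp AS
    SELECT ... ;   -- paste the saved definition back here
COMMIT;
```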
On Mon, 2008-07-07 at 02:10 -0400, Lew wrote:
> Ow Mun Heng wrote:
> >> I want to change a column type from varchar(4) to varchar(5) or should I
> >> just use text instead.
> The choice of TEXT for the column would seem to be supported in the PG
> manual,
> which