On 07/06/07, Jon Sime <[EMAIL PROTECTED]> wrote:
Jonathan Vanasco wrote:
>
> Does anyone have a trick to list all columns in a db ?
No trickery, just exploit the availability of the SQL standard
information_schema views:
select table_schema, table_name, column_name
from information_s
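For reference, a complete query along those lines (the filter on system schemas is an illustrative addition, not necessarily what the original went on to say) might be:

select table_schema, table_name, column_name
from information_schema.columns
where table_schema not in ('pg_catalog', 'information_schema')
order by table_schema, table_name, ordinal_position;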
OK, I am not yet awake :-) Of course, the connection string has the database
name, but something is not working on the OGSA-DAI side. It is giving an
authorisation failure error!
Samatha
Samatha Kottha wrote:
> Hi Michael,
>
> Thank you for the tip. The integration tool is OGSA-DAI. Of course, we
> have to
Hi Michael,
Thank you for the tip. The integration tool is OGSA-DAI. Of course, we
have to specify the database, but the connection string that it uses does
not contain it.
Cheers,
Samatha
Michael Fuhr wrote:
> On Thu, Jun 07, 2007 at 03:38:15PM +0200, Samatha Kottha wrote:
>
>> We are trying
In this query:
select n.nspname as table_schema, c.relname as table_name,
a.attname as column_name
from pg_catalog.pg_attribute a
join pg_catalog.pg_class c on ( a.attrelid = c.oid)
join pg_catalog.pg_namespace n on (c.relnamespace = n.oid)
where c.relkind in ('r',
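A complete version of that catalog query (the extra conditions below are common additions offered as an illustration, not necessarily how the original continued) might be:

select n.nspname as table_schema, c.relname as table_name,
       a.attname as column_name
from pg_catalog.pg_attribute a
join pg_catalog.pg_class c on (a.attrelid = c.oid)
join pg_catalog.pg_namespace n on (c.relnamespace = n.oid)
where c.relkind in ('r', 'v')            -- ordinary tables and views
  and a.attnum > 0                       -- skip system columns
  and not a.attisdropped                 -- skip dropped columns
  and n.nspname not in ('pg_catalog', 'information_schema')
order by 1, 2, a.attnum;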
John D. Burger wrote:
> In any event, you say you need to know when a row is less than 24 hours
> old - that is presumably not an issue for these old rows. I would add
> the column as suggested, but set it to some time in the past for the
> existing rows. Or, you can set it to NULL, appropriately
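A minimal sketch of that suggestion (the table and column names here are invented for illustration):

-- add the timestamp column; existing rows get NULL
alter table orders add column created_at timestamptz;
-- optionally backfill old rows with a time safely in the past
update orders set created_at = '2000-01-01' where created_at is null;
-- new rows get the insertion time automatically
alter table orders alter column created_at set default now();
-- "less than 24 hours old" then becomes:
select * from orders where created_at > now() - interval '24 hours';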
On 6/8/07, ABHANG RANE <[EMAIL PROTECTED]> wrote:
Hi,
I have a table with one column of type real[]. Now if I want to make cubes
out of each of
these arrays, is there a way in Postgres I can do it? I guess the cube
operator is not defined
for real[], but can I create cubes if the column was integer[]? If ye
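Assuming the contrib cube module is installed, something like the following should work; the table and column names are hypothetical, and a real[] column would need a cast to float8[] first:

create table points (id serial, vals float8[]);
insert into points (vals) values ('{1.5, 2.5, 3.5}');

-- cube(float8[]) builds a zero-volume cube (a point) from the array
select cube(vals) from points;
-- for a real[] column, cast first:
-- select cube(vals::float8[]) from points;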
Hello All,
I think I am a little bit lost in the configuration of the SHMMAX parameter.
As per the documentation at:
http://www.redhat.com/docs/manuals/database/RHDB-7.1.3-Manual/admin_user/kernel-resources.html
Name: SHMMAX
Description: Maximum size of shared memory segment (by
Hi there,
I just found the following message in my logs:
Jun 8 10:38:38 caligula postgres[56868]: [1-1] : LOG: could not
truncate directory "pg_subtrans": apparent wraparound
Should I be worried or can I just ignore this one? My database is still
small (a bzipped pg_dumpall is still aroun
Hi there.
Is there any way of determining the actual structure of a record variable?
E.g. I've written a small script to do some calculations over some fields
with a dynamically generated query. It looks like this:
create function foo(text) returns void as
$$
declare
a_record record;
my_query
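For context, a self-contained guess at what such a function might look like (this is a sketch, not the poster's actual code):

create function foo(text) returns void as $$
declare
    a_record record;
    my_query text := $1;
begin
    for a_record in execute my_query loop
        -- the structure of a_record depends entirely on the query text,
        -- which is why it cannot easily be introspected from plpgsql
        raise notice 'processed one row';
    end loop;
end;
$$ language plpgsql;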
Gunther Mayer wrote:
> Hi there,
>
> I just found the following message in my logs:
>
> Jun 8 10:38:38 caligula postgres[56868]: [1-1] : LOG: could not
> truncate directory "pg_subtrans": apparent wraparound
>
> Should I be worried or can I just ignore this one? My database is still
> small
Impossible in plpgsql. Use plperl instead.
>>> "Diego Sanchez R." <[EMAIL PROTECTED]> 2007-06-08 14:14 >>>
Hi there.
Is there any way of determining the actual structure of a record
variable? E.g. I've written a small script to do some calculations
over some fields with a dynamically generated
Hi all.
On most modern CPUs there are numeric representations wider than 8 bytes
(aka float8 in PGSQL).
For example, Intel/AMD CPUs have native 12-byte floating point numbers (aka
long double in C/C++).
I understand that it would be non-standard from a clean SQL point of view.
Nonetheless, i
In response to Andrew Edson <[EMAIL PROTECTED]>:
> The company I work for provides services for various offices around the
> country. In order to help keep our database straight, and allow several of
> our client-side programs to verify their location, we include a table called
> 'region' in o
On Fri, Jun 08, 2007 at 09:50:10AM +0200, Samatha Kottha wrote:
> OK, I am not yet awake :-) Of course, the connection string has the database
> name, but something is not working on the OGSA-DAI side. It is giving an
> authorisation failure error!
What's the exact error message? Is the authorization failur
"Ashish Karalkar" <[EMAIL PROTECTED]> writes:
> As per the documentation at:
> http://www.redhat.com/docs/manuals/database/RHDB-7.1.3-Manual/admin_user/kernel-resources.html
Why in the world are you consulting PG 7.1 documentation for help in
managing an 8.2 installation? That manual is so obso
"Diego Sanchez R." <[EMAIL PROTECTED]> writes:
> Is there any way of determining the actual structure of a record variable?
Not in plpgsql; even if the info were exposed, you couldn't do anything
very useful because that language is strongly typed.
In some of the other PLs you could do it --- eg,
On Fri, Jun 08, 2007 at 07:39:26AM -0700, Andrew Edson wrote:
> Recently, there have been incidents across a few of the offices
> where the region table in their local copy of the database has
> mysteriously lost all of its data.
Well, a couple possibilities:
1. Are you vacuuming cor
Andrew Edson <[EMAIL PROTECTED]> writes:
> Does anyone have any suggestions as to what could be causing this
> single table to lose its data?
I wonder if Slony is doing it to you --- somehow deciding it needs to
"replicate" that table from somewhere else. Might be worth inquiring
on the slony
The company I work for provides services for various offices around the
country. In order to help keep our database straight, and allow several of our
client-side programs to verify their location, we include a table called
'region' in our database. Although the database is replicated by Slony
On Fri, Jun 08, 2007 at 11:11:05AM -0400, Tom Lane wrote:
>
> I wonder if Slony is doing it to you --- somehow deciding it needs to
> "replicate" that table from somewhere else. Might be worth inquiring
> on the slony lists if anyone's seen such a thing.
The only way Slony would do that is in th
Version is 8.1
The query I originally ran returned ~4-5 rows and had a lot of other
joins and filtering conditions prior to the join with the big table.
Is there any way to instruct postgres to do joins in a specific
order or something?
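One common trick, sketched here with placeholder table names (an assumption about the schema, not the poster's actual query): lower join_collapse_limit so the planner keeps the join order as written, and list the cheap, selective joins first.

set join_collapse_limit = 1;
select s.id, b.payload
from small_table s
join other_small_table o on o.id = s.other_id
join big_table b on b.id = s.big_id;   -- joined last, after the cheap joins
reset join_collapse_limit;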
I have done so, but I am not seeing the location of the log files. Another
question: is it possible to do a fresh installation of the postgresql server and
then replace the data folder with the one from my old
installation and get my old database back?
Here is a screen shot when I ran the command:
[EMAIL PROTECTED
In article <[EMAIL PROTECTED]>, Tom Lane <[EMAIL PROTECTED]> wrote:
% "George Pavlov" <[EMAIL PROTECTED]> writes:
% >> From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
% >> In those rare cases wouldn't it make more sense to just set
% >> enable_seqscan to off; run query; set enable_seqscan to on;
%
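The pattern under discussion, spelled out as a sketch (the query is a placeholder; SET LOCAL is a variant that reverts automatically at transaction end):

begin;
set local enable_seqscan = off;
select count(*) from some_table where some_indexed_col = 42;
commit;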
I've looked at the pg_index table and we are currently at 15 million
entries, which should be OK. After 2-3 days runtime I just get a
disconnect error from backend while doing an insert. After I restore the
DB and insert the same entries it runs fine. Following is the error I
get:
"Query pgsql8.1: PGRE
How can I speed up the query
delete from firma1.rid where dokumnr not in (select dokumnr from firma1.dok)
which runs for approx. 30 minutes?
I have dokumnr indexes on both tables, and both tables are analyzed.
CREATE TABLE firma1.dok
(
doktyyp character(1) NOT NULL,
dokumnr integer NOT NULL DEFAULT next
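One rewrite worth trying; it is equivalent only if rid.dokumnr is never NULL (dok.dokumnr is already declared NOT NULL), since NOT IN and NOT EXISTS treat NULLs differently:

delete from firma1.rid
where not exists (select 1 from firma1.dok where dok.dokumnr = rid.dokumnr);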
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> What are the total lengths of the log entries in which you see the
> failure? (The "length" here includes all the lines belonging to a
> single logical entry, eg, ERROR, DETAIL, HINT.)
It is very hard to tease these apart because now that I look at it
On Thu, Jun 07, 2007 at 02:32:09PM -0500, ARTEAGA Jose wrote:
> I've looked at the pg_index table and we are currently at 15Mill
> entries, which should be OK. After 2-3 days runtime I just get a
> disconnect error from backend while doing an insert. After I restore the
> DB and insert the same ent
Martijn van Oosterhout wrote:
On Thu, Jun 07, 2007 at 02:32:09PM -0500, ARTEAGA Jose wrote:
I've looked at the pg_index table and we are currently at 15Mill
entries, which should be OK. After 2-3 days runtime I just get a
disconnect error from backend while doing an insert. After I restore the
D
On Friday 08 June 2007 10:30 am, George Pavlov wrote:
>
> It is very hard to tease these apart because now that I look
> at it closely it is a total mess; there are multiple
> interruptions and interruptions inside of interruptions...
> The interruption can happen anywhere, including the leading
>
On Fri, Jun 08, 2007 at 09:30:21AM -0700, George Pavlov wrote:
> As to the full length of the entries that get
> interrupted they do seem to be all on the long side--I can't say with
> total certainty, but the couple of dozen that I looked at were all >
> 4096 when all the interruptions are taken o
Does anyone think that PostgreSQL could benefit from using the video
card as a parallel computing device? I'm working on a project using
Nvidia's CUDA with an 8800 series video card to handle non-graphical
algorithms. I'm curious if anyone thinks that this technology could be
used to speed up a d
On Thu, Jun 07, 2007 at 11:20:20AM -0700, Sergei Shelukhin wrote:
> Version is 8.1
> The query I originally ran returned ~4-5 rows and had a lot of other
> joins and filtering conditions prior to the join with the big table.
> Is there any way to instruct postgres to do joins in the specific
> orde
Hi, I have this server that I use as a db server. It's a decent box: Ubuntu, 2GB,
AMD Barton 2.8 GHz, 2 MB L2. DB version is 7.4.7 - that version was the only one
available at that time. I have had it for about 2 years in a similar configuration.
Lately I've noticed that a pack of postmaster (4-22) process
On Fri, Jun 08, 2007 at 11:29:12AM +0300, Andrus wrote:
> How to speed up the query
We don't know. You don't tell us what version you're running, show
us any EXPLAIN ANALYSE output, tell us about the data. . .
A
--
Andrew Sullivan | [EMAIL PROTECTED]
Unfortunately reformatting the Internet i
Vincenzo Romano wrote:
> Hi all.
> On most modern CPUs there are numeric representations wider than 8-bytes
> (aka float8 in PGSQL).
>
> For example, Intel/AMD CPUs have native 12-bytes floating point numbers (aka
> long double in C/C++).
>
> I understand that it could not be non-standard from a
ARTEAGA Jose wrote:
Also worth mentioning is that I just this week found out about a very,
very important parameter, "shared_buffers". Ever since the original
person set up our PG (an individual no longer with us), this DB had been
running without any major glitches, albeit slowly. All this time the
shar
Billings, John wrote:
Does anyone think that PostgreSQL could benefit from using the video
card as a parallel computing device?
Well, I'm not one of the developers, and one of them may have this
particular scratch, but in my opinion just about any available fish has
to be bigger than this one
On Jun 8, 2007, at 3:33 PM, Guy Rouillier wrote:
Well, I'm not one of the developers, and one of them may have this
particular scratch, but in my opinion just about any available fish
has to be bigger than this one. Until someone comes out with a
standardized approach for utilizing whatev
If you're absolutely, positively dying for some excuse to do this (i.e.
I don't currently have the budget to pay you anything to do it), I
work in a manufacturing environment where we are using a postgresql
database to store bills of materials for parts. One of the things we
also have to do i
Anyone?

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [GENERAL] Postmaster processes taking all the CPU
Date: Fri, 8 Jun 2007 13:23:00 -0500

Hi, I have this server that I use as a db server. It's a decent box: Ubuntu, 2GB, AMD
Barton 2.8 GHz, 2 MB L2. DB version is 7.4.7 - that version was the only o
On Fri, Jun 08, 2007 at 03:20:28PM -0500, MC Moisei wrote:
>
> pack of postmaster(4-22) processes ran by postgres user are taking
> over almost all the CPU.
What else is the box doing? If it doesn't have any other work to do,
why shouldn't postgres use the CPU time? (This is a way of saying,
"
Have you done a full vacuum and not just a regular vacuum?
- Ericson Smith
Developer
http://www.funadvice.com
On 6/8/07, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
On Fri, Jun 08, 2007 at 03:20:28PM -0500, MC Moisei wrote:
>
> pack of postmaster(4-22) processes ran by postgres user are taking
>
I'm not sure I understand the question. What else runs on it? I have an Apache
that fronts a Tomcat (Java Enterprise App Server). In Tomcat I only run this
application, which has a connection pool of 30 connections (if I remember
correctly). Once the application starts to open connections it l
I did that remotely, through psqladmin. How do I do it from that box?

> Date: Fri, 8 Jun 2007 16:41:57 -0400
> From: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Postmaster processes taking all the CPU
> CC: pgsql-general@postgresql.org
>
> Have you done a full vacuum and not just a regular vacuum?
I mean, have you run a VACUUM FULL VERBOSE ANALYZE; lately?
Since you're constantly inserting stuff (are you updating too?), if
you have not analyzed recently, then the planner will give you
crappy query plans.
Also, if you're updating that table frequently, lots of dead tuples
will remain in t
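For reference, the kind of routine (non-FULL) maintenance being discussed, with a placeholder table name:

vacuum analyze mytable;     -- reclaims dead tuples for reuse and refreshes planner stats
vacuum verbose mytable;     -- verbose output shows how many dead tuples were found
-- vacuum full mytable;     -- compacts the table on disk, but takes an exclusive lock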
On 6/8/07, Billings, John <[EMAIL PROTECTED]> wrote:
Does anyone think that PostgreSQL could benefit from using the video card
as a parallel computing device? I'm working on a project using Nvidia's
CUDA with an 8800 series video card to handle non-graphical algorithms.
I'm curious if anyone
On 6/8/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
On Thu, Jun 07, 2007 at 11:20:20AM -0700, Sergei Shelukhin wrote:
> Version is 8.1
> The query I originally ran returned ~4-5 rows and had a lot of other
> joins and filtering conditions prior to the join with the big table.
> Is there
First, your mail is coming through really garbled. Maybe you need to
add some linebreaks or something? Anyway
On Fri, Jun 08, 2007 at 03:58:40PM -0500, MC Moisei wrote:
>
> I'm not sure I understand the question. What else runs on it ?I
> have an Apache that fronts a Tomcat (Java Enterprise App
Hello,
if vacuuming does not help, you may also want to log all queries running
longer than x milliseconds to help localize the problem.
See log_min_duration_statement = <milliseconds> in postgresql.conf
(I didn't check its availability in older versions).
HTH,
Marc
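A quick sketch of what that looks like (500 ms is just an example threshold):

-- in postgresql.conf (then reload): log_min_duration_statement = 500
-- or, as a superuser, for the current session only:
set log_min_duration_statement = 500;   -- log statements taking longer than 500 ms
show log_min_duration_statement;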
On Fri, Jun 08, 2007 at 05:11:44PM -0400, Ericson Smith wrote:
>
> Also, if you're updating that table frequently, lots of dead tuples
> will remain in there if you don't do a VACUUM FULL regularly.
No, they won't. No well-tuned postgres installation has needed
VACUUM FULL in a long time. VACUU
Yes, all the connections are coming from within the box, so no network
latency. Well, couldn't the swapping be because too many postmaster processes
are requiring more memory? I will reproduce it and try to post a memory and
process footprint. The reason I said I feel like I'm spinning around the tail is
On Fri, Jun 08, 2007 at 05:08:26PM -0500, MC Moisei wrote:
> Yes all the connection are coming from within the box so no network
> latency.Well, isn't the swap can be because too many process
> postmaster are requiring more memory.
But why are they requiring more memory? Do you maybe have (e.g.)
Hi Andrus!
On Jun 8, 10:29 am, "Andrus" <[EMAIL PROTECTED]> wrote:
> How to speed up the query
>
> delete from firma1.rid where dokumnr not in (select dokumnr from firma1.dok)
> CREATE TABLE firma1.dok
> (
> doktyyp character(1) NOT NULL,
> dokumnr integer NOT NULL DEFAULT nextval('dok_dokumn
On 6/8/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Diego Sanchez R." <[EMAIL PROTECTED]> writes:
> Is there any way of determining the actual structure of a record variable?
Not in plpgsql; even if the info were exposed, you couldn't do anything
very useful because that language is strongly typed.
On 6/8/07, Billings, John <[EMAIL PROTECTED]> wrote:
Does anyone think that PostgreSQL could benefit from using the video card as a
parallel computing device? I'm working on a project using Nvidia's CUDA with
an 8800 series video card to handle non-graphical algorithms. I'm curious if
[EMAIL PROTECTED] wrote:
> Hello,
>
> my problem is: depending on the value of a field in table A, I
> want to select other fields coming from either table B or table C.
>
> I want to know if it's possible to create a parameterized view in
> postgresql to solve this problem
>
>
> Thx,
> Lhaj
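Not a parameterized view as such, but one common way to get that effect (the table and column names below are invented for illustration): join A to both B and C and pick the value with CASE.

create table a (id integer primary key, which char(1));
create table b (a_id integer, val text);
create table c (a_id integer, val text);

create view a_resolved as
select a.id,
       case when a.which = 'b' then b.val else c.val end as val
from a
left join b on b.a_id = a.id
left join c on c.a_id = a.id;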
Alexander Staubo wrote:
> On 6/8/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> >On Thu, Jun 07, 2007 at 11:20:20AM -0700, Sergei Shelukhin wrote:
> >> Version is 8.1
> >> The query I originally ran returned ~4-5 rows and had a lot of other
> >> joins and filtering conditions prior to t
On Fri, Jun 08, 2007 at 03:00:35PM -0400, Bruce Momjian wrote:
> No. Frankly I didn't know 12-byte floats were supported in CPUs until
> you posted this. You could write your own data type to use it, of
> course.
I didn't either, and have no use for them, but curiosity compels me
to wonder how
[EMAIL PROTECTED] wrote:
> On Fri, Jun 08, 2007 at 03:00:35PM -0400, Bruce Momjian wrote:
> > No. Frankly I didn't know 12-byte floats were supported in CPUs until
> > you posted this. You could write your own data type to use it, of
> > course.
>
> I didn't either, and have no use for them, but