On Friday 30 September 2005 15:23, Peter Eisentraut wrote:
> On Friday, 30 September 2005 at 09:03, Oliver Siegmar wrote:
> > - Would it be possible to add a feature to PostgreSQL to allow grouping
> > by unique identifiers only?
> > - Is this requirement (all fields in GROUP BY) based on the SQL s
I have a couple of very large tables that I am querying, and the query gives the
following EXPLAIN ANALYZE output. If someone can help out with my mess,
that would be great.
Thanks,
Sean
explain analyze
select e.*,c.*
from u_all_est_mrna c
join g_rna_acc d on c.accession=d.accession,
On 9/30/05, Sergei Dubov <[EMAIL PROTECTED]> wrote:
Hi guys, I'd really appreciate it if you could clarify this for me. Let's say I have a table named TABLE_A that is a bridge for a many-to-many relationship between TABLE_B and TABLE_C; here is my declaration:
CREATE TABLE table_a ( table_b_id
Thank you Mike for your answer. That is what I had in mind.
You guys have helped me a lot. Thank you so much for your cooperation.
Fernando
On Fri, 30 Sep 2005, Joe Maldonado wrote:
> Hello all,
>
> I apologize for the wide distribution but we recently ran into an
> interesting behaviour using PostgreSQL 8.0.3 and did not know whether this
> was a bug or intended behaviour.
>
> When an IN clause contains a NULL value, the entire IN cla
Looks like it's definitely an issue with index bloat. Note that it's
normal to have some amount of empty space depending on vacuum and update
frequency, so 15G -> 20G isn't terribly surprising. I would suggest
using pg_autovacuum instead of the continuous vacuum; it's very possible
that some of you
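For an index that has already bloated, a rebuild reclaims the space; a minimal sketch, with a hypothetical table name (note that REINDEX locks the table while it runs):
reindex table my_big_table;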
Hello all,
I apologize for the wide distribution but we recently ran into an
interesting behaviour using PostgreSQL 8.0.3 and did not know whether
this was a bug or intended behaviour.
When an IN clause contains a NULL value, the entire IN clause is treated as false, thus no records are r
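A small illustration of the three-valued logic involved (not from the report itself):
select 1 where 2 in (2, null);      -- one row: a match is found despite the NULL
select 1 where 2 in (1, null);      -- no rows: the result is NULL (unknown), and WHERE filters it out
select 1 where 2 not in (1, null);  -- no rows: NOT IN over a list containing NULL can never be true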
> User1 starts an order and takes the last two units. User2 starts an order 1
> minute after and checks inventory. He sees 2 units left and adds them to
> his order. User1 commits his order. Now User2 cannot finish his order because
> the products are not available anymore.
>
> This is the probl
Thank you for your input, Doug, but that did not work. User2 still saw the quantity unchanged.
Any more ideas?
I was thinking of creating a table that will hold the product code and
the qty taken by an order, check this table every minute, and if a
product has been there for more than 10 minutes
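A minimal sketch of the expiry step of that idea, run from a cron job or similar; the table and column names here are hypothetical, not from the thread:
delete from order_reservations
where reserved_at < now() - interval '10 minutes';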
Hi,
On Thu, 22 Sep 2005, [EMAIL PROTECTED] wrote:
If I run 'rpm -qp --requires postgresql-libs-8.0.3-1PGDG.i686.rpm' on a
freshly downloaded RH9 rpm, I get:
/sbin/ldconfig
/sbin/ldconfig
libc.so.6
libc.so.6(GLIBC_2.0)
libc.so.6(GLIBC_2.1)
libc.so.6(GLIBC_2.1.3)
libc.so.6(GLIBC_2.2)
libc.so.6
Fernando Grijalba <[EMAIL PROTECTED]> writes:
> Thank you for your response.
>
> I want to avoid the following situation.
>
> User1 starts an order and takes the last two units. User2 starts an order 1
> minute after and checks inventory. He sees 2 units left and adds them to
> his order. User1 co
Thank you for your response.
I want to avoid the following situation.
User1 starts an order and takes the last two units. User2 starts
an order 1 minute after and checks inventory. He sees 2 units left and
adds them to his order. User1 commits his order. Now
User2 cannot finish his order because
Hi guys,
I'd really appreciate it if you could clarify this for me. Let's say I have
a table named TABLE_A that is a bridge for a many-to-many relationship
between TABLE_B and TABLE_C; here is my declaration:
CREATE TABLE table_a (
table_b_id INT4 NOT NULL
REFERENCES table
Gandalf Me wrote:
I was just wondering if we can export just the schema/metadata in
Postgres (i.e. just the table and column information, without the data).
Is there an easy way to do this in Postgres?
pg_dump --help is your friend.
You are looking for the -s flag.
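For example, a schema-only dump looks like this (the database and output file names are hypothetical):
pg_dump -s mydb > mydb_schema.sql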
Sincerely,
Joshua D. Drak
On Fri, 2005-09-30 at 09:14, jeff sacksteder wrote:
> Are there any data access issues (as opposed to data visibility
> issues)
> you are having?
>
>
> No, it's just that in a hosting situation where each customer has a
> database of their own, they need to be boxed in somehow. In
I was just wondering if we can export just the schema/metadata in Postgres (i.e. just the table and column information, without the data). Is there an easy way to do this in Postgres?
Thanks in advance.
On 9/30/05, Emi Lu <[EMAIL PROTECTED]> wrote:
> Greetings,
>
> I am not very familiar with the system views/tables in PostgreSQL. I'd
> like to get all table names that have a column named, let's say, "col1".
I'd use:
select table_name
from information_schema.columns
where table_schema='my_schem
PostgreSQL stores table names in pg_class.relname and column names in
pg_attribute.attname.
Read the documentation and you can work out the solution.
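For reference, a catalog query along those lines might look like the following sketch; it restricts the result to ordinary tables and skips system and dropped columns:
select c.relname as table_name
from pg_class c
join pg_attribute a on a.attrelid = c.oid
where a.attname = 'col1'
  and c.relkind = 'r'
  and a.attnum > 0
  and not a.attisdropped;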
"Emi Lu" <[EMAIL PROTECTED]> wrote
> Greetings,
>
> I am not very familiar with the system views/tables in PostgreSQL. I'd
> like to get all table names that have a
On Fri, Sep 30, 2005 at 10:21:14AM -0400, Fernando Grijalba wrote:
> I need the following to work with PostgreSQL. I was thinking of reading
> uncommitted transactions, but just found out they do not work on PG. I need
> to have user 1 take an item from inventory for an order and before user 1
> fi
Emi Lu wrote:
> Greetings,
>
> I am not very familiar with the system views/tables in PostgreSQL. I'd
> like to get all table names that have a column named, let's say, "col1".
>
> For example,
> t1 (... col1 varchar(3) ... )
> t2 (... col1 varchar(3) ... )
> t3 (... ...)
>
>
> After querying the
On 30.09.2005 at 10:55:44 -0400, Emi Lu wrote the following:
> Greetings,
>
> I am not very familiar with the system views/tables in PostgreSQL. I'd like
> to get all table names that have a column named, let's say, "col1".
select table_name from information_schema.columns where column_name = 'col1'
Greetings,
I am not very familiar with the system views/tables in PostgreSQL. I'd
like to get all table names that have a column named, let's say, "col1".
For example,
t1 (... col1 varchar(3) ... )
t2 (... col1 varchar(3) ... )
t3 (... ...)
After querying the system tables/views, I can get the
I need the following to work with PostgreSQL. I was thinking of
reading uncommitted transactions, but just found out they do not work
on PG. I need to have user 1 take an item from inventory for an
order and before user 1 finishes the transaction user 2 will query the
inventory. I need user 2 to
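One common approach, not necessarily what this thread settled on, is to lock the inventory row for the duration of the ordering transaction; the table and column names below are made up:
begin;
-- A concurrent transaction running the same SELECT ... FOR UPDATE on this
-- product blocks here until the first one commits or rolls back.
select qty_on_hand from inventory where product_id = 42 for update;
-- The application verifies qty_on_hand covers the requested amount,
-- then records the sale:
update inventory set qty_on_hand = qty_on_hand - 2 where product_id = 42;
commit;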
> > The way to do it is to deinstall the old version. What's
> the problem
> > with that? (it won't remove your data!)
>
> Well, maybe you're right. But why do those guys call it an
> upgrade if it doesn't upgrade!? I thought the purpose of an
> update mechanism was ease of upgrades.
It usuall
Are there any data access issues (as opposed to data visibility issues)
you are having?
No, it's just that in a hosting situation where each customer has a
database of their own, they need to be boxed in somehow. In the event
of an application bug allowing raw SQL to be executed, it's not
appropri
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> On Friday, 30 September 2005 at 09:03, Oliver Siegmar wrote:
>> - Is this requirement (all fields in GROUP BY) based on the SQL standard?
> What you describe is called "functional dependencies" in the SQL standard.
> (The query processor would recog
--- Oliver Siegmar <[EMAIL PROTECTED]> wrote:
> Hello,
>
> as you all may know, the GROUP BY clause in
> PostgreSQL has to be parameterized
> with all fields that appear in the SELECT clause
> (except aggregates).
>
> So
>
> SELECT id, foo, sum(bar)
> FROM table1
> GROUP BY id, foo;
>
>
On Friday, 30 September 2005 at 09:03, Oliver Siegmar wrote:
> - Would it be possible to add a feature to PostgreSQL to allow grouping by
> unique identifiers only?
> - Is this requirement (all fields in GROUP BY) based on the SQL standard?
What you describe is called "functional dependencies" in
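The usual workaround in releases that do not infer such dependencies is to aggregate over the key alone and then join the other columns back in; a sketch, assuming id is table1's primary key:
select t.id, t.foo, s.bar_sum
from table1 t
join (select id, sum(bar) as bar_sum
      from table1
      group by id) s on s.id = t.id;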
Hello everybody, I really need to know the hardware requirements for installing
PostgreSQL 8.0.3.
I'm in a database migration project and it is important to work with the
appropriate hardware.
The DB must run on Windows. There are approximately 50 tables and the data size
is near 6 GB. Thanks for your answers.
Rafa
Hi Magnus,
thanks for your answer.
> The way to do it is to deinstall the old version. What's the problem
> with that? (it won't remove your data!)
Well, maybe you're right. But why do those guys call it an upgrade if it
doesn't upgrade!? I thought the purpose of an update mechanism was
ease of u
Hello,
as you all may know, the GROUP BY clause in PostgreSQL has to be parameterized
with all fields that appear in the SELECT clause (except aggregates).
So
SELECT id, foo, sum(bar)
FROM table1
GROUP BY id, foo;
is perfectly valid, where
SELECT id, foo, sum(bar)
FROM table1
GROU
On Sep 30, 2005, at 8:21 AM, Douglas McNaught wrote:
so, what if simply dropping the table does not recover the disk-space?
It will. Each table is stored in its own set of disk files, and when
the table is dropped those files are simply deleted.
This is normally true. Vacuuming the databas
Gábor Farkas <[EMAIL PROTECTED]> writes:
> so, what if simply dropping the table does not recover the disk-space?
It will. Each table is stored in its own set of disk files, and when
the table is dropped those files are simply deleted.
-Doug
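For reference, the table-to-file mapping can be inspected in pg_class (a sketch; the table name is hypothetical):
select relfilenode, relpages
from pg_class
where relname = 'sessions';
-- relfilenode is the base file name under the database's directory;
-- relpages is the table's size in 8 kB pages as of the last VACUUM or ANALYZE.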
> Hello.
>
> Is it possible to install several completely independent
> instances of PostgreSQL on the same Windows machine?
> I'm asking this mostly because I want to have complete
> control over my database, which is not possible if there is
> some other superuser. Therefore I need dedicated
Alban Hertroys wrote:
Gábor Farkas wrote:
I'm only afraid that maybe if we issue the drop-db command, it will
take, for example, 30 minutes...
Wouldn't it be more effective to create a new table by selecting your
session table and switch their names? You can drop the troublesome table
afterwa
Hello.
Is it possible to install several completely
independent instances of PostgreSQL on the same Windows machine?
I'm asking this mostly because I want to have
complete control over my database, which is not possible if there is some other
superuser. Therefore I need dedicated Postgres s
Gábor Farkas wrote:
I'm only afraid that maybe if we issue the drop-db command, it will take,
for example, 30 minutes...
Wouldn't it be more effective to create a new table by selecting your
session table and switch their names? You can drop the troublesome table
afterwards, without influencing
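A sketch of that rename approach (the table names are hypothetical; note that CREATE TABLE AS does not copy indexes, constraints, or defaults):
begin;
create table sessions_new as select * from sessions;
alter table sessions rename to sessions_old;
alter table sessions_new rename to sessions;
commit;
-- Dropping the old table afterwards deletes its files and frees the disk space:
drop table sessions_old;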
Hi,
we have a database which was not vacuumed for a long time.
Right now its size is 30GB. It only contains a simple table with 90 rows.
It seems that it's so big because it was not vacuumed for a long time.
Is this a reasonable assumption?
Now we'd like to somehow 'compact' it.
It see
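Two options that are commonly suggested in this situation (a sketch, not necessarily what the thread settled on; the table name is hypothetical):
-- VACUUM FULL moves the live rows into as few pages as possible and gives
-- the freed space back to the operating system, but it takes an exclusive
-- lock on the table while it runs:
vacuum full verbose sessions;
-- The indexes may still be bloated afterwards; REINDEX rebuilds them:
reindex table sessions;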
Sergei Dubov wrote:
This will automatically create a unique index on the primary key.
Well, if I do a query later like:
select * from table_a where table_b_id=1,
will the query use an index? Will this be true when I do join on this
table?
Yes to both (if the query plan thinks that is the m
jeff sacksteder wrote:
The security material available in the official documentation is rather
sparse. Is there a more detailed document I should be aware of?
No.
As an example, I'd like to prevent a user from being able to get a list of
existing databases. Presumably, I can limit acce
On Friday, 30 September 2005 at 07:07, surabhi.ahuja wrote:
> /usr/bin/pg_ctl -D /export/home1/sdc_image_pool/dbx/ stop
> the following is displayed:
> waiting for postmaster to shut down... failed
> pg_ctl: postmaster does not shut down
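The default "smart" shutdown waits for every client to disconnect, so it appears to hang while sessions are still open. A fast shutdown rolls back open transactions and stops the server; a sketch using the data directory from the post above (this suggestion is not visible in the truncated reply):
/usr/bin/pg_ctl -D /export/home1/sdc_image_pool/dbx/ stop -m fast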
On Thursday, 29 September 2005 at 21:44, Sergei Dubov wrote:
> select * from table_a where table_b_id=1,
>
> will the query use an index? Will this be true when I do join on this
> table?
It could use the index, but whether it will depends on the particular data
distribution. The EXPLAIN comman
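For example, with the query from the question:
explain select * from table_a where table_b_id = 1;
-- An "Index Scan using ..." node in the plan means the index is used;
-- a "Seq Scan" means the planner judged a full table scan cheaper.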
On Thursday, 29 September 2005 at 21:46, jeff sacksteder wrote:
> As an example, I'd like to prevent a user from being able to get a list
> of existing databases.
You can't do that.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/