Thanks, yeah, but the dummy tables are needed anyway in my case for those
entities that are shared among the tenants :)
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Best-way-to-create-unique-primary-keys-across-schemas-tp5165043p5433562.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
OK, thanks for the replies. To sum up, this is what I now consider best practice:
CREATE SCHEMA schema1;
CREATE SCHEMA schema2;
CREATE SEQUENCE global_seq; -- in public schema
CREATE TABLE tbl (id bigint DEFAULT nextval('global_seq') PRIMARY KEY, foo varchar, bar varchar); -- in public schema
CREATE TABLE schema1.tbl (id bigint DEFAULT nextval('public.global_seq') PRIMARY KEY, foo varchar, bar varchar);
CREATE TABLE schema2.tbl (id bigint DEFAULT nextval('public.global_seq') PRIMARY KEY, foo varchar, bar varchar);
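As an illustration (assuming per-schema tables schema1.tbl and schema2.tbl whose defaults draw from the shared global_seq, as sketched above), inserts into different schemas then yield ids that never collide:

```sql
INSERT INTO schema1.tbl (foo, bar) VALUES ('a', 'b');
INSERT INTO schema2.tbl (foo, bar) VALUES ('c', 'd');
-- Both column defaults call nextval on the same global sequence, so the
-- two rows get distinct ids even though they live in different schemas.
```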
Chris Angelico wrote
>
> I would recommend using an explicit sequence object rather than
> relying on odd behavior like this; for instance, if you now drop
> public.tbl, the sequence will be dropped too. However, what you have
> there is going to be pretty close to the same result anyway.
>
Oops
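For reference, the dependency behaviour Chris describes could be sketched like this (table and sequence names are illustrative):

```sql
-- A serial column's implicit sequence is OWNED BY the column,
-- so it is dropped together with the table:
CREATE TABLE public.t (id serial PRIMARY KEY);
SELECT pg_get_serial_sequence('public.t', 'id');  -- public.t_id_seq
DROP TABLE public.t;  -- public.t_id_seq is dropped as well

-- An explicit sequence has no owning column and survives table drops:
CREATE SEQUENCE global_seq;
CREATE TABLE public.t (id bigint DEFAULT nextval('global_seq') PRIMARY KEY);
DROP TABLE public.t;  -- global_seq remains
```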
Chris Angelico wrote
>
>
> You can "share" a sequence object between several tables. This can
> happen somewhat unexpectedly, as I found out to my surprise a while
> ago:
>
> CREATE TABLE tbl1 (ID serial primary key,foo varchar,bar varchar);
> INSERT INTO tbl1 (foo,bar) VALUES ('asdf','qwer');
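One way this sharing can sneak in (the LIKE variant is my assumption, not necessarily how Chris hit it): CREATE TABLE ... (LIKE ... INCLUDING DEFAULTS) copies the nextval() default expression verbatim, so the new table keeps drawing from the old table's sequence:

```sql
CREATE TABLE tbl1 (id serial PRIMARY KEY, foo varchar, bar varchar);
-- Copies DEFAULT nextval('tbl1_id_seq') into tbl2's id column:
CREATE TABLE tbl2 (LIKE tbl1 INCLUDING DEFAULTS);
INSERT INTO tbl1 (foo, bar) VALUES ('asdf', 'qwer');
INSERT INTO tbl2 (foo, bar) VALUES ('asdf', 'qwer');
-- Both rows took their id from tbl1_id_seq, so the values interleave.
```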
Hi,
If I'd like to have primary keys generated ("numeric" style, no UUIDs) that
are unique across schemas, is the best option to allocate a fixed sequence
range (min, max) to the sequences of all schemas?
Thanks
panam
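For the record, the range-allocation idea from the question could look like this (sequence names and range sizes are only illustrative):

```sql
-- Disjoint, non-cycling ranges per schema guarantee no cross-schema
-- collisions, at the cost of a hard cap on keys per schema:
CREATE SEQUENCE schema1.tbl_id_seq MINVALUE 1          MAXVALUE 999999999  NO CYCLE;
CREATE SEQUENCE schema2.tbl_id_seq MINVALUE 1000000000 MAXVALUE 1999999999 NO CYCLE;
```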
--
Something unusual with this?
Regards,
panam
--
Had to restart the import. This time, I tried with a smaller initial disk
size (1GB) and extended it dynamically. It did not cause any problems.
A different reason might be that I remounted the volume in between, during
the last update, to deactivate buffer flushing. Maybe a bad combination.
Let's see.
No, but will try this first, thanks for the suggestion.
--
Hi, output is
--
threads are sleeping (S state).
I will try to reproduce this, this time with a smaller initial disk size...
Regards
panam
--
Hi,
as I am importing gigabytes of data and the space on the volume where the
data directory resides became too small during that process, I resized
it dynamically (it is an LVM volume) according to this procedure:
http://www.techrepublic.com/blog/opensource/how-to-use-logical-volume-manager-
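For reference, an online LVM grow of the data-directory volume typically boils down to two commands (volume-group and device names here are assumptions, not from the linked procedure; run as root and take a backup first):

```shell
lvextend -L +10G /dev/vg0/pgdata   # grow the logical volume by 10 GB
resize2fs /dev/mapper/vg0-pgdata   # grow an ext4 filesystem online to fill it
```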