kevin kempter wrote:
Hi List;
I want to run a copy (based on a select) to STDOUT and pipe it to a
psql copy from STDIN on a different host.
here's what I have:
1) a .sql file that looks like this:
copy (
select
cust_id,
cust_name,
last_update_dt
from sl_cust
)
to STDOUT
with delimiter '|'
kevin kempter wrote:
> Any thoughts on what I'm doing wrong?
I suspect that pg_dump is going to do a better job than using psql to
generate the input for the remote load. pg_dump can dump single tables
and can use COPY style data formatting.
As for why your current command isn't working ... You
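A minimal sketch of that pg_dump route (host and database names are placeholders): a data-only, single-table dump emits COPY-formatted data that psql can replay directly on the remote side.
pg_dump -h source_host --data-only -t sl_cust source_db \
  | psql -h target_host target_db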
Hi List;
I want to run a copy (based on a select) to STDOUT and pipe it to a
psql copy from STDIN on a different host.
here's what I have:
1) a .sql file that looks like this:
copy (
select
cust_id,
cust_name,
last_update_dt
from sl_cust
)
to STDOUT
with delimiter '|'
This works.
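From here the two ends can be tied together with a pipe; a minimal sketch, with hostnames, database names, and the script name assumed:
psql -h source_host -d source_db -f cust_copy.sql \
  | psql -h target_host -d target_db \
      -c "COPY sl_cust (cust_id, cust_name, last_update_dt) FROM STDIN WITH DELIMITER '|'"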
I guess this is not a common requirement, as no one
has yet suggested a solution.
So it looks like I'm going to have to answer my own question.
Firstly, I'd like to say how well thought out the design of the
dump/restore utils is.
If you use the -F c option in pg_dump, this creates a custom-format archive.
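A sketch of the custom-format round trip (file and database names assumed); the archive's table of contents can be listed and individual entries cherry-picked at restore time:
pg_dump -F c -f mydb.dump mydb        # custom-format archive
pg_restore -l mydb.dump               # inspect its table of contents
pg_restore -d target_db mydb.dump     # restore, or select entries with -L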
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Which means that storing date + timetz in two separate columns is not
> quite the same as storing a timestamptz. Oops.
Quite so. Our docs already point out that timetz is really a completely
brain-damaged concept, anyway.
There's been some talk of ad
Tom Lane wrote:
> Marc Munro <[EMAIL PROTECTED]> writes:
> > It makes no sense to me that by storing the date with the time you
> > can save 4 bytes.
>
> Actually, your mistake is in imagining that timestamptz represents the
> timezone explicitly ... it doesn't. If it did, it'd be at least as
>
Marc Munro <[EMAIL PROTECTED]> writes:
> According to the docs, timestamp with time zone takes less space than
> time with time zone with the same resolution. Is this a documentation
> bug?
Nope.
regression=# select typname,typlen from pg_type where typname like 'time%tz';
   typname   | typlen
-------------+--------
 timetz      |     12
 timestamptz |      8
(2 rows)
I was just looking at the 8.3.1 documentation on the postgresql web
site.
According to the docs, timestamp with time zone takes less space than
time with time zone with the same resolution. Is this a documentation
bug? It makes no sense to me that by storing the date with the time you
can save 4 bytes.
On Mon, 14 Apr 2008 21:30:15 +0100
Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Ivan Sergio Borgonovo" <[EMAIL PROTECTED]> writes:
>
> > But why, once you add the index and count the distinct values, is
> > the performance still so far off?
> > I'd say that counting in this case is not the hardest thing to do,
>
"Ivan Sergio Borgonovo" <[EMAIL PROTECTED]> writes:
> But why, once you add the index and count the distinct values, is
> the performance still so far off?
> I'd say that counting in this case is not the hardest thing to do,
> but rather the "distinct" part.
Your tests have been a bit unfortunate in finding a f
On Mon, 14 Apr 2008 13:34:36 -0400 Chris Browne wrote:
> If the entire purpose of your application is to run COUNT(*) against
> the entireties of a table with 180M rows, then PostgreSQL may not be
> the right database for your application.
For running a COUNT(*) without WHERE you could even add a
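The cut-off suggestion is presumably the usual workaround: a trigger-maintained counter table. A minimal sketch, with all names illustrative:
CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts VALUES ('big_table', (SELECT count(*) FROM big_table));

CREATE FUNCTION count_rows() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
  ELSE  -- DELETE
    UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER track_count AFTER INSERT OR DELETE ON big_table
  FOR EACH ROW EXECUTE PROCEDURE count_rows();
After this, SELECT n FROM row_counts is instant, at the cost of extra work on every insert and delete.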
Atkins-Trimnell, Angus Black wrote:
> Hello,
>
> I am trying to harden my application against man-in-the-middle attacks.
> The application, written in PHP, communicates with the PostgreSQL server
> using the usual pg_* functions built on the libpq library. I have the
proper postgresql.key and postgresql.crt files installed on the Web server
Hello,
I am trying to harden my application against man-in-the-middle attacks.
The application, written in PHP, communicates with the PostgreSQL server
using the usual pg_* functions built on the libpq library. I have the
proper postgresql.key and postgresql.crt files installed on the Web server
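For libpq to actually detect a man in the middle, rather than merely encrypt the session, the client side also needs the CA certificate so it can verify the server; a sketch, with host, user, and file names assumed:
# Install the CA cert that signed the server's certificate into the home
# directory of the user the web server runs as; libpq then verifies the
# server certificate whenever SSL is in use.
mkdir -p ~/.postgresql
cp ca.crt ~/.postgresql/root.crt
PGSSLMODE=require psql -h db.example.com -U web app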
On Mon, 14 Apr 2008 13:34:36 -0400
Chris Browne <[EMAIL PROTECTED]> wrote:
> > Another test. In postgres I added an index to the userid column
> > and then counted distinct userids. The average run time over
> > three queries was 4666 seconds, or 78 minutes. Unbelievable.
> > On SQL Server, w
On Mon, Apr 14, 2008 at 1:34 PM, Chris Browne <[EMAIL PROTECTED]> wrote:
> "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
> > Another test. In postgres I added an index to the userid column and
> > then counted distinct userids. The average run time over three
> > queries was 4666 seconds, or
On Thursday 10 April 2008 16:49, Roberts, Jon wrote:
> > > I am moving from Windows to Solaris and I need pgcrypto and dblink.
> > > Where are these? I don't see anything in the configure that
>
> suggests
>
> > it
> >
> > > is even an option.
> >
> > They're not handled by 'configure'. They ar
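pgcrypto and dblink are contrib modules, built from the contrib/ directory of the source tree after the main build; a sketch of the usual steps, with paths assumed:
cd postgresql-8.3.1/contrib/pgcrypto
gmake && gmake install        # gmake on Solaris
psql -d mydb -f /usr/local/pgsql/share/contrib/pgcrypto.sql
# likewise for contrib/dblink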
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
> Let me just start off by saying that I *want* to use postgresql.
> That's my goal. I do not want to use SQLServer. I'm posting this
> message not to slam postgres, but to ask for someone to help me figure
> out what I'm doing wrong.
>
> I've used
You can have a two column foreign key.
create table employee
(id int primary key not null,
company_id int not null,
supervisor_id int);
alter table employee add unique (id, company_id);
alter table employee add foreign key (supervisor_id, company_id)
references employee (id, company_id);
Jon
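To illustrate how the two-column key behaves (sample rows are hypothetical):
insert into employee values (1, 10, null);  -- a supervisor in company 10
insert into employee values (2, 10, 1);     -- ok: supervisor 1 is in company 10
insert into employee values (3, 20, 1);     -- rejected: no (1, 20) pair exists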
Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> Stefan Schwarzer wrote:
>>> I am running 10.5, on a MacPro. Postgres version 8.3.1 and got it from
>>> macports.
> On my system, though, locale does report the full locale, not just
> encoding, when I change LANG.
Mine too. On my 10.5.2 machine ini
I have an employees table and one column in the employees table is
"supervisor_id" which is an FK to the id column.
I have employees from numerous companies all in the same table. I have
a column called company_id that indicates the company.
I want to make sure that an employee chooses a supervisor from the same company.
On Monday 14 April 2008 17:44, Andreas Kretschmer wrote:
> Garry Saddington <[EMAIL PROTECTED]> schrieb:
> > How would I write a table check constraint to make sure that two columns
> > don't both have nulls at the same time.
> > The following doesn't do it, because it does not allow one
Garry Saddington <[EMAIL PROTECTED]> schrieb:
> How would I write a table check constraint to make sure that two columns
> don't
> both have nulls at the same time.
> The following doesn't do it, because it does not allow one column to
> be null while the other holds data:
>
> check(teachgroup is not null AND set is not null)
How would I write a table check constraint to make sure that two columns don't
both have nulls at the same time.
The following doesn't do it, because it does not allow one column to
be null while the other holds data:
check(teachgroup is not null AND set is not null)
regards
garry
--
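For completeness, the usual fix is to invert the logic so that only the both-null case is rejected; a sketch, with the table name assumed and "set" quoted since it collides with a keyword:
alter table marks
  add constraint teachgroup_or_set_present
  check (teachgroup is not null or "set" is not null);
This is equivalent to NOT (teachgroup IS NULL AND "set" IS NULL): either column may be null on its own, but not both at once.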
Richard Huxton wrote:
> Stefan Schwarzer wrote:
>>> properly. What operating-system (and version) are you running, and
>>> what version of PG, and did you compile it from source or get it
>>> elsewhere?
>>
>> I am running 10.5, on a MacPro. Postgres version 8.3.1 and got it from
>> macports.
Stefan Schwarzer wrote:
Don't know if the problem occurred because I hadn't dumped and restored
my data as postgres, but as another postgres-user. Now, it works. But I
re-installed/compiled postgres/postgis, so, can't really say why it
works now...
Since you were getting backend crashes, I'd
Stefan Schwarzer wrote:
properly. What operating-system (and version) are you running, and
what version of PG, and did you compile it from source or get it
elsewhere?
I am running 10.5, on a MacPro. Postgres version 8.3.1 and got it from
macports.
A bit ago, in my many attempts to get postg
> Ah, but it's a record, so pass a record type in:
>
> CREATE TYPE cobol_file_type AS (a integer, b text, c varchar, d date);
> CREATE FUNCTION build_tables1(c cobol_file_type)...
>
> Each existing table has its own type defined too (with the same name),
> which might save you some time.
>
> --
>
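A hypothetical completion of that sketch, showing a composite value being built and passed in:
CREATE FUNCTION build_tables1(c cobol_file_type) RETURNS void AS $$
BEGIN
  -- the record's fields arrive as c.a, c.b, c.c, c.d
  RAISE NOTICE 'a=% b=% c=% d=%', c.a, c.b, c.c, c.d;
END;
$$ LANGUAGE plpgsql;

SELECT build_tables1(ROW(1, 'b text', 'c varchar', '2008-04-14')::cobol_file_type);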
I am using Navicat to transfer data from one database to another.
But it soon gives me an error message like the following:
I think you'll probably have to ask the navicat people.
If you want to use pg_dump to transfer data from 8.1 to 8.2
though, use the version of pg_dump that ships with 8.2.
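That is, run the newer binary against the old server; a sketch with paths and hosts assumed:
# the 8.2 pg_dump reads from the 8.1 server, psql loads into the 8.2 one
/usr/local/pgsql-8.2/bin/pg_dump -h old_host mydb | psql -h new_host mydb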
what does this message mean? I didn't find any information on the
web, beside of others having similar messages...
--
The database cluster will be initialized with locales
COLLATE: en_US.UTF-8
CTYPE:UTF-8
This one is missing the en_US off the front; UTF-8 by itself is just an encoding, not a full locale.
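The usual cure is to spell the locale out explicitly when initializing the cluster; a sketch, data directory assumed:
initdb --locale=en_US.UTF-8 -D /usr/local/pgsql/data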
I think there's something sub-optimal with generate_series.
In the following, "documents" is a table with more than 12 rows,
vacuumed and analyzed before the queries.
EXPLAIN ANALYZE
SELECT count (d.id), floor (s.val / 5000)
FROM generate_series (1::INT, 5009) AS s (val)
LEFT JOIN docu
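A hypothetical completion of the truncated query, assuming the join is on the document id:
EXPLAIN ANALYZE
SELECT count(d.id), floor(s.val / 5000)
FROM generate_series(1::INT, 5009) AS s (val)
LEFT JOIN documents d ON d.id = s.val
GROUP BY floor(s.val / 5000);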
Stefan Schwarzer wrote:
Hi there,
what does this message mean? I didn't find any information on the web,
beside of others having similar messages...
--
The database cluster will be initialized with locales
COLLATE: en_US.UTF-8
CTYPE:UTF-8
This one is missing the en_US off the front.
Stefan Schwarzer wrote:
I am using Navicat to transfer data from one database to another. But
it soon gives me an error message like the following:
I think you'll probably have to ask the navicat people.
If you want to use pg_dump to transfer data from 8.1 to 8.2 though,
use the version of pg_dump that ships with 8.2.
I'm using 7.4.19 utils on Centos 5 to move a database from one shared
hosting server to another shared hosting server.
The dump and restore options that I used are:-
pg_dump -O -v -F c
pg_restore -c -O -x -v
pg_restore aborts the restore (with a return code=1) when trying to
'comment schema "pub
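For reference, the commands pair up like this (file and database names assumed); the failing comment-on-schema-public entry is typically harmless on shared hosting, where the restoring user does not own the public schema:
pg_dump -O -v -F c -f mydb.dump mydb
pg_restore -c -O -x -v -d mydb mydb.dump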