I'm trying to identify which postgresql.conf file I should be editing in order
to change the default database file location for Postgres 9.6.6, when
installed on CentOS 7.x.
Is the best method for changing the default data directory to set the $PGDATA
variable at database init time?
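A quick way to confirm which files are actually in play, assuming you can
already connect with psql (SHOW works for both settings on 9.6):

-- Ask the running server where its config file and data directory live:
SHOW config_file;
SHOW data_directory;
-- At cluster creation time, initdb -D /your/path (or the PGDATA
-- environment variable) determines where the data directory goes.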
Tom,
Ah, that makes more sense. Thank you very much!
On Tue, Aug 30, 2016 at 9:15 AM, Tom Lane wrote:
> Luke Gordon writes:
> > However, according to a message on this mailing list, Postgres doesn't
> have
> > clustered indexes:
> > "But Postgres doesn
echanism
for a clustered index:
https://www.postgresql.org/docs/9.5/static/sql-cluster.html
<https://www.postgresql.org/docs/9.5/static/sql-cluster.html>So, does
anyone know which is best? Or are the performance differences so minute
they'd only matter in extremely unique circumstances?
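For reference, a minimal sketch of the CLUSTER mechanism that link describes
(table and index names are hypothetical). Note it is a one-time physical
reorder, not a continuously maintained clustered index:

CLUSTER my_table USING my_index;
-- New and updated rows are not kept in index order afterwards;
-- re-run CLUSTER periodically if the ordering matters.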
I'm trying to understand the extent to which row-level security in PostgreSQL
9.5 may replace, or augment, application-level access control.
I have a fully implemented application-level access control policy. It's
not clear to me how I will integrate or replace it with RLS.
Craig Ringer mentioned in
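For context, a minimal RLS sketch in 9.5 syntax (table, column, and policy
names are hypothetical):

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
-- Let each user see and modify only rows they own:
CREATE POLICY doc_owner ON documents
    USING (owner = current_user);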
The Register is running an article about someone breaking into a database,
taking control of the encryption key, and six or so months later
demanding ransom from the owner of the database.
http://www.theregister.co.uk/2015/02/03/web_ransomware_scum_now_lay_waste_to_your_backups/
Anyone want to comment on
On 02/02/15 10:11 AM, Tom Lane wrote:
> If you did "select * from only primate" you would see that there is no
> such row in the parent table, which is what the foreign key is being
> enforced against.
Thanks. That does a lot to clarify it.
-Will
On 02/02/15 12:11 AM, David G Johnston wrote:
> William Gordon Rutherdale wrote
>> My problem: could someone please explain the semantics and why this
>> behaviour makes sense -- or is it a design error or bug?
> I didn't read your post in depth but I suspect you have no
Hi.
I have encountered a problem with references when using INHERITS (on
Postgres 9.1/9.2). Could someone please explain why this occurs?
Consider this example.
CREATE TABLE primate
(
    id SERIAL PRIMARY KEY,
    name TEXT,
    tale TEXT
);
CREATE TABLE chimp
(
    human_friend TEXT
) INHERITS (primate);
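A hypothetical sketch of the pitfall this thread goes on to discuss: a
foreign key referencing primate(id) is enforced against the parent table
only, so ids that live in the child table chimp are invisible to it:

CREATE TABLE banana_delivery (
    primate_id INTEGER REFERENCES primate (id)
);
INSERT INTO chimp (name, tale, human_friend)
    VALUES ('Cheeta', 'a long story', 'Tarzan');
-- SELECT * FROM primate;       -- includes the chimp row
-- SELECT * FROM ONLY primate;  -- does not, and the FK check uses ONLY
INSERT INTO banana_delivery
    VALUES (currval('primate_id_seq'));  -- fails with a FK violation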
On postgresapp.com you can only download the latest version of Postgres for
the Mac. Is it possible to download a specific version?
Thanks,
GTG
--
Gordon Ross
d at the original database from which
the copy was made, and the relation in question is an UNLOGGED table. This
suggests that there's some consideration for binary recovery. I don't care
about the unlogged table data, but I do need the table definition. We use a
few dozen unlogged ta
I repeated the entire process, and I have a few clarifications.
When I said the db seemed fine after the restore, I was wrong. I could do a
\d on an unlogged table, but when I selected count(*) from any, that
resulted in an error like "could not open file "base/16388/15963587": No
such file or
Turns out this was most likely the pg_upgrade bug. In our case, I was able
to dump and recreate the table in question. Since then, this has been made
public: http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix
e or directory.
8895 2011-04-04 22:15:28 EDT [3]CONTEXT: automatic vacuum of table
"mcore.pg_toast.pg_toast_48975830"
I checked and the pg_clog files start at 04BF and run through 0A57 (1,433
files)
Any help would be greatly appreciated.
Gordon
It's 9.0.2 on CentOS.
3:00 EST [1]LOG: streaming replication successfully
connected to primary
Thanks,
Gordon
I'm putting this on this thread, since it could be related to the issue.
I'm now seeing this in the log on the HSB/SR server. It's happened about 4
times in the past 2 days.
23964 2011-01-04 05:23:00 EST [47]LOG: invalid record length at
6E53/46E8A010
23535 2011-01-04 05:23:00 EST [2]FATAL
ieve there could have been more than 2 concurrent updates
because the app is limited to 2 threads, but I will verify that with the
developer.
-gordon
On Fri, Dec 31, 2010 at 3:43 PM, Tom Lane-2 [via PostgreSQL] <
ml-node+3323971-508486184-56...@n5.nabble.com
> wrote:
>
>
> No luck here
The number of matching rows on these queries is anything from 0 to 1. I
don't think I can tell how many would have matched on the ones that
crashed. Although I suspect it would have been toward the 1 end. I've
been trying to get a reproducible test case with no luck so far.
I assume y
Maybe it doesn't work from gmail. I'll try uploading from here.
http://postgresql.1045698.n5.nabble.com/file/n3323933/plan.txt plan.txt
Yes that query does take 30 or 90 secs. I'm pretty sure it was blocking on
its twin update running concurrently. However I'm not really sure how to
identify what "transaction 1283585646" was.
Enclosed is the query plan -- 21000 lines
-gordon
I tried to replicate the
Sorry, I left that out. Yeah, I wondered that too, since these tables do
not use toast.
CREATE TYPE message_status_enum AS ENUM ( 'V', 'X', 'S', 'R', 'U', 'D' );
On Fri, Dec 31, 2010 at 12:38 PM, Tom Lane-2 [via PostgreSQL] <
ml-node+3323859-1425181809-56...@n5.nabble.com
> wrote:
> Hmmm ...
Here is the ddl for the tables in question. There are foreign keys to other
tables that I omitted.
http://postgresql.1045698.n5.nabble.com/file/n3323804/parts.sql parts.sql
50. This was an ERROR, not a crash.
At this point, is it your suspicion that there is a code bug in 9.0.2,
rather than corrupt data?
I will post the schema and then work on a test case.
-gordon
On Fri, Dec 31, 2010 at 8:34 AM, Tom Lane-2 [via PostgreSQL] <
ml-node+3323712-1368244686-56...
5f7d14 in PostgresMain (argc=<value optimized out>,
argv=<value optimized out>, username=<value optimized out>) at
postgres.c:3929
#16 0x005c7ce5 in ServerLoop () at postmaster.c:3555
#17 0x005c89ec in PostmasterMain (argc=5, argv=0x1b31ea00) at
postmaster.c:1092
#18 0x005725fe in main (argc=5, argv=) at
main.c:188
On Thu, Dec 30, 2010 at 7:
I'd love to send you a stack trace. Any suggestions on how to get one? It
has since happened again, on the same update command, so I'm guessing I can
repeat it.
On Thu, Dec 30, 2010 at 6:52 PM, Tom Lane-2 [via PostgreSQL] <
ml-node+3323151-436577542-56...@n5.nabble.com
>
', updated_on = now() where id in (select id from
v_messages where author_id = 25301995 and status != 'S' limit 1)
How concerned should I be? Thanks!
Gordon
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@gmail.com]
> Sent: 21 December 2010 20:36
> To: pgsql-general@postgresql.org
> Cc: William Gordon Rutherdale (rutherw)
> Subject: Re: [GENERAL] Cannot unsubscribe
>
> On Tuesday 21 December 2010 4
unsub pgsql-general
The unsubscribe command did not succeed.
No e-mail addresses matching
"William Gordon Rutherdale (rutherw)"
are subscribed to the pgsql-general mailing list.
Valid commands processed: 1
0 succeeded, 0 stalled, and 1 failed.
Use
I have a CMS with a table of items, and another table serving as a log
of operations performed on the items. The relevant table structure is
as follows:
items
itm_id | usr_id_create | itm_date_create | usr_id_modify |
itm_date_modify | .
itm_id is a serial primary key.
usr_id_* are the keys
> That last message prints tups_vacuumed, but those other ones are counting
> all the removed item pointers. So apparently Gordon had a whole lot of
> pre-existing DEAD item pointers. I wonder why ...
Perhaps this will help. Here's the entire test.
Start with a newly loa
Yes, and also from the original post:
> 3 INFO: scanned index "authors_archive_pkey" to remove 45878 row
> versions
> 4 DETAIL: CPU 0.05s/0.34u sec elapsed 0.41 sec.
> 5 INFO: "authors_archive": removed 45878 row versions in 396 pages
> 6 DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.
Lin
OK, so if it knew that all vacuumable tuples could be found in 492 pages, and
it scanned only those pages, then how could it be that it reports 16558
removable tuples from those 492 pages, when it has already reported earlier
that it removed 45878 tuples -- a number we know in fact to be correct?
Hi. Running 8.4.4 on CentOS. A couple of these numbers don't make sense to
me.
(I added line numbers for reference)
1 vacuum verbose authors_archive;
2 INFO: vacuuming "public.authors_archive"
3 INFO: scanned index "authors_archive_pkey" to remove 45878 row versions
4 DETAIL: CPU 0.05s/0
alvherre wrote:
>
> n_live_tup and n_dead_tup corresponds to the current numbers,
> whereas "last analysis tuples" are the values from back when the
> previous analyze ran. These counters keep moving per updates, deletes,
> inserts, they are not static.
>
>
OK. Do you know how I can get th
alvherre wrote:
>
> Excerpts from Gordon Shannon's message of Wed May 19 11:49:45 -0400 2010:
>
>> at: last analysis tuples = pg_class.reltuples
>>
>> I'm the least confident about the last one -- tuples as of last analyze.
>> Can anyone confir
least confident about the last one -- tuples as of last analyze.
Can anyone confirm or correct these?
Version: PostgreSQL 8.4.3 on x86_64-redhat-linux-gnu, compiled by GCC gcc
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit
Thanks!
--gordon
;fixed" things largely by not bothering
> to index already-dead rows.
>
Actually, I put a partial index on status, where != 'V'. That fits our
usage pattern of 99% of the records being 'V', so it's a tiny index and
satisfies this type of query very quickly.
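A sketch of that kind of partial index (table and column names are
hypothetical):

-- Tiny index, because the ~99% of rows with status = 'V' are excluded:
CREATE INDEX messages_status_not_v_idx ON messages (status)
    WHERE status <> 'V';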
s *returning* 126K rows from that scan? Whereas I think
it should return zero.
I have already fixed this query by adding a better index. But the point of
this post is simply to understand this explain analyze output. Thanks!
--gordon
ong a couple
thousand writes to sibling tables, 4 writes (unknown combination of inserts
and updates) to cts_20100501, which definitely affected the index in
question.
In any case, I will cease and desist from ALTER SET TABLESPACE for a while!
Thanks!
Gordon
Between 11:11:56 and 11:11:58 EDT (1
On Sun, May 2, 2010 at 12:52 PM, Tom Lane wrote:
> Gordon Shannon writes:
> > Bingo. Yes it is reasonable. It was 25 seconds between my altering the
> > index in question and the server crash.
>
> Sounds like we have a smoking gun. Could you show all your non-defa
On Sun, May 2, 2010 at 12:10 PM, Tom Lane wrote:
> No, this would be a pg_database row with that OID. But it looks like
> you found the relevant index anyway.
>
> Yup, realized that on second reading.
> > These commands worked fine on the master, yet this seems suspiciously
> > relevant.
>
>
>
it, and would you be
> willing to let a developer look at them?
>
>
Before I received your reply, I had already started the re-sync, and
unfortunately already deleted the wal logs in question.
If it happens again, I will certainly keep them, and would be happy to share
them.
Regards,
Gordon
I just ran into the same problem. Both servers are running 8.4.3, and
the standby server had been running for 2 days, processing many thousands of
logs successfully. Here's my error:
4158 2010-05-02 11:12:09 EDT [26445]LOG: restored log file
"00013C7700C3" from archive
4158
That looks like the fix for this, thanks! I will try to upgrade soon.
-- Gordon
On Sun, Mar 14, 2010 at 7:43 AM, Alvaro Herrera
wrote:
> Gordon Shannon wrote:
> > Ah, now I see what you meant. Forgive me, I thought you were referring
> to
> > the pg_autovacuum table in
,
otherwise I am getting zero for each value?? I don't believe the
documentation mentions this rather important detail:
http://www.postgresql.org/docs/8.4/static/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS.
Did I miss it somewhere?
Thanks!
Gordon
On Fri, Mar 12, 2010 at 4:45 PM, G
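One way to verify which per-table storage parameters actually took effect,
as a sketch (anything absent from reloptions falls back to the global
setting):

SELECT relname, reloptions
FROM pg_class
WHERE relname = 'foo';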
Thanks, but I do want 1%.
On Fri, Mar 12, 2010 at 5:19 PM, Joshua D. Drake wrote:
> On Fri, 2010-03-12 at 16:45 -0700, Gordon Shannon wrote:
> > This is 8.4, there is no pg_autovacuum table. I set it like this:
> >
> > alter table foo set (autovacuum_analyze_scale_factor
This is 8.4, there is no pg_autovacuum table. I set it like this:
alter table foo set (autovacuum_analyze_scale_factor=0.01);
On Fri, Mar 12, 2010 at 4:31 PM, Alvaro Herrera
wrote:
> Gordon Shannon wrote:
>
> > One possibly interesting thing is that this seems to have started
ovacuums are taking up my vacuum workers so
no analyze workers can run.
Thanks!
Gordon
Ah, I finally found it http://pgfoundry.org/projects/textsearch-ja/
- Original Message -
From: "Gordon Callan"
To: pgsql-general@postgresql.org
Sent: Monday, November 9, 2009 2:36:18 PM GMT -08:00 US/Canada Pacific
Subject: [GENERAL] Tsearch2 with Japanese
Does a
Does anyone know where I can locate a Japanese parser and dictionary to use
with Tsearch2? There was a link (http://www.oss.ecl.ntt.co.jp/tsearch2j/
) to a Contrib at one time but this link is now dead :-( Any leads would be
appreciated.
.datfrozenxid) : 648
Should I be concerned?
Thanks
Gordon
Tom Lane-2 wrote:
>
> Gordon Shannon writes:
>
>> ERROR: insufficient data left in message
>> CONTEXT: COPY mytable, line 1, column provider_id
>
>> Anybody seen this?
>
> No. Can you extract a self-contained test case?
>
Got it. The pro
t. And they all talk about nul bytes
(0x00), but again, how can that be relevant when I'm in binary mode? Seems
like it should understand null bytes here, if that's what this is about.
Anybody seen this?
Thanks,
Gordon
Hmmm.
If I do:
select * from pg_settings where name='custom_variable_classes';
I see my entry "phone", but I can't see how I can tell if I've set "phone.id"
GTG
From: Tom Lane [...@sss.pgh.pa.us]
Sent: 02 Septem
On 02/09/2009 15:39, "Andreas Kretschmer" wrote:
> Gordon Ross wrote:
>> Is there a way to either test if the custom variable is set, or to specify a
>> global default for the custom variable ?
>
> I think, you can use COALESCE(your_variable, default_value) to s
(I'm using Postgres 8.3)
I have a trigger that references a custom variable. Most of the time this
custom variable is set, and I have no problems.
However, in certain corner cases the custom variable is not set and the
trigger fails.
Is there a way to either test if the custom variable is set, o
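A hedged sketch for the 8.3-era situation described above: current_setting()
raises an error when the variable was never SET, so trap it in PL/pgSQL and
fall back to a default ('phone.id' is the variable from this thread; the
function name and default value are hypothetical):

CREATE OR REPLACE FUNCTION get_phone_id() RETURNS text AS $$
BEGIN
    RETURN current_setting('phone.id');
EXCEPTION
    WHEN undefined_object THEN  -- SQLSTATE 42704, unrecognized parameter
        RETURN 'default-id';
END;
$$ LANGUAGE plpgsql STABLE;
-- On 9.6 and later this is built in: current_setting('phone.id', true)
-- returns NULL instead of erroring, so COALESCE works directly.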
Yaroslav Tykhiy wrote:
By the way, `chkdsk' in Windows or `fsck' in Unix can, in a way, be a
_source_ of file loss if the file metadata got damaged badly, e.g., by a
system crash, and the file node has to be cleared. So I've always been
curious if there is a way to retrieve surviving records f
That does the trick, awesome!
I do think it would be great if psql had a "stderr" capture in addition to
stdout.
Thanks
hubert depesz lubaczewski-2 wrote:
>
> On Thu, Apr 02, 2009 at 10:55:10PM -0700, Gordon Shannon wrote:
>> Has anyone solved this issue before?
>
What I'm trying to do doesn't seem like it should be that difficult or
unusual, but I can't seem to find the right combination of commands to make
it happen. I want to have a log file that captures everything from an
interactive psql session. Running 8.3.7 with bash shell on Linux.
If I use \
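A sketch of the usual options for this (the shell wrapper assumes Unix):

-- \o redirects query results to a file, but errors still go to stderr:
\o session.log
SELECT 1;
\o
-- To capture everything, including errors, wrap the whole session
-- instead:  psql mydb 2>&1 | tee session.log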
t language(s) will be returned in the
result set? All 3 languages? Is it based on the default_text_search_config?
Thanks for your help,
Gordon
I'm considering using an array of ints column in a table which lists a
row's ancestry. For example, if item 97 is contained within item 68,
and that item is contained within 31, and that item is contained
within item 1, then the value of the hierarchy column would be
{1,31,68,97}, the numbers refer t
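A sketch of the queries this layout enables (hierarchy is the column named
in the post; the items table name and the GIN index are assumptions):

-- All descendants of item 31: rows whose ancestry array contains 31.
SELECT * FROM items WHERE hierarchy @> ARRAY[31];
-- A GIN index makes the containment test fast on large tables:
CREATE INDEX items_hierarchy_gin ON items USING gin (hierarchy);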
I'm working on a CMS, one of the features of the CMS in question is
that only one user can edit an item at any given time. I've
implemented this by having one table that holds the items, and another
table that holds locks. A lock row consists of the ID of the item
locked, a timestamp indicating w
I have a table representing tree structures of pages on a website.
they have an itm_id column (integer key) and an itm_parent column
(pointer to item's parent node). Any item with an itm_parent of 0 is
a root node, representing a website. Anything with a non-zero parent
is a non-root node represe
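For later readers: a recursive CTE walks such a tree in one query. A sketch
assuming the column names above, a hypothetical items table, and a server of
8.4 or newer (the original post predates WITH RECURSIVE):

WITH RECURSIVE subtree AS (
    SELECT itm_id, itm_parent
    FROM items
    WHERE itm_id = 42          -- hypothetical starting node
    UNION ALL
    SELECT i.itm_id, i.itm_parent
    FROM items i
    JOIN subtree s ON i.itm_parent = s.itm_id
)
SELECT * FROM subtree;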
On May 20, 5:02 pm, Gordon <[EMAIL PROTECTED]> wrote:
> I have a table representing tree structures of pages on a website.
> they have an itm_id column (integer key) and an itm_parent column
> (pointer to item's parent node). Any item with an itm_parent of 0 is
> a ro
I'm adding full text search to a CMS project after we upgraded to 8.3
of Postgres. I'd like to do a bit of testing before deploying it.
I added columns to the pertinent tables for storing tsvectors, and was
looking at my query code to update it so these columns get updated
where appropriate when
On Mar 19, 5:52 pm, [EMAIL PROTECTED] wrote:
> On Mar 18, 9:18 am, Gordon <[EMAIL PROTECTED]> wrote:
>
>
>
> > I'm currently refactoring a database that somebody else designed.
> > When the database was designed he used character columns with a length
> >
I'm currently refactoring a database that somebody else designed.
When the database was designed he used character columns with a length
of 1 char to represent some values that really should have been
represented as booleans. He used 'y' for true and 'n' for false.
I want to cast these columns in
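A sketch of the usual conversion, with hypothetical table and column names:

-- Rewrite the 'y'/'n' values in place; USING defines the cast.
ALTER TABLE articles
    ALTER COLUMN is_published TYPE boolean
    USING is_published = 'y';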
(Sorry for the repost but I thought this would be appropriate to both
groups. I did tell Google to delete my first post but odds are some
guys got that copy already anyway)
After a lot of hairpulling, I finally found a mechanism in PHP for
doing what I wanted. I just had to know 2 things:
1) How
On Feb 27, 10:37 am, Gordon <[EMAIL PROTECTED]> wrote:
> On Feb 26, 5:26 pm, [EMAIL PROTECTED] (Gregory Stark) wrote:
>
>
>
> > "Norman Peelman" <[EMAIL PROTECTED]> writes:
> > >> My options are, as far as I can tell,
>
> > >> 1
On Feb 26, 5:26 pm, [EMAIL PROTECTED] (Gregory Stark) wrote:
> "Norman Peelman" <[EMAIL PROTECTED]> writes:
> >> My options are, as far as I can tell,
>
> >> 1) replace the Database PDO extending class with something else that
> >> provides query results caching in PHP, or
> >> 2) get Postgres itse
On Feb 26, 11:11 am, Gordon <[EMAIL PROTECTED]> wrote:
> I'm working on a CMS that, in addition to the database-stored version
> of articles for easy searching, sorting, etc, also stores a HTML file
> version so pages can be fetched with the minimum of overhead (browsing
>
I'm working on a CMS that, in addition to the database-stored version
of articles for easy searching, sorting, etc, also stores a HTML file
version so pages can be fetched with the minimum of overhead (browsing
articles has no more overhead than accessing any other HTML file on
the server).
As I'v
On Feb 20, 4:03 am, [EMAIL PROTECTED] (hewei) wrote:
> table:
> CREATE TABLE price (
> TIMESTAMP Timestamp NULL,
> id numeric(5,0) NOT NULL,
> price numeric(10,3) NULL,
> primary key (id)
> );
> sql:
> update price set price=* where id=*;
>
> On Feb 20, 2008 11:56 AM, We
On Feb 18, 1:14 pm, pgsql_user <[EMAIL PROTECTED]> wrote:
> On Feb 18, 6:08 pm, Paul Boddie <[EMAIL PROTECTED]> wrote:
>
>
>
> > On 18 Feb, 13:36, django_user <[EMAIL PROTECTED]> wrote:
>
> > > How can stop postgresql from incrementing the primary key value, so
> > > that even after many failed ins
On Dec 19, 11:32 am, Gordon <[EMAIL PROTECTED]> wrote:
> I'm a web developer who does a lot of work in PHP. The back end
> database we use is based on Postgres. I am trying to set my new
> MacBook Pro up as a development system.
>
> I have managed to in
I'm a web developer who does a lot of work in PHP. The back end
database we use is based on Postgres. I am trying to set my new
MacBook Pro up as a development system.
I have managed to install Postgres from
http://sourceforge.net/projects/pgsqlformac/
and can access the database on my mac with
I'm developing a web application in PHP and Postgres that will
basically serve as a CMS. I want to implement a feature to allow
users to make copies of documents or folders, so this will require the
appropriate rows to be duplicated.
If possible I'd like to do this with SQL queries and avoid SELE
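A sketch of row duplication done purely in SQL (table and column names are
hypothetical); listing the columns explicitly lets the sequence hand out a
fresh primary key for the copy:

INSERT INTO documents (title, body, folder_id)
SELECT title, body, folder_id
FROM documents
WHERE doc_id = 123;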
Hi-
Anyone know where to find info about whether or not the new US DST rules
impact certain versions of Postgres and what needs to be done to ensure
observance of the new rules? Thanks.
-- adam
I am coming from Oracle and MySQL. In Oracle I can use
spool foo.txt
spool off
After this, the file foo.txt shows everything the screen showed.
If I had echo on, it would show the commands and results. If I turn
on timing, it shows there too.
With MySQL I can similarly use tee foo.txt and notee.
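The closest psql analogues, as a sketch (check your version's psql docs):

-- Start psql with a session log:  psql -L foo.txt mydb
-- Or, inside a session, send query results to a file:
\o foo.txt
SELECT now();
\o
-- \timing on prints elapsed time per query, much like "set timing on".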
count something (e.g.
SPAM), and if a record doesn't exist, make one with a count of 1.
I don't know whether something similar is available in Postgres.
Gordon L. Burditt
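The pattern being asked about is the classic upsert. On modern Postgres
(9.5+) it is built in; a sketch with hypothetical names, assuming a unique
constraint on sender:

INSERT INTO spam_counts (sender, hits)
VALUES ('bob@example.com', 1)
ON CONFLICT (sender) DO UPDATE
    SET hits = spam_counts.hits + 1;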
Hello
I have the following problem: I am on Windows 2000 Server. The PostgreSQL 8.0
database is about 300 MB, and the hardware is 2 x P3 1 GHz, 1 GB RAM, and a
SCSI HD.
How should I set shared_buffers and the other parameters for better
performance? For example:
When I read a table from a workstation (20,000 records and 20 columns)
max_connections = 50 on my server
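A rough 8.0-era starting point for a 1 GB machine, as a sketch only (these
settings predate the human-readable unit suffixes added in 8.2; real tuning
needs workload testing):

shared_buffers = 10000          # 8 kB buffers, so roughly 80 MB
effective_cache_size = 65536    # 8 kB pages, ~512 MB of expected OS cache
work_mem = 4096                 # kB per sort/hash operation, per backend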
If I have a table that I only use for INSERTs and queries (no UPDATEs or DELETEs), is
it enough to just run ANALYZE on the table instead of VACUUM ANALYZE? In other words,
is running a VACUUM on a table useful if all that you're doing is INSERTing into it?
My understanding of VACUUM is that it
anybody else had this problem? What debugging
methods/tools/logs will help.
This is a very small development shop, so I'm
throwing this out to a larger community for the first time.
Thanks,
Gordon Campbell
[EMAIL PROTECTED]
Educational Technologist
212-854-1869
PPro200/1M)
with 1.5GB RAM and two RAID-5E arrays. My customer
is running on a Dell PowerEdge 2400 (2xPIII 866)
with 512MB RAM with a RAID-1 and a RAID-10 array.
This one is amazingly fast!
As always, your mileage may vary, contents may
have settled during shipment, and objects in
mirror ar
cp) NULLs
as an ASCII zero character. Not pretty. You'll want
to put together a little Perl script to fix that.
Gordon.
--
It doesn't get any easier, you just go faster.
-- Greg LeMond
A filesystem.
Thanks,
Gordon.
--
It doesn't get any easier, you just go faster.
-- Greg LeMond
not find
> postgres.h
>
> Does it need to look for another file or do I need to tell it where to
> look for it?
I had that problem too, as recently as RC2. I copied
the file myself, as well as the util/ directory (in
src/include/), which is also needed.
Not sure why make install doe
ich commit
every n records, selectable by the user. I think having
a feature like this in COPY would greatly facilitate
data migrations (which is what I'm doing, and the reason
for such a big file). What do you think?
Thanks,
Gordon.
--
It doesn't get any easier, you ju
failed: No such file or directory
DEBUG: redo done at (0, 3288327848)
FATAL 2: ZeroFill(logfile 0 seg 196) failed: No space left on device
/opt/postgresql/bin/postmaster: Startup proc 7922 exited with status 512 - abort
-
Thanks,
Gordon.
--
It doesn't get any easier, you just go faster.
-- Greg LeMond
e COPY to the loc table, this
> problem does not occur. Am I going to have to resort to inserts for the
> referring tables?
I can't answer the backend question, but how about running
'split' on the big file, then COPYing these smaller files?
Gordon.
ame.del -c -t "|" -r "\n" \
-S server -U user -P password
This will pull the data out, with '|' as the field delimiter
and a newline as a record separator.
Now you can COPY the data in using '|' as the delimiter.
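A sketch of that load step (modern COPY syntax shown; very old releases
spelled it USING DELIMITERS; table name and path are placeholders):

COPY mytable FROM '/path/to/file.del' WITH (FORMAT text, DELIMITER '|');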
If you have BLOB data types, those tables will