Wim>> I have a postgres db version 8.2.15 (Yes, I know it's a rather old version).
Wim>> After correcting some disk and file system problems the postgres table seems to be corrupt, returning:
Wim>>
Wim>> ERROR: xlog flush request B67/44479CB8 is not satisfied --- flushed only to B67/429EB150
Wim>>
Wim Goedertier writes:
> After studying http://www.postgresql.org/docs/8.2/static/app-pgresetxlog.html
> I ran:
> su -c 'pg_resetxlog -f -x 0x9A0 -m 0x1 -O 0x1 -l 0x1,0xB67,0x58 '
> and yes, it worked perfectly!
> After that I could pg_dump, drop database, create database and import.
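For anyone hitting the same "xlog flush request ... is not satisfied" error, the recovery sequence Wim describes would look roughly like the sketch below. The data directory, database name and dump file are placeholders, and the -x/-m/-O/-l values must be computed for your own cluster (from pg_controldata and the WAL file names) as the pg_resetxlog docs explain; Wim's values are specific to his installation.

    # run as the postgres OS user; paths and names are placeholders
    pg_resetxlog -f -x 0x9A0 -m 0x1 -O 0x1 -l 0x1,0xB67,0x58 /var/lib/pgsql/data
    pg_ctl -D /var/lib/pgsql/data start
    pg_dump mydb > mydb.sql        # dump while the server is up again
    dropdb mydb
    createdb mydb
    psql -d mydb -f mydb.sql       # re-import the dumped data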
On Jul 6, 2012, at 9:24 PM, Gurjeet Singh wrote:
> On Thu, Jul 5, 2012 at 7:16 PM, Steven Schlansker wrote:
>
> On Jul 5, 2012, at 3:51 PM, Tom Lane wrote:
>
> > Steven Schlansker writes:
> >> Why is using an OR so awful here?
> >
> > Because the OR stops it from being a join (it possibly needs …
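The preview cuts off before the full explanation, but the usual workaround when an OR keeps the planner from treating a query as a plain, indexable join is to split the OR into a UNION. A hypothetical sketch, with invented table and column names since the original query is not shown above:

    -- an OR that spans both sides of the join ...
    SELECT o.* FROM orders o JOIN customers c ON c.id = o.customer_id
    WHERE  c.name = 'ACME' OR o.ref = 'ACME';

    -- ... can often be rewritten so that each branch stays a simple join:
    SELECT o.* FROM orders o JOIN customers c ON c.id = o.customer_id WHERE c.name = 'ACME'
    UNION
    SELECT o.* FROM orders o JOIN customers c ON c.id = o.customer_id WHERE o.ref = 'ACME';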
thx for sharing!
On Fri, Jul 6, 2012 at 9:13 PM, Don Parris wrote:
> Hi all,
>
> I believe this may be pertinent here. Last year I wrote a tutorial on
> connecting LibreOffice to the powerful PostgreSQL database server. Now
> there is an updated driver that allows read-write access. So I've updated it …
Source data has duplicates. I have a file that creates the table then
INSERTS INTO the table all the rows. When I see errors flash by during the
'psql -d -f ' I try to scroll back in the terminal to
see where the duplicate rows are located. Too often they are too far back to
let me scroll to see them.
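One way to sidestep the scrollback problem entirely is to let the database report the duplicates. A sketch only (the table and key column names are hypothetical, and this is not what was done in the thread): load the rows into a constraint-free staging table first, then ask for the keys that appear more than once:

    -- staging copies the column definitions but not the primary key / unique constraints
    CREATE TABLE staging (LIKE target INCLUDING DEFAULTS);
    -- point the INSERT script (or \copy) at staging instead of target, then:
    SELECT key_col, count(*)
    FROM   staging
    GROUP  BY key_col
    HAVING count(*) > 1;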
On 07/09/2012 04:48 PM, Rich Shepard wrote:
Source data has duplicates. I have a file that creates the table then
INSERTS INTO the table all the rows. When I see errors flash by during the
'psql -d -f ' I try to scroll back in the terminal to
see where the duplicate rows are located. Too often …
You're welcome! Heaven knows, I ask plenty of questions - it's good to be
able to offer an answer now and again. :-)
Don
On Mon, Jul 9, 2012 at 6:04 PM, Willy-Bas Loos wrote:
> thx for sharing!
>
>
> On Fri, Jul 6, 2012 at 9:13 PM, Don Parris wrote:
>
>> Hi all,
>>
>> I believe this may be pertinent here …
On Mon, 9 Jul 2012, Rob Sargent wrote:
> psql -d -f file.sql > file.log 2>&1 would give you a logfile
Rob,
Ah, yes. I forgot about redirecting stdout to a file.
> sort -u file.raw > file.uniq might give you clean data?
Not when it has the SQL on each line. I thought that I had eliminated …
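Combining both suggestions, a sketch of how the load could be captured and the duplicated statements isolated. The file names are hypothetical, and it assumes one INSERT per line, as Rich describes:

    psql -d mydb -f insert_data.sql > load.log 2>&1   # keep every error, with context
    grep 'duplicate key' load.log                     # inspect just the failures

    # dedupe the INSERT lines themselves, leaving the rest of the script alone:
    grep '^INSERT INTO' insert_data.sql | sort | uniq -d   # print only duplicated statements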
Rich Shepard wrote:
Source data has duplicates. I have a file that creates the table then
INSERTS INTO the table all the rows. When I see errors flash by during the
'psql -d -f ' I try to scroll back in the terminal to
see where the duplicate rows are located. Too often they are too far back …
> If the textual value of search_path (as per "show search_path") lists
> the schema but current_schemas() doesn't, I have to think that you've
> got a permissions problem --- the system will silently ignore any
> search_path entries for which you don't have USAGE permission.
> You said you'd done …
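A sketch of the checks and the fix being described, with hypothetical schema and role names:

    SHOW search_path;              -- the textual setting, e.g. "$user", myschema, public
    SELECT current_schemas(true);  -- what the server actually resolves for this role
    -- if myschema is listed in search_path but missing from current_schemas(),
    -- the role probably lacks USAGE on it:
    GRANT USAGE ON SCHEMA myschema TO myrole;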