I am stuck with a segmentation fault while running pg_upgrade, from 8.4.3 to
9.0.1
$ ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D /var/pgsql-9_0_1/data/ -b
/var/pgsql-8_4_3/bin/ -B /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G
debug
Running in verbose mode
Running in debug mode
Performing Consistency Checks
2010/11/2 Grzegorz Jaśkiewicz
> try gdb --args ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D
> /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B
> /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug
> and when it fails, type in 'bt' and paste it here please.
>
> --
> GJ
>
Well, this is strange. I
2010/11/2 hernan gonzalez
> 2010/11/2 Grzegorz Jaśkiewicz
>
>> try gdb --args ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D
>> /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B
>> /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug
>> and when it fails, type in
In pg_upgrade/controldata.c, the putenv2 function:
    char *envstr = (char *) pg_malloc(ctx, strlen(var) + strlen(val) + 1);
    sprintf(envstr, "%s=%s", var, val);
Shouldn't it be "+ 2" instead of "+ 1"? (One byte for the '=', plus one
for the null-terminating char.)
I think that replacing that 1 with 2 is enough to make it work, at least
for me.
But it's not enough to make valgrind happy (it still reports 4 "definitely
lost" blocks, all from that putenv2 function). Perhaps that's related to the
comment:
/*
 * Do not free envstr because it becomes part of the environment
 */
Most examples in the array documentation use ARRAY[1,2,3] and similar.
http://www.postgresql.org/docs/9.0/interactive/functions-array.html
I think (actually I have experienced it, both myself and in others)
that this can be misleading in some cases.
For example: array_upper(ARRAY[1,2,3,4], 1)
(After dealing with this for a while, and learning a little, I thought of
posting this as a comment in the docs, but perhaps someone who knows better
can correct or clarify.)
The issues of charset encodings
> It seems to me that postgres is trying to do as you suggest: text is
> characters and bytea is bytes, like in Java.
But the big difference is that, for the text type, postgresql knows "this
is a text"
but doesn't know the encoding, as my example showed. This goes against
the concept of "text vs bytes"
> Umm, I think all you showed was that the to_ascii() function was
> broken. Postgres knows exactly what encoding the string is in, the
> backend encoding: in your case UTF-8.
That would be fine, if it were true; then, one could assume that every
postgresql function that returns a text gets ALW
> IMHO, the semantics of encode() and decode() are correct (the bridge
> between bytea and text ... in the backend encoding; they should be the
> only bridge), convert() is also ok (deals with bytes), but
> convert_to() and convert_from() are dubious if not broken: they imply
> texts in arbitrary encodings
Another example (Postgresql 8.3.0, UTF-8 server/client encoding)
test=# create table chartest ( c text);
test=# insert into chartest (c) values ('¡Hasta mañana!');
test=# create view vchartest as
select encode(convert_to(c,'LATIN9'),'escape') as c1 from chartest;
test=# select c,octet_length(c
I'm doing some tests with date-time related fields to design my web
application.
I was already dissatisfied with Postgresql's handling of timezone
concepts (an issue already discussed here; not entirely PG's fault, rather
a SQL thing), and I vehemently
reject the idea of a global server-side timezone
> There are any number of
> server-side settings that can affect the interpretation (and display)
> of your data. Datestyle for example already renders this position
> untenable.
What makes me a little uncomfortable in this assertion -and in many
parts of PG docs-
is the emphasis put on what "is
I plan to define two domains with no constraints, sort of typedefs, to
work with date-times inside my application:
CREATE DOMAIN instant AS timestamp(3) with time zone;
CREATE DOMAIN localdatetime AS timestamp(3) without time zone;
Two questions:
1. I guess that there is no performance penalty
to_timestamp() returns a TIMESTAMP WITH TIME ZONE
Perhaps an alternative that returns a TIMESTAMP WITHOUT TIME ZONE (which,
BTW, is the default TIMESTAMP)
should be provided. Otherwise, there is no direct, robust way of parsing
a TIMESTAMP WITHOUT TIME ZONE (which
represents a "local date-time"
There is some related discussion here
http://postgresql.1045698.n5.nabble.com/to-timestamp-returns-the-incorrect-result-for-the-DST-fall-over-time-td3327393.html
But it amounts to the same thing: TO_TIMESTAMP() is not apt for dealing with
plain TIMESTAMP
(without time zones).
Hence, there is no f
> Rather than being not viable, I'd argue that it is not correct. Rather, a
> simple direct cast will suffice:
> '2011-12-30 00:30:00'::timestamp without time zone
>
That works only for that particular format. The point is that, for example,
if I have some local date time
stored as a string in ot
On Thu, Jun 23, 2011 at 4:15 PM, Adrian Klaver wrote:
> On 06/23/2011 11:40 AM, hernan gonzalez wrote:
>
>>Rather than being not viable, I'd argue that it is not correct.
>>Rather, a simple direct cast will suffice:
>>'2011-12-30 00:30:00'::ti
>
>
> Every example here starts, at its core, with to_timestamp. That function
> returns a timestamp *with* time zone so of course the current timezone
> setting will influence it. Stop using it - it doesn't do what you want.
>
> If you cast directly to a timestamp *without* time zone you can take
> As I understand it, documentation patches are welcomed:)
I'd indeed wish some radical changes to the documentation.
To start with, the fundamental data type names are rather misleading; the SQL
standard sucks here, true, but Postgresql also has its idiosyncrasies, and
the docs do not help much:
ht
On Sat, Jun 25, 2011 at 3:56 AM, David Johnston wrote:
> First: I would suggest your use of “Local Time” is incorrect and that you
> would be better off thinking of it as “Abstract Time”. My responses below
> go into more detail but in short you obtain a “Local” time by “Localizing”
> and “Abstr
>
>
> You might want to review the Theories of Relativity, which pretty much blew
> away
> the notion of an absolute time and introduced the notion of frame of
> reference
> for time.
>
>
Well, I give up.
--
Hernán J. González
http://hjg.com.ar/
I was thinking about the issue asked here, about an error in
a query causing the whole transaction to abort,
http://stackoverflow.com/questions/2741919/can-i-ask-postgresql-to-ignore-errors-within-a-transaction/2745677
which has already bothered so many postgresql users and has been
discus
(Disclaimer: I've been using Postgresql for quite a long time; I
usually deal with non-ascii LATIN-9 characters,
but that has never been a problem, until now)
My issue summarized: when psql is invoked from a user who has a locale
different from that of the database, the tabular output
is wrong fo
It's surely not a xterm problem, I see the characters ok with just the
\x formatting. I can check also the output redirecting to a file.
My original client_encoding seems to be LATIN9 in both cases,
according to the \set output.
If I change it (for the root user) to UTF8 with " SET CLIENT_ENCODING
Mmm no: \x displays correctly for me because it sends
the raw text (in LATIN9) and I have set my terminal to LATIN9 (or ISO-8859-15).
And it's not that "xterm is misdisplaying" the text; it's just that psql
is outputting
an EMPTY (zero length) string for that field.
(I can even send the output to a
Sorry about an error in my previous example (I mixed width and precision).
But the conclusion is the same - it works on bytes:
#include <stdio.h>

int main(void) {
    char s[] = "ni\xc3\xb1o";   /* 5 bytes, 4 utf8 chars */
    printf("|%*s|\n", 6, s);    /* this should pad a blank */
    printf("|%.*s|\n", 4, s);
    return 0;
}
Wow, you are right, this is bizarre...
And it's not that glibc intends to compute the length in unicode chars,
it actually counts bytes (plain C chars) -as it should- for computing
field widths...
But, for some strange reason, when there is some width calculation involved
it tries to parse the cha
d, speaks of the 'pg_option' file,
and doesn't mention those settings...
Help! Am I missing something stupid?
Hernan Gonzalez
Buenos Aires, Argentina