I've tried a lot of sizes
but I still have messages in my log saying:
ERROR: value too long for type character varying(200)
Why is this? There are no other varchar(200) columns anywhere in my DB,
in any table. Only this column used to be 25 characters, and using
the alter ty
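For context, widening a varchar column is normally done with ALTER TABLE; a minimal sketch, using hypothetical table and column names (the poster's actual names are not shown in the thread):

```sql
-- Hypothetical names: widen a varchar(25) column to varchar(200).
ALTER TABLE orders
    ALTER COLUMN note TYPE varchar(200);
```

Since the error reports the declared type of the column the failing statement actually writes to, it is worth double-checking which table and column the INSERT or COPY targets (e.g. with \d table_name in psql).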
On Feb 5, 2012, at 10:46 PM, Tom Lane wrote:
> Drop the parentheses in the GROUP BY.
I had the suspicion that it was some kind of a late-night brain fart ;-)
I don't know where the hell the parens came from, since I've *NEVER* put
spurious parens in a group by clause before. But it took someone
No, not all of the rows.
Is that gonna work?
Thanks
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Error-while-importing-CSV-file-tp5458103p5459396.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Hi
Normally when I need to run a function during an insert I make it a trigger
function of that table.
However, in this case, I need to overwrite the table with which the trigger is
attached.
I would appreciate any suggestions on how to do this.
Bob
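One common way to change rows of the very table a trigger is attached to is a BEFORE trigger that edits NEW directly, rather than issuing an UPDATE against the same table (which risks recursion). A minimal sketch, assuming hypothetical table/column names and a simple transformation:

```sql
-- Hypothetical names. A BEFORE ROW trigger may rewrite the incoming
-- row via NEW; no separate UPDATE on the same table is needed.
CREATE OR REPLACE FUNCTION normalize_row() RETURNS trigger AS $$
BEGIN
    NEW.name := upper(NEW.name);  -- example transformation
    RETURN NEW;                   -- the modified row is what gets stored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER normalize_row_trg
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE normalize_row();
```

(EXECUTE PROCEDURE is the spelling used in the 9.x releases current at the time of this thread.)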
Osmel Barreras Piñera writes:
> I need to develop for my diploma thesis an extension that allows me to traverse
> the execution plan once it has passed through the planning and
> optimization phase. Which SPI functions could be used to obtain and work with
> the plan?
SPI is n
Scott Ribe writes:
> Is this a bug (in 9.0.4), or have I just gone cross-eyed from too much work?
> The query:
> select t0."ICD9", t0."Description", count(*)
> from (select distinct "Person_Id", "ICD9", "Description" from
> "PatientDiagnoses") as t0
> group by (t0."ICD9", t0."Description")
>
Is this ERROR thrown for all the rows?
Try the following:
select max(length(column_name)) from table_name;
It seems that some value is bigger than the defined size.
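To locate the offending rows rather than just the maximum, a follow-up query along the same lines (same hypothetical names as above):

```sql
-- Show the values that exceed the declared varchar(200) limit.
SELECT column_name, length(column_name)
FROM table_name
WHERE length(column_name) > 200;
```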
> Date: Sun, 5 Feb 2012 21:14:40 -0800
> From: w_war...@hotmail.com
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERA
OK, that problem is solved.
Thanks so much.
But I have another problem:
*ERROR: value too long for type character varying(100)*
although in the original file it was only varying(25)!
Thanks, but I still get this:
*ERROR: invalid input syntax for integer: "id"*
(1st column only)
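An error like invalid input syntax for integer: "id" on the first column usually means the CSV's header line is being read as data ("id" is the column name, not a value). If the import is done with COPY or psql's \copy, declaring the header makes it skip that line; a sketch with a hypothetical table and file name, using the column names from the message:

```sql
-- Hypothetical table/file names; HEADER makes the first line be skipped.
\copy mytable (id, name, time_starp) FROM 'data.csv' WITH CSV HEADER
```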
--
I need to develop for my diploma thesis an extension that allows me to traverse
the execution plan once it has passed through the planning and optimization
phase. Which SPI functions could be used to obtain and work with the plan?
End the injustice, FREEDOM NOW FOR OUR FIVE COMPATRIOTS
Is this a bug (in 9.0.4), or have I just gone cross-eyed from too much work?
The query:
select t0."ICD9", t0."Description", count(*)
from (select distinct "Person_Id", "ICD9", "Description" from
"PatientDiagnoses") as t0
group by (t0."ICD9", t0."Description")
order by count(*) desc limit 10;
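For reference, with the parentheses in place (t0."ICD9", t0."Description") is parsed as a single composite ROW(...) value, so the bare columns in the select list are not recognized as grouped. Dropping the parentheses, as Tom Lane suggests, the same query becomes:

```sql
select t0."ICD9", t0."Description", count(*)
from (select distinct "Person_Id", "ICD9", "Description"
      from "PatientDiagnoses") as t0
group by t0."ICD9", t0."Description"
order by count(*) desc limit 10;
```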
>
> Yeah, that's what it will look like if psql is using Apple's libedit
> library; it's unrelated to the server.
>
> I think libedit doesn't support control-r either, not totally sure
> though. In any case there are some known bugs in libedit that Apple's
> not been terribly swift to fix. I'd su
zhong ming wu writes:
> My .psql_history contains lines of the form:
> select\040sum(price)\040from\040products\040p\040join\040
Yeah, that's what it will look like if psql is using Apple's libedit
library; it's unrelated to the server.
I think libedit doesn't support control-r either, not tota
My .psql_history contains lines of the form:
select\040sum(price)\040from\040products\040p\040join\040
My psql client is 9.1.2 on Mac OS and the server is 9.0.5 on Linux.
Is the version mismatch messing up this .psql_history file?
Also, control-r to search the history isn't working at the psql prompt from
Hi all,
I wrote an application that stores a large quantity of files in the
database as large binary objects. There are around 50 tables (all in one
schema) and only one table hosts all these large objects. Every user
connects to the database using his own user, so all users are part of the
same group ro
Pat Heuvel writes:
> [ vacuumlo fails ]
> When I added the -v option, there were many "removing lo x" messages
> before the above messages appeared. I have previously tried to reindex
> pg_largeobject, but that process failed as well.
You need to get the index consistent before trying vacuu
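The truncated advice above is to make the index on pg_largeobject consistent before re-running vacuumlo. A hedged sketch of what that typically involves (run as superuser; if the corruption prevents even the reindex, recovery may require a standalone backend, and the exact steps depend on the nature of the damage):

```sql
-- Rebuild the system index on pg_largeobject, then retry vacuumlo.
REINDEX TABLE pg_largeobject;
```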
Gday all,
I have a large database with many large objects, linked to a single table.
I have been trying to backup the database so I can migrate to a later
version, but the backup has been failing due to problems within
pg_largeobject. I am not surprised at these errors, because the server
is
On Feb 5, 2012, at 11:04, Shadin_ wrote:
>
> I am new to PostgreSQL.
> I was using PGAdmin and needed to export some data from a query I had run
> and then import it into another DB.
>
> *my columns names* : id (int4), name (varchar), time_starp(timestamp)
>
> so I followed these inst
I am new to PostgreSQL.
I was using PGAdmin and needed to export some data from a query I had run
and then import it into another DB.
*my columns names* : id (int4), name (varchar), time_starp(timestamp)
so I followed these instructions
http://www.question-defense.com/2010/10/15/how-to-