Hello all,
I'm looking for a way to insert a file into a table using the available bindings
for Node.js.
Just for comparison, if I were using Java on the server, the upload code would
be like this:
protected void doPost(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
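A minimal sketch of the SQL side, assuming a bytea column (table and
column names are made up here); node-postgres (pg) can bind a Node.js
Buffer directly to the bytea parameter:

  -- table for uploaded files
  CREATE TABLE files (
      id   serial PRIMARY KEY,
      name text  NOT NULL,
      data bytea NOT NULL   -- the raw file contents
  );
  -- parameterized insert; the driver sends the Buffer as bytea
  INSERT INTO files (name, data) VALUES ($1, $2);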
GMT-03:00 Adrian Klaver :
> On 10/26/2015 03:28 PM, Leonardo wrote:
>
>> Hello all,
>>
>> I'm looking for a way to insert a file into a table using the available
>> bindings for Node.js.
>>
>> Just for comparison, if I were using Java on the server, the upload
> > May VACUUM FULL on a table improve performance of the system?
>
> No, it will make things worse.
???
Why?
"The FULL option is not recommended for routine use, but might be useful
in special cases. An example is when you have deleted or updated most
of the rows in a table and would like th
> I also recommend reindexing any table that has been VACUUM FULLed.
Mmmh, from the docs I get that in 9.0 a "vacuum full" rewrites the whole table,
so I expect the indexes to be rebuilt anyway... a separate reindex would
be completely redundant.
Am I wrong?
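A quick way to check, assuming a table named t (in 9.0+, VACUUM FULL
rewrites the table and rebuilds its indexes as part of the rewrite):

  SELECT pg_size_pretty(pg_relation_size('t')) AS table_size,
         pg_size_pretty(pg_indexes_size('t'))  AS index_size;
  VACUUM FULL t;
  -- run the size query again: the indexes were rebuilt by the rewrite,
  -- so a separate REINDEX afterwards is redundant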
> One option would be to create a simple 2-node cluster and run your PgSQL
> server in a migrateable VM backed by a SAN or, if your budget is more
> modest, a simple DRBD device.
>
> Personally, I like to use RHCS (Red Hat Cluster Services) with a DRBD
> array backing clustered LVM with Xen VM
them). Would it be a feature that could be added in the future, assuming
that the tables would then be flagged somehow as "read only"?
Leonardo
ndexes: "just" fsync it and be done with it.
Wouldn't that be useful?
Leonardo
t wouldn't take long, and it basically won't
re-write any data. This would be very useful for data that's not "that
important", but that at the same time can be made "persistent" if needed...
Am I wrong? (I'm not too familiar with WAL...)
Leonardo
have a look at
http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
It looks like those are "safe" to use with a db, and aren't that expensive.
It looks like you're searching for a Postgres equivalent of Oracle RAC. I
don't know if there is any solution for this right now in the Postgres
On Wed, Nov 26, 2014 at 8:36 AM, Postgres India
wrote:
> Hi All,
>
> I am looking for PostgreSQL active/active clustering and whether PostgreSQL
> su
Besides all the other notes, I recommend using pg_upgrade to avoid a complete
backup/restore in your transition.
http://www.postgresql.org/docs/9.2/static/pgupgrade.html
On Fri, Apr 5, 2013 at 1:30 PM, Kevin Grittner wrote:
> Robert Treat wrote:
>
> > Yeah, there were also some subtle breakage aro
ndition doesn't include accented chars. RTF
escapes accented characters as "\'f1" for ñ, "\'f3" for ó, and so on.
To escape \ and ', I read that \\ and '' should be used, so I thought
that a like '%diagn''f3stica%' shou
On Thu, 2009-10-08 at 11:28 -0400, Merlin Moncure wrote:
> 2009/10/8 Leonardo M. :
> > Hi, in my database I store RTF files inside a Bytea field. Now a
> > customer is requesting a search inside RTF fields and I'm trying to
> > implement it by issuing this query:
>
iagnóstica", but it
> > doesn't.
> >
>
> I prefer to use
>
> select * from table where i_bytea::text ~~ $$%\\row%$$;
>
> Dollar quoting is cleaner for putting strings inside than ' '. ~~ is the
> like operator.
> And the :: cast operator is cleaner to the
aseencoding(); )
>
Thanks, now this works:
set standard_conforming_strings = 0;
select
idturno,
infres::text
from turno
where
infres::text ~~ $$%diagn'f3stico%$$;
This database has WIN1252 encoding.
--
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979
I don't think that there will be too much trouble, as long as you follow
every changelog tip (9.0->9.1, 9.1->9.2 and 9.2->9.3)
On Wed, Nov 6, 2013 at 7:06 AM, Greg Burek wrote:
> Hello,
> How advisable or well known is it to take a 9.0 era db directly to 9.3
> using the latest pg_upgrade binary
Yeah, the things that matter are always at the top of the changelog, so it's
not much trouble to look after them.
On Thu, Nov 7, 2013 at 9:00 PM, Adrian Klaver wrote:
> On 11/07/2013 11:07 AM, Greg Burek wrote:
>
>> On Wed, Nov 6, 2013 at 4:36 AM, Leonardo Carneiro
> I believe this perception that SSDs are less "safe" than failure-prone
> mechanical hard drives will eventually change.
By "safe" I mean they won't corrupt data in case of crash of the machine.
fairly good storage? Just trying to get some
ideas before starting testing
(table will be 5M rows, where some of the GROUP BY
selects could return 300-400K groups)
Leonardo
he different "group by"s aren't related to one
another. My understanding is that windowing functions
can't help in that case, but I'll look at them
Thank you
Leonardo
table (appends only).
What would be the "right" or "sensible" FILLFACTOR in this case?
I guess the docs need some example of "sensible" values for
some of the "classic" cases? (I'm sure I'm not the only one
storing a timestamp index in time-order..
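For a strictly append-only table there is nothing to reserve free space
for, so if you do want to set it explicitly, a sketch (index and column
names made up here):

  -- no updates or deletes, so index pages can be packed completely full
  CREATE INDEX events_ts_idx ON events (created_at)
      WITH (FILLFACTOR = 100);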
> > I have an index on a timestamp value that is inserted, for 90%
> > of the inserts, in increasing order. No updates, no deletes on the
> > table (appends only).
>
> The bit about "increasing order" is a red herring here. If you have
> no updates, then you can leave the FILLFACTOR alone.
>
> It will be really useful to see some test results where you alter the
> fillfactor and report various measurables.
It's not that easy... stress-testing "only" the index insertion
speed won't be simple. I would have liked some "theory"...
The docs seem to imply there are some guidelines, it's
just
> Yes, I use the same approach, but I'm not aware of any such guideline
> related to fillfactor with indexes. Anyway those guidelines need to be
> written by someone, so you have a great opportunity ;-)
I did a quick test using your example. As in your test, "increasing"
values don't get any g
> What about the index size? How much space do they occupy? Analyze the
> table and do this
Of course space is different. That's not the point. The point is: I'm willing
to pay the price for another HD, if that helps with performance. But it doesn't.
>
> The minimal performance difference is
On 07/06/2011 23.52, Tom Lane wrote:
> Very fast on a very narrow set of use cases ...
Can you explain a little (if possible)?
Thank you
On Tue, Sep 13, 2011 at 8:12 AM, Venkat Balaji wrote:
> Yes. I would be excited to know if there is a possibility of multi-master
> replication system on Postgres.
>
> We will be soon using 9.1 Streaming replication.
>
> Thanks
> Venkat
>
>
> On Tue, Sep 13, 2011 at 1:31 AM, Aleksey Tsalolikhin <
Hi,
trying to find how to store a large amount (>1 rows/sec) of rows in a table
that has indexes on "random values" columns, I found:
http://en.wikipedia.org/wiki/TokuDB
Basically, instead of using btrees (which kill insert performance for random
values on large tables) they use a differ
Hi,
how can I call array_append from a user-defined C function?
I know the type of the array I'm going to use (int4[]) so if there's an
equivalent
function that can be called without going through PG_FUNCTION_ARGS stuff...
Thank you
Leonardo
t" (the array I'm using is one-dimensional).
I'll try to use that... sorry for the noise.
Leonardo
Hi everyone,
Is there any admin of the Brazilian PostgreSQL site and mailing list here?
It has already been offline for some days.
Any news?
tks in advance.
- Original message -
Date: Fri, 05 Oct 2012 17:30:15 -0400
From: "Tom Lane"
To: "Leonardo M. Ramé"
Subject: Re: [GENERAL] pg_dump problem
Cc: "PostgreSql-general"
Leonardo M. Ramé writes: > I'm trying
to migrate a Postg
Hi,
I have a simple table that has indexes on 2 integer columns.
Data is inserted very often (no updates, no deletes, just inserts):
at least 4000/5000 rows per second.
The input for the 2 indexed columns is very random.
Everything is "fine" for the first 10-20M rows; after that, performance
gets
> Does it help to reindex the index at that point?
Didn't try; but I guess a reindex of such a big table
would block inserts for a long time... but I'll try
> Bad. The partitioning code isn't designed to scale
> beyond a few dozen partitions.
What kind of problems am I going to experience?
It
> On a few very narrow applications I've gotten good
> performance in the
> low hundreds. After that things fall apart
> quickly.
Ehm... what exactly does "fall apart quickly" mean?
I can trade some "select" speed for "insert" speed...
I don't have experience with partitioning, if some of
you alr
> The usual set of tricks is to
> increase shared_buffers, checkpoint_segments, and checkpoint_timeout to cut
> down
Uh, didn't know shared_buffers played a role in index insertion as well...
got to try that. Thank you.
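A sketch of those knobs in postgresql.conf (values are illustrative
only; on 9.5+ checkpoint_segments was replaced by max_wal_size):

  shared_buffers = 4GB        # more room to keep hot index pages cached
  checkpoint_segments = 64    # pre-9.5 setting; fewer, larger checkpoints
  checkpoint_timeout = 15min  # spread checkpoints further apart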
> > The usual set of tricks is to
> increase shared_buffers,
> checkpoint_segments, and checkpoint_timeout to cut down
That did it. Setting a much higher shared_buffers helped quite a lot.
Thank you everybody for your suggestions.
Hi,
increasing shared_buffers has improved the number of inserts/second *a lot*,
so my "problem" [1] is fixed.
But now I'm worried because of the sentence (Tom Lane):
"The partitioning code isn't designed to scale beyond a few dozen partitions"
Is it mainly a planning problem or an execution-time one?
> The thing that takes the longest is planning queries. I made THAT problem
> just
> go away for the most part by using cached queries (only works within the same
> database connection, but that's no problem for me).
What do you mean by "cached queries"? Prepared statements?
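If "cached queries" means prepared statements, the SQL-level equivalent
looks like this (table and parameter types made up here):

  -- plan once per connection, then reuse the plan for every insert
  PREPARE ins (int, int) AS
      INSERT INTO measurements (a, b) VALUES ($1, $2);
  EXECUTE ins(42, 7);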
I'm sorry I have to come back to this, but the solution the list gave helped,
and yet didn't fully solve my problems...
To sum up:
I have a simple table that has indexes on 2 integer columns.
Data is inserted very often (no updates, no deletes, just inserts): at
least 4000/5000 rows per second. The
Hi all,
we're going to deploy a web app that manages users/roles for another
application.
We want the database to be "safe" from changes made by malicious
users.
I guess our options are:
1) have the db listen only on local connections; basically, when the
machine is accessed, the db could be "compromised"
> Personally I would lean toward making
> the bulk of security within the
> application so to simplify everything - the
> database would do what it
> does best - store and manipulate data - and the
> application would be the
> single point of entry. Protect the servers - keep
> the applications
> I think this point number 2 is pretty important. If at all possible, keep
> the webapp separate from the database, and keep the database
> server on a fairly restrictive firewall. This means that someone has
> got to get into the webapp, then hop to the database server; it just
> adds another
At this page:
http://wiki.postgresql.org/wiki/Auto-partitioning_in_COPY
I read:
"The automatic hierarchy loading code is currently integrated
in the code of the COPY command of Postgres 8.5"
Is that true?
Hi all,
I have a very big table (2000 inserts per sec, I have to store 20 days of data).
The table has 2 indexes, in columns that have almost-random values.
Since keeping those two indexes up-to-date can't be done (updating 2
indexes 2000 times per second with random values on such a huge table
> Could you please explain the reason to do so many
> partitions?
Because otherwise there would be tons of rows in each
partition, and randomly "updating" the index for that many
rows 2000 times per second isn't doable (the indexes
get so big that it would be like writing a multi-GB file
randomly
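An era-appropriate sketch of that layout with inheritance partitioning
(table, column, and slice names made up here); each time slice gets its
own child table, so each index stays small enough to update cheaply:

  CREATE TABLE events (ts timestamptz NOT NULL, a integer, b integer);
  -- one child per hour; inserts go to the current child only
  CREATE TABLE events_h00
      (CHECK (ts >= '2010-06-16 00:00' AND ts < '2010-06-16 01:00'))
      INHERITS (events);
  CREATE INDEX events_h00_a_idx ON events_h00 (a);
  CREATE INDEX events_h00_b_idx ON events_h00 (b);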
> Well the situation is still ambiguous
> so:
> Is it possible to provide this table and indexes definitions?
> And it
> would be great it you describe the queries you are going to do
> on this table
> or just provide the SQL.
Sure!
Basically what I'm trying to do is to partition the index in
> For "inserts" I do not see the reason
> why
> it would be better to use index partitioning because AFAIK
> b-tree
> would behave exactly the same in both cases.
No, when the index gets very big, inserting random values gets
very slow.
But still, my approach doesn't work because I thought Postgres
> AFAIU the OP is trying to give the cache a chance of
> doing some useful
> work by partitioning by time so it's going to be forced to
> go to disk
> less.
Exactly
> have you
> considered a couple of
> "levels" to your hierarchy. Maybe bi-hourly (~15
> million records?)
> within the current
> thanks very much for your
> help.
> It gave me a good idea of what to do. If you have further
> recommendations, I
> would be glad to hear them.
I guess you should give more info about the expected
workload of your server(s)... otherwise you risk spending
too much money/spending your money in a
>I'm trying to make a query that, given N and a date, gives me the interval of
>N hours with the max(sum(...)).
select sum(i) as s, timestamp '2010-06-16 00:00:00' + extract(hour from
d)::integer/3*3 * interval '1 hour' as sumd
from p
where d = '2010-06-16'
group by extract(hour from d)::integer/3
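Taking it one step further to pick the bucket with the largest sum, a
sketch for N = 3 (assuming p(d timestamp, i integer) as above):

  select timestamp '2010-06-16 00:00:00'
         + (extract(hour from d)::integer / 3) * interval '3 hours' as bucket,
         sum(i) as s
  from p
  where d >= '2010-06-16' and d < '2010-06-17'
  group by 1
  order by s desc
  limit 1;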
Hi,
since PostgreSQL multidimensional arrays can't have different sizes per axis, I
was wondering what would happen if I used an array of, say, 10x10
elements, where only 10x2 elements are filled and the rest are NULL. I guess
the NULL elements take space (and I would have 80% of the space wasted).
> i thought PG multidimensional arrays were just arrays of arrays, and any
>dimension could be anything.
from:
http://www.postgresql.org/docs/8.4/static/arrays.html
"Multidimensional arrays must have matching extents for each dimension. A
mismatch causes an error"
Hi,
I need to generate aggregates of data coming from a stream.
I could easily do it by inserting data coming from the stream into a table,
and then querying it using something like:
select ... from atable group by ...
The problem with this approach is that I would have to wait for the whole
stream to
> In pl/pgsql at any rate, functions which return a set of rows build up
> the entire result set in memory and then return the set in one go:
Ok, then pl/pgsql and pl/python (which can't return SETOF) are ruled out.
(Thank you for pointing that out).
But pl/perl seems to do the trick:
"PL/Per
buntuforums.org/showthread.php?t=1307864
https://bugs.launchpad.net/ubuntu/+bug/461105
Best regards,
Leonardo C.
tabase, execute the DDL command, and reconnect the
program again.
What can be causing this behavior? Any workaround?
--
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979
On Tue, 2009-12-29 at 11:20 -0500, Bill Moran wrote:
> In response to "Leonardo M." Ramé :
>
> > Hi, I need to create a trigger on a table used by our software, the
> > problem is, when I issue a "create trigger" on this table, it takes
> > for
On Tue, 2009-12-29 at 14:48 -0300, Leonardo M. Ramé wrote:
> On Tue, 2009-12-29 at 11:20 -0500, Bill Moran wrote:
> > In response to "Leonardo M." Ramé :
> >
> > > Hi, I need to create a trigger on a table used by our software, the
> > >
database,
when I set the transaction isolation level to ReadCommitted, the locking
problem appears; when I use the default connection method, the locks
don't appear when I do "select * from pg_locks".
This solves the locking problem, but what happens to transactions? The
app is still
ansaction mode _is_ read committed :-).
>
> merlin
>
Merlin, knowing this, I'm asking the developers of the connection
library because in their code, if I use the default connection mode,
then transactions are ignored, applying the changes immediately
after every Insert, U
flag of leaky application code.
>
> merlin
I did the select * from pg_locks right after your answer, and found that
almost all locks originated by my app have "granted = t"; also, all are
"idle in transaction". The interesting thing is the app is doing
only Selects, without
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979
> The docs are mute on this.
Not true. Read the NOTES section of
http://www.postgresql.org/docs/8.4/static/sql-cluster.html :
> VACUUM ANALYZE;
> CLUSTER;
> REINDEX DATABASE "database";
ANALYZE has to go after CLUSTER; and CLUSTER already
vacuums the tables (I'm not 100% sure though). CLUSTER also
reindexes the whole table, so there's no need for another REINDEX.
I think the right way of doing it would be:
CLUSTER;
ANALYZE;
Hello everyone,
Are the pages hosted at http://projects.postgresql.org/ offline? I can't
access any of them.
Tks in advance.
Have a look at Mysql gotchas...
http://sql-info.de/mysql/database-definition.html#2_4
So here's another little gem about our friends from Uppsala: If you create a
table with InnoDB storage and your server does not have InnoDB configured, it
falls back to MyISAM without telling you.
As i
I got a table with oid 25459.
The file is 1073741824 bytes big.
I did some more inserts, and now I have these two new files:
size/name:
1073741824 25459.1
21053440 25459.2
What are they?
The 25459.1 looks exactly like the 25459.
I tried looking at the docs, but searching for ".1" or ".2" wasn't that
When the data file for a specific table (or index) is larger than 1GB,
it's split up into 1GB segments. This is probably a leftover from the
time when OSs had problems with large files.
Thank you.
Is there any documentation I can read about this?
the 8.x.x versions for Windows.
Is there any way to get the older versions for Window$?
Thanx in advance.
--
Leonardo Mateo.
Keep bugs out, close your Windows!!!.
Hi,
I still don't understand how replication can be used in web applications.
Given this scenario:
1) user updates his profile -> update to the db (master)
2) web app redirects to the "profile page" -> select from db (slave)
Since (2) is a select it is issued to the slave.
How can one be sure
pg_cluster is an example of a synchronous replication method (although
it's really considered multi-master...not master-slave)
It looks like pgcluster is what I would need, I just don't understand
how it works... aren't there some "good" docs about it?
What are its limits?
And: is there a w
We use continuent at work (albeit for mysql...) on a three node cluster.
That's a good project; the only thing I don't like is that one is forced
to use Java, which is not what I'd like to do (I'd prefer Ruby).
Thank you everybody for your answers.
Leonardo
With MySQL I know how much space a row will take, based on the datatypes
of its columns. I also (approximately) know the size of indexes.
Is there a way to know that in postgresql?
Is there a way to pack (compress) data, as with myisampack for mysql?
Thank you
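PostgreSQL can report the sizes at run time instead; a sketch (table
and index names made up here). Compression of large values happens
automatically via TOAST, so there is no myisampack equivalent:

  -- bytes used by one row's column data
  SELECT pg_column_size(t.*) FROM mytable t LIMIT 1;
  -- on-disk size of the table and of one of its indexes
  SELECT pg_size_pretty(pg_relation_size('mytable')),
         pg_size_pretty(pg_relation_size('mytable_pkey'));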
I was reading "Don't be lazy, be consistent: Postgres-R,
a new way to implement Database Replication"
and I found this:
"5.1 General configuration
PostgreSQL uses a force strategy to avoid redo recovery,
flushing all dirty buffer pages at the end of each
transaction. With this strategy, response
A couple of days ago I announced that I wrote a JDBC driver that
adds table partitioning features to databases accessed via JDBC.
I also wrote:
> In case you think this could be of any interest if integrated
> in Postgresql (I mean if it was a core functionality of Postgresql,
> not just a JDBC d
I read "Chapter 23. Monitoring Database Activity" to monitor postgresql,
but on Solaris it doesn't work. I tried "/usr/ucb/ps", but it doesn't
work either (I only see the postmaster startup parameters). Isn't there
any other solution to see what postgresql instances are doing?
I wrote a function to sum arrays.
It works, but I had to cast the data pointer to int64 (because my arrays
are 'int8[]'):
int64 *ptr1 = (int64 *) ARR_DATA_PTR(v1);
What if I want to write a more general function that adds values of 2
arrays of every int type? How could I do it?
Here is the function (if y
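On modern PostgreSQL (9.4+) the element-wise sum can also be written in
plain SQL, sidestepping the C-level type dispatch; a sketch for int8[]:

  CREATE FUNCTION array_add(a int8[], b int8[]) RETURNS int8[]
  LANGUAGE sql IMMUTABLE AS $$
      -- unnest both arrays in lock step, add, and re-aggregate in order
      SELECT array_agg(x + y ORDER BY ord)
      FROM unnest(a, b) WITH ORDINALITY AS t(x, y, ord);
  $$;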
"In addition, your original invocation of the postmaster command
must have a shorter ps status display than that provided by each
server process."
Yes, using PGDATA instead of the whole path with the -D option worked:
now I can see the different status displays.
Here's the answer.
http://pdenya.com/2014/01/16/postgres-bytea-size/
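The gist of the linked approach, as a sketch (table and column names
made up here):

  -- octet_length() on a bytea value reports its size in bytes
  SELECT pg_size_pretty(sum(octet_length(file_data))::bigint) AS total
  FROM attachments;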
On 04/03/15 at 12:17, John R Pierce wrote:
On 3/4/2015 7:03 AM, María Griensu wrote:
I need to figure out how I can weigh BLOB objects in a table of a
DB, I'm not an expert on these topics, so I appreciate any help you can
e not
dumped, but why don't I get any CREATE SEQUENCE command in my dump?
--
Leonardo M. Ramé
http://leonardorame.blogspot.com
On 19/03/15 at 13:09, Adrian Klaver wrote:
On 03/19/2015 08:43 AM, "Leonardo M. Ramé" wrote:
Hi, I'm creating a database dump excluding one table and found only the
sequences created implicitly (using serial type) are created when I
restore the dump.
The command I use
Hi, I had to change the O.S. timezone and apparently PostgreSql keeps
using the old timezone. How can I force it to update its time zone?
Using PostgreSql 8.4 on Ubuntu Server 12.04.
To update the OS timezone I used sudo dpkg-reconfigure tzdata
Regards,
--
Leonardo M. Ramé
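What matters to the server is its own timezone setting; a sketch for
checking and reloading it (on 8.4 it is read from postgresql.conf):

  SHOW timezone;            -- what the server is currently using
  -- after editing the timezone line in postgresql.conf:
  SELECT pg_reload_conf();  -- re-read the config without a restart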
On 20/03/15 at 12:38, Steve Crawford wrote:
On 03/20/2015 08:29 AM, "Leonardo M. Ramé" wrote:
Hi, I had to change the O.S. timezone and apparently PostgreSql
keeps using the old timezone. How can I force it to update its time
zone?
Using PostgreSql 8.4 on Ubuntu Server 12
On 20/03/15 at 13:03, Adrian Klaver wrote:
I am not sure what the exact issue is?
How are you determining that the new time zone is not being used?
What was the old time zone, what is the new one?
I don't know if the new time zone is being used; it just coincides with
the old local time zone
On 19/03/15 at 14:13, Adrian Klaver wrote:
On 03/19/2015 10:02 AM, "Leonardo M. Ramé" wrote:
On 19/03/15 at 13:09, Adrian Klaver wrote:
On 03/19/2015 08:43 AM, "Leonardo M. Ramé" wrote:
Hi, I'm creating a database dump excluding one table and found o
On 20/03/15 at 14:11, "Leonardo M. Ramé" wrote:
On 20/03/15 at 13:03, Adrian Klaver wrote:
I am not sure what the exact issue is?
How are you determining that the new time zone is not being used?
What was the old time zone, what is the new one?
I don't know if the n
** Error **
ERROR: column "sessiontimestamp" does not exist
SQL state: 42703
Character: 28
But if I do:
DELETE From sessions WHERE "SESSIONTIMESTAMP" < '2010-01-01 10:02:02'
It DOES work.
Why doesn't the db recognize the name of the column without quotes?
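This is identifier case folding: unquoted names are folded to lower
case, so they only match identifiers created in lower case. A sketch:

  -- the column was created as "SESSIONTIMESTAMP", so:
  DELETE FROM sessions WHERE sessiontimestamp < '2010-01-01';
      -- looks for a column named sessiontimestamp -> error
  DELETE FROM sessions WHERE "SESSIONTIMESTAMP" < '2010-01-01';
      -- matches the quoted, upper-case column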
On 26/03/15 at 14:17, Ashesh Vashi wrote:
[Sent through mobile]
On Mar 26, 2015 10:43 PM, Leonardo M. Ramé wrote:
>
> Ok, I have this table:
>
> CREATE TABLE sessions
> (
> "SESSIONID" integer NOT NULL,
> "S
On 26/03/15 at 14:18, Adrian Klaver wrote:
On 03/26/2015 10:12 AM, "Leonardo M. Ramé" wrote:
Ok, I have this table:
CREATE TABLE sessions
(
"SESSIONID" integer NOT NULL,
"SESSIONTIMESTAMP" character varying(45) NOT NULL,
"SESSIONDATA&q
On 26/03/15 at 14:23, Francisco Olarte wrote:
Hi Leonardo:
On Thu, Mar 26, 2015 at 6:12 PM, "Leonardo M. Ramé" wrote:
DELETE From sessions WHERE SESSIONTIMESTAMP < '2010-01-01 10:02:02'
ERROR: column "sessiontimestamp" does not exist
LINE 1: DELETE Fro
s already there.
Regards,
--
Leonardo M. Ramé
http://leonardorame.blogspot.com
Hi, I'm trying to move a db from postgres 8.1 encoded LATIN9 on a debian 4.0
box to postgres 8.4 encoded UTF8 on a rh6.6 (the whole job is to decommission
the old server, and migrate and upgrade the bugzilla application)
I would like to restore dumped data in the new utf8 db solving the problem of
chars li
one encoding (or multiple
encodings) to UTF-8 and, in a previous test, I ran recode.pl to convert the
data dumped as latin9 (of course editing the "client_encoding" from latin9 to
utf8), and then no "strange chars" were shown after restoring into the new utf8
database.
Thank you v
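The usual trick is to let the restore itself do the conversion by
declaring the dump's encoding; a sketch:

  -- at the top of the dump, or in psql before feeding it the data:
  SET client_encoding = 'LATIN9';
  -- the UTF8 server then converts all incoming text automatically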
]: Leaving directory 'E:/postgresql-9.3.10/src'
GNUmakefile:11: recipe for target 'all-src-recurse' failed
mingw32-make: *** [all-src-recurse] Error 2
Does anyone know how I can get rid of this error?
Regards,
--
Leonardo M. Ramé
http://leonardorame.blogspot.com
On 04/11/15 at 00:05, Tom Lane wrote:
"Leonardo M. Ramé" writes:
Hi, I'm trying to build the client library of PostgreSql 9.3.x using
this version of MinGW's gcc:
...
g++ -DFRONTEND -I../../src/include -I./src/include/port/win32
-DEXEC_BACKEND "-I../../src/include/port
On 04/11/15 at 06:00, Leonardo M. Ramé wrote:
On 04/11/15 at 00:05, Tom Lane wrote:
"Leonardo M. Ramé" writes:
Hi, I'm trying to build the client library of PostgreSql 9.3.x using
this version of MinGW's gcc:
Nevermind, deleted my Min
Hi, is there a way to get an array converted to json without brackets?
I'm getting, for example, [{"field": "value"}, {"field": "value"}] and I
want to get this: {"field": "value"}, {"field": "value"}.
Regards,
On 13/11/15 at 10:49, Merlin Moncure wrote:
On Fri, Nov 13, 2015 at 7:20 AM, Leonardo M. Ramé wrote:
Hi, is there a way to get an array converted to json without brackets?
I'm getting, for example [{"field": "value"}, {"field": "value"
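The reply is cut off above; a sketch of one way to do it (9.3+),
unnesting the array and re-joining the elements without the outer
brackets:

  SELECT string_agg(e::text, ', ')
  FROM json_array_elements(
      '[{"field":"value"},{"field":"value"}]'::json) AS e;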
Hi, I'm trying to download Skytools 3.2, but pgFoundry seems to be down.
Does anyone know another place to download it from?
Regards,
--
Leonardo M. Ramé
Medical IT - Griensu S.A.
Av. Colón 636 - Piso 8 Of. A
X5000EPT -- Córdoba
Tel.: +54(351)4246924 +54(351)4247788 +54(351)4247979 int. 19
Cel.