On Fri, Dec 12, 2014 at 5:03 AM, Paul Jungwirth wrote:
> http://www.postgresql.org/docs/9.3/static/xfunc-c.html#XFUNC-C-TYPE-TABLE
>
> It looks like bigint should be listed and should correspond to an
> int64 C type. Also I see there is an INT8OID, PG_GETARG_INT64,
> DatumGetInt64, and Int64GetDatum.
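To illustrate the mapping being asked about: a minimal sketch of a V1 C-language function that handles bigint via the int64 C type, using the macros named above. The function name double_bigint and the CREATE FUNCTION line are hypothetical, not taken from the thread or the docs page.

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(double_bigint);

/*
 * Hypothetical SQL declaration:
 *   CREATE FUNCTION double_bigint(bigint) RETURNS bigint
 *     AS 'MODULE_PATHNAME' LANGUAGE C STRICT;
 */
Datum
double_bigint(PG_FUNCTION_ARGS)
{
    /* A bigint (int8, OID INT8OID) argument arrives as the C type int64. */
    int64   arg = PG_GETARG_INT64(0);

    /* PG_RETURN_INT64() is a thin wrapper around Int64GetDatum(). */
    PG_RETURN_INT64(arg * 2);
}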
Manuel Kniep writes:
> On 11 December 2014 at 00:08:52, Tom Lane (t...@sss.pgh.pa.us) wrote:
>> There's no supported way to do that. As an unsupported way, you could
>> consider a manual UPDATE on the type's pg_type row.
> I also thought about this but I guess I have to INSERT the dependency in
On 11 December 2014 at 00:08:52, Tom Lane (t...@sss.pgh.pa.us) wrote:
> Manuel Kniep writes:
> > I have a custom type and want to add the still-missing SEND and RECEIVE
> > functions.
> > Is there any way to alter the type definition without dropping and
> > recreating it?
>
> There's no supported way to do that. As an unsupported way, you could
> consider a manual UPDATE on the type's pg_type row.
Robert DiFalco wrote
> I have users, friends, and friend_requests. I need a query that essentially
> returns a summary containing:
>
> * user (name, imageURL, bio, ...)
> * Friend status (relative to an active user)
> * Is the user a friend of the active user?
> * Has the u
On Thu, Dec 11, 2014 at 6:52 PM, Robert DiFalco wrote:
> Thanks Arthur. I don't think there is as big a difference between BIGINT
> and INTEGER as you think there is. In fact with an extended filesystem you
> might not see any difference at all.
>
> As I put in the first email I am using a GIST index
Thanks Arthur. I don't think there is as big a difference between BIGINT and
INTEGER as you think there is. In fact with an extended filesystem you
might not see any difference at all.
As I put in the first email I am using a GIST index on user.name.
I was really more interested in the LEFT OUTER JOIN
Carlos Henrique Reimer writes:
> Extracted ulimits values from postmaster pid and they look as expected:
> [root@2-NfseNet ~]# cat /proc/2992/limits
> Limit                     Soft Limit           Hard Limit           Units
> Max address space         102400               unlimited            bytes
So you'v
So if you watch processes with sort-by-memory turned on in top or htop,
can you see your machine running out of memory? Do you have enough swap
if needed? 48G is pretty small for a modern pgsql server with as much
data and as many tables as you have, so I'd assume you have plenty of
swap just in case
Extracted ulimits values from postmaster pid and they look as expected:
[root@2-NfseNet ~]# ps -ef | grep /postgres
postgres  2992     1  1 Nov30 ?        03:17:46 /usr/local/pgsql/bin/postgres -D /database/dbcluster
root     26694  1319  0 18:19 pts/0    00:00:00 grep /postgres
[root@2-NfseNet ~]# cat /proc/2992/limits
Hello,
The table of which C types represent which SQL types seems to be missing bigint:
http://www.postgresql.org/docs/9.3/static/xfunc-c.html#XFUNC-C-TYPE-TABLE
It looks like bigint should be listed and should correspond to an
int64 C type. Also I see there is an INT8OID, PG_GETARG_INT64,
DatumGetInt64, and Int64GetDatum.
Carlos Henrique Reimer writes:
> Yes, all lines of /etc/security/limits.conf are commented out and session
> ulimit -a indicates the defaults are being used:
I would not trust "ulimit -a" executed in an interactive shell to be
representative of the environment in which daemons are launched ...
ha
Yes, all lines of /etc/security/limits.conf are commented out and session
ulimit -a indicates the defaults are being used:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pen
On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer wrote:
> That was exactly what the process was doing and the out of memory error
> happened while one of the merges to set 1 was being executed.
You sure you don't have a ulimit getting in the way?
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.
On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera wrote:
>
> On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
>
>> needed to hold relcache entries for all 23000 tables
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
> needed to hold relcache entries for all 23000 tables :-(. If so there
> may not be any easy way around it, except perhaps replicating subsets
> of the tables. Unless you can boost the memory available to the backend
>
I'd suggest this. Break
Slony version is 2.2.3
On Thu, Dec 11, 2014 at 3:29 PM, Scott Marlowe wrote:
> Just wondering what slony version you're using?
>
--
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
Just wondering what slony version you're using?
Thanks David! This is what I needed. I figured I was looking in the wrong
place.
On Thu, Dec 11, 2014 at 10:58 AM, David G Johnston <david.g.johns...@gmail.com> wrote:
> Jim McLaughlin wrote
> > Hi all,
> >
> > I am rewriting a pljava procedure in C++ with libpq. This procedure needs
> > to access
Jim McLaughlin wrote
> Hi all,
>
> I am rewriting a pljava procedure in C++ with libpq. This procedure needs
> to access some temp tables that the calling procedure creates and
> populates. It seems that the connection created by PQconnectdb creates a
> new connection (I have tried all permutations
Hi all,
I am rewriting a pljava procedure in C++ with libpq. This procedure needs
to access some temp tables that the calling procedure creates and
populates. It seems that the connection created by PQconnectdb creates a
new connection (I have tried all permutations of conninfo I could think
of).
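David's reply is cut off in this digest, but one common way around this, sketched here as an assumption rather than as what he actually recommended: skip the second libpq session entirely and run the logic as a C function inside the calling backend via SPI, where the session's temp tables remain visible. The function name count_temp_rows and the table name my_temp_table are placeholders.

#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(count_temp_rows);

/*
 * Runs inside the same backend as the caller, so a temp table created
 * earlier in the session is visible; a separate PQconnectdb() session
 * would not see it.
 */
Datum
count_temp_rows(PG_FUNCTION_ARGS)
{
    int64 result = 0;

    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    /* "my_temp_table" stands in for the temp table the caller creates. */
    if (SPI_execute("SELECT count(*) FROM my_temp_table", true, 0) == SPI_OK_SELECT &&
        SPI_processed > 0)
    {
        bool  isnull;
        Datum d = SPI_getbinval(SPI_tuptable->vals[0],
                                SPI_tuptable->tupdesc,
                                1, &isnull);

        if (!isnull)
            result = DatumGetInt64(d);
    }

    SPI_finish();

    PG_RETURN_INT64(result);
}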
Hi,
Yes, I agree, 8.3 has been out of support for a long time, and that is the reason
we are trying to migrate to 9.3 using SLONY to minimize downtime.
I eliminated the possibility of data corruption as the limit/offset
technique indicated different rows each time it was executed. Actually, the
failure
Carlos Henrique Reimer writes:
> I'm facing an out of memory condition after running SLONY for several hours to
> get a 1TB database with about 23,000 tables replicated. The error occurs
> after about 50% of the tables have been replicated.
I'd try bringing this up with the Slony crew.
> I guess postgre
On 12/10/2014 6:53 PM, Israel Brewster wrote:
Currently, when I need to create/edit a stored procedure in Postgresql,
my workflow goes like the following:
- Create/edit the desired function in my "DB Commands" text file
- Copy and paste function into my development database
- Test
- repeat above
On 12/10/2014 7:20 PM, Guyren Howe wrote:
I want to do something that is perfectly satisfied by an hstore column. *Except* that
I want to be able to do fast (ie indexed) <, > etc comparisons, not just
equality.
From what I can tell, there isn’t really any way to get hstore to do this, so
Hi,
I'm facing an out of memory condition after running SLONY for several hours to
get a 1TB database with about 23,000 tables replicated. The error occurs
after about 50% of the tables have been replicated.
Most of the 48GB memory is being used for file system cache but for some
reason the initial copy
> Currently, one issue you're going to face is that brin doesn't rescan a
> range to find the tightest possible summary tuple.
That's going to be an issue I think, thanks for mentioning it. We'd need
some sort of mechanism for achieving this without a complete REINDEX, even
if it only reset the min
Hi
A final followup from my side to this post for anyone who may find this
thread in archives in the future.
On the 15th of August Jacob Bunk Nielsen wrote:
> On the 1st of July 2014 Jacob Bunk Nielsen wrote:
>
>> We have a PostgreSQL 9.3.4 running in an LXC container on Debian
>> Wheezy on a L