(tail of the process limits listing)
Max nice priority         0          0
Max realtime priority     0          0
Max realtime timeout      unlimited  unlimited  us
[root@2-NfseNet-SGDB ~]#
On Thu, Dec 11, 2014 at 6:01 PM, Tom Lane wrote:
> Carlos Henrique Reimer writes:
> > Yes,
> On ..., 2014 at 12:05 PM, Carlos Henrique Reimer wrote:
> > That was exactly what the process was doing and the out of memory error
> > happened while one of the merges to set 1 was being executed.
>
> You sure you don't have a ulimit getting in the way?
>
--
Reimer
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.
On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera wrote:
>
> On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
>
>> needed to hold relcache entries for all 23000 tables
Slony version is 2.2.3
On Thu, Dec 11, 2014 at 3:29 PM, Scott Marlowe
wrote:
> Just wondering what slony version you're using?
>
--
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
(-x) unlimited
On Thu, Dec 11, 2014 at 1:30 PM, Tom Lane wrote:
> Carlos Henrique Reimer writes:
> > I'm facing an out of memory condition after running Slony for several hours
> > to get a 1TB database with about 23,000 tables replicated. The error occurs
Hi,
I'm facing an out of memory condition after running Slony for several hours to
get a 1TB database with about 23,000 tables replicated. The error occurs
after about 50% of the tables have been replicated.
Most of the 48GB of memory is being used for the file system cache, but for some
reason the initial copy still runs out of memory. Which steps should I
follow to identify the root cause
in order to prevent it from happening again?
Thank you!
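For reference, a rough way to gauge the relcache pressure mentioned in this thread is simply to count the ordinary tables the initial copy has to touch; a minimal sketch, run in the database being subscribed (nothing here is from the original post):

-- Each backend that touches all of these ends up holding a relcache
-- entry per table, which adds up with roughly 23,000 relations.
SELECT count(*) AS table_count
FROM pg_class
WHERE relkind = 'r';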
On Tue, Aug 6, 2013 at 9:14 PM, Sergey Konoplev wrote:
> On Tue, Aug 6, 2013 at 4:17 PM, Carlos Henrique Reimer
> wrote:
> > I have tried to drop the index and the reindex procedure but both fail
directory to the new box.
Hope the error will not be propagated to the new box.
Reimer
On Mon, Aug 5, 2013 at 10:42 AM, Adrian Klaver wrote:
> On 08/05/2013 06:24 AM, Carlos Henrique Reimer wrote:
>
>> Hi,
>>
>> Yes, I agree with you that it must be upgraded to a supported version
On ..., 2013 at 8:35 AM, Craig Ringer wrote:
> On 08/04/2013 02:41 AM, Carlos Henrique Reimer wrote:
> > Hi,
> >
> > I have a Windows box running Windows Server 2003 Enterprise Edition
> > Service Pack 2 with PostgreSQL 8.2.23 and getting a server crash while
> > trying
Hi,
I have a Windows box running Windows Server 2003 Enterprise Edition Service
Pack 2 with PostgreSQL 8.2.23 and getting a server crash while trying to
select a table:
select * from "TOTALL.tt_est" where assina=' kdkd' ;
Dumping the table with pg_dump or creating indexes on this table produces the
same crash.
> FROM pg_cast as ct
> , pg_type as source_t
> , pg_type as target_t
> ,pg_proc as proc
> WHERE
> ct.castsource = source_t.oid
> and ct.casttarget = target_t.oid
> and ct.castfunc = proc.oid
>
> I get 144 rows.
> http://www.rummandba.com/2013/02/postgresql-type-casting-information.html
>
>
It works if I drop the functions, but then select trim(1) does not work.
On Wed, May 15, 2013 at 5:38 PM, AI Rumman wrote:
> Drop those functions and try again.
>
>
> On Wed, May 15, 2013 at 4:22 PM, Carlos Henrique Reimer <
> carlos.rei...@opendb.com.br> wrote:
&g
issed that.
> Which version of 9.2 are you working with? I am also on 9.2 and it's
> working fine.
> Try out using
> select 'teste'||1::int;
>
> See if it works or not.
>
>
> On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer <
> carlos.rei...@opendb.co
1-to.html
>
> It'll work.
>
>
> On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer <
> carlos.rei...@opendb.com.br> wrote:
>
>> Hi,
>>
>> Currently, our application is still using PG 8.2 and we are trying to use
>> 9.2 but there are some
Hi,
Currently, our application is still using PG 8.2 and we are trying to move to
9.2, but there are some problems related to the implicit casts removed in
8.3.
Example:
1) select 'teste'||1;
2) select trim(1);
Selects 1 and 2 both run fine on 8.2, but on 9.2 select 1 is OK and select 2
fails with an error.
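One frequently cited workaround for the trim(1) case is sketched below; it is not something proposed in this thread, and re-adding implicit casts to text is known to have side effects on operator resolution, so treat it as a stopgap:

-- Run as superuser; restores the pre-8.3 implicit integer-to-text cast.
CREATE CAST (integer AS text) WITH INOUT AS IMPLICIT;

-- With the cast in place, the second statement from the post resolves
-- to btrim(text) again:
SELECT trim(1);

The safer long-term fix is an explicit cast in the application, e.g. select trim(1::text).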
Hi,
We are developing a solution which will run on thousands of small cash-till
machines running Linux, and we would like to use PostgreSQL, but there is an
insecurity feeling regarding the solution, basically because these boxes
would be exposed to an insecure environment and insecure procedures like
On ..., Carlos Henrique Reimer wrote:
>
> > Anyway it does not seem related to statistics, as the query plan
> > is exactly the same for both scenarios, morning and evening:
>
> > Morning:
>
> > Index Scan using pagpk_aux_mes, pagpk_aux_mes, pk_cadpag,
> > pk_c
An example that could help is this seqscan:
explain analyze select sittrib8 from iparq.arript where sittrib8=33;
In the evening:
Fri Feb 8 14:00:01 BRST 2013
                         QUERY PLAN
------------------------------------------------------------
Hi,
I'm trying to figure out why a query runs in 755ms in the morning and
20054ms (26x slower) in the evening.
Morning:
= 2::smallint) AND ((tipopgto)::text > '
'::text)) OR ((ano = 2013::smallint) AND (mes = 1::smallint) AND (codfunc =
29602::bigint) AND (seqfunc = 2::smallint) AND ((tipopgto)::text = '
'::text) AND (codpd > 0::smallint)))
(2 rows)
Should it not be the same inside or outside the cursor?
Hi,
We're facing a weird performance problem in one of our PostgreSQL servers
running 8.0.26.
What can explain the difference between calling the same query inside and
outside a cursor? If we run the query outside a cursor we get a response
time of 755ms, and 33454ms if we call the same query inside a cursor.
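A minimal sketch of how to compare the two plans on a reasonably current release (the 8.0 server in this thread predates some of this; the query is one quoted earlier in this digest, used here only as a stand-in):

-- Plan chosen for plain execution: the planner optimizes total runtime.
EXPLAIN SELECT sittrib8 FROM iparq.arript WHERE sittrib8 = 33;

-- Plan chosen when the same query is wrapped in a cursor: the planner
-- favors fast startup of the first rows, which can be a different plan.
BEGIN;
EXPLAIN DECLARE c CURSOR FOR
  SELECT sittrib8 FROM iparq.arript WHERE sittrib8 = 33;
ROLLBACK;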
Reimer
On Tue, Nov 13, 2012 at 5:51 PM, Tom Lane wrote:
> Carlos Henrique Reimer writes:
> > That is what I got from gdb:
>
> > ExecutorState: 11586756656 total in 1391 blocks; 4938408 free (6
> > chunks); 11581818248 used
>
> So, query-lifespan memory leak.
Hi,
That is what I got from gdb:
TopMemoryContext: 88992 total in 10 blocks; 10336 free (7 chunks); 78656
used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
Operator class
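For reference, on PostgreSQL 14 and later the same per-context numbers can be pulled without gdb; a minimal sketch, not applicable to the 8.3 server discussed in this thread:

-- Largest memory contexts of the current backend:
SELECT name, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;

-- Ask another backend (hypothetical PID 12345) to write its full context
-- tree, including ExecutorState, to the server log:
SELECT pg_log_backend_memory_contexts(12345);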
Hi,
What is the best way to attach a debugger to the SELECT and identify why it
is exhausting server storage?
Thank you in advance!
On Fri, Nov 9, 2012 at 4:10 AM, Craig Ringer wrote:
> On 11/08/2012 11:35 PM, Carlos Henrique Reimer wrote:
> > Hi Craig,
> >
> > work_mem
Disabled triggers:
tg_nfe BEFORE INSERT OR DELETE OR UPDATE ON "5611_nfarq".nfe FOR EACH
ROW EXECUTE PROCEDURE fun_nfarq.nfe('5611', 'NFARQ')
FiscalWeb=#
On Thu, Nov 8, 2012 at 10:50 AM, Craig Ringer wrote:
> On 11/08/2012 06:20 PM, Carlos Henrique Reimer wrote:
>
Hi,
The following SQL join command runs the PostgreSQL server out of memory.
The server runs on a box with Red Hat Enterprise Linux Server release 6.3
(Santiago) and PostgreSQL 8.3.21.
select wm_nfsp from "5611_isarq".wm_nfsp
left join "5611_nfarq".nfe on
wm_nfsp.tpdoc = 7 where 1 = 1 and
wm_nfsp
Hi,
We're planning to move our PostgreSQL database from one CPU box to another.
I'm considering an alternative procedure for the move, as the standard one
(pg_dump from the old box, copy the dump to the new box, psql to restore on the
new one) will take about 10 hours to complete. The idea is to install
Hi,
I need to improve performance for a particular SQL command but am facing
difficulties understanding the EXPLAIN results.
Is there a tool somewhere that could help with this?
I've stored the SQL code and corresponding explain analyze at
SQL: http://www.opendb.com.br/v1/sql.txt
Explain: http://www.open
... has no equivalent in "UTF8"
pg_dump: The command was: COPY brasil.cidade (gid, "municpio", "municpi0",
uf, longitude, latitude, the_geom) TO stdout;
pg_dump: *** aborted because of error
How can I fix this error?
Thank you!
2010/11/1 Filip Rembiałkowski
> 2010/11/1 C
Hi,
I currently have my PostgreSQL server running on a Windows box and now we're
migrating it to a Linux operating system.
Current windows configuration:
pg_controldata shows the LC_COLLATE and LC_CTYPE are Portuguese_Brasil.1252
psql \l command shows we have databases with encoding WIN1252 and
Hi,
After starting the debugger in a PostgreSQL 8.3 server running on a Windows 2003
SP2 box, I'm getting a lot of the following message in the log:
LOG: loaded library "$libdir/plugins/plugin_debugger.dll"
Configuration option changed to start the debugger:
shared_preload_libraries = '$libdir/plugins/plugin_debugger.dll'
Hi,
Yes, once the correct schema was included in the search_path, VACUUM and ANALYZE
ran fine again.
Thank you!
On Fri, Sep 10, 2010 at 11:38 AM, Tom Lane wrote:
> Carlos Henrique Reimer writes:
> > Yes, you're right! I found out a functional index using this function and
ccb(codtab)
"fk_tit_decb" FOREIGN KEY (codecb) REFERENCES "BRASIL".td_ecb(codtab)
"fk_tit_drem" FOREIGN KEY (codrem) REFERENCES "BRASIL".td_rem(codtab)
"fk_tit_rec" FOREIGN KEY (filrec, seqrec, parrec, subrec) REFERENCES
"BRASIL
"
Hi,
We are facing the following problem in a PG 8.2 server when trying to vacuum
one of our databases:
vacuumdb: vacuuming database "reimer"
INFO: vacuuming "pg_catalog.pg_database"
INFO: "pg_database": found 0 removable, 6 nonremovable row versions in 1
pages
INFO: index "pg_database_datname_
needs to be done manually and, like any manual
operation, is exposed to errors.
Maybe this has changed in the newer PG releases, but it was this way in the past.
Thank you!
On Sun, Sep 5, 2010 at 4:46 PM, Scott Marlowe wrote:
> On Sun, Sep 5, 2010 at 5:09 AM, Carlos Henrique Reimer
> wrote:
I will think about another approach. Maybe a
CLUSTER can do the job. I will start a CLUSTER and see if I can check the
progress by looking at the size of the new table's relfilenode. It will probably
end up with less than 102 GB.
Thank you!
2010/9/5 Alban Hertroys
> On 5 Sep 2010, at 12:13, Carlos Henrique Reimer wrote:
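On current releases the CLUSTER progress guesswork described above is no longer needed; a minimal sketch, assuming PostgreSQL 12 or later (the 8.x server in this thread predates it):

-- Progress of a running CLUSTER or VACUUM FULL:
SELECT pid, relid::regclass AS table_name, phase,
       heap_tuples_scanned, heap_tuples_written
FROM pg_stat_progress_cluster;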
Hi,
I need to shrink a table with 102 GB and approximately 380,000,000 rows.
There is a VACUUM FULL that has been running for 13 hours and the only messages
I get are:
INFO: vacuuming "public.posicoes_controles"
INFO: "posicoes_controles": found 43960 removable, 394481459 nonremovable
row versions in 133089
> -- and use this as a base for a
> DELETE statement...
>
> 2010/8/30, George H :
> > On Mon, Aug 30, 2010 at 5:30 AM, Carlos Henrique Reimer
> > wrote:
> >> Hi,
> >>
> >> We had by mistake dropped the referential integrity between two huge
"Seq Scan on posicoes (cost=0.00..8064108.80 rows=380245580 width=4)"
Will this work better than a pl/pgsql function as you suggested? Or is there
something even better?
Thank you!
2010/8/30 George H
> On Mon, Aug 30, 2010 at 5:30 AM, Carlos Henrique Reimer
> wrote:
> > Hi,
> >
Hi,
We had by mistake dropped the referential integrity between two huge tables
and now I'm facing the following messages when trying to recreate the
foreign key:
alter table posicoes_controles add
CONSTRAINT protocolo FOREIGN KEY (protocolo)
REFERENCES posicoes (protocolo) MATCH SIMPLE
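A sketch of the approach being suggested in this thread (find the orphaned child rows, then delete them before recreating the constraint); the NOT EXISTS formulation is mine, only the table and column names come from the ALTER TABLE above:

-- Child rows whose protocolo no longer exists in the parent table
-- (NULLs are skipped: a foreign key allows them):
SELECT pc.protocolo
FROM posicoes_controles pc
WHERE pc.protocolo IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM posicoes p WHERE p.protocolo = pc.protocolo);

-- ...and the matching DELETE, to run before re-adding the constraint:
DELETE FROM posicoes_controles pc
WHERE pc.protocolo IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM posicoes p WHERE p.protocolo = pc.protocolo);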
Hi
I have a Linux box running PostgreSQL 8.2.17 and am facing some strange results
from the to_date function.
As you can see in the following tests, the problem occurs when the template
used includes mixed upper and lower case characters for the minute (Mi or mI).
Am I using incorrect syntax, or is it a bug?
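For reference, the minute field in to_char/to_date templates is the pattern MI (the all-lowercase mi also works); a minimal sketch with made-up input values, since the ones from the original tests were cut off:

SELECT to_timestamp('08/08/2010 17:45', 'DD/MM/YYYY HH24:MI');  -- minutes parsed
SELECT to_char(now(), 'HH24:MI');                               -- minutes printed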
25, 2009 at 3:28 PM, Carlos Henrique Reimer
> wrote:
> > Hi,
> >
> > We're facing performance problems in a Linux box running CentOS release 5
> > (Final) and PostgreSQL 8.2.4. I've done some basic checks in the
> > configuration but everything looks fine to
Hi,
We're facing performance problems in a Linux box running CentOS release 5
(Final) and PostgreSQL 8.2.4. I've done some basic checks in the
configuration but everything looks fine to me. One weird behaviour I've
found is the cached size shown by the
"top" and "free" Linux commands:
top - 08:
Hi,
I have a plpgsql function that never ends when called, and I would like
to trace the internal function commands to see where the problem is.
How can I trace what the function is doing?
Thank you!
Carlos
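One low-tech way to trace such a function is to sprinkle RAISE NOTICE checkpoints through it and watch which one is the last to appear; a minimal sketch with a made-up function (nothing here is from the original post):

-- Timestamps at each checkpoint show where a call gets stuck.
CREATE OR REPLACE FUNCTION trace_demo(n integer) RETURNS integer AS $$
DECLARE
    i integer := 0;
BEGIN
    RAISE NOTICE 'trace_demo start at %', clock_timestamp();
    WHILE i < n LOOP
        i := i + 1;
        IF i % 100000 = 0 THEN
            RAISE NOTICE 'processed % iterations at %', i, clock_timestamp();
        END IF;
    END LOOP;
    RAISE NOTICE 'trace_demo done at %', clock_timestamp();
    RETURN i;
END;
$$ LANGUAGE plpgsql;

SELECT trace_demo(300000);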
Hi, When the pg_locks view is used, the internal lock manager data structures are momentarily locked, and that is why I would like to know whether some application is reading the pg_locks view, and how many times. Is there a way to discover this? Thanks in advance! Reimer
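On servers where the pg_stat_statements extension is available (an assumption; it is not mentioned in the post), a rough answer is possible; a minimal sketch:

-- pg_stat_statements must also be listed in shared_preload_libraries.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- How often statements touching pg_locks have run since the last stats reset:
SELECT query, calls
FROM pg_stat_statements
WHERE query ILIKE '%pg_locks%'
ORDER BY calls DESC;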
Hello, pg_dump complains when I use the -Fc option with: pg_dump: [archiver] WARNING: requested compression not available in this installation -- archive will be uncompressed, and the dump is not compressed... Searching the list, I've found that there is something related to zlib.
Hi, I would like to know how clustered a table is with respect to some index. How can I find that out? Reimer
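For reference, the planner already tracks this as the correlation statistic; a minimal sketch (the schema and table names are placeholders):

-- correlation near +1 or -1: heap order closely follows that column's sort
-- order (well clustered on an index over it); near 0: rows are scattered.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename = 'my_table';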
Hello, Is there a way to discover when a table or database was last vacuumed? Thanks in advance! Reimer
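On reasonably modern releases the statistics views answer this directly; a minimal sketch:

-- NULL means no (auto)vacuum or (auto)analyze has been recorded yet.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;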
I would like to change to C because it will give us better performance. As it is per-cluster, I would have to initdb again, but what will happen when the dump is reloaded?
Some characters that we have today in the SQL_ASCII database probably cannot be loaded into a LATIN1 database. Am I right?
Hi,
I'm trying to post messages to the performance list but they don't appear on the list.
What can be wrong?
Reimer
Hi,
I'm thinking of testing your suggestion, basically because there are only a few sites to connect, but there are some points that aren't very clear to me.
My doubts (see the sketch after this list for the first one):
1. How to make a view updatable? Using the rule system?
2. Why are inserts handled differently from updates?
3. Can I not use th
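On the first doubt: yes, before INSTEAD OF triggers and auto-updatable views existed, the rule system was the usual way; a minimal sketch with made-up names. It also illustrates the second doubt: an ON INSERT rule only sees NEW, while ON UPDATE rules see both OLD and NEW, which is why the two are handled differently.

CREATE TABLE accounts (id integer PRIMARY KEY, balance numeric);
CREATE VIEW accounts_v AS SELECT id, balance FROM accounts;

CREATE RULE accounts_v_ins AS ON INSERT TO accounts_v DO INSTEAD
    INSERT INTO accounts (id, balance) VALUES (NEW.id, NEW.balance);

CREATE RULE accounts_v_upd AS ON UPDATE TO accounts_v DO INSTEAD
    UPDATE accounts SET balance = NEW.balance WHERE id = OLD.id;

-- With the rules in place the view accepts writes:
INSERT INTO accounts_v VALUES (1, 100);
UPDATE accounts_v SET balance = 150 WHERE id = 1;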
Exactly. Jeff Davis <[EMAIL PROTECTED]> wrote:
Jim C. Nasby wrote:
> Or, for something far easier, try
> http://pgfoundry.org/projects/pgcluster/ which provides synchronous
> multi-master clustering.
He specifically said that pgcluster did not work for him because the databases would be at physically separate locations.
I read some documents about replication and realized that if you plan on using asynchronous replication, your application should be designed from the outset with that in mind because asynchronous replication is not something that can be easily added on after the fact.
Am I right?
Reimer
__
Hello,
Currently we have only one database, accessed by the headquarters and two branches, but the performance in the branches is very poor and I was asked to find a way to improve it.
One possible solution is to replicate the headquarters DB to the two branches.
I read about slony-i, but
Hi,
I'm trying to restore a client cluster on my Linux box, but during the restore the following error is reported:
DELETE 0
CREATE USER
\set ON_ERROR_STOP 'true'
-- PostgreSQL database cluster dump
--
\connect "template1"
You are now conne