2013/8/26 Jeff Janes
> On Mon, Aug 26, 2013 at 10:01 AM, Torello Querci wrote:
> > Ok,
> >
> > now the index creation has finished, using maintenance_work_mem=100MB.
> >
> > Thanks to all.
> >
> > I suppose that a clearer error message would help.
>
> Unfortunately, since no one knows what the real
Yes,
the table is bigger than 512MB.
Thanks for your tips.
Best Regards, Torello
2013/8/26 Rafael Martinez Guerrero
> On 08/26/2013 06:37 PM, Torello Querci wrote:
> >
> > At the moment I get this error while executing the restore of the big
> > table in a different database on the same m
Peter Eisentraut,
My name is Minmin, and I come from China. At present, I am responsible for
translation work. On the site, I saw some information about a call for
translations. I have great interest in this job and have time to do it. I
hope to have the opportunity to do this work.
> ...replication between PostgreSQL 9.2.4 and Oracle Database
...
> We are also thinking of developing a solution based on triggers and/or WAL
Before you reinvent that first wheel, check out Bucardo, which is a
trigger-based solution that can go from Pos
On Mon, Aug 26, 2013 at 3:17 AM, gajendra s v wrote:
> Please explain to me why this is?
>
A good place to start would be removing all the parts here that don't seem
to matter. Your problem seems to be with the recursive query (since that is
the part you're changing). Cut off everything else and com
Janek Sendrowski wrote
> Hi,
>
> Thanks for all your answers.
>
> I'll give the contains operator and int4range a try, but before that
> I'd like to know if this would work:
>
> CASE WHEN a >= 0 AND a < 25
> CASE WHEN a >= 25 AND a < 50
>
> There wouldn't be a
Hi,
Thanks for all your answers.
I'll give the contains operator and int4range a try, but before that I'd like to know if this would work:
CASE WHEN a >= 0 AND a < 25
CASE WHEN a >= 25 AND a < 50
There wouldn't be a double endpoint. I just have to decide which range the endpoint i
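For the int4range route, a minimal sketch (assuming integer values; the
default inclusive-lower/exclusive-upper bounds put each endpoint in exactly
one range):

    SELECT 24 <@ int4range(0, 25);   -- true:  24 falls in [0,25)
    SELECT 25 <@ int4range(0, 25);   -- false: 25 belongs to the next range
    SELECT 25 <@ int4range(25, 50);  -- true:  [25,50) owns the endpoint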
On Mon, Aug 26, 2013 at 10:01 AM, Torello Querci wrote:
> Ok,
>
> now the index creation has finished, using maintenance_work_mem=100MB.
>
> Thanks to all.
>
> I suppose that a clearer error message would help.
Unfortunately, since no one knows what the real problem is, we can't
make the message more c
On 08/23/2013 03:29 AM, sachin kotwal wrote:
> Create a sample table with one or two rows, then use the following
> command to populate the data:
> INSERT INTO TABLE_NAME VALUES(generate_series(1,10));
Cartesian joins are also useful - especially when you want
semi-realistic data. A quick Google will g
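For instance, a minimal sketch of the Cartesian-join trick (table and column
names invented for illustration):

    -- CROSS JOIN two small value lists to get every combination (3 x 3 = 9 rows)
    CREATE TABLE people (name text, city text);
    INSERT INTO people
    SELECT n.name, c.city
    FROM (VALUES ('alice'), ('bob'), ('carol')) AS n(name)
    CROSS JOIN (VALUES ('Oslo'), ('Rome'), ('Pune')) AS c(city);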
On 08/26/2013 06:37 PM, Torello Querci wrote:
>
> At the moment I get this error while executing the restore of the big
> table in a different database on the same machine:
>
> psql:dump_ess_2013_08_26.sql:271177424: SSL error: sslv3 alert
> unexpected message
> psql:dump_ess_2013_08_26.sql:2
Ok,
now the index creation has finished, using maintenance_work_mem=100MB.
Thanks to all.
I suppose that a clearer error message would help.
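For the archives, roughly the sequence that worked (the index definition is
the one from my earlier message):

    SET maintenance_work_mem = '100MB';  -- the default was too low for this table
    CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
        ON dati
        USING btree
        (impianto_id, tipo_dato_id, data_misurazione DESC);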
Best Regards, Torello
2013/8/26 Torello Querci
>
> 2013/8/26 Tom Lane
>
>> Torello Querci writes:
>> > 2013/8/26 Luca Ferrari
>> >> Is it possible t
On Sun, Aug 25, 2013 at 11:08 PM, 高健 wrote:
> Hello:
>
> Sorry to disturb you.
>
> I am now encountering a serious problem: there is not enough memory.
>
> My customer reported that when they run a program, they found that total
> memory and disk I/O usage both reached the threshold value (80%).
>
> That pr
2013/8/26 Tom Lane
> Torello Querci writes:
> > 2013/8/26 Luca Ferrari
> >> Is it possible to test with an incremented work_mem value?
>
> > Actually I use the default work_mem value (1MB).
>
> maintenance_work_mem is what would be used for CREATE INDEX.
Ok, thanks.

> FWIW, though, the
Torello Querci wrote:
> 2013/8/26 Luca Ferrari
>
> > On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci wrote:
> > > ERROR: unexpected end of tape
> >
> > Really strange; if I get it right, something went wrong while sorting
> > tuples.
> > Is it possible to test with an incremented work_mem val
Torello Querci writes:
> 2013/8/26 Luca Ferrari
>> Is it possible to test with an incremented work_mem value?
> Actually I use the default work_mem value (1MB).
maintenance_work_mem is what would be used for CREATE INDEX.
FWIW, though, the combination of this weird error and the fact that you
On Sun, Aug 25, 2013 at 7:57 PM, 高健 wrote:
> Hi :
>
> Thanks to Alvaro! Sorry for replying late.
>
> I have understood a little about it.
>
> But the description of full_page_writes made me even more confused. Sorry
> if I am drifting to another problem:
>
> It is said:
> http://www.postgresql.org/docs
2013/8/26 Luca Ferrari
> On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci wrote:
> > ERROR: unexpected end of tape
>
> Really strange; if I get it right, something went wrong while sorting
> tuples.
> Is it possible to test with an incremented work_mem value?
>
> Actually I use the default work_s
2013/8/26 Florian Weimer
> On 08/26/2013 04:27 PM, Torello Querci wrote:
>
>> Create index statement that I use is:
>>
>> CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
>>    ON dati
>>    USING btree
>>    (impianto_id, tipo_dato_id, data_misurazione DESC);
>>
>
> What are
On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci wrote:
> ERROR: unexpected end of tape
Really strange; if I get it right, something went wrong while sorting tuples.
Is it possible to test with an incremented work_mem value?
Luca
On 08/26/2013 04:27 PM, Torello Querci wrote:
Create index statement that I use is:
CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
ON dati
USING btree
(impianto_id, tipo_dato_id, data_misurazione DESC);
What are the data types of these columns?
--
Florian Weimer
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of 高健
Sent: Monday, August 26, 2013 2:08 AM
To: pgsql-general
Subject: [GENERAL] Is there any method to limit resource usage in PG?
Hello:
Sorry to disturb you.
I am now encountering a serious problem
On Mon, Aug 26, 2013 at 11:02 PM, Mistina Michal wrote:
> Hi Masao.
> Thank you for the suggestion. Indeed, that could occur. Most probably while
> I was testing the split-brain situation. In that case I turned off the
> network card on one node, and on both nodes DRBD was in the primary role.
> But after the
> spl
Hi all,
On my PostgreSQL 9.1 instance I had a problem with an index.
Using the index I get fewer tuples than expected.
I tried removing the index and the query worked fine, but obviously the
query is slow, so I tried to recreate the index.
I run the create index statement but after a lot of time I get thi
Hi Masao.
Thank you for the suggestion. Indeed, that could occur. Most probably while I
was testing the split-brain situation. In that case I turned off the network
card on one node, and on both nodes DRBD was in the primary role. But after
the split-brain occurred I resynced DRBD, so from two primaries I promoted on
BladeOfLight16 wrote
> Then again, I guess you don't need a nested query.
>
> SELECT v_rec1.user,
> CASE WIDTH_BUCKET(v_rec_fts.lev, 0, 100, 4)
> WHEN 1 THEN '0 to 25'
> WHEN 2 THEN '25 to 50'
> WHEN 3 THEN '50 to 75'
> WHEN 4 THEN '
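The quote is cut off above; for reference, a self-contained sketch of the same
bucketing approach (table and column names invented, the last label inferred
from the pattern of the others):

    -- width_bucket(value, low, high, nbuckets) maps [0,25) to 1, [25,50) to 2,
    -- and so on, so each boundary value lands in exactly one bucket.
    SELECT s.user_name,
           CASE width_bucket(s.lev, 0, 100, 4)
               WHEN 1 THEN '0 to 25'
               WHEN 2 THEN '25 to 50'
               WHEN 3 THEN '50 to 75'
               WHEN 4 THEN '75 to 100'
           END AS bucket
    FROM scores s;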
> Given a system with 32 cores, an SSD SAN with 48x drives, and 2x 8Gbps
> paths from the server to the SAN, what would be a good starting point
> to set effective_io_concurrency? I currently have it set to 32, but I
> kind of feel like the right setting would be "2" since we have two
> paths. We don'
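Whatever the right number turns out to be, effective_io_concurrency can be set
per session, so candidate values can be compared directly against a query that
uses a bitmap heap scan (the only plan type it influences). A sketch, with a
hypothetical table big_table:

    SET effective_io_concurrency = 2;   -- one per path
    EXPLAIN ANALYZE SELECT count(*) FROM big_table WHERE id BETWEEN 1 AND 100000;
    SET effective_io_concurrency = 32;  -- closer to one per drive
    EXPLAIN ANALYZE SELECT count(*) FROM big_table WHERE id BETWEEN 1 AND 100000;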
On Mon, Aug 26, 2013 at 9:53 PM, Mistina Michal wrote:
> Hi there.
>
> I didn't find out why this issue happened. Only backing up and formatting
> the filesystem where the corrupted postmaster.pid file existed helped to
> get rid of it. Hopefully the file won't appear in the future.
I have encountered sim
Hi there.
I didn't find out why this issue happened. Only backing up and formatting the
filesystem where the corrupted postmaster.pid file existed helped to get rid
of it. Hopefully the file won't appear in the future.
Best regards,
Michal Mistina
From: pgsql-general-ow...@postgresql.org
[mailto:p
On 08/26/2013 11:37 AM, Luca Ferrari wrote:
> On Mon, Aug 26, 2013 at 4:57 AM, 高健 wrote:
>> But why "writes the entire content of each disk page to WAL"?
> The documentation states: "The row-level change data normally
> stored in WAL will not be enough to completely restore such a page
> during p
On Mon, Aug 26, 2013 at 4:57 AM, 高健 wrote:
> But why "writes the entire content of each disk page to WAL "?
>
The documentation states: "The row-level change data normally
stored in WAL will not be enough to completely restore such a page
during post-crash recovery." I guess that a mixed pa
Hello All,
I am migrating Oracle queries to Postgres queries.
The Oracle query is below:
select * from (select * from KM_COURSE_MAST where ID in (select OBJ_ID from
(select OBJ_ID,PERFORMER_TYPE,PERFORMER_ID from KM_REL_OBJ_PER_ACTION
where OBJ_TYPE='COURSETYPE') where PERFORMER_TYPE='GROUP'
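One common snag with inline views like this: PostgreSQL requires an alias on
every subquery in FROM, while Oracle does not. A sketch of the visible part of
the query with aliases added (the original is cut off above, so any trailing
conditions are omitted):

    SELECT *
    FROM (SELECT *
          FROM KM_COURSE_MAST
          WHERE ID IN (SELECT OBJ_ID
                       FROM (SELECT OBJ_ID, PERFORMER_TYPE, PERFORMER_ID
                             FROM KM_REL_OBJ_PER_ACTION
                             WHERE OBJ_TYPE = 'COURSETYPE') AS rel
                       WHERE PERFORMER_TYPE = 'GROUP')) AS courses;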