Hi,
Thanks for your help Alban !
Alexis.
2014-08-09 18:46 GMT+02:00 Alban Hertroys :
> On 09 Aug 2014, at 13:24, Alexis Bernard wrote:
>
> > Primary key definition: "tracks_pkey" PRIMARY KEY, btree (id)
> > select version(): PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled
> by gcc (Ubu
The vacuum full tracks just finished and I still have the duplicated IDs.
2014-08-12 9:17 GMT+02:00 Alexis Bernard :
> Hi,
>
> Thanks for your help Alban !
>
> Alexis.
>
>
> 2014-08-09 18:46 GMT+02:00 Alban Hertroys :
>
> On 09 Aug 2014, at 13:24, Alexis Bernard wrote:
>>
>> > Primary key defin
-----Original Message-----
From: Bruce Momjian [mailto:br...@momjian.us]
Sent: 07 August 2014 17:43
To: Russell Keane
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Upgrading 9.0 to 9.3 - permission denied to
pg_upgrade_utility.log
On Thu, Aug 7, 2014 at 09:42:13AM +0100, Russell Kea
2014-08-11 22:48 GMT+02:00 Kevin Grittner :
> Rémi Cura wrote:
>
> > as you (both?) suggested, it works using an advisory lock taken at the
> > beginning and end of the transaction. This way there is no upsert
> > at all if the element is locked. (I used a general advisory lock but
> > in the same way as
On Tue, Aug 12, 2014 at 5:07 AM, Rémi Cura wrote:
>
>
>
> 2014-08-11 22:48 GMT+02:00 Kevin Grittner :
>
>> Rémi Cura wrote:
>>
>> > as you (both?) suggested, it works using an advisory lock taken at the
>> > beginning and end of the transaction. This way there is no upsert
>> > at all if the element i
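A minimal sketch of the pattern being described, using a hypothetical items
table and key (not from the original posts): a transaction-scoped advisory
lock serializes writers for one logical element, so the insert-if-absent step
cannot race.

BEGIN;
-- serialize all writers for this logical element (hypothetical key)
SELECT pg_advisory_xact_lock(hashtext('element-42'));

-- update first; insert only if nothing was there (pre-9.5 upsert idiom)
UPDATE items SET payload = 'new value' WHERE key = 'element-42';
INSERT INTO items (key, payload)
SELECT 'element-42', 'new value'
WHERE NOT EXISTS (SELECT 1 FROM items WHERE key = 'element-42');
COMMIT;  -- the advisory lock is released automatically at commit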
I guess this is an easy question for the gurus, but I'm afraid I cannot get
the answer!
I have this table:
aif_test=# \d sbox;
Table "public.sbox"
 Column |          Type          | Modifiers
--------+------------------------+-----------
 id     | character varying(255) | not null
data
Hello,
I'm looking into gathering information about geographical locations:
- Coordinates
- Place name
- Pictures
- etc.
Since a place can be anything, a specific place may need any kind of data
type to be described.
I'm therefore looking into using the same method as the semantic Web and
tryi
On 08/12/2014 06:20 AM, Jose Moreira wrote:
I guess this is an easy question for the gurus, but I'm afraid I cannot get
the answer!
I have this table:
aif_test=# \d sbox;
Table "public.sbox"
 Column |          Type          | Modifiers
--------+------------------------+-----------
id
On Thu, Aug 7, 2014 at 11:50 PM, vpmm2007 wrote:
> create or replace package CUM_A_TYPES
> as
> type LT_PAYMENT is record
> (BASIC number,
> DP number,
> GRADE_PAY number
> );
> TYPE TYPE_CALC_TAX is record
> (
> FIN_ROLE_ID number(8),
> CALC_FOR_ROLE_CODE number(
Is there a reason why hstore or json is not an option? That may work a lot
better than this approach.
Another approach I have used is a set of additional catalog tables and
dynamically adding/removing columns on an extended attributes table.
On Tue, Aug 12, 2014 at 6:19 AM, Jimmy Thrasibule <
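As a concrete illustration of the hstore/json suggestion, a minimal sketch of
a place table keeping the fixed fields as columns and the free-form attributes
in jsonb (9.4+; hstore or json works similarly on older versions). Table and
column names are made up for the example:

CREATE TABLE place (
    id         serial PRIMARY KEY,
    name       text NOT NULL,
    coords     point,
    attributes jsonb          -- pictures, descriptions, any ad-hoc properties
);

-- a GIN index makes key and containment lookups on the attributes cheap
CREATE INDEX place_attributes_idx ON place USING gin (attributes);

-- e.g. find every place that declares a "picture" attribute
SELECT name FROM place WHERE attributes ? 'picture';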
We are currently working on in-database semantic reasoning.
However, if you do not need any T-Box or A-Box reasoning, maybe JSON is
a better choice in your case.
Am 12.08.2014 15:19, schrieb Jimmy Thrasibule:
Hello,
I'm looking into gathering information about geographical locations:
- Coor
Hi,
I don't have root permission for yum.
Secondly, when I try to install, it returns the same error/msg as
before:
tar xvzf Test-Harness-3.32.tar.gz
cd Test-Harness-3.32
-bash-4.1$ *perl Makefile.PL*
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
/usr/loca
Where do I need to install Test-Harness-3.32 or ExtUtils/MakeMaker.pm: in the
pgcluu-2.0 folder or /usr/local/lib64/perl..?
Please let me know.
Thanks in advance.
On Tue, Aug 12, 2014 at 3:20 PM, Ramesh T
wrote:
> Hi,
> I don't have root permission for yum.
> Secondly, when I try to i
> Is there a reason why hstore or json is not an option? That may work a lot
> better than this approach.
I don't want to move away from SQL common features so I can have a
test environment using SQLite and deploy on PostgreSQL. This approach
looks elegant and simple to me. Using a new table per
Hello all,
I am running a PostgreSQL 9.0.13 master/slave instance in a write-heavy
workload.
The hardware is a Dell 720, E5530 - 8 core, 128GB RAM. The database (around
250g with indexes/bloat etc) is sitting on flashcache device with 2 fusion-
io PCIe MLC SSDs as frontend and a MD3200 based RAID1
On 12/08/2014 15:57, Jimmy Thrasibule wrote:
>> Is there a reason why hstore or json is not an option? That may work a lot
>> better than this approach.
>
> I don't want to move away from SQL common features so I can have a
> test environment using SQLite and deploy on PostgreSQL. This approach
>
On Tue, 12 Aug 2014 16:57:32 +0200
Jimmy Thrasibule wrote:
> > Is there a reason why hstore or json is not an option? That may work a lot
> > better than this approach.
>
> I don't want to move away from SQL common features so I can have a
> test environment using SQLite and deploy on PostgreSQ
Hello
When you install pgcluu, the program needs some Perl modules that are
normally not installed in the OS.
For this reason the installation sometimes stops.
You must search for the required modules on the internet and
install them.
I hope this helps you.
David
On Tue 12 A
On 8/12/2014 12:32 AM, Alexis Bernard wrote:
The vacuum full tracks just finished and I still have the duplicated IDs.
vacuum full does a reindex; the reindex should have errored out on the
duplicate IDs.
--
john r pierce 37N 122W
somewhere on the midd
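A minimal diagnostic for this situation (tracks and id come from the thread;
the queries themselves are generic): list the duplicated ids, then look at the
physical and transaction columns of the offending rows.

-- which ids are duplicated, and how many copies of each?
SELECT id, count(*) FROM tracks GROUP BY id HAVING count(*) > 1;

-- inspect the physical location and creating transaction of each copy
SELECT ctid, xmin, id, created_at, updated_at
FROM tracks
WHERE id IN (SELECT id FROM tracks GROUP BY id HAVING count(*) > 1);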
On 8/12/2014 6:19 AM, Jimmy Thrasibule wrote:
I'm looking into gathering information about geographical locations:
- Coordinates
- Place name
- Pictures
- etc.
Since a place can be anything, a specific place may need any kind of
data type to be described.
I'm therefore looking into using th
General advice is to set shared_buffers to 25% of total RAM, leaving 75% of
RAM for the OS cache.
In my case (1.5 TB database, 145 GB RAM), setting shared_buffers bigger
than 8GB gave no significant performance improvement.
In some cases, setting it low can be an advantage:
http://www.depesz.com/2007/12/05/
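As a quick way to check and try the values being discussed, a minimal sketch
(ALTER SYSTEM exists from 9.4; on older versions edit postgresql.conf instead;
8GB is simply the figure mentioned above, not a recommendation):

SHOW shared_buffers;
-- a server restart is required for the new value to take effect
ALTER SYSTEM SET shared_buffers = '8GB';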
Alexis Bernard wrote
> Hi,
>
> I am regularly getting duplicated rows with the same primary key.
>
> => select id, created_at, updated_at from tracks where created_at =
> '2014-07-03 15:09:16.336488';
>id | created_at | updated_at
> +-
On 08/12/2014 06:57 AM, Ramesh T wrote:
where do I need to install Test-Harness-3.32 or ExtUtils/MakeMaker.pm: in the
pgcluu-2.0 folder or /usr/local/lib64/perl..?
please let me know
thanks in advance..
Two ways:
1) Quick and dirty
Go here:
http://search.cpan.org/~bingos/ExtUtils-MakeMaker-6.98/lib/
Do you run intensive read queries on the slave?
If yes, query conflicts can cause that:
http://www.postgresql.org/docs/9.1/static/hot-standby.html#HOT-STANDBY-CONFLICT
On a conflict, the xlog stream is saved in the xlog dir on the slave instead of
being replayed. This happens until the slave has an opportunity to write all
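A minimal sketch for checking this on the standby (pg_stat_database_conflicts
and hot_standby_feedback exist from 9.1, so they may not apply to the 9.0.13
instance in this thread; max_standby_streaming_delay exists in 9.0):

-- 9.1+: per-database counts of queries cancelled by recovery conflicts
SELECT * FROM pg_stat_database_conflicts;

-- how long replay may be delayed before conflicting queries are cancelled
SHOW max_standby_streaming_delay;

-- 9.1+: ask the master not to vacuum rows the standby still needs
SHOW hot_standby_feedback;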
Hello all,
I am running a PostgreSQL 9.0.13 master/slave instance in a write-heavy
workload.
The hardware is a Dell 720, E5530 - 8 core, 128GB RAM. The database (around
250g with indexes/bloat etc) is sitting on flashcache device with 2 fusion-
io PCIe MLC SSDs as frontend and a MD3200 based RA
Yesterday I had an interesting discussion with my colleague about shared
buffers size for our new server. This machine (dedicated to the db) has
512GB of RAM and the database size is about 80GB, so he assumes that the db
will never have to read from disk and there is no point in adjusting read-ahead
setti
Hello,
I'm going to add support to the jsonb data type in psycopg2, in order
to have the type behaving like json currently does
(http://initd.org/psycopg/docs/extras.html#json-adaptation).
Is it correct that oid and arrayoid for the type will be 3802 and 3807
and that they won't change before the
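For anyone wanting to double-check those values, a small catalog query
(jsonb exists from 9.4 on):

SELECT oid, typarray FROM pg_type WHERE typname = 'jsonb';
-- expected: oid = 3802, typarray = 3807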
On 08/12/2014 11:49 PM, Daniele Varrazzo wrote:
> Hello,
>
> I'm going to add support to the jsonb data type in psycopg2, in order
> to have the type behaving like json currently does
> (http://initd.org/psycopg/docs/extras.html#json-adaptation).
>
> Is it correct that oid and arrayoid for the ty
On 8/12/2014 2:41 PM, pinker wrote:
btw. is 512MB a reasonable value if we assume up to 600 connections?
that's an insanely high connection count if you actually expect those
connections to be executing concurrent queries, unless you have
something north of 100 CPU cores.
you'd be much bette
Yes, I know the count is quite high. It is the max value we've
estimated, but on an average day it will probably be 100-200, and yes, we
use pgpool.
Am 13.08.2014 00:09, schrieb John R Pierce:
On 8/12/2014 2:41 PM, pinker wrote:
btw. is 512MB a reasonable value if we assume up to 600 connections?
On 8/12/2014 3:29 PM, pinker wrote:
Yes, I know the count is quite high. It is the max value we've
estimated, but on an average day it will probably be 100-200, and yes, we
use pgpool.
if you're using a pooler, then why would you be using 200 concurrent
connections, unless you have a 50 or 100
OK, I wasn't precise enough, you are right. It's a brand new server,
nothing is configured yet and we don't even have the OS installed. The number
was the overall count we expect for the whole cluster.
But the main question is: is it possible to completely avoid disk reads
if there is a huge amount of R
On 8/12/2014 3:52 PM, pinker wrote:
OK, I wasn't precise enough, you are right. It's a brand new server,
nothing is configured yet and we don't even have the OS installed. The
number was the overall count we expect for the whole cluster.
But the main question is: is it possible to completely avoid di
> Yesterday I had an interesting discussion with my colleague about shared
> buffers size for our new server. This machine (dedicated to the db) has
> 512GB of RAM and the database size is about 80GB, so he assumes that the db
> will never have to read from disk and there is no point in adjusting read-ahe
On Tuesday, August 12, 2014, pinker wrote:
> Yesterday I had an interesting discussion with my colleague about shared
> buffers size for our new server. This machine (dedicated to the db) has
> 512GB of RAM and the database size is about 80GB, so he assumes that the db
> will never have to read from
On Tue, Aug 12, 2014 at 8:33 AM, Bill Moran
wrote:
> On Tue, 12 Aug 2014 16:57:32 +0200
> Jimmy Thrasibule wrote:
>
> > > Is there a reason why hstore or json is not an option? That may work
> a lot
> > > better than this approach.
> >
> > I don't want to move away from SQL common features so I
On Wed, Aug 13, 2014 at 6:54 AM, Vik Fearing wrote:
> On 08/12/2014 11:49 PM, Daniele Varrazzo wrote:
>> Hello,
>>
>> I'm going to add support to the jsonb data type in psycopg2, in order
>> to have the type behaving like json currently does
>> (http://initd.org/psycopg/docs/extras.html#json-adapt
I re-tried both vacuum and reindex:
=> vacuum full verbose tracks;
INFO: vacuuming "public.tracks"
INFO: "tracks": found 0 removable, 1662221 nonremovable row versions in
34274 pages
DETAIL: 0 dead row versions cannot be removed yet.
CPU 0.27s/0.68u sec elapsed 2.44 sec.
VACUUM
=> reindex index
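A hedged sketch of how the duplicates could be removed as a last resort, by
physical row identity (take a backup first; duplicated primary keys usually
indicate index corruption, so the REINDEX afterwards is essential):

-- keep the physically-first copy of each id, delete the other copies
DELETE FROM tracks a
USING tracks b
WHERE a.id = b.id
  AND a.ctid > b.ctid;

-- rebuild the primary key index so uniqueness is enforced again
REINDEX INDEX tracks_pkey;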
dushy wrote:
> I am running a PostgreSQL 9.0.13 master/slave instance in a write-heavy
> workload.
>
> The hardware is a Dell 720, E5530 - 8 core, 128GB RAM. The database (around
> 250g with indexes/bloat etc) is sitting on flashcache device with 2 fusion-
> io PCIe MLC SSDs as frontend and a MD320