Hello!
max_locks_per_transaction enforces the restriction the_locks_count <=
the_threshold, where the_threshold is max_locks_per_transaction *
(max_connections + max_prepared_transactions); the documentation is quite
clear on that part.
But what is the_locks_count?
In the docs we see "no more than this many distinct
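The question can be explored against the pg_locks view (a rough sanity check only; the server actually accounts for shared lock-table slots, and fastpath locks don't consume them):

```sql
-- the inputs to the_threshold
SHOW max_locks_per_transaction;
SHOW max_connections;
SHOW max_prepared_transactions;

-- rough approximation of the_locks_count: entries actually occupying
-- the shared lock table (fastpath locks are kept per-backend instead)
SELECT count(*) AS the_locks_count
FROM pg_locks
WHERE NOT fastpath;
```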
On Sun, Mar 29, 2015 at 8:52 PM, Maulik Shah
wrote:
> Dear Sir/Madam
>
>
> How do I convert a float to a timestamp in a single SELECT query?
>
> For example, I have a float such as 1.251152515236,
>
> and I want to convert it to a datetime, and from datetime to timestamp...
> i.e. 02:22:044456
>
You imply that (float)
Dear Sir/Madam
How do I convert a float to a timestamp in a single SELECT query?
For example, I have a float such as 1.251152515236,
and I want to convert it to a datetime, and from datetime to timestamp...
i.e. 02:22:044456
I need an SQL query.
It is actually a .crd file made in XML, and I have a query inside the XML like
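For what it's worth, two single-statement conversions, depending on what the float is meant to represent (epoch seconds vs. a plain duration):

```sql
-- if the float is seconds since the Unix epoch, to_timestamp()
-- turns it directly into a timestamptz:
SELECT to_timestamp(1.251152515236);

-- if it is just a number of seconds, add it to a base timestamp
-- as an interval:
SELECT timestamp '2015-01-01 00:00' + 1.251152515236 * interval '1 second';
```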
Good morning,
I am Rebecca Zahra and I am currently in my final year of Masters studies
at the University of Malta. My thesis is about the usage of indexes for
multi-dimensional data.
I was going through the posts regarding GIST indexes and I came across the
following
http://dba.stackexchange.com
Hi,
I am trying to wrap my head around a strange problem I am having. I have
double checked the documentation but I could not find anything on this.
I am attaching a simplified version of my problem. In my TEST 4 I expect 1
row but I get nothing. The test is
with I(id) as (
insert into abc(colD
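A self-contained illustration of the likely cause (my guess, based on the fragment above: snapshot visibility of data-modifying CTEs — the outer query runs with the same snapshot as the INSERT, so it cannot see the newly inserted rows; "abc" here is a stand-in table):

```sql
CREATE TEMP TABLE abc(id int);

WITH i(id) AS (
    INSERT INTO abc(id) VALUES (1) RETURNING id
)
SELECT * FROM abc;   -- returns no rows: the INSERT is not yet visible

WITH i(id) AS (
    INSERT INTO abc(id) VALUES (2) RETURNING id
)
SELECT * FROM i;     -- returns the inserted row, via RETURNING
```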
Hi all,
I'm trying to create a custom aggregate function that returns the value
from the first row (possibly null).
For example:
Table t
 a | b
---+------
 1 | A
 2 | NULL
SELECT my_first(b ORDER BY a)      => A
SELECT my_first(b ORDER BY a DESC) => NULL
The straightforward way would seem to be something like
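One null-aware sketch (the function names are hypothetical; the state is a one-element array so that a leading NULL is preserved instead of being treated as "no value yet"):

```sql
-- transition function: latch the first input into a one-element array
CREATE FUNCTION my_first_sfunc(state anyarray, val anyelement)
RETURNS anyarray
LANGUAGE sql IMMUTABLE AS
$$ SELECT CASE WHEN state IS NULL THEN ARRAY[val] ELSE state END $$;

-- final function: unwrap the latched value (may be NULL)
CREATE FUNCTION my_first_ffunc(state anyarray)
RETURNS anyelement
LANGUAGE sql IMMUTABLE AS
$$ SELECT state[1] $$;

CREATE AGGREGATE my_first(anyelement) (
    SFUNC = my_first_sfunc,
    STYPE = anyarray,
    FINALFUNC = my_first_ffunc
);
```

The common one-liner version (a STRICT sfunc that just returns its first argument) is simpler, but STRICT makes the executor skip NULL inputs, so it would return A instead of NULL for the DESC case above.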
TonyS writes:
> Running "analyze verbose;" and watching top, the system starts out using no
> swap data and about 4GB of cached memory and about 1GB of used memory. As it
> runs, the amount of used RAM climbs, and eventually the used swap memory
> increases to 100% and after being at that level fo
Ah, that's exactly what I was looking for. Thanks everyone for the
responses!
- Josh
On Tue, Mar 31, 2015 at 8:54 PM, Tom Lane wrote:
> Michael Paquier writes:
> > On Wed, Apr 1, 2015 at 9:43 AM, Joshua Ma wrote:
> >> Why are two scans necessary? What would break if it did something like
>
Michael Paquier writes:
> On Wed, Apr 1, 2015 at 9:43 AM, Joshua Ma wrote:
>> Why are two scans necessary? What would break if it did something like the
>> following?
>>
>> 1) insert pg_index entry, wait for relevant txns to finish, mark index
>> open for inserts
>>
>> 2) build index in a singl
Hi Michael,
Isn't that also true during the 2nd scan? I'm assuming new inserts during
the 2nd scan properly update the index, so couldn't the same mechanism
update the index during the 1st scan?
I guess I'm confused because, if you assume a pathological case where all
the data gets inserted after
I have loaded data for a database conversion I am working on and I have
created the indexes I believe I will need.
My current design uses a schema for each client that will access the
database. I am using schemas because there is one set of global tables that I
need to query with the tables within
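One common way to combine shared and per-client tables is via search_path (a sketch; the schema and table names here are hypothetical):

```sql
-- shared tables live in "shared"; per-client tables live in
-- client_a, client_b, ...
SET search_path TO client_a, shared, public;

-- unqualified references now resolve per client first:
-- SELECT * FROM orders;   -- finds client_a.orders if it exists,
--                         -- otherwise shared.orders
```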
On Wed, Apr 1, 2015 at 9:43 AM, Joshua Ma wrote:
> Hi all,
>
> I was curious about why CONCURRENTLY needs two scans to complete - from
> the documentation on HOT (access/heap/README.HOT), it looks like the
> process is:
>
> 1) insert pg_index entry, wait for relevant in-progress txns to finish
>
Hi all,
I was curious about why CONCURRENTLY needs two scans to complete - from the
documentation on HOT (access/heap/README.HOT), it looks like the process is:
1) insert pg_index entry, wait for relevant in-progress txns to finish
(before marking index open for inserts, so HOT updates won't writ
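For reference, the two-scan behaviour under discussion belongs to this form (table and column names are placeholders):

```sql
-- builds the index without taking a lock that blocks concurrent writes,
-- at the cost of the two table scans discussed in this thread
CREATE INDEX CONCURRENTLY idx_abc_col ON abc (col);
```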
On 3/31/2015 3:28 PM, Fabio Ugo Venchiarutti wrote:
On 01/04/15 06:12, Andy Colson wrote:
On 3/31/2015 1:58 AM, Fabio Ugo Venchiarutti wrote:
Greetings
I'm working for a startup and our core DB is growing rather fast.
Our target scale is large enough that we expect some of our core tables'
Thanks for the responses
For anyone searching in the future I'll answer Tom's questions and list the
boneheaded fix that it ended up actually being (really painful as I've been
fighting this for a week).
1) According to amazon they run stock postgres as far as the query planner
is concerned.
2) Y
> Version of PostgreSQL?
9.3
> Operating system?
Windows
> Hardware configuration?
8 GB RAM. Takes about 7000 ms to retrieve about 7000 rows.
max_connections = 200
shared_buffers = 512MB
effective_cache_size = 6GB
work_mem = 13107kB
maintenance_work_mem = 512MB
checkpoint_segments = 32
checkpoint_com
Mike Roest writes:
>I'm having an issue with query performance between 2 different pgsql
> environments.
> The first is our current production postgres server, which is running 9.3.5
> on Centos 5 x64. The second system is Amazon's RDS postgres as a service.
> On our local DB server we have a
On 3/31/2015 10:31 AM, Mike Roest wrote:
Hi there,
I'm having an issue with query performance between 2 different pgsql
environments.
The first is our current production postgres server, which is running
9.3.5 on Centos 5 x64. The second system is Amazon's RDS postgres as a
service. On our
On 3/31/2015 1:58 AM, Fabio Ugo Venchiarutti wrote:
Greetings
I'm working for a startup and our core DB is growing rather fast.
Our target scale is large enough that we expect some of our core tables'
indexes to grow bigger than the memory on any single node over the next
couple years (our cur
Hi there,
I'm having an issue with query performance between 2 different pgsql
environments.
The first is our current production postgres server, which is running 9.3.5
on Centos 5 x64. The second system is Amazon's RDS postgres as a service.
On our local DB server we have a query that executes
OK, I figured out this part and came up with:
SELECT
row.snt_code AS "snt_code",
row.vdc AS "vdc",
row.uuid AS "uuid",
row_to_json(row, true)::json AS "json"
FROM (
SELECT
vm.*,
array_agg(vi),
CONCAT('https://mysite.mydomain.tld/v3/customer/', vm.snt_cod
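A self-contained sketch of the same pattern, with placeholder tables vm and vi standing in for the real ones:

```sql
-- aggregate the detail rows into an array, then render each outer
-- row as (pretty-printed) JSON
SELECT row_to_json(r, true) AS json
FROM (
    SELECT v.uuid,
           array_agg(d) AS details
    FROM vm v
    JOIN vi d ON d.vm_id = v.uuid
    GROUP BY v.uuid
) r;
```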
Thanks IAM
Regards
-
Christian Vazquez
--
View this message in context:
http://postgresql.nabble.com/unrecognized-configuration-parameter-bdr-connections-tp5843957p5844078.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general