> On 04 Mar 2015, at 22:18, Igor Stassiy wrote:
>
> I would like to stop executing the query for a row of table "a" when a single
> row of "b" is found. This query would not stop
> processing, but would filter all the rows found only at the end of
> execution.
>
> Is there a way to express
On 3/4/15 9:25 AM, Semyon Reyfman wrote:
I have a list-based partition table and I want to prevent scanning of all
children tables in a JOIN, which is using the partition column. I know
that CHECK constraints are not used in this case
Why do you say that? If properly written and with constrai
On 3/4/15 10:12 AM, Bill Moran wrote:
From there, if the data is large enough to trigger out-of-line
storage, it will be broken into chunks and stored in a
TOAST table. This increases the overhead, because each row in the
TOAST table has its own per-row overhead, and the number of rows
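A minimal sketch of how to see that TOAST overhead for a given table — the table name `mytable` is hypothetical; `pg_class.reltoastrelid` links a relation to its backing TOAST table:

```sql
-- Show the TOAST table backing a relation and the space its chunk rows occupy.
SELECT c.relname AS table_name,
       t.relname AS toast_table,
       pg_size_pretty(pg_relation_size(t.oid)) AS toast_size
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'mytable';
```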
I've sketched out a design for a templating system for Postgres, and I'm
looking for feedback from anyone that might be interested in it.
My goal is to allow people to register different template languages in
it (anything that can be loaded in a Postgres procedure language would
work), and all
On 3/4/2015 9:10 PM, AI Rumman wrote:
I am working on setting up a new database server with shared disk and
cluster failover.
In this environment, only one node will be active at a certain time.
Underneath, we are planning to have shared storage with CIFS protocol.
As I am a newbie with this stor
Hi All,
I am working on setting up a new database server with shared disk and
cluster failover.
In this environment, only one node will be active at a certain time.
Underneath, we are planning to have shared storage with CIFS protocol.
As I am a newbie with this storage, can anyone please help me wi
I would like to stop executing the query for a row of table "a" when a
single row of "b" is found. This query would not stop
processing, but would filter all the rows found only at the end of
execution.
Is there a way to express this without a subquery?
On Wed, Mar 4, 2015 at 11:53 PM Paul Rams
Stop writing so many subqueries, think in joins; the poor planner!
SELECT DISTINCT ON (a.id) a.id AS a_id, b.id AS b_id
FROM a
JOIN b
ON ST_Contains(b.shape, a.shape)
WHERE b.kind != 1
Also, the DISTINCT ON syntax (PgSQL custom) lets you winnow a result
set down to just one of the inputs.
P.
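A sketch extending Paul's query above: with DISTINCT ON, the ORDER BY decides which matching "b" row survives for each a.id. The added ORDER BY clause is an assumption about the desired tie-break (lowest b.id); the table and column names come from the thread's example.

```sql
-- DISTINCT ON keeps the first row per a.id; ORDER BY decides which one that is.
SELECT DISTINCT ON (a.id) a.id AS a_id, b.id AS b_id
FROM a
JOIN b ON ST_Contains(b.shape, a.shape)
WHERE b.kind != 1
ORDER BY a.id, b.id;  -- e.g. prefer the lowest b.id per a.id
```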
Paul Ramsey wrote
> Though maybe with a really big table? (with really big
> objects?) Though still, doesn't analyze just pull a limited sample
> (30K approx max) so why would table size make any difference after a
> certain point?
Hi Paul, "my" table is quite big (about 293.049.000 records) but t
Circling back on this one, I had a look at our analyze code. I found
one place where *maybe* we weren't freeing memory and freed it, but
analyzing a 2M record table I barely see any bump up in memory usage
(from 22M up to 24M at peak) during analyze. And the change I made
didn't appear to alter tha
On Wed, Mar 4, 2015 at 4:50 AM, Bill Moran wrote:
> On Wed, 4 Mar 2015 14:05:09 +0400
> Alexander Shutyaev wrote:
>
>> Thanks for the answer. Now, given this info I've calculated that our
>> postgresql should occupy approx. 30,53 GB while the server has 125 GB of
>> RAM. However we often see in t
On Wed, 4 Mar 2015 12:36:36 -0300
María Griensu wrote:
> Thanks, I mean, how heavy it is in kB.
If you mean how much overhead is involved in storing the data, that's
a bit complicated.
First off, how the data is stored depends on the size of it, and what
other fields there are in the table and
Roxanne Reid-Bennett wrote
> Most definitely ask on the Postgis list. Identify the full Postgis
> version and Postgres versions as well.
Hi Roxanne,
seconds before sending it to the postgis-list i checked the table
planet_osm_ways and there is no geometry:
That can't be a postgis problem. I'll
Here's the answer.
http://pdenya.com/2014/01/16/postgres-bytea-size/
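For the archive, the gist of measuring bytea sizes can be sketched as follows — the table and column names (`my_blobs`, `payload`) are hypothetical:

```sql
-- octet_length gives the logical byte count of the value;
-- pg_column_size gives the stored size, which may be smaller if compressed.
SELECT octet_length(payload)                         AS raw_bytes,
       pg_column_size(payload)                       AS stored_bytes,
       pg_size_pretty(octet_length(payload)::bigint) AS pretty
FROM my_blobs;
```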
On 04/03/15 at 12:17, John R Pierce wrote:
On 3/4/2015 7:03 AM, María Griensu wrote:
I need to figure out how I can weight BLOB objects in a table of a
DB; I'm not an expert on these topics, so I appreciate any help you can
Thanks, I mean, how heavy it is in kB.
2015-03-04 12:17 GMT-03:00 John R Pierce :
> On 3/4/2015 7:03 AM, María Griensu wrote:
>
> I need to figure out how I can weight BLOB objects in a table of a DB; I'm
> not an expert on these topics, so I appreciate any help you can give me.
>
>
> postgres's equ
Hello,
I have a rather big query which should match IDs from various tables
together. To be able to use multiprocessing I'm currently using
postgresql 9.5-dev, because of the SKIP LOCKED feature.
SELECT
start,
destination,
ST_Distance(start_geom, end_geom) AS distance_meter
FROM (
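The excerpt cuts off, but the SKIP LOCKED pattern it mentions looks roughly like this — a hedged sketch with hypothetical job-queue names (`jobs`, `processed`); FOR UPDATE SKIP LOCKED requires 9.5+:

```sql
-- Each worker session claims up to 100 unprocessed rows; rows already
-- locked by another worker are skipped instead of blocking.
BEGIN;
SELECT id
FROM jobs
WHERE processed = false
ORDER BY id
LIMIT 100
FOR UPDATE SKIP LOCKED;
-- ... process the claimed rows, mark them processed, then COMMIT.
```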
I have a list-based partition table and I want to prevent scanning of all
children tables in a JOIN, which is using the partition column. I know
that CHECK constraints are not used in this case so I hope that creating an
index on the partition column would help. But I am not sure if I should
cre
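For reference, a minimal sketch of how constraint exclusion is usually exercised on list partitions — the table and constraint names are hypothetical; pruning only happens when the planner can compare a child's CHECK constraint against a constant in the query:

```sql
-- Hypothetical children of a list-partitioned "orders" table.
ALTER TABLE orders_us ADD CONSTRAINT orders_us_region CHECK (region = 'US');
ALTER TABLE orders_eu ADD CONSTRAINT orders_eu_region CHECK (region = 'EU');

SET constraint_exclusion = partition;  -- the default setting

-- With a constant predicate, the plan should touch only orders_us.
EXPLAIN SELECT * FROM orders WHERE region = 'US';
```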
On 3/4/2015 7:03 AM, María Griensu wrote:
I need to figure out how can I weight BLOB objects in a table of a DB,
I'm not expert on this topics, so I appreciate any help you can give me.
postgres's equivalent of BLOB is BYTEA. I'm not sure what you mean
by 'weight' here.
--
john r pierce
Hi Alexander:
On Wed, Mar 4, 2015 at 11:05 AM, Alexander Shutyaev
wrote:
> Thanks for the answer. Now, given this info I've calculated that our
> postgresql should occupy approx. 30,53 GB while the server has 125 GB of
> RAM. However we often see in top that there is very little free memory and
I need to figure out how can I weight BLOB objects in a table of a DB, I'm
not expert on this topics, so I appreciate any help you can give me.
Thanks in advance.
Maria.
Hi Adrian:
On Wed, Mar 4, 2015 at 1:03 AM, Adrian Klaver
wrote:
>
> As you pointed, my bet is in the -f case COPY FROM STDIN expects the
>> data on the file ( otherwise pg_dumps would not work ), but your
>> suggestion seems to have a problem of double redirection, let me elaborate:
>>
>
> Well
Samuel,
* Samuel Smith (pg...@net153.net) wrote:
> I noticed that I could get very nice partition elimination using
> constant values in the where clause.
>
> Ex:
> select * from where between '2015-01-01'
> and '2015-02-15'
>
> However, I could not get any partition elimination for queries th
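The behaviour described above can be sketched as follows — table and column names (`measurements`, `logdate`, `periods`) are hypothetical stand-ins for the elided ones:

```sql
-- Works: a constant range lets the planner exclude partitions at plan time.
EXPLAIN SELECT * FROM measurements
WHERE logdate BETWEEN '2015-01-01' AND '2015-02-15';

-- Does not prune: the comparison value comes from another table and is
-- unknown at plan time, so every child partition is planned.
EXPLAIN SELECT m.*
FROM measurements m
JOIN periods p ON m.logdate = p.day;
```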
Hello,
I have a query plan optimization question. It is formatted nicely on
http://stackoverflow.com/questions/28856452/postgres-not-using-gist-index-in-lateral-join
But here is a copy for the archive:
Here is the setup:
CREATE EXTENSION postgis;
DROP TABLE IF EXISTS A;
DROP TABLE IF EXISTS B;
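The excerpt truncates before the lateral join itself; a hedged sketch of the pattern the linked question is about (hypothetical `geom` columns, assuming GiST indexes on them):

```sql
-- For each row of A, find its nearest B row via the GiST-backed
-- KNN distance operator (<->) inside a LATERAL subquery.
SELECT a.id AS a_id, nearest.id AS b_id
FROM A a
CROSS JOIN LATERAL (
  SELECT b.id
  FROM B b
  ORDER BY b.geom <-> a.geom
  LIMIT 1
) nearest;
```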
On Wed, 4 Mar 2015 14:05:09 +0400
Alexander Shutyaev wrote:
> Thanks for the answer. Now, given this info I've calculated that our
> postgresql should occupy approx. 30,53 GB while the server has 125 GB of
> RAM. However we often see in top that there is very little free memory and
> even swap is
You have spam on postgresql.org main page...
"Jim Smith: Myśli o istotnych Szczegóły kwiatów dostawy online"
It's not even proper Polish :)
Hi,
On 2015-03-04 14:05:09 +0400, Alexander Shutyaev wrote:
> Thanks for the answer. Now, given this info I've calculated that our
> postgresql should occupy approx. 30,53 GB while the server has 125 GB of
> RAM. However we often see in top that there is very little free memory and
> even swap is
Thanks for the answer. Now, given this info I've calculated that our
postgresql should occupy approx. 30,53 GB while the server has 125 GB of
RAM. However we often see in top that there is very little free memory and
even swap is used. What could be the reason for postgres using so much
memory?
Tha