Greetings.
Is there a way to find out the compiled-in port number?
I can parse `pg_config` output to find the port in cases where one was
actually specified at build time.
However, if the defaults were used, is there any tool that will tell me the
magic 5432 number, or should I just silently stick to this number in my
scripts?
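For instance, something along these lines is what I have in mind (an
untested sketch that falls back to 5432 when --with-pgport was not given):

    # check the configure flags recorded at build time
    port=$(pg_config --configure | grep -o 'with-pgport=[0-9]*' | cut -d= -f2)
    echo "${port:-5432}"

    # with the server actually running, it can simply be asked:
    psql -tAc 'SHOW port;'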
Hey all,
According to
http://www.postgresql.org/docs/9.1/static/protocol-flow.html#AEN91458
"is not actually necessary for the frontend to wait for
ReadyForQuery before issuing another command".
But is it necessary for frontend to wait for ReadyForQuery
before sending Describe message? Or is it n
On Tue, 2012-03-13 at 11:16 +0200, Виктор Егоров wrote:
> Greetings.
>
> Is there a way to find out the compiled-in port number?
>
Two ways, with Postgres running:
- Scan the server's ports with nmap.
- As root on the server, run "lsof | less" and look at the
Postgres process(es).
Both are fairly straightforward.
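For example (a sketch; exact flags vary by platform):

    # option 1: scan a likely port range from any machine (assumes nmap is installed)
    nmap -p 5400-5500 dbserver

    # option 2: as root on the server, narrow lsof to listening TCP sockets
    lsof -iTCP -sTCP:LISTEN -P | grep postgres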
On Mon, Mar 12, 2012 at 3:06 AM, Nur Hidayat wrote:
> I once had the same problem. In my case it was because most of my tables
> used the text datatype.
> When I changed the field type to character varying(1000), the database size
> was reduced significantly.
I'll bet what happened was Postgres re-wrote your table.
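If so, the shrink came from the rewrite compacting old bloat, not from the
type change itself; a rewrite without touching any types should reproduce it
(table name is made up, and note VACUUM FULL takes an exclusive lock):

    psql -c "SELECT pg_size_pretty(pg_total_relation_size('mytable'));"
    psql -c "VACUUM FULL mytable;"
    psql -c "SELECT pg_size_pretty(pg_total_relation_size('mytable'));"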
On 03/13/2012 02:16 AM, Виктор Егоров wrote:
Greetings.
Is there a way to find out the compiled-in port number?
I can parse `pg_config` output to find the port in cases where one was
actually specified at build time.
However, if the defaults were used, is there any tool that will tell me
the magic 5432 number,
or should I just silently stick to this number in my scripts?
2012/3/12 François Beausoleil :
> Hi all,
>
> When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
> what happens with indices? Are they updated only once after the operation, or
> are they updated once per row? Note that I'm not replacing the table's data:
> I'm appending.
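Indexes are maintained row-by-row as COPY inserts, if I'm remembering the
internals right; for really big batches people sometimes drop and rebuild
them around the load. A sketch, all names hypothetical:

    psql -c "DROP INDEX IF EXISTS events_created_at_idx;"
    psql -c "\copy events (id, created_at, payload) from 'batch.csv' csv"
    psql -c "CREATE INDEX events_created_at_idx ON events (created_at);"

At 20k rows an hour that's almost certainly not worth the trouble, though.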
On Tuesday, March 13, 2012 at 10:48, Merlin Moncure wrote:
> 2012/3/12 François Beausoleil :
> > Currently, I can sustain 30-40 writes per second on a Rackspace VPS. I know
> > it's not the ideal solution, but that's what I'm working with. Following
> > vmstat, the
On Tue, Mar 13, 2012 at 12:51 AM, wrote:
>
> Scott Marlowe wrote:
> 2012/3/12 François Beausoleil :
>> Hi all,
>>
>> When using COPY FROM STDIN to stream thousands of rows (20k and more
>> hourly), what happens with indices? Are they updated only once after the
>> operation, or are they updated once per row?
2012/3/13 François Beausoleil :
>
>
> On Tuesday, March 13, 2012 at 10:48, Merlin Moncure wrote:
>
>> 2012/3/12 François Beausoleil :
>> > Currently, I can sustain 30-40 writes per second on a Rackspace VPS. I
>> > know it's not the ideal solution, but that's what I'm working with.
I'm getting this error:
"Error executing SQL ALTER TABLE ts_core.calls ALTER COLUMN call_uuid TYPE
VARCHAR(255): ERROR: must be owner of relation calls"
Is there a way that I can configure PostgreSQL so that it allows other users
to alter this table?
Thanks Much,
Jerry
On Tue, Mar 13, 2012 at 10:07 AM, Jerry Richards
wrote:
> I'm getting this error:
>
> "Error executing SQL ALTER TABLE ts_core.calls ALTER COLUMN call_uuid TYPE
> VARCHAR(255): ERROR: must be owner of relation calls"
>
> Is there a way that I can configure PostgreSQL so that it allows other users
> to alter this table?
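Only the table's owner (or a superuser) can run ALTER TABLE, as far as I
know, so one workable approach is to hand ownership to a group role and
grant that role around (role names here are made up):

    psql <<'SQL'
    CREATE ROLE calls_admin NOLOGIN;
    ALTER TABLE ts_core.calls OWNER TO calls_admin;
    GRANT calls_admin TO jerry, other_user;
    SQL

Members of calls_admin then pass the ownership check and can run the ALTER
themselves.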
John,
Thanks, I'll clarify my language around that.
Still hoping that there is a way to get a rough estimate of how long
converting an integer column to a bigint will take. Not possible?
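Would something like this at least bound it? A sketch with made-up names;
\timing makes psql report elapsed time, and a 1% sample lacks the real
table's indexes and bloat, so scale the result with caution:

    psql <<'SQL'
    \timing on
    CREATE TABLE big_sample AS
        SELECT * FROM big_table WHERE random() < 0.01;  -- roughly 1% of rows
    ALTER TABLE big_sample ALTER COLUMN id TYPE bigint; -- time this step
    DROP TABLE big_sample;
    SQL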
Thanks guys,
Carson
On Mon, Mar 12, 2012 at 6:13 PM, John R Pierce wrote:
> On 03/12/12 5:01 PM, Carson G
David,
Thanks for the tip on the Regular Expression, as well as the advice to use an
example statement.
So, I played with the expression you gave me, and it works well. The question
I now have is: if I am trying to select all data for any row where that
condition is true, is it possible to ind
Excuse me if what I say below is nonsensical, for I haven't read much about
compression techniques, so these ramblings are just common sense.
I think the debate about the level (row, page, file) of compression arises
when we strictly stick to the axioms of compression, which require that a
On 03/08/12 12:01 PM, Andy Colson wrote:
2) better partitioning support. Something much more automatic.
That would be really high on our list, and something that can handle
adding/dropping partitions while there are concurrent transactions
involving the partitioned table.
Also a planner th
+1 to seamless partitioning.
The idea of having a student work on this seems a bit scary, but what
seems scary to me may be a piece of cake for a talented kid :-)
Kiriakos
http://www.mockbites.com
On Mar 13, 2012, at 3:07 PM, John R Pierce wrote:
> On 03/08/12 12:01 PM, Andy Colson
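For context, the manual recipe that "seamless" would replace looks roughly
like this on 9.1 (a sketch with made-up names: child tables carrying CHECK
constraints, plus a trigger that routes inserts):

    psql <<'SQL'
    CREATE TABLE measurements (ts timestamptz NOT NULL, val numeric);
    CREATE TABLE measurements_2012_03 (
        CHECK (ts >= '2012-03-01' AND ts < '2012-04-01')
    ) INHERITS (measurements);

    CREATE FUNCTION measurements_insert() RETURNS trigger AS $$
    BEGIN
        INSERT INTO measurements_2012_03 VALUES (NEW.*);  -- pick the child by NEW.ts in real code
        RETURN NULL;                                      -- suppress the parent insert
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER route_insert BEFORE INSERT ON measurements
        FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
    SQL

Every new partition means another CHECK constraint and more trigger logic,
which is exactly the part that cries out for automation.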
As a follow-up, is the upgrade from integer to bigint violent? I assume
so: it has to physically resize the column on disk, right?
Thanks,
Carson
On Tue, Mar 13, 2012 at 9:43 AM, Carson Gross wrote:
> John,
>
> Thanks, I'll clarify my language around that.
>
> Still hoping that there is a way to get a rough estimate of how long
> converting an integer column to a bigint will take.
On 03/13/12 6:10 PM, Carson Gross wrote:
As a follow-up, is the upgrade from integer to bigint violent? I
assume so: it has to physically resize the column on disk, right?
I think we've said several times: any ALTER TABLE ADD/ALTER COLUMN like
that will cause every single tuple (row) of the table to be rewritten.
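When the one-shot rewrite is too disruptive, a batched migration is
sometimes used instead; a sketch with hypothetical names (indexes,
constraints, and any owned sequence still need separate care):

    psql <<'SQL'
    ALTER TABLE big_table ADD COLUMN id_big bigint;  -- no rewrite: nullable, no default
    -- backfill in slices, vacuuming between them:
    UPDATE big_table SET id_big = id WHERE id BETWEEN 1 AND 1000000;
    -- ...repeat for the remaining ranges, then swap in one short transaction:
    BEGIN;
    ALTER TABLE big_table DROP COLUMN id;
    ALTER TABLE big_table RENAME COLUMN id_big TO id;
    COMMIT;
    SQL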
I have twice set up pg hot standbys following the docs at
http://www.postgresql.org/docs/9.1/interactive/hot-standby.html
The third time I'm trying this, I'm running into trouble. The first two
times were with actual servers. This time I'm trying to set up two pg
instances on my desktop for testing.
On Wed, Mar 14, 2012 at 11:07 AM, Joseph Shraibman wrote:
> I have twice set up pg hot standbys ala the docs at
> http://www.postgresql.org/docs/9.1/interactive/hot-standby.html
>
> The third time I'm trying this, I'm running into trouble. The first two
> times were with actual servers. This time I'm trying to set up two pg
> instances on my desktop for testing.
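In case it's the usual desktop pitfall: both instances can't share port
5432. Roughly what has worked for me (paths and the standby port are made
up; the pg_hba.conf replication entry and replication role are omitted):

    # on the primary (data dir assumed to be /tmp/pgprimary):
    cat >> /tmp/pgprimary/postgresql.conf <<'EOF'
    wal_level = hot_standby
    max_wal_senders = 3
    EOF

    # clone it for the standby and give the copy its own port:
    pg_basebackup -D /tmp/pgstandby -h localhost -p 5432
    cat >> /tmp/pgstandby/postgresql.conf <<'EOF'
    port = 5433
    hot_standby = on
    EOF

    # point the standby at the primary:
    cat > /tmp/pgstandby/recovery.conf <<'EOF'
    standby_mode = 'on'
    primary_conninfo = 'host=localhost port=5432'
    EOF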
Got it.
Thank you, that's very helpful: we could delete quite a few of the rows
before we do the operation and cut way down on the size of the table
before we issue the update. Trimming the table size down seems obvious
enough, but that's good confirmation that it will very much help. And
there
OK, last post on this topic, I promise. I'm doing some math, and I think
I'll have about 100 million rows in the table to deal with.
Given a table that size, I'd like to do the following math:
100 million rows / inserted rows per second = total seconds
Does anyone have a reasonable guess as to what sort of rate to expect?
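Plugging assumed rates into that formula, just to get an order of
magnitude (pure arithmetic, not a measurement):

    rate=5000                        # assumed rows/sec; substitute a measured value
    echo $(( 100000000 / rate ))     # 20000 s, about 5.6 hours, at 5k rows/sec
                                     # at 20000 rows/sec it drops to 5000 s (~1.4 h)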