Normally we call this from within our Windows program, where a lot of code
is involved in setting up the environment, creating the pipes and
redirecting stdout, stderr and stdin. However I believe the same problem
can be caused with the following command line in cmd.exe, but
you will n
Please refrain from top-posting.
On 2 November 2015 at 10:48, Eelke Klein wrote:
> Normally we call this from within our windows program where a lot of code is
> involved for setting up the environment, and creating the pipes and
> redirecting stdout, stderr and stdin. However I believe it is the
Alban Hertroys writes:
> Please refrain from top-posting.
> On 2 November 2015 at 10:48, Eelke Klein wrote:
>> Normally we call this from within our windows program where a lot of code is
>> involved for setting up the environment, and creating the pipes and
>> redirecting stdout, stderr and stdin
Thanks for the prompt replies so far. I have done some more investigation to be
able to clearly answer some of the questions.
The original shared-buffers was 8G and I have done another run on Friday using
this old value instead of my more recent 1G limit. There was no noticeable
improvement. I
Hello,
I'm running Postgres 9.3 in a warm standby configuration, and the slave
has this setting in recovery.conf:
archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup
/secure/pgsql/archive/ %r'
But I noticed that the archive directory had files going back to
February 20
On 11/02/2015 07:44 AM, Paul Jungwirth wrote:
Hello,
I'm running Postgres 9.3 in a warm standby configuration, and the slave
has this setting in recovery.conf:
archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup
/secure/pgsql/archive/ %r'
But I noticed that the archive dir
Paul Jungwirth wrote:
> I'm running Postgres 9.3 in a warm standby configuration, and the slave
> has this setting in recovery.conf:
>
> archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup
> /secure/pgsql/archive/ %r'
>
> But I noticed that the archive directory had files goi
Is there anything else beside *.backup files in the directory?
There were a few *.history files, and a few files with no extension,
like this: 000600BE0040.
Paul
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On 11/02/2015 08:17 AM, Paul Jungwirth wrote:
Is there anything else beside *.backup files in the directory?
There were a few *.history files, and a few files with no extension,
like this: 000600BE0040.
So, as Albe posted pg_archivecleanup is only cleaning up the WAL files,
not the auxiliary files.
So, as Albe posted pg_archivecleanup is only cleaning up the WAL files,
not the auxiliary files. The WAL files would be the ones with no
extension and a size of 16 MB (unless someone changed the compile settings).
Okay, thank you both for the explanation! I'm glad to hear that it's not
a misconfiguration.
On 11/02/2015 08:41 AM, Paul Jungwirth wrote:
So, as Albe posted pg_archivecleanup is only cleaning up the WAL files,
not the auxiliary files. The WAL files would be the ones with no
extension and a size of 16 MB (unless someone changed the compile
settings).
Okay, thank you both for the explanation!
On 11/02/2015 09:11 AM, Adrian Klaver wrote:
The *.backup files should not be 16MB and from your original post they
looked to be 300 bytes. Now if you have 30K of 16MB files then something
else is going on.
Ah, you are right! Sorry for the misunderstanding.
Paul
On 11/02/2015 09:21 AM, Paul Jungwirth wrote:
On 11/02/2015 09:11 AM, Adrian Klaver wrote:
The *.backup files should not be 16MB and from your original post they
looked to be 300 bytes. Now if you have 30K of 16MB files then something
else is going on.
Ah, you are right! Sorry for the misunderstanding.
On 11/2/15 9:32 AM, Tom Dearman wrote:
My system under load is using just over 500M of the shared_buffer at
usage count >= 3. Our system is very write heavy, with all of the big
tables written to but not read from (at least during the load test run).
Although our db will grow (under load) to 1
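A figure like "just over 500M at usage count >= 3" can be reproduced with the
pg_buffercache contrib extension; a sketch (assumes the default 8kB block size
and superuser access, neither of which is stated in the thread):

```sql
-- Sketch: measure how much of shared_buffers sits at usagecount >= 3.
-- Requires the pg_buffercache contrib extension; 8192 bytes is the
-- default block size.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT pg_size_pretty(count(*) * 8192) AS hot_buffers
FROM   pg_buffercache
WHERE  usagecount >= 3;
```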
So something is doing a base backup roughly every two hours.
Is that what you would expect?
No. :-)
Sounds like I need to do some archeology. This is a system I inherited,
so I haven't yet explored all the dark corners.
Paul
Thanks to Mr. Nasby & others for these references & input.
Indeed. I'm rather sure we don't have tables updated heavily enough to
warrant any adjustments to autovacuum, or to do extra 'vacuuming' of the
database. So I'll be leaving it alone (i.e. there's nothing broke so no
fixes needed!)
-
What are the current plans for bigintarray?
Igor
Hi,
I have a database with jsonb columns. The columns contain complex JSON
structures. I would like to get all rows containing a JSON document where any
of the JSON's values matches a given string (like %hello%).
How to create a PostgreSQL query to do this?
I guess PostgreSQL should traverse through ea
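One way this can be sketched on 9.4+ (the table and column names here are
hypothetical, and only top-level values are checked; nested objects would need
a recursive walk):

```sql
-- Sketch: find rows where any top-level jsonb value matches '%hello%'.
-- "docs" and "payload" are made-up names, not from the question.
SELECT d.*
FROM   docs AS d
WHERE  EXISTS (
    SELECT 1
    FROM   jsonb_each_text(d.payload) AS kv(key, value)
    WHERE  kv.value LIKE '%hello%'
);
```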
On Mon, Nov 2, 2015 at 7:32 AM, Tom Dearman wrote:
> Thanks for the prompt replies so far, I have done some more investigation to
> be able to clearly answer some of the question.
>
> The original shared-buffers was 8G and I have done another run on Friday
> using this old value instead of my more recent 1G limit.
Igor Bossenko wrote on 02.11.2015 at 14:20:
What is the current plans for bigintarray?
Igor
The following works for me:
create table foo
(
bia bigint[]
);
Hi, I have a table that contains call records. I'm looking to get only
records for users who made the most calls over a particular time duration in
an efficient way.
calls()
time, duration, caller_number, dialed_number
-- query to get top 10 callers
select caller_number, count(1) from
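The truncated query above was presumably heading somewhere like this sketch
(the one-day window is an assumption, not from the post):

```sql
-- Sketch: top 10 callers by call count within a time window.
SELECT   caller_number, count(*) AS call_count
FROM     calls
WHERE    time >= now() - interval '1 day'   -- assumed window
GROUP BY caller_number
ORDER BY call_count DESC
LIMIT    10;
```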
On Mon, Nov 2, 2015 at 3:14 PM, droberts wrote:
> Hi, I have a table that contains call records. I'm looking to get only
> records for users who made the most calls over a particular time duration
> in
> an efficient way.
>
> calls()
>
> time, duration, caller_number, dialed_number
>
>
>
> --
On Mon, Nov 2, 2015 at 1:41 PM, Thomas Kellerer wrote:
> Igor Bossenko wrote on 02.11.2015 at 14:20:
>>
>> What is the current plans for bigintarray?
>>
>> Igor
>>
>>
>
> The following works for me:
>
> create table foo
> (
> bia bigint[]
> );
But you can't build indexes on them using
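For what it's worth, the built-in array operator class does allow a GIN index
on bigint[] (the intarray extension, by contrast, handles only integer); a
sketch against the foo table above:

```sql
-- Sketch: the default GIN array operator class works on bigint[],
-- supporting containment (@>, <@) and overlap (&&) queries.
CREATE INDEX foo_bia_gin ON foo USING gin (bia);

-- e.g. rows whose array contains 42:
SELECT * FROM foo WHERE bia @> ARRAY[42::bigint];
```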
Sami,
What version of postgres are you using?
There are some examples of using GIN indexes for searching jsonb objects in
the wiki:
https://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.4#JSONB_Binary_JSON_storage
Hope that helps,
On Mon, Nov 2, 2015 at 4:09 PM, Sami Pietilä wrote:
> Hi,
>
>
2015-11-02 19:14 GMT-03:00 droberts :
> Hi, I have a table that contains call records. I'm looking to get only
> records for users who made the most calls over a particular time duration
> in
> an efficient way.
>
> calls()
>
> time, duration, caller_number, dialed_number
>
>
>
> -- query to g
On Mon, Oct 12, 2015 at 10:15 PM, Adrian Klaver
wrote:
> On 10/12/2015 06:53 AM, Tom Lane wrote:
>
>> Andres Freund writes:
>>
>>> On 2015-10-09 14:32:44 +0800, Victor Blomqvist wrote:
>>>
CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
$$
BEGIN
RETURN QUERY
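The digest cuts the function off after RETURN QUERY; a plausible completion,
purely as a sketch (the SELECT body and the users.id column are assumptions,
not taken from the original post):

```sql
-- Sketch of how the truncated function presumably continued; the
-- query body is an assumption.
CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
$$
BEGIN
    RETURN QUERY SELECT * FROM users WHERE id = id_;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT * FROM select_users(1);
```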