On Oct 20, 2017, at 3:31 PM, Justin Pryzby wrote:
>
> On Fri, Oct 20, 2017 at 03:08:26PM -0800, Israel Brewster wrote:
>> Summary: the following query takes around 12 seconds on my test machine. On
>> my production machine, it's at half an hour and counting. What's
Summary: the following query takes around 12 seconds on my test machine. On my production machine, it's at half an hour and counting. What's going on?

Details: As a first stab at getting some data I need, I've developed the following SQL query:

SELECT legfrom, count(*) as totaldeps, count(*)
On Jun 19, 2017, at 12:29 PM, Thomas Kellerer wrote:
>
> Israel Brewster schrieb am 19.06.2017 um 22:17:
>> SELECT
>> ...
>> (SELECT
>> array_agg(to_json(row(notedate,username,note)))
>> FROM sabrenotes
>> INNER JOIN users ON author=users.id
>>
I have two tables, a ticket table and a notes table, set up where each ticket can have multiple notes. I'm trying to come up with a query that returns the ticket fields as well as a field that is an array type field with the values being json-encoded note records. I've come up with the following su
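The truncated query above follows a common pattern worth spelling out in full. A self-contained sketch of it (the tickets table, its id column, and the ticket foreign key on sabrenotes are assumptions, not from the thread):

```sql
-- Per-ticket array of json-encoded note records, via a correlated
-- subquery; array_agg returns NULL (not an empty array) for tickets
-- with no notes.
SELECT t.*,
       (SELECT array_agg(to_json(row(n.notedate, u.username, n.note)))
        FROM sabrenotes n
        INNER JOIN users u ON n.author = u.id
        WHERE n.ticket = t.id) AS notes
FROM tickets t;
```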
On Feb 1, 2017, at 1:45 PM, Tom Lane wrote:
>
> Israel Brewster writes:
>> So just a bit ago I ran into a bit of excitement when the kernel decided
>> to kill one of my postmaster processes due to an out-of-memory issue,
>
> Fun :-(
>
>> So a single postmast
So just a bit ago I ran into a bit of excitement when the kernel decided to kill one of my postmaster processes due to an out-of-memory issue, which would have been fine, except that the problem was then compounded by Pacemaker attempting to restart postgresql, but only managing to get as far as st
On Jan 23, 2017, at 10:06 AM, Jerry Sievers wrote:
>
> Israel Brewster writes:
>
>> I have a backup strategy that in part consists of doing pg_dump s on my
>> various databases. In order to hopefully reduce/prevent operational
>> slow-down as a result of the
>>
I have a backup strategy that in part consists of doing pg_dump runs on my various databases. In order to hopefully reduce/prevent operational slow-down as a result of the backup, I do the dumps from my secondary server, configured as a hot standby with streaming replication. In general this works fine
> On Jan 9, 2017, at 1:54 PM, Kevin Grittner wrote:
>
> On Mon, Jan 9, 2017 at 11:49 AM, Israel Brewster
> wrote:
>
>> [load of new data]
>
>> Limit (cost=354643835.82..354643835.83 rows=1 width=9) (actual
>> time=225998.319..225998.320 rows=1 loops=
5 23:09 GMT+01:00 Paul Ramsey <mailto:pram...@cleverelephant.ca>>:
> Varying the segment length upwards might have a salutary effect for a while,
> as the efficiency improvement of fewer inner loops battles with the
> inefficiency of having more points selected by the index
r. Worth an experiment.
P

On Thu, Jan 5, 2017 at 1:00 PM, Israel Brewster <isr...@ravnalaska.net> wrote:
On Jan 5, 2017, at 10:38 AM, Paul Ramsey <pram...@cleverelephant.ca> wrote:
Yes, you did. You want a query that spits out a tupleset of geometries (one each for each wee segment), and the
sable - and like mentioned, that's a worst-case
scenario query. Thanks!
Of course, if you have any suggestions for further improvement, I'm all ears :-)
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
So apparently it is now doing a sequential scan on data rather than using the
index. And, of course, sorting 20 million rows is not trivial either. Did I do
something wrong with forming the query?
I have a database (PostgreSQL 9.6.1) containing 62,702,675 rows of latitude (numeric), longitude(numeric), elevation(integer) data, along with a PostGIS (2.3.0) geometry column (location), running on a CentOS 6.8 box with 64GB RAM and a RAID10 SSD data drive. I'm trying to get the maximum elevation
with the additional
column in about 10 minutes rather than 19+ hours. Now to see how long it takes
to generate the indexes :-)
Thanks again!
ax (perhaps more
efficient as well?) for populating the column. I'll give it a shot, and see how
it goes!
Short version: Do indexes impact the speed of an UPDATE, even when the indexed columns aren't changing?

Details: I have a table containing geographical data (Latitude, longitude, and elevation) with 406,833,705 records. The Latitude and Longitude columns are indexed. In order to better utilize the dat
Simple question: are WAL files archived when full, or when recycled? That is, are the WAL archive files "up-to-date" other than the current WAL file, or will the archives always be wal_keep_segments behind?
On Nov 29, 2016, at 8:12 AM, Israel Brewster wrote:
>
> On Nov 28, 2016, at 10:04 PM, Jeff Janes <mailto:jeff.ja...@gmail.com>> wrote:
>>
>> On Mon, Nov 28, 2016 at 2:50 PM, Israel Brewster > <mailto:isr...@ravnalaska.net>> wrote:
>>
>>> -
On Nov 28, 2016, at 10:20 PM, Thomas Kellerer wrote:
>
> Israel Brewster schrieb am 28.11.2016 um 23:50:
>>>
>>> pg_archivecleanup -n /mnt/server/archiverdir
>>> 00010010.0020.backup
>>
>> Ok, but where does that "000
On Nov 28, 2016, at 10:04 PM, Jeff Janes wrote:
>
> On Mon, Nov 28, 2016 at 2:50 PM, Israel Brewster <mailto:isr...@ravnalaska.net>> wrote:
>
>> - What is the "best" (or just a good) method of keeping the WAL archives
>> under control? Obviously when I
On Nov 28, 2016, at 2:13 PM, John R Pierce wrote:
>
> On 11/28/2016 2:50 PM, Israel Brewster wrote:
>>>
>>> Why wouldn't the streaming replica salvage that? Are they expected to fail
>>> together? Is the NFS share onto which you want to store your ba
On Nov 25, 2016, at 1:00 PM, Jeff Janes wrote:
>
> On Wed, Nov 23, 2016 at 10:16 AM, Israel Brewster <mailto:isr...@ravnalaska.net>> wrote:
> I was wondering if someone could inform me about, or point me to an online
> article about best practices for backing up a postgr
On Nov 23, 2016, at 10:31 AM, John R Pierce wrote:
>
> On 11/23/2016 11:20 AM, Israel Brewster wrote:
>> How do I determine which those are? Just based on the timestamp of the WAL
>> file, such that I could do something like take the timestamp of the last
>> basebackup
On Nov 23, 2016, at 9:42 AM, John R Pierce <pie...@hogranch.com> wrote:
On 11/23/2016 10:16 AM, Israel Brewster wrote:
To mitigate these issues, I am thinking of supplementing the individual dumps with a full base backup and WAL archiving to a NFS share. This should enable (relatively)
I was wondering if someone could inform me about, or point me to an online article about best practices for backing up a postgresql database cluster? At the moment, I have two servers running with streaming replication for failover purposes, and then I do nightly database dumps for recovery/backup
On Nov 18, 2016, at 5:48 AM, Jehan-Guillaume de Rorthais wrote:
>
> On Thu, 17 Nov 2016 08:26:59 -0900
> Israel Brewster mailto:isr...@ravnalaska.net>> wrote:
>
>>> On Nov 16, 2016, at 4:24 PM, Adrian Klaver
>>> wrote:
>>>
>>> On 11/16/
On Nov 16, 2016, at 11:39 PM, Jehan-Guillaume de Rorthais
wrote:
>
> On Wed, 16 Nov 2016 15:51:26 -0900
> Israel Brewster mailto:isr...@ravnalaska.net>> wrote:
>
>> I've been playing around with streaming replication, and discovered that the
>> followin
> On Nov 16, 2016, at 4:24 PM, Adrian Klaver wrote:
>
> On 11/16/2016 04:51 PM, Israel Brewster wrote:
>> I've been playing around with streaming replication, and discovered that
>> the following series of steps *appears* to work without complaint:
>>
>>
I've been playing around with streaming replication, and discovered that the following series of steps *appears* to work without complaint:
- Start with master on server A, slave on server B, replicating via streaming replication with replication slots.
- Shut down master on A
- Promote slave on B to
On Sep 27, 2016, at 11:16 AM, John R Pierce wrote:
>
> On 9/27/2016 12:06 PM, Israel Brewster wrote:
>> That helps for one-time stat collection, but as I mentioned in my original
>> message, since connections may not last long, I could be getting close to,
>> or eve
On Sep 27, 2016, at 10:55 AM, Melvin Davidson wrote:
>
>
>
> On Tue, Sep 27, 2016 at 2:46 PM, Israel Brewster <mailto:isr...@ravnalaska.net>> wrote:
>> >I'm still curious as to how I can track concurrent connections, ...
>>
>> Have you consid
On Sep 27, 2016, at 10:55 AM, Jonathan Vanasco wrote:
>
>
> On Sep 27, 2016, at 2:46 PM, Israel Brewster wrote:
>
>> I do have those on, and I could write a parser that scans through the logs
>> counting connections and disconnections to give a number of current
>
nfortunately, only seems to track connections per second and not concurrent
connections), already existed, or that there was some way to have the database
itself track this metric. If not, well, I guess that's another project :)
On Sep 27, 2016, at 9:55 AM, John R Pierce wrote:
>
> On 9/27/2016 9:54 AM, Israel Brewster wrote:
>>
>> I did look at pgbadger, which tells me I have gotten as high as 62
>> connections/second, but given that most of those connections are probably
>> very short
On Sep 27, 2016, at 10:07 AM, Adrian Klaver wrote:
>
> On 09/27/2016 09:54 AM, Israel Brewster wrote:
>> I have a Postgresql (9.4.6) cluster that hosts several databases, used
>> by about half-a-dozen different in-house apps. I have two servers set up
>> as mas
I have a Postgresql (9.4.6) cluster that hosts several databases, used by about half-a-dozen different in-house apps. I have two servers set up as master-slave with streaming replication. Lately I've been running into an issue where one of the apps periodically can't connect to the db. The problem
On Apr 28, 2016, at 11:21 AM, David G. Johnston
wrote:
>
> On Thu, Apr 28, 2016 at 12:09 PM, Israel Brewster <mailto:isr...@ravnalaska.net>> wrote:
>
> >
> > It'll kill your performance, but if aesthetics are that important to you...
>
> They'
>
> On Apr 28, 2016, at 10:39 AM, Vik Fearing wrote:
>
> On 04/28/2016 08:30 PM, Israel Brewster wrote:
>> This is probably crazy talk, but in Postgresql is there any way to have
>> a "keyed" sequence? That is, I have a table with a list of departments.
>>
This is probably crazy talk, but in Postgresql is there any way to have a "keyed" sequence? That is, I have a table with a list of departments. While *relatively* static, technically speaking this is a dynamic list - someone certainly could decide to add a department, or combine two departments int
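One way to emulate such a "keyed" sequence is a counter table with one row per department, bumped atomically. A sketch (all table and column names here are assumptions, not from the thread):

```sql
-- One counter row per department.
CREATE TABLE dept_counters (
    dept    text PRIMARY KEY,
    counter bigint NOT NULL DEFAULT 0
);

-- Hand out the next number for one department. Concurrent callers
-- serialize on the row lock, so no number is issued twice.
UPDATE dept_counters
   SET counter = counter + 1
 WHERE dept = 'shipping'
RETURNING counter;
```

Unlike a real sequence, this blocks concurrent callers on the same key and rolls back with the transaction, but in exchange the numbering per department is gap-free.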
On Oct 12, 2015, at 11:50 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
On Mon, Oct 12, 2015 at 11:39 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
Jeff Janes <jeff.ja...@gmail.com> writes:
> On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster <isr...@ravnalaska.net>
> wro
On Oct 12, 2015, at 10:39 AM, Tom Lane wrote:
>
> Jeff Janes writes:
>> On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster
>> wrote:
>>> My first thought was to do something like this:
>>>
>>> SELECT * FROM (SELECT lognum,array_agg(flightnum) as
(or whatever)?
Are you looking for this? SELECT lognum, array_agg(flightnum) FROM logs WHERE flightnum ~ '^8' GROUP BY lognum;
ator, and
needs to be on the right. Of course, turning it around to be:
WHERE ANY(flightnum) like '8%'
gives me a syntax error. So is there any way I can run this query such that I
get any rows containing a flight number that starts with an 8 (or whatever)?
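Assuming a logs(lognum, flightnum) layout as the thread suggests, one way to express "keep the groups where any flight number starts with 8" is to filter after aggregation with bool_or in a HAVING clause. A sketch:

```sql
-- Keep every flightnum in the array, but only for lognums where at
-- least one flight number in the group matches the pattern.
SELECT lognum, array_agg(flightnum) AS flightnums
FROM logs
GROUP BY lognum
HAVING bool_or(flightnum LIKE '8%');
```

HAVING runs per group, so bool_or asks the "any element matches" question directly, without needing ANY on the left-hand side of an operator.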
--
How sensitive is PostgreSQL to a failure that causes it to lose a single tablespace, while the rest of the database cluster is still accessible? Will it continue running, allowing access to databases that aren't in the missing tablespace, or will it crash (or similar)?

I have one rather large datab
; yet. We originally found a presentation from The PostgreSQL Conference, PostgreSQL High Availability with Corosync/Pacemaker, and then bought the book PostgreSQL 9.0 High Performance (a sample chapter is available as a PDF).

All working out well in testing at the moment.
Steve Pritchard
British Trust for O
I have a couple of questions regarding setting up and administering a high availability Postgresql "cluster". At the moment the setup I have consists of two servers set up with streaming replication using the named replication slots feature of Postgresql 9.4 (currently running 9.4.1). Let me know i
> On Mar 17, 2015, at 1:41 PM, Marc Mamin wrote:
>
>
>>> On Tue, Mar 17, 2015, at 11:30 AM, Adrian Klaver wrote:
>>>> On 03/17/2015 10:57 AM, Israel Brewster wrote:
>>>>>
>>>>>
>>>>>> On Mar 17, 2015, at 9:30 AM, Pa
end 3/17.
Right. And I will run into some of those (at least the crossing midnight), so
I'll keep an eye out.
On Mar 17, 2015, at 9:05 AM, David G. Johnston
wrote:
>
> On Tuesday, March 17, 2015, Israel Brewster <mailto:isr...@ravnalaska.net>> wrote:
>
>
> > On Mar 17, 2015, at 8:09 AM, Paul Jungwirth > > wrote:
> >
> >>> test=> select h,
ased on 45 minutes from row id 1 [8:15-9:00)
plus the full hour [08:00-9:00) from row id 3, the hour 9 value is based on the
amount of rows 1,2 and 3 that fall within the 9 hour, etc.
> On Mar 16, 2015, at 3:46 PM, Adrian Klaver wrote:
>
> On 03/16/2015 04:16 PM, Israel Brewster wrote:
>> On Mar 16, 2015, at 2:22 PM, David G. Johnston
>>
On Mar 16, 2015, at 2:22 PM, David G. Johnston
wrote:
>
> On Mon, Mar 16, 2015 at 3:13 PM, Adrian Klaver <mailto:adrian.kla...@aklaver.com>> wrote:
> On 03/16/2015 02:57 PM, Israel Brewster wrote:
> I have a table with two timestamp columns for the start time and end
On Mar 16, 2015, at 2:16 PM, Paul Jungwirth
wrote:
>
>> I have a table with two timestamp columns for the start time and end
>> time of each record (call them start and end).I'm trying to figure out
>> if there is a way to group these records by "hour of day",
>
> I think you can do this by sel
On Mar 16, 2015, at 2:13 PM, Adrian Klaver wrote:
>
> On 03/16/2015 02:57 PM, Israel Brewster wrote:
>> I have a table with two timestamp columns for the start time and end
>> time of each record (call them start and end).I'm trying to figure out
>> if there is a
I have a table with two timestamp columns for the start time and end time of each record (call them start and end).I'm trying to figure out if there is a way to group these records by "hour of day", that is the record should be included in the group if the hour of the day for the group falls anywhe
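One sketch of that hour-of-day bucketing (the table and column names are assumptions, and the columns are renamed here because "end" is a reserved word): expand each record into one row per clock hour it overlaps using generate_series, then group by the hour.

```sql
-- For a record spanning 8:15-9:30 the series yields 8:00 and 9:00,
-- so the record is counted in both the 8 and 9 buckets.
SELECT extract(hour FROM h) AS hour_of_day, count(*) AS records
FROM shifts s
CROSS JOIN LATERAL generate_series(
         date_trunc('hour', s.start_ts),
         s.end_ts,
         interval '1 hour') AS h
GROUP BY 1
ORDER BY 1;
```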
On Jan 29, 2015, at 1:20 PM, Michael Heaney wrote:
> On 1/29/2015 5:12 PM, Israel Brewster wrote:
>> I'm working on setting up a new PostgreSQL database server, and would like
>> to be able to monitor a number of statistics on it, such as:
>>
>> number of
I'm working on setting up a new PostgreSQL database server, and would like to be able to monitor a number of statistics on it, such as:
- number of connections
- number of queries
- query times
- etc.
All these stats are easily available, either from the pg_stat_statements view (which I have enabled) and the li
Currently, when I need to create/edit a stored procedure in Postgresql, my workflow goes like the following:
- Create/edit the desired function in my "DB Commands" text file
- Copy and paste function into my development database
- Test
- Repeat above until it works as desired
- Copy and paste function i
-D
/usr/local/pgsql/data
-h
*
RunAtLoad
UserName
_postgres
On Oct 8, 2014, at 9:30 AM, Emanuel Calvo wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
>
>
> On 08/10/14 at 14:01, Israel Brewster wrote:
>> I am currently doing periodic syncs of one of my production
>> databases to my development database
I am currently doing periodic syncs of one of my production databases to my development database using the command pg_dump -ch | psql, run on my development server. This works well enough, but as the size of the production database grows, this command is, for obvious reasons, getting progressive
I have a Postgresql 9.3.5 server running on CentOS 6.5. In looking at some stats today, I saw that it was handling about 4-5 transactions/second (according to the SELECT sum(xact_commit+xact_rollback) FROM pg_stat_database; query), and an instance of the postmaster process was consistently showing
---
Israel Brewster
Computer Support Technician II
Era Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7250 x7293
---
On Mar 6, 2014, at 1:25 PM, Steve Crawford
wrote:
> On 03/06/2014 09:33 AM, Israel Brewster wrote:
>> For starters, this happened again this morning (no data prior to 4:45 am and
>> sequence reset), so whatever is going on appears to be reoccurring. Also, I
>> forgo
aps the file
from such) that is being loaded in, but I still don't know how or why. Closer
though. Thanks for the suggestions!
On Mar 6, 2014, at 10:03 AM, Adrian Klaver wrote:
> On 03/06/2014 10:43 AM, Israel Brewster wrote:
>> On Mar 6, 2014, at 9:03 AM, Adrian Klaver wrote:
>>
>
>>>
>>> Well something is happening. See my notes on logging below to help track
>>> down
On Mar 6, 2014, at 9:03 AM, Adrian Klaver wrote:
> On 03/06/2014 09:33 AM, Israel Brewster wrote:
>> For starters, this happened again this morning (no data prior to 4:45 am and
>> sequence reset), so whatever is going on appears to be reoccurring. Also, I
>> forgo
On Mar 5, 2014, at 10:01 AM, Thom Brown <t...@linux.com> wrote:
On 5 March 2014 18:22, Israel Brewster <isr...@eraalaska.net> wrote:
I have a Postgresql 9.2.4 database containing real-time tracking data for our aircraft for the past week (theoretically). It is populated by two differe
On Mar 5, 2014, at 10:01 AM, Alvaro Herrera wrote:
> Israel Brewster wrote:
>
>> So my question is, aside from someone going in and mucking about in the wee
>> hours of the morning, what could possibly cause this behavior? What sort of
>> event could cause all dat
/05/2014 10:22 AM, Israel Brewster wrote:
>> My first thought was "Oh, I must have a typo in my
>> cleanup routine, such that it is deleting all records rather than only
>> those a week old, and it's just that no one has noticed until now". So I
>> looked at
I have a Postgresql 9.2.4 database containing real-time tracking data for our aircraft for the past week (theoretically). It is populated by two different processes: one that runs every few minutes, retrieving data from a number of sources and storing it in the DB, and one that has an "always on" c
On Dec 29, 2009, at 5:41 AM, Adrian Klaver wrote:
On Monday 28 December 2009 8:58:38 am Israel Brewster wrote:
On Dec 24, 2009, at 12:53 PM, Adrian Klaver wrote:
On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
This is sort of a PostgreSQL question/sort of a general SQL
On Dec 24, 2009, at 12:53 PM, Adrian Klaver wrote:
On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
This is sort of a PostgreSQL question/sort of a general SQL question,
so I apologize if this isn't the best place to ask. At any rate, I
know in PostgreSQL you can issue a co
This is sort of a PostgreSQL question/sort of a general SQL question, so I apologize if this isn't the best place to ask. At any rate, I know in PostgreSQL you can issue a command like 'SELECT "time"(timestamp_column) from table_name' to get the time part of a timestamp. The problem is that this co
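For the portability concern, the same result can be had with a cast instead of the PostgreSQL-specific "time"(...) function call. A sketch, with assumed table and column names:

```sql
-- Extract the time-of-day part of a timestamp column.
SELECT ts::time FROM t;          -- PostgreSQL cast shorthand
SELECT CAST(ts AS time) FROM t;  -- standard SQL, more portable
```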
On Dec 21, 2009, at 9:34 AM, Pavel Stehule wrote:
2009/12/21 Israel Brewster :
On Dec 19, 2009, at 2:59 AM, Filip Rembiałkowski wrote:
2009/12/18 Gauthier, Dave
Can you define your own commands in PG.
In psql, yes:
\set sel 'SELECT * FROM'
:sel clients;
\set desc '\
On Dec 19, 2009, at 2:59 AM, Filip Rembiałkowski wrote:
2009/12/18 Gauthier, Dave
Can you define your own commands in PG.
In psql, yes:
\set sel 'SELECT * FROM'
:sel clients;
\set desc '\\d'
:desc table
E.g., if users from other DBs use “describe foo” to get the metadata fo
On Dec 3, 2009, at 10:54 AM, Tom Lane wrote:
Israel Brewster writes:
1) From the second paragraph of that post:
If you add something like "-arch i386 -arch ppc" to CFLAGS and build
normally, you get real working multiarch binaries and libraries.
Which is exactly the problem th
On Dec 2, 2009, at 5:16 PM, Tom Lane wrote:
Israel Brewster writes:
Well, I'm not trying to use the server or client programs from this
build - I just want the universal libraries for my programs. My point
in this last section, however, doesn't necessarily extend as far as
actua
On Dec 2, 2009, at 8:26 AM, Tom Lane wrote:
Israel Brewster writes:
That said, I did sort of get this to work. What I ended up doing was
building for each architecture separately (but on the same machine),
then using lipo to combine the resulting libraries. When I took all
but one
On Dec 2, 2009, at 1:03 AM, Dave Page wrote:
2009/12/2 Grzegorz Jaśkiewicz :
On Tue, Dec 1, 2009 at 11:43 PM, Israel Brewster >
wrote:
I am trying to create a static, universal (x86 and ppc at least -
64 bit
architectures are optional) build of the PostgreSQL libraries on a
Mac O
I am trying to create a static, universal (x86 and ppc at least - 64 bit architectures are optional) build of the PostgreSQL libraries on a Mac OS X 10.6 machine. The command line I am using is as follows:

CFLAGS="-arch i386 -arch ppc -arch x86_64" ./configure --disable-shared

The configure runs fine