On 7 July 2015 at 10:52, Daniele Varrazzo
wrote:
> Patch attached.
>
>
>
Thanks for the patch.
Would you be able to post it to pgsql-hack...@postgresql.org instead?
Regards
David Rowley
--
David Rowley http://www.2ndQuadrant.com/
with a small rate of change
then a weekly/biweekly full backup with daily incremental/differential
should be very efficient.
--
- David Steele
da...@pgmasters.net
INSERT INTO c SELECT * FROM a JOIN b on a.id = b.id WHERE a.id >= 800;
Of course, you'd need to be very careful to ensure that the results of each
SELECT never overlap. It would be nice to invent some better way than this
that divided the workload evenly even when the tables grow.
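For what it's worth, one sketch of a growth-proof split (same hypothetical tables a, b, and c as above): partition on a stable function of the key, such as a modulus, rather than on fixed ranges, so each session gets a disjoint and roughly even slice no matter how large the tables get.

```sql
-- Each of four sessions runs one of these; the remainders partition the
-- key space evenly and never overlap, even as the tables grow.
INSERT INTO c SELECT * FROM a JOIN b ON a.id = b.id WHERE a.id % 4 = 0;
INSERT INTO c SELECT * FROM a JOIN b ON a.id = b.id WHERE a.id % 4 = 1;
INSERT INTO c SELECT * FROM a JOIN b ON a.id = b.id WHERE a.id % 4 = 2;
INSERT INTO c SELECT * FROM a JOIN b ON a.id = b.id WHERE a.id % 4 = 3;
```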
Then yo
ng on your log level).
--
-David
da...@pgmasters.net
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
in memory?
>
>
It depends which memory you're talking about. If you mean pages that are in
the shared buffers then you can just
EXPLAIN (ANALYZE, BUFFERS) select ... from table;
You'll see Buffers: shared read=N if any buffers were "read from disk" but
kee
write a function to do this?
Obviously I can just issue multiple SELECT COUNT(column)... statements, but I'd
rather not. Thanks, David
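Since count(expression) already skips NULLs, a single statement can return all the per-column counts in one scan (table and column names here are hypothetical):

```sql
-- One pass over the table; each count(col) tallies only non-NULL values.
SELECT count(col1) AS col1_non_null,
       count(col2) AS col2_non_null,
       count(col3) AS col3_non_null
FROM mytable;
```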
 143
----+---
col6 |  1
On 15 August 2015 at 02:32, David Nelson wrote:
> Hello list, Apologies if this has been asked before. My search only
> turned up ways to list the total non-null values for all columns as a
> single number. I want the count for each column, column by column. I have
> inherited a database consi
On Fri, Aug 14, 2015 at 9:59 AM, John McKown
wrote:
>
> David,
>
> It still came through as junk. But I reconstructed it below
>
> === original message ===
> Apologies if this has been asked before. My search only turned up ways to
list the total non-null values for all colum
On Fri, Aug 14, 2015 at 10:00 AM, Tom Lane wrote:
>
> David Rowley writes:
> > On 15 August 2015 at 02:32, David Nelson
wrote:
> >> Hello list,Apologies if this has been asked before. My search
only
> >> turned up ways to list the total non-null values for all colu
On Fri, Aug 14, 2015 at 9:17 PM, Ken Tanzer wrote:
> On Fri, Aug 14, 2015 at 6:35 PM, David Nelson
> wrote:
>
>> On Fri, Aug 14, 2015 at 10:00 AM, Tom Lane wrote:
>> >
>> > David Rowley writes:
>>
>
>
>> That is the way I would do it for a
Howdy All,
For a very long time I've held the belief that splitting PGDATA and xlog on
linux systems fairly universally gives a decent performance benefit for many
common workloads.
(I've seen up to 20% personally).
I was under the impression that this had to do with regular fsync()'s from the
On Tue, Aug 25, 2015 at 10:16:37AM PDT, Andomar wrote:
> >However, I know from experience that's not entirely true, (although it's not
> >always easy to measure all aspects of your I/O bandwidth).
> >
> >Am I missing something?
> >
> Two things I can think of:
>
> Transaction writes are entirely s
> On Aug 25, 2015, at 10:45 AM, Bill Moran wrote:
>
> On Tue, 25 Aug 2015 10:08:48 -0700
> David Kerr wrote:
>
>> Howdy All,
>>
>> For a very long time I've held the belief that splitting PGDATA and xlog on
>> linux systems fairly universally
Good morning all,
I am creating an updatable view on a set of tables, and just ran into
unexpected (or more likely misunderstood) behavior with the UPDATE
statement. If an attribute is not explicitly listed in the UPDATE statement,
the NEW value appears to be populated with the OLD value. Unless I
>> So in the UPDATE statement, I only provided a value for last_user. But
the
>> first test of the trigger function tests for a NULL value of
>> NEW.empname. Since
>> I did not provide one, I was expecting it to be NULL and an exception to
>> be thrown. Am I just misunderstanding how things work? I
On Thu, Aug 27, 2015 at 1:21 PM, Jerry Sievers
wrote:
>
> David Nelson writes:
>
> >>> So in the UPDATE statement, I only provided a value for last_user.
But the
> >>> first test of the trigger function tests for a NULL value of
> >>> NEW.empn
On Fri, Aug 28, 2015 at 6:39 AM, Ladislav Lenart wrote:
>
> Hello.
>
>
> On 27.8.2015 18:35, David Nelson wrote:
> >>> So in the UPDATE statement, I only provided a value for last_user.
But the
> >>> first test of the trigger function tests for a NULL value
> Just for the sake of completeness...
>
> If the value (empname in the above example) can be NULL, the comparison does
not work, because
>
> SELECT NULL = NULL
>
> returns NULL which is treated as FALSE.
>
> But I am sure you know this :-)
>
>
> HTH,
>
> Ladislav Lenart
>
> __
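If a null-safe comparison is needed in the trigger, PostgreSQL's IS NOT DISTINCT FROM treats two NULLs as equal (a sketch, not taken from the original trigger function):

```sql
SELECT NULL = NULL;                     -- NULL, treated as false in WHERE
SELECT NULL IS NOT DISTINCT FROM NULL;  -- true: NULLs compare as equal
```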
paces out of $PGDATA.
There has been a lot of discussion about disallowing it in future
releases as it causes problems for various tools.
--
-David
da...@pgmasters.net
Indeed, I will look into pg_basebackup.
pg_basebackup is good for creating replicas but for real backup you
might want to consider purpose-built backup software like pgBackRest or
barman.
--
-David
da...@pgmasters.net
ome outer query.
If foreign keys were updated immediately, like indexes normally are, then
this wouldn't be an issue.
I've attached a file with 2 examples of when this can happen.
Regards
David Rowley
--
David Rowley http://www.2ndQuadrant.com/
ore any missing file errors while taking the backup and
be sure to back up all the WAL segments required to make the database
consistent. pg_basebackup will do all of this for you.
Another option is to use purpose-built backup software such as
pgBackRest or barman, both of which are open so
On 9/24/15 10:15 AM, Stephen Frost wrote:
* David Steele (da...@pgmasters.net) wrote:
It's actually perfectly normal for files to disappear during a
backup, even when pg_start_backup() is called first (never perform
file-level backup without calling pg_start_backup()). The database
*wi
our tables locally, and populated them with the same number of
records as your row estimates in the EXPLAIN you pasted and I got:
test=# delete from omdok where not exists (select 1 from omrid where
omdok.dokumn = omrid.dokumnr);
DELETE 0
Time: 1698.233 ms
Whereas with the NOT IN() I cancelled it after 10 minutes.
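For comparison, the two forms side by side (table and column names taken from the quoted query). NOT IN is also a semantic trap: a single NULL dokumnr in the subquery makes it match nothing.

```sql
-- Anti-join form: plans efficiently and is NULL-safe.
DELETE FROM omdok
WHERE NOT EXISTS (SELECT 1 FROM omrid WHERE omrid.dokumnr = omdok.dokumn);

-- NOT IN form: slow here, and deletes zero rows if dokumnr is ever NULL.
DELETE FROM omdok
WHERE dokumn NOT IN (SELECT dokumnr FROM omrid);
```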
Regards
David Rowley
--
David Rowley http://www.2ndQuadrant.com/
<http://www.2ndquadrant.com/>
PostgreSQL Development, 24x7 Support, Training & Services
ime=159.840..201915.765
> rows=7672 loops=1)
> Filter: (group_id = 45)
> Rows Removed by Filter: 212699113
>
>
Rows Removed by Filter: 212699113 seems to indicate that your 473733 row
count for "customers" is incorrect.
If you're doing lots of filtering on
I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
connect to the database via psql or pgadminIII both locally and remotely.
However, all of the local apps ( 3 different ones ) that would connect to it
are failing to connect.
I see no error events in the messages/postgres
st all all 192.168.32.0/19 trust
I don’t believe this to be a postgres problem, but you don’t know what you
don’t know.
From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Friday, October 23, 2015 10:36 AM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERA
.
After seeing that I was able to run down the installation flaws.
Thanks very much to all for your suggestions on this matter.
Regards
Dave
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Friday, October 23, 2015 12:49 PM
To: Day, David
Cc: pgsql-general@postgresql.org
I just installed PostgreSQL and started reading the tutorial @
http://www.postgresql.org/docs/8.0/static/tutorial-createdb.html and was
snowed at square one. To create a database, I'm supposed to type the
following command:
$ createdb mydb
But it doesn't say where I'm supposed to type it. When I
"not found" error message.
On Sat, Oct 24, 2015 at 6:35 PM, Adrian Klaver
wrote:
> On 10/24/2015 06:21 PM, David Blomstrom wrote:
>
>> I just installed PostgreSQL and started reading the tutorial @
>> http://www.postgresql.org/docs/8.0/static/tutorial-createdb.html and was
>
I'd greatly prefer a GUI. It seems like a command-line tool would be
incredibly tedious when creating tables, modifying them, filling them with
data, etc. Thanks.
On Sat, Oct 24, 2015 at 7:28 PM, Adrian Klaver
wrote:
> On 10/24/2015 07:20 PM, David Blomstrom wrote:
>
>> I'
Hmmm...I have pgAdminIII. When I click on Server, there's no option to
create a database.
On Sat, Oct 24, 2015 at 7:37 PM, John R Pierce wrote:
> On 10/24/2015 7:33 PM, David Blomstrom wrote:
>
>> I'd greatly prefer a GUI. It seems like a command-line tool would be
>
"Is there an entry under Servers?"
PostgreSQL 9.5 (localhost) - but there's a red X over it.
On Sat, Oct 24, 2015 at 7:52 PM, Adrian Klaver
wrote:
> On 10/24/2015 07:44 PM, David Blomstrom wrote:
>
>> Hmmm...I have pgAdminIII. When I click on Server, there's n
Good tip; I can now see the database I created. Thanks.
On Sat, Oct 24, 2015 at 8:20 PM, Adrian Klaver
wrote:
> On 10/24/2015 08:00 PM, David Blomstrom wrote:
>
>> "Is there an entry under Servers?"
>>
>> PostgreSQL 9.5 (localhost) - but there's a red X
he
> possibilities. I personally find the command line more productive, but
> there is a learning curve.
>
>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
--
David Blomstrom
Writer & Web Designer (Mac, M$ & Linux)
www.geobop.org
sting. The
community seems painfully small compared to MySQL, and there are fewer
online resources. But I'm guessing that will change in the coming years. I
remember when CSS was a strange, foreign thing. ;)
On Sun, Oct 25, 2015 at 6:28 AM, Adrian Klaver
wrote:
> On 10/24/2015 09:19 PM, Dav
L?
On Sun, Oct 25, 2015 at 8:14 AM, Alexander Reichstadt
wrote:
> Hello David,
>
> This cookbook has worked for me for the last five years on Mac OS X,
> always and totally reliable up to incl. El Capitan and every intermittent
> release before. And most of all, it worked even as a r
I'm creating a website focusing on living things (mostly animals). I have
multiple huge MySQL database tables with animal taxons arranged in a
parent-child relationship. I was trying to figure out how I could navigate
to a URL like MySite/life/mammals and display the number of children (i.e.
orders
ert
wrote:
> On Sun, Oct 25, 2015 at 08:32:43AM -0700, David Blomstrom wrote:
>
> > Someone said when you add a new column in Postgre, it's appended to the
> end
> > of the table. Does that mean that's where it has to stay, or can you
> > rearrange columns
>
>
I've already learned that I can install and use MySQL and PostgreSQL
simultaneously on my laptop, though I haven't yet learned how to connect to
my Postgre database from a web page, write a query, etc. So here's what I'm
planning...
I have several big, complex websites driven by a MySQL database.
Oh my God, Whatcom County. I went to school in Bellingham years ago, when
the population was just 50,000. It was such a beautiful place, I'd be
afraid to go back. ;)
Thanks for the tips.
On Sun, Oct 25, 2015 at 10:46 AM, Adrian Klaver
wrote:
> On 10/25/2015 09:37 AM, David Blomstr
361/hierarchical-query-in-mysql-ii
And this is the discussion where someone suggested I check out PostgreSQL
--
http://stackoverflow.com/questions/33313021/displaying-simple-counts-from-stored-procedure
On Sun, Oct 25, 2015 at 10:59 AM, Adrian Klaver
wrote:
> On 10/25/2015 08:48 AM, Dav
ago.
On Sun, Oct 25, 2015 at 2:07 PM, Alban Hertroys wrote:
>
> > On 25 Oct 2015, at 19:38, Adrian Klaver
> wrote:
> >
> > On 10/25/2015 11:12 AM, David Blomstrom wrote:
> >> I'm sorry, I don't know exactly what you mean by "definitions." The
fe's entire database. The
Encyclopedia of Life and WIkipedia are both enormous projects, but there
are some amazing gaps in both projects that I hope to fill.
On Sun, Oct 25, 2015 at 8:51 PM, Adrian Klaver
wrote:
> On 10/25/2015 06:10 PM, David Blomstrom wrote:
>
>> @ Adrian Kl
actly what the codes
mean.
Then again, when I navigate to the Encyclopedia of Life's aardvark page @
http://www.eol.org/pages/327830/overview the code is actually amazingly
short.
On Sun, Oct 25, 2015 at 9:04 PM, David Blomstrom
wrote:
> What was amazed me is the HUGE tables (as in too b
Making it more confusing, I believe there are several different series of
numerical ID's. See this page, for example...
https://www.wikidata.org/wiki/Q46212
On Sun, Oct 25, 2015 at 9:10 PM, David Blomstrom
wrote:
> It's also interesting that some entities (e.g. EOL) are now usin
ve to learn how to use the new search function, Elastic, or
whatever it's called. Speaking of which, I just discovered the new Russian
and Chinese search engines, Yandex and Baidu. They have some interesting
possibilities, too. ;)
On Sun, Oct 25, 2015 at 9:12 PM, David Blomstrom
wrote:
> M
ble - but which LSID does
one choose? But it's amazing how many "aliases" are attached to many
taxonomic names; utterly bewildering.
On Sun, Oct 25, 2015 at 10:09 PM, Adrian Klaver
wrote:
> On 10/25/2015 09:10 PM, David Blomstrom wrote:
>
>> It's also interesting that
Klaver wrote:
>>>>
>>>>> On 10/26/2015 08:12 AM, Rob Sargent wrote:
>>>>>
>>>>>> On 10/26/2015 08:43 AM, Jim Nasby wrote:
>>>>>>
>>>>>>> On 10/25/15 8:10 PM, David Blomstrom wrote:
>>>>>
I decide to organize things by taxonomic levels (e.g. kingdom,
class, etc.), then it should be easy to rename the table, delete a few
columns, and refill it with data associated with a particular class.
On Mon, Oct 26, 2015 at 1:29 PM, David Blomstrom
wrote:
> Sorry for the late response.
s from scratch.
It gets still more complicated when you get into "specialist names." ;) But
the system I've set up so far seems to be working pretty nicely.
On Mon, Oct 26, 2015 at 1:41 PM, Rob Sargent wrote:
> On 10/26/2015 02:29 PM, David Blomstrom wrote:
>
>> S
PDO::PARAM_STR);
$stmt->execute();
$Total = $stmt->fetch();
On Mon, Oct 26, 2015 at 1:51 PM, David Blomstrom
wrote:
> I'm focusing primarily on vertebrates at the moment, which have a total of
> (I think) about 60,000-70,000 rows for all taxons (species, families,
> etc.).
What does "top post" mean? And what do you mean by "embedded spaces"? Are
you referring to the underscores in the TABLE name?
On Mon, Oct 26, 2015 at 2:12 PM, Gavin Flower wrote:
> Hi David,
>
> Please don't top post!
>
>
> On 27/10/15 09:42, David
Here's what it looks like now:
CREATE TABLE public.gz_life_mammals
(
id integer NOT NULL,
taxon text NOT NULL,
parent text NOT NULL,
slug text,
namecommon text,
plural text,
extinct smallint NOT NULL,
rank smallint NOT NULL,
key smallint NOT NULL,
CONSTRAINT "Primary Key" PRIM
I tried to import a CSV file into a PostgreSQL table using pgAdmin III. I
got an error message: "extra data after last column."
All my spreadsheets have an "end of data" column that has /r/n in each
cell. When I import a CSV file into a MySQL table, everything beyond /r/n
is ignored. Is there som
OK, I figured out how to drop the primary key and change the NULLS. So it
looks like this now:
CREATE TABLE public.gz_life_mammals
(
id integer NOT NULL,
taxon text NOT NULL,
parent text NOT NULL,
slug text,
namecommon text,
plural text,
extinct smallint NOT NULL,
rank smallint NO
I pasted this into the shell...
CREATE ROLE david
LOGIN
CREATEDB;
CREATE DATABASE GeoZoo2
OWNER david;
and I did it again, replacing LOGIN with my password, but when I refresh
pgAdmin III, there are no new databases.
And when I paste this in, it doesn't create a table...
CREATE
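For reference, a sketch of how those statements are usually written ('secret' is a placeholder): the password belongs in a PASSWORD clause rather than in place of LOGIN, and unquoted names like GeoZoo2 fold to lowercase.

```sql
CREATE ROLE david LOGIN PASSWORD 'secret' CREATEDB;  -- password in its own clause
CREATE DATABASE geozoo2 OWNER david;                 -- unquoted, so stored as "geozoo2"
```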
When I type in /l, it just says "database Postgres," even though I can see
TWO databases in pgAdmin III. When I type in /dt, it says Username
[postgres].
On Mon, Oct 26, 2015 at 3:40 PM, Gavin Flower wrote:
> On 27/10/15 11:18, David Blomstrom wrote:
> [...]
>
>>
Server [localhost]: /l
Database [postgres]:
* * * * *
Server [localhost]: /dt
Database [postgres]:
* * * * *
However...I've noticed that when I open up the shell, I get multiple
instances - sometimes over half a dozen. If I type the same things into one
of the other instances, I get this:
Pr
Oops, let me try it again...
*Shell1*
Server [localhost]: \l
Database [postgres]:
* * * * *
Server [localhost]: \dt
Database [postgres]:
*Shell2*
Database [postgres]: \l
Port [5432]:
* * * * *
Port [5432]: \dt
Username [postgres]:
Wow, I must be getting dyslexic; yes, it is
\r\n
I guess it would be easy enough to just copy the stuff I want to import
into a separate spreadsheet and save it as a CSV. I've done that before
with really big spreadsheets, actually. Thanks.
On Mon, Oct 26, 2015 at 4:19 PM, Adrian Klaver
wrote:
> On 10/26/2015 04:13 PM, David Blomstrom wrote:
>
>> Oops, let me try it again...
>>
>
> Still not making sense.
>
> Show the exact command you are using to get the below and explain where
> you are running
ne and the same; just a fluke, I guess.
On Mon, Oct 26, 2015 at 4:22 PM, Gavin Flower wrote:
> On 27/10/15 12:15, David Blomstrom wrote:
>
>>
>> Server [localhost]: \c geozoo2 david
>>
>> Database [postgres]:
>>
>>
>> *Shell2*
>>
>> Us
No, I'm on a Mac running OS X El Capitan.
t 26, 2015 at 4:30 PM, John R Pierce wrote:
> On 10/26/2015 4:27 PM, David Blomstrom wrote:
>
>> I have two PostgreSQL icons on my taskbar - a blue elephant (pgAdmin III)
>> and a little monitor (PSQL). When I click on PSQL, it always opens at least
>> two windows or instanc
>>
>
> Actually OS X :
>
>
> http://www.postgresql.org/message-id/CAA54Z0hdYkqDDYP=8s577rwvz4qrn9+-mjkeyrot69um3ra...@mail.gmail.com
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
>
Last login: Mon Oct 26 16:35:25 on ttys002
/Library/PostgreSQL/9.5/scripts/runpsql.sh; exit
Davids-MacBook-Pro-2:~ davidblomstrom$
/Library/PostgreSQL/9.5/scripts/runpsql.sh; exit
Server [localhost]: \l
Database [postgres]: \dt
Port [5432]:
Username [postgres]: Cmd-Spacebar
Terminalpsql: cou
On Mon, Oct 26, 2015 at 5:02 PM, Adrian Klaver
wrote:
> On 10/26/2015 04:42 PM, David Blomstrom wrote:
>
>> Last login: Mon Oct 26 16:35:25 on ttys002
>>
>> /Library/PostgreSQL/9.5/scripts/runpsql.sh; exit
>>
>> Davids-MacBook-Pro-2:~ davidblomstrom$
>>
tgreSQL is obviously too advanced for me. I don't have a clue
about what's going on. Thanks for all the tips, though. I got some good
general pointers for designing my animals database.
On Mon, Oct 26, 2015 at 5:20 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
>
: extra command-line argument "[postgres]:" ignored
psql: invalid port number: "Port"
Press <return> to continue...
On Mon, Oct 26, 2015 at 5:49 PM, Gavin Flower wrote:
> On 27/10/15 13:29, John R Pierce wrote:
>
>> On 10/26/2015 5:20 PM, David G. Johnston wrote:
>>
&g
15 at 6:01 PM, Melvin Davidson
wrote:
> The law of O/S & databases:
> For every Linux / PostgreSQL user, there is an equal and opposite Mac /
> MySQL user.
> However, the latter is completely useless.
>
> On Mon, Oct 26, 2015 at 8:54 PM, David Blomstrom <
> david.blomst...@
. For you they may be liberating. Stick with pgAdmin and ask
> questions about it. There may even be a forum dedicated to it.
>
> Someday we can revisit "power tools"
>
> On Oct 26, 2015, at 7:08 PM, David Blomstrom
> wrote:
>
> Judging from the anti-Mac comments,
rning';
This must be done in the same session as the pg_start_backup().
--
-David
da...@pgmasters.net
On 10/27/15 3:32 PM, David Steele wrote:
On 10/27/15 3:16 PM, Leo Baltus wrote:
postgresql-9.4.5
When running pg_stop_backup() psql outputs on stderr:
NOTICE: pg_stop_backup complete, all required WAL segments have been
archived
--quiet does not seem to suppress it, what does?
This should
Can anyone tell me how to write the query described @
http://stackoverflow.com/questions/33402831/count-descendants-in-hierarchical-query
?
The answer's very thorough, but I don't know how to string two queries and
a function together like that. This doesn't work:
$sql = "select * from gz_life_ma
No, I get the same T_FUNCTION error.
Someone commented that the function...
create function tax_rank(id integer) returns text as $$
select case id
when 1 then 'Classes'
when 2 then 'Orders'
when 3 then 'Families'
when 4 then 'Genera'
Just so I understand what's going on, I can create a lookup table by
pasting this code...
create table taxon (
taxonid serial,
descr text
);
create table gz_life_mammals (
id serial,
taxonid integer, -- use the lookup table
parentid integer -- use the lookup table
);
...into pgAdmin III
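To see the lookup table in action, a join resolves both ids back to names (a sketch using only the columns defined above):

```sql
-- Resolve taxonid and parentid through the taxon lookup table.
SELECT m.id, t.descr AS taxon, p.descr AS parent
FROM gz_life_mammals m
JOIN taxon t ON t.taxonid = m.taxonid
JOIN taxon p ON p.taxonid = m.parentid;
```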
Thanks for the tips.
In pgAdmin III, I can create a table step-by-step by choosing Edit > Object
> New Table
But is there a place for me to paste in a block of code that creates the
table with just one process? If I click on the SQL icon, a SQL window opens
up, and I can paste the code into SQL Edit
at 3:36 PM, Adrian Klaver
wrote:
> On 10/30/2015 03:29 PM, David Blomstrom wrote:
>
>> Thanks for the tips.
>>
>> In pgAdmin III, I can create a table step-by-step by choosing Edit >
>> Object > New Table
>>
>> But is there a place for me to paste in a blo
Yes, I guess it does make sense to keep a copy of your actions.
In the meantime, I now have two new tables with the following schema:
-- Table: public.taxon
-- DROP TABLE public.taxon;
CREATE TABLE public.taxon
(
taxonid integer NOT NULL DEFAULT nextval('taxon_taxonid_seq'::regclass),
descr
The field descr would presumably hold the values I originally had in the
field Taxon - e.g. the names of various taxons, like 'Mammalia' and
'Canis-lupus.' The field id is just a numerical key, and I already have the
numerical values for parentid.
But what am I supposed to put in the field taxoni
Sorry, I don't think I specified that at the beginning.
My original/master table has several fields, four of which are relevant to
this hierarchical stuff - id, taxon, parent, parent_id.
The first is a numerical key, from 1 to probably somewhere around 8,000 for
mammals, 1 to 10,000 for birds, et
I think I answered my question @
http://www.the-art-of-web.com/sql/lookup-table/
It sounds like the field taxon_id is similar to the field id - it's just
automatically populated by a numerical key. So if I add a new taxon on row
5, then the taxon_id for the following row will change from 5 to 6 -
First consider the following table:
create table taxon (
taxonid serial,
descr text
);
As I understand it, "serial" means that column will automatically populate
with a numerical key.
If I want to fill the field 'descr' with a list of scientific names stored
in a spreadsheet, then how would
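One way, assuming the spreadsheet is exported as a one-column CSV (the file path is hypothetical): list only descr as the target column, and the serial taxonid fills itself.

```sql
-- psql meta-command; reads the file from the client side.
\copy taxon (descr) FROM 'scientific_names.csv' WITH (FORMAT csv)
```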
gresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of Raymond O'Donnell
> > Sent: Samstag, 31. Oktober 2015 12:42
> > To: David Blomstrom ;
> pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Selectively Importing Data
> >
> > On 31/10/
Awesome; thanks!
On Sat, Oct 31, 2015 at 7:19 AM, Andy Colson wrote:
> On 10/30/2015 05:10 PM, David Blomstrom wrote:
>
>> Just so I understand what's going on, I can create a lookup table by
>> pasting this code...
>>
>>
> I don't know anything ab
ssume could be applied to populations as opposed to
> phylogeny (the OP's use case). Does it deal with consanguinity? Does it
> perform well going "up" the tree (which is of course branched at every
> level)?
>
exists(select * from b where
a.id=b.id and a.id > random());
See: convert_EXISTS_sublink_to_join() for details.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hi,
There seems to be an unbounded growth of memory usage by the backend postgres
process representing a "permanent" session in our system.
The size/res values retrieved by running the "top" utility seem to exceed the
amount I would expect given the mem and buf tuning parameters of the
pos
14, 2016 11:22 AM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Server side backend permanent session memory usage ?
"Day, David" writes:
> There seems to be an unbounded growth of memory usage by the backend
> postgres process representing a "permanent&
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Wednesday, July 27, 2016 4:50 PM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Server side backend permanent session memory usage ?
"Day, David" writes:
> We have a server
w be a small
> subset of cluster on node-1.
>
> Benefit: If I have to restore only 5% of entire db, it should be lot faster.
pgBackRest allows specified databases to be restored from a cluster backup:
http://www.pgbackrest.org/user-guide.html#restore/option-db-include
I kno
--
David Rader
dav...@openscg.com
On Tue, Aug 9, 2016 at 10:23 AM, wrote:
> Logged by: Nicolas David
> Email address: nicolas.da...@verosoftware.com
> PostgreSQL version: 9.2.5
> Operating system: Windows 10
> Description:
>
> Dear All,
>
> we have a
>
>
> So is that really possible? Just copy the data between folders? if so,
> i'll probably chose option 2!!!
> Even that is 2.5TB I don't think the copy will take longer than 20
> minutes... and I'd still be able to perform reads...
>
> I'll do some test to see if option 2 can be done :)
>
> Than
You can reduce the time much further by pre-copying the files. Then, during
the maintenance window, only copy the deltas.
On Sep 1, 2016 9:43 PM, "Patrick B" wrote:
>
>
> 2016-09-02 15:36 GMT+12:00 Venkata B Nagothi :
>
>>
>> On Fri, Sep 2, 2016 at 12:48 PM, Patrick B
>> wrote:
>>
>>> H
>
>
> Isn't this heading in the wrong direction? We need to be more precise
> than 0 (since 0 is computed off of rounded/truncated time stamps), not less
> precise than 0.
>
> Cheers,
>
> Jeff
>
Hmm, you may be right; reading it four more times for comprehension, it looks
like it should be set to -
-11 17:23:40
>> 2016-09-13 20:00:51
>> 2016-09-13 20:00:51
>> 2016-09-13 20:00:51
>> 2016-09-13 20:00:51
>
>
>
> What am I doing wrong?
None of those dates are between your specified date range. If you want
to include all of 2016-09-13 timestamps, then you
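Presumably the fix is the usual half-open range, which catches every timestamp on the final day (table and column names hypothetical):

```sql
SELECT * FROM events
WHERE created_at >= '2016-09-11'
  AND created_at <  '2016-09-14';  -- half-open: includes all of 2016-09-13
```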
atic/auto-explain.html)
Setting this up will log the EXPLAIN ANALYZE to the PostgreSQL logs
when you execute the query as normal.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
s interesting.
How about trying to increase max_worker_processes to, say, 16. I know
you've said you've tried multiple times and it seems consistent, but
increasing this seems to be worth a try, if anything, to rule that
out.
--
David Rowley http://www.