I have a table with two timestamp columns for the start time and end
time of each record (call them start_time and end_time). I'm trying to
figure out if there is a way to group these records by "hour of day".

I think you can do this by selecting `FROM generate_series(0, 23) s(h)`
and then joining to your table:
test=> select h, count(*)
       from start_end, generate_series(0, 23) as s(h)
       where h between extract(hour from start_time)
                   and extract(hour from end_time)
       group by h
       order by h;

  h | count
----+-------
  8 |     2
  9 |     3
 10 |     2
 11 |     2
Note if you always want all 24 rows, even when the count is zero, make
the series the left side of a LEFT JOIN (and count a column from the
events table rather than count(*)).
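For example (a sketch, using the same start_end table as above):

SELECT s.h, count(se.start_time)
FROM generate_series(0, 23) AS s(h)
LEFT JOIN start_end se
  ON s.h BETWEEN extract(hour FROM se.start_time)
             AND extract(hour FROM se.end_time)
GROUP BY s.h
ORDER BY s.h;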
So next question: how do I get the "active" time per hour from this?
I think you just SUM() over the intersection between each hourly window
and each event, right? This might be easiest using tsrange and the range
intersection operator `*`, something like this:

SUM(extract(epoch from upper(slice) - lower(slice)) / 60)

...where `slice` is `tsrange(start_time, end_time) * tsrange(hour_start,
hour_end)`.
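Putting it all together, a sketch for a single day's report (the literal
date here is just an example):

SELECT h,
       sum(extract(epoch from upper(slice) - lower(slice))) / 60
         AS active_minutes
FROM (
  SELECT s.h,
         tsrange(se.start_time, se.end_time)
           * tsrange(timestamp '2015-03-15 00:00' + s.h * interval '1 hour',
                     timestamp '2015-03-15 00:00' + (s.h + 1) * interval '1 hour')
           AS slice
  FROM start_end se
  CROSS JOIN generate_series(0, 23) AS s(h)
) x
WHERE NOT isempty(slice)
GROUP BY h
ORDER BY h;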
Some weird edge cases to be careful about: activities that cross midnight.
>> Activities that last more than one full day,
>> e.g. start 3/15 and end 3/17.
Right. And I will run into some of those (at least the crossing-midnight
case), so I'll keep an eye out.
If you are running the report on more than one day of data, you'll also
need to slice each event by calendar day, not just by hour.
> This underfits:
>
> postgres=# select hstore_to_array('a=>1,b=>2,c=>3'::hstore) &&
> hstore_to_array('a=>2,d=>4,b=>2'::hstore)
>
> ...because the array-overlaps operator compares every element
> individually (even 'a' or '2' alone) and doesn't test for key and
> value together, as in 'b=>2'!
How about hstore_to_matrix? That at least keeps each key/value pair
together as one row of the array.
Does anyone have a success story of using Postgres full-text search
with significant data, let's say > 50-100 GB?
This is a recent and very complete article on using Postgres for
full-text search:
http://blog.lostpropertyhq.com/postgres-full-text-search-is-good-enough/
see also the d
Is it possible to have date-type data that contains only month and year?
How can I obtain that from a timestamp (without time zone) column?
I think you want date_trunc, which will cut everything down to the first
of the month, e.g. 2015-01-01, 2015-02-01, etc. The results will still be
dates, so you can group on them or format them however you like.
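For example (a sketch; the table and column names are invented):

SELECT date_trunc('month', created_at)::date AS month, count(*)
FROM orders
GROUP BY 1
ORDER BY 1;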
You really shouldn't use WITHOUT TIME ZONE.
I'd like to know more about this. Can you say why? Are there any
articles you'd recommend? I'm fond of normalizing all times to UTC and
only presenting them in a time zone when I know the current
"perspective". I've written about that approach in a blog post.
Hi Steve,
Thanks for such a thorough response! I agree that time is a lot trickier
and trappier than one might expect, so it's good to learn how others
grapple with it.
> Your original question had to do with month/year.
Just to clarify, that was Daniel's original question, but your
reply
what you've said above is incorrect.
All "WITH TIME ZONE" does is tell PostgreSQL to apply timezone
conversions during various operations. The stored data is represented
as an epoch without any concept of the source data's timezone
representation.
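A quick illustration of those conversions (the stored instant never
changes, only the display):

SET timezone = 'UTC';
SELECT timestamptz '2015-01-01 12:00:00+00';  -- 2015-01-01 12:00:00+00
SET timezone = 'America/Los_Angeles';
SELECT timestamptz '2015-01-01 12:00:00+00';  -- 2015-01-01 04:00:00-08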
Oh, very interesting! Thank you for pointing this out.
> I'm not sure how to create a result where I get the average number of
> new users per day of the week. My issues are that days that did not
> have any new users will not be factored into the average
This is a pretty common problem with time-series queries when there is
sparse data. My go-to solution is to join against a generate_series.
Thanks Paul, I guess I'm not sure how a generate_series between 0 to 6
would solve this problem. Wouldn't I have to generate a series based on
the date range (by day) and then group by DOW _after_ that? Can you give
me an example of how I'd do it with a series based on 0 to 6?
Looks like David J
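For the archives, a sketch of combining the two (the users table and
the date range are invented):

SELECT extract(dow from g.d)::int AS dow,
       avg(coalesce(u.cnt, 0)) AS avg_new_users
FROM generate_series(date '2015-01-01', date '2015-03-31',
                     interval '1 day') AS g(d)
LEFT JOIN (
  SELECT created_at::date AS day, count(*) AS cnt
  FROM users
  GROUP BY 1
) u ON u.day = g.d::date
GROUP BY 1
ORDER BY 1;

Because the series covers every calendar day, days with zero signups
contribute 0 to the average instead of disappearing.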
Running the queries individually and using a LIMIT on the GeoLite IP
database, results come back immediately (1-2 ms), but the first query
takes 2-3 seconds.
Is there a way to use a limit in the join?
This sounds like the real issue is a missing/incorrect index, but if
you're on 9.4+ you can use a LATERAL join to apply the LIMIT once per
row.
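For example, a per-row LIMIT with LATERAL might look like this (all
names invented):

SELECT r.ip, b.*
FROM requests r
CROSS JOIN LATERAL (
  SELECT *
  FROM geolite_blocks gb
  WHERE gb.network >>= r.ip
  LIMIT 1
) b;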
Hello,
I'm running Postgres 9.3 in a warm standby configuration, and the slave
has this setting in recovery.conf:
archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup
/secure/pgsql/archive/ %r'
But I noticed that the archive directory had files going back to
February 20
Is there anything else beside *.backup files in the directory?
There were a few *.history files, and a few files with no extension,
like this: 000600BE0040.
Paul
So, as Albe posted, pg_archivecleanup only cleans up the WAL files,
not the auxiliary files. The WAL files would be the ones with no
extension and a size of 16 MB (unless someone changed the compile-time
settings).
Okay, thank you both for the explanation! I'm glad to hear that it's not
a misconfiguration.
On 11/02/2015 09:11 AM, Adrian Klaver wrote:
The *.backup files should not be 16MB and from your original post they
looked to be 300 bytes. Now if you have 30K of 16MB files then something
else is going on.
Ah, you are right! Sorry for the misunderstanding.
Paul
So something is doing a base backup roughly every two hours.
Is that what you would expect?
No. :-)
Sounds like I need to do some archeology. This is a system I inherited,
so I haven't yet explored all the dark corners.
Paul
On 11/12/2015 11:49 AM, db042190 wrote:
I see "unexpected eof...could not receive data..target
machine actively refused it.".
That sounds like the same error message as discussed here:
http://www.postgresql.org/message-id/4d75289d.9020...@techbaza.pl
Could it be a problem of too many open connections?
On 06/17/2016 03:03 AM, Alex John wrote:
RDS is a prime candidate except for the fact that they have explicitly
stated that the Postgres engine is *not* HIPAA compliant.
More precisely, it is not covered by the BAA Amazon will sign.
I've helped several companies run HIPAA-compliant Postgres on EC2.
Hi Adam,
On 06/17/2016 03:01 PM, Adam Brusselback wrote:
Just wondering what others have done for using enum or uuid columns in
exclusion constraints?
[snip]
And as a closing note on this, I really can't wait until these are
supported types for gist indexes.
Here is some work I did to add support for them:
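Until then, a common workaround is to cast to a type btree_gist does
support, e.g. for uuid (a sketch; the names are invented):

CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE bookings (
  room   uuid,
  during tsrange,
  EXCLUDE USING gist ((room::text) WITH =, during WITH &&)
);

(Enums are trickier, since the enum-to-text cast isn't immutable.)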
Hello,
I'm trying to create a custom inetrange type. This works:
CREATE TYPE inetrange AS RANGE (
subtype = inet,
subtype_diff = inet_diff
);
But since inet is discrete, not continuous, I'd like to define a
canonical function too:

CREATE TYPE inetrange AS RANGE (
  subtype = inet,
  subtype_diff = inet_diff,
  canonical = inetrange_canonical
);

But there's a chicken-and-egg problem: the canonical function has to
take and return the range type, which doesn't exist yet. (The docs
resolve this by creating a shell type first and writing the canonical
function in C.)
On 07/27/2016 07:44 AM, Vick Khera wrote:
On Wed, Jul 27, 2016 at 3:28 AM, Anton Ananich wrote:
In my situation this order is invalid. Obviously, year 2016 should go after
2014, like that:
I think you expect JSONB to sort differently than it does. I cannot
imagine what a "natural" ordering of arbitrary JSON values would be.
On 09/19/2016 08:48 AM, Willy-Bas Loos wrote:
Since records can
be changed afterwards, it has been argued that we should have
"journaling", meaning that every change to the data is saved in a
separate schema that holds a "journaling" copy of each table
I don't think this is especially unusual.
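A minimal sketch of one generic shape, using a single jsonb journal
table (9.5+; all names invented):

CREATE SCHEMA IF NOT EXISTS journal;

CREATE TABLE journal.changes (
  table_name text        NOT NULL,
  op         text        NOT NULL,
  changed_at timestamptz NOT NULL DEFAULT now(),
  row_data   jsonb
);

CREATE FUNCTION journal_row() RETURNS trigger AS $$
BEGIN
  -- record the old version of the row whenever it changes
  INSERT INTO journal.changes (table_name, op, row_data)
  VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_journal
AFTER UPDATE OR DELETE ON users
FOR EACH ROW EXECUTE PROCEDURE journal_row();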
> On 09/19/2016 10:56 AM, Willy-Bas Loos wrote:
>> On Mon, Sep 19, 2016 at 6:26 PM, Paul Jungwirth
>> <p...@illuminatedcomputing.com> wrote:
>>> I've worked on similar projects that maintain history for
>>> regulatory reasons.
> Can yo
Hi Melvin:
On 09/29/2016 12:06 PM, Melvin Davidson wrote:
I list the creation time for a WAL file and it shows:
/home/mdavidson/dba$ ls -l --time=ctime
/d-log/pg_xlog/0001000D00C9
-rw--- 1 postgres postgres 16777216 Sep 29 07:14
/d-log/pg_xlog/0001000D00C9
ctime is not creation time but "change time", i.e. the last time the
inode was modified, and since Postgres recycles WAL files the
timestamps can be misleading.
On Fri, Sep 30, 2016 at 6:06 AM, Rakesh Kumar
wrote:
A typical fear mongering Q from
them "what if due to a bug in your s/w, our competitors end up looking at our
data" or
something like that. That's why schema level vs db level discussion.
I've been reading this discussion with great interest.
Hi All,
I've noticed in the past that doing aggregates while joining to more
than one table can sometimes give you unintended results. For example,
suppose I have three tables: products, sales, and resupplies. In sales I
track what I sell, and in resupplies I track my own purchases to
increase my stock.
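The usual fix for that join fan-out is to aggregate each child table
separately (in subqueries or CTEs) before joining, so the join can't
multiply rows; a sketch with invented columns:

SELECT p.id,
       coalesce(s.total_sold, 0)   AS total_sold,
       coalesce(r.total_bought, 0) AS total_bought
FROM products p
LEFT JOIN (
  SELECT product_id, sum(qty) AS total_sold
  FROM sales GROUP BY product_id
) s ON s.product_id = p.id
LEFT JOIN (
  SELECT product_id, sum(qty) AS total_bought
  FROM resupplies GROUP BY product_id
) r ON r.product_id = p.id;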
On 02/24/2016 12:34 AM, Marco Lobbia wrote:
I am on a Ubuntu 14.04 LTS machine.
I thought I'd chime in since I work with Rails and Postgres on Ubuntu
all day long. :-) 14.04 LTS is fine for both production and development.
(Sounds like you're using Heroku for production in any case.)
Putti
On 02/24/2016 09:44 AM, Paul Jungwirth wrote:
Also, Rails wants to use Postgres "ident" authentication, which does not
require a password because it trusts that the OS has already
authenticated you.
Sorry, I misspoke: this is "peer" authentication, not "ident".
On 04/21/2016 01:36 PM, Guyren Howe wrote:
> Anyone familiar with the issue would have to say that the tech world
> would be a significantly better place if IBM had developed a real
> relational database with an elegant query language
I'm surprised no one yet has mentioned Tutorial D by C. J. Date.
On 05/03/2016 09:11 PM, Guyren Howe wrote:
I think I'm going to write a book called Love Your Database, aimed at web
developers
What might I cover that I haven't mentioned? What are the usual objections to
server-side code and how can they be met? When *are* they justified and what
should the
On 05/04/2016 08:39 AM, Paul Jungwirth wrote:
On 05/03/2016 09:11 PM, Guyren Howe wrote:
I think I'm going to write a book called Love Your Database, aimed at
web developers
I gave a talk here about doing "interesting" Postgres things in Rails:
Oh also: one part of my talk
Hello,
I'm trying to migrate an existing hstore column to json in Postgres
9.3, and I'd like to be able to run the script in reverse. I know not
all JSON can turn back into hstore, but since this is coming from an
old hstore column, I know the structure is flat (no nesting) and that
all values are strings.
...the key/value pairs don't accumulate---instead each one wipes out the
previous one, so the hstore column winds up with just one key/value pair.
Any suggestions for making this one big UPDATE?
Thanks,
Paul
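(One approach that seems to work: aggregate the pairs back into two
parallel arrays and build the hstore in one shot. A sketch with
invented column names:)

UPDATE mytable
SET tags_hstore = (
  SELECT hstore(array_agg(key), array_agg(value))
  FROM json_each_text(mytable.tags_json) AS kv(key, value)
);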
On Tue, Jul 1, 2014 at 3:26 PM, Paul Jungwirth
wrote:
> Hello,
>
> I'm trying to migra
> listen_addresses='*'
I'm pretty sure that listen_addresses belongs in postgresql.conf, not
pg_hba.conf.
Paul
On Thu, Jul 10, 2014 at 1:40 PM, Aram Fingal wrote:
>
> I just tried to set up a PostgreSQL server on an existing installation
> of Ubuntu 13.10 server but I am getting an error tr
>> It is non-specific since it is assumed at this point in the documentation
>> that you realize ALL "configuration parameters" are defined in
>> postgres.conf or its includes.
I think the comments in pg_hba.conf are a lot more misleading than the
online documentation, and are more likely to be read.
> Or another idea, add a column that is the path of the parent:
I don't think this will work. The problem is you need the full path to
keep the children with their parents, but you also need the score. If
you make the path an array of (-votes, id) tuples (perhaps flattened
for simplicity), then you can simply ORDER BY that path.
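A sketch using a recursive CTE to build such a path (the schema is
invented):

WITH RECURSIVE tree AS (
  SELECT c.id, c.parent_id, c.votes,
         ARRAY[-c.votes, c.id] AS path
  FROM comments c
  WHERE c.parent_id IS NULL
  UNION ALL
  SELECT c.id, c.parent_id, c.votes,
         t.path || ARRAY[-c.votes, c.id]
  FROM comments c
  JOIN tree t ON c.parent_id = t.id
)
SELECT * FROM tree ORDER BY path;

Sorting on the flattened int[] keeps each comment directly under its
parent, with siblings ordered by votes (negated so higher votes sort
first).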
>> Can you confirm that your software is SHA-256 Compliant?
Postgres's SSL certificate & key live at the value of ssl_cert_file
and ssl_key_file in your postgresql.conf. Why not point it at a
SHA-256 certificate, restart, and try it out?
Paul
Hi All,
I have Postgres 9.3 on Ubuntu 14.04 set up in a master/slave
configuration with streaming replication. On the master I ran `sudo
service postgresql stop` and then on the slave I ran `sudo touch
$trigger_file`. Now the slave seems to be running fine, but I'm trying
to figure out the process
A bit more info:
> What if there were changes on the master that didn't get replicated
> before I originally shut it down?
It looks like Ubuntu's init.d script does a "fast" shutdown, i.e.
SIGINT, as described here:
http://www.postgresql.org/docs/9.3/static/server-shutdown.html
I can't tell from the
How do you handle DDL changes in general? I would treat stored
procedures the same way. For instance Ruby on Rails has "database
migrations" where you write one method to apply the DDL change and
another to revert it, like this:
def up
  add_column :employees, :manager_id, :integer
end

def down
  remove_column :employees, :manager_id
end
Hello,
I'm working on a package of functions that compute statistics on
arrays of numbers. For example this one computes a histogram from a
bunch of values plus some bucket characteristics:
CREATE OR REPLACE FUNCTION
array_to_hist(double precision[], double precision, double precision, int)
RETURNS int[]
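Usage would look something like this (I'm guessing at the argument
meanings: values, first bucket start, bucket width, bucket count):

SELECT array_to_hist(ARRAY[1.5, 2.2, 2.8, 3.1]::float8[], 0.0, 1.0, 5);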
Hello,
The table of which C types represent which SQL types seems to be missing bigint:
http://www.postgresql.org/docs/9.3/static/xfunc-c.html#XFUNC-C-TYPE-TABLE
It looks like bigint should be listed and should correspond to an
int64 C type. Also I see there is an INT8OID, PG_GETARG_INT64,
DatumGetInt64, etc., so presumably the table entry was just overlooked.
Hi Michael,
I can't comment on the domain-specific stuff, but I recently used numeric
arrays for a project and it worked well. In my case we had one million
simulation results (floats) per scenario, so rather than reading one
million separate rows to compute a histogram, we stored everything in
one big array.
Hi Pierre,
It looks like you're saying that each row has an id plus three numeric
columns, and you want the stddev calculated from the three numeric
columns? In that case you could do this:
create table foo (id integer, a float, b float, c float);
insert into foo values (1, 2, 3, 4);
insert into foo values (2, 5, 6, 7);

select id, stddev(v)
from (select id, unnest(array[a, b, c]) v from foo) x
group by id;
On Wed, Jan 21, 2015 at 10:15 AM, Paul Jungwirth wrote:
> Hi Pierre,
>
> It looks like you're saying that each row has an id plus three numeric
> columns, and you want the stddev calculated from the three numeric
> columns? In that case you could do this:
>
> create table foo (id integer, a float, b
Oh sorry, you should leave off the grouping:
select stddev(a), stddev(b), stddev(c) from foo;
Paul
On Wed, Jan 21, 2015 at 10:24 AM, Paul Jungwirth
wrote:
> Hi Pierre,
>
> Looking at your Excel document I think I misinterpreted, and you are
> trying to take the stddev of each column
> In a number of places on the web I've seen it claimed that ordering can be
> set via prepared statements.
> ...
> sandbox=# PREPARE testplan(text) AS
> SELECT * FROM test ORDER BY $1;
>
> But the output is not what one would expect:
>
> sandbox=# EXECUTE testplan('gender');
> ...
> As opposed to:
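For the archives: ORDER BY $1 sorts by the parameter's *value*, which
is the same constant for every row, so it's effectively no ordering at
all; it does not look up the column named by the string. One workaround
is a CASE expression (a sketch, assuming gender and name are both text):

PREPARE testplan(text) AS
SELECT * FROM test
ORDER BY CASE $1 WHEN 'gender' THEN gender ELSE name END;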
Hi Pierre,
How do you know in which group each row belongs? If you don't care how
the rows are grouped, you can say this:
create table foo (v float);
insert into foo select random() from generate_series(1, 100) s(a);
select n % 50 g, stddev(v)
from (select row_number() over () n, v from foo) x
group by g;
Hi Han,
Here is an example:
create table foo (v integer);
insert into foo values (23), (45), (65), (22);
create table bar (v integer);
insert into bar select v from foo, generate_series(1,5);
But note that in any relational database there is no defined order for
the rows. A table is more like a set than a list.
> 1. I have a table with mailmessages, which has an FK to a table of hub users.
> 2. I'd like to retain the content of message repository (with it's domain key
> not cleared), when I drop a particular username from service to release
> that username to others.
> 3. I try to do that with FK "
> With the hstore you can do hstore ? 'key' to check if the object contains
> the key-- is there a similar function for json objects?
Is this good enough?:
=> select ('{"a":1,"b":null}'::json) -> 'a';
 ?column?
----------
 1

=> select ('{"a":1,"b":null}'::json) -> 'b';
 ?column?
----------
 null
>> I imagine your original would be at risk of LIMITing out the very row you
>> seek to get at the "top", since you don't have an ORDER BY to tell it which
>> ones to keep during the outer LIMIT.
Here is an old thread about combining ORDER BY with UNION:
http://www.postgresql.org/message-id/16814
> Or maybe instead of a view you could write a
> set-returning function, e.g. as described here:
I thought I'd see if I could make this work just for fun. Here is a
simple proof of concept (on 9.3):
-- DROP TABLE IF EXISTS topics;
CREATE TABLE topics (
  id INTEGER PRIMARY KEY,
  bumped_at INTEGER NOT NULL
);
> I've noticed that when representing lists as JSON, Postgres 9.4 sometimes
> outputs spaces after commas, and other times does not.
Here is a similar test on 9.3:
# select '[1,2,3]'::json::text, '[1, 2, 3]'::json::text;
  text   |   text
---------+-----------
 [1,2,3] | [1, 2, 3]

It looks like the json type simply preserves the text exactly as you
entered it (jsonb normalizes the formatting instead).
I am currently planning to set up a PostgreSQL + PostGIS instance for my
lab. Turns out I believe this would be useful for the whole center, so
that I'm now considering setting up the server for everyone—if interest
is shared of course. At the moment, I am however struggling with what
would be required.
> i want to select based on input, but if input is not provided or if
> input is empty, then i want to select all rows.
I think you can just use OR:

SELECT *
FROM your_table
WHERE (? = '' OR input = ?)

This is assuming that `input` is a column in your table and ? is the
user input (bound in both places), based
Hi Michael,
hey, john, i did as you said like:
update db.user
set deleted = 1,
updateterminal = UpdateTerminal,
updateuser = UpdateUser,
updatedate = UpdateDate
returning
credittypeid,
Is there any way to determine, by querying pg_index and other pg_* tables,
whether an index was created as `USING something`? I've already got a big
query joining pg_class, pg_index, etc. to pull out various attributes about
the indexes in my database, and I'd like to include whether it's a GIST
index, a btree, etc.
> pg_indexes (not pg_index) seems to have the data you're looking for,
unless I misunderstood the question.
That is a lovely table, but I want to get each attribute individually,
without having to parse the CREATE INDEX statement.
It looks like I was almost there with pg_opclass. This will t
> pg_opclass seems the hard way --- just use pg_class.relam, which is
> the OID of the index's AM.
Ah, that works like a charm. Thanks!
Paul
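For anyone finding this later, a sketch of that join:

SELECT c.relname AS index_name, am.amname AS index_type
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
WHERE c.relkind = 'i';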
Hello,
I'm running a specialized search engine that indexes a few tens of millions
of web pages, keeping everything in Postgres, and one problem I'm starting
to see is poor cache hit rates. My database has two or three tables just
for the text of the scraped pages, with one row every time a page is
crawled.
I have a long-running multi-row UPDATE that is deadlocking with a
single-row UPDATE:
2013-03-09 11:07:51 CST ERROR: deadlock detected
2013-03-09 11:07:51 CST DETAIL: Process 18851 waits for ShareLock on
transaction 10307138; blocked by process 24203.
Process 24203 waits for ShareLock on
> 2) All transactions modify table rows in the same order, e.g. ascending "id".
>    With the big update you can do that by putting an "ORDER BY tg2.id" into
>    the subquery, and with the "little transactions" you'll have to make sure
>    that rows are updated in ascending "id" order.

I agree this is the right approach.
> Out of curiosity: any reason the ORDER BY should be in the subquery? It
seems like it ought to be in the UPDATE (if that's allowed).
Hmm, it's not allowed. :-) It's still surprising that you can guarantee the
order of a multi-row UPDATE by ordering a subquery.
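For the archives, a sketch of the ordered-and-locked subquery pattern
(table and columns invented):

UPDATE accounts a
SET balance = balance + 1
FROM (
  SELECT id
  FROM accounts
  WHERE balance < 100
  ORDER BY id
  FOR UPDATE
) s
WHERE a.id = s.id;

The FOR UPDATE makes the subquery take its row locks in id order, so
concurrent updates following the same convention can't deadlock.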
Paul
--
> where you might want to look into separate servers.
So it sounds like I'm on the right track. But a separate
cluster/server seems like a drastic solution.
Thanks,
Paul
On Mon, Mar 11, 2013 at 12:17 PM, Ben Chobot wrote:
>
> On Mar 9, 2013, at 11:54 AM, Paul Jungwirth wrote:
Wow, this is a fun puzzle. I'd love to be the first to solve it with
just SQL, but I don't have a solution yet. Here are some elements that
might be useful:
SELECT market_segment_dimension, array_agg(value)
FROM market_segment_dimension_values
GROUP BY market_segment_dimension;
and the UNNEST function.
 ...}                  | 3
 {retail,south}        | 3
 {manufacturing,south} | 3
 {wholesale,south}     | 3
(6 rows)
Paul
On Wed, Mar 20, 2013 at 8:40 PM, Paul Jungwirth
wrote:
> Wow, this is a fun puzzle. I'd love to be the first to solve it with
> just SQL, but I don't ha
>> Our major concern is related to the write-back issues we can face in this
>> environment.
I agree this is the OS's responsibility. Greg Smith's PostgreSQL High
Performance book has a lot to say about this, but there's also a lot
you could read online, since it's really a requirement for any
Postgres installation.
The UTF-8 encoding for a pound sign is 0xc2a3, not just 0xa3. You
might want to make sure your PHP file is correct. If you're on Linux,
you can use a command like `od --format=ax1 foo.php` to see the actual
byte values. If that shows the wrong value, then the problem is your
text editor is saving the file in some other encoding, like Latin-1.
I'm trying to create a unique index where the unique constraint is
`deferrable initially immediate`. But I don't see any way to do this
in the syntax of the `create index` command. It looks like the only
way to do it is via `alter table foo add unique`. Is that right, or
can I do it as part of `cre
> Deferrability is a property of a constraint, not an index
Yes, but creating a unique constraint implicitly creates an index, and
creating a unique index implicitly creates a constraint. So I'm
wondering whether I can create a pair where the index is partial and
the constraint is deferrable. It seems not: CREATE INDEX has no
DEFERRABLE clause, and ADD CONSTRAINT ... UNIQUE has no WHERE clause.
Hi Alban,
I think Postgres works great for OLAP work, and Amazon's Redshift is
even based on Postgres. 100 million sales should be no problem at
all. My understanding is Greenplum also builds on top of Postgres, so
if you ever do outgrow your Postgres installation, that would be an
easy migration.
On 02/28/2017 08:21 AM, Chris Withers wrote:
How can I aggregate the results of a query that equates to "show me the
number of matching rows and the set of tag1 values that have a tag2
value of t2val1"?
...but I really want:

 count | tag1
-------+----------------------------
     2 | ["val1","val2","va
On 03/01/2017 08:39 AM, jonathan vanasco wrote:
I have to store/search some IP data in Postgres 9.6 and am second-guessing my
storage options.
Would anyone mind giving this a quick look for me?
Right now I have two tables, and am just using cidr for both:
Hi Jonathan,
CIDR seems like a good fit for what you're doing.
On 03/21/2017 12:21 PM, David G. Johnston wrote:
> words=> COPY words_reviews (uid, author, nice, review, updated) FROM
> stdin FORMAT csv;
What did you read that lead you to think the above shoud work?
I don't know about COPY FROM, but COPY TO works without parens (or
FORMAT), like this:
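Reconstructing both forms (the table and columns are from the question
above):

COPY words_reviews TO stdout CSV;    -- old-style syntax, no parentheses

COPY words_reviews (uid, author, nice, review, updated)
FROM stdin WITH (FORMAT csv);        -- new-style option list, parenthesized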
Hi All,
I wrote a blog post about the Postgres permissions system, and I thought
I'd share:
http://illuminatedcomputing.com/posts/2017/03/postgres-permissions/
The main point I wanted to convey, which I somehow never grasped
confidently from reading the docs or other articles, is how permissions
from all your roles combine.
On 03/29/2017 06:36 AM, Tom Lane wrote:
Karsten Hilbert writes:
Being able to create foreign keys may allow to indirectly
discover whether certain values exists in a table which I
don't otherwise have access to (by means of failure or
success to create a judiciously crafted FK).
Aside from th
Also I don't understand why you wrote “You need the permission on both
tables”: Only the owner of a table can add constraints to it
Ah, this piece was really helpful for me in making it click. Thanks so
much! I added a couple new paragraphs to my post with a link back to
this thread. I feel like it finally makes sense.
On 04/10/2017 09:33 AM, Merlin Moncure wrote:
On Sun, Apr 9, 2017 at 4:27 PM, Guyren Howe wrote:
> If I do a SELECT with a GROUP_BY and multiple ARRAY_AGGs,
> will the ARRAY_AGGs be guaranteed to have entries in the
> same (ie corresponding) order?

It is unsafe to rely on aggregation order unless you include ORDER BY
in each aggregate call.
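For example (names invented; each aggregate gets its own ORDER BY):

SELECT user_id,
       array_agg(item ORDER BY created_at) AS items,
       array_agg(qty  ORDER BY created_at) AS qtys
FROM line_items
GROUP BY user_id;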
On 04/10/2017 11:35 AM, Tom Lane wrote:
I'm not very keen on recommending that the OP insert an ORDER BY into
each aggregate call, because that would cause a separate sort for each
aggregate (unless someone's improved that recently while I wasn't
looking).
I mentioned this in my other email, upthread.
On 05/19/2017 01:06 PM, Micky Hulse wrote:
ERROR: type "xxx_xxx_x" does not exist
LINE 1: DECLARE results xxx_xxx_x;
It sounds like the type might be in a different schema. You can say \dn
to see the schemas in your database, and \dT+ will show the types along
with their schema. You can then adjust your search_path to match.
On 05/19/2017 02:25 PM, Micky Hulse wrote:
Awesome, that worked!
SET search_path TO myschema, public;
Thanks to everyone for the help! I really appreciate it. :)
Glad you figured it out! Setting the search_path is often a good thing to
put in your ~/.psqlrc so you don't run into the same problem again.
Hi Andre,
I've written some C statistics functions for Postgres before, here:
https://github.com/pjungwir/aggs_for_arrays
https://github.com/pjungwir/aggs_for_vecs
They are all really simple, although they operate on arrays, so yours
should be even simpler. The second link has aggregate functions you
could use as a model.
On 06/19/2017 12:40 AM, Scott Marlowe wrote:
On Sun, Jun 18, 2017 at 2:20 PM, Condor wrote:
What I should expect, what is good and bad things that can be happened.
I've run Postgres on a LUKS volume for a few years now and it's all been
pretty quiet. One challenge is you need to supply the passphrase to
mount the volume whenever the machine reboots.
I am new to Chef; can anyone provide me a cookbook for installing and
configuring PostgreSQL on Red Hat?
Hi Pawan,
I have always used the standard "postgresql" cookbook, which has recipes
to install a server. In a Berksfile you just say:
cookbook "postgresql"
The rep
I tried a query, but it is not giving me any results. Can anyone help
me, please?
SELECT missions.*,
CASE WHEN submissions.id IS NULL THEN 'incompleted'
ELSE 'completed' END AS mission_status
FROM "missions" LEFT JOIN submissions ON submissions.mission_id =
missions.id
INNER JOIN members O
On 06/26/2017 11:21 AM, Alexander Farber wrote:
The error message returned by
the database is rather cryptic:
words=> select * from words_skip_game(1, 1);
ERROR: query returned no rows
CONTEXT: PL/pgSQL function words_skip_game(integer,integer) line 85 at
SQL statement
When I look at my sourc
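(That error text comes from PL/pgSQL's SELECT ... INTO STRICT when the
query matches nothing; a minimal reproduction:)

DO $$
DECLARE r record;
BEGIN
  SELECT 1 INTO STRICT r WHERE false;  -- raises: query returned no rows
END $$;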
which is great. I have an array of perfect JSON objects. Now I just need
to turn that into a single JSON object.
I think you're saying you want it as a single JSON *array*, right? An
object of objects doesn't make sense. Assuming that's right, this seems
to work:
db1=# select json_agg(schema
On 08/22/2017 01:08 PM, John McKown wrote:
On Tue, Aug 22, 2017 at 2:48 PM, rakeshkumar464
wrote:
We have a requirement to encrypt the entire database.
Personally, what I'd do (and actually do at work) is to use LUKS.
I second that, although I'll add that if you're on AWS you can also use
encrypted EBS volumes.
Hello,
I've got an extension that supplies functions written in C. Two
databases from the same cluster both use this extension. I understand
how I can load the example--2.0.0.sql file in one database, and
example--3.0.0.sql in another, but from what I can tell both databases
still share the same .so file.
On 11/10/2017 09:10 AM, Eric D wrote:
I have a standby db server (SB1) that will soon become the master. SB1
is set up with streaming replication from the current master. I'm
trying to set up a third server (SB2) as a slave/standby to SB1, so that
when SB1 becomes the master, there will be a st
> Because the training budget is used up for this year, and I don't want
> to wait until January, I'm trying to find things that are inexpensive
> enough to get started on right away.
I'm looking forward to others' suggestions which will surely be
better, but I originally learned the basics of dat
> Yesterday I found that one of the databases in my database cluster suddenly
> lost all its tables.
> A \dt in psql showed nothing.
Is there any chance this could be a search_path issue? Do you have a ~/.psqlrc?
Or are you sure you're in the right database? If you are connecting as
the postgres user, you might be connected to a different database than
you think.
I use this script on an Ubuntu system:
#!/bin/bash
set -eu
postgresql_autodoc -d example_dev -u example_dev -h localhost
--password=
dot -Tpng -o example-schema.png example_dev.dot
dot -Tpdf -o example-schema.pdf example_dev.dot
That gives you a schema diagram in pdf, png, dia, and dot formats.
Try this:
SELECT REGEXP_REPLACE(LOWER('300 North 126th Street'),
'(\d)(st|nd|rd|th)', '\1', 'g');
Note that matching a number is \d not /D: backslash, not forward
slash, and lowercase d not uppercase. \d means a digit, \D means
anything except a digit.
Also, I don't think Postgres supports posit
> We are load balancing 2 data centers.
Chapter 8 of Scalable Internet Architectures has a good discussion of
running master-master setups in separate data centers. I'd read that
whole chapter for some of the challenges you'll face.
> If DC1 goes down our LB is failing over to DC2.
This sounds like a sensible setup.
Do these queries update more than one row? I ran into a similar issue
a year ago, where two multi-row updates would deadlock because they
processed rows in a different order. I'd love to see UPDATE support
ORDER BY to fix this, but it doesn't yet. (If I ever try contributing
to Postgres, this is a feature I'd like to tackle.)
> Is this the most efficient way to perform this kind of query?
I don't think there is one answer that's always correct, but you could
compare it with a LEFT OUTER JOIN.
There are lots of articles and blog posts about EXISTS vs OUTER JOIN
vs IN, for all the major RDBMSes. Note that not all their advice
carries over between databases.
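For example, an anti-join can be written in either shape (names
invented):

SELECT c.*
FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

SELECT c.*
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
WHERE o.customer_id IS NULL;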