I have the need to move a specific set of data from one schema to another.
These schemas are on the same database instance and have all of the same
relations defined. The SQL to copy data from one table is relatively
straightforward:
INSERT INTO schema_b.my_table
SELECT * FROM schema_a.my_table W
On Thu, Apr 23, 2015 at 10:27 AM Steve Atkins wrote:
>
> On Apr 23, 2015, at 10:09 AM, Cory Tucker wrote:
>
> > I have the need to move a specific set of data from one schema to
> another. These schemas are on the same database instance and have all of
> the same relations
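A minimal sketch of the cross-schema copy being discussed, assuming a hypothetical `account_id` filter column and identical column definitions in both schemas (listing the columns explicitly guards against column-order drift between the two schemas):

```sql
BEGIN;

-- Hypothetical example: copy one account's rows from schema_a to schema_b.
-- Column names (id, account_id, payload) are assumptions for illustration.
INSERT INTO schema_b.my_table (id, account_id, payload)
SELECT id, account_id, payload
FROM schema_a.my_table
WHERE account_id = 42;  -- hypothetical filter

COMMIT;
```

If foreign keys exist between the tables, the copies need to run in dependency order (parents before children) or inside a transaction with the constraints deferred.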
[pg version 9.3 or 9.4]
Suppose I have a simple table:
create table data (
my_value TEXT NOT NULL
);
CREATE INDEX idx_my_value ON data USING gin(my_value gin_trgm_ops);
Now I would like to do, in essence, a GROUP BY to get a count of all the
values that are sufficiently similar. I can do it us
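One sketch of grouping by similarity with pg_trgm, assuming the `data` table above; the `%` operator means "similar above the current threshold" and can use the GIN trigram index:

```sql
-- Requires the pg_trgm extension (needed for gin_trgm_ops anyway).
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- On 9.3/9.4 the match threshold is set with set_limit() (default 0.3).
SELECT set_limit(0.5);

-- Self-join on trigram similarity, then count matches per value.
-- Note: each row matches itself, so counts include the row's own value.
SELECT a.my_value, count(*) AS similar_count
FROM data a
JOIN data b ON a.my_value % b.my_value
GROUP BY a.my_value
ORDER BY similar_count DESC;
```

This is quadratic in the worst case; the GIN index keeps each probe cheap, but true clustering of similar values (one representative per group) needs more work than a plain GROUP BY.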
hu, May 14, 2015 at 12:08 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
>
> On Thu, May 14, 2015 at 11:58 AM, Cory Tucker
> wrote:
>
>> [pg version 9.3 or 9.4]
>>
>> Suppose I have a simple table:
>>
>> create table data (
>> my
Hi, I am using postgres 9.3 and am preparing to migrate to 9.4. As part of
the migration, I was hoping to also delete a bunch of data that is no
longer needed (100M+ rows across several tables).
I can fairly trivially delete the data by doing a simple statement like
this:
DELETE FROM account WHE
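For deletes of this size, a single statement can bloat WAL and hold locks for a long time. A hedged sketch of a batched alternative, assuming a hypothetical `archived` flag as the delete condition:

```sql
-- Hypothetical batched delete: remove rows in chunks so each transaction
-- stays small. "archived" is an assumed condition column.
DELETE FROM account
WHERE ctid IN (
    SELECT ctid
    FROM account
    WHERE archived
    LIMIT 10000
);
-- Repeat (e.g. from a driver script) until zero rows are affected,
-- then reclaim dead space and refresh statistics:
VACUUM ANALYZE account;
```

When most of a table is being removed, `CREATE TABLE ... AS SELECT` the survivors and swapping the tables is often faster than deleting.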
We have a performance problem accessing one of our tables, I think because
the statistics are out of date. The table is fairly large, on the order of
100M rows or so.
The general structure of the table is as follows:
 Column | Type | Modifiers
--------+------+-----------
On Wed, Dec 30, 2015 at 11:20 AM Tom Lane wrote:
> Cory Tucker writes:
> > This table is almost always queried using a combination of (account_id,
> > record_id) and is generally pretty fast. However, under certain loads,
> the
> > query becomes slower and slower as tim
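When stale statistics are the suspect, a manual ANALYZE is the first test, and a skewed column can be given a larger per-column statistics target. A sketch, using the table/column names from this thread as stand-ins:

```sql
-- Refresh planner statistics immediately (autovacuum may be lagging):
ANALYZE my_table;  -- hypothetical table name

-- For a skewed column, raise the sample size beyond
-- default_statistics_target (normally 100), then re-analyze:
ALTER TABLE my_table ALTER COLUMN account_id SET STATISTICS 1000;
ANALYZE my_table;
```

Comparing `EXPLAIN (ANALYZE)` row estimates against actual row counts before and after shows whether statistics were really the problem.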
On Wed, Jan 13, 2016 at 9:48 AM Vick Khera wrote:
> That was my intuition too. Not enough I/O available from the hardware for
> the workload requested.
>
> As recommended, log your checkpoints and try tuning them to spread the
> load.
>
Thanks guys, will turn on checkpoint logging and try to sni
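The checkpoint logging and spreading suggested above is a postgresql.conf change; a sketch (parameter values are illustrative, not a recommendation for this workload):

```
# postgresql.conf -- log each checkpoint and spread the write load
log_checkpoints = on
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9   # spread writes over ~90% of the interval
checkpoint_segments = 64             # pre-9.5; on 9.5+ use max_wal_size instead
```

With `log_checkpoints = on`, frequent "checkpoints are occurring too frequently" or checkpoints triggered by WAL volume (rather than timeout) point at the segment/WAL-size settings being too small.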
Hello,
I have a query that is using a tremendous amount of temp disk space given
the overall size of the dataset. I'd love for someone to try to explain
what PG is doing and why it's using so much space for the query.
First off, the system is PG 9.6 on Ubuntu with 4 cores and 28 GB of RAM.
The qu
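For diagnosing heavy temp-file usage like this, the usual steps are to find which plan node spills to disk and to check it against `work_mem`. A sketch of the diagnostics (the query itself is elided in this snippet):

```sql
-- Look for "Sort Method: external merge  Disk: ..." or batched hash joins:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- the thread's query, not shown here

-- Log any temp file larger than ~100 MB to catch the culprit in the logs:
SET log_temp_files = 102400;  -- value is in kB

-- Allow larger in-memory sorts/hashes; note this is per plan node,
-- per backend, so raise it cautiously:
SET work_mem = '256MB';
```

Temp usage far exceeding the dataset size often means a join is producing a much larger intermediate result than expected, which the EXPLAIN output will show.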
I'm interested in trying to figure out which channels have been subscribed
to (using LISTEN). From what I could tell via a little Googling, there
used to be a table named pg_catalog.pg_listener that contained all this
information, but that seems to have disappeared somewhere in the 9.x
release (I'