I configured replication with repmgr on 2 PostgreSQL 9.6.3 nodes. Both of
those utilities can handle failover, but I should let only one of them do
it. So I wanted to know which one should be responsible for the failover,
and why?
Thanks.
Hi,
I have a big function that includes many truncates on different tables. The
documentation says that TRUNCATE creates a new file, points the table's
relfilenode at the new file, and the old file (the old data of the table) is
deleted on commit.
In order to execute my function I run psql
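For what it's worth, this behavior can be observed with pg_relation_filenode(); a minimal sketch (table name illustrative):
select pg_relation_filenode('my_table');  -- the current file on disk
begin;
truncate my_table;
select pg_relation_filenode('my_table');  -- already points at a new file
commit;  -- only now is the old file unlinked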
In my PostgreSQL 9.6 instance I have one production database. When I query
the size of all databases:
combit=> select pg_database.datname,
         pg_size_pretty(pg_database_size(pg_database.datname)) as size
         from pg_database;
  datname  |  size
-----------+---------
 template0 | 7265 kB
 combit    | 285
*isn't an object that matches* and for some there are. So, how can I
continue?
2017-12-12 17:49 GMT+02:00 Tom Lane :
> "David G. Johnston" writes:
> > On Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <
> > mariel.cherkas...@gmail.com> wrote:
> >> And th
Hi,
What version of PostgreSQL do you have?
Do you use streaming replication, or a different tool like repmgr or pgpool,
or something else?
What are all of your WAL configurations (the parameters depend on your
version..)?
Do you see any errors or warnings in the server log?
Regards, Mariel.
Do you see any errors in the server log or in the log of pgAdmin (show us..)?
What are the settings that you configured for your pgAdmin client and for
the connection?
2018-01-12 6:55 GMT+02:00 Dinesh Chandra 12108 :
> Dear Expert,
>
>
>
> While connecting PostgreSQL 9.3 with PGAdmin client I am g
Hi,
I configured a master table that is called "year_2018":
create table year_2018(a int, b int, c date);
The master table has a unique constraint on those 3 columns so that I won't
have any duplicated rows. Moreover, I configured a before insert trigger on
that table that creates a child table for
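For context, a minimal sketch of that kind of trigger (all names and the child-naming scheme are illustrative; note that a child created with INHERITS does not automatically get the parent's unique constraint):
create or replace function year_2018_insert_trigger() returns trigger as $$
declare
    child text := 'year_2018_' || to_char(new.c, 'MM');
begin
    if to_regclass(child) is null then
        execute format('create table %I () inherits (year_2018)', child);
    end if;
    execute format('insert into %I select ($1).*', child) using new;
    return null;  -- skip the insert into the parent itself
end;
$$ language plpgsql;

create trigger year_2018_bi before insert on year_2018
    for each row execute procedure year_2018_insert_trigger();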
Hi,
I'm currently migrating an Oracle schema to PostgreSQL. In the Oracle
schema there is a partitioned table that is partitioned by range (date, one
partition for every day), and each partition has a subpartition by list
(some values..). Moreover, the data is loaded from a CSV in bulk. One
important thing is t
Hi,
I configured range partitions on a date column of my main table
(full_table). Each partition represents a day in the month. Every day
partition has a list partition of 4 tables on a text column.
full_table
  --> table_01_11_2017
  --> table_02_11_2017
  ...
he date format ?
Thanks, Mariel.
2018-02-04 11:23 GMT+02:00 Mariel Cherkassky :
> Hi,
> I configured range partitions on a date column of my main
> table(full_table). Each partition represents a day in the month. Every day
> partition has a list parition of 4 tables on
Hi,
I configured range partitions on a date column of my main table (log_full).
Each partition represents a day in the month. Every day partition has a
list partition of 4 tables on a text column.
log_full
  --> log_full_01_11_2017
  --> log_full_01_11_2
select count(*) from log_full where end_date between
to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');
and the result is a full scan on all partitions.
Why did it decide to run a full table scan on all partitions?
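For comparison, a plain date literal is a plan-time constant that constraint exclusion can work with (a sketch, assuming the partitions carry CHECK constraints on end_date):
select count(*) from log_full
where end_date between date '2017-12-03' and date '2017-12-03';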
2018-02-04 14:03 GMT+02
show constraint_exclusion;
 constraint_exclusion
----------------------
 partition
(1 row)
2018-02-04 15:19 GMT+02:00 legrand legrand :
> What is the value of the GUC constraint_exclusion?
>
>
>
explain analyze takes too much time.. hours...
I'm running it now but it will take some time.
The output of the explain:
Finalize Aggregate  (cost=38058211.38..38058211.39 rows=1 width=8)
  ->  Gather  (cost=38058211.16..38058211.37 rows=2 width=8)
        Workers Planned: 2
        ->  Partial Agg
              Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))
and so on, a full scan on all partitions..
2018-02-04 15:43 GMT+02:00 Mariel Cherkassky :
> explain analyze takes too much time.. hours ...
> I
Great, it solved the issue. It seems problematic that the planner does full
scans on all partitions in the first case, doesn't it? Seems like a bug?
2018-02-04 16:54 GMT+02:00 Andreas Kretschmer :
>
>
> Am 04.02.2018 um 13:19 schrieb Mariel Cherkassky:
>
>> I checked the plan
I read those two links and I don't think that they are relevant because:
1) I didn't do any join.
2) I used a where clause in my select.
2018-02-04 17:25 GMT+02:00 Justin Pryzby :
> On Sun, Feb 04, 2018 at 05:06:38PM +0200, Mariel Cherkassky wrote:
> > Great, it solved the iss
Hi Tom,
Did you hear about any solution that is similar to Oracle's global index?
Is there any way to query all the partitions with one index?
2018-02-04 17:39 GMT+02:00 Tom Lane :
> Mariel Cherkassky writes:
> > Great, it solved the issue. Seems problematic that the planner do full
Hi,
I have installed pgpool-II version 3.7.0. I'm trying to configure log
rotation on pgpool.log, but it seems that something is wrong. I configured
in the logrotate conf file the parameters:
/PostgreSQL/pgpool/log/pgpool.log {
daily
dateext
missingok
compress
Hi,
I installed PostgreSQL v9.6/10 in our company. When I tried to create the
plpython extension I got the following error:
ERROR: could not open extension control file
"/PostgreSQL/9.6/share/postgresql/extension/plpythonu.control": No such
file or directory
When I try to install the extension with y
:06:50PM +0300, Mariel Cherkassky wrote:
> > Hi,
> > I installed postgresql v9.6/10 in our company.
>
> Which version did you install and how ? Compiled or binaries from some
> repo ?
> Using PGDG repo or some other one ?
>
> > When I try to install the extension
When installing postgresql10-plpython, one of its dependencies is
postgresql10-server. However, I don't want to install the server, but as
you can see it is a must. What can I do?
2018-07-08 16:33 GMT+03:00 Justin Pryzby :
> On Sun, Jul 08, 2018 at 04:24:10PM +0300, Mariel Cherkas
fault
values, then the solution is just copying the plpythonu.control to my
instance's extensions directory?
2018-07-08 16:43 GMT+03:00 Justin Pryzby :
> On Sun, Jul 08, 2018 at 04:38:21PM +0300, Mariel Cherkassky wrote:
> > When installing the postgresql10-plpython one of its dependenc
4096 May 8 00:03 plperl
drwxr-xr-x 5 postgres postgres 319 May 8 00:06 tcl
drwxr-xr-x 5 postgres postgres 4096 May 8 00:06 plpython
Is there a way to install the extension from here?
2018-07-08 17:18 GMT+03:00 Justin Pryzby :
> On Sun, Jul 08, 2018 at 04:46:47PM +0300, Mariel Cherkas
Hi,
I'm trying to install the pgwatch2 tool in our company without using the
Docker option. I followed the instructions that are specified on the GitHub
page, but I'm facing an error during STEP 4.2 when I try to add my cluster
via the /dbs page in order to monitor it. After I add it I'm gettin
Yes, it worked. Thanks!
On Sun, Jul 8, 2018, 8:25 PM Justin Pryzby wrote:
> On Sun, Jul 08, 2018 at 05:36:06PM +0300, Mariel Cherkassky wrote:
> > I still got the binaries of the installation and I found that I have the
> > next directory : postgresql-10.4/src/pl/
> > cd p
Hi,
I'm using postgresql v10.4. I have a local partitioned table (by range -
date, every day has its own table). I'm using the oracle_fdw extension to
bring data from the Oracle partitioned table into my local postgresql
(insert into local select * from remote_oracle). Currently, I don't have any
in
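For context, a sketch of the kind of setup described (server, credentials, and column list are illustrative):
create extension oracle_fdw;
create server ora foreign data wrapper oracle_fdw
    options (dbserver '//oracle-host:1521/ORCL');
create user mapping for current_user server ora
    options (user 'hist_user', password 'secret');
create foreign table remote_oracle (log_date date, a int, b text)
    server ora options (schema 'HIST', table 'MAIN_TABLE');
insert into local_table select * from remote_oracle;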
Hi,
I'll try to answer all your questions so that you will have more
information about the situation:
I have one main table that is called main_table_hist. The "main_table_hist"
is partitioned by range (date column) and includes data that is considered
"history data". I'm trying to copy the d
Inserting directly into the partition didn't help; the performance is just
the same. I tried to increase the prefetch value to 1000 (alter foreign
table hist_oracle options (add prefetch '1000')) but still no change - 15
minutes for one partition (6GB).
On the Oracle side the plan is a full scan on the
Hi,
I have a big table (with 1.6 million records). One of the columns is called
end_date and its type is timestamp. I'm trying to find the best way to
delete most of the table, but not all of it, according to a range of dates.
The table structure:
afa=# \d my_table;
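A common pattern for this kind of delete is to work in batches so that each transaction stays small; a sketch (names from above, batch size illustrative; repeat until 0 rows are deleted):
delete from my_table
where ctid = any (array(
    select ctid from my_table
    where end_date <= timestamp '2018-12-12 00:00:00'
    limit 10000
));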
Hi,
I already checked, and all the tables that use the id column of the main
table as a foreign key have an index on that column.
I tried all of the following 4 solutions:
1) delete from my_table where end_date <=
to_date('12/12/2018','DD/MM/YYYY') and end_date >
to_date('11/12/2018','DD/MM/YYYY');
Executi
Can't drop the foreign keys, there are too many.
On Mon, Sep 3, 2018 at 11:35, Sergei Kornilov <s...@zsrv.org> wrote:
> Hello
>
> > Delete on my_table  (cost=0.00..65183.30 rows=1573862 width=6) (actual
> time=5121.344..5121.344 rows=0 loops=1)
> >   ->  Seq Scan on my_table  (cost=0.00
392kB
Buffers: shared hit=65020, temp written=6852
  ->  Seq Scan on my_table my_table  (cost=0.00..81160.02 rows=1614002
width=14) (actual time=0.297..815.133 rows=1613117 loops=1)
        Buffers: shared hit=65020
I restarted the cluster after running every query.
<berlincar...@gmail.com> wrote:
> This is a terribly inflexible design, why so many foreign keys? If the
> table requires removing data, rebuild with partitions. Parent keys should
> be in reference tables, not in fact table.
>
> On Mon, Sep 3, 2018 at 04:51 Mariel Cherkassky
Hi Jeff,
I tried every solution that I found on the net. I can't disable foreign keys
or indexes.
I'm trying to get better performance by just changing the query / changing
parameters.
On Mon, Sep 3, 2018 at 18:25, Jeff Janes <jeff.ja...@gmail.com> wrote:
>
>
>
>>
>> 4)delete in chunks :
Hi,
I'm trying to understand the execution plan that is chosen for my query
when I run a select on a partitioned table. On my main partition table I
have rules that redirect inserts to the right child table.
My schema:
PostgreSQL 9.6.8
mydb=# \d comments_daily
Table "public.fw_log_da
Hi,
Is work_mem used when I do sorts or hash operations on temp tables? Or is
the temp_buffers memory that is allocated at the beginning of the session
used for it?
In one of our apps, the app creates a temp table and runs some operations
on it (joins, sum, count, avg) and so on.. I saw in the pos
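One way to see where the memory goes is to watch the sort method in the plan; a sketch:
set work_mem = '4MB';
create temp table t as select generate_series(1, 1000000) as x;
explain analyze select x from t order by x;
-- the "Sort Method" line shows whether the sort ran in memory
-- (quicksort) or spilled to disk (external merge)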
Great, thanks!
On Thu, Oct 11, 2018 at 10:42, Andrew Gierth <and...@tao11.riddles.org.uk> wrote:
> >>>>> "Mariel" == Mariel Cherkassky writes:
>
> Mariel> Hi,
> Mariel> Does the work mem is used when I do sorts or hash operatio
Hi,
Can someone explain the logic behind this? I know that VACUUM FULL isn't
recommended, but I found out that whenever I run VACUUM FULL on my
database, a checkpoint occurs every second during that time! Well, I know
that VACUUM FULL duplicates the data into new data files and then it deletes
First of all, thank you for the quick answer. In my case a checkpoint
happened every second during the VACUUM FULL, so the checkpoint timeout
isn't relevant. My guess was that it writes the changes to the WALs, but I
didn't find anything about it in the documentation. Can you share a link
that prove
iple lists.
>
> On Thu, Nov 15, 2018 at 08:53:14PM +0200, Mariel Cherkassky wrote:
> > Can someone explain the logic behind it ? I know that vacuum full isnt
> > something recommended but I found out that whenever I run vacuum full on
> my
> > database checkpoint
Hi,
I'm trying to understand something that is weird in one of my environments.
When I query pg_stat_all_tables I see that most of the tables don't have
any value in the last_autovacuum/analyze columns. In addition, the
autovacuum_count/analyze_count columns are set to 0. However, when checking
the log
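For reference, the kind of query meant here (a sketch):
select relname, last_vacuum, last_autovacuum, last_autoanalyze,
       autovacuum_count, autoanalyze_count
from pg_stat_all_tables
where schemaname not in ('pg_catalog', 'information_schema');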
Hi,
I'm using postgres 9.6.
I have a table with 100M+ records which consumes about 8GB on disk. In
addition, I have an index on the id column of the table.
When I run in psql: explain analyze select id from my_table order by id
the query returns output after 130 seconds, which is great. The plan that
.postgresql.org
>
> Please avoid simultaneously sending the same question to multiple lists.
>
> It means that people can't see each others replies and everything that
> implies.
>
> On Sun, Nov 25, 2018 at 03:08:33PM +0200, Mariel Cherkassky wrote:
> > However when I run
Hi,
I checked pg_stat_all_tables and I see that last_autovacuum is empty for
all the tables, but in the database's log I see that autovacuum was running
and deleted records (not all of them, but still deleted...).
Can someone explain why pg_stat_all_tables doesn't show the real data?
automatic vacuum of table "db1.public.tbl4":
I just changed the names of the tables, but all other values are accurate.
Firewall rules should block all stats, not just some of them, right?
Something is weird...
On Mon, Nov 26, 2018 at 17:25, Tom Lane <t...@sss.pgh.pa.us> wrote:
Hi,
I'm trying to understand why my database consumes so much space. I checked
the space it consumes on disk:
[root@ base]# du -sh * | sort -n
1.1T    17312
5.2G    pgsql_tmp
6.3M    1
6.3M    12865
6.4M    12870
119G    17313
myBIGdb=# select t1.oid,t1.datname AS
db_name,pg_size_pretty(pg_databas
Hi,
I have a very strong machine with 64GB of RAM and 19 CPUs, but it seems
that when I run the following benchmark with pgbench, the database crashes:
createdb -U postgres bench
pgbench -i -s 50 -U postgres -d bench
pgbench -U postgres -d bench -c 10 -t 1
output:
client 8 receivin
Hey,
I installed a new postgres 9.6 on both of my machines. I'm trying to
measure the performance differences between the machines, but it seems that
the results aren't accurate.
I did 2 tests:
1) In the first test the scale was set to 100:
pgbench -i -s 100 -U postgres -d bench -h machine_
; to send that last message back to the list, as maybe others will have
>> better ideas.
>>
>>
>>
>> Greg.
>>
>>
>>
>> *From:* Mariel Cherkassky
>> *Sent:* Thursday, December 13, 2018 1:45 PM
>> *To:* Greg Clough
>> *Subject:*
Hi,
I'm trying to understand some issues that I'm having with the unix_socket
settings and pgsql.
I have 2 machines with pg v9.2.5 with the same settings:
#listen_addresses = 'localhost'
#unix_socket_directory = ''
On both machines I ran netstat to check on which socket postgres li
>
> I installed on machine 2 the following packages, and not what I mentioned
> in my last comment:
>
---> Package postgresql96.x86_64 0:9.6.10-1PGDG.rhel6 will be installed
---> Package postgresql96-contrib.x86_64 0:9.6.10-1PGDG.rhel6 will be
installed
---> Package postgresql96-libs.x86_64 0:9.6.10-1PG
5, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Mariel Cherkassky writes:
> > I'm trying to understand some issues that I'm having with the unix_socket
> > settings and pgsql.
> > I have 2 machines with pg v9.2.5 with the same next settings :
> > #listen_ad
the socket dir. Do those packages include
a different libpq? Which postgres package changes the libpq?
On Wed, Jan 9, 2019 at 17:13, Jeff Janes <jeff.ja...@gmail.com> wrote:
> On Wed, Jan 9, 2019 at 10:09 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> w
Hey,
It is clear that when we query some data, if that data isn't in the shared
buffers, pg will bring the relevant blocks from disk into the shared
buffers. I wanted to ask if the same logic applies to
DML (insert/update/delete). I'm familiar with the write logic, that the
checkpointer is the
. My question is, is it possible that it also deleted the 9.2
libpq file?
On Wed, Jan 9, 2019 at 18:11, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Mariel Cherkassky writes:
> > But in both of the machines I have the same os and I used the same
> > repository - post
Thanks, Ken. I just wanted to make sure that it happened because of the 9.6
package installation and not for any other reason.
On Thu, Jan 10, 2019 at 11:42, Ken Tanzer <ken.tan...@gmail.com> wrote:
> On Wed, Jan 9, 2019 at 7:09 AM Mariel Cherkassky <
>
commit. How will the
database handle it?
On Thu, Jan 10, 2019 at 10:55, Guillaume Lelarge <guilla...@lelarge.info> wrote:
> Le jeu. 10 janv. 2019 à 09:07, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> a écrit :
>
>> Hey,
>> It is clear that when
Hey,
I have a table with 3 columns, and one of those columns is of type bytea:
A(int, int, bytea).
Every row that I insert is pretty big, and that's why postgresql decided to
store that column in a TOAST table (pg_toasted_Aid). I had a lot of bloat
issues with that table, so I set the vacuum_threshold of the
table than the toasted table? Should they be vacuumed together?
On Jan 17, 2019 7:52 PM, "Alvaro Herrera" wrote:
On 2019-Jan-17, Mariel Cherkassky wrote:
> I tried to set the same threshold for the toasted table but got an error
> that it is a catalog table and therefore permission
Right now the threshold for the original table is set to 0.05, and that is
too often for the original, but for the toasted table it isn't enough
because it has more than 10M records..
On Jan 17, 2019 9:09 PM, "Alvaro Herrera" wrote:
On 2019-Jan-17, Mariel Cherkassky wrote:
> I did i
Got it, I didn't see the toast word in the command. Thanks!
On Thu, Jan 17, 2019, 10:17 PM Alvaro Herrera wrote:
On 2019-Jan-17, Mariel Cherkassky wrote:
>
> > But you said that the threshold that is chosen for the toasted table is
> > identical to the originals table threshold
Hey,
I'm trying to help a guy that is using pg9.6 but I'm not so familiar with
the error message :
ERROR: found xmin 16804535 from before relfrozenxid 90126924
CONTEXT: automatic vacuum of table db1.public.table_1"
It seems that the error has started appearing two weeks ago. Data that I
collec
Yeah, 9.6!
On Wed, Jan 23, 2019, 9:51 PM Jerry Sievers wrote:
Mariel Cherkassky writes:
>
> > Hey,
> > I'm trying to help a guy that is using pg9.6 but I'm not so familiar
> > with the error message :
> > ERROR: found xmin 16804535 from before relfrozenxid 90126
t? Or is there
something else that needs to be done?
On Wed, Jan 23, 2019 at 21:51, Jerry Sievers <gsiever...@comcast.net> wrote:
> Mariel Cherkassky writes:
>
> > Hey,
> > I'm trying to help a guy that is using pg9.6 but I'm not so familiar
> > w
I'm getting this issue when I try to connect to a specific db. Does it
matter which table I specify? Should I just choose a random table from the
problematic db? If I dump the db and restore it, can that help?
On Fri, Jan 25, 2019, 10:19 AM Adrien NAYRAT wrote:
On 1/24/19 3:14 PM, Mariel
Updating to the latest minor version should be an easy solution - yum
update postgresql. What did you mean by careful?
On Sat, Jan 26, 2019, 12:48 PM Adrien NAYRAT wrote:
On 1/25/19 6:20 PM, Mariel Cherkassky wrote:
> > I'm getting this issue when I try to connect to a specific db. Does it
> > m
Hi,
I'm planning our db upgrade from 9.6. Basically, I wanted to check how
stable the pg11 version is. I'm considering upgrading from 9.6 to 10 and
then to 11 immediately. Is there a way to upgrade directly to 11 and skip
10?
Thanks.
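pg_upgrade supports jumping several major versions at once, so going from 9.6 straight to 11 should be possible; a sketch with illustrative paths:
/usr/pgsql-11/bin/pg_upgrade --check \
    -b /usr/pgsql-9.6/bin -B /usr/pgsql-11/bin \
    -d /var/lib/pgsql/9.6/data -D /var/lib/pgsql/11/data
# --check only does a dry run; rerun without it to perform the upgrade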
Hey,
I noticed that pg_locks has an additional row, with locktype "virtualxid",
for every transaction that is created. I tried to search for it online, but
I didn't find an explanation for this behavior. Can anyone explain why it
happens?
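This is easy to see from within a transaction; a sketch:
begin;
select locktype, virtualxid, virtualtransaction, mode, granted
from pg_locks where pid = pg_backend_pid();
-- every transaction holds an ExclusiveLock on its own virtual
-- transaction id, which is what other sessions can wait on
commit;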
GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit
(1 row)
and the error still exists..
On Sat, Jan 26, 2019 at 12:59, Adrien NAYRAT <adrien.nay...@anayrat.info> wrote:
> On 1/26/19 11:56 AM, Mariel Cherkassky wrote:
> > Update to the minor version should be an e
d *no
upgrade was done*
On Wed, Jan 30, 2019 at 11:14, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
> On 2019-Jan-30, Mariel Cherkassky wrote:
>
> > It seems that the version of the db is 9.6.10 :
> >
> > psql -U db -d db -c "select version()
Hey,
I'm using postgresql 9.6.11. I wanted to ask something about the functions
I mentioned in the title.
I created the following table:
postgres=# \d students;
     Table "public.students"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 name   | text    |
 age    | integ
> >
> > [1]
> >
> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE
> >
> > Le mer. 30 janv. 2019 à 11:42, Mariel Cherkassky <
> > mariel.cherkas...@gmail.com> a écrit :
> >
> > > Hey,
> > > I'm us
2019 at 11:14, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
>
>> On 2019-Jan-30, Mariel Cherkassky wrote:
>>
>> > It seems that the version of the db is 9.6.10 :
>> >
>>
Hi,
I have a table with a bytea column, and its size is huge; that's why
postgres created a TOAST table for that column. The original table contains
about 1K-10K rows, but the TOAST table can contain up to 20M rows. I
assigned the following two settings for the toasted table:
alter table orig_table set
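Presumably settings of this shape (a sketch; the values here are illustrative, not the ones from the original mail):
alter table orig_table set (
    toast.autovacuum_vacuum_scale_factor = 0,
    toast.autovacuum_vacuum_threshold = 10000
);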
cleaning it, or wait until the
vacuum_threshold is hit again?
On Wed, Feb 6, 2019 at 15:05, David Rowley <david.row...@2ndquadrant.com> wrote:
> On Thu, 7 Feb 2019 at 00:17, Laurenz Albe
> wrote:
> >
> > On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:
&
Which one do you mean? I changed the threshold and the scale for the
specific table...
On Wed, Feb 6, 2019 at 15:36, dangal <danielito.ga...@gmail.com> wrote:
> Would it be nice to start changing those values found in the default
> postgres.conf so low?
>
>
>
2, Jeff Janes <jeff.ja...@gmail.com> wrote:
> On Wed, Feb 6, 2019 at 5:29 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>
>> Now the question is how to handle or tune it? Is there any chance that I
>> need to increase the cost_limit / cost_dela
mment there is a bytea column, and therefore the toasted table is the
problematic one here.
On Thu, Feb 7, 2019 at 0:34, David Rowley <david.row...@2ndquadrant.com> wrote:
> On Thu, 7 Feb 2019 at 02:34, Mariel Cherkassky
> wrote:
> > As I said, I set the next setting
Hey,
I'm trying to understand the logic behind all of these, so I would be happy
if you could confirm what I understood, or correct me if I'm wrong:
- On commit, all the data in the wal_buffers is written into
the WAL files.
- Checkpoints write the data itself (the blocks that were changed)
> > I'm trying to understand the logic behind all of these so I would be
> happy
> > if you can confirm what I understood or correct me if I'm wrong :
> > - On commit, all the data in the wal_buffers is written
> into the WAL files.
>
> All the transaction log for the transaction has
Hey,
I have a very big TOAST table in my db (9.2.5). Autovacuum doesn't gather
statistics on it because the analyze scale/threshold settings are at their
defaults; as a result autoanalyze never runs and the statistics are wrong:
select * from pg_stat_all_Tables where relname='pg_toast_13488395';
-[ RECORD 1 ]-
e bit problematic ?
On Wed, Feb 13, 2019 at 18:13, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
> On 2019-Feb-13, Mariel Cherkassky wrote:
>
> > Hey,
> > I have a very big toasted table in my db(9.2.5).
>
> Six years of bugfixes missing there ...
I meant the analyze; if analyze runs very often on the original table,
aren't there disadvantages to that?
On Wed, Feb 13, 2019 at 18:54, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
> On 2019-Feb-13, Mariel Cherkassky wrote:
>
> > To be honest, it isnt my d
ds ~ 17H
So autovacuum was sleeping for 17h? I think that I should increase the
cost_limit to the max specifically on the toasted table. What do you think?
Am I wrong here?
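The kind of per-table change meant here (a sketch; values illustrative):
alter table orig_table set (toast.autovacuum_vacuum_cost_limit = 10000);
alter table orig_table set (toast.autovacuum_vacuum_cost_delay = 0);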
On Thu, Feb 7, 2019 at 18:26, Jeff Janes <jeff.ja...@gmail.com> wrote:
> On Thu, Feb 7, 2019 at
solve the issue?
On Thu, Feb 14, 2019, 8:38 PM Michael Lewis wrote:
It is curious to me that the tuples remaining count varies so wildly. Is
> this expected?
>
>
> *Michael Lewis*
>
> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
*Michael Lewis*
>
> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> Maybe by explaining the table's purpose it will be clearer. The original
>> table contains rows for sessions in my app. Every session saves for itself
ently.
>
>
>
> *Michael Lewis *
>
>
> On Thu, Feb 14, 2019 at 1:08 PM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> No I don't run vacuum manually afterwards because the autovacuum should
>> run. This process happens every night
But then I don't have accurate statistics on my toasted table..
On Fri, Feb 15, 2019, 3:39 PM Alvaro Herrera wrote:
On 2019-Feb-14, Mariel Cherkassky wrote:
>
> > I meant the analyze; if analyze will run very often on the original
> table,
> > aren't there disadvantages f
Hi,
I have a table with a json column: R(object int, data jsonb).
Example content:
 object | data
--------+------------------------------------------
     50 | {"ranges": [[1, 1]]}
     51 | {"ranges": [[5, 700], [1, 5], [9, 10]]}
     52 | {"ranges":
is int4range type, then I would expect that you could add a
> GiST and then use overlaps &&, or another operator. I would not expect that
> you could index (unnest data->>'ranges' for instance) to get the separated
> out range values.
>
>
>
> *Michae
able like R3 would make sense to me.
>
> *Michael Lewis*
>
> On Tue, Feb 19, 2019 at 9:34 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> I don't have any indexes on R (the table with the jsonb column). I was
>> asking if I can create any t
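For context, a sketch of the separate-table idea with a range type and a GiST index (names illustrative):
create table r3 (object int, rng int4range);
create index on r3 using gist (rng);
-- overlap queries like this one can then use the index:
select object from r3 where rng && int4range(5, 10);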
Hi,
I was testing pgstattuple, and I realized that pgstattuple works on TOAST
tables, but pgstattuple_approx raises the following error message:
ERROR: "pg_toast_18292" is not a table or materialized view
Hmm, is that because pgstattuple_approx uses the visibility map? Can
someone explain? Thanks.
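A minimal reproduction of the difference (using the TOAST table name from the error above):
create extension if not exists pgstattuple;
select * from pgstattuple('pg_toast.pg_toast_18292');        -- works on a TOAST table
select * from pgstattuple_approx('pg_toast.pg_toast_18292'); -- raises the error above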
Hi,
I have the following relation in my db: A(id int, info bytea, date
timestamp). Every cell in the info column is very big, and because of that
there is a TOAST table with the data of the info column
(pg_toast.pg_toast_123456). The relation contains the login info for every
user that logs into the s
updated every hour. Only during those problematic
7 hours it wasn't updated.
On Wed, Mar 6, 2019 at 19:05, Justin Pryzby <pry...@telsasoft.com> wrote:
> On Wed, Mar 06, 2019 at 06:47:21PM +0200, Mariel Cherkassky wrote:
> > Those settings helped but the table still grew very
pen? The db was installed at that version in the first place and *no
>> upgrade was done*
>>
>> On Wed, Jan 30, 2019 at 11:14, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
>>
>>> On 2019-Jan-30, Mariel Cherkassky wrote:
>>>
tored it:
drop table table_name;
psql -d db -U username -f table.sql
On Wed, Mar 13, 2019 at 14:24, Adrien NAYRAT <adrien.nay...@anayrat.info> wrote:
> On 3/12/19 8:58 AM, Mariel Cherkassky wrote:
> > Apparently the issue appeared again in the same database but on
>
"select * from table for update" database
>
> Using the last releases of the major versions solve the bug for me.
>
> Best regards
>
> On Wed, Mar 13, 2019 at 09:29, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> Hey,
>> The logs are
Hi all,
I'm trying to analyze a deadlock that I have in one of our environments.
The deadlock message:
06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on
transaction 1017405468; blocked by process 36589.
Process 36589 waits for ShareLock on transaction 1017403840; blocked by
pro
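On 9.6+, one way to map out who blocks whom while it is happening is pg_blocking_pids(); a sketch:
select waiting.pid as blocked_pid, waiting.query as blocked_query,
       blocking.pid as blocking_pid, blocking.query as blocking_query
from pg_stat_activity waiting
join pg_stat_activity blocking
  on blocking.pid = any (pg_blocking_pids(waiting.pid));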
Hey,
I was searching for a solution to scale my postgresql instance in the
cloud. I'm aware that I can create many read-only replicas in the cloud and
that it would improve my read performance. I wanted to hear what solutions
you are familiar with? Are there any sharding solutions that are
com