In fact it was a single delete statement.
From: Vladimir Nicolici
Sent: Tuesday, October 10, 2017 17:30
To: Achilleas Mantzios; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
No, it didn’t. The delete was done in a single transaction.
From: Achilleas Mantzios
Sent: Tuesday, October 10, 2017 17:18
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Hello Vladimir,
maybe your update triggered auto_vacuum on
I experimented some more with the settings this weekend, while doing some large
write operations (deleting 200 million records from a table), and I realized
that the database is capable of generating much more WAL than I estimated.
And it seems that spikes in write activity, when longer than a f
Further updates:
Yesterday checkpoints were finishing more or less on time with the
configuration for 25 minutes out of 30 minutes, taking 26 minutes at most.
So for today I reduced the time reserved for checkpoint writes to 20 minutes
out of 30 minutes, by setting checkpoint_completion_target
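For reference, that kind of change needs no restart. A minimal sketch, assuming
a 9.4+ server where ALTER SYSTEM is available (0.67 simply being 20 minutes out
of the 30-minute checkpoint_timeout):
ALTER SYSTEM SET checkpoint_completion_target = 0.67;
SELECT pg_reload_conf();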
Sent: Friday, October 6, 2017 04:51
To: Vladimir Nicolici
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Hi,
On 2017-10-05 22:58:31 +0300, Vladimir Nicolici wrote:
> I changed some configuration parameters during the night to the values I
combination, I will probably set it to something like 0.90
target, so that it distributes the writes over 27 minutes.
Thanks,
Vlad
From: Igor Polishchuk
Sent: Friday, October 6, 2017 02:56
To: Vladimir Nicolici
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Some further updates about the issue.
I did a bit of benchmarking on the disk system with iozone, and during the
test the SSDs seemed to be able to easily sustain 200 MB/second of writes each;
they fluctuated between 200 MB/s and 400 MB/s when doing 96 GB of random writes
in a file. That wo
I have a large database, 1.3 TB, with quite a bit of write activity. The
machine has 2 CPUs x 6 cores x 2 threads (2 x E5-2630 v2 @ 2.60GHz) and 4 x EVO
Pro 2TB SSDs in a RAID 1+0 software RAID configuration, on a SATA 3 controller.
The machine has a lot of memory, 384 GB, so it doesn’t do a lot o
caching in ZFS. As I understand it now, such a config can provide better
results since data will be cached once in ZFS.
On Sun, Sep 24, 2017 at 8:59 PM, Tomas Vondra
wrote:
> On 09/24/2017 11:03 AM, Vladimir Mihailenco wrote:
> > Thanks for your response. As I understand it now the difference is
e a typo? If not then what data is written synchronously?
On Sat, Sep 23, 2017 at 6:01 PM, Tomas Vondra
wrote:
> Hi,
>
> On 09/23/2017 08:18 AM, Vladimir Mihailenco wrote:
> > Hi,
> >
> > I wonder what is the point of setting max WAL size bigger than shared
> >
Hi,
I wonder what is the point of setting max WAL size bigger than shared
buffers, e.g.
shared_buffers = 512MB
max_wal_size = 2GB
As I understand it, a checkpoint happens after 2GB of data were modified
(written to WAL), but shared buffers can contain at most 512MB of dirty
pages to be flushed to th
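A quick way to see which of the two limits is actually driving checkpoints on a
running server is to compare the two counters in pg_stat_bgwriter (a sketch,
assuming a release before 15, where these columns still live in that view):
SELECT checkpoints_timed,  -- triggered by checkpoint_timeout
       checkpoints_req     -- requested, e.g. because max_wal_size was reached
FROM pg_stat_bgwriter;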
ons, so
it would be easier to reason about
Have you tried "withStatusInterval(20, TimeUnit.SECONDS)" instead of 20
millis? I don't think it matters much; however, 20ms seems to be overkill.
Vladimir
Fri, 15 Sep 2017 at 19:57, Dipesh Dangol:
> hi,
>
> I am trying to im
Hi Andres.
> 25 Apr 2017, at 7:17, Andres Freund wrote:
>
> Hi,
>
> I've lately seen more and more installations where the generation of
> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious
> whether that's primarily a "sampling error" of mine, or whether that's
> ind
password';
>
> where it will test that the password entered conforms to the
> company standard during user creation.
> So please suggest.
>
Consider using PAM authentication, where you can plug in any of the already
existing password strength checks.
Or maybe LDAP auth, where the policy will be enforced by the LDAP server.
--
Vladimir Rusinov
Storage SRE, Google Ireland
Depends on goals of your benchmarking.
What are you trying to achieve?
Initialization and vacuuming each time will help achieve more consistent
best-case numbers (to reduce variance, I'd also destroy the cluster completely
and clean up the hardware, e.g. run fstrim in the case of SSDs, etc.).
If you are howeve
Maybe, maybe not.
Have you tried installing '=postgresql93-contrib-9.3.14' ?
On Thu, Nov 24, 2016 at 3:41 PM David Richer
wrote:
> Hi guys,
>
>
>
> I want to check my production server for the free space map issue.
> https://wiki.postgresql.org/wiki/Free_Space_Map_Problems
>
> I am on Centos 6
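If the contrib package does install, the check itself is one extension plus one
query per table of interest (a sketch using the pg_freespacemap module; the
table name is a placeholder):
CREATE EXTENSION pg_freespacemap;
-- free space recorded in the FSM for the first blocks of a table
SELECT * FROM pg_freespace('my_table'::regclass) LIMIT 10;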
Hi folks!
Just released a beta version of a new open-source, cross-platform PostgreSQL GUI
client.
Check it out - https://github.com/web-pal/DBGlass
--
Vladimir Pal
Do I understand correctly that the number of members cannot be more than 2^32
(it also uses a 32-bit counter)?
I had 69640 files in main/pg_multixact/members/; 69640*32*2045 = 4557241600
members. Is this normal?
Kind regards,
Vladimir Pavlov
-Original Message-
From: Alvaro Herrera
Members (32*2045*10820): 708060800
Members per multixact (708060800 / (2075246000 - 2019511697)): 12.70421916
Multixact size (bytes) (2887696384/708060800): 4.078316981 - is that a lot?
Kind regards,
Vladimir Pavlov
-Original Message-
From: Alvaro Herrera [mailto:alvhe...@2ndquad
server stops working.
The question is how to start the VACUUM at least once in three days.
Kind regards,
Vladimir Pavlov
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, March 30, 2016 4:52 PM
To: Pavlov Vladimir; 'Alvaro Herrera'
Hello,
Is there any news?
Now I have to do a VACUUM every night so that the server keeps working.
Maybe run VACUUM FREEZE?
Kind regards,
Vladimir Pavlov
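Scheduling a manual VACUUM aside, the per-table multixact freeze threshold can
be lowered so that autovacuum picks the table up sooner on its own (a sketch,
assuming 9.3+, where this storage parameter exists; the table name is a
placeholder):
ALTER TABLE big_table
    SET (autovacuum_multixact_freeze_max_age = 100000000);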
-Original Message-
From: Pavlov Vladimir
Sent: Friday, March 25, 2016 9:55 AM
To: 'Alvaro Herrera'
Cc: 'Adrian Klaver&
Hi, thank you very much for your help.
The pg_control output is in the attachment.
Kind regards,
Vladimir Pavlov
-Original Message-
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
Sent: Friday, March 25, 2016 12:25 AM
To: Pavlov Vladimir
Cc: 'Adrian Klaver'; pgsql-general@post
There is nothing:
select * from pg_prepared_xacts;
 transaction | gid | prepared | owner | database
-------------+-----+----------+-------+----------
(0 rows)
I also noticed that there are a lot of files in the directory
main/pg_multixact/members/ - now 69640.
Kind regards,
Vladimir Pavlov
Thanks for your reply.
Yes, the first thing I looked at was the statistics from pg_stat_activity.
But my transactions take no more than 60 seconds, and the 'idle in
transaction' state lasts only a few seconds.
Kind regards,
Vladimir Pavlov
-Original Message-
From: Adr
multixacts soon to avoid wraparound
problems.
If I understand correctly, we are approaching multixact member wraparound.
But how do I know when exactly it will happen, and what should I do?
PostgreSQL version - 9.3.10, OS Debian 7.8.
Thank you.
Sorry, if I chose the wrong mailing list.
Kind regards,
Vladimir Pavlov
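A rough way to watch how far multixact consumption has advanced (a sketch,
assuming 9.3+, where pg_database exposes datminmxid; member-space usage itself
still has to be estimated from the size of main/pg_multixact/members/):
SELECT datname, datminmxid  -- oldest multixact ID still needed, per database
FROM pg_database
ORDER BY datname;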
> but in this case all these transactions are independent with autocommit off,
At the database level, there is no "autocommit=off".
There's just "begin-end".
It is the database that forbids .commit, not the JDBC driver.
Vladimir
should not be used for "control flow", should they?
If you want to shoot yourself in the foot for fun and profit, you can
try https://github.com/pgjdbc/pgjdbc/pull/477.
What it does is create savepoints before each statement, then roll back
to that savepoint in case of failure.
Vladimir
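In plain SQL, the pattern that the pull request automates looks roughly like
this (illustrative only; the table and savepoint names are made up):
BEGIN;
SAVEPOINT before_stmt;
INSERT INTO t VALUES (1);
-- on failure the driver would issue:
--   ROLLBACK TO SAVEPOINT before_stmt;
-- on success it can simply discard the savepoint:
RELEASE SAVEPOINT before_stmt;
COMMIT;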
>OK, I understand: to file a pull request there, I must register on this
>webpage?
Exactly.
Vladimir
>As I understand it, that's all you need, isn't it
Ideally I would like to see a pull request at
https://github.com/pgjdbc/pgjdbc/pulls; however, your code seems to be
good enough that somebody else can pick it up, simplify it a bit, and file
a PR.
Vladimir
> I hope I have been as clear as my poor level of English allows..
It would be great if you could express that in Java + SQL as well, so
the exact code can be added to the JDBC driver test suite as a regression
test.
Vladimir
) with index
on pid.
Any pitfalls with that kind of "update-mostly table"?
[1]: http://research.google.com/pubs/pub36356.html
--
Regards,
Vladimir Sitnikov
Hi all.
What is the best way to get the current timeline of a host? Right now I can imagine
two variants:
1. Do a checkpoint and read it from the control file.
2. Do something like "SELECT
substr(pg_xlogfile_name(pg_current_xlog_location()), 1, 8)".
Both variants seem to be a bit tricky. Is there a way be
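From 9.6 onward there is a third option that avoids both tricks, since the
control file can be read through SQL (a sketch; not available on 9.3/9.4):
SELECT timeline_id FROM pg_control_checkpoint();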
> 19 Mar 2015, at 20:30, Sergey Shchukin
> wrote:
>
> On 17.03.2015 13:22, Sergey Shchukin wrote:
>> On 05.03.2015 11:25, Jim Nasby wrote:
>>> On 2/27/15 5:11 AM, Sergey Shchukin wrote:
show max_standby_streaming_delay;
 max_standby_streaming_delay
-----------------------------
On 5 Jan 2015, at 18:15, Vladimir Borodin wrote:
> Hi all.
>
> I have a simple script for planned switchover of PostgreSQL (9.3 and 9.4)
> master to one of its replicas. This script checks a lot of things before
> doing it and one of them is that all data from master ha
Hi all.
I have a simple script for planned switchover of PostgreSQL (9.3 and 9.4)
master to one of its replicas. This script checks a lot of things before doing
it, and one of them is that all data from the master has been received by the
replica that is going to be promoted. Right now the check is done
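On 9.3/9.4 the usual building blocks for such a check are the location
functions on each side (a sketch only; the script's actual comparison logic is
not shown here):
-- on the master:
SELECT pg_current_xlog_location();
-- on the replica that is about to be promoted:
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();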
[1]
http://www.postgresql.org/docs/current/static/app-initdb.html#APP-INITDB-DATA-CHECKSUMS
[2] http://www.postgresql.org/docs/current/static/wal-reliability.html
--
Vladimir
On 12 May 2014, at 22:26, Adrian Klaver wrote:
> On 05/12/2014 09:42 AM, Borodin Vladimir wrote:
>> Hi all.
>>
>> Right now synchronous replication in postgresql chooses one replica as
>> synchronous and waits for replies from it (with synchronous_commit = on
>
recent replica and promote it. Or are there pitfalls that I do not see?
--
Vladimir
A program written in C using libpq, which receives large files (BYTEA),
has a memory leak.
I need to free ALL of the used memory after each SQL query.
After each call to PQclear() I shrink the buffer:
conn->inBuffer = realloc(conn->inBuffer, 8192);
conn->inBufSize = 8192;
It works, but ..
I notic
l service: [ OK ]
--
Vladimir N. Indik
> > cat the-source-dump.sql | iconv -t utf8 - > my-converted.sql
> >
> > Size should not matter in this case...
>
> Yeah it does. iconv buffers everything in memory, as I recall.
Just found an alternative - "uconv" command (part of ICU project):
http://www.icu-project.org/userguide/intro.html
> You have not understood what I said. I ran iconv, and it changes the
> encoding of the data, but not the ENCODING= statements that are
> embedded in the datastream. Yes I can change those with sed, but
> I do not know what else I need to change. There must be an easier
> way.
Oops, please a
> iconv does not change the database encodings embedded in the file
> (and it is quite large).
Have you read the manual?
file A pathname of an input file. If no file operands are
specified, or if a file operand is '-', the standard input shall
be used.
cat the-source-dump
> Is there a definitive HOWTO that I can follow? If not, does someone
> have a set of instructions that will work?
What about running the "iconv" command on the dumped .sql file and transforming
it to UTF-8?
Vlad
PS: man iconv for manual
you enormously. Lest you think I'm
> biased, I dba a mysql box professionally...every time I pop into the
> mysql shell I feel like I'm stepping backwards in time about 5 years.
> Don't let the inability to return multiple sets trip you up...you are
> missing the big
SELECTs.
I personally find the ability to do a direct SELECT from a stored
procedure to the client extremely useful (MySQL 5+). It makes data
retrieval easier to program than having a stored procedure return open
cursors or OUT parameters (saving additional SELECT queries after the
CALL() ).
Ok, e
QL is still relatively small and I
wanted to check my options before I dig myself so deeply into MySQL that a
potentially sensible migration becomes too expensive :)
Maybe I'm going to revisit Postgresql again in 2009 or 2010 :)
Vladimir
--
Vladimir Dzhuvinov * www.valan.net * PGP key ID AC9A5C6C
result sets through pg_get_result(), but only for requests issued
asynchronously:
http://bg2.php.net/manual/en/function.pg-get-result.php
> Out of curiosity, what language are you using?
For MySQL I've been mostly using PHP, occasionally Java, Python and C.
Vladimir
--
Vladimir Dzhuvinov
you're still going to get a
result set, it's just going to be an empty one (result with no rows).
So, no matter how many rows the SELECT statements resolve to, you're
always going to get two result sets :)
Vladimir
--
Vladimir Dzhuvinov * www.valan.net * PGP key ID AC9A5C6C
> code, that
> allows multi-record sets.
Yes, I'll be glad to examine your patch. At least to get an idea of
what's involved in implementing multiple result sets.
Please, send the code or a link to it directly to my email (so as not to
spam the list ;)
Greetings from Bulgaria,
Vladimir
transactions encapsulated within SPs, clients
allowed to do CALL only). Anyway, thanks everyone for the cursors tip :)
Vladimir
--
Vladimir Dzhuvinov * www.valan.net * PGP key ID AC9A5C6C
result set (zero or more rows)
SELECT * FROM accounts WHERE account_holder = user_id;
END;
So, is it true that as of PostgreSQL 8.3 there is no way to have a
plpgsql function return multiple SELECTs?
Vladimir Dzhuvinov
--
Vladimir Dzhuvinov * www.valan.net * PGP key ID AC9A5C6C
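The usual 8.3-era workaround is to hand back refcursors and FETCH from them in
the caller's transaction (a sketch; table, function and cursor names are
placeholders):
CREATE OR REPLACE FUNCTION account_report(p_user integer)
RETURNS SETOF refcursor AS $$
DECLARE
    c_accounts refcursor := 'c_accounts';
BEGIN
    OPEN c_accounts FOR
        SELECT * FROM accounts WHERE account_holder = p_user;
    RETURN NEXT c_accounts;
    -- additional OPEN ... RETURN NEXT pairs yield additional result sets
END;
$$ LANGUAGE plpgsql;
-- caller, inside one transaction:
BEGIN;
SELECT account_report(42);
FETCH ALL FROM c_accounts;
COMMIT;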
I've been having a problem with a pgsql-8.2.5 master/slave warm standby
replication setup where occasionally the master node generates a WAL file
smaller than the expected 16MB. pg_standby on the slave gets stuck on such short
files, and replication halts from that moment on. We have to do
pg_start_backup/ rsy
Hello, and thanks
> Are the tests that different that you need to segregate the data?
> I see them both as being the time taken to travel a distance. The
> only difference is whether the time or distance is used to end the
> measurement.
Good point (I have realised this after posting, when I dug
Hello,
>> vladimir konrad wrote:
>>> I think that I understand basic relational theory but
> Clearly, you'll have to revisit that thought.
Usually I have one table per "entity" modelled (and the table holds
fields describing that entity).
E.g. subject w
> If you have some part of your app that needs to "select" the list of
> columns in a table you should look at
> http://www.postgresql.org/docs/8.2/interactive/catalogs.html
> particularly pg_class and pg_attribute
Thanks, this could come handy.
Vlad
> Basically, you would be creating your own data dictionary (i.e.
> system catalog) on top of the db data dictionary. The database
> already comes with a way to easily add columns: ddl. I have seen
> newbie database designers reinvent this method a hundred times. The
> performance hits and compl
> Yes, this is known as eg. Entity-Attribute-Value model (cf.
> wikipedia).
Thank you for the pointer and term. This will get me started.
> IMO most times its disadvantages (it can be very hard to write
> performant queries compared to the traditional row based model) weigh
> higher than you gain
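For concreteness, the EAV layout being discussed boils down to something like
this (an illustrative sketch, not anyone's actual schema):
CREATE TABLE entity_attribute (
    entity_id  integer NOT NULL,
    attr_name  text    NOT NULL,
    attr_value text,
    PRIMARY KEY (entity_id, attr_name)
);
-- every "column" becomes a row, so even simple reports need pivots/self-joins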
Hello,
I think that I understand basic relational theory but then I had an
idea. What I would like to know is whether this is sometimes done or whether I am
possibly mad... Also, I do not know the terminology for this kind of
thing so I do not know where and what to look for.
Basically, instead of adding f
Hello!
Running PostgreSQL 8.2.5 (built from source on Debian testing, amd64) I run
into the following error when running "vacuum full analyze":
ERROR: invalid page header in block 1995925 of relation "data_pkey"
The database was freshly initialized and contains around 1.4 billion records
in the ma
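If "data_pkey" is the primary key index (as the name suggests), rebuilding that
index is the usual first thing to try (a sketch; whether it helps depends on
whether the underlying heap is also damaged):
REINDEX INDEX data_pkey;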
>
> Any ideas what to do next?
Well, I am going to try the same with 8.3 beta1, will see what happens...
Vlad
> I am trying to use postgresql-autodoc. The autodoc finds all the Perl
> modules and compiles but when I go to /usr/local/bin and run
> postgresql_autodoc like this
I had good luck with SchemaSpy (done in Java)...
http://schemaspy.sourceforge.net/
Vlad
ps: the command I use is (all on one
This looks like more of a table design problem than a
database limitation.
One column should accommodate values from both
columns, with a unique index built on this column. Your
requirements tell me that these values are of the same
nature and should be placed in the same column. To
distinguish between them
You can try metalink (https://metalink.oracle.com/),
but they want $$$ for a forum like this one.
--- [EMAIL PROTECTED] wrote:
> - Original Message -
> From: bcochofel <[EMAIL PROTECTED]>
> Date: Thursday, April 5, 2007 7:46 pm
> Subject: [GENERAL] Migrate postgres DB to oracle
>
> > I need s
An SQLite database is a much better choice for a flash drive,
from my point of view.
--- James Neff <[EMAIL PROTECTED]> wrote:
> Mark wrote:
> > I would like to use postgresql with knopixx, sounds like a simple
> > idea :-) and I would like to get full version of postgresql stored on
> > flash drive
Thank you very much.
It works.
Vladimir
--- Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Vladimir Zelinski wrote:
>
> > I don't believe that it's a bug, probably it's a
> > feature of the postgreSql database.
>
> Correct.
>
> > Is any
g of the transaction)?
In other words, I would like to see different
timestamps for the first and the last timestamp.
Thank you,
Vladimir
I don't need an
example for a function; I have tons of them, but I
don't have ANY stored procedure example.
It would be great if you could point me to a site with
PostgreSQL examples of different stored procedures, if
they actually exist as database objects in a PostgreSQL
database.
Thank you,
V
Hi, I just now subscribed to the mailing list, but I
can't understand what I should do next.
I need to:
1) search the forums for specific keywords
2) be able to post my question.
How can I do that? I read the help but it didn't have any
information to help me.
Thank you
My question is how to restrict access on table "stats" for this user in such a
way that this user will be able to select only a limited set of columns
from table "stats", and only rows with usernames for which this user
knows the correct passwords, validated via auth
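The column part of this is straightforward with column-level privileges (a
sketch, assuming 8.4 or later; the role, table and column names are
placeholders - the per-row password check would additionally need a view or
function in front of the table):
REVOKE ALL ON stats FROM stats_reader;
GRANT SELECT (username, page_views) ON stats TO stats_reader;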
Hello,
I'm currently on version 7.3.
I know this is an old version and it would be a good idea to migrate.
Before doing that, I would like to make a dump of my database. That's why I must
keep this service running at any cost.
Thanks for your help !! :)
2006-06-19 10:31:55 L
Karen Hill wrote:
> From Access I'd like to pass the following from MS Access to
> PostgreSQL 8.1 using VBA:
>
> BEGIN;
> UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
> UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 7534;
> COMMIT;
>
> It won't let me
> I remember the Borland of old that offered extraordinarily powerful
> tools at a reasonable price. Unfortunately, they are not the same
> company they used to be.
The freepascal + lazarus + ported ZeosDB could do the trick...
Vlad
---(end of broadcast)-
> Like I said before get the personal/standard version of Delphi 6,7 or
> 2005, it's 99.99 and you can connect to postgres with it using third
> party component sets like Zeos. (2005 may not be available yet)
Zeos was ported to http://www.lazarus.freepascal.org/ (a "free Delphi").
I did test the
On Wednesday 26 January 2005 20:01, you wrote:
> "Vladimir S. Petukhov" <[EMAIL PROTECTED]> writes:
> > pg_controldata /var/pgsql/data
> > ...
> > LC_COLLATE: ru_RU
> > LC_CTYPE: ru_RU
> >
>
recognize upper/lower case..
SELECT ... ORDER BY does something like that (in the English alphabet):
a
a
Tast12
tes
test
Test12
test12
?:(
On Wednesday 26 January 2005 10:15, Dawid Kuroczko wrote:
> On Wed, 26 Jan 2005 12:01:49 +0000, Vladimir S. Petukhov
>
> <[EMAIL PROTECTED]> wr
Hi!
Sorry for my English..
I want to do a case-insensitive search, like this:
... WHERE lower(column_name) LIKE lower('%value%');
This works fine for English..
But I need to search for Russian words; the lower() operator does not work with
Russian (non-English) chars, but ORDER BY works fine...
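For reference, ILIKE spells the same idea more compactly, but it goes through
the same locale machinery, so it only helps once LC_CTYPE and the database
encoding agree on the Russian characters (illustrative only; the table name is
a placeholder):
SELECT * FROM some_table WHERE column_name ILIKE '%value%';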
Sorry, of course... :)
On Thursday 20 January 2005 03:15, Vladimir S. Petukhov wrote:
> select * from nets;
>
>  name | note |      net
> ------+------+---------------
>       |      | 172.16.0.0/16
> (1 row)
>
> select * from nets where net >>=
select * from nets;
 name | note |      net
------+------+---------------
      |      | 172.16.0.0/16
(1 row)

select * from nets where net >>= '172.16.4.0/8';
 name | note | net
------+------+-----
(0 rows)
??
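The empty result is expected: >>= asks whether the left network contains (or
equals) the right one, and a /16 can never contain a /8. Asking about a
narrower network, or a single host, does match (a sketch against the same
table):
select * from nets where net >>= '172.16.4.0/24';
select * from nets where net >>= '172.16.4.1';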
Hi!
Sorry for my English..
I want to dynamically change the type of a column, if possible, of course.
The trivial idea: create a new temporary column, try to write the value from the old
column to the temporary one (if the type conversion is OK - this is done using select/update
commands and type conversion checks on the client's side),
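For what it's worth, from 8.0 onward the whole dance can be done in one
statement, with an explicit cast expression for the conversion (a sketch; the
table, column and target type are placeholders):
ALTER TABLE my_table
    ALTER COLUMN my_column TYPE integer
    USING my_column::integer;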
> VACUUM FULL after doing your INSERTs (not after each one, of course --
> after doing all of the INSERTs, or after doing a big chunk of them. If
> data is inserted incrementally over a period of time, then just do the
> VACUUM ANALYZE every so often during that time, and you shouldn't have
On Tuesday 21 December 2004 22:00, Bruno Wolff III wrote:
> On Wed, Dec 22, 2004 at 00:16:06 +,
>
> "Vladimir S. Petukhov" <[EMAIL PROTECTED]> wrote:
> > On Tuesday 21 December 2004 21:21, Bruno Wolff III wrote:
> > > On Tue, Dec 21, 2004 at 2
On Tuesday 21 December 2004 21:21, Bruno Wolff III wrote:
> On Tue, Dec 21, 2004 at 20:47:31 +,
>
> "Vladimir S. Petukhov" <[EMAIL PROTECTED]> wrote:
> > Ok, this is a real example:
> >
> > CREATE TABLE account (
> > val1
14:38, Bruno Wolff III wrote:
> On Mon, Dec 20, 2004 at 12:13:31 +,
>
> "Vladimir S. Petukhov" <[EMAIL PROTECTED]> wrote:
> > Hi
> > Sorry for my English..
> >
> > I need to organize database structure for saving statistic data for
> >
Hi
Sorry for my English..
I need to organize a database structure for saving statistics data for objects. I
have about 24 * 31 * 4 fields (4 months, 31 days, 24 hours) of data for one
object. Each field contains 8 numbers (N in general). So:
object1 -> data -> field1, field2,...
object2 -> data ->
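One common normalized layout for this kind of hourly data is a single row per
object per hour (an illustrative sketch only, not from the thread; names are
placeholders):
CREATE TABLE object_stats (
    object_id integer   NOT NULL,
    sample_ts timestamp NOT NULL,   -- one row per object per hour
    vals      numeric[] NOT NULL,   -- the 8 (or N) numbers for that hour
    PRIMARY KEY (object_id, sample_ts)
);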
not work.
Does somebody have the same problem?
Please help, this DB is in production...
--
Vladimir Drobny mailto:[EMAIL PROTECTED]