fine.
What exactly is in your data file? If it includes COPY FROM stdin, then
'psql -f data.sql' would do the trick ( add appropriate database name,
host, password, as needed, just the same as you would use in a standard
psql command; remember the db name goes last ).
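A minimal sketch, with hypothetical host, user and database names:

  psql -h myhost -U myuser -f data.sql mydb

The file is read as if typed into psql, so COPY ... FROM stdin blocks
with inline data work fine.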
Regards
Francisco Olarte.
( in this case the A file will simulate q2.sql, echo is simulating the
first psql command and cat is simulating the second psql command ). You are
redirecting the second psql input twice, once with | and once with <. A
simple variant is:
folarte@paqueton:~$ (cat A; echo ) | cat
aaa
combine several commands and pipe their output to another one. The
problem is that all the other constructs seem to be trying to do it
with a pipe of single commands, which is much more difficult. Of
course, if the problem is due to inadequate shells ( as, say, cmd.exe )
it may need to be done in other ways.
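A sketch of that grouping, with hypothetical file and connection names:

  ( echo 'BEGIN;'; cat q2.sql; echo 'COMMIT;' ) | psql -h myhost mydb

The parentheses run the commands in a subshell, so their concatenated
output becomes a single stdin for psql.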
Regards.
Francisco Olarte.
shared buffers, but every working
backend needs extra memory, first of all just to run, and also, if they
perform any significant query, to sort results, combine them and do their
magic. Those things are controlled by settings like work_mem and others;
they are in the manual, and can add up to quite a bit.
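The per-operation knob can be inspected and tuned per session ( the
value here is illustrative only ):

  SHOW work_mem;
  SET work_mem = '64MB';  -- per sort/hash operation; several can run in one backend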
Regards.
Francisco Olarte.
not so powerful desktop with an extra SATA disk to store
the pg_dumps, and it worked really well; we are presently using two
servers, using one of the extra gigabit interfaces with a crossover cable
for the replication connection plus an extra SATA disk to make hourly
pg_dumps, and it works quite well.
Francisco Olarte.
ther in xz which beat it in BOTH size AND time, YMMV ).
Regards.
Francisco Olarte.
Hi Robert:
On Thu, Mar 12, 2015 at 12:52 PM, Robert Inder wrote:
> On 11 March 2015 at 17:32, Francisco Olarte wrote:
>> This is: build a streaming replication slave, pg_dump from the slave. If
>> needed, restore in the master.
...
> I really like the idea of running pg_dump
quoting it in some statements ( create ) and not in others ( 1st
delete ) will normally surprise you in unpleasant ways.
Francisco Olarte.
normally works everywhere ( case folding, in
either direction, but you are asking for problems if you decide to
quote identifiers for any reason ), and create "a" delete "A" should
fail everywhere.
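A minimal illustration of both folding rules:

  CREATE TABLE A (i int);  -- unquoted: folded to lower case, creates "a"
  DELETE FROM "A";         -- quoted: looks up "A" verbatim, fails everywhere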
Francisco Olarte.
http://xyproblem.info/ , do you want an
aggregate to return the first value, or a function? ( I suspect it is
the first case, as you are pasting a create function for first_agg and
a select for my_first, but not having posted a complete example makes
it very hard ).
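For reference, a common definition of such an aggregate ( a sketch close
to the one on the postgres wiki; the table in the final select is
hypothetical ):

  CREATE FUNCTION first_agg(anyelement, anyelement) RETURNS anyelement
    LANGUAGE sql IMMUTABLE AS $$ SELECT $1 $$;
  CREATE AGGREGATE first(anyelement) ( SFUNC = first_agg, STYPE = anyelement );
  SELECT first(x ORDER BY ts) FROM t;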
Regards.
Francisco Olarte.
archive ) normally does the trick.
Francisco Olarte.
> -----Original Message-----
> From: Francisco Olarte [mailto:fola...@peoplecall.com]
> Sent: May-11-15 08:55
> To: Daniel Begin
> Subject: Re: [GENERAL] Restarting DB after moving to another drive
>
> Hi Daniel.
>
> On
I have moved
directories across disks before, and it has worked, but I use Linux,
which is much simpler, and just did a stop / cp / change pgdata /
restart; I suppose Windows must have comparable ways.
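A sketch of that Linux sequence ( paths are placeholders; keep ownership
and permissions, e.g. with cp -a ):

  pg_ctl -D /var/lib/pgsql/data stop
  cp -a /var/lib/pgsql/data /newdisk/pgdata
  pg_ctl -D /newdisk/pgdata start   # or update PGDATA in your service file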
Francisco Olarte.
records. At 1 chunk per page, with very bad luck, this could
easily go to 480G, or 120G if you've got them 4 to a page, so 200G does
not seem that big ( I do not know if TOAST compression is in play
here, but the numbers look OK to me if the data is not append-only ).
Francisco Olarte.
zilch about Windows or pgAdmin specific stuff.
Regards.
Francisco Olarte.
it's full of very
interesting info and it's very useful, and when you hit a problem
you'll probably know there is something for it and search for it ).
For me the manual is one of the major points for using pg. pg_dumpall
is a fundamental tool for backups, as it's the only one that dumps
the cluster-wide globals ( roles, tablespaces ).
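For example:

  pg_dumpall > cluster.sql      # everything, as a plain SQL script
  pg_dumpall -g > globals.sql   # globals only: roles and tablespaces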
then reindexed them concurrently with
some other ( performance degraded ) work. You can do the same thing by
editing the text dump, but it gets impractical and really complex
beyond a few megabytes.
regards.
Francisco Olarte.
measure?
Also think: if you touch 67-23+1=45 partitions and the DB has to check
other indexed fields, it is a lot of work. As I said, it depends on your
actual data, actual indexes, and actual queries; just measure it. In
my example table I partition the data by the TS, which is the single
indexed f
table, partitioned by months,
indexed by timestamp and number ( 2 indexes ) , queries for a TS range
go fast, as they go to the relevant partitions. Queries for a number
with a some-days timestamp range also go fast, as it has to index scan
one or two partitions for the number, but queries for j
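A sketch of such a layout in modern declarative syntax ( PG 10+; the
original setup predates it and used inheritance ), with hypothetical
names:

  CREATE TABLE readings (
    ts  timestamptz NOT NULL,
    num bigint NOT NULL,
    val double precision
  ) PARTITION BY RANGE (ts);
  CREATE TABLE readings_2015_01 PARTITION OF readings
    FOR VALUES FROM ('2015-01-01') TO ('2015-02-01');
  CREATE INDEX ON readings_2015_01 (ts);
  CREATE INDEX ON readings_2015_01 (num);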
partitioning was my last option...
Make sure you do not have an XY-problem. Also, from the type of
queries you've shown us, I do not think partitioning is the way to go.
It will only give you marginal improvements when querying for single
ids or ranges of ids ( and small ones if your ra
you have some unique condition on the
table, so it errors out on every duplicate. It's not pretty but should
work. I still recommend the temp table approach.
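A sketch of the temp table approach ( table and column names are
hypothetical ):

  CREATE TEMP TABLE staging ( LIKE target INCLUDING DEFAULTS );
  COPY staging FROM '/tmp/data.csv' ( FORMAT csv );
  INSERT INTO target
    SELECT DISTINCT ON (id) * FROM staging s
    WHERE NOT EXISTS ( SELECT 1 FROM target t WHERE t.id = s.id );

On 9.5+ the last insert can be an INSERT ... ON CONFLICT DO NOTHING instead.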
Francisco Olarte.
appending / updating / deleting a lot. You are in a
case ( big data, medium machine, slow disks ) where the real access
patterns are what is going to determine your strategy, no boilerplate
solution is going to apply there.
Francisco Olarte.
queries evolve with versions, but you
can always port them to use the information_schema (
http://www.postgresql.org/docs/9.4/static/information-schema.html )
which should be a little more stable.
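For example, listing columns through the portable route:

  SELECT table_name, column_name, data_type
  FROM information_schema.columns
  WHERE table_schema = 'public'
  ORDER BY table_name, ordinal_position;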
Francisco Olarte.
rates on her currency.
I can also, if needed, build unique indexes on any single id ( they
are all serials, as I have no other candidate keys ), but given the
access patterns I normally have all of them, and things like
populating a drop-down box to allow new rates for a seller are ve
indexes to accelerate your queries while
preserving normalization; if you can afford them, they are nice. The
big problem often comes when you try to mix them with ORMs and
similar.
Francisco Olarte.
duplicating
the parts of the first record on the wire a hundred times, which was
nice since the short table was wide and I only needed 3 short fields
from the second one, and that made the first query run at wire speed
and the second at disk speed ).
Francisco Olarte.
one row, zero columns table is used when it is omitted",
which is what seems to be done, would be nice.
And, as I said, * only means the columns in a select; I think with no
FROM Pg may be generating a fake one-row table to satisfy the
requirements ( maybe not, but it is one easy way to make this
sion tricks. Unless you have lots of small columns, I doubt
serializing/deserializing it is worth the hassle. Postgres does not
always use the most compact form for storage. In fact I would be
greatly surprised if any database stored an array ( which can be
multidimensional, I do not know if other d
may be dwarfed by
network roundtrips, and if this is a problem for you, you should look
at optimizing the common path first, things like how many roundtrips
each PROTOCOL needs for the small query and other similar. You should
measure before. The requester is not normally going to be your big
pro
link it seems your problem is just that you do too many
transactions without a vacuum ( also, reading the threads you pointed
to, it seems you do vacuum fulls, which look unneeded ) and expect
postgres to have some kind of magic to avoid burning the xids.
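A quick way to see how close each database is to wraparound, using
plain vacuum bookkeeping:

  SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;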
Francisco Olarte.
potential ugliness will remain encapsulated ( it may even
be faster, as the docs explicitly say exception blocks are expensive,
but as usual YMMV depending on the exact query and the collision ratio
).
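A sketch of such an encapsulation, along the lines of the example in
the docs ( table and column names are hypothetical ):

  CREATE FUNCTION merge_t(p_k int, p_v text) RETURNS void
  LANGUAGE plpgsql AS $$
  BEGIN
    LOOP
      UPDATE t SET v = p_v WHERE k = p_k;
      IF found THEN RETURN; END IF;
      BEGIN
        INSERT INTO t(k, v) VALUES (p_k, p_v);
        RETURN;
      EXCEPTION WHEN unique_violation THEN
        NULL;  -- a concurrent insert won; loop and retry the update
      END;
    END LOOP;
  END $$;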
Francisco Olarte.
a bit
harder to obtain as you will need to massage the range ( been there,
done this, just not in sql; also, if you are in a procedure and
precalculate the ranges for the three betweens and you know your
utc-offset ranges, you will notice in the 24 hrs range case one of
the three can always be p
dom ), you can use 2^32, which is just 9.5 digits, and search
for a 32 bit block cipher ( or build it yourself, it's not that hard
using stream ciphers or other tools ).
I also think random UUIDs are not OK, not because they are long but
because they are random, and can collide ( encrypte
Inserting nodes sequentially leads
to optimally filled leaf pages and denser trees, as an optimization for
an easy and common case, which are better than the normal ones
generated by random insertion. So it is not that random insertions are
bad, it is that ordered ones are very good; or, put another way, they
are not affected
ize: 2260992.
I would totally expect this. On reindex you get the values from a tree
walk, so both of them come in order, and being a reindex ( where you
know in advance the full set of values, so you can plan ahead where to
put the leaves, how many levels you need and how many splits ) you get
an e
choose a
totally correct encryption function for collision avoidance, like
identity, you are going to have problems in your scheme.
You are not voting for anything, you need a feature proposal to vote
upon. So far the only one I could extract from this thread is
"something which magica
the old voucher numbers, and you get no collisions.
If you run for some years, you can see which vouchers have been used,
so you can debug potential problems.
Francisco Olarte.
Hi Rafal:
On Tue, Jul 21, 2015 at 12:43 PM, Rafal Pietrak wrote:
> On 21.07.2015 at 09:34, Francisco Olarte wrote:
>> In this case I think you are mixing vouchers with voucher-numbers. IMO
>> you could get a better design by using an auxiliary table and not
>> nullif
ype will be useful too, maybe Fedora is using
some extended thingies ).
Francisco Olarte.
directly in the partitions ).
Francisco Olarte.
If you shrink and fill the rest with other uses, the server will
crash on the next growth ( some very special cases may be different, but
in general if you have free space it is because you create/delete, be it
directly or via MVCC updates, so having it there for next usage is not
so bad ).
Francisco Olarte.
Something like this:
# with source(s) as (values('200'),('2000'),('1150'),('3'))
, step1(s) as (select translate(s,'0123456789','ofghijklmn') from source)
, step2(s) as (select replace(s,'','b') from step1)
, step3(s) as (select
uld then need a little further processing to determine the tens,
> hundreds, etc.
>
> I'll leave that to you to work out, but additional functions of
> strpos(string, substring)
> substr(string, from [, count])
> length(string)
>
Do not forget replace, shorter, easier.
Francisco Olarte.
c uses, I think it still is
gzip, which is not that bad and is normally quite fast ( In fact I do
not use that 'pbzip2', but I did some tests about a year ago and I
found bzip2 was beaten by xz quite easily ( That means on every level
of bzip2 one of the levels of xz beat it in BOTH size
Remember you'll need to decompress it before restoring ( this does not
happen with text format, as you can do a streaming restore, but the restore
options for text format are limited; it's an all or nothing approach unless
you are really fluent in stream editors ).
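A sketch with a plain-format dump and hypothetical names:

  pg_dump mydb | xz > mydb.sql.xz    # compress on the way out
  xz -dc mydb.sql.xz | psql newdb    # stream restore, no intermediate file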
Francisco Olarte.
r than uncompressed ones ), and a
lot of code. But in the development process we did a full comparison
of several compressors, and found what I stated about bzip2: it was
surpassed in every combination of options by xz ( plain bzip2, plain
xz ).
Francisco Olarte.
a, when matching them it should
already have been converted to whatever the language uses for newlines
( in C and perl that means \n, which need not be \012, BTW. On unix
\n=\012 on disk, on CP/M it's \015\012, and when I worked with Mac (
before the unixy OS X they use now ) it was \015, and I cannot think of
what they may use on EBCDIC machines ).
Francisco Olarte.
Mon, 28 Sep 2015 09:09:55 -0300
( I'm not familiar with your distro, but I got bitten by one of those
some years ago: incorrect timezone definitions )
Francisco Olarte.
memory,
what would you think the correct behaviour would be for a 'poke
rand(),rand()' in the server process? It could keep a triple-redundant
copy of every page and try to vote and detect errors on each
instruction, but it is pointless.
Francisco Olarte.
t worked fine ( like CP/M ). For you the correct
behaviour may be to fail fast and lose a little speed; for others the
current one may be OK. Normally for these things you go for the path
with less code, as deleted code is bug free.
Francisco Olarte.
This is not for me to say, but I think it would complicate things too
much for a narrow use case ( and I doubt poolers are used for this
kind of thing anyway ).
Francisco Olarte.
the system catalogs inside them ).
Francisco Olarte.
' to them to
achieve this, but if N is, say, a hundred, and you have a huge M, like
ten thousand, with a different combo for each one, his solution may
make sense ( I do not think such a bizarre case justifies the bug risk
of including the feature, but it can make sense ) )
Francisco Olarte.
What are the use cases for the
general public, etc. Just eyeballing it, I would estimate this will
need many pages just to state the problems and the intended semantics
of your proposal.
Regards.
Francisco Olarte.
The '#' seems to indicate you run those commands as root, while the
server typically runs as postgres. Have you checked that the commands
work when issued as the server user?
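A quick check, assuming the server runs as the usual postgres account
and a placeholder path:

  sudo -u postgres ls -l /path/to/the/file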
Francisco Olarte.
l script but
asking pg_restore to generate it if you need it, but not the other way
round ).
Francisco Olarte.
eparated ( and potentially
compressed, it's a lot like a zip file ) and so can perform selective
restores ( you can select what to restore, and playing with the -l /
-L options even on what order, which gives you a lot of play ).
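A sketch of the -l / -L dance ( names are placeholders ):

  pg_dump -Fc mydb > mydb.dump
  pg_restore -l mydb.dump > toc.txt
  # edit toc.txt: comment out or reorder entries
  pg_restore -L toc.txt -d mydb mydb.dump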
Francisco Olarte.
using freebcp, IIRC, on the sql server side )
You can still have problems IF you have updates to the tables, or
deletions, or ... But
if you just have insertions, copy is easy to do.
Francisco Olarte.
asted
> contains tabs and readline uses them to do completion.
Doesn't 'cat | psql ' disable it? I use it with other programs for
this purpose ( as well as things like ls | cat to avoid
colors/wordwrapping; it just makes the program see a non-tty on
stdin/stdout ).
Francisco Olarte.
WHERE needs a BOOLEAN.
YOU need to be able to identify the inserted rows. YOU know your data
definitions. Are you able to query them?
> but this, I am sure has some syntax errors, could you help correct this,
NOT, because I do not know the table structure. Only you can do that.
Francisco Olarte.
n it for recovery ) ?
Francisco Olarte.
cache, then
crash and have nothing on reboot.
Francisco Olarte.
ster, boot it with that, restore, stop
it again and reboot with the normal fsync=on config. In this case we
do not mind losing data as we are doing a full restore anyway.
But normally, it's a bad idea. As a classic photo caption says,
fsync=off => DBAs running with scissors.
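A sketch of that restore-only usage ( paths and names are placeholders ):

  pg_ctl -D "$PGDATA" -o '-c fsync=off' start
  pg_restore -d mydb mydb.dump
  pg_ctl -D "$PGDATA" stop
  pg_ctl -D "$PGDATA" start    # back to the normal fsync=on configuration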
Francisco Olarte
rks, in nearly every situation. If you have
problems, consider explaining them and maybe you will get some
``advice''.
Francisco Olarte.
the docs for SQL. This kind of language is
similar to SQL, but not the same. I think
https://www.postgresql.org/docs/9.5/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW
is the proper place to look it up.
Francisco Olarte.
was just trying
to point out that "select" is not the same in plpgsql and in sql, so you
need to read the docs for plpgsql to find out how to solve it.
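A minimal illustration of the one-row form described there:

  DO $$
  DECLARE n bigint;
  BEGIN
    SELECT count(*) INTO STRICT n FROM pg_class;
    RAISE NOTICE 'pg_class has % rows', n;
  END $$;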
Francisco Olarte.
Especially if this happens, you may have some slightly bad disks/RAM/
leading to this kind of problem.
Francisco Olarte.
ted as postgres and rsync, tend
to indicate controller/RAM/disk going bad ( in your case it could be
caused by a single bit getting flipped in a sector for the data
portion of the table, and not being propagated either because it
happened after your sync of DRBD or because it was synced from the WA
lems ( although the disk CRC should catch
all odd numbers of bit errors, but with VMs in the mix who knows where
the messages could end up ).
Francisco Olarte.
and be sure to scroll down to "SQL Interpolation" after the built-in
variables list and read that. I've used it several times; just
remember it's a macro processor and it's done by psql, not by the
server.
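A minimal example of that interpolation ( the variable name is
arbitrary ):

  \set tbl pg_class
  SELECT count(*) FROM :"tbl";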
Francisco Olarte.
re not all
vendids are present. If you prefer null you can use it; IIRC max
ignores them.
Francisco Olarte.
the faster way to extract the relevant data ( the rest of my query,
after the first with, is just moving data around for pretty-printing
( or pretty-selecting ) ).
Francisco Olarte.
just write 10M integers to
a disk file, then shuffle it and compare COPY FROM times from both ) (
unless you know of an easy way to generate a random permutation on the
fly without using a lot of memory, I do not ).
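A sketch of that test harness, done outside the database:

  seq 1 10000000 > ordered.txt
  sort -R ordered.txt > shuffled.txt
  # COPY each file into identical empty tables and compare the timings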
Francisco Olarte.
/64. I think there are some pseudo-random number generators which
can be made to work with any range, but do not recall which ones right
now.
Francisco Olarte.
select * from table where common_condition and filter_condition order
by xx limit N
becomes
with base as (select * from table where common_condition order by xx
limit base_fetches)
select * from base where filter_condition order by xx limit N;
In the example common_condition is nonexistent, so put it as true
every row older than that from staging to the
partition with whatever period is best ). The staging partition is normally
small and cached and can be processed quite fast ( with 200k/day an
hourly movement will leave staging with less than about 10k rows if
the distribution is somewhat uniform ).
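A sketch of such an hourly move ( table and column names are
hypothetical; the writable CTE needs 9.1+ ):

  WITH moved AS (
    DELETE FROM staging
    WHERE ts < date_trunc('hour', now())
    RETURNING *
  )
  INSERT INTO archive SELECT * FROM moved;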
Francisco Olarte.
But anyway, to compare two things like that, as the original poster
was doing, I normally prefer to test just one thing at a time, that's
why I would normally try to do it by writing a sorted file, shuffling
it with sort -R, and copying it, server side if possible, to eliminate
so both
Francisco Olarte.
Try using *10, *100, *1k of the real limit until you have
enough results if you want to time-limit your queries.
Francisco Olarte.
second one
> finish off after 13 rows fetched and returns the full 10 rows.
Good. The only problem is you are not guaranteed a result, like in the
contrived example I gave, but if it is what you want this is a way to
go.
Francisco Olarte.
not current with the current postgres
details, but it does not surprise me that they have big optimizations for
this, especially when index-ordered insertion is quite common in
things like bulk loads or timestamped log lines.
Francisco Olarte.
A big aborted bulk load may just fit the case, as
it may put a lot of tuples in new pages at the end and be executed in
a low-load period where the lock is easier to acquire.
Francisco Olarte.
f skipping large
chunks knowing where the info is can save you a lot of work and mails.
AAMOF, it's one of the main reasons I've been using postgres all
these years.
Francisco Olarte.
On Tue, Aug 23, 2016 at 4:28 PM, Rob Sargent wrote:
> On 08/23/2016 07:44 AM, Francisco Olarte wrote:
>> On Tue, Aug 23, 2016 at 2:26 PM, pinker wrote:
>>> I am just surprised by the order of magnitude in the difference though. 2
>>> and 27 minutes that's the
not know if
pg inserts several items at a time in bulk loading, but I doubt it.
Normally every btree indexing library has some optimization for these
cases, as they are common, just like every real sort routine has some
optimization for presorted input.
Francisco Olarte.
other.
He probably wants to treat it as non-deferrable just during the
upsert. I do not know if he has thought about how this opens a can of
worms ( like, the constraint may already be broken due to previous DML ).
Francisco Olarte.
it can be
conditionally enabled with a simple set and implemented in very few
( < 20 ) lines of code, OK for me; otherwise I would prefer the reduced
bug surface.
Francisco Olarte.
All the contents of your pg_hba.conf? Note order matters: all
non-comment lines ( or at least the host ones ) need to be checked.
Also, did you signal the postmaster to reread after adding the line?
Francisco Olarte.
) is running and working.
It has nothing to do with it, except if postfix is using postgres.
> How can I verify ?
If you used hab, it is wrong; if you used hba, consult the docs for
your version & OS and check.
Francisco Olarte.
signal the postmaster to reread after adding the line?
> What do you mean?
When you change the file you need to signal the postgres main process
( postmaster ) to reread it by sending it a HUP signal, or using
pg_ctl reload ( your OS/distro may have other methods ).
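From a superuser session the same reread can be triggered with:

  SELECT pg_reload_conf();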
Francisco Olarte.
select * from the_table; truncate the_table; insert into the_table
select * from tt order by index_expression; drop table tt. It is nice
to do it for tables that are normally ordered but somehow lost it,
like having a log table with an indexed field for insertion timestamp
and updating it a lot, or pur
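A sketch of that sequence, assuming an indexed timestamp column ts:

  BEGIN;
  CREATE TEMP TABLE tt AS SELECT * FROM the_table;
  TRUNCATE the_table;
  INSERT INTO the_table SELECT * FROM tt ORDER BY ts;
  DROP TABLE tt;
  COMMIT;

CLUSTER the_table USING some_index does much the same in one command,
at the price of an exclusive lock.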
% of the table, a seq scan tends to beat index scan easily when
selecting that big part of the table, even accounting for dead tuples
it's more about 50% of the table, and a seq scan is much faster PER
TUPLE than an index scan ( and an index scan would likely touch every
data page for that big fr
nt. You could try one of the
functions in
https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-GENFILE
and many of the untrusted programming languages for postgres functions
( plperl, plpython, etc ) have methods for calling stat in the server.
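For example ( superuser only; the path is relative to the data
directory ):

  SELECT size, modification FROM pg_stat_file('postmaster.pid');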
Francisco Olarte.
.024617892 +0200
Change: 2016-09-30 17:31:21.024617892 +0200
Birth: -
Further details left for the reader.
Francisco Olarte.
if the other transaction hasn't committed? Or it has created the
table anew ( no drop, the table wasn't there )? What are the isolation
levels involved?
If all the transactions operating on the table are doing just what you
show and nothing more, and they are all serializable, I MAY expect
olation levels. And drop table if exists means if it exists when the
server executes your command, not in the future ( the server cannot
know if it will exist then; your own transaction may recreate it or
not ). Maybe you know your command sequence is not going to depend on
intermediate resu
ble did not exist, as
commands are not postponed ( it must show you the notice or not before
completing ), so you are just issuing two create commands for the same
table.
Your serially postponed execution is a nice desire, but I doubt it is necessary.
Francisco Olarte.
'test and set' and 'compare-exchange' and similar.
This one is similar to a test-and-set: you set existence to false and
test whether it existed before. I can easily test and then set, but that
is not the same as TAS. And the notice is not the reason it is not done
at commit time, the
zed file between the same disks someone may be able to say
something.
> Instance RAM: 60GB
> Instance CPU: 16Cores
Cores do not help, postgres is single-threaded ( one process per
backend ). RAM MAY help, but I suspect your operations are IO bound. Of
course, with the sparseness of the details, one cannot say too much
uncached file between the
affected volumes. If the move is, say, 1.5 times slower, I wouldn't say it
is that slow ( given copy is optimized for this kind of transfer and
a database not so much ).
Francisco Olarte.
data
moving ops ( and I doubt it will, as presently you can easily saturate
the channels with a single core for that kind of simple op, and
normally if you want to optimize this kind of op it is better to target
concurrency ( the table can be used while moving ) than pure speed ).
Francisco Olarte.
stated a busy
system, but anyway the lock is going to execute fast but with a
long delay, and counting the time from the issuing of the command to
the time of end is a perfectly reasonable way to do it.
Anyway, OK, exclusive locks cause the slowness.
Francisco Olarte.