(ship) it and for log based
replication slots ), but postgres recycles segments ( which can have
an impact on big memory machines ). I do not know to what extent a
modern OS can detect the access pattern and do things like evict the
log pages early after sync.
Francisco Olarte.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
> which leads to higher throughput.
Have you accounted for disk caching? Your CDC may be getting the log
from the cache when running with little lag, but being forced to read
from disk ( making the server do it ) when it falls behind.
Francisco Olarte.
mented as compressed BY DEFAULT, but you can use
options to avoid compression, and it is the only one which supports
parallel dumps.
Also, custom and tar can be made uncompressed, but I do not think
that's a great idea.
Francisco Olarte.
rtitioned data. One of our current
> problems is exactly the time it takes for backup and restore operations. I
> did not mentioned it before because of the size of the original message.
We normally do the schema trick, and as 90% of data is in historic
schema, we skip most of it.
Francisc
gin with...
Seems fine to me. Never used that because I normally use special
insertion programs for my partitioned tables ( my usage allows that
), so I always insert directly into the appropriate partition ( so I
just use inheritance, no triggers or rules ).
Francisco Olarte.
wxrwxrwt 14 root root 45056 Sep 12 19:17 ///tmp
Francisco Olarte.
\copy. As explained in the docs this just does "copy from stdin" ( or
to stdout ) on the client side and redirects the file you give in the
command line ( or you can issue a [psql ... -c "copy ...from stdin"]
in a command line and feed the file via shell redirections, but, II
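A minimal sketch of both forms (table and file names are made up):

```sql
-- psql reads the local file and streams it to the server as COPY ... FROM STDIN
\copy my_table FROM 'data.txt'
-- equivalent from a shell, feeding the file via redirection:
-- psql -c "copy my_table from stdin" < data.txt
```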
her formats remains a
mystery to me), and LIMIT 0 ( I would rather try adding AND FALSE to
the where clause; it may lead to faster response, although I doubt
it).
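Both variants sketched (table and column names hypothetical):

```sql
-- Either query returns the column layout with zero rows:
SELECT * FROM my_table LIMIT 0;
SELECT * FROM my_table WHERE id > 10 AND false;  -- planner folds the scan to a no-op
```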
Francisco Olarte.
do it, we may provide some
useful info if you show yours first.
Francisco Olarte.
y, feel free to use it as
much as you like, or to recommend it as good if you want. Normally I
wouldn't even mention it, as I did not in my first response, I just
did to explain why I ignored the tail.
Francisco Olarte.
wrote:
> On Sat, 2017-09-02 at 17:54 +0200, Francisco Olarte wrote:
>> It's probably doing 1(integer) => double precision => numeric(20) or
>> something similar if you do not specify.
>>
>> Francisco Olarte.
>
> Well, the question was not only about why t
ified numeric
precision ( except for toy one liners and the like ), and I was trying
to show the OP why he got 20.
Francisco Olarte.
 ?column?
----------
     0.33
(1 row)
It's probably doing 1(integer) => double precision => numeric(20) or
something similar if you do not specify.
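When in doubt, make the chain explicit; a small sketch:

```sql
SELECT 1 / 3;                    -- 0: integer division truncates
SELECT 1::numeric / 3;           -- 0.33333333333333333333
SELECT (1.0 / 3)::numeric(20,2); -- 0.33: round only at the end
```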
Francisco Olarte.
d arguments, but those extensions are NOT free to develop, test
and maintain. And every syntax extension, especially one like this,
introduces the possibility of collisions with future standards ( de
facto or de iure, although Pg already deviates from ANSI on the temp
stuff ).
Francisco Olarte.
enough
to avoid it, as the pg_temp. prefix makes it equally clear and explicit
that you are dropping a temporary table.
And a programmer who forgets the pg_temp. can equally forget the TEMP.
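A sketch of the schema-qualified form (table name made up):

```sql
CREATE TEMP TABLE scratch (id int);
DROP TABLE pg_temp.scratch;  -- pg_temp. can only match this session's temp table
```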
Francisco Olarte.
but not too sure about tar,
it should not. In this case, as previously suggested, a simple gunzip
-t is enough to verify backup file integrity, but checking internal
correctness is very difficult ( as it may even depend on server
configuration, e.g., needing some predefined users / locales /
encod
On Wed, Aug 2, 2017 at 6:23 PM, Scott Marlowe wrote:
> Does insert's "on conflict" clause not work for this usage?
Did you even bother to read the queries? He is using two different tables.
Francisco Olarte.
nothing prohibits you from starting the output as soon as you can
calculate it even if no pager there ( note you have to make things
like calculate column widths, but this is done with and without pager
). In fact normally the pager will just introduce a small but
potentially noticeable del
even a simple 'while((c=getc())!=EOF) putc(c)' should be
fast if the console/redirection is not playing tricks.
You can also test with a single table with a single text field, put a
several-pages text value there and select it. If it is slow, I would
bet on terminal emulator problems.
Fran
like as a result. Is there
> any way to improve just the display/write performance in the console?
Are you sure the culprit is psql and not your terminal emulator?
Francisco Olarte.
; it isn't always easy to do.
Excessive = too much, normally implies bad things.
Francisco Olarte.
bined with top posting I feel it as insulting (
to me it feels like 'you do not deserve me taking the time to edit a
bit and make things clear' ) ( but well, I started when the whole
university multiplexed over a 9600bps link, so I may be a bit extreme
on this )
Regards.
Francisco Olarte.
On Tue, May 9, 2017 at 1:44 PM, vinny wrote:
> In fact, I don't think many companies/developers even choose a language
> or database, but rather just use whatever they have experience in.
That is choosing. You choose them because you know them.
Francisco Olarte.
ogDB?
I do not think either of these is true.
Francisco Olarte.
that in pg update~=delete+insert. I use those because many
times they are more efficient ( simple conditions on delete, insert is
fast in postgres, and you can vacuum in the middle if a large portion
is going to get reinserted to reuse the space )
Francisco Olarte.
debugging the code much easier ( as the temp table can be cloned to
test easily ). For encapsulation "with" helps a lot, or, in a
function, you can use a real temporary table.
Francisco Olarte
wer.
Anyway, with base backup + wal archive you always have the option of
making it incremental. Just start a recovery on the backup each time
you receive a wal segment and you are done. In fact, you can treat a
replication slave as a very low lag backup.
Francisco Olarte.
superior ( IMNSHO ) to CSV. It ( by default ) separates records
with newlines and fields with tab, and escapes newlines, tabs and
backslashes in data with backslash, so the transformation is
contextless, much easier than csv:
Copy out: Replace NULL with '\N', newline with '\n'
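A sketch of the default text format (table name made up):

```sql
COPY my_table TO STDOUT;
-- emits tab-separated fields, one record per line, with NULL as \N and
-- literal tab / newline / backslash in data escaped as \t, \n and \\
```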
e it ). IIRC Windows had infrastructure to do that with
services, but I haven't used it since they launched XP so I'm really
rusty and outdated.
Francisco Olarte
n do with
tar x and a tar backup is possible with pg_restore, and then more ).
Francisco Olarte.
because it is
faster ( than using it, index fetches are slower than sequential
fetches ( for the same number of rows ), so for some queries it is
better to not use the index ) ( especially if you are using them for
small test tables, i.e., for a single page table nothing beats a
sequential scan )
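Easy to see with EXPLAIN (names hypothetical):

```sql
CREATE TABLE tiny (id int PRIMARY KEY);
INSERT INTO tiny SELECT generate_series(1, 10);
ANALYZE tiny;
EXPLAIN SELECT * FROM tiny WHERE id = 5;
-- expect a Seq Scan: on a single-page table it beats the index
```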
am I doing wrong please?
Not RTFM ? ( if I'm right, or not understanding it )
Francisco Olarte.
g and
accounting. And you can optimize some things ( like copying from
several satellites and then inserting them at once ).
YMMV anyway, just use whichever is easier for you, but avoid false laziness ;-)
Francisco Olarte.
punched cards times, and use it a lot for file processing ( as
renaming in Linux is atomic in many filesystems )
Francisco Olarte.
committed before the remote, you can lose rows.
If the remote ( t2 ) is committed before the local you'll have dupes,
so you need some way to purge them.
These things can be solved with the aid of a transaction manager and
prepared transactions, but I'm not sure of the status of it in your
g" ( I do not remember the
versions ) you can use an extra step. Instead of inserting in main in
2 do 2.a - copy holding to main ( truncating beforehand if the copy is
present ) and 2.b insert the new rows from the copy, either by using an
anti-join with main or by deleting ( in the same transaction ) the
d
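The anti-join variant could look like this (table and key names are made up):

```sql
-- Insert only the rows not already present in main:
INSERT INTO main
SELECT h.*
FROM holding h
LEFT JOIN main m USING (id)
WHERE m.id IS NULL;  -- anti-join: keep rows with no match in main
```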
ll depends on the concrete app, but you can try
to fit the pattern in it, I've done it several times and it's a useful
one.
Francisco Olarte.
ccount with a .pgpass file works like
it. You store every password in a file readable by a user, .pgpass, and
you use that user's login credentials to get access to it.
Francisco Olarte.
y
regexp related function clustered with its siblings, either in the
string page or ( in other manuals ) in its dedicated section.
Francisco Olarte.
op at 2 because element 1 can only be
swapped with itself. I've marked it volatile as it returns different
things each time you call it. My tests show it working, but it may
have some problems with the type conversions, as I'm not used to doing
this kind of code in plpgsql, but you can ge
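A Fisher-Yates sketch along those lines, for int[] (a guess at the shape of the original function):

```sql
CREATE OR REPLACE FUNCTION shuffle(a int[]) RETURNS int[]
LANGUAGE plpgsql VOLATILE AS $$  -- volatile: a new permutation on every call
DECLARE
  j int;
  t int;
BEGIN
  -- stop at 2: element 1 can only be swapped with itself
  FOR i IN REVERSE array_length(a, 1)..2 LOOP
    j := 1 + floor(random() * i)::int;       -- random position in 1..i
    t := a[i]; a[i] := a[j]; a[j] := t;      -- swap
  END LOOP;
  RETURN a;
END $$;
```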
dates between the 25th and the 25th and times between 8:00 and 12:59 can
easily be selected by the interval [20161225T08,
20161225T130000), but all the mornings in december can not ( although
a query with ts>='20160101' and ts<'20170101' and ts::time>='08:00'
and ts::time<'13:00' should work quite well, the first two conditions
guide it to an index scan and the rest is done with filtering ).
Francisco Olarte.
ne based on a partial
description. I do not see anything in common between 'like based
query' and timestamp columns.
Francisco Olarte.
to preclean the data. It just seemed
to me from the description of your problem that you were using too
complex a tool. Now that you are introducing new terms, like reject
handling, I'll step out until I can make a suggestion ( don't bother to
define it for me, it seems a bulkload related t
the-datafile | perl the_script.pl |
my_favourite_pager" until correct; the beauty of this approach is
you do not touch the db while debugging, feed it to psql when done. In my
experience the perl script overhead is unnoticeable on any 2k+ machine
(and perl was specifically designed to be good at t
anish version
of April Fools I was tempted to write something about different
impedances in the copper tracks used for DB data traffic when entering
the CPU silicon interconnects via golden cables.
Francisco Olarte.
d=inhrelid and inhparent=20473)
SELECT 'ALTER TABLE ' || relname || ' rest of alter table command;'
from childs ;
And feed the result back to the server using your favorite tool (
quoting maybe needed, schema names may be needed, YMMV ).
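In psql 9.6+ the feed-back step can be automated with \gexec; a sketch (direct children only, and both the parent name and the alter action are placeholders):

```sql
-- \gexec runs each result row as a command instead of printing it
SELECT 'ALTER TABLE ' || quote_ident(c.relname) || ' rest of alter table command;'
FROM pg_inherits i JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'parent_table'::regclass
\gexec
```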
Francisco Olarte.
/file whether you have something similar.
Francisco Olarte.
when having strange issues, I've found the combo
echo | od -tx1 -tc
very useful; it helps rule out the potential fancy quotes pointed out previously
Francisco Olarte.
are impossible to satisfy ( of course, you could run with
scissors, but that can lose data without hdd failure ).
Francisco Olarte.
although I fear this will lock the table too, but it
will be for a very short time, your readers may well tolerate it. )
Yours seems a special app with special needs; try a few, measure, it is
certainly possible.
Francisco Olarte.
age patterns where a
heavy update plus vacuum full was successfully used.
Francisco Olarte.
---+---+---
 t | t | t
(1 row)
I.e., the same happens with a nullable unique column: you can have one
of each not-null value and as many nulls as you want.
SQL null is a strange beast.
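A quick way to see it (sketch):

```sql
CREATE TEMP TABLE t (v int UNIQUE);
INSERT INTO t VALUES (1), (NULL), (NULL), (NULL);  -- fine: NULLs never collide
INSERT INTO t VALUES (1);  -- fails: duplicate key violates the unique constraint
```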
Francisco Olarte.
m.info/ ,
what are you trying to achieve by doing that?
Francisco Olarte.
h the index if it includes word
position or by reading the docs. In general, in FTS, you need to use
selective terms for fast queries.
Francisco Olarte.
've found
several times is FTS does not mix too well with relational queries at
the optimizer level ( as FTS terms can have very diverse degrees of
correlation, which is very difficult to store in the statistics a
relational optimizer normally uses ).
Francisco Olarte.
a running system some
recovery is needed to make it consistent. With the target time you can
limit how much is done. But there is a minimum. Think of it, if you
stated '1970-01-01' it would be clearly impossible; your date is
bigger, but still impossible, try raising it a bit.
Francisco Olar
Merlin:
On Thu, Oct 27, 2016 at 7:29 PM, Merlin Moncure wrote:
> On Thu, Oct 27, 2016 at 11:18 AM, Francisco Olarte
> wrote:
>> It is, but handling them is not easy, and you have to deal with things
>> like DoS which are not trivial on the server ( as it is a heavy
>> s
Tom:
On Thu, Oct 27, 2016 at 6:32 PM, Tom Lane wrote:
> Francisco Olarte writes:
>> Isn't this a server setting, and so going to affect every connection,
> Yes,
Ok, just checking.
> but there are equivalent libpq parameters for firing heartbeat
> pings from the clie
Merlin:
On Thu, Oct 27, 2016 at 6:10 PM, Merlin Moncure wrote:
> On Thu, Oct 27, 2016 at 10:01 AM, Francisco Olarte
> wrote:
>> And I'd like to point libpq sessions does not sound to be the best
>> kind of traffic across a firewall, not a good service / protocol to
>
nf
Isn't this a server setting, and so going to affect every connection,
be it from the (affected) libpq connections or from other sources (
like jdbc, although he may want keepalives for those too )?
Francisco Olarte.
palive. Keepalives generate traffic which
normally keeps overzealous firewalls happy; I have used them before
successfully.
And I'd like to point out that libpq sessions do not sound like the
best kind of traffic to send across a firewall, not a good service /
protocol to expose.
Francisco Olarte.
rimary will stop
accepting work?
Francisco Olarte.
'lost
in translation'. Unless you abuse things like ø or ö or things like
these, people do not normally have problems running them ( in Spanish we
just have to avoid tildes on vowels and ñ and are fine ).
Francisco Olarte.
:59:59+02
(1 row)
You'll see you are building timestamps WITH time zone, not plain
timestamps. I think this is not going to influence your
queries, but better convert explicitly ( as it can bite you on some
occasions ).
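Making the conversion explicit (sketch):

```sql
SELECT '2016-10-20'::timestamp;    -- plain timestamp, no zone attached
SELECT '2016-10-20'::timestamptz;  -- with time zone: interpreted in the session's zone
```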
Francisco Olarte.
0'::date
I think it is the other way round ( date::date = '2016/10/20' ).
To me it seems yours will do:
date = '2016/10/20'::date::timestamp ( = 2016/10/20 00:00:00 )
( widening conversion )
Francisco Olarte.
y few rows
and speed up the last phase. Anyway, I fear Bernoulli must read all
the table too, to be able to discard randomly, so you may not win
anything ( I would compare the query time against a simple 'count(one)'
query, to have a benchmark of how much time the server spends
reading t
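For comparison, the two built-in sampling methods (9.5+; the table name is made up):

```sql
SELECT count(*) FROM big_table TABLESAMPLE BERNOULLI (1);  -- ~1% of rows, scans every page
SELECT count(*) FROM big_table TABLESAMPLE SYSTEM (1);     -- ~1% of pages, much cheaper I/O
```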
's fast, I would try selecting several
thousands and eyeballing the result; if it does what I fear the
grouping should be obvious ).
Maybe you do not mind it, in which case it's ok, but a one minute run
should let you know what exactly you are doing.
Francisco Olarte.
stated a busy
system, but anyway the lock is going to execute fast but with a
long delay, and counting the time from the issuing of the command to
the time of end is a perfectly reasonable way to do it.
Anyway, ok, exclusive locks cause the slowness.
Francisco Olarte.
data
moving ops ( and I doubt it will, as presently you can easily saturate
the channels with a single core for that kind of simple ops, and
normally if you want to optimize this kind of op it is better to target
concurrency ( the table can be used while moving ) than pure speed ).
Francisco Olarte.
uncached file between the
affected volumes. If the move is, say, 1.5 times slower I wouldn't say
it is that slow ( given copy is optimized for this kind of transfer and
a database not so much ).
Francisco Olarte.
zed file between the same disks someone may be able to say
something.
> Instance RAM: 60GB
> Instance CPU: 16Cores
Cores do not help much here, postgres uses a single process per
connection. RAM MAY help, but I suspect your operations are IO bound.
Of course, with the sparseness of the details, one can not say too much
'test and set' and 'compare-exchange' and similar.
This one is similar to a test and set, you set existence to false and
test whether it existed before. I can easily test and then set, but it
is not the same as TAS. And the notice is not the reason it is not done
at commit time, the
ble did not exist, as
commands are not postponed ( it must show you the notice or not before
completing ), so you are just issuing two create commands for the same
table.
Your serially postponed execution is a nice desire, but I doubt it is necessary.
Francisco Olarte.
olation levels. And drop table if exists means if it exists when the
server executes your command, not in the future ( the server cannot
know if it will exist then, your own transaction may recreate it or
not ). Maybe you know your command sequence is not going to depend on
intermediate resu
if the other transaction hasn't committed? Or it has created the
table anew ( no drop, the table wasn't there )? What are the isolation
levels involved?
If all the transactions operating on the table are doing just what you
show and nothing more, and they are all serializable, I MAY expect
.024617892 +0200
Change: 2016-09-30 17:31:21.024617892 +0200
Birth: -
Further details left for the reader.
Francisco Olarte.
nt. You could try one of the
functions in
https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-GENFILE
and many of the untrusted programming languages for postgres functions
( plperl, plpython, etc ) have ways of calling stat on the server.
Francisco Olarte.
% of the table, a seq scan tends to beat an index scan easily when
selecting that big a part of the table; even accounting for dead tuples
it's more about 50% of the table, and a seq scan is much faster PER
TUPLE than an index scan ( and an index scan would likely touch every
data page for that big fr
elect * from the_table, truncate the_table, insert into the_table
select * from tt order by index_expression, drop table tt. It is nice
to do it for tables that are normally ordered but somehow lost it,
like having a log table with an indexed field for insertion timestamp
and updating it a lot, or pur
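The recipe spelled out (names as in the text; index_expression is a placeholder):

```sql
CREATE TEMP TABLE tt AS SELECT * FROM the_table;  -- snapshot the rows
TRUNCATE the_table;
INSERT INTO the_table SELECT * FROM tt ORDER BY index_expression;  -- reinsert in order
DROP TABLE tt;
```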
signal the postmaster to reread after adding the line?
> What do you mean?
When you change the file you need to signal the postgres main process
( postmaster ) to reread it by sending it a HUP signal, or using
pg_ctl reload ( your OS/distro may have other methods ).
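From SQL, as a superuser, the same can be done with (a sketch):

```sql
SELECT pg_reload_conf();  -- sends SIGHUP to the postmaster, rereading pg_hba.conf
```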
Francisco Olarte.
) is running and working.
It has nothing to do with it, except if postfix is using postgres.
> How can I verify ?
If you used hab, it is wrong; if you used hba, consult the docs for
your version & OS and check.
Francisco Olarte.
ll the contents of your pg_hba.conf? Note order matters, all
non-comment lines ( or at least the host ones ) need to be checked.
Also, did you signal the postmaster to reread after adding the line?
Francisco Olarte.
it can be
conditionally enabled with a simple set and implemented in very few (
< 20 ) lines of code, ok for me; otherwise I would prefer the reduced
bug surface.
Francisco Olarte.
other.
He probably wants to treat it as non-deferrable just during the
upsert. I do not know if he has thought this opens a can of worms (
like, the constraint may be already broken due to previous DML ).
Francisco Olarte.
not know if
pg inserts several items at a time in bulk loading, but I doubt it.
Normally every btree indexing library has some optimization for these
cases, as they are common, just like every real sort routine has some
optimization for presorted input.
Francisco Olarte.
On Tue, Aug 23, 2016 at 4:28 PM, Rob Sargent wrote:
> On 08/23/2016 07:44 AM, Francisco Olarte wrote:
>> On Tue, Aug 23, 2016 at 2:26 PM, pinker wrote:
>>> I am just surprised by the order of magnitude in the difference though. 2
>>> and 27 minutes that's the
f skipping large
chunks knowing where the info is can save you a lot of work and mails.
AAMOF, it's one of the main reasons I've been using postgres all
these years.
Francisco Olarte.
A big aborted bulk load may just fit the case, as
it may put a lot of tuples at new pages at the end and be executed in
a low-load period where the lock is easier to acquire.
Francisco Olarte.
not current with the current postgres
details, but it does not surprise me they have big optimizations for
this, especially when index ordered insertion is quite common in
things like bulk loads or timestamped log lines.
Francisco Olarte.
second one
> finish off after 13 rows fetched and returns the full 10 rows.
Good. The only problem is you are not guaranteed a result, like in the
contrived example I gave, but if it is what you want this is a way to
go.
Francisco Olarte.
ry using *10, *100, *1k of the real limit until you have
enough results if you want to time-limit your queries.
Francisco Olarte.
But anyway, to compare two things like that, as the original poster
was doing, I normally prefer to test just one thing at a time; that's
why I would normally try to do it by writing a sorted file, shuffling
it with sort -R, and copying it, server side if possible, to eliminate
so both
Francis
every row older than that from staging to the
partition with whatever period is best). Staging partition is normally
small and cached and can be processed quite fast ( with 200k/day an
hourly movement will leave staging with less than about 10k rows if
distribution is somewhat uniform ).
Francisco Ol
* from table where common_condition and filter_condition order
by xx limit N
becomes
with base as (select * from table where common_condition order by xx
limit base_fetches)
select * from base where filter_condition order by xx limit N;
In the example the common_condition is nonexistent, put it as tr
/64. I think there are some pseudo-random number generators which
can be made to work with any range, but do not recall which ones right
now.
Francisco Olarte.
just write 10M integers to
a disk file, then shuffle it and compare COPY FROM times from both ) (
unless you know of an easy way to generate a random permutation on the
fly without using a lot of memory, I do not ).
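One way to produce the shuffled file server-side (a sketch; the path is made up, and writing to a server file needs the appropriate privileges):

```sql
COPY (SELECT g FROM generate_series(1, 10000000) g ORDER BY random())
TO '/tmp/shuffled.txt';  -- the ORDER BY random() sort does the shuffling
```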
Francisco Olarte.
he faster way to extract the relevant data ( the rest of my query,
after the first with, is just moving data around for pretty-printing
( or pretty-selecting ) ).
Francisco Olarte.
re not all
vendids are present. If you prefer null you can use it, IIRC max
ignores them.
Francisco Olarte.
and be sure to scroll down to "SQL Interpolation" after the built in
variables list and read that. I've used it several times, just
remember it's a macro processor and it's done by psql, not by the
server.
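A small example of the interpolation (the variable name is made up):

```sql
\set tbl my_table
SELECT count(*) FROM :"tbl";   -- :"var" interpolates as a quoted identifier
SELECT :'tbl' AS table_name;   -- :'var' interpolates as a string literal
```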
Francisco Olarte.
lems ( although the disk CRC should catch
all odd number of bit errors , but with VMs in the mix who knows where
the messages could end up ).
Francisco Olarte.