I don't really understand what you mean by 'performance'. To me it is not
surprising that incrementing (I know it is not just incrementing) a
64-bit integer is faster than generating 128 bits containing a good amount
of random data, even if the timings you saw seem too slow. So in my opinion
you need to measure separately:
1) generating the data (which, for UUIDs, might happen on the client and
not in the database at all)
2) inserting the data
3) selecting the data

in both sequential and parallel scenarios (see the sketch below).
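
A minimal psql sketch of that separation (assuming PostgreSQL 13+, where
gen_random_uuid() is built in; the names bench_seq, t_uuid and t_seq are
made up for illustration):

\timing on
CREATE SEQUENCE bench_seq;
CREATE TABLE t_uuid (id uuid   PRIMARY KEY);
CREATE TABLE t_seq  (id bigint PRIMARY KEY);

-- 1) generation only, no table involved
SELECT count(gen_random_uuid())    FROM generate_series(1, 100000);
SELECT count(nextval('bench_seq')) FROM generate_series(1, 100000);

-- 2) insertion only
INSERT INTO t_uuid SELECT gen_random_uuid()    FROM generate_series(1, 100000);
INSERT INTO t_seq  SELECT nextval('bench_seq') FROM generate_series(1, 100000);

-- 3) selection only (point lookups against the populated tables)
SELECT * FROM t_seq  WHERE id = (SELECT min(id) FROM t_seq);
SELECT * FROM t_uuid WHERE id = (SELECT min(id) FROM t_uuid);

For the parallel case you would drive the same statements from several
concurrent sessions (e.g. with pgbench) instead of a single psql session.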

On Mon, 6 Feb 2023 at 19:32, veem v <veema0...@gmail.com> wrote:

> So it may be that the machine on which the code is getting executed behind
> the scenes on the site "https://dbfiddle.uk/" plays a key role in the
> speed; however, the comparative performance of UUID vs sequence should stay
> the same.
> So I think, after this test, we can safely conclude that if we compare the
> performance of UUIDs (both version 4 and version 7) vs a sequence, the
> UUID performs a lot worse. So unless there exists some strong
> reason/justification for UUIDs, we should use a sequence by default.
> Correct me if I'm wrong. I also understand that cases like multi-master
> replication/sharding etc. may be a factor, but other than that I can't
> think of any scenario where UUIDs would be needed.
>
>
>
> On Fri, 3 Feb 2023 at 23:07, Dominique Devienne <ddevie...@gmail.com>
> wrote:
>
>> On Fri, Feb 3, 2023 at 5:48 PM veem v <veema0...@gmail.com> wrote:
>>
>>> Actually, I did the testing by connecting to "https://dbfiddle.uk/",
>>> Postgres version 15.
>>>
>>> PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0
>>> 20210514 (Red Hat 8.5.0-10), 64-bit
>>>
>>> Am I doing it wrong, please confirm?
>>>
>>>
>> No clue. But even SQLite can generate 1M uuids (16-byte random blobs are
>> equivalent) in 1/3 of the time, so 30x faster.
>> So your timings of generating 100K uuids and counting them seem way too
>> slow to me. --DD
>>
>> sqlite> select count(randomblob(16)) from generate_series(1,1000*1000);
>> QUERY PLAN
>> `--SCAN generate_series VIRTUAL TABLE INDEX 3:
>> addr  opcode         p1    p2    p3    p4             p5  comment
>> ----  -------------  ----  ----  ----  -------------  --  -------------
>> 0     Init           0     15    0                    0   Start at 15
>> 1     Null           0     1     1                    0   r[1..1]=NULL
>> 2     VOpen          0     0     0     vtab:274D3E0   0
>> 3     Integer        1     4     0                    0   r[4]=1
>> 4     Multiply       6     6     5                    0   r[5]=r[6]*r[6]
>> 5     Integer        3     2     0                    0   r[2]=3
>> 6     Integer        2     3     0                    0   r[3]=2
>> 7     VFilter        0     11    2                    0   iplan=r[2] zplan=''
>> 8       Function       1     8     7     randomblob(1)  0   r[7]=func(r[8])
>> 9       AggStep        0     7     1     count(1)       1   accum=r[1] step(r[7])
>> 10    VNext          0     8     0                    0
>> 11    AggFinal       1     1     0     count(1)       0   accum=r[1] N=1
>> 12    Copy           1     9     0                    0   r[9]=r[1]
>> 13    ResultRow      9     1     0                    0   output=r[9]
>> 14    Halt           0     0     0                    0
>> 15    Transaction    0     0     1     0              1   usesStmtJournal=0
>> 16    Integer        1000  6     0                    0   r[6]=1000
>> 17    Integer        16    8     0                    0   r[8]=16
>> 18    Goto           0     1     0                    0
>> ┌───────────────────────┐
>> │ count(randomblob(16)) │
>> ├───────────────────────┤
>> │ 1000000               │
>> └───────────────────────┘
>> Run Time: real 0.278 user 0.250000 sys 0.000000
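>>
>> For comparison, a sketch of the equivalent generation-only test in
>> PostgreSQL (assuming version 13+, where gen_random_uuid() is built in):
>>
>> select count(gen_random_uuid()) from generate_series(1, 1000*1000);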
>>
>>
>>> On Fri, 3 Feb 2023 at 21:28, Dominique Devienne <ddevie...@gmail.com>
>>> wrote:
>>>
>>>> Something's off regarding Guid generation IMHO...
>>>> You generate 100K Guids in ~1s, while we generate (in C++, a Windows
>>>> Release build, using Boost) 16M of them in +/- the same time:
>>>>
>>>
