Hi. We've recently upgraded from libpq 15.2 to 16.1.
We custom-build PostgreSQL using the instructions and GCC 9.1 (from RH7's
dts9).
We used the same process for building 15.2 and 16.1.
But somehow psql crashes on any backslash command, while 15.2 works fine.
I've included the small backtrace below.
On Wed, Dec 20, 2023 at 1:39 AM Dominique Devienne wrote:
> Program received signal SIGSEGV, Segmentation fault.
> 0x004232b8 in slash_yylex ()
I think this might have to do with flex changing. Does it help if you
"make maintainer-clean"?
Thomas Munro writes:
> On Wed, Dec 20, 2023 at 1:39 AM Dominique Devienne
> wrote:
>> Program received signal SIGSEGV, Segmentation fault.
>> 0x004232b8 in slash_yylex ()
> I think this might have to do with flex changing. Does it help if you
> "make maintainer-clean"?
If that doesn't ...
On Tue, Dec 19, 2023 at 2:02 PM Thomas Munro wrote:
> On Wed, Dec 20, 2023 at 1:39 AM Dominique Devienne
> wrote:
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x004232b8 in slash_yylex ()
>
> I think this might have to do with flex changing. Does it help if you
> "make m
On Wed, Dec 20, 2023 at 4:41 AM Dominique Devienne wrote:
> On Tue, Dec 19, 2023 at 2:02 PM Thomas Munro wrote:
>> On Wed, Dec 20, 2023 at 1:39 AM Dominique Devienne
>> wrote:
>> > Program received signal SIGSEGV, Segmentation fault.
>> > 0x004232b8 in slash_yylex ()
>>
>> I think this might have to do with flex changing. Does it help if you
>> "make maintainer-clean"?
Thank you for the confirmation.
So at first, we need to populate the base tables with the necessary data
(say 100 million rows) with the required skewness, using random functions to
generate the variation in the values of different data types. Then in the case
of the row-by-row write/read test, we can traverse ...
On 12/19/23 12:14, veem v wrote:
Thank you for the confirmation.
So at first, we need to populate the base tables with the necessary
data (say 100 million rows) with the required skewness, using random
functions to generate the variation in the values of different data
types. Then in the case of the row-by-row write/read test, we can traverse ...
On 2023-12-20 00:44:48 +0530, veem v wrote:
> So at first, we need to populate the base tables with the necessary data (say
> 100 million rows) with the required skewness, using random functions to generate the
> variation in the values of different data types. Then in the case of the row-by-row
> write/read test, we can traverse ...
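A minimal sketch of the population step being described, assuming a hypothetical
table test_base(id bigint, category int, amount numeric, created_at timestamptz)
and psycopg2 on the client side; the power(random(), 3) expression is just one
illustrative way to bias the generated values toward a skewed distribution:

import psycopg2

# Hypothetical DSN and table name; adjust to the real test setup.
conn = psycopg2.connect("dbname=testdb")
conn.autocommit = True

populate_sql = """
INSERT INTO test_base (id, category, amount, created_at)
SELECT g,
       floor(power(random(), 3) * 100)::int,        -- skewed: most rows land in low categories
       round((random() * 10000)::numeric, 2),       -- uniform numeric values
       now() - (random() * interval '365 days')     -- timestamps spread over a year
FROM generate_series(1, 100000000) AS g
"""

with conn.cursor() as cur:
    cur.execute(populate_sql)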
Thank you.
Yes, actually we are trying to compare and see what maximum TPS we are able
to reach with both of these row-by-row and batch read/write tests. And then
afterwards, this figure may be compared with other databases etc. with
similar setups.
So I wanted to understand from the experts here ...
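A rough illustration of the two measurements being compared, under the same
hypothetical test_base table and psycopg2 assumptions as the sketch above;
execute_batch is one common client-side batching approach, and the TPS figure
is simply rows divided by elapsed time:

import time
import psycopg2
from psycopg2.extras import execute_batch

# Hypothetical DSN and table; see the population sketch above for test_base.
conn = psycopg2.connect("dbname=testdb")
insert_sql = "INSERT INTO test_base (id, category, amount) VALUES (%s, %s, %s)"
rows = [(i, i % 100, i / 7.0) for i in range(100_000)]  # small sample, not the full 100M

def timed(label, fn):
    with conn.cursor() as cur:
        cur.execute("TRUNCATE test_base")   # start each run from an empty table
    conn.commit()
    start = time.monotonic()
    fn()
    elapsed = time.monotonic() - start
    print(f"{label}: {len(rows) / elapsed:.0f} rows/s")

def row_by_row():
    with conn.cursor() as cur:
        for r in rows:
            cur.execute(insert_sql, r)
            conn.commit()                   # one transaction per row

def batched():
    with conn.cursor() as cur:
        execute_batch(cur, insert_sql, rows, page_size=10_000)
    conn.commit()                           # one transaction for the whole batch

timed("row-by-row", row_by_row)
timed("batched", batched)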
As Rob mentioned, the syntax you posted is not correct. You need to process
or read a certain batch of rows, like 1000 or 10k etc., not all 100M in one
shot.
But again, your use case seems a common one, considering you want to compare
the read and write performance on multiple databases with similar tables ...
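And for the read side of the batching described above, one common pattern
(again assuming psycopg2 and the hypothetical test_base table) is a
server-side cursor, so the client handles roughly 10k rows per fetch rather
than all 100M at once:

import psycopg2

conn = psycopg2.connect("dbname=testdb")       # hypothetical DSN

# A named (server-side) cursor streams the result set instead of
# pulling all 100M rows into client memory in one go.
with conn.cursor(name="batch_reader") as cur:
    cur.execute("SELECT id, category, amount FROM test_base")
    while True:
        batch = cur.fetchmany(10_000)          # ~10k rows per fetch
        if not batch:
            break
        # ... per-batch processing / timing goes here ...
conn.commit()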
Thank you.
It would really be helpful if such test scripts or similar setups are
already available. Can someone please guide me to some docs, blogs, or
sample scripts on the same.
On Wed, 20 Dec 2023, 10:34 am Lok P wrote:
> As Rob mentioned, the syntax you posted is not correct. You need to process
> or read a certain batch of rows, like 1000 or 10k etc. ...
Hello Stephen,
Just an update on this. After we deployed it on our PROD system, the
results were far better than in testing.
The time taken is around 4-5 hours only, and that has been the case for the
last 3 months or so.
full backup: 20231209-150002F
timestamp start/stop: 2023-12-09 15:00:02+09 / ...