Get info about the index

2025-07-26 Thread Igor Korot
Hi, ALL,
Is there a better place to get the info about the index other than
https://www.postgresql.org/docs/current/view-pg-indexes.html

That view exposes the full-blown CREATE INDEX command as text, and I will
need to parse it to get the info I need.

Thank you.




Re: Get info about the index

2025-07-26 Thread Adrian Klaver

On 7/26/25 08:00, Igor Korot wrote:

> Hi, ALL,
> Is there a better place to get the info about the index other than
> https://www.postgresql.org/docs/current/view-pg-indexes.html


What information do you need?



> That view exposes the full-blown CREATE INDEX command as text, and I will
> need to parse it to get the info I need.
>
> Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Get info about the index

2025-07-26 Thread Robert Sjöblom
On Sat, 26 Jul 2025, 17:00 Igor Korot,  wrote:

> Hi, ALL,
> Is there a better place to get the info about the index other than
> https://www.postgresql.org/docs/current/view-pg-indexes.html
>
> That view exposes the full-blown CREATE INDEX command as text, and I will
> need to parse it to get the info I need.
>
> Thank you.
>

What kind of info do you need?



Re: Get info about the index

2025-07-26 Thread Igor Korot
Adrian,

On Sat, Jul 26, 2025 at 11:08 AM Adrian Klaver
 wrote:
>
> On 7/26/25 08:00, Igor Korot wrote:
> > Hi, ALL,
> > Is there a better place to get the info about the index other than
> > https://www.postgresql.org/docs/current/view-pg-indexes.html
>
> What information do you need?

This is the command for creating the index:
https://www.postgresql.org/docs/current/sql-createindex.html.

I want to know all individual things:
- whether the index is unique.
- what type of index it is
- whether the field is ASC or DESC.
- all other individual params

I don't want to parse the command since I'm sure I can query the DB
to get the info.

Thank you.

>
> >
> > That view exposes the full-blown CREATE INDEX command as text, and I will
> > need to parse it to get the info I need.
> >
> > Thank you.
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: Possible causes of high_replay lag, given replication settings?

2025-07-26 Thread Jon Zeppieri
On Fri, Jul 25, 2025 at 7:13 PM Greg Sabino Mullane  wrote:
>
> On Fri, Jul 25, 2025 at 9:57 AM Jon Zeppieri  wrote:
>>
>> Thanks for the response, Nick. I'm curious why the situation you describe
>> wouldn't also lead to write_lag and flush_lag being high. If the problem is
>> simply keeping up with the primary, wouldn't you expect all three lag times
>> to be elevated?
>
>
> No - write and flush are pretty quick and simple; it's just putting the WAL 
> onto the local disk. Replay involves a lot more work, as we have to parse the 
> WAL and apply the changes, which means doing a lot of I/O across many files. 
> Still, *hours* to me indicates more than just a lot of extra traffic. Check 
> that recovery_min_apply_delay is still 0, then log onto the replica and see 
> what's going on with regard to open transactions and locks.

Thanks Greg. `recovery_min_apply_delay` is 0, just checked. Also, I
didn't mention in my initial post that the cause of the delay seemed
to be long-running queries on the replica rather than on the primary.
It's possible, of course, that I'm wrong, but I was able to get the
replica moving again when I killed off old queries on the replica. If
those were the problem, though, then I don't understand why
max_standby_streaming_delay didn't prevent that situation.
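
For what it's worth, standby-side checks along these lines (an illustrative
sketch, not a diagnosis) would surface both the old transactions and any
recovery conflicts:

-- On the replica: long-running transactions that can hold back replay
SELECT pid, state, now() - xact_start AS xact_age, left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC;

-- Queries cancelled because of recovery conflicts (locks, snapshots, ...)
SELECT * FROM pg_stat_database_conflicts
WHERE datname = current_database();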

- Jon




Re: Get info about the index

2025-07-26 Thread David G. Johnston
On Sat, Jul 26, 2025, 08:00 Igor Korot  wrote:

> Hi, ALL,
> Is there a better place to get the info about the index other than
> https://www.postgresql.org/docs/current/view-pg-indexes.html
>
> That view exposes the full-blown CREATE INDEX command as text, and I will
> need to parse it to get the info I need.
>

Follow the foreign keys.

All useful info does get parsed out into columns somewhere.  The system
doesn't parse a CREATE INDEX command every time it needs to inspect the
properties of the index.
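
For instance, a sketch along these lines (untested; 'my_index' stands in for a
real index name) follows those keys from pg_index into pg_class and pg_am:

-- pg_index.indexrelid -> pg_class.oid, and pg_class.relam -> pg_am.oid
SELECT c.relname                     AS index_name,
       am.amname                     AS index_type,   -- btree, hash, gin, ...
       i.indisunique                 AS is_unique,
       pg_get_indexdef(i.indexrelid) AS definition    -- reconstructed CREATE INDEX
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_am am   ON am.oid = c.relam
WHERE i.indexrelid = 'my_index'::regclass;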

David J.


Re: Get info about the index

2025-07-26 Thread Adrian Klaver

On 7/26/25 08:00, Igor Korot wrote:

> Hi, ALL,
> Is there a better place to get the info about the index other than
> https://www.postgresql.org/docs/current/view-pg-indexes.html
>
> That view exposes the full-blown CREATE INDEX command as text, and I will
> need to parse it to get the info I need.


FYI,

pg_class:

https://www.postgresql.org/docs/current/catalog-pg-class.html

and pg_attribute:

https://www.postgresql.org/docs/current/catalog-pg-attribute.html

also have index information.



> Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Get info about the index

2025-07-26 Thread Tom Lane
Adrian Klaver  writes:
> On 7/26/25 08:00, Igor Korot wrote:
>> Is there a better place to get the info about the index other than
>> https://www.postgresql.org/docs/current/view-pg-indexes.html

> pg_class:
> https://www.postgresql.org/docs/current/catalog-pg-class.html
> and pg_attribute:
> https://www.postgresql.org/docs/current/catalog-pg-attribute.html
> also have index information.

pg_index is the real source of truth here.

regards, tom lane




Re: PostgreSQL on S3-backed Block Storage with Near-Local Performance

2025-07-26 Thread Pierre Barre
Also, Neon [0] and Aurora [1] pricing is so high that it seems to make most 
use-cases impractical (well, if you want a managed offering...). Neon's top 
public tier doesn't even match what a single modern dedicated server (or 
virtual machine) can provide. I would have thought decoupling compute and 
storage would make the offerings cheaper, if anything.

Taking my own Merklemap [2] use-case, where I run a 30TB database, and applying 
Neon pricing (and I don't doubt that the non-public pricing would be even more 
expensive than that):

Storage Scaling:

- Business plan: 500 GB -> $700
- You need: 30,000 GB (30 TB)
- Scaling factor: 60x
- Linear estimate: $700 × 60 = $42,000/month
- Total 12 months cost: $504,000

Aurora calculation [3]:

- Instance type: db.r5.24xlarge
- Monthly cost: $21,887.28
- Total 12 months cost: $262,647.36

Now, calculating the same 30TB with the same instance type and S3 storage [4]:

- Instance Type: r5.24xlarge
- Monthly cost: $5,555.04
- Total 12 months cost: $66,660.48

But more interestingly, you don't need to use AWS at all anymore, because you 
can just move your setup anywhere at this point, as you get a similar level of 
reliability - and simplicity - but with very cheap services.

Hetzner ccx63 + Cloudflare R2:

- Hetzner ccx63: €287.99/month ≈ $338/month
- R2 storage (30TB): 30,000 GB × $0.015 = $450/month
- R2 operations: would need to be measured to price properly, but will 
probably be negligible.
- Total monthly: ~$760
- Total 12 months cost: $9,120

Best,
Pierre

[0] https://neon.com/pricing
[1] https://aws.amazon.com/rds/aurora/pricing/
[2] https://www.merklemap.com/
[3] https://calculator.aws/#/estimate?id=3f0ce6a91eed9a666d54bb8852ea00b042c3cd6e
[4] https://calculator.aws/#/estimate?id=1a77d8da3489bafc8681c6fd738a3186fb749ea3

On Sat, Jul 26, 2025, at 09:51, Pierre Barre wrote:
> Ah, by "shared storage" I mean that each node can acquire exclusivity, not 
> that they can both R/W to it at the same time.
> 
> > Some pretty well-known cases of storage / compute separation (Aurora, Neon) 
> > also share the storage between instances,
> 
> That model is cool, but I think it's more of a solution for outliers as I was 
> suggesting, not something that most would or should want.
> 
> Best,
> Pierre
> 
> On Sat, Jul 26, 2025, at 09:42, Vladimir Churyukin wrote:
>> Sorry, I was referring to this:
>> 
>> >  But when PostgreSQL instances share storage rather than replicate:
>> > - Consistency seems maintained (same data)
>> > - Availability seems maintained (client can always promote an accessible 
>> > node)
>> > - Partitions between PostgreSQL nodes don't prevent the system from 
>> > functioning
>> 
>> Some pretty well-known cases of storage / compute separation (Aurora, Neon) 
>> also share the storage between instances,
>> that's why I'm a bit confused by your reply. I thought you were thinking about 
>> this approach too, that's why I mentioned what kind of challenges one may 
>> have on that path.
>> 
>> 
>> On Sat, Jul 26, 2025 at 12:36 AM Pierre Barre  wrote:
>>> What you describe doesn’t look like something very useful for the vast 
>>> majority of projects that need a database. Why would you even want that if 
>>> you can avoid it? 
>>> 
>>> If your “single node” can handle tens / hundreds of thousands of requests per 
>>> second, and still has very durable and highly available storage, as well as 
>>> fast recovery mechanisms, what’s the point?
>>> 
>>> I am not trying to cater to extreme outliers that may want something very weird 
>>> like this; those just aren't the use-cases I want to address, because I believe 
>>> they are few and far between.
>>> 
>>> Best,
>>> Pierre 
>>> 
>>> On Sat, Jul 26, 2025, at 08:57, Vladimir Churyukin wrote:
 Shared storage would require a lot of extra work. That's essentially 
 what AWS Aurora does.
 You will have to have functionality to sync in-memory states between 
 nodes, because all the instances will have cached data that can easily 
 become stale on any write operation.
 That alone is not that simple. You will have to modify some locking logic, 
 and most likely make a lot of other changes in a lot of places; Postgres simply 
 was not built with the assumption that the storage can be shared.
 
 -Vladimir
 
 On Fri, Jul 18, 2025 at 5:31 AM Pierre Barre  wrote:
> Now, I'm trying to understand how CAP theorem applies here. Traditional 
> PostgreSQL replication has clear CAP trade-offs - you choose between 
> consistency and availability during partitions.
> 
> But when PostgreSQL instances share storage rather than replicate:
> - Consistency seems maintained (same data)
> - Availability seems maintained (client can always promote an accessible 
> node)
> - Partitions between PostgreSQL nodes don't prevent the system from 
> functioning
> 
> It seems that CAP assumes specific implementation details (like nodes 
> maintaining independent state) without explicitly stating them.

Re: PostgreSQL on S3-backed Block Storage with Near-Local Performance

2025-07-26 Thread Pierre Barre
Ah, by "shared storage" I mean that each node can acquire exclusivity, not that 
they can both R/W to it at the same time.

> Some pretty well-known cases of storage / compute separation (Aurora, Neon) 
> also share the storage between instances,

That model is cool, but I think it's more of a solution for outliers as I was 
suggesting, not something that most would or should want.

Best,
Pierre

On Sat, Jul 26, 2025, at 09:42, Vladimir Churyukin wrote:
> Sorry, I was referring to this:
> 
> >  But when PostgreSQL instances share storage rather than replicate:
> > - Consistency seems maintained (same data)
> > - Availability seems maintained (client can always promote an accessible 
> > node)
> > - Partitions between PostgreSQL nodes don't prevent the system from 
> > functioning
> 
> Some pretty well-known cases of storage / compute separation (Aurora, Neon) 
> also share the storage between instances,
> that's why I'm a bit confused by your reply. I thought you were thinking about 
> this approach too, that's why I mentioned what kind of challenges one may 
> have on that path.
> 
> 
> On Sat, Jul 26, 2025 at 12:36 AM Pierre Barre  wrote:
>> What you describe doesn’t look like something very useful for the vast 
>> majority of projects that need a database. Why would you even want that if 
>> you can avoid it? 
>> 
>> If your “single node” can handle tens / hundreds of thousands of requests per 
>> second, and still has very durable and highly available storage, as well as 
>> fast recovery mechanisms, what’s the point?
>> 
>> I am not trying to cater to extreme outliers that may want something very weird 
>> like this; those just aren't the use-cases I want to address, because I believe 
>> they are few and far between.
>> 
>> Best,
>> Pierre 
>> 
>> On Sat, Jul 26, 2025, at 08:57, Vladimir Churyukin wrote:
>>> Shared storage would require a lot of extra work. That's essentially what 
>>> AWS Aurora does.
>>> You will have to have functionality to sync in-memory states between nodes, 
>>> because all the instances will have cached data that can easily become 
>>> stale on any write operation.
>>> That alone is not that simple. You will have to modify some locking logic, 
>>> and most likely make a lot of other changes in a lot of places; Postgres simply 
>>> was not built with the assumption that the storage can be shared.
>>> 
>>> -Vladimir
>>> 
>>> On Fri, Jul 18, 2025 at 5:31 AM Pierre Barre  wrote:
 Now, I'm trying to understand how CAP theorem applies here. Traditional 
 PostgreSQL replication has clear CAP trade-offs - you choose between 
 consistency and availability during partitions.
 
 But when PostgreSQL instances share storage rather than replicate:
 - Consistency seems maintained (same data)
 - Availability seems maintained (client can always promote an accessible 
 node)
 - Partitions between PostgreSQL nodes don't prevent the system from 
 functioning
 
 It seems that CAP assumes specific implementation details (like nodes 
 maintaining independent state) without explicitly stating them.
 
 How should we think about CAP theorem when distributed nodes share storage 
 rather than coordinate state? Are the trade-offs simply moved to a 
 different layer, or does shared storage fundamentally change the analysis?
 
 Client with awareness of both PostgreSQL nodes
 |   |
 ↓ (partition here)  ↓
 PostgreSQL Primary  PostgreSQL Standby
 |   |
 └───┬───┘
 ↓
  Shared ZFS Pool
 |
  6 Global ZeroFS instances
 
 Best,
 Pierre
 
 On Fri, Jul 18, 2025, at 12:57, Pierre Barre wrote:
 > Hi Seref,
 >
 > For the benchmarks, I used Hetzner's cloud service with the following 
 > setup:
 >
 > - A Hetzner s3 bucket in the FSN1 region
 > - A virtual machine of type ccx63 48 vCPU 192 GB memory
 > - 3 ZeroFS nbd devices (same s3 bucket)
 > - A ZFS striped pool with the 3 devices
 > - 200GB zfs L2ARC
 > - Postgres configured accordingly memory-wise as well as with 
 > synchronous_commit = off, wal_init_zero = off and wal_recycle = off.
 >
 > Best,
 > Pierre
 >
 > On Fri, Jul 18, 2025, at 12:42, Seref Arikan wrote:
 >> Sorry, this was meant to go to the whole group:
 >>
 >> Very interesting!. Great work. Can you clarify how exactly you're 
 >> running postgres in your tests? A specific AWS service? What's the test 
 >> infrastructure that sits above the file system?
 >>
 >> On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre  wrote:
 >>> Hi everyone,
 >>>
 >>> I wanted to share a project I've been working on that enables 
 >>> PostgreSQL to run on S3 storage while maintaining performance 
 >>> comparable to local NVMe. The approach uses block-level access rather 
 >>> than trying to map filesystem operations to S3 objects.

Re: PostgreSQL on S3-backed Block Storage with Near-Local Performance

2025-07-26 Thread Pierre Barre
What you describe doesn’t look like something very useful for the vast majority 
of projects that need a database. Why would you even want that if you can 
avoid it? 

If your “single node” can handle tens / hundreds of thousands of requests per 
second, and still has very durable and highly available storage, as well as fast 
recovery mechanisms, what’s the point?

I am not trying to cater to extreme outliers that may want something very weird 
like this; those just aren't the use-cases I want to address, because I believe 
they are few and far between.

Best,
Pierre 

On Sat, Jul 26, 2025, at 08:57, Vladimir Churyukin wrote:
> Shared storage would require a lot of extra work. That's essentially what 
> AWS Aurora does.
> You will have to have functionality to sync in-memory states between nodes, 
> because all the instances will have cached data that can easily become stale 
> on any write operation.
> That alone is not that simple. You will have to modify some locking logic, 
> and most likely make a lot of other changes in a lot of places; Postgres simply 
> was not built with the assumption that the storage can be shared.
> 
> -Vladimir
> 
> On Fri, Jul 18, 2025 at 5:31 AM Pierre Barre  wrote:
>> Now, I'm trying to understand how CAP theorem applies here. Traditional 
>> PostgreSQL replication has clear CAP trade-offs - you choose between 
>> consistency and availability during partitions.
>> 
>> But when PostgreSQL instances share storage rather than replicate:
>> - Consistency seems maintained (same data)
>> - Availability seems maintained (client can always promote an accessible 
>> node)
>> - Partitions between PostgreSQL nodes don't prevent the system from 
>> functioning
>> 
>> It seems that CAP assumes specific implementation details (like nodes 
>> maintaining independent state) without explicitly stating them.
>> 
>> How should we think about CAP theorem when distributed nodes share storage 
>> rather than coordinate state? Are the trade-offs simply moved to a different 
>> layer, or does shared storage fundamentally change the analysis?
>> 
>> Client with awareness of both PostgreSQL nodes
>> |   |
>> ↓ (partition here)  ↓
>> PostgreSQL Primary  PostgreSQL Standby
>> |   |
>> └───┬───┘
>> ↓
>>  Shared ZFS Pool
>> |
>>  6 Global ZeroFS instances
>> 
>> Best,
>> Pierre
>> 
>> On Fri, Jul 18, 2025, at 12:57, Pierre Barre wrote:
>> > Hi Seref,
>> > 
>> > For the benchmarks, I used Hetzner's cloud service with the following 
>> > setup:
>> > 
>> > - A Hetzner s3 bucket in the FSN1 region
>> > - A virtual machine of type ccx63 48 vCPU 192 GB memory
>> > - 3 ZeroFS nbd devices (same s3 bucket)
>> > - A ZFS striped pool with the 3 devices
>> > - 200GB zfs L2ARC
>> > - Postgres configured accordingly memory-wise as well as with 
>> > synchronous_commit = off, wal_init_zero = off and wal_recycle = off.
>> > 
>> > Best,
>> > Pierre
>> > 
>> > On Fri, Jul 18, 2025, at 12:42, Seref Arikan wrote:
>> >> Sorry, this was meant to go to the whole group:
>> >> 
>> >> Very interesting!. Great work. Can you clarify how exactly you're running 
>> >> postgres in your tests? A specific AWS service? What's the test 
>> >> infrastructure that sits above the file system?
>> >> 
>> >> On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre  wrote:
>> >>> Hi everyone,
>> >>> 
>> >>> I wanted to share a project I've been working on that enables PostgreSQL 
>> >>> to run on S3 storage while maintaining performance comparable to local 
>> >>> NVMe. The approach uses block-level access rather than trying to map 
>> >>> filesystem operations to S3 objects.
>> >>> 
>> >>> ZeroFS: https://github.com/Barre/ZeroFS
>> >>> 
>> >>> # The Architecture
>> >>> 
>> >>> ZeroFS provides NBD (Network Block Device) servers that expose S3 
>> >>> storage as raw block devices. PostgreSQL runs unmodified on ZFS pools 
>> >>> built on these block devices:
>> >>> 
>> >>> PostgreSQL -> ZFS -> NBD -> ZeroFS -> S3
>> >>> 
>> >>> By providing block-level access and leveraging ZFS's caching 
>> >>> capabilities (L2ARC), we can achieve microsecond latencies despite the 
>> >>> underlying storage being in S3.
>> >>> 
>> >>> ## Performance Results
>> >>> 
>> >>> Here are pgbench results from PostgreSQL running on this setup:
>> >>> 
>> >>> ### Read/Write Workload
>> >>> 
>> >>> ```
>> >>> postgres@ubuntu-16gb-fsn1-1:/root$ pgbench -c 50 -j 15 -t 10 example
>> >>> pgbench (16.9 (Ubuntu 16.9-0ubuntu0.24.04.1))
>> >>> starting vacuum...end.
>> >>> transaction type: <builtin: TPC-B (sort of)>
>> >>> scaling factor: 50
>> >>> query mode: simple
>> >>> number of clients: 50
>> >>> number of threads: 15
>> >>> maximum number of tries: 1
>> >>> number of transactions per client: 10
>> >>> number of transactions actually processed: 500/500
>> >>> number of failed transactions: 0 (0.000%)
>> >>> latency average = 0.943 ms
>> >>> initial c

Re: PostgreSQL on S3-backed Block Storage with Near-Local Performance

2025-07-26 Thread Vladimir Churyukin
Sorry, I was referring to this:

>  But when PostgreSQL instances share storage rather than replicate:
> - Consistency seems maintained (same data)
> - Availability seems maintained (client can always promote an accessible
node)
> - Partitions between PostgreSQL nodes don't prevent the system from
functioning

Some pretty well-known cases of storage / compute separation (Aurora, Neon)
also share the storage between instances,
that's why I'm a bit confused by your reply. I thought you were thinking
about this approach too, that's why I mentioned what kind of challenges one
may have on that path.


On Sat, Jul 26, 2025 at 12:36 AM Pierre Barre  wrote:

> What you describe doesn’t look like something very useful for the vast
> majority of projects that need a database. Why would you even want that if
> you can avoid it?
>
> If your “single node” can handle tens / hundreds of thousands of requests per
> second, and still has very durable and highly available storage, as well as
> fast recovery mechanisms, what’s the point?
>
> I am not trying to cater to extreme outliers that may want something very weird
> like this; those just aren't the use-cases I want to address, because I believe
> they are few and far between.
>
> Best,
> Pierre
>
> On Sat, Jul 26, 2025, at 08:57, Vladimir Churyukin wrote:
>
> Shared storage would require a lot of extra work. That's essentially
> what AWS Aurora does.
> You will have to have functionality to sync in-memory states between
> nodes, because all the instances will have cached data that can easily
> become stale on any write operation.
> That alone is not that simple. You will have to modify some locking logic,
> and most likely make a lot of other changes in a lot of places; Postgres simply
> was not built with the assumption that the storage can be shared.
>
> -Vladimir
>
> On Fri, Jul 18, 2025 at 5:31 AM Pierre Barre  wrote:
>
> Now, I'm trying to understand how CAP theorem applies here. Traditional
> PostgreSQL replication has clear CAP trade-offs - you choose between
> consistency and availability during partitions.
>
> But when PostgreSQL instances share storage rather than replicate:
> - Consistency seems maintained (same data)
> - Availability seems maintained (client can always promote an accessible
> node)
> - Partitions between PostgreSQL nodes don't prevent the system from
> functioning
>
> It seems that CAP assumes specific implementation details (like nodes
> maintaining independent state) without explicitly stating them.
>
> How should we think about CAP theorem when distributed nodes share storage
> rather than coordinate state? Are the trade-offs simply moved to a
> different layer, or does shared storage fundamentally change the analysis?
>
> Client with awareness of both PostgreSQL nodes
> |   |
> ↓ (partition here)  ↓
> PostgreSQL Primary  PostgreSQL Standby
> |   |
> └───┬───┘
> ↓
>  Shared ZFS Pool
> |
>  6 Global ZeroFS instances
>
> Best,
> Pierre
>
> On Fri, Jul 18, 2025, at 12:57, Pierre Barre wrote:
> > Hi Seref,
> >
> > For the benchmarks, I used Hetzner's cloud service with the following
> setup:
> >
> > - A Hetzner s3 bucket in the FSN1 region
> > - A virtual machine of type ccx63 48 vCPU 192 GB memory
> > - 3 ZeroFS nbd devices (same s3 bucket)
> > - A ZFS striped pool with the 3 devices
> > - 200GB zfs L2ARC
> > - Postgres configured accordingly memory-wise as well as with
> synchronous_commit = off, wal_init_zero = off and wal_recycle = off.
> >
> > Best,
> > Pierre
> >
> > On Fri, Jul 18, 2025, at 12:42, Seref Arikan wrote:
> >> Sorry, this was meant to go to the whole group:
> >>
> >> Very interesting!. Great work. Can you clarify how exactly you're
> running postgres in your tests? A specific AWS service? What's the test
> infrastructure that sits above the file system?
> >>
> >> On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre  wrote:
> >>> Hi everyone,
> >>>
> >>> I wanted to share a project I've been working on that enables
> PostgreSQL to run on S3 storage while maintaining performance comparable to
> local NVMe. The approach uses block-level access rather than trying to map
> filesystem operations to S3 objects.
> >>>
> >>> ZeroFS: https://github.com/Barre/ZeroFS
> >>>
> >>> # The Architecture
> >>>
> >>> ZeroFS provides NBD (Network Block Device) servers that expose S3
> storage as raw block devices. PostgreSQL runs unmodified on ZFS pools built
> on these block devices:
> >>>
> >>> PostgreSQL -> ZFS -> NBD -> ZeroFS -> S3
> >>>
> >>> By providing block-level access and leveraging ZFS's caching
> capabilities (L2ARC), we can achieve microsecond latencies despite the
> underlying storage being in S3.
> >>>
> >>> ## Performance Results
> >>>
> >>> Here are pgbench results from PostgreSQL running on this setup:
> >>>
> >>> ### Read/Write Workload
> >>>
> >>> ```
> >>> postgres@ubuntu-16gb-fsn1-1:/r

Re: Get info about the index

2025-07-26 Thread Christophe Pettus



> I want to know all individual things:
> - whether the index is unique.
> - what type of index it is
> - whether the field is ASC or DESC.
> - all other individual params

pg_index is the source for those.  The one exception is the access method for 
the index, which is in pg_class.
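
A minimal sketch of both lookups ('my_index' is a placeholder):

-- Everything except the access method lives in pg_index:
SELECT indisunique, indisprimary, indkey, indoption
FROM pg_index
WHERE indexrelid = 'my_index'::regclass;

-- The access method name comes through pg_class.relam:
SELECT am.amname
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
WHERE c.oid = 'my_index'::regclass;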



Re: Get info about the index

2025-07-26 Thread Laurenz Albe
On Sat, 2025-07-26 at 15:13 -0400, Igor Korot wrote:
> On Sat, Jul 26, 2025, 2:14 PM Christophe Pettus  wrote:
> > > I want to know all individual things:
> > > - whether the index is unique.
> > > - what type of index it is
> > > - whether the field is ASC or DESC.
> > > - all other individual params
> > 
> > pg_index is the source for those.  The one exception is the access method 
> > for the index, which is in pg_class.
> 
> I didn't find the sorting for the field.

That's in pg_index.indoption.

The flags stored there are defined in src/include/catalog/pg_index.h

/*
 * Index AMs that support ordered scans must support these two indoption
 * bits.  Otherwise, the content of the per-column indoption fields is
 * open for future definition.
 */
#define INDOPTION_DESC  0x0001  /* values are in reverse order */
#define INDOPTION_NULLS_FIRST   0x0002  /* NULLs are first instead of last */

So a value of 3 would mean DESC NULLS FIRST.
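
Putting that together, something like this (an untested sketch; 'my_index' is a
placeholder, and expression columns, which have attnum 0, won't match a row in
pg_attribute) decodes those bits per key column:

-- indoption is 0-based and parallel to indkey; WITH ORDINALITY counts from 1
SELECT k.ord                                    AS position,
       a.attname                                AS column_name,
       (i.indoption[(k.ord - 1)::int] & 1) <> 0 AS is_desc,      -- INDOPTION_DESC
       (i.indoption[(k.ord - 1)::int] & 2) <> 0 AS nulls_first   -- INDOPTION_NULLS_FIRST
FROM pg_index i
CROSS JOIN LATERAL unnest(i.indkey) WITH ORDINALITY AS k(attnum, ord)
LEFT JOIN pg_attribute a
       ON a.attrelid = i.indrelid AND a.attnum = k.attnum
WHERE i.indexrelid = 'my_index'::regclass;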

Yours,
Laurenz Albe




Re: Get info about the index

2025-07-26 Thread Igor Korot
Hi, Christophe,



On Sat, Jul 26, 2025, 2:14 PM Christophe Pettus  wrote:

>
>
> > I want to know all individual things:
> > - whether the index is unique.
> > - what type of index it is
> > - whether the field is ASC or DESC.
> > - all other individual params
>
> pg_index is the source for those.  The one exception is the access method
> for the index, which is in pg_class.


I didn't find the sorting for the field.

Can you help?

Thank you.


Re: Get info about the index

2025-07-26 Thread Jon Zeppieri
On Sat, Jul 26, 2025 at 3:13 PM Igor Korot  wrote:
>
> I didn't find the sorting for the field.
>
> Can you help?

The pg_index_column_has_property() function can provide this information. E.g.,

select pg_index_column_has_property('my_index'::regclass, 2, 'desc');
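
The same function accepts other property names ('asc', 'nulls_first',
'nulls_last', 'orderable', ...), so one query can lay out the whole matrix.
A sketch, assuming a two-column index named 'my_index':

select col, prop,
       pg_index_column_has_property('my_index'::regclass, col, prop) as has
from generate_series(1, 2) as col,   -- assumes two key columns
     unnest(array['asc', 'desc', 'nulls_first', 'nulls_last']) as prop;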