Hi,

Queries like these:

-- bitarray is a column of type bit(64000000)
SELECT substring(bitarray from (32 * (n - 1) + 1) for 32)
FROM array_test_bit
JOIN generate_series(1, 10000) n ON true;

-- bytearr is a column of type bytea
SELECT substring(bytearr from (8 * (n - 1) + 1) for 8)
FROM array_test_bytea
JOIN generate_series(1, 10000) n ON true;
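
(For reproduction, the tables look roughly like this; the real contents don't
matter for the timings, so a single all-zeros row per table is enough to run
the queries:

CREATE TABLE array_test_bit   (bitarray bit(64000000));
CREATE TABLE array_test_bytea (bytearr  bytea);

-- A bit(1) cast to bit(64000000) is zero-padded on the right, and
-- decode(repeat(...)) builds an 8,000,000-byte all-zero bytea.
INSERT INTO array_test_bit   VALUES (B'0'::bit(64000000));
INSERT INTO array_test_bytea VALUES (decode(repeat('00', 8000000), 'hex'));
)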

...are really slow. Each takes over a minute, and a Postgres backend process
uses 100% of one CPU core while the query runs. The equivalent query in SQL
Server 2014 (using varbinary(max) columns) runs fast - about 20 seconds for
4 million rows. Are bit/byte arrays just inherently slow in Postgres, or is
substring the wrong function to use on them?

The context is that I want to efficiently store many integers. The obvious
answer is integer[], but most of my integers can fit into less than 32
bits, so I'd like to see if I can pack them more efficiently.
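
For illustration, this is the kind of packing I have in mind (made-up values,
3 bytes per element): keep the low 3 bytes of each int4 in one bytea, then
read element n back out with get_byte:

-- Pack: int4send is big-endian, so bytes 2..4 are the low 24 bits.
SELECT string_agg(substring(int4send(v) from 2 for 3), ''::bytea ORDER BY ord) AS packed
FROM unnest(ARRAY[1, 70000, 16777215]) WITH ORDINALITY AS t(v, ord);
-- => \x000001011170ffffff

-- Unpack element n (1-based), reassembling the three bytes big-endian
-- with get_byte (which takes 0-based offsets).
SELECT n,
       (get_byte(b, 3 * (n - 1))     << 16)
     | (get_byte(b, 3 * (n - 1) + 1) <<  8)
     |  get_byte(b, 3 * (n - 1) + 2) AS unpacked
FROM (VALUES ('\x000001011170ffffff'::bytea)) AS p(b),
     generate_series(1, 3) AS n;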

Regards,

Evgeny Morozov
