Re: Why could different data in a table be processed with different performance?

2018-09-28 Thread Vladimir Ryabtsev
> That means, if your block size was bigger, then you would have bigger space
> allocated for one single record.

But if I INSERT a second, third ... hundredth record into the table, the size remains 8K. So my point is that if one decides to increase the block size, the increase in storage space is not so significant.
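For illustration, a quick check with pg_relation_size (the table and data here are made up for the example):

  CREATE TABLE t (id int, payload text);
  INSERT INTO t VALUES (1, 'first');
  SELECT pg_relation_size('t');  -- 8192: one 8K block for a single small row
  INSERT INTO t SELECT g, 'row ' || g FROM generate_series(2, 100) g;
  SELECT pg_relation_size('t');  -- still 8192 while all rows fit in one block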

Re: Why could different data in a table be processed with different performance?

2018-09-28 Thread Fabio Pardi
On 28/09/18 11:56, Vladimir Ryabtsev wrote:
>> It could affect space storage, for the smaller blocks.
> But to what extent? As I understand it, it is not about "alignment" of rows
> to the block size? Is it only a low-level I/O thing with the datafiles?

Maybe 'for the smaller blocks' was not
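A minimal way to observe the row-to-block mapping directly (the table name t is illustrative):

  SHOW block_size;                -- 8192 by default
  SELECT ctid, * FROM t LIMIT 5;  -- ctid = (block number, offset within that block)

Rows never span block boundaries (oversized values are moved out to TOAST instead), which is why a single block is the minimum allocation for a table with any rows at all.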

Re: To keep indexes in memory, is large enough effective_cache_size enough?

2018-09-28 Thread David Rowley
On 28 September 2018 at 16:45, Sam R. wrote:
> That was what I was suspecting a little. Double buffering may not matter in
> our case, because the whole server is meant for PostgreSQL only.
>
> In our case, we can e.g. reserve almost "all memory" for PostgreSQL (shared
> buffers etc.).
>
> Please
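For a dedicated host, the relevant settings would look something like this (the values are assumptions for illustration, not recommendations from this thread):

  ALTER SYSTEM SET shared_buffers = '16GB';        -- takes effect only after a restart
  ALTER SYSTEM SET effective_cache_size = '48GB';  -- planner estimate only; allocates nothing
  SELECT pg_reload_conf();                         -- applies effective_cache_size

Note that effective_cache_size does not reserve or pin any memory; it only tells the planner how much OS cache it may assume exists, so by itself it cannot keep indexes in memory.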

Re: Why could different data in a table be processed with different performance?

2018-09-28 Thread Vladimir Ryabtsev
> You will have fewer slots in the cache, but the total available cache will
> indeed be unchanged (half the blocks of double the size).

But we have many other tables, and queries against them may suffer from the smaller number of blocks in the buffer cache.

> To change the block size is a painful thing, because
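To see which relations are actually competing for those buffer-cache slots, the pg_buffercache contrib extension can be queried like this (a standard inspection query, not something from this thread):

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
  WHERE b.reldatabase = (SELECT oid FROM pg_database
                         WHERE datname = current_database())
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;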

Re: Why could different data in a table be processed with different performance?

2018-09-28 Thread Vladimir Ryabtsev
> Does your LVM have readahead ramped up? Try lvchange -r 65536 data/postgres
> (or similar).

Changed this from 256 to 65536. If it is supposed to take effect immediately (no server reboot or other changes needed), then I've got no change in performance. None at all.

Vlad
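One way to verify at the query level whether the readahead change helps (the table name is illustrative; the OS page cache should be cold for a fair before/after comparison, e.g. after dropping caches at the OS level):

  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;

The "shared read" counters together with the execution time show whether sequential I/O from disk actually got faster.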