Thank you very much for reporting the problem, and sorry for this bug and for the lack of negative tests.

An attempt to access a nonexistent value causes autoloading of data from the table into the columnar store (because the autoload property is enabled by default), and since this entry is not present in the table either, the code falls into infinite recursion. A patched version of IMCS is available at http://www.garret.ru/imcs-1.01.tar.gz

I am going to place IMCS under version control now. Just looking for a proper place for the repository...


On 12/12/2013 04:06 AM, desmodemone wrote:



2013/12/9 knizhnik <knizh...@garret.ru>

    Hello!

    I want to announce my implementation of an In-Memory Columnar Store
    extension for PostgreSQL:

         Documentation: http://www.garret.ru/imcs/user_guide.html
         Sources: http://www.garret.ru/imcs-1.01.tar.gz

    Any feedback, bug reports, and suggestions are welcome.

    Vertical representation of the data is stored in PostgreSQL shared memory.
    This is why it is important to be able to utilize all available
    physical memory.
    Nowadays servers with a terabyte or more of RAM are not exotic,
    especially in the financial world.
    But there is a limitation in Linux with standard 4kb pages on the
    maximal size of a mapped memory segment: 256Gb.
    It is possible to overcome this limitation either by creating
    multiple segments (but that requires too many changes in the
    PostgreSQL memory manager) or by simply setting the MAP_HUGETLB flag
    (assuming that huge pages were allocated in the system).

    I found several messages related to the MAP_HUGETLB flag; the most
    recent one was from November 21:
    http://www.postgresql.org/message-id/20131125032920.ga23...@toroid.org

    I wonder: what is the current status of this patch?






--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers



Hello,
excellent work! I began testing and it's very fast. By the way, I found a strange case of an "endless" query with CPU at 100% when the value used as a filter does not exist:

I am testing with postgres 9.3.1 on Debian, and I used the default values for the extension except memory (512mb)

How to recreate the test case:

## create a table:

create table endless (col1 int, col2 char(30), col3 int);

## insert some values:

insert into endless values ( 1, 'ahahahaha', 3);

insert into endless values ( 2, 'ghghghghg', 4);

## create the column store objects:

select cs_create('endless','col1','col2');
 cs_create
-----------

(1 row)

## try and test the column store:

select cs_avg(col3) from  endless_get('ahahahaha');
 cs_avg
--------
      3
(1 row)

select cs_avg(col3) from  endless_get('ghghghghg');
 cs_avg
--------
      4
(1 row)

## now select with a value that does not exist:

select cs_avg(col3) from  endless_get('testing');

## now it starts to loop on the CPU and seems to never end; I had to terminate the backend

Bye

Mat
