Problem when parsing query

2011-06-27 Thread Germain Maurice

Hello everybody,

I have a problem with Riak Search 0.14.2 when searching for a document I 
have in my bucket "affinities":

{
   "start" : "20110417",
   "end" : "20110510",
   "data" : {
      ...
   }
}

When I send this query to the index (without escaping special 
characters):
curl -vv 
'http://localhost:8098/solr/affinities/select?q=nsid:83786678@N00 AND 
start:[2* TO 20110501]&wt=json&rows=1'

it returns the document.

When I send this query to the index:
curl -vv 
'http://localhost:8098/solr/affinities/select?q=nsid:83786678@N00 AND 
start:[20* TO 20110501]&wt=json&rows=1'

I get a Bad Request error (HTTP 400) and this error in the Riak console:

=ERROR REPORT 27-Jun-2011::22:05:04 ===
Unable to parse request: {badmatch,
 {error,
 {lucene_parse,
 "syntax error before: [50,48,42]"}}}
[{riak_search_client,parse_query,3},
 {riak_solr_searcher_wm,malformed_request,2},
 {webmachine_resource,resource_call,3},
 {webmachine_resource,do,3},
 {webmachine_decision_core,resource_call,1},
 {webmachine_decision_core,decision,1},
 {webmachine_decision_core,handle_request,2},
 {webmachine_mochiweb,loop,1}]

I think there is an error in the query parsing; am I wrong?
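
(Incidentally, the bytes [50,48,42] in the error are just the ASCII codes for the characters "20*". Also, for queries containing spaces and reserved characters it is safer to let curl URL-encode them; here is a minimal sketch using curl's -G/--data-urlencode options with the same query, host and index as above. This does not work around the parser bug discussed later in the thread; it only keeps the HTTP layer out of the picture:)

    curl -vv -G 'http://localhost:8098/solr/affinities/select' \
         --data-urlencode 'q=nsid:83786678@N00 AND start:[20* TO 20110501]' \
         --data-urlencode 'wt=json' \
         --data-urlencode 'rows=1'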

Germain


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Problem when parsing query

2011-06-29 Thread Germain Maurice

OK, thanks!

However, the behavior I got is different from the one described in the bug 
report.


Germain

On 29/06/11 02:35, Mark Phillips wrote:

Here you go:

https://issues.basho.com/show_bug.cgi?id=1092

Mark


On Jun 28, 2011, at 4:06 PM, Ryan Zezeski  wrote:


I would expect the first one to fail. IIRC, there is a bug when a 
single char is followed by a wildcard.

-Ryan

[Sent from my iPhone]

On Jun 27, 2011, at 4:25 PM, Germain Maurice  
wrote:


Hello everybody,

I have a problem with riaksearch 0.14.2 when searching the document i have in my bucket 
"affinities" :
{
  "start" : "20110417",
  "end" : "20110510",
   "data" : {
   ...
   }
}

When i request the index with this query (without escaping special characters) :
curl -vv 'http://localhost:8098/solr/affinities/select?q=nsid:83786678@N00 AND 
start:[2* TO 20110501]&wt=json&rows=1'
it returns the document.

When i request the index with this query :
curl -vv 'http://localhost:8098/solr/affinities/select?q=nsid:83786678@N00 AND 
start:[20* TO 20110501]&wt=json&rows=1'
i get a Bad Request error (HTTP 400) and this error in the riak console :

=ERROR REPORT 27-Jun-2011::22:05:04 ===
Unable to parse request: {badmatch,
{error,
{lucene_parse,
"syntax error before: [50,48,42]"}}}
[{riak_search_client,parse_query,3},
{riak_solr_searcher_wm,malformed_request,2},
{webmachine_resource,resource_call,3},
{webmachine_resource,do,3},
{webmachine_decision_core,resource_call,1},
{webmachine_decision_core,decision,1},
{webmachine_decision_core,handle_request,2},
{webmachine_mochiweb,loop,1}]

I think there is an error in parsing the query, am i wrong ?

Germain


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


No results provided by Riak search, another !

2011-07-05 Thread Germain Maurice

Hello everybody,

I have a problem with Riak Search. I tried to find the solution on my 
own; I tried all the solutions I found, with no results.


First, I tried to use it with the default schema; it works, but it 
indexes too much data.

So I set my own schema like this:
{
    schema,
    [
        {version, "1.1"},
        {n_val, 1},
        {default_field, "nsid"},
        {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
    ],
    [
        {field, [
            {name, "favedate"},
            {type, string},
            {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
        ]},

        {field, [
            {name, "date_faved"},
            {type, string},
            {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
        ]},

        %% Everything else is skipped
        {dynamic_field, [
            {name, "*"},
            {skip, true}
        ]}
    ]
}.

The precommit hook on my buckets is OK.
I set the schema for each of my buckets like this: "search-cmd set-schema 
photostest My.schema"

Then I ran a "search-cmd clear-schema-cache" command.

I re-indexed all the documents in the bucket; however, these documents 
are not indexed.
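
(For reference, a typical command sequence for wiring up Riak Search on a bucket in this release looks roughly like the following; the "search-cmd install" step is the usual way the precommit hook gets added, and the bucket and schema names are the ones used above:)

    search-cmd install photostest
    search-cmd set-schema photostest My.schema
    search-cmd clear-schema-cache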

An example :

{ "fans":{
"data":[{"nsid":"83786678@N00",
"favedate":"1309539453"}
  ,{"nsid":"33233619@N02",
   "favedate":"1309539169"}]
 , ...
Here is the answer from the index:
{"responseHeader":{"status":0,"QTime":2,"params":{"q":"fans_data_nsid:83786678@N00","q.op":"or","filter":"","wt":"json"}},"response":{"numFound":0,"start":0,"maxScore":"0.0","docs":[]}}


thank you !

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: No results provided by Riak search, another !

2011-07-05 Thread Germain Maurice

Thanks Dan,

I changed the fields in the schema like this:
- nsid -> *_nsid
- favedate -> *_favedate
- date_faved -> *_date_faved

In order to take this into account I did:
search-cmd set-schema photostest My.schema
search-cmd clear-schema-cache
and re-indexed the data (reading/writing the documents back in place).

No change :(
I tried to use the fields fans_data_nsid and fans_data_favedate,
but no better result.

I'm using Riak Search 0.14.2

Just to check:
curl http://localhost:8098/riak/photostest
{"props":{"precommit":[{"fun":"precommit","mod":"riak_search_kv_hook"}]...}

On 05/07/11 18:27, Dan Reverri wrote:

Hi Germain,

It looks like your document has nested fields which means the schema 
you have defined won't match the fields produced by the pre-commit 
hook. The pre-commit hook flattens JSON documents using an underscore 
("_") between nested fields (e.g. fans_data_nsid); your schema should 
be using the flattened field name.
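
(As a sketch, reusing the structure of the schema posted earlier in the thread, the flattened entries could look like this; only the name changes compared with the earlier field definitions, and the *_nsid wildcard form is the one that came up later in the thread:)

    {field, [
        {name, "fans_data_nsid"},
        {type, string},
        {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
    ]},

    %% or, to catch the same leaf name under any nesting:
    {dynamic_field, [
        {name, "*_nsid"},
        {type, string},
        {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
    ]},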


Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Jul 5, 2011 at 7:05 AM, Germain Maurice 
<germain.maur...@linkfluence.net> wrote:


Hello everybody,

I have a problem with Riak Search. I tried to find the solution by
my own, i tried all the solutions i found and no results.

Firstly, i tried to use it with the default schema, it works but
it indexes too much data.
So, i set my own schema as this :
{
   schema,
   [
   {version, "1.1"},
   {n_val, 1},
   {default_field, "nsid"},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ],
   [
   {field, [
   {name, "favedate"},
   {type, string},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ]},

   {field, [
   {name, "date_faved"},
   {type, string},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ]},

   %% Everything else is skipped
   {dynamic_field, [
   {name, "*"},
   {skip, true}
   ]}
   ]
}.

Hook on precommit of my buckets are ok.
I set the schema for each of my buckets as this : "search-cmd
set-schema photostest My.schema"
Did a "search-cmd clear-schema-cache" command.

I re-indexed all of my documents of the bucket however, these
documents are not indexed.
An example :

{ "fans":{
   "data":[{"nsid":"83786678@N00",
   "favedate":"1309539453"}
 ,{"nsid":"33233619@N02",
  "favedate":"1309539169"}]
     , ...
Here is the answer of the index :

{"responseHeader":{"status":0,"QTime":2,"params":{"q":"fans_data_nsid:83786678@N00","q.op":"or","filter":"","wt":"
json"}},"response":{"numFound":0,"start":0,"maxScore":"0.0","docs":[]}}


thank you !

-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: No results provided by Riak search, another !

2011-07-05 Thread Germain Maurice

Hi Dan,

No, I didn't change it.


On 05/07/11 19:17, Dan Reverri wrote:

Hi Germain,

Did you change the fields to "dynamic_field" in the schema?

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Jul 5, 2011 at 9:51 AM, Germain Maurice 
<germain.maur...@linkfluence.net> wrote:


Thanks Dan,

I changed the fields in the schema as this :
- nsid -> *_nsid
- favedate -> *_favedate
- date_faved -> *_date_faved

In order to take it in account i did :

search-cmd set-schema photostest My.schema
search-cmd clear-schema-cache
and reindexing data (reading/writing the documents in their own place)

No change :(
I tried to use the fields : fans_data_nsid, fans_data_favedate
but no better result.

I'm using Riak Search 0.14.2

Just for checking :
curl http://localhost:8098/riak/photostest
{"props":{"precommit":[{"fun":"precommit","mod":"riak_search_kv_hook"}]...}

Le 05/07/11 18:27, Dan Reverri a écrit :

Hi Germain,

It looks like your document has nested fields which means the
schema you have defined won't match the fields produced by the
pre-commit hook. The pre-commit hook flattens JSON documents
using an underscore ("_") between nested fields (e.g.
fans_data_nsid); your schema should be using the flattened field
name.

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Jul 5, 2011 at 7:05 AM, Germain Maurice
mailto:germain.maur...@linkfluence.net>> wrote:

Hello everybody,

I have a problem with Riak Search. I tried to find the
solution by my own, i tried all the solutions i found and no
results.

Firstly, i tried to use it with the default schema, it works
but it indexes too much data.
So, i set my own schema as this :
{
   schema,
   [
   {version, "1.1"},
   {n_val, 1},
   {default_field, "nsid"},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ],
   [
   {field, [
   {name, "favedate"},
   {type, string},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ]},

   {field, [
   {name, "date_faved"},
   {type, string},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ]},

   %% Everything else is skipped
   {dynamic_field, [
   {name, "*"},
   {skip, true}
   ]}
   ]
}.

Hook on precommit of my buckets are ok.
I set the schema for each of my buckets as this : "search-cmd
set-schema photostest My.schema"
Did a "search-cmd clear-schema-cache" command.

I re-indexed all of my documents of the bucket however, these
documents are not indexed.
An example :

{ "fans":{
   "data":[{"nsid":"83786678@N00",
   "favedate":"1309539453"}
 ,{"nsid":"33233619@N02",
  "favedate":"1309539169"}]
     , ...
    Here is the answer of the index :

{"responseHeader":{"status":0,"QTime":2,"params":{"q":"fans_data_nsid:83786678@N00","q.op":"or","filter":"","wt":"
json"}},"response":{"numFound":0,"start":0,"maxScore":"0.0","docs":[]}}


thank you !

-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: No results provided by Riak search, another !

2011-07-05 Thread Germain Maurice


OK Dan, I have some news.

With

    {dynamic_field, [
        {name, "*_nsid"},
        ...

or with

    {field, [
        {name, "fans_data_nsid"},
        ...

I'm able to query the index. However, I get results with this query:
q=fans_data_nsid:837*
but no results with:
q=fans_data_nsid:83*
I recall this is an existing bug.

Thank you for your patience.

On 05/07/11 23:36, Dan Reverri wrote:
You'll need to change the schema fields to "dynamic_field" in order to 
use wildcards (*). Can you update the schema and test the issue again?


Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Jul 5, 2011 at 2:13 PM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


Hi Dan,

No, i didn't change it.


Le 05/07/11 19:17, Dan Reverri a écrit :

Hi Germain,

Did you change the fields to "dynamic_field" in the schema?

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Jul 5, 2011 at 9:51 AM, Germain Maurice
mailto:germain.maur...@linkfluence.net>> wrote:

Thanks Dan,

I changed the fields in the schema as this :
- nsid -> *_nsid
- favedate -> *_favedate
- date_faved -> *_date_faved

In order to take it in account i did :

search-cmd set-schema photostest My.schema
search-cmd clear-schema-cache
and reindexing data (reading/writing the documents in their
own place)

No change :(
I tried to use the fields : fans_data_nsid, fans_data_favedate
but no better result.

I'm using Riak Search 0.14.2

Just for checking :
curl http://localhost:8098/riak/photostest

{"props":{"precommit":[{"fun":"precommit","mod":"riak_search_kv_hook"}]...}

Le 05/07/11 18:27, Dan Reverri a écrit :

Hi Germain,

It looks like your document has nested fields which means
the schema you have defined won't match the fields produced
by the pre-commit hook. The pre-commit hook flattens JSON
documents using an underscore ("_") between nested fields
(e.g. fans_data_nsid); your schema should be using the
flattened field name.

Thanks,
Dan

    Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Jul 5, 2011 at 7:05 AM, Germain Maurice
mailto:germain.maur...@linkfluence.net>> wrote:

Hello everybody,

I have a problem with Riak Search. I tried to find the
solution by my own, i tried all the solutions i found
and no results.

Firstly, i tried to use it with the default schema, it
works but it indexes too much data.
So, i set my own schema as this :
{
   schema,
   [
   {version, "1.1"},
   {n_val, 1},
   {default_field, "nsid"},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ],
   [
   {field, [
   {name, "favedate"},
   {type, string},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ]},

   {field, [
   {name, "date_faved"},
   {type, string},
   {analyzer_factory, {erlang, text_analyzers,
standard_analyzer_factory}}
   ]},

   %% Everything else is skipped
   {dynamic_field, [
   {name, "*"},
   {skip, true}
   ]}
   ]
}.

Hook on precommit of my buckets are ok.
I set the schema for each of my buckets as this :
"search-cmd set-schema photostest My.schema"
Did a "search-cmd clear-schema-cache" command.

I re-indexed all of my documents of the bucket however,
these documents are not indexed.
An example :

{ "fans":{
   "data":[{"nsid":"83786678@N00",
   "favedate":"1309539453"}
 ,{"nsid":"33233619@N02",
  "favedate":"1309539169"}]
 , ...
Here is the answer of the index :

{"responseHeader":{"

Riak crashed and crashed again when recovering

2010-05-05 Thread Germain Maurice
 : 0
vnode_puts_total : 0
node_gets : 0
node_gets_total : 3
node_get_fsm_time_mean : undefined
node_get_fsm_time_median : undefined
node_get_fsm_time_95 : undefined
node_get_fsm_time_99 : undefined
node_get_fsm_time_100 : undefined
node_puts : 0
node_puts_total : 0
node_put_fsm_time_mean : undefined
node_put_fsm_time_median : undefined
node_put_fsm_time_95 : undefined
node_put_fsm_time_99 : undefined
node_put_fsm_time_100 : undefined
cpu_nprocs : 124
cpu_avg1 : 312
cpu_avg5 : 248
cpu_avg15 : 207
mem_total : 1950601216
mem_allocated : 1931788288
disk : [{"/",86796672,5},
{"/dev",952440,1},
{"/dev/shm",952440,0},
{"/var/run",952440,1},
{"/var/lock",952440,0},
{"/lib/init/rw",952440,0},
{"/reiser",1218709872,28}]
nodename : 'r...@10.0.0.40'
connected_nodes : ['riak_maint_5...@10.0.0.40']
sys_driver_version : <<"1.5">>
sys_global_heaps_size : 0
sys_heap_type : private
sys_logical_processors : 2
sys_otp_release : <<"R13B04">>
sys_process_count : 140
sys_smp_support : true
sys_system_version : <<"Erlang R13B04 (erts-5.7.5) [source] [64-bit] 
[smp:2:2] [rq:2] [async-threads:5] [hipe] [kernel-poll:true]\n">>

sys_system_architecture : <<"x86_64-unknown-linux-gnu">>
sys_threads_enabled : true
sys_thread_pool_size : 5
sys_wordsize : 8
ring_members : ['r...@10.0.0.40','r...@10.0.0.41']
ring_num_partitions : 64
ring_ownership : <<"[{'r...@10.0.0.40',32},{'r...@10.0.0.41',32}]">>
ring_creation_size : 64
storage_backend : riak_kv_dets_backend
pbc_connects_total : 0
pbc_connects : 0
pbc_active : 0

Any idea about this behavior?
Can you explain what Riak does with the fs_r...@10.0.0.40_* files?


Thank you

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak crashed and crashed again when recovering

2010-05-05 Thread Germain Maurice
   {gen_fsm,init_it,6},
  {proc_lib,init_p_do_apply,3}]}}},
[{riak_kv_vnode_master,get_vnode,2},
 {riak_kv_vnode_master,handle_cast,2},
 {gen_server,handle_msg,5},
 {proc_lib,init_p_do_apply,3}]}

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.113.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.115.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.114.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.117.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.119.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.118.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.116.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
Spidermonkey VM host stopping (<0.120.0>)

=INFO REPORT 5-May-2010::13:16:54 ===
    alarm_handler: {clear,system_memory_high_watermark}
/usr/lib/riak/lib/os_mon-2.2.5/priv/bin/memsup: Erlang has closed.
   
Erlang has closed





On 05/05/10 11:35, Germain Maurice wrote:

Hi all,
I am testing Riak for my document base and I ran into a problem while 
migrating documents from my previous system to Riak.
I have two nodes and one bucket for now.
There are more than 480,000 documents in the bucket, and the documents 
are HTML pages.

In the following you'll find all the files and information from after a 
node was restarted.

After a while, Riak crashed again on the two nodes I restarted... :(

[...]



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Backends maximum filesize

2010-05-06 Thread Germain Maurice

Hi,

I would like to know whether there is a limit on the size of each file 
under Innostore (like the 2GB limit of the dets backend).
More broadly, it would be useful if you could document the various 
technical limitations of each backend.


Thank you.

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Backuping a node or a cluster does not want to start, finally resolved.

2010-05-10 Thread Germain Maurice

Hi everybody,

I have a two-node cluster and I'm trying to back it up before running 
some tests.

The backup process won't start; here is the error I'm getting:

r...@couch1:~# riak-admin backup r...@10.0.0.40 riak /reiser/riak-backup node

Backing up (node 'r...@10.0.0.40') to '/reiser/riak-backup-r...@10.0.0.40'.
{"init terminating in 
do_boot",{{badmatch,{error,{file_error,[47,114,101,105,115,101,114,47,114,105,97,107,45,98,97,99,107,117,112,45|'r...@10.0.0.40'],eacces}}},[{riak_kv_backup,backup,3},{erl_eval,do_apply,5},{init,start_it,1},{init,start_em,1}]}}

init terminating in do_boot ()

r...@couch1:~# riak-admin backup r...@10.0.0.40 riak /reiser/riak-backup all
Backing up (all nodes) to '/reiser/riak-backup'.
...from ['r...@10.0.0.40','r...@10.0.0.41']
{"init terminating in 
do_boot",{{badmatch,{error,{file_error,"/reiser/riak-backup",eacces}}},[{riak_kv_backup,backup,3},{erl_eval,do_apply,5},{init,start_it,1},{init,start_em,1}]}}

init terminating in do_boot ()

My consoles on each node don't show any errors.

Just before sending this email, I found the solution:
-> the system user of the Riak process has to have permission to create 
the backup file, /reiser/riak/backup-file in my case.

/reiser/riak contains /reiser/riak/ring /reiser/riak/innostore ...
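
(A quick way to check the permission part, sketched with the paths from this message and assuming the Riak system user is called "riak": if the touch below succeeds, the backup command should be able to create its file too.)

    sudo -u riak touch /reiser/riak-backup-test && sudo -u riak rm /reiser/riak-backup-test
    riak-admin backup r...@10.0.0.40 riak /reiser/riak-backup node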

If I want to export this backup off the node, I expect I have to use an 
NFS mount point.

Do you agree?

Thanks
--

Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Backuping a node or a cluster does not want to start, finally resolved.

2010-05-11 Thread Germain Maurice
But the "somewhere" has to be writable by "riak" user, that's the 
"problem. :)


On 10/05/10 14:16, Sean Cribbs wrote:

NFS would be one of many ways to export the backup off of the node.  You can 
also specify a filename on the command line when running `riak-admin backup` if 
you want to put it somewhere other than data/.

Sean Cribbs
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On May 10, 2010, at 6:51 AM, Germain Maurice wrote:

   

Hi everybody,

I have a two nodes cluster and i'm trying to backup it before making some tests.
The backup process does not want to start, here is the error i'm getting :

r...@couch1:~# riak-admin backup r...@10.0.0.40 riak /reiser/riak-backup node
Backing up (node 'r...@10.0.0.40') to '/reiser/riak-backup-r...@10.0.0.40'.
{"init terminating in 
do_boot",{{badmatch,{error,{file_error,[47,114,101,105,115,101,114,47,114,105,97,107,45,98,97,99,107,117,112,45|'r...@10.0.0.40'],eacces}}},[{riak_kv_backup,backup,3},{erl_eval,do_apply,5},{init,start_it,1},{init,start_em,1}]}}
init terminating in do_boot ()

r...@couch1:~# riak-admin backup r...@10.0.0.40 riak /reiser/riak-backup all
Backing up (all nodes) to '/reiser/riak-backup'.
...from ['r...@10.0.0.40','r...@10.0.0.41']
{"init terminating in 
do_boot",{{badmatch,{error,{file_error,"/reiser/riak-backup",eacces}}},[{riak_kv_backup,backup,3},{erl_eval,do_apply,5},{init,start_it,1},{init,start_em,1}]}}
init terminating in do_boot ()

My consoles on a each don't throw any errors.

Just before sending this email, i found the solution :
->  the system user of riak process has to have permission of creating the 
backup file, /reiser/riak/backup-file in my case.
/reiser/riak contains /reiser/riak/ring /reiser/riak/innostore ...

If i want to export this backup outside of the node, i expect i have to use NFS 
mount point.
Do you agree with it ?

Thanks
--

Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
 
   



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Having N replicas more than nodes in the cluster

2010-05-11 Thread Germain Maurice

Hi,

I'd like to know how Riak works when my bucket has n_val = 3 and there are 
only 2 physical nodes.
Does Riak make 3 copies of the bucket across the 2 nodes, or does it 
maintain only 2 copies of my bucket on the cluster?


I'm running tests on Riak and I find some things weird.

I have a two-node cluster and a bucket with n_val = 3.
I put more than 1,600,000 documents in the bucket and it takes less than 
200 GB with Innostore.

I shut down the second node with "q()." in a "riak console".
First, there was no synchronisation of data onto the first node; I 
assume that's better, since it is very similar to a node crash.

Second, when I did a read request on a document with:
- ?r=1 -> success
- ?r=2 -> success
- ?r=3 -> fail
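
(For reference, the read quorum here is just the r query parameter on a normal GET against the REST interface; a sketch with placeholder bucket/key names:)

    curl -v 'http://localhost:8098/riak/mybucket/mykey?r=3'

In this test, r=3 failed while r=1 and r=2 succeeded, as listed above.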

Another question: does "n replicas" mean "1 + n copies" or "n copies" of 
the bucket?


I would really appreciate it if you could explain this behavior.

Thank you
Best Regards,
--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Backuping a cluster takes too much time !

2010-05-12 Thread Germain Maurice

Hi,

I'm testing the backup process for a single node and for all nodes.
I'm very surprised by how long it took.

The start, using 'date':
# date ; ls -lh
mardi 11 mai 2010, 14:31:54 (UTC+0200)
total 8,0K
-rw-r--r-- 1 riak riak8 2010-05-11 14:31 backup-20100511
drwxr-xr-x 2 riak riak 4,1K 2010-05-05 16:22 dets
drwxr-xr-x 3 riak riak  168 2010-05-05 16:25 innodb
drwxr-xr-x 2 riak riak  272 2010-05-11 14:26 ring
# date ; ls -lh
mercredi 12 mai 2010, 23:35:44 (UTC+0200)
total 300G
-rw-r--r-- 1 riak riak 300G 2010-05-12 21:42 backup-20100511
drwxr-xr-x 3 riak riak  168 2010-05-05 16:25 innodb
drwxr-xr-x 2 riak riak  272 2010-05-11 14:26 ring

On each of my two nodes I have:
194G    /reiser/riak/innodb/innokeystore/

The backup of all nodes took more than 33 hours!
I hope this is not normal behavior, because we expect the cluster to 
store 4TB or more.


Regards,
Germain


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Backuping a cluster takes too much time !

2010-05-12 Thread Germain Maurice

I'm using Innostore (10-1) and riak 0.10.1 on Ubuntu 9.10.


On 13/05/2010 00:28, Alan McConnell wrote:
I get similar behavior using the innostore storage engine.  In 
addition, beam.smp memory footprint balloons to using all available 
system memory.


I'm running latest binary installs of riak and innostore (0.10.1) on 
ubuntu 8.04.


On Wed, May 12, 2010 at 3:07 PM, Preston Marshall 
<pres...@synergyeoc.com> wrote:


What storage engine are you using?
On May 12, 2010, at 4:57 PM, Germain Maurice wrote:

> Hi,
>
> I'm testing the backup process of a node and all nodes.
> I'm very surprised by the time it took.
>
> The beginning using 'date' :
> # date ; ls -lh
> mardi 11 mai 2010, 14:31:54 (UTC+0200)
> total 8,0K
> -rw-r--r-- 1 riak riak8 2010-05-11 14:31 backup-20100511
> drwxr-xr-x 2 riak riak 4,1K 2010-05-05 16:22 dets
> drwxr-xr-x 3 riak riak  168 2010-05-05 16:25 innodb
> drwxr-xr-x 2 riak riak  272 2010-05-11 14:26 ring
> # date ; ls -lh
> mercredi 12 mai 2010, 23:35:44 (UTC+0200)
> total 300G
> -rw-r--r-- 1 riak riak 300G 2010-05-12 21:42 backup-20100511
> drwxr-xr-x 3 riak riak  168 2010-05-05 16:25 innodb
> drwxr-xr-x 2 riak riak  272 2010-05-11 14:26 ring
>
> On each of my two nodes i have :
> 194G/reiser/riak/innodb/innokeystore/
>
> The backup process of all nodes took more than 33 hours !!
> I hope it's not a normal behavior because we expect that the
cluster will store up 4TB or more.
>
> Regards,
> Germain
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


InnoDB error

2010-05-14 Thread Germain Maurice

Hi all,

Another question...

I think I understand what it means, but how can I change it?
What are the consequences of this kind of error?

100513  1:13:21  InnoDB: ERROR: the age of the last checkpoint is 
140104147910808,

InnoDB: which exceeds the log group capacity 1.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

Thank you for your answer.

Best regards,
Germain

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: InnoDB error

2010-05-14 Thread Germain Maurice
These errors occur when a node comes back to the cluster or when a node 
joins the cluster.
They never occur while I am writing data to the bucket. The biggest object 
I put to a node did not exceed 10MB; I write HTML pages to the bucket.

Grant, I'm testing Riak, so all of my data is "test" data :)
For information, I have more than 1,600,000 docs that take more than 
100GB of space.


On 14/05/10 17:52, Grant Schofield wrote:

Are you writing a large amount of data to the node? What is the size of your 
write request?

The way to fix this would be to increase the number and size of the log files 
in your app.config. The relevant lines to add to the {innostore, []} section 
are:
{log_files_in_group, 6},     % How many files you need - usually, 3 < x < 6
{log_file_size, 268435456},  % No bigger than 256MB - otherwise recovery takes too long

You probably have issues with your Innostore now because there is data that 
didn't make it into the filesystem when the log was overwritten. I would 
recommend starting with a new data directory as well as moving your old Inno 
logs out of the way. If this data isn't test data we can work with you to dump 
and load the data using the innodump and innoload utilities.

Grant Schofield
Developer Advocate
Basho Technologies

On May 14, 2010, at 10:27 AM, Germain Maurice wrote:

   

Hi all,

Another question...

I think i understand what does it mean, but how can i change it ?
What are the consequences of this kind error ?

 
   

100513  1:13:21  InnoDB: ERROR: the age of the last checkpoint is 
140104147910808,
InnoDB: which exceeds the log group capacity 1.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

Thank you for your answer.

Best regards,
Germain

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
 
   



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Recovering datas when a node was joining again the cluster (with all node datas lost)

2010-05-17 Thread Germain Maurice

Hi,

I have a 3-node cluster and I simulated a complete loss of one node 
(node3, erasing the entire hard disk).
I installed it again, launched a "riak console", ran "riak-admin join 
r...@node2", and waited a while to see the data recovered onto node3, 
which was freshly repaired, but no data was created.

I then requested a document from my bucket via node3 - to verify the 
availability of the data. The data was available, and I was surprised that 
it was created on node3 at that exact moment.


Is this normal behavior? If it's normal, why?

Me and my questions wish you all the best ;)

Regards,
Germain

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Recovering datas when a node was joining again the cluster (with all node datas lost)

2010-05-17 Thread Germain Maurice
Hmm... I'm re-reading the "Replication" section of the wiki, and it looks 
like the behaviour I described is a "read repair".

Sorry for the noise.


On 17/05/10 13:49, Germain Maurice wrote:

Hi,

I have a 3 nodes cluster and I simulated a complete lost of one node 
(node3, erase of the entire hard disk).
I installed it again, i launche a "riak console", "riak-admin join 
r...@node2", i waited a while to see the recovering of datas on node3 
which was freshly repaired, but no datas were created.


I decided to request a document on my bucket via node3 - to verify the 
availability of datas -, datas are available and i was suprised that 
the datas were created at this exact time on node3.


Is a normal behavior ? If it's normal, why ?

Me and my questions wish you all the best ;)

Regards,
Germain

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Recovering datas when a node was joining again the cluster (with all node datas lost)

2010-05-17 Thread Germain Maurice

On 17/05/10 15:34, Paul R wrote:

What should the user do to come back to the previous level of
replication ? A forced read repair, in other words a GET with R=2 on all
objects of all buckets ?

Yes, I also wonder what the best thing to do after a node crash is.
In the end, I'm doing read requests on all keys of the bucket.
I found that R=1 (on all bucket keys) against the new node will adjust the 
replication level...

I wonder whether R=3 or R=1, on the node to repopulate, leads to the same result?

In order to do a read repair, we have to make read requests, but that 
implies reading the stored object.
On a read repair, I assume that returning bodies is unnecessary, 
especially for large objects (which I don't have).
It would be useful to provide an API operation to test the existence of 
an object without reading it...

I read the REST API documentation and didn't find this kind of operation.

Thanks.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Recovering datas when a node was joining again the cluster (with all node datas lost)

2010-05-18 Thread Germain Maurice

Hi Dan,

Thank you for this "trick", it's faster than GET operation on objects.
HEAD requests on all docs will balance the replication for the node 
where we make the requests.
However, i make only about 100 000 HEAD requests by an hour, seems to be 
normal for you ?
The HEAD requests made the node to be repopulated with more than 120GB 
of datas.
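
(A minimal sketch of how such a pass can be driven from the shell, assuming a file keys.txt with one key per line and a placeholder bucket name; building the key list itself is not shown. curl -I sends a HEAD request:)

    while read key; do
        curl -s -o /dev/null -I "http://localhost:8098/riak/mybucket/$key"
    done < keys.txt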


Is there a "riak-admin" command to make this without knowledge of all 
keys of the bucket ?


See you for the next question ;)

Have a good day !


On 17/05/10 18:11, Dan Reverri wrote:

Hi Germain,

You can make a HEAD request to the bucket/key path. It will return 404 
or 200 without the document body.




On Mon, May 17, 2010 at 9:04 AM, Germain Maurice 
<germain.maur...@linkfluence.net> wrote:


Le 17/05/10 15:34, Paul R a écrit :

What should the user do to come back to the previous level of
replication ? A forced read repair, in other words a GET with
R=2 on all
objects of all buckets ?

Yes, I wonder too what is the best thing to do after a node crash.
Eventually, i'm doing read requests on all keys of the bucket.
I found that R=1 (on all bucket keys) on the new node will adjust
the replication level...
I wonder if R=3 or R=1, on the node to repopulate, aim to same
result ?

In order to do a read repair, we have to make read requests, but
it implies reading the stored object.
On a read repair, i assume that returning bodies is unecessary,
especially on large objects (that i don't have).
It would be useful to provide an API operation to test the
existance of an object without reading it...
I red REST API documentation, i didn't find this kind of operation.

Thanks.


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Tuning Innostore backend (flush_mode option error)

2010-05-18 Thread Germain Maurice
I read this page: 
https://wiki.basho.com/display/RIAK/Innostore+Configuration+and+Tuning

Then I put this configuration on my node:

{innostore, [
    {data_home_dir, "/reiser/riak/innodb"},       %% Where data files go
    {log_group_home_dir, "/reiser/riak/innodb"},  %% Where log files go
    {log_files_in_group, 6},      % How many files you need - usually, 3 < x < 6
    {log_file_size, 268435456},   % No bigger than 256MB - otherwise recovery takes too long
    {buffer_pool_size, 1073741824},  %% 1024MB in-memory buffer in bytes
    {flush_mode, "O_DIRECT"}         %% Linux specific
]}

But when I launch Riak, I get this error/warning:

=ERROR REPORT 18-May-2010::11:18:14 ===
Skipping config setting flush_mode; unknown option.
InnoDB: Mutexes and rw_locks use GCC atomic builtins
100518 11:18:18 InnoDB: highest supported file format is Barracuda.
100518 11:18:25 Embedded InnoDB 1.0.6.6750 started; log sequence number 
110097552195


Is there a mistake in the option name ?

Thanks
Germain

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Some questions before moving from CouchDB to Riak on production

2010-06-03 Thread Germain Maurice

Hi

We started using CouchDB earlier this year, and we have gone from
disappointment to failure, only to switch back to disappointment
again. We've been testing Riak for a few weeks now, and we are pretty
happy with the results. But before switching to Riak for good, we have
some questions.

We need to store a lot of documents. Each document is a web page, with a
lot of metadata (author, title, date, content extract from the article,
from the comments …). We currently have over 12 million documents, stored
in CouchDB, and the DB is currently a little bit over 1TB in size. We're
adding roughly 65k new documents a day (average document size: 220kB). Soon,
we will also add 150k smaller documents a day (average size: 1kB), so
the size of the DB will keep growing fast.

In our tests, we are using the following configurations:
- 1 node is a physical server (2GB RAM, XeonDualCore @1.60GHz, Ubuntu
9.10 Amd64, riak 0.10-1, innostore 10_1)
- 2 nodes are virtual server (each node is 1.5GB RAM, XeonDualCore
@2GHz, Ubuntu 9.10 Amd64, riak 0.10-1, innostore 10_1) inside a Xen host

We managed to crash the virtual servers on Xen, while having 15 workers
and doing about 125 writes/second (average size of the documents:
190kB). This size was measured while writing data to the disk using
Innostore.

What kind of configuration should we consider for this setup : what are
the potential bottlenecks we may encounter, and/or are there specific
tuning options in the Erlang VM that would help better fit our needs ?

To start a ring in production to handle this volume, how many nodes
should we consider for good performance ?

Finally, we are curious on how people are using riak: what do they
store, how many documents, which frequency, do they spent a lot of time
doing administration work on the nodes, what should be done after a node
crash and how should we recover in a production environment ?

Any feedback on existing production environments would be welcome.

Thanks!

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Switching of backends

2010-06-08 Thread Germain Maurice

Hi everybody,

I assume this question has already been discussed, but I didn't find the answer.

In a production environment, we plan to use Riak with Innostore and, 
once Bitcask has been approved for production use, we will switch to it. 
So we wonder whether, just for the migration, we can switch off a node, 
change its backend, and switch it back on. Could this work?

I think the other way to do it is to back up the node, change the 
backend, and restore the node. Do you agree with that?
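
(For reference, the backend switch itself is a one-line change in the riak_kv section of app.config; a sketch, assuming the backend module names of that era - the Innostore one is the module seen in the crash reports earlier in this archive:)

    %% before (Innostore)
    {storage_backend, innostore_riak},
    %% after (Bitcask)
    {storage_backend, riak_kv_bitcask_backend},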


Thank you.

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Switching of backends

2010-06-08 Thread Germain Maurice

Sean,

OK for the read-repair, I understand.
However, if I have to restore a node, what about data written to the 
cluster during the restore?
Is it kept in a "buffered" state while waiting for the restore to finish?

Do I have to force a read-repair to complete the data on the node?

Regards,
Germain

On 08/06/10 12:57, Sean Cribbs wrote:

Germain,

Yes, you could do that, but you would need to force read-repair to repopulate 
the node.  Doing so would be nearly as expensive as simply reloading the data 
from the backup.

Sean Cribbs
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Jun 8, 2010, at 4:47 AM, Germain Maurice wrote:

   

Hi everybody,

I assume this question was already discussed but i didn't find the answer.

In a production environment, we plan to use Riak with Innostore and after 
Bitcask has been approved for a production environment, we will switch to it. 
So, we wonder if we can, only in the migration time, switch off a node, change 
its backend,
and switch it on. This could work ?

I think the other way to do that is to backup the node, change the backend, and 
restore the node. Do you agree with that ?

Thank you.

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
 
   



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


ring_creation_size and Innostore

2010-06-08 Thread Germain Maurice

I have a problem with setting ring_creation_size when using Innostore.
The default value is 64, but when I set ring_creation_size to 1024, 512, 
or 256 I always get the following error; only the value "128" works. I 
deleted /riak/ring/* and /riak/innodb/* each time I changed the value.


Why can't I get more than 128 partitions?
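
(For reference, this is the sort of app.config fragment involved; a sketch, the exact surrounding sections depend on the 0.10 config file layout:)

    {riak_core, [
        %% number of ring partitions; only honoured when the ring is first created
        {ring_creation_size, 256}
    ]}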

Regards,
Germain

===
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
100608 15:55:03 Embedded InnoDB 1.0.6.6750 started; log sequence number 0
Innostore: Could not create port [lock=0xb45b8398, cv=0xb45b83e0]

=ERROR REPORT 8-Jun-2010::15:55:03 ===
** Generic server riak_kv_vnode_master terminating
** Last message in was {'$gen_cast',
   {start_vnode,
   
1273104941893716213903991084748949661653409988608}}

** When Server state == {state,12307,[]}
** Reason for termination ==
** {{badmatch,{error,{enomem,[{erlang,open_port,
  [{spawn,innostore_drv},[binary]]},
  {innostore,connect,0},
  {innostore_riak,start,2},
  {riak_kv_vnode,init,1},
  {gen_fsm,init_it,6},
  {proc_lib,init_p_do_apply,3}]}}},
[{riak_kv_vnode_master,get_vnode,2},
 {riak_kv_vnode_master,handle_cast,2},
 {gen_server,handle_msg,5},
 {proc_lib,init_p_do_apply,3}]}




--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Errors when using basho_bench

2010-06-08 Thread Germain Maurice
After watching the basho_bench demonstration during last week's webinar, 
we ran some tests.
Here are some errors we are hitting while running a benchmark with 
basho_bench.


Important note: when the ring partition size was 64, I never got this kind 
of error (except for a node crash, due to not enough memory I think; 
1.5GB RAM). But now I have 128 partitions (see my other message on the 
Riak ML about the ring partition size) and these errors occur.



This error occurred multiple times:

=ERROR REPORT 8-Jun-2010::16:21:06 ===
** State machine <0.937.0> terminating
** Last event in was timeout
** When State == initialize
**  Data  == {state,<0.932.0>,undefined,2,undefined,undefined,undefined,
 8594018,undefined,undefined,undefined,undefined,
 undefined,undefined,6,undefined,
 {<<"test">>,<<"25500">>},
 {chstate,'r...@10.0.0.40',
 [{'r...@10.0.0.40',{64,63443225844}}],
 {128,
  [{0,'r...@10.0.0.40'},
   
{11417981541647679048466287755595961091061972992,

'r...@10.0.0.41'},
   
{22835963083295358096932575511191922182123945984,

'r...@10.0.0.40'},
   
{34253944624943037145398863266787883273185918976,

'r...@10.0.0.41'},
   
{45671926166590716193865151022383844364247891968,

'r...@10.0.0.40'},
   
{57089907708238395242331438777979805455309864960,

'r...@10.0.0.41'},
[...]
   
{1415829711164312202009819681693899175291684651008,

'r...@10.0.0.40'},
   
{1427247692705959881058285969449495136382746624000,

'r...@10.0.0.41'},
   
{1438665674247607560106752257205091097473808596992,

'r...@10.0.0.40'},
   
{1450083655789255239155218544960687058564870569984,

'r...@10.0.0.41'}]},
 {dict,0,16,16,8,80,48,
 
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
 
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],

   [],
 undefined}
** Reason for termination =
** {'function not exported',[{riak_core_bucket,defaults,[]},
 {riak_core_util,chash_key,1},
 {riak_kv_get_fsm,initialize,2},
 {gen_fsm,handle_msg,7},
 {proc_lib,init_p_do_apply,3}]}


After shutting down the benchmark we also got the following error, 
multiple times too:



=ERROR REPORT 8-Jun-2010::16:19:28 ===
webmachine error: path="/riak/test/16028"
[{webmachine_decision_core,'-decision/1-lc$^1/1-1-',
 [{error,
  {error,
  {case_clause,{error,timeout}},
  [{riak_kv_wm_raw,content_types_provided,2},
   {webmachine_resource,resource_call,3},
   {webmachine_resource,do,3},
   {webmachine_decision_core,resource_call,1},
   {webmachine_decision_core,decision,1},
   {webmachine_decision_core,handle_request,2},
   {webmachine_mochiweb,loop,1},
   {mochiweb_http,headers,5}]}}]},
 {webmachine_decision_core,decision,1},
 {webmachine_decision_core,handle_request,2},
 {webmachine_mochiweb,loop,1},
 {mochiweb_http,headers,5},
 {proc_lib,init_p_do_apply,3}]


Any idea about these errors?
Could there be a problem with Webmachine?

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Fwd: Errors when using basho_bench

2010-06-08 Thread Germain Maurice

You're right, I put this line in the app.config:
,{default_bucket_props, [{n_val, 2}]}

Maybe I made a mistake...

On 08/06/10 17:55, Dan Reverri wrote:

Forwarding my reply to the mailing list.

-- Forwarded message --
From: Dan Reverri <d...@basho.com>
Date: Tue, Jun 8, 2010 at 8:45 AM
Subject: Re: Errors when using basho_bench
To: Germain Maurice <germain.maur...@linkfluence.net>



Hi Germain,

The first error "{'function not 
exported',[{riak_core_bucket,defaults,[]}," would occur when a bucket 
does not have a "chash_keyfun" property defined. This could occur if 
you have specified default bucket properties in your app.config. There 
is a known issue where bucket properties defined in the app.config 
file are not merged with the hard coded defaults:

http://issues.basho.com/show_bug.cgi?id=123

I'm not sure why this issue would only occur when increasing the 
partition size. Were other changes made to app.config?



The second error 
"{case_clause,{error,timeout}}, [{riak_kv_wm_raw,content_types_provided,2}" 
occurs when a key/value pair has been inserted into Riak without a 
content-type and retrieved via the REST interface. This can happen 
when inserting key/value pairs with the native Erlang client or the 
protobuffs client. These clients don't require that a content-type be 
set. I assume the values in question were inserted by the basho bench 
utility so I will look into how those values are inserted by basho bench.


Thanks,
Dan



On Tue, Jun 8, 2010 at 7:38 AM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


After viewing the demonstration of basho_bench during the webinar
of the last week, we led some tests.
Here are some errors we are meeting while doing a benchmark with
basho_bench.

Big notice : when the ring partition size was 64, i never got this
kind of error (excepting a crash of a node, due to not enough
memory i think, 1,5GB RAM). But now, i have 128 partitions (see my
other message on the riak ML about the ring partition size) and
these errors occurs.


This error occured multiple times :

=ERROR REPORT 8-Jun-2010::16:21:06 ===
** State machine <0.937.0> terminating
** Last event in was timeout
** When State == initialize
**  Data  ==
{state,<0.932.0>,undefined,2,undefined,undefined,undefined,
8594018,undefined,undefined,undefined,undefined,
undefined,undefined,6,undefined,
{<<"test">>,<<"25500">>},
{chstate,'r...@10.0.0.40 <mailto:r...@10.0.0.40>',
[{'r...@10.0.0.40
<mailto:r...@10.0.0.40>',{64,63443225844}}],
{128,
 [{0,'r...@10.0.0.40
<mailto:r...@10.0.0.40>'},
 
{11417981541647679048466287755595961091061972992,

   'r...@10.0.0.41 <mailto:r...@10.0.0.41>'},
 
{22835963083295358096932575511191922182123945984,

   'r...@10.0.0.40 <mailto:r...@10.0.0.40>'},
 
{34253944624943037145398863266787883273185918976,

   'r...@10.0.0.41 <mailto:r...@10.0.0.41>'},
 
{45671926166590716193865151022383844364247891968,

   'r...@10.0.0.40 <mailto:r...@10.0.0.40>'},
 
{57089907708238395242331438777979805455309864960,

   'r...@10.0.0.41 <mailto:r...@10.0.0.41>'},
[...]
 
{1415829711164312202009819681693899175291684651008,

   'r...@10.0.0.40 <mailto:r...@10.0.0.40>'},
 
{1427247692705959881058285969449495136382746624000,

   'r...@10.0.0.41 <mailto:r...@10.0.0.41>'},
 
{1438665674247607560106752257205091097473808596992,

   'r...@10.0.0.40 <mailto:r...@10.0.0.40>'},
 
{1450083655789255239155218544960687058564870569984,

   'r...@10.0.0.41
<mailto:r...@10.0.0.41>'}]},
{dict,0,16,16,8,80,48,
   
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
   
{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],

  [],
undefined}
** Reason for termination =
** {'function no

Re: Fwd: Errors when using basho_bench

2010-06-09 Thread Germain Maurice

The driver used is http_raw, as you can see in the benchmark config file:

{mode, max}.
{duration, 30}.
{concurrent, 20}.
{driver, basho_bench_driver_http_raw}.
{code_paths, ["deps/stats",
  "deps/ibrowse"]}.
{http_raw_ips, ["10.0.0.40", "10.0.0.41"]}.
{key_generator, {uniform_int, 1}}.
{value_generator, {fixed_bin, 80}}.
{operations, [{get, 2}, {update, 2}]}.
{http_raw_params, "?r=1&w=2"}.


The Webmachine errors occurred immediately after shutting down the 
benchmark, with no other requests.

We repeated the benchmark and got these errors again.


On 08/06/10 19:23, Dan Reverri wrote:

Hi Germain,

I've confirmed that the basho bench drivers for riakclient and 
riakc_pb do not provide a content-type when submitting key/value pairs 
to Riak. This should not cause problems while running the benchmark.


Which driver were you using when you noticed the webmachine errors? 
Were you making GET requests for keys created in the benchmark against 
the REST API after the benchmark?


Thanks,
Dan

On Tue, Jun 8, 2010 at 9:29 AM, Dan Reverri <d...@basho.com> wrote:


Hi Germain,

You should be able to work around this issue by specifying a full
list of default bucket properties with your change incorporated.
For example, try putting the following in your app.config:
{default_bucket_props, [{n_val, 2},
                        {allow_mult, false},
                        {last_write_wins, false},
                        {precommit, []},
                        {postcommit, []},
                        {chash_keyfun, {riak_core_util, chash_std_keyfun}}]}


This is the full list of default bucket properties that can be
found in "apps/riak_core/ebin/riak_core.app" with the n_val
changed to "2".




On Tue, Jun 8, 2010 at 9:05 AM, Germain Maurice
mailto:germain.maur...@linkfluence.net>> wrote:

You're right, i put this line in the app.config :
,{default_bucket_props, [{n_val, 2}]}

Maybe, i made a mistake...

Le 08/06/10 17:55, Dan Reverri a écrit :

Forwarding my reply to the mailing list.

-- Forwarded message --
From: *Dan Reverri* mailto:d...@basho.com>>
    Date: Tue, Jun 8, 2010 at 8:45 AM
Subject: Re: Errors when using basho_bench
To: Germain Maurice mailto:germain.maur...@linkfluence.net>>


Hi Germain,

The first error "{'function not
exported',[{riak_core_bucket,defaults,[]}," would occur when
a bucket does not have a "chash_keyfun" property defined.
This could occur if you have specified default bucket
properties in your app.config. There is a known issue where
bucket properties defined in the app.config file are not
merged with the hard coded defaults:
http://issues.basho.com/show_bug.cgi?id=123

I'm not sure why this issue would only occur when increasing
the partition size. Were other changes made to app.config?


The second error
"{case_clause,{error,timeout}}, 
[{riak_kv_wm_raw,content_types_provided,2}"
occurs when a key/value pair has been inserted into Riak
without a content-type and retrieved via the REST interface.
This can happen when inserting key/value pairs with the
native Erlang client or the protobuffs client. These clients
don't require that a content-type be set. I assume the values
in question were inserted by the basho bench utility so I
will look into how those values are inserted by basho bench.

Thanks,
Dan



On Tue, Jun 8, 2010 at 7:38 AM, Germain Maurice
mailto:germain.maur...@linkfluence.net>> wrote:

After viewing the demonstration of basho_bench during the
webinar of the last week, we led some tests.
Here are some errors we are meeting while doing a
benchmark with basho_bench.

Big notice : when the ring partition size was 64, i never
got this kind of error (excepting a crash of a node, due
to not enough memory i think, 1,5GB RAM). But now, i have
128 partitions (see my other message on the riak ML about
the ring partition size) and these errors occurs.


This error occured multiple times :

=ERROR REPORT 8-Jun-2010::16:21:06 ===
** State machine <0.937.0> terminating
** Last event in was timeout
** When State == initialize
**  Data  ==
{state,<0.932.0>,undefined,2,undefined,undefined,undefined,
   
8594018,undefined,undefined,undefined,undefined,


Log file

2010-06-09 Thread Germain Maurice

Hi everybody,

I'm not really sure about this, but what about Riak log files (I mean 
/var/log/*)?

I never saw options to configure them, nor anywhere that explains this.
Am I wrong?

Thank you all.

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [ANN] Riak Release 0.11.0

2010-06-11 Thread Germain Maurice

Hi,
Because of its append-only nature, stale data gets created, so how does 
Bitcask remove stale data?
With CouchDB the compaction process on our data never succeeded; too much 
data.

I really don't like having to launch this kind of process manually.

Thank you


On 09/06/10 22:21, David Smith wrote:
I will say, however, that the append-only nature of bitcask minimizes 
the opportunity to lose and/or corrupt data, not to mention obviating 
the need for log files ala InnoDB.



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak stats maintenance?

2010-06-18 Thread Germain Maurice

Hi Gareth, and all,

I got the same error, with the latest Riak release 0.11.0, but only once...
Mostly, when I requested "/stats" the host crashed (I have two hosts; 
both crashed).

The problem? I think it's a memory issue, because it kept consuming memory 
and then more and more swap space...

When everything is full, the host goes down :(


On 18/06/10 05:40, Gareth Stokes wrote:

hey guys,

So I have a cluster of 4 physical machines with a load balancer
sitting in front to handle requests going into Riak.
I thought it would be a good idea (not anymore) to use the /stats URL
to ping the machines in the cluster for their health. This is what I've
noticed in the logs every few days or so:


=ERROR REPORT 16-Jun-2010::03:48:37 ===
webmachine error: path="/stats"
{error,{exit,{timeout,{gen_server2,call,[riak_kv_stat,get_stats]}},
 [{gen_server2,call,2},
  {riak_kv_wm_stats,get_stats,0},
  {riak_kv_wm_stats,produce_body,2},
  {webmachine_resource,resource_call,3},
  {webmachine_resource,do,3},
  {webmachine_decision_core,resource_call,1},
  {webmachine_decision_core,decision,1},
  {webmachine_decision_core,handle_request,2}]}}

It lasts anywhere between 5 and 40 minutes.
What I'm thinking is that a Riak machine enters a "maintenance" mode
every now and then, and when it does it turns the /stats URL off.


Am I correct in thinking this, or should I be worried?
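(one small mitigation while this is investigated: give the health check a
hard client-side timeout so the poller itself never hangs on a slow
/stats, for example:)

curl -s --max-time 5 http://127.0.0.1:8098/stats || echo "stats slow or down"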

regards,
gareth stokes


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
   



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


basho_bench results

2010-06-18 Thread Germain Maurice

Hi all,

We ran some tests with basho_bench and we got some results here:
http://erralt.wordpress.com/2010/06/18/benching-riak-with-basho_bench/

I put the content of each configuration file we used to launch the
benchmarks inside the pictures (easier to compare).
Looking at the last benchmark, once 3000 seconds have elapsed, can we
consider that more than 2500 operations are performed per second? And can
we expect the same quantity of read and write requests every second
throughout the benchmark?


Regards,
Germain

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: basho_bench results

2010-06-22 Thread Germain Maurice

No one ?

On 18/06/2010 18:47, Germain Maurice wrote:

Hi all,

We led some tests with basho_bench and we got some results here :
http://erralt.wordpress.com/2010/06/18/benching-riak-with-basho_bench/

I put inside the pictures the content of each configuration file we 
used to launch the benchmarks (easier to compare).
When looking at the last benchmark and after 3000 seconds elapsed, can 
we considered that more than 2500 operations are done in one second ? 
On each second can we expect to get the same quantity of read and 
write requests throughout the benchmark ?


Regards,
Germain
--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
   


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: basho_bench results

2010-06-23 Thread Germain Maurice

Hi Sean,

Thank you for your answer; I'm relieved after reading your first impressions.
I tried to use the null driver as you advised, but neither {driver, null}
nor {driver, basho_bench_driver_null} worked.


INFO: Est. data size: 37.25 GB
INFO: Starting max worker: <0.64.0>
INFO: Starting max worker: <0.62.0>
INFO: Starting max worker: <0.60.0>
INFO: Starting max worker: <0.58.0>
DEBUG:Driver basho_bench_driver_null crashed: {function_clause,
   
[{basho_bench_driver_null,run,

 [update,
  
#Fun,
  
#Fun,

  51974]},
{basho_bench_worker,
 worker_next_op,1},
{basho_bench_worker,
 max_worker_run_loop,1}]}
INFO: Starting max worker: <0.56.0>
DEBUG:Driver basho_bench_driver_null crashed: {function_clause,
   
[{basho_bench_driver_null,run,

 [update,
  
#Fun,
  
#Fun,

  98189]},
{basho_bench_worker,
 worker_next_op,1},
{basho_bench_worker,
 max_worker_run_loop,1}]}


Any explanation ?
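(Looking at the crash again, the function_clause is raised for
run(update, ...), so the null driver in this release apparently has no
clause for the update operation my config generates. A config sketch that
sticks to other operations; the generator settings are just placeholders,
and if it still crashes, the driver source will show exactly which
operations it accepts:)

{mode, max}.
{duration, 10}.
{concurrent, 5}.
{driver, basho_bench_driver_null}.
{key_generator, {uniform_int, 100000}}.
{value_generator, {fixed_bin, 400000}}.
{operations, [{get, 4}, {put, 1}]}.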

Thanks

On 22/06/10 23:53, Sean Cribbs wrote:

Germain,

Sorry for not getting back to you sooner. Your graphs are slightly 
disconcerting to me for a few reasons.


1) The throughput graphs have consistent upward trends, which says to 
me that something - cache, buffer pools, whatever - aren't "warmed up" 
until the end of the test.  There might also be something external to 
the test going on.


2) You have a lot of concurrent workers. 5 should be more than enough 
to saturate your cluster when there's no throttling ({mode,max}).  Be 
sure your client machine is not the limiting factor here (and that 
you're not running basho_bench on the same machine as a node).


3) Your payload size is really large (fixed size of 400K). If this is 
representative of your application's workload, that's fine.  But you 
might try running the test with the "null" driver, which will detect 
how well the key and value generators perform in combination (doesn't 
actually hit Riak).  It should give you an idea of how costly the 
generation is, and thus what the upper limits of throughput are.


Sean Cribbs mailto:s...@basho.com>>
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Jun 22, 2010, at 4:38 PM, Germain Maurice wrote:


No one ?

On 18/06/2010 18:47, Germain Maurice wrote:

Hi all,

We led some tests with basho_bench and we got some results here :
http://erralt.wordpress.com/2010/06/18/benching-riak-with-basho_bench/

I put inside the pictures the content of each configuration file we 
used to launch the benchmarks (easier to compare).
When looking at the last benchmark and after 3000 seconds elapsed, 
can we considered that more than 2500 operations are done in one 
second ? On each second can we expect to get the same quantity of 
read and write requests throughout the benchmark ?


Regards,
Germain
--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
   


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Out of memory: kill process 8713 (beam.smp)

2010-06-23 Thread Germain Maurice

I'm bothering you with errors again...

I just discovered some more errors and I would like to know whether they
are normal. You can see them in the attached file.


--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

[51.760074] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, 
oomkilladj=0
[51.760081] beam.smp cpuset=/ mems_allowed=0
[51.760086] Pid: 7792, comm: beam.smp Not tainted 2.6.31-14-server 
#48-Ubuntu
[51.760089] Call Trace:
[51.760101]  [] ? cpuset_print_task_mems_allowed+0x98/0xa0
[51.760108]  [] oom_kill_process+0xce/0x290
[51.760112]  [] ? select_bad_process+0xea/0x120
[51.760116]  [] __out_of_memory+0x50/0xb0
[51.760120]  [] out_of_memory+0x126/0x1a0
[51.760126]  [] ? _spin_lock+0x9/0x10
[51.760131]  [] __alloc_pages_slowpath+0x498/0x4e0
[51.760136]  [] __alloc_pages_nodemask+0x12c/0x130
[51.760142]  [] alloc_pages_current+0x82/0xd0
[51.760146]  [] __page_cache_alloc+0x5f/0x70
[51.760151]  [] __do_page_cache_readahead+0xc1/0x160
[51.760155]  [] ra_submit+0x1c/0x20
[51.760160]  [] do_sync_mmap_readahead+0x9b/0xd0
[51.760164]  [] filemap_fault+0x304/0x3b0
[51.760169]  [] __do_fault+0x4f/0x4e0
[51.760173]  [] handle_mm_fault+0x1a7/0x3c0
[51.760180]  [] ? default_spin_lock_flags+0x9/0x10
[51.760185]  [] do_page_fault+0x165/0x360
[51.760189]  [] page_fault+0x25/0x30
[51.760192] Mem-Info:
[51.760195] Node 0 DMA per-cpu:
[51.760198] CPU0: hi:0, btch:   1 usd:   0
[51.760201] CPU1: hi:0, btch:   1 usd:   0
[51.760203] Node 0 DMA32 per-cpu:
[51.760206] CPU0: hi:  186, btch:  31 usd: 160
[51.760209] CPU1: hi:  186, btch:  31 usd: 156
[51.760214] Active_anon:588242 active_file:699 inactive_anon:147340
[51.760216]  inactive_file:643 unevictable:0 dirty:0 writeback:0 unstable:0
[51.760217]  free:4846 slab:9408 mapped:0 pagetables:2824 bounce:0
[51.760220] Node 0 DMA free:12080kB min:32kB low:40kB high:48kB 
active_anon:2012kB inactive_anon:1576kB active_file:20kB inactive_file:28kB 
unevictable:0kB present:15344kB pages_scanned:0 all_unreclaimable? no
[51.760227] lowmem_reserve[]: 0 3013 3013 3013
[51.760233] Node 0 DMA32 free:7304kB min:7008kB low:8760kB high:10512kB 
active_anon:2350956kB inactive_anon:587784kB active_file:2776kB 
inactive_file:2544kB unevictable:0kB present:3086212kB pages_scanned:5376 
all_unreclaimable? no
[51.760240] lowmem_reserve[]: 0 0 0 0
[51.760245] Node 0 DMA: 2*4kB 1*8kB 4*16kB 3*32kB 2*64kB 4*128kB 2*256kB 
5*512kB 4*1024kB 2*2048kB 0*4096kB = 12080kB
[51.760258] Node 0 DMA32: 1118*4kB 4*8kB 19*16kB 13*32kB 4*64kB 5*128kB 
1*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 7400kB
[51.760271] 1910 total pagecache pages
[51.760273] 530 pages in swap cache
[51.760276] Swap cache stats: add 270637, delete 270107, find 48245/49850
[51.760279] Free swap  = 0kB
[51.760280] Total swap = 979956kB
[51.772535] 786344 pages RAM
[51.772538] 13514 pages reserved
[51.772539] 1381 pages shared
[51.772541] 766176 pages non-shared
[51.772545] Out of memory: kill process 7754 (beam.smp) score 639470 or a 
child
[51.772608] Killed process 7754 (beam.smp)
[601340.870691] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, 
oomkilladj=0
[601340.870698] beam.smp cpuset=/ mems_allowed=0
[601340.870703] Pid: 8707, comm: beam.smp Not tainted 2.6.31-14-server 
#48-Ubuntu
[601340.870706] Call Trace:
[601340.870718]  [] ? cpuset_print_task_mems_allowed+0x98/0xa0
[601340.870725]  [] oom_kill_process+0xce/0x290
[601340.870729]  [] ? select_bad_process+0xea/0x120
[601340.870734]  [] __out_of_memory+0x50/0xb0
[601340.870738]  [] out_of_memory+0x126/0x1a0
[601340.870744]  [] ? _spin_lock+0x9/0x10
[601340.870748]  [] __alloc_pages_slowpath+0x498/0x4e0
[601340.870753]  [] __alloc_pages_nodemask+0x12c/0x130
[601340.870759]  [] alloc_pages_current+0x82/0xd0
[601340.870763]  [] __page_cache_alloc+0x5f/0x70
[601340.870768]  [] __do_page_cache_readahead+0xc1/0x160
[601340.870772]  [] ra_submit+0x1c/0x20
[601340.870777]  [] do_sync_mmap_readahead+0x9b/0xd0
[601340.870781]  [] filemap_fault+0x304/0x3b0
[601340.870785]  [] __do_fault+0x4f/0x4e0
[601340.870790]  [] handle_mm_fault+0x1a7/0x3c0
[601340.870796]  [] ? default_spin_lock_flags+0x9/0x10
[601340.870801]  [] do_page_fault+0x165/0x360
[601340.870806]  [] page_fault+0x25/0x30
[601340.870809] Mem-Info:
[601340.870811] Node 0 DMA per-cpu:
[601340.870815] CPU0: hi:0, btch:   1 usd:   0
[601340.870817] CPU1: hi:0, btch:   1 usd:   0
[601340.870820] Node 0 DMA32 per-cpu:
[601340.870823] CPU0: hi:  186, btch:  31 usd:  80
[601340.870826] CPU1: hi:  186, btch:  31 usd: 143
[601340.870831] Active_anon:588608 active_file:728 inactive_anon:147510
[601340.870832]  inactive_file:645 unevictable:0 dirty:0 writeback:0 unstable:0
[601340.870834]  free:4749 slab:8861

Re: Out of memory: kill process 8713 (beam.smp)

2010-06-24 Thread Germain Maurice

Dan,

I don't know exactly when the errors occurred because there are no
timestamps :(
I'm sure it was during a basho_bench test (no map-reduce was launched).
I see these errors on all the nodes of my cluster.
However, the Riak nodes kept running. I'm now using Riak 0.11.0 with
Innostore; previously I ran some tests with Bitcask. I believe the errors
occurred while I was using Bitcask.
Each node has 3 GB of memory installed. I assume this could be a bit small,
but why a crash?!
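(A rough back-of-the-envelope, going by your note that Bitcask keeps every
key in memory: if I assume something like 40 bytes of per-key overhead on
top of the key itself, which is only a guess, then 10 million 20-byte keys
would need roughly 10,000,000 x 60 B, i.e. about 600 MB of key data per
node, before counting the Erlang VM, buffers and the OS itself.)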


Thank you for the links; I have only glanced at them so far.


On 23/06/10 19:37, Dan Reverri wrote:

Hi Germain,

I'm sorry if I've missed any background information that may have been 
provided earlier but can you tell me the scenario that led up to these 
errors? Are you running a load test against a cluster? Are you seeing 
these out of memory errors in one of your nodes? How much memory does 
the machine running the node have? What type of load is being placed 
on the node (get, put, map reduce, etc.)?


If you are using the Bitcask backend, which is default in the latest 
release, you should be aware that all keys held on the node must fit 
in memory (this does NOT include the values, just the keys). Here is a 
related discussion from IRC:

http://gist.github.com/438065

Also worth reviewing is this discussion from the mailing list:
http://riak.markmail.org/search/?q=bitcask%20key%20memory#query:bitcask%20key%20memory+page:1+mid:2rreshpri7laq57h+state:results

Thank you,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Wed, Jun 23, 2010 at 12:24 PM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


I'm still disturbing you with errors...

I just discovered some errors and i would like to know if it's
normal...
    You can see the errors in the attached file.


-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Out of memory: kill process 8713 (beam.smp)

2010-06-24 Thread Germain Maurice

Dan,

I switched back to Bitcask; I have a bucket with n_val = 2 over two nodes
(each node has 3 GB RAM).


I requested:
curl "http://10.0.0.40:8098/riak/test?keys=stream"
and I got this error:
[...]"81902","50339","592","56815","55290","8857","251curl: (18) 
transfer closed with outstanding read data remaining

54","12321","17879","39060","48516","79056","72858","17509","15484","14623","61998","96868","55237","8196",
"68830","39850","44175","68814","18970","12112","15228","67859","24275","61334","67503","86341","75438","68686",
"45040","10442","20863","22342","68666","86289","12816","22693","4919","90026","69932","24564","19181","50704",
"59385","95929","34755","97493","7987","82073","70540","89454","44260","53609"]}

The riak process ("riak console") died.

Have a look at the dmesg.txt attached to this message.
Can you confirm that there is simply not enough RAM for Riak?



On 24/06/10 10:28, Germain Maurice wrote:

Dan,

I don't know exactly when the errors occured because there are no 
timestamps :(
I'm sure that's during a basho_bench test (no map-reduce was 
launched). I see these errors on the all nodes of my cluster.
However, the riak nodes kept running, now i'm using riak 0.11.0 with 
innostore, previously i led some tests with bitcask. I believe that's 
during i was using Bitcask the errors occured.
Each node has 3GB memory installed, i assume this could be a bit 
small, but why a crash ?!


Thank you for the links, i just got a glance on them.


On 23/06/10 19:37, Dan Reverri wrote:

Hi Germain,

I'm sorry if I've missed any background information that may have 
been provided earlier but can you tell me the scenario that led up to 
these errors? Are you running a load test against a cluster? Are you 
seeing these out of memory errors in one of your nodes? How much 
memory does the machine running the node have? What type of load is 
being placed on the node (get, put, map reduce, etc.)?


If you are using the Bitcask backend, which is default in the latest 
release, you should be aware that all keys held on the node must fit 
in memory (this does NOT include the values, just the keys). Here is 
a related discussion from IRC:

http://gist.github.com/438065

Also worth reviewing is this discussion from the mailing list:
http://riak.markmail.org/search/?q=bitcask%20key%20memory#query:bitcask%20key%20memory+page:1+mid:2rreshpri7laq57h+state:results

Thank you,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Wed, Jun 23, 2010 at 12:24 PM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


I'm still disturbing you with errors...

I just discovered some errors and i would like to know if it's
normal...
You can see the errors in the attached file.


-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
   



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

[1300099.070113] __ratelimit: 580 callbacks suppressed
[1300099.070117] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, 
oomkilladj=0
[1300099.070121] beam.smp cpuset=/ mems_allowed=0
[1300099.070124] Pid: 7382, comm: beam.smp Not tainted 2.6.31-22-server 
#60-Ubuntu
[1300099.070127] Call Trace:
[1300099.070137]  [] ? 
cpuset_print_task_mems_allowed+0x98/0xa0
[1300099.070143]  [] oom_kill_process+0xce/0x280
[1300099.070146]  [] ? select_bad_process+0xea/0x120
[1300099.070150]  [] __out_of_memory+0x50/0xb0
[1300099.070153]  [] out_of_memory+0x126/0x1a0
[1300099.070158]  [] ? _spin_lock+0x9/0x10
[1300099.070162]  [] __alloc_pages_slowpath+0x4f1/0x560
[1300099.070166]  [] __alloc_pages_nodemask+0x12c/0x130
[1300099.070170]  [] alloc_pages_current+0x82/0xd0
[1300099.070174]  [] __page_c

Re: Any performance comparison / best practice advice for choosing a riak backend ?

2010-08-30 Thread Germain Maurice

Hi Neville,
I would just add that riak_kv_dets_backend is limited to 2 GB per file,
i.e. per Riak partition. Just for your information.


On 29/08/10 16:59, Sean Cribbs wrote:
I'm not certain the purpose of the gb_trees backend as opposed to ets, 
although I imagine it might have faster lookups (and slower inserts). 
 The fs_backend uses one file per object, meaning that you will need 
lots of files open on a live system.


Sean Cribbs mailto:s...@basho.com>>
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Aug 29, 2010, at 3:33 AM, Neville Burnell wrote:


Thanks Sean,

Do riak_kv_gb_trees_backend and riak_kv_fs_backend have particular 
strengths/weaknesses?


Kind Regards

Neville
When would one use

On 29 August 2010 01:02, Sean Cribbs <mailto:s...@basho.com>> wrote:


Your choice should be dictated by your use-case.  In most
situations, "riak_kv_bitcask_backend" (the default) will work for
you. It stores data on disk in a fast (append-only)
log-structured file format.  If your data is transient or doesn't
need to persist across restarts (and needs to be fast), try
"riak_kv_ets_backend" or "riak_kv_cache_backend"; the latter uses
a global LRU timeout.  If you want to use several of the backends
in the same cluster (for different buckets), use the
"riak_kv_multi_backend" and configure each backend separately.

Sean Cribbs mailto:s...@basho.com>>
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Aug 28, 2010, at 5:10 AM, Neville Burnell wrote:


Hi,

I'm new to riak, and have been busily reading though the wiki,
watching the videos, and catching up on the mail list, so I will
have lots of questions over the next few weeks - so sorry 

To begin, I'm curious about the characteristics of the seven
backends for riak [1]

   1. riak_kv_bitcask_backend - stores data to bitcask
   2. riak_kv_fs_backend - stores data directly to files in a
  nested directory structure on disk
   3. riak_kv_ets_backend - stores data in ETS tables (which
  makes it volatile storage, but great for debugging)
   4. riak_kv_dets_backend - stores data on-disk in DETS tables
   5. riak_kv_gb_trees_backend - stores data using Erlang gb_trees
   6. riak_kv_cache_backend - turns a bucket into a
  memcached-type memory cache, and ejects the least recently
  used objects either when the cache becomes full or the
  object's lease expires
   7. riak_kv_multi_backend - configure per-bucket backends

Unfortunately this amount of choice means I need to do my
homework to make an informed decision ;-) so I'd love any
pointers or to hear any advice on performance comparisons, best
practices, backends for development vs deployment etc

Kind Regards

Neville

[1]
http://wiki.basho.com/display/RIAK/How+Things+Work#HowThingsWork-Backends
___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com






___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


UPDATE HTTP method for REST API ?

2010-09-22 Thread Germain Maurice

 Hello,

I see on the wiki: "API that allowed users to manipulate data using
standard HTTP methods: GET, PUT, UPDATE and DELETE."

Did you mean POST instead of UPDATE?
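(For what it's worth, the REST interface I'm using only ever needs PUT and
POST for writes; an illustrative pair of requests, with bucket and key made
up, and assuming I read the docs correctly that a POST to a bucket asks
Riak to generate the key:)

curl -X PUT -H "Content-Type: text/plain" -d 'v1' \
 http://127.0.0.1:8098/riak/mybucket/mykey

curl -i -X POST -H "Content-Type: text/plain" -d 'v2' \
 http://127.0.0.1:8098/riak/mybucket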


Thanks !

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Map reduce timeout

2010-10-05 Thread Germain Maurice

 Hello everybody,

I'm trying to execute some map/reduce jobs and I have a problem with the
timeout.
I'm working with riak_kv_version: <<"0.12.1">> on a single-node "cluster".


My input is the "people" bucket, which contains more than 200k documents,
so I decided to set the map/reduce timeout to 60 milliseconds.

Obviously, the timeout is not taken into account.
Below you'll find the JSON object I sent to Riak's /mapred and the Internal
Server Error that came back less than 60 seconds later, plus the error
message I got in the riak console.

=
> date; perl riak.pl ; date
mardi 5 octobre 2010, 17:47:58 (UTC+0200)
--- "{\"timeout\":60,\"query\":[{\"map\":{\"source\":\"function(v) { 
var data = Riak.mapValuesJson(v)[0] ; if ( data.faved_what != undefined 
&& data.faved_what.total != undefined ) { return  [ 
data.faved_what.total ] ; } return []; 
}\",\"language\":\"javascript\",\"arg\":[],\"keep\":false}},{\"reduce\":{\"source\":\"function(values, 
arg) { return [ values.reduce( function (item, total) { return item + 
total*1 ; }, 0 ) ] 
}\",\"language\":\"javascript\",\"arg\":[],\"keep\":true}}],\"inputs\":\"people\"}"

--- 500 Internal Server Error

mardi 5 octobre 2010, 17:48:53 (UTC+0200)
=


=ERROR REPORT 5-Oct-2010::17:48:53 ===
Failed reduce: {timeout,
   {gen_server,call,
   [<0.143.0>,
{dispatch,
{<0.143.0>,#Ref<0.0.5.43514>},
{{jsanon,
<<"function(values, arg) { return [ values.reduce( function (item, 
total) { return item + total*1 ; }, 0 ) ] }">>},

 [<<"2607">>,<<"261">>,<<"1753">>,<<"81">>,
<<"18190">>,<<"581">>,<<"811">>,<<"732">>,
<<"1831">>,<<"967">>,<<"2650">>,<<"222">>,
<<"674">>,<<"2136">>,<<"1467">>,<<"362">>,
<<"19965">>,<<"2025">>,<<"348">>,<<"4817">>,
<<"286">>],
 []}},
1]}}

=ERROR REPORT==== 5-Oct-2010::17:48:53 ===
** State machine <0.24966.6> terminating
** Last event in was {inputs,[<<"286">>]}
** When State == executing
**  Data  == {state,1,riak_kv_reduce_phase,
 {state,
 {javascript,
 {reduce,
 {jsanon,
<<"function(values, arg) { return [ values.reduce( function (item, 
total) { return item + total*1 ; }, 0 ) ] }">>},

 [],true}},
 [],[]},
 true,true,undefined,[],undefined,1,<0.24964.6>,66}
** Reason for termination =
** {error,failed_reduce}


Any idea about this ??

Thank you,
I'll go back to working on my slides for Saturday :)


--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Map reduce timeout

2010-10-05 Thread Germain Maurice

Thank you, Dan. :)
If I had searched the previous emails on the mailing list, I would have
found the solution. :/



On 05/10/10 19:11, Dan Reverri wrote:

Hi Germain,

There is an issue in 0.12 that causes whole bucket map/reduce jobs to 
ignore user supplied timeouts. This happens if the results are not 
streamed. You can activate streaming by adding "chunked=true" to the 
end of your request:
curl -v -XPOST http://localhost:8098/mapred?chunked=true -d 
"{\"timeout\":60,\"query\":[{\"map\":{\"source\":\"function(v) { 
var data = Riak.mapValuesJson(v)[0] ; if ( data.faved_what != 
undefined && data.faved_what.total != undefined ) { return  [ 
data.faved_what.total ] ; } return []; 
}\",\"language\":\"javascript\",\"arg\":[],\"keep\":false}},{\"reduce\":{\"source\":\"function(values, 
arg) { return [ values.reduce( function (item, total) { return item + 
total*1 ; }, 0 ) ] 
}\",\"language\":\"javascript\",\"arg\":[],\"keep\":true}}],\"inputs\":\"people\"}"


This issue has been fixed in 0.13:
https://issues.basho.com/show_bug.cgi?id=523

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 5, 2010 at 9:05 AM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


 Hello everybody,

I'm trying to execute some mapreduce jobs and i have a problem
with timeout.
I'm working with riak_kv_version : <<"0.12.1">> with a single node
"cluster".

My inputs are "people" bucket which contains more than 200k
documents. So i have decided to set mapreduce timeout to 60
milliseconds.
Obviously, it doesn't take care of timeout.
You'll find below the JSON object i sent to Riak/mapred and the
Internal Server Error less than 60 seconds after. I added the
error message i got
in riak console.

=
> date; perl riak.pl <http://riak.pl> ; date
mardi 5 octobre 2010, 17:47:58 (UTC+0200)
---
"{\"timeout\":60,\"query\":[{\"map\":{\"source\":\"function(v)
{ var data = Riak.mapValuesJson(v)[0] ; if ( data.faved_what !=
undefined && data.faved_what.total != undefined ) { return  [
data.faved_what.total ] ; } return [];

}\",\"language\":\"javascript\",\"arg\":[],\"keep\":false}},{\"reduce\":{\"source\":\"function(values,
arg) { return [ values.reduce( function (item, total) { return
item + total*1 ; }, 0 ) ]

}\",\"language\":\"javascript\",\"arg\":[],\"keep\":true}}],\"inputs\":\"people\"}"
--- 500 Internal Server Error

mardi 5 octobre 2010, 17:48:53 (UTC+0200)
=


=ERROR REPORT 5-Oct-2010::17:48:53 ===
Failed reduce: {timeout,
  {gen_server,call,
  [<0.143.0>,
   {dispatch,
   {<0.143.0>,#Ref<0.0.5.43514>},
   {{jsanon,
<<"function(values, arg) { return [ values.reduce( function (item,
total) { return item + total*1 ; }, 0 ) ] }">>},
[<<"2607">>,<<"261">>,<<"1753">>,<<"81">>,
<<"18190">>,<<"581">>,<<"811">>,<<"732">>,
<<"1831">>,<<"967">>,<<"2650">>,<<"222">>,
<<"674">>,<<"2136">>,<<"1467">>,<<"362">>,
<<"19965">>,<<"2025">>,<<"348">>,<<"4817">>,
<<"286">>],
[]}},
   1]}}

=ERROR REPORT 5-Oct-2010::17:48:53 ===
** State machine <0.24966.6> terminating
** Last event in was {inputs,[<<"286">>]}
** When State == executing
**  Data  == {state,1,riak_kv_reduce_phase,
{state,
{javascript,
{reduce,
{jsanon,
<<"function(values, arg) { return [ values.reduce( function (item,
total) { return item + total*1 ; }, 0 ) ] }">>},
[],true}},
[],[]},
   
true,true,undefined,[],undefined,1,<0.24964.6>,66}

** Reason for termination =
** {error,failed_reduce}


Any idea about this ??

Thank you,
I'll go back to work on my slides for saturday :)


-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


About CAP theorem and Riak way of thinking

2010-10-08 Thread Germain Maurice

 Hi everybody,

Not really a technical question; I'm thinking about the CAP theorem and the
Riak way of thinking.


The CAP theorem says: "You can't get Consistency, Availability and
Partition tolerance at the same time."


The advice is to pick two of them and not try to satisfy all three.
Riak says: "Pick two at each operation".

So, am I right if I say: "the n_val of a bucket is for Partition
Tolerance, small R/W quorums are for Availability, and high R/W/DW
quorums are for Consistency"?
I think high W/DW quorums now will help keep read requests working later,
even across partitions.

When reading, if we use high R quorums, Partition Tolerance is lower.

I tried to list each configuration for each operation; could you correct
me where I am wrong? (A quick curl sketch of per-request quorums follows
the list.)

N=3, R=1 :: A,P
N=3, R=3 :: C
N=1, R=1 :: A
N=3, W=3 :: C
N=3, W=1 :: A,P
N=1, W=1 :: A
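The per-operation part, as I understand it, is just query parameters on
each request (bucket and key invented):

curl "http://127.0.0.1:8098/riak/mybucket/mykey?r=1"
curl "http://127.0.0.1:8098/riak/mybucket/mykey?r=3"
curl -X PUT -H "Content-Type: text/plain" -d 'v' \
 "http://127.0.0.1:8098/riak/mybucket/mykey?w=3&dw=3"

The first read favours availability; the second read and the write favour
consistency.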

Thanks

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak backend when using Riak search

2010-10-12 Thread Germain Maurice

 Hi everybody,

Are there any requirements concerning the Riak storage backend when we are
using Riak Search?

I think it's independent, but we need to be sure about this.

Thank you.

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak backend when using Riak search

2010-10-12 Thread Germain Maurice

Argh, I replied directly to Dan :/

==
Hi Germain,

You can use Riak Search as you would a normal installation of Riak KV; 
Riak Search is a superset of Riak KV. You can modify the riak_kv portion 
of Riak Search exactly as you would a typical Riak KV installation.


One thing to note; along with index data Riak Search will also store a 
representation of indexed documents as an object in Riak KV. For 
example, indexing a document in the "search" index will do the following:
1. Store indexed data in the Merge Index backend using the merge_index 
data_root

2. Store a new document in Riak KV using the configured backend:
Bucket: _rsid_search
Key: DocId
Value: Data from the document

The document's data representation could be retrieved through the 
standard REST API as follows:

http://localhost:8098/riak/_rsid_search/DocId

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 12, 2010 at 12:11 PM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:

Not really, Dan; it was about pure data storage, not about index storage.

I just tried Riak Search with Innostore for my buckets and it works; I was
in a bit of a hurry (and it was simple to run a quick test).

However, thank you for answering.

On 12/10/10 20:47, Dan Reverri wrote:
Riak Search uses a custom backend called Merge Index. The Riak Search 
backend is configurable in app.config, however, Merge Index is the 
only backend that works for search:

 {riak_search, [
{search_backend, merge_index_backend},
{java_home, "/usr"}
   ]},

Merge index is configurable in app.config as well:
%% Merge Index Config
 {merge_index, [
{data_root, "data/merge_index"},
{buffer_rollover_size, 10485760},
{buffer_delayed_write_size, 524288},
{buffer_delayed_write_ms, 2000},
{max_compact_segments, 20},
{segment_query_read_ahead_size, 65536},
{segment_compaction_read_ahead_size, 5242880},
{segment_file_buffer_size, 20971520},
{segment_delayed_write_size, 20971520},
{segment_delayed_write_ms, 1},
{segment_full_read_size, 20971520},
{segment_block_size, 32767},
{segment_values_staging_size, 1000},
{segment_values_compression_threshold, 0},
{segment_values_compression_level, 1}
   ]},

The data_root parameter will tell Merge Index where to store it's data 
files.


Does this answer your question?

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 12, 2010 at 9:51 AM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


 Hi everybody,

Is there any requirements concerning riak storage backend when we
are using Riak Search ?
I think it's independant but we have to be insured about this.

Thank you..

-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak backend when using Riak search

2010-10-20 Thread Germain Maurice

 Hello,

I have to confess I tested Riak Search with Innostore too quickly, because
I just got these error messages when installing a new node:



> riaksearch console

=ERROR REPORT 20-Oct-2010::14:32:58 ===
storage_backend innostore_riak is non-loadable.

=INFO REPORT 20-Oct-2010::14:32:58 ===
application: riak_kv
exited: {invalid_storage_backend,{riak_kv_app,start,[normal,[]]}}
type: permanent
/usr/lib/riaksearch/lib/os_mon-2.2.5/priv/bin/memsup: Erlang has closed.
 Erlang 
has closed


> riaksearch console

=ERROR REPORT 20-Oct-2010::14:39:35 ===
storage_backend riak_kv_innostore_backend is non-loadable.

=INFO REPORT 20-Oct-2010::14:39:35 ===
application: riak_kv
exited: {invalid_storage_backend,{riak_kv_app,start,[normal,[]]}}
type: permanent
/usr/lib/riaksearch/lib/os_mon-2.2.5/priv/bin/memsup: Erlang has closed.

I'm using Riak Search 0.13.0 and innostore 1.0.2-2-amd64.

Is this normal? Dan told me it should not be incompatible with Innostore.

Thanks


On 12/10/10 21:48, Germain Maurice wrote:

Argh, i replied directly to Dan :/

==
Hi Germain,

You can use Riak Search as you would a normal installation of Riak KV; 
Riak Search is a superset of Riak KV. You can modify the riak_kv 
portion of Riak Search exactly as you would a typical Riak KV 
installation.


One thing to note; along with index data Riak Search will also store a 
representation of indexed documents as an object in Riak KV. For 
example, indexing a document in the "search" index will do the following:
1. Store indexed data in the Merge Index backend using the merge_index 
data_root

2. Store a new document in Riak KV using the configured backend:
Bucket: _rsid_search
Key: DocId
Value: Data from the document

The document's data representation could be retrieved through the 
standard REST API as follows:

http://localhost:8098/riak/_rsid_search/DocId

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 12, 2010 at 12:11 PM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:

Not really Dan, it was about pure data storage not about indexes storage.

I just tried Riak Search with Innostore for my buckets and it works, i 
was a bit hurry (and it was simple to make some test).

However, thank you for having answered.

On 12/10/10 20:47, Dan Reverri wrote:
Riak Search uses a custom backend called Merge Index. The Riak Search 
backend is configurable in app.config, however, Merge Index is the 
only backend that works for search:

 {riak_search, [
{search_backend, merge_index_backend},
{java_home, "/usr"}
   ]},

Merge index is configurable in app.config as well:
%% Merge Index Config
 {merge_index, [
{data_root, "data/merge_index"},
{buffer_rollover_size, 10485760},
{buffer_delayed_write_size, 524288},
{buffer_delayed_write_ms, 2000},
{max_compact_segments, 20},
{segment_query_read_ahead_size, 65536},
{segment_compaction_read_ahead_size, 5242880},
{segment_file_buffer_size, 20971520},
{segment_delayed_write_size, 20971520},
{segment_delayed_write_ms, 1},
{segment_full_read_size, 20971520},
{segment_block_size, 32767},
{segment_values_staging_size, 1000},
{segment_values_compression_threshold, 0},
{segment_values_compression_level, 1}
   ]},

The data_root parameter will tell Merge Index where to store it's 
data files.


Does this answer your question?

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 12, 2010 at 9:51 AM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


 Hi everybody,

Is there any requirements concerning riak storage backend when we
are using Riak Search ?
I think it's independant but we have to be insured about this.

Thank you..

-- 
Germain Maurice

Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net

___
ria

Re: Riak backend when using Riak search

2010-10-20 Thread Germain Maurice

Sean,

I already installed the latest Innostore and I already tried
"riak_kv_innostore_backend", but got:

=ERROR REPORT 20-Oct-2010::14:39:35 ===
storage_backend riak_kv_innostore_backend is non-loadable.

I tried with both Innostore backend names in app.config.

On 20/10/10 15:12, Sean Cribbs wrote:

Germain,

There are new innostore packages available on downloads.basho.com 
<http://downloads.basho.com>, but also the name of the backend is 
"riak_kv_innostore_backend", which was changed sometime before 0.12.


Sean Cribbs mailto:s...@basho.com>>
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Oct 20, 2010, at 8:47 AM, Germain Maurice wrote:


Hello,

I have to confess I tested Riak Search with Innostore to quickly 
because I just got these error messages when installing a new node :



> riaksearch console

=ERROR REPORT 20-Oct-2010::14:32:58 ===
storage_backend innostore_riak is non-loadable.

=INFO REPORT 20-Oct-2010::14:32:58 ===
application: riak_kv
exited: {invalid_storage_backend,{riak_kv_app,start,[normal,[]]}}
type: permanent
/usr/lib/riaksearch/lib/os_mon-2.2.5/priv/bin/memsup: Erlang has closed.
 
Erlang has closed



> riaksearch console

=ERROR REPORT 20-Oct-2010::14:39:35 ===
storage_backend riak_kv_innostore_backend is non-loadable.

=INFO REPORT 20-Oct-2010::14:39:35 ===
application: riak_kv
exited: {invalid_storage_backend,{riak_kv_app,start,[normal,[]]}}
type: permanent
/usr/lib/riaksearch/lib/os_mon-2.2.5/priv/bin/memsup: Erlang has closed.

I'm using Riak Search 0.13.0 and innostore 1.0.2-2-amd64.

Is it normal ? Dan said me it could be not incompatible with innostore.

Thanks


On 12/10/10 21:48, Germain Maurice wrote:

Argh, i replied directly to Dan :/

==
Hi Germain,

You can use Riak Search as you would a normal installation of Riak 
KV; Riak Search is a superset of Riak KV. You can modify the riak_kv 
portion of Riak Search exactly as you would a typical Riak KV 
installation.


One thing to note; along with index data Riak Search will also store 
a representation of indexed documents as an object in Riak KV. For 
example, indexing a document in the "search" index will do the 
following:
1. Store indexed data in the Merge Index backend using the 
merge_index data_root

2. Store a new document in Riak KV using the configured backend:
Bucket: _rsid_search
Key: DocId
Value: Data from the document

The document's data representation could be retrieved through the 
standard REST API as follows:

http://localhost:8098/riak/_rsid_search/DocId

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 12, 2010 at 12:11 PM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:
Not really Dan, it was about pure data storage not about indexes 
storage.


I just tried Riak Search with Innostore for my buckets and it works, 
i was a bit hurry (and it was simple to make some test).

However, thank you for having answered.

On 12/10/10 20:47, Dan Reverri wrote:
Riak Search uses a custom backend called Merge Index. The Riak 
Search backend is configurable in app.config, however, Merge Index 
is the only backend that works for search:

 {riak_search, [
{search_backend, merge_index_backend},
{java_home, "/usr"}
   ]},

Merge index is configurable in app.config as well:
%% Merge Index Config
 {merge_index, [
{data_root, "data/merge_index"},
{buffer_rollover_size, 10485760},
{buffer_delayed_write_size, 524288},
{buffer_delayed_write_ms, 2000},
{max_compact_segments, 20},
{segment_query_read_ahead_size, 65536},
{segment_compaction_read_ahead_size, 5242880},
{segment_file_buffer_size, 20971520},
{segment_delayed_write_size, 20971520},
{segment_delayed_write_ms, 1},
{segment_full_read_size, 20971520},
{segment_block_size, 32767},
{segment_values_staging_size, 1000},
{segment_values_compression_threshold, 0},
{segment_values_compression_level, 1}
   ]},

The data_root parameter will tell Merge Index where to store it's 
data files.


Does this answer your question?

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 12, 2010 at 9:51 AM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:


 Hi everybody,

Is there any requirements concerning riak storage backend when
we are using Riak Search ?
I think it's independant but we have to be insured about

Re: Riak backend when using Riak search

2010-10-20 Thread Germain Maurice

It works!!

Thank you, Sean.


On 20/10/10 15:46, Sean Cribbs wrote:
Aha, then you need to make sure the innostore library is in the load 
path. Add this to the vm.args:


-pz /path/to/innostore/ebin

Of course, replace that path with the real one.
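(If you are not sure where the package put it, something like this should
turn it up; the package name here is a guess:)

dpkg -L innostore | grep '/ebin$'
find /usr/lib -maxdepth 4 -type d -name 'innostore-*' 2>/dev/null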

Sean Cribbs mailto:s...@basho.com>>
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Oct 20, 2010, at 9:28 AM, Germain Maurice wrote:


Sean,

I already installed the latest innostore and i already tried with 
"riak_kv_innostore_backend" but got :

=ERROR REPORT 20-Oct-2010::14:39:35 ===
storage_backend riak_kv_innostore_backend is non-loadable.

I tried with the both innostore backend name in app.config.

On 20/10/10 15:12, Sean Cribbs wrote:

Germain,

There are new innostore packages available on downloads.basho.com 
<http://downloads.basho.com/>, but also the name of the backend is 
"riak_kv_innostore_backend", which was changed sometime before 0.12.


Sean Cribbs mailto:s...@basho.com>>
Developer Advocate
Basho Technologies, Inc.
http://basho.com/

On Oct 20, 2010, at 8:47 AM, Germain Maurice wrote:


Hello,

I have to confess I tested Riak Search with Innostore to quickly 
because I just got these error messages when installing a new node :



> riaksearch console

=ERROR REPORT 20-Oct-2010::14:32:58 ===
storage_backend innostore_riak is non-loadable.

=INFO REPORT 20-Oct-2010::14:32:58 ===
application: riak_kv
exited: {invalid_storage_backend,{riak_kv_app,start,[normal,[]]}}
type: permanent
/usr/lib/riaksearch/lib/os_mon-2.2.5/priv/bin/memsup: Erlang has 
closed.
 
Erlang has closed



> riaksearch console

=ERROR REPORT 20-Oct-2010::14:39:35 ===
storage_backend riak_kv_innostore_backend is non-loadable.

=INFO REPORT 20-Oct-2010::14:39:35 ===
application: riak_kv
exited: {invalid_storage_backend,{riak_kv_app,start,[normal,[]]}}
type: permanent
/usr/lib/riaksearch/lib/os_mon-2.2.5/priv/bin/memsup: Erlang has 
closed.


I'm using Riak Search 0.13.0 and innostore 1.0.2-2-amd64.

Is it normal ? Dan said me it could be not incompatible with innostore.

Thanks


On 12/10/10 21:48, Germain Maurice wrote:

Argh, i replied directly to Dan :/

==
Hi Germain,

You can use Riak Search as you would a normal installation of Riak 
KV; Riak Search is a superset of Riak KV. You can modify the 
riak_kv portion of Riak Search exactly as you would a typical Riak 
KV installation.


One thing to note; along with index data Riak Search will also 
store a representation of indexed documents as an object in Riak 
KV. For example, indexing a document in the "search" index will do 
the following:
1. Store indexed data in the Merge Index backend using the 
merge_index data_root

2. Store a new document in Riak KV using the configured backend:
Bucket: _rsid_search
Key: DocId
Value: Data from the document

The document's data representation could be retrieved through the 
standard REST API as follows:

http://localhost:8098/riak/_rsid_search/DocId

Thanks,
Dan

Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com <mailto:d...@basho.com>


On Tue, Oct 12, 2010 at 12:11 PM, Germain Maurice 
<mailto:germain.maur...@linkfluence.net>> wrote:
Not really Dan, it was about pure data storage not about indexes 
storage.


I just tried Riak Search with Innostore for my buckets and it 
works, i was a bit hurry (and it was simple to make some test).

However, thank you for having answered.

On 12/10/10 20:47, Dan Reverri wrote:
Riak Search uses a custom backend called Merge Index. The Riak 
Search backend is configurable in app.config, however, Merge 
Index is the only backend that works for search:

 {riak_search, [
{search_backend, merge_index_backend},
{java_home, "/usr"}
   ]},

Merge index is configurable in app.config as well:
%% Merge Index Config
 {merge_index, [
{data_root, "data/merge_index"},
{buffer_rollover_size, 10485760},
{buffer_delayed_write_size, 524288},
{buffer_delayed_write_ms, 2000},
{max_compact_segments, 20},
{segment_query_read_ahead_size, 65536},
{segment_compaction_read_ahead_size, 5242880},
{segment_file_buffer_size, 20971520},
{segment_delayed_write_size, 20971520},
{segment_delayed_write_ms, 1},
{segment_full_read_size, 20971520},
{segment_block_size, 32767},
{segment_values_staging_size, 1000},
{segment_values_compression_threshold, 0},
{segment_values_compression_level, 1}
   ]},

The data_root parameter will tell Merge Index where to store it's 
data files.


Does this answer your que

Re: map-reduce Problem ?

2010-11-16 Thread Germain Maurice

On 15/11/10 18:55, Kevin Smith wrote:

innostore is moderately bucket-aware right now so I've forked it 
(http://github.com/kevsmith/innostore) and added bucket-aware key listing. 
Based on some very basic testing I'm seeing 2.5x speed up in overall key 
listing performance compared to the official version. I'm hoping the patch, or 
a modified form of it, will make the next release.

Good news, I'm looking forward to the new release with your patch :)

--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: upgrade process?

2010-11-22 Thread Germain Maurice

Hi Mark,

I think it would be helpful to add a few words about upgrading Riak nodes
that hold a large amount of data,
because making a complete backup of the data is not a reasonable task
(it takes too much space and time).

Maybe, in the "rolling upgrades" instructions, the backup step could be
marked as optional?

Thank you for your answer.

On 22/11/10 17:47, Mark Phillips wrote:

Hey Tim,

On Mon, Nov 22, 2010 at 10:07 AM, Tim Heckman  wrote:

In general, what's the process for upgrading a ring?

Do you take nodes out of the ring one at a time, upgrade them, and
re-add them to the ring,  or do you build an entirely new ring and
restore the data from backups?

I imagine this partly depends on how many versions behind you are, and
the specific changes to the server and storage backends.


We just added a page to the wiki (this very morning, in fact) with
rolling upgrade instructions for Debian, Redhat and Solaris.

http://wiki.basho.com/display/RIAK/Rolling+Upgrades
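The per-node loop is roughly the following; the package name, version and
the readiness check are illustrative, see the wiki page for the
authoritative steps:

riak stop
sudo dpkg -i riak_0.13.0-1_amd64.deb
riak start
riak-admin status | grep -E 'ring_members|connected_nodes'

Wait for the node to come back and rejoin before moving on to the next one.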

Mark


Community Manager
Basho Technologies
wiki.basho.com
twitter.com/pharkmillups

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



--
Germain Maurice
Administrateur Système/Réseau
Tel : +33.(0)1.42.43.54.33

http://www.linkfluence.net


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com