I have a problem installing Riak; I am on a 2013 OS X machine.
git clone https://github.com/basho/riak
cd riak
make rel
cd linking; make export
cc -o prlink.o -c -m32 -Wall -fno-common -pthread -O2 -fPIC -UDEBUG -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1 -DXP_MACOSX=1
Here's another thread with a similar issue:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-August/009110.html
On Tue, Aug 13, 2013 at 7:58 AM, Federico Mosca wrote:
> I have a problem installing Riak; I am on a 2013 OS X machine.
>
> git clone https://github.com/basho/riak
> cd riak
Hi Riak people,
I'm in the process of adding a new node to an aging (1-node) cluster. I
would like to know the preferred incremental upgrade path to get all my
nodes onto the latest Riak version. The best scenario would also have
the least downtime. The old node is at Riak version 1.2.1.
I saw it, but how can I remove all the Erlang installations?
The ln -s does not work for me.
2013/8/13 Bhuwan Chawla
> Here's another thread with a similar issue:
>
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-August/009110.html
>
> On Tue, Aug 13, 2013 at 7:58 AM, Federico Mosca
From http://docs.basho.com/riak/latest/ops/running/rolling-upgrades/ it looks
like you should upgrade to 1.3.2 and then 1.4.1
Depending on how badly you need the extra capacity, it would probably be better
to start by upgrading all nodes and then adding the new one.
--
Jeremiah Peschka - Founder
Having done a similar upgrade, a gotcha to keep in mind:
"Note for Secondary Index users
If you use Riak's Secondary Indexes and are upgrading from a version prior
to Riak version 1.3.1, you need to reformat the indexes using the
riak-admin reformat-indexes command"
On Tue, Aug 13, 2013 at 8:3
Same here, except that Riak 1.3.2 did that for me automatically. As
Jeremiah mentioned, you should go first to 1.3.2 on all nodes. On each node,
the first time Riak starts it will take some time upgrading the 2i
index storage format; if you see any weirdness, then execute
"riak-admin reformat-indexes"
Hi Federico,
Are you building Riak for production or as a development/test environment?
Can I ask why you aren't using the precompiled tarball? [0]
If you are looking for a quick dev/test environment, there is an OSX devrel
[1] launcher on github [2]. We use it all the time to get a devrel cluster.
Apologies, I forgot to send the URL to the devrel launcher repo. Here it
is:
https://github.com/basho/riak-dev-cluster
On Tue, Aug 13, 2013 at 1:51 PM, Todd Tyree wrote:
> Hi Federico,
>
> Are you building Riak for production or as a development/test environment?
> Can I ask why you aren't using the precompiled tarball? [0]
Hi,
How do I change the filesystem where the Riak CS buckets are stored? Changing
the data_root values in storage_backend is not working as specified in the
FAQ
(http://docs.basho.com/riakcs/latest/cookbooks/faqs/riak-cs/#is-it-possible-to-specify-a-file-system-where-my-r).
When I change th
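(For anyone following along: the FAQ's approach amounts to pointing the
multi-backend data_root entries at the target filesystem in Riak's
app.config. A minimal sketch, assuming the stock Riak CS backend names;
the /mnt/bigdisk paths are hypothetical:)

    {riak_kv, [
        {storage_backend, riak_cs_kv_multi_backend},
        {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
        {multi_backend_default, be_default},
        {multi_backend, [
            %% both data_root values must point at the desired filesystem
            {be_default, riak_kv_eleveldb_backend,
                [{data_root, "/mnt/bigdisk/riak/leveldb"}]},
            {be_blocks, riak_kv_bitcask_backend,
                [{data_root, "/mnt/bigdisk/riak/bitcask"}]}
        ]}
    ]}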
Hi All, On behalf of Basho, I'm excited to announce that Riak CS 1.4.0 is now official. Riak CS is Basho's open source cloud storage software. The biggest feature additions are support for the Swift API and Keystone authentication, which enables CS to be a drop-in storage replacement for OpenStack
Also, in theory, if you have at least 5 nodes in the cluster, one node
down at a time doesn't stop your cluster from working properly.
You could do the following node by node, which I have done several times
(sketched in commands below):
1. Stop Riak on the upgrading node and, from another node, mark the
upgrading node as down
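(A hedged shell sketch of that node-by-node procedure; the node name
riak@node1.example.com is hypothetical, and the package-upgrade step depends
on your OS:)

    # on the node being upgraded
    riak stop
    # from any other node in the cluster
    riak-admin down riak@node1.example.com
    # upgrade the Riak package on the stopped node, then:
    riak start
    riak-admin wait-for-service riak_kv riak@node1.example.com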
I also followed [1];
anyway, I had two versions of Erlang installed.
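(In case it helps anyone hitting the same thing, a quick way to spot multiple
Erlang installs; both commands are standard:)

    which -a erl    # lists every erl on the PATH
    erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'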
2013/8/13 Todd Tyree
> Apologies, I forgot to send the URL to the devrel launcher repo. Here it
> is:
>
> https://github.com/basho/riak-dev-cluster
>
>
> On Tue, Aug 13, 2013 at 1:51 PM, Todd Tyree wrote:
>
>> Hi Federico,
>>
>> Are
Louis-Philippe et al:
You can follow the rolling upgrade procedure to upgrade a node from 1.2 to
1.4.x directly. The note in the instructions only concerns upgrading from 1.0
to 1.4.
No need to stop at 1.3.2.
Thanks,
Charlie Voiselle
On Aug 13, 2013, at 9:23 AM, Guido Medina wrote:
> Also, in theory, if you have at least 5 nodes in the cluster, one node
Hi Dilip,
Are you making these changes to Riak's app.config?
If the `riak-cs start` command isn't working, that's generally an
indicator that Riak is not running. What happens when you execute
`riak ping`?
--
Hector
On Tue, Aug 13, 2013 at 9:20 AM, dilip kumar wrote:
> Hi,
>
> How do I change
Hi!
OS - Debian 6 2.6.32-5-amd64
gcc - version 4.7.2 (GCC)
boost version 1.51
make
...
error: #error "Threading support unavaliable: it has been explicitly
disabled with BOOST_DISABLE_THREADS"
...
How do I solve that error?
Thanks
Hello,
I need to decide what database we will choose for our project. Certainly, we
need only 2 physical nodes (active-standby). Riak is good for us, because it is
Erlang-based, as is our project. But it's known that a Riak cluster should have
at least five nodes. I have some problems with my cluster
Hi Hector,
This is what happens, after changing the directories in the riak_kv section
of /etc/riak/app.config:
# riak restart
ok
# stanchion restart
ok
# riak-cs start
riak-cs failed to start within 15 seconds,
see the output of 'riak-cs console' for more information.
If you want to
Dilip,
Can you restart Riak with a riak stop then riak start? If this fails a riak
ping, can you please attach the riak console output?
--
John White
On August 13, 2013 at 7:25:51 PM, dilip kumar (dilip_nuta...@yahoo.co.in) wrote:
Hi Hector,
This is what happens, after changing the directories in riak_
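(Spelled out as a hedged sketch, run on the Riak node itself:)

    riak stop
    riak start
    riak ping      # "pong" means the node is up
    riak console   # if ping fails, start in the foreground to see the error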
Hi guys,
I am setting up a new Riak cluster and I was wondering if there is any
drawback to increasing the LevelDB block size from 4K to 64K. The reason is
that all of our values are way bigger than 4K, and I guess from a
performance point of view it would make sense to increase the block size
Istvan,
"block_size" is not a "size", it is a threshold. Data is never split across
blocks. A single block contains one or more key/value pairs. leveldb starts a
new block only when the total size of all key/values in the current block
exceeds the threshold (so, for example, a single 100K value produces a 100K
block even with the default 4K threshold).
You must set block_size to a m
An interesting hybrid that I'm coming around to is using a Unix
release - OmniOS has an AMI, for instance - and ZFS. With a large-enough
store, I can run without EBS on my nodes, and have a single ZFS backup
instance with a huge amount of slow-EBS storage for accepting ZFS snapshots.
I'm
That *does* sound like an interesting way to do it. Kinda
best-of-both-worlds, depending on your backup schemes and whatnot. I'm
definitely curious to hear about how it works out for you.
-B.
On Tue, Aug 13, 2013 at 4:03 PM, Dave Martorana wrote:
> An interesting hybrid that I'm coming around
Hi Matthew,
Thank you for the explanation.
I am experimenting with different block size and making sure I have at
least 100G data on disk for the tests.
I.
On Tue, Aug 13, 2013 at 12:11 PM, Matthew Von-Maszewski
wrote:
> Istvan,
>
> "block_size" is not a "size", it is a threshold. Data is n
Brady Wetherington wrote:
> First off - I know 5 instances is the "magic number" of instances to have.
> If I understand the thinking here, it's that at the default redundancy
> level ('n'?) of 3, it is most likely to start getting me some scaling
> (e.g., performance > just that of a single node)
** The following is copied from Basho's leveldb wiki page:
https://github.com/basho/leveldb/wiki/Riak-tuning-1
Summary:
leveldb has a higher read and write throughput in Riak if the Erlang scheduler
count is limited to half the number of CPU cores. Tests have demonstrated
improvements of 15%
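(A hedged sketch of what that scheduler cap looks like in vm.args; +S is a
standard Erlang VM flag, and the 8-core/4-scheduler numbers are just an
example:)

    ## /etc/riak/vm.args
    ## on an 8-core box, cap schedulers at half the core count
    +S 4:4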
It seems Riak does not like the leveldb block_size to be changed to 64k.
App config:
app.config: {sst_block_size, 65536},
basho_bench logs:
18:04:38.010 [info]
Errors:[{{delete,delete},542},{{get,get},15921},{{put,put},1253},{{{delete,delete},disconnected},542},{{{get,get},disconnected},15921
One thing that I *think* I've figured out is that the number of "how many
replicas can you lose and stay up" is actually n-w for writes, and n-r for
reads -
So with n=3 and r=2 and w=2, the loss of two replicas due to AZ failure
means that I still *have* my data ("durability") but I might lose _ac
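(Working that arithmetic through, assuming the reading above is right: with
n=3 and w=2, writes tolerate n-w = 1 lost replica; with r=2, reads likewise
tolerate n-r = 1. Lose two replicas and both quorums fail, even though one
copy of the data still exists on the surviving replica.)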
When you say "CPU" does that mean "logical CPU core"? Or is this actually
referring to physical CPU cores?
E.g. On my laptop with 4 physical cores + HyperThreading, should I set +S
to +S 4:4
You hint that it doesn't matter, but I just wanted to trick you into
explicitly saying something.
---
Jer
On August 13, 2013 10:20:48 PM Brady Wetherington wrote:
> One thing that I *think* I've figured out is that the number of "how many
> replicas can you lose and stay up" is actually n-w for writes, and n-r for
> reads -
>
> So with n=3 and r=2 and w=2, the loss of two replicas due to AZ failure
>