So, call me naive, but couldn't ZFS be used as Heinze suggested?
I have some SAN horror stories - both operationally and from an economic
perspective.
I have heard some SAN horror stories too. Riak nodes are so cheap
that I don't see the point in even having a mirror on the node. Here are
my points:
1. Erlang inter-process communication already brings some network usage;
why add yet another layer of network traffic to replicate the data? If the whole idea of
As for ZFS? I wouldn't recommend it. Since Riak 1.4, LevelDB's snappy
compression does a nice job; why take the risk of yet another
not-so-enterprise-ready compression algorithm?
I could be wrong though,
Guido.
On 03/10/13 12:11, Guido Medina wrote:
I have heard some SAN horror stories too,
Not sure what ZFS has to do with snappy compression, as it's a file system,
not a compression algorithm.
Feature-wise, ZFS is quite possibly the most enterprise-ready file system around,
including advanced data-corruption prevention and remote backup.
This would be a viable option in BSD/Solaris environments.
If using the LevelDB backend: LevelDB has nice compression (snappy),
including CRC checks and all sorts of data-corruption checks. I have read
on this mailing list about people who had to disable snappy compression
because it renders ZFS useless (not much to compress after that).
Hence, it is kin
Hi Guido,
I don’t see how snappy compression renders ZFS useless. You might do some
things twice, like CRCing, but it also protects at different layers: while the
ZFS CRC protects data on the disks, the in-app CRC can protect the data ‘all’
the way up. Compression-wise, you might not even turn on
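For reference, if one did decide to hand compression to ZFS entirely, the
LevelDB-side toggle would live in app.config. A hedged sketch only -- the
compression flag name here is my assumption about eleveldb's options, so
verify it against your Riak version before relying on it:

    %% Hedged sketch: flag name assumed, not confirmed in this thread.
    {eleveldb, [
        {compression, false}   %% leave compression duties to ZFS
    ]}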
Some users might want to avoid the ZFS overheads. Remember we are in a KV world
where, to read/write many keys, you will have to do so concurrently. Say
there is less than a 1% chance of things going wrong with a
server belonging to a Riak cluster; if building a Riak server is cheap,
would you p
Chiming in with the completely anecdotal statement that we have customers
who run large Riak clusters on ZFS. As far as I know, we haven't gotten
any complaints due to ZFS. The only cautionary tale would be to not let
your zpool completely fill up because deletes take up storage due to the
append
Alex,
Nice work benchmarking! One thing to keep in mind (and you probably
are well aware of this), particularly when benchmarking on cloud
platforms, is that there are a lot of factors that affect performance
outside your control -- other tenants, blackbox hardware upgrades, changes in SLAs
-- just to name a
Hi all,
I'm trying to understand the optimal ring size / vnode config for a 6-node
Riak cluster, which will most likely expand to many more nodes over time.
Does everyone bump from the default of 64 to 1024 to facilitate future
growth? I can provide more detailed capacity metrics off-list if
needed.
Hi John,
Our rule-of-thumb is 8-64 partitions per physical node, so a good starting
point for a 6-node cluster would be 128 or 256. 256 will let you expand up
to about 30 nodes without needing a ring resize.
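For reference, that value is set via ring_creation_size in app.config, and it
has to be chosen before the cluster is first created. A minimal sketch, with
256 as the starting point suggested above:

    %% app.config -- minimal sketch; set before first cluster start.
    {riak_core, [
        %% 256 partitions / 6 nodes ~= 42-43 vnodes per node, inside the
        %% 8-64 per-node rule of thumb; 256 / 8 allows ~32 nodes of growth.
        {ring_creation_size, 256}
    ]}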
On Thu, Oct 3, 2013 at 10:14 AM, John Kavanagh wrote:
> Hi all,
>
> I'm trying to unde
Hi, I'm using Riak 1.3.1 and Java client 1.1.2.
Using HTTP and curl I see 4 siblings for an object, one of which
has X-Riak-Deleted: true,
but when I'm using the Java client with DomainBucket, my Converter's
toDomain method is called only 3 times.
I have set the property
builder.returnDeletedVClock(true)
Thanks for the link, James!
Windows Azure has an odd SLA - in fact, if you read the fine print, they
don't even offer an SLA for individual VMs. They reserve the right to take
down individual VMs if they need to do an upgrade of the host OS. But
they offer 'availability zones' so you can make sure only
Thanks Sean!
On Thu, Oct 3, 2013 at 4:33 PM, Sean Cribbs wrote:
> Hi John,
>
> Our rule-of-thumb is 8-64 partitions per physical node, so a good starting
> point for a 6-node cluster would be 128 or 256. 256 will let you expand up
> to about 30 nodes without needing a ring resize.
>
>
> On Thu
John,
If you are using leveldb as Riak's backend, keep in mind that each vnode
requires a fixed amount of memory (set via max_open_files and cache_size in
app.config). Recently had one user attempt too many vnodes on a 4Gbyte
machine. He populated data to the point of memory overflow … then m
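To make that arithmetic concrete, here is a hedged app.config sketch; the
values are illustrative assumptions, not tuning recommendations:

    %% Illustration only -- numbers are assumptions, not advice.
    %% Each vnode pays for its own cache and open-file overhead.
    {eleveldb, [
        {max_open_files, 50},    %% per vnode
        {cache_size, 8388608}    %% 8 MB, per vnode
    ]}
    %% A 1024-partition ring on 6 nodes means ~171 vnodes per node; even
    %% at ~10 MB per vnode that is ~1.7 GB of fixed overhead, which is
    %% how a 4 GB machine can overflow once data (and caches) fill up.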
Daniel -
Yeah, that is the case. When the ability to pass fetch/store/delete
meta was added to DomainBucket way back when, it appears that was
missed.
I'll add it and forward-port to 1.4.x as well and cut new jars. Should
be available by tomorrow morning at the latest.
Thanks!
- Roach
On Thu, Oct 3,
Thanks Matthew, we are actually using Bitcask with 128GB RAM per node.
John
On 3 Oct 2013, at 17:10, Matthew Von-Maszewski wrote:
John,
If you are using leveldb as Riak's backend, keep in mind that each vnode
requires a fixed amount of memory (set via max_open_files and cache_size in
app.c
Thanks Brian for the quick response.
As a side question, what is the best way to delete such an object, i.e. once
I know one of the siblings has the 'deleted' flag true because I fetched it?
Should I just use DomainBucket.delete(key) without providing any vclock?
Would it wipe it from Riak or create yet an
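As an aside on what a vclock-carrying delete looks like at the protocol
level: a hedged sketch using the Erlang client (riakc), assuming Pid is an
open riakc_pb_socket connection and placeholder bucket/key names; the Java
thread continues below:

    %% Hedged sketch: delete the exact version we fetched by passing
    %% its vclock, so a concurrent write becomes a sibling rather than
    %% being silently lost.
    {ok, Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>),
    VClock = riakc_obj:vclock(Obj),
    ok = riakc_pb_socket:delete_vclock(Pid, <<"bucket">>, <<"key">>, VClock).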
You know you're old when 128GB ram is omgwtflol.
-Alexander Sicular
@siculars
On Oct 3, 2013, at 12:18 PM, John Kavanagh wrote:
> Thanks Matthew, we are actually using Bitcask with 128GB RAM per node.
>
> John
>
> On 3 Oct 2013, at 17:10, Matthew Von-Maszewski wrote:
>
> John,
>
> If yo
Hi Alex,
Does Azure allow you to ensure that your VMs are not on the same physical
host? Linode lets me do that, and you kinda need that when running something
like Riak.
Thanks,
-Alexander Sicular
@siculars
On Oct 3, 2013, at 11:43 AM, Alex Rice wrote:
> Thanks for the link, James!
>
> Wi
Hi Sean,
What would the performance issues be with setting a ring size of 1024 (instead of
128 or 256) on a 6-node cluster?
I can see the possibility of memory overflow (which is not a high concern on a
128GB machine), but other than that, are there any additional concerns (with
Bitcask)?
Your main concern with greater numbers of partitions is contention for IO
(and CPU time for the vnodes to process requests). Consider that with 6
nodes, you will have 170(+/- 1) partitions running on each physical node.
That's *at least* 170 files open, all scribbling to and reading from disk.
You
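For reference, the per-node arithmetic behind those numbers, as a quick
Erlang-shell sketch:

    %% Partitions per physical node for candidate ring sizes on 6 nodes;
    %% the remainder is why some nodes carry one extra partition (+/- 1).
    1> [{Ring, Ring div 6, Ring rem 6} || Ring <- [64, 128, 256, 1024]].
    [{64,10,4},{128,21,2},{256,42,4},{1024,170,4}]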
Thanks, that makes sense and I think it gives me my answer.
I suppose I'm trying to understand if, say, an acceptable perf decrease of ~5%
is a healthy trade-off in order to mitigate against a future (but possibly
unlikely) capacity cap.
John
On 3 Oct 2013, at 18:51, Sean Cribbs wrote:
On Thu, Oct 3, 2013 at 10:32 AM, Daniel Iwan wrote:
> Thanks Brian for the quick response.
>
> As a side question, what is the best way to delete such an object, i.e. once
> I know one of the siblings has the 'deleted' flag true because I fetched it?
> Should I just use DomainBucket.delete(key) without pro
A Homebrew-installed Riak will use the Erlang located in this directory:
/usr/local/Cellar/riak/1.4.2/erts-5.9.1/bin/erl
Could you retry the code snippet using the erl located there? Also, the
output of 'which -a erl' could be useful. I think you should uninstall your
Homebrew-installed Erlang as it'
One more use-case for backups: If you're running a big cluster and UserX
makes a bad code deploy which horks a bunch of data ... restore may be the
only option.
It happens.
-mox
On Wed, Oct 2, 2013 at 12:12 PM, John E. Vincent <lusis.org+riak-us...@gmail.com> wrote:
> I'm going to take a com
I consider that the main use case ;p
On Thu, Oct 3, 2013 at 8:38 PM, Mike Oxford wrote:
> One more use-case for backups: If you're running a big cluster and UserX
> makes a bad code deploy which horks a bunch of data ... restore may be the
> only option.
>
> It happens.
>
> -mox
>
>
> On Wed,
Interesting… looks like that Erlang doesn't like something. I don't actually
know what to do from here (Erlang is still a big black box to me).
escorpiao:~ seawolf$ /usr/local/Cellar/riak/1.4.1/erts-5.9.1/bin/erl
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:4:4] [async-threads:0]
[kernel-po
Just trying to paraphrase how I understand it from the Riak docs, plus
helpful feedback from Jeremiah :) Please correct me if I'm not really
grokking it!
With allow_mult = false (the default setting):
- To achieve CAS*-ish behavior for updates, you can always send the
vector clock with a Put. If it f
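A hedged sketch of that vclock-carrying update with the Erlang client
(riakc), assuming Pid is an open riakc_pb_socket connection and placeholder
bucket/key names:

    %% Fetch, modify, write back: the put reuses the fetched object's
    %% vclock, which is what gives the CAS-ish behavior described above.
    {ok, Obj0} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>),
    Obj1 = riakc_obj:update_value(Obj0, <<"new value">>),
    ok = riakc_pb_socket:put(Pid, Obj1).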
Hello,
Riak includes these commands:
search-cmd set-schema [INDEX] SCHEMAFILE
search-cmd show-schema [INDEX]
Once imported/loaded, where is this schema file stored? Is it in a bucket,
or on the filesystem?
Thanks
Jon