Oleg:
If you want to use ZFS, use SmartOS / OpenIndiana with Cassandra on top;
don't work around it with a FUSE FS.
Maybe BSD too (I don't know the state of their version of zfs / zpool).
2012/4/4 Oleg Proudnikov
> Thanks, Radim!
>
> What OS are you using and would ZFS be a good option under Linux on EC2?
>
> Thank
OrientDB may be a perfect fit for you; it's a little like Couch, runs on
Java, we use it too, and it's super fast.
2011/8/19 Milind Parikh
> Why not use couchdb for this use case?
> Milind
>
> /***
> sent from my android...please pardon occasional typos as I respond @ the
> sp
Riptano - contact Matt Pfeil.
Mike
2011/2/17 A J
> By any chance, are there companies that provide support for Cassandra?
> That consult on setup and configuration and offer annual support packages?
--
bayoda.com - Professional Online Backup Solutions for Small and Medium Sized
Companies
Hi out there ...
Without starting an OT thread or an evangelists' war, it would be
interesting to know which filesystems most Cassandra installations use,
and which performs best in which cases.
Currently we use Cassandra on ZFS (OpenSolaris), fine-tuned for our needs;
no RAID controller used.
What are the expe
Hi
Could we run Kundera on the 0.7 beta version?
Thanks for the answer
Michael
2010/7/31 Sanjay Sharma
> Hi All,
>
> We are happy to announce and share a new ORM over Cassandra – kundera
>
> The project is Apache licensed and hosted at http://kundera.googlecode.com
>
>
>
> The project uses custom Cas
Hmm
I would never allow anyone other than my own team to reboot an instance or
server of mine.
Meaning: if I don't have the ability to remotely "terminate" the task, or
to do a remote power (IP-based) reboot,
the datacenter isn't my datacenter ;-)
Just my 2 cents - my names (chevron etc) are already on the l
Stargate series names:
O'Neill
Asgard
Jumper
ZPM1 through ZPMx
Chevron1 through Chevron9
2010/7/27 John Hogan
> Star Trek ship names.
>
>
>
> JH
>
>
>
> *From:* uncle mantis [mailto:uncleman...@gmail.com]
> *Sent:* Tuesday, July 27, 2010 9:55 AM
> *To:* cassandra-u...@incubator.apache.org
> *Subj
time usage.
> - not sure how much re-use there is, but row size grows with reuse. Should
> be OK for a couple of million cols.
>
>
> Oh and if you're going to use Hadoop / PIG to analyse the data in this
> beastie you need to think about that in the design. You'll probably want
ppropriate.
>
> You could also think about using the order preserving partitioner, and
> using a compound key for each row such as "file_name_hash.offset" . Then by
> using the get_range_slices to scan the range of chunks for a file you would
> not need to maintain a secondary index. Some dra
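A rough sketch of that compound-key idea in Python (my own toy helpers, not from the thread: a plain dict stands in for the column family, and a sorted-key scan stands in for get_range_slices under an order-preserving partitioner):

```python
import hashlib

def chunk_key(file_name: str, offset: int) -> str:
    """Compound row key '<file_name_hash>.<offset>'; the offset is
    zero-padded so lexical order matches numeric order, which is what an
    order-preserving partitioner sorts by."""
    h = hashlib.md5(file_name.encode()).hexdigest()
    return f"{h}.{offset:012d}"

def scan_chunks(store: dict, file_name: str) -> list:
    """Stand-in for get_range_slices: scan all keys sharing the file's
    hash prefix, returning chunks in offset order, no secondary index."""
    prefix = hashlib.md5(file_name.encode()).hexdigest() + "."
    return [store[k] for k in sorted(store) if k.startswith(prefix)]

# toy in-memory "column family" with three 64 KB chunks of one file
store = {chunk_key("movie.bin", off): f"chunk@{off}"
         for off in (0, 65536, 131072)}
print(scan_chunks(store, "movie.bin"))
```

Zero-padding the offset is the important detail; without it, "chunk 10" would sort before "chunk 2" in a lexical range scan.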
ery blobs hash value. Read from the index
> first, then from the blobs themselves.
>
> Aaron
>
>
> On 24 Jul, 2010, at 06:51 PM, Michael Widmann
> wrote:
>
> Hi Jonathan
>
> Thanks for your very valuable input on this.
>
> Maybe I didn't give enough explanation -
Hi Peter
We're trying to figure out how much data will be coming into Cassandra
once it's in full operation mode.
Reads depend more on the hash values (the file names) of the binary
blobs, not on the binary data itself.
We will try to store hash values "grouped" based on their first byte
(a-z, A-Z, 0-9).
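That first-byte grouping could look something like this (a minimal sketch; I'm assuming plain hex hashes here rather than the poster's a-z, A-Z, 0-9 alphabet, and the blob names are made up):

```python
import hashlib
from collections import defaultdict

def bucket_for(hash_hex: str) -> str:
    """Group a blob by the first character of its hash value."""
    return hash_hex[0]

# route each blob's hash into its first-byte bucket
buckets = defaultdict(list)
for name in ["movie.bin", "song.mp3", "photo.jpg"]:
    h = hashlib.sha1(name.encode()).hexdigest()
    buckets[bucket_for(h)].append(h)

print({b: len(hs) for b, hs in buckets.items()})
```

With a uniform hash, each hex bucket holds roughly 1/16 of the blobs, which keeps any single group from growing unboundedly.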
memory is not likely
> to be able to hold enough hot data for the specific application.
>
> As always, the real questions have lots more to do with your specific
> access patterns, storage system, etc. I would look at the benchmarking
> info available on the lists as a good starting p
Hi
We plan to use Cassandra as data storage on at least 2 nodes with RF=2
for about 1 billion small files.
We have about 48 TB of disk space behind each node.
Now my question is: is this possible with Cassandra, reliably, meaning
every blob is stored on 2 JBODs?
We may grow up to nearly
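For what it's worth, a quick back-of-envelope check of those numbers (my assumptions, not from the thread: decimal terabytes, and that with RF=2 on exactly 2 nodes every node carries a full replica):

```python
# 1 billion blobs, 48 TB per node, RF=2 on 2 nodes:
# each node stores a full copy, so usable capacity is one node's disk
blobs = 1_000_000_000
disk_per_node_bytes = 48 * 10**12

avg_blob_budget = disk_per_node_bytes / blobs
print(f"average blob budget: {avg_blob_budget / 1000:.0f} KB")  # 48 KB
```

So the layout only works if the average blob (plus Cassandra's per-row overhead) stays under roughly 48 KB; growing the node count beyond the RF would raise that budget.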