How is pagination accomplished when you don't know a start key? For
example, how can I "jump" to page 10?
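There is no server-side "jump to page 10": range queries only page forward, with the last key of one page used (inclusively) as the start of the next. A minimal sketch of that loop, with a sorted dict standing in for get_range_slices (all names here are hypothetical, not a real client API):

```python
def fetch_page(store, start_key, count):
    """Simulate get_range_slices: up to `count` (key, value) pairs with
    key >= start_key, in key order. The start key is inclusive, so a
    caller resuming from a previous page must drop the duplicate row."""
    keys = [k for k in sorted(store) if k >= start_key]
    return [(k, store[k]) for k in keys[:count]]

def page_to(store, page_number, page_size):
    """Reach page N the only way possible: walk pages 1..N forward."""
    start, page = "", []
    for _ in range(page_number):
        page = fetch_page(store, start, page_size + (1 if start else 0))
        if start:                      # drop the inclusive duplicate
            page = page[1:]
        if not page:
            return []
        start = page[-1][0]            # last key seeds the next request
    return page
```

Reaching page N therefore costs N sequential fetches; if direct jumps matter, you have to precompute and store page-boundary keys yourself.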
3-node cluster and I just ran a nodetool cleanup on node #3. Nodes 1 and 2 are
now at 100% disk space. What should I do?
org.apache.cassandra.db.Memtable.access$000(Memtable.java:42)
at org.apache.cassandra.db.Memtable$1.runMayThrow(Memtable.java:173)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
... 6 more
Any tips?
On 12/7/10 8:44 PM, Mark wrote:
3 Node cluster and I just
e any that exist.
On Wed, Dec 8, 2010 at 9:04 AM, Oleg Anastasyev wrote:
Mark gmail.com> writes:
Caused by: java.lang.RuntimeException: Insufficient disk space to flush
at
On 12/7/10 8:44 PM, Mark wrote:
3 Node cluster and I just ran a nodetool cleanup on node #3. 1 and 2
are now
space but you are correct. I would be careful of using the disk
more than 50%, as the anti-compaction during cleanup could fail.
I don't have any experience with adding a data directory on the fly.
On Wed, Dec 8, 2010 at 4:51 PM, Mark wrote:
Did both but didn't seem to help. I have ano
What is this directory used for and how was it created?
? Are there any normal day-to-day operations that would cause
any one node to double in size that I should be aware of? If one or more
nodes surpass the 50% mark, what should I plan to do?
Thanks for any advice
? Across how
many column families
Your configuration is unusual, both in not setting min heap ==
max heap and in the percentage of available RAM used for the heap. Did you
change the heap size in response to errors, or for another reason?
On 03/04/2011 03:25 PM, Mark wrote:
This happens
the cassandra.yaml, I can help. Mostly I had to play with memtable
thresholds.
Thanks,
Naren
On Fri, Mar 4, 2011 at 12:43 PM, Mark <static.void@gmail.com> wrote:
We have 7 column families and we are not using the default key
cache (20).
These were ou
that helps.
Aaron
On 5/03/2011, at 10:54 AM, Mark wrote:
That's very nice of you. Thanks
MyCluster
true
true
128
org.apache.cassandra.locator.RackUnawareStrategy
2
org.apache.cassandra.locator.EndPointSnitch
org.apache.cassandra.auth.AllowAllAut
5:50 AM, Mark <static.void@gmail.com> wrote:
If its determined that this is due to a very large row, what are my
options?
Thanks
On 3/5/11 7:11 PM, aaron morton wrote:
First question is which version are you running ? Am guessing 0.6
something
If you have OOM in the c
I haven't looked at Cassandra since 0.6.6, and now I notice that 0.7+
has support for secondary indexes. I haven't found much material on how
these are used or when one should use them. Can someone point me in the
right direction?
Also can these be created (and deleted) as needed without aff
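Conceptually, a 0.7 secondary index is bookkeeping the node maintains that maps each value of the indexed column back to the row keys holding it, queried via get_indexed_slices instead of a full scan. A toy pure-Python model of that idea (illustrative only, not the real API):

```python
class IndexedCF:
    """Toy model of a column family with one secondary index.

    Cassandra 0.7 keeps an internal index from each indexed column
    value to the row keys that hold it, so queries on that column
    become index lookups instead of full scans.
    """
    def __init__(self, indexed_column):
        self.indexed_column = indexed_column
        self.rows = {}           # row_key -> {column: value}
        self.index = {}          # indexed value -> set of row keys

    def insert(self, row_key, columns):
        # Un-index the old value if the row is being overwritten.
        old = self.rows.get(row_key, {}).get(self.indexed_column)
        if old is not None:
            self.index[old].discard(row_key)
        self.rows[row_key] = dict(columns)
        val = columns.get(self.indexed_column)
        if val is not None:
            self.index.setdefault(val, set()).add(row_key)

    def get_indexed_slices(self, value):
        """Analogue of get_indexed_slices: rows whose indexed column == value."""
        return sorted(self.index.get(value, ()))
```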
Where do most people install Cassandra to? /var or /opt?
Thanks
I thought I read somewhere that Pig has an output format that can write
to Cassandra but I am unable to find any documentation on this. Is this
possible and if so can someone please point me in the right direction.
Thanks
PM, Mark wrote:
I thought I read somewhere that Pig has an output format that can write to
Cassandra but I am unable to find any documentation on this. Is this possible
and if so can someone please point me in the right direction. Thanks
oop-pig on IRC on freenode.
On Mar 10, 2011, at 11:43 PM, Mark wrote:
Sweet! This is exactly what I was looking for and it looks like it was just
resolved.
Are there any working examples or documentation on this feature?
Thanks
On 3/10/11 8:57 PM, Matt Kennedy wrote:
On its way... ht
Still not seeing 0.7.4 as a download option on the main site?
On 3/15/11 9:20 AM, Eric Evans wrote:
Hot on the heels of 0.7.3, I'm pleased to announce 0.7.4, with bugs
fixed, optimizations made, and features added[1].
Upgrading from 0.7.3 is a snap, but if you're upgrading from an earlier
versi
We are thinking about using Cassandra to store our search logs. Can
someone point me in the right direction/lend some guidance on design? I
am new to Cassandra and I am having trouble wrapping my head around some
of these new concepts. My brain keeps wanting to go back to a RDBMS design.
We wi
he
process of modeling a blog in cassandra so you can get a sense of the
process.
Dave Viner
On Mon, Jul 26, 2010 at 4:46 PM, Mark <static.void@gmail.com> wrote:
We are thinking about using Cassandra to store our search logs.
Can someone point me in the right di
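For append-heavy data like search logs, the usual Cassandra shape is one row per time bucket with one time-ordered column per event, rather than normalized relational rows. A rough sketch with a plain dict standing in for the column family (the layout and names are assumptions, not a prescribed schema):

```python
import time
import uuid

def log_search(cf, query, day=None):
    """Append one search event: row key = day bucket, column name = TimeUUID."""
    day = day or time.strftime("%Y%m%d")
    cf.setdefault(day, {})[uuid.uuid1()] = query

def searches_for_day(cf, day):
    """Return the day's queries in time order (sorting by the UUID timestamp,
    as Cassandra's TimeUUIDType comparator does)."""
    row = cf.get(day, {})
    return [row[u] for u in sorted(row, key=lambda u: u.time)]
```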
On 7/26/10 7:06 PM, Dave Viner wrote:
AFAIK, atomic increments are not available. There recently has been
quite a bit of discussion about them. So, you might search the archives.
Dave Viner
On Mon, Jul 26, 2010 at 7:02 PM, Mark <static.void@gmail.com> wrote:
On
Can someone quickly explain the differences between the two? Other than
the fact that MongoDB supports ad-hoc querying, I don't know what's
different. It also appears (using Google Trends) that MongoDB seems to
be growing while Cassandra is dying off. Is this the case?
Thanks for the help
There's a good post on stackoverflow comparing the two:
http://stackoverflow.com/questions/2892729/mongodb-vs-cassandra
It seems to me that both projects have pretty vibrant communities behind them.
On Tue, Jul 27, 2010 at 11:14 AM, Mark wrote:
Can someone quickly explain the differe
I know there is no native support for "order by", "group by" etc but I
was wondering how it could be accomplished with some custom indexes?
For example, say I have a list of word counts like (notice 2 words have
the same count):
"cassandra" => 100
"foo" => 999
"bar" => 1
"
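A common workaround for ORDER BY is a second "index" row whose column names encode the sort key: zero-pad the count and append the word, so lexicographic column order equals numeric order and equal counts stay distinct. A sketch of that encoding (the key format is an assumption, not a standard API):

```python
def build_count_index(counts, width=10):
    """Column names like '0000000100:cassandra' sort as ORDER BY count, word."""
    return sorted(f"{n:0{width}d}:{w}" for w, n in counts.items())

def top_n(index, n):
    """Highest counts are a slice off the end of the index row, reversed."""
    return [col.split(":", 1)[1] for col in index[-n:]][::-1]
```

Slicing the tail of the index row then yields the top-N words by count without any server-side sorting.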
Are there any limitations on the number of columns a row can have? Does
all the data for a single key need to reside on a single host? If so,
wouldn't that mean there is an implicit limit on the number of columns
one can have, i.e. the disk size of that machine?
What is the proper way to handle
d see what works for you.
Aaron
On 29 Jul, 2010,at 02:36 PM, Mark wrote:
I know there is no native support for "order by", "group by" etc but I
was wondering how it could be accomplished with some custom indexes?
For example, say I have a list of word counts like (noti
the number of columns per row is constrained.
On Thu, Jul 29, 2010 at 2:39 PM, Mark wrote:
Is there any limitations on the number of columns a row can have? Does all
the data for a single key need to reside on a single host? If so, wouldn't
that mean there is an implicit limit on the n
Is there any way to limit the number of results returned from the CLI?
We just started using Cassandra 0.6.4 yesterday for simple logging of a
particular action. This is pretty much experimental for us at this stage
so we only have 1 node up and running (no gasps please). My question is
how can I add another column family (i.e. alter storage-conf.xml) without
disrup
On 8/5/10 8:01 AM, Rui Silva wrote:
Hi all,
first of all, I have read the Cassandra Hardware requirements page on
Cassandra wiki: http://wiki.apache.org/cassandra/CassandraHardware .
I am currently in a simple project that, fetches data from a message
broker. That data can be thought as logging
Has anyone had any success using Cassandra 0.7 w/ Ruby? I'm attempting
to use the fauna/cassandra gem (http://github.com/fauna/cassandra/)
which has explicit support for 0.7 but I keep receiving the following
error message when making a request.
Thrift::TransportException: end of file reached
difference between these two and why
does 0.7 default to true while earlier versions default to false? Thanks
again!
On 8/6/10 9:51 AM, Ryan King wrote:
Make sure the client and server are both using the same transport
(framed vs. non)
-ryan
On Fri, Aug 6, 2010 at 9:47 AM, Mark wrote
On 8/6/10 4:50 PM, Thomas Heller wrote:
Thanks for the suggestion.
I somewhat understand all that; the point where my head begins to explode
is when I want to figure out something like
Continuing with your example: "Over the last X amount of days give me all
the logs for remote_addr:XXX".
I'
On 8/6/10 6:36 PM, Benjamin Black wrote:
Same answer as on other thread right now about how to index:
http://maxgrinev.com/2010/07/12/do-you-really-need-sql-to-do-it-all-in-cassandra/
http://www.slideshare.net/benjaminblack/cassandra-basics-indexing
On Fri, Aug 6, 2010 at 6:18 PM, Mark wrote
On 8/6/10 6:36 PM, Benjamin Black wrote:
Same answer as on other thread right now about how to index:
http://maxgrinev.com/2010/07/12/do-you-really-need-sql-to-do-it-all-in-cassandra/
http://www.slideshare.net/benjaminblack/cassandra-basics-indexing
On Fri, Aug 6, 2010 at 6:18 PM, Mark wrote
In the "CassandraLimitations" wiki it states:
" Cassandra has two levels of indexes: key and column"
I understand how the column and subcolumn indexes work but can someone
explain to me how the key level index works?
On 8/7/10 4:22 AM, Thomas Heller wrote:
Ok, I think the part I was missing was the concatenation of the key and
partition to do the look ups. Is this the preferred way of accomplishing
needs such as this? Are there alternatives ways?
Depending on your needs you can concat the row key or us
On 8/7/10 11:30 AM, Mark wrote:
On 8/7/10 4:22 AM, Thomas Heller wrote:
Ok, I think the part I was missing was the concatenation of the key and
partition to do the look ups. Is this the preferred way of
accomplishing
needs such as this? Are there alternatives ways?
Depending on your needs
On 8/7/10 2:33 PM, Benjamin Black wrote:
Right, this is an index row per time interval (your previous email was not).
On Sat, Aug 7, 2010 at 11:43 AM, Mark wrote:
On 8/7/10 11:30 AM, Mark wrote:
On 8/7/10 4:22 AM, Thomas Heller wrote:
Ok, I think the part I was missing was
On 8/7/10 7:04 PM, Benjamin Black wrote:
certainly it matters: your previous version is not bounded on time, so
will grow without bound. ergo, it is not a good fit for cassandra.
On Sat, Aug 7, 2010 at 2:51 PM, Mark wrote:
On 8/7/10 2:33 PM, Benjamin Black wrote:
Right, this is an
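The bounded design being described can be sketched like this: the index row key combines the indexed value with a day bucket, so every row stops growing when its interval ends, and "the last X days for remote_addr" becomes a slice over X known row keys (the key format is illustrative):

```python
from datetime import date, timedelta

def index_row_key(remote_addr, day):
    """One index row per (address, day): bounded, unlike one row per address."""
    return f"{remote_addr}:{day.isoformat()}"

def last_n_days_keys(remote_addr, n, today):
    """The row keys to slice for the last n days, newest first."""
    return [index_row_key(remote_addr, today - timedelta(days=i))
            for i in range(n)]
```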
I'm running a 2 node cluster and when I run nodetool ring I get the
following output
Address     Status  State   Load      Token
                                      160032583171087979418578389981025646900
127.0.0.1   Up      Normal  42.28 MB  42909338385373526599163667549814010
On 8/9/10 12:51 PM, S Ahmed wrote:
that's the token range
so node#1 is from 1600.. to 429..
node#2 is from 429... to 1600...
hopefully others can chime into confirm.
On Mon, Aug 9, 2010 at 12:30 PM, Mark <static.void@gmail.com> wrote:
I'm running a 2 node
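To make the token arithmetic concrete: with RandomPartitioner a key hashes to a big-integer md5 token, and the key belongs to the first node whose token is greater than or equal to it, wrapping around to the lowest token. An illustrative sketch (not Cassandra's actual code; the real partitioner normalizes the hash slightly differently):

```python
import hashlib

def key_token(key):
    """RandomPartitioner-style token: md5 of the key as a big integer."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def owner(tokens_to_nodes, token):
    """The node owning a token: first node token >= it, wrapping around."""
    ring = sorted(tokens_to_nodes)
    for t in ring:
        if token <= t:
            return tokens_to_nodes[t]
    return tokens_to_nodes[ring[0]]    # wrapped past the highest token
```

Each node thus owns the arc of the ring ending at its own token, which is what the reply above is describing.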
When I kill cassandra and restart I keep seeing the following error
message. Is this something I should be concerned about?
I thought killing cassandra via kill $PID was somewhat "safe"?
Thanks
Switching to ParallelGC to avoid CMS/CompressedOops incompatibility
INFO 17:05:08,680 DiskAccessMod
"org.apache.thrift.protocol.TProtocolException: Missing version in
readMessageBegin, old client?"
Is the CLI not supported when using TSocket? I don't believe this was
the same in 0.6.
Can someone explain the differences between TFramedTransport vs TSocket.
I tried searching but I couldn't f
How is this accomplished?
I tried using the
org.apache.cassandra.service.StorageService.loadSchemaFromYAML() method
but I am receiving the following error.
java.util.concurrent.ExecutionException:
org.apache.cassandra.config.ConfigurationException: Cannot load from XML
on top of pre-existin
On 8/11/10 8:02 PM, Brandon Williams wrote:
On Wed, Aug 11, 2010 at 7:09 PM, Mark <static.void@gmail.com> wrote:
When I kill cassandra and restart I keep seeing the following
error message. Is this something I should be concerned about?
No, there's just a rac
On 8/11/10 10:11 PM, Jonathan Ellis wrote:
you have to use an up-to-date CLI; the old one used broken options w/
its framed mode
On Wed, Aug 11, 2010 at 6:39 PM, Mark wrote:
"org.apache.thrift.protocol.TProtocolException: Missing version in
readMessageBegin, old client?"
Is t
On 8/12/10 8:29 AM, Mark wrote:
On 8/11/10 10:11 PM, Jonathan Ellis wrote:
you have to use an up to date CLI, the old one used broken options w/
its framed mode
On Wed, Aug 11, 2010 at 6:39 PM, Mark wrote:
"org.apache.thrift.protocol.TProtocolException: Missing version in
readMessage
On 8/12/10 9:14 PM, Jonathan Ellis wrote:
Works fine here.
bin/cassandra-cli --host localhost --port 9160
Connected to: "Test Cluster" on localhost/9160
Welcome to cassandra CLI.
On Thu, Aug 12, 2010 at 2:18 PM, Mark wrote:
On 8/12/10 8:29 AM, Mark wrote:
On 8/11/1
On 8/12/10 10:20 PM, Mark wrote:
On 8/12/10 9:14 PM, Jonathan Ellis wrote:
Works fine here.
bin/cassandra-cli --host localhost --port 9160
Connected to: "Test Cluster" on localhost/9160
Welcome to cassandra CLI.
On Thu, Aug 12, 2010 at 2:18 PM, Mark wrote:
On 8/12/10 8:29 AM,
On 8/13/10 7:09 AM, Jonathan Ellis wrote:
if you turn off framed mode (by setting the transport size to 0)
then you need to use the unframed option with the cli
On Thu, Aug 12, 2010 at 10:20 PM, Mark wrote:
On 8/12/10 9:14 PM, Jonathan Ellis wrote:
Works fine here.
bin/cassandra
I'm a little confused on when I should be using TimeUUID vs Epoch/Long
when I want columns ordered by time. I know it sounds strange and the
obvious choice should be TimeUUID but I'm not sure why that would be
preferred over just using the Epoch stamp?
They pretty much seem to accomplish the sa
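One concrete difference: column names must be unique, and two events with the same epoch stamp collide, whereas a version-1 (Time) UUID embeds a 60-bit count of 100 ns intervals since 1582-10-15 plus clock-sequence and node bits, so simultaneous events still get distinct names. Recovering the Unix time from one, as a sketch based on the standard v1 layout:

```python
import uuid

# 100 ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
GREGORIAN_OFFSET = 0x01B21DD213814000

def uuid1_to_unix(u):
    """Recover Unix seconds from a version-1 (Time) UUID."""
    if u.version != 1:
        raise ValueError("not a TimeUUID")
    return (u.time - GREGORIAN_OFFSET) / 1e7
```

Note that Cassandra's TimeUUIDType comparator orders columns by this embedded timestamp rather than by raw byte order, which is what makes TimeUUID columns time-sorted.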
Keys are indexed in Cassandra but are they ordered? If so, how?
Do Key Slices work like Range Slices for columns, i.e. can I give a start
and end range? It seems that if they are not ordered (which I think is
true) then performing KeyRanges would be somewhat inefficient, or at
least not as effic
3, 2010 at 6:32 PM, Mark wrote:
I'm a little confused on when I should be using TimeUUID vs Epoch/Long when
I want columns ordered by time. I know it sounds strange and the obvious
choice should be TimeUUID but I'm not sure why that would be preferred over
just using the Epoch sta
Actually, UUID generators don't give you that; you'll have to 'do it
yourself', which is annoying (but not very hard if you look at the UUID
version 1 layout).
--
Sylvain
On Fri, Aug 13, 2010 at 6:54 PM, Mark wrote:
On 8/13/10 9:38 AM, Sylvain Lebresne wrote:
As long as
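Following that suggestion, a TimeUUID for an arbitrary timestamp (e.g. the start of a time-range slice) can be assembled by hand from the version-1 layout. In this sketch, using clock_seq and node of 0 to get the smallest possible UUID for the instant is my assumption, not something prescribed by the thread:

```python
import uuid

# 100 ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
GREGORIAN_OFFSET = 0x01B21DD213814000

def timeuuid_for(unix_seconds, clock_seq=0, node=0):
    """Build a version-1 UUID carrying an arbitrary timestamp."""
    t = int(unix_seconds * 1e7) + GREGORIAN_OFFSET
    time_low = t & 0xFFFFFFFF
    time_mid = (t >> 32) & 0xFFFF
    time_hi_version = ((t >> 48) & 0x0FFF) | 0x1000    # version 1
    clock_seq_hi = ((clock_seq >> 8) & 0x3F) | 0x80    # RFC 4122 variant
    return uuid.UUID(fields=(time_low, time_mid, time_hi_version,
                             clock_seq_hi, clock_seq & 0xFF, node))
```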
Is there some way I can count the number of rows in a CF.. CLI, MBean?
Gracias
On 8/13/10 10:44 AM, Jonathan Ellis wrote:
not without fetching all of them with get_range_slices
On Fri, Aug 13, 2010 at 10:37 AM, Mark wrote:
Is there some way I can count the number of rows in a CF.. CLI, MBean?
Gracias
I'm guessing you would advise against this? Any
On 8/13/10 10:52 AM, Jonathan Ellis wrote:
because it would work amazingly poorly w/ billions of rows. it's an
antipattern.
On Fri, Aug 13, 2010 at 10:50 AM, Mark wrote:
On 8/13/10 10:44 AM, Jonathan Ellis wrote:
not without fetching all of them with get_range_slices
On Fri
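For completeness, the full-scan count being discussed amounts to paging get_range_slices until it runs dry; the sketch below (fetch_page is a hypothetical stand-in for that Thrift call) also makes the antipattern visible, since the work is proportional to the total row count:

```python
def count_rows(fetch_page, page_size=1000):
    """Count rows by walking the whole CF in inclusive-start pages.

    fetch_page(start_key, n) must return up to n row keys >= start_key
    in key order, like get_range_slices. Fine on a test cluster; with
    billions of rows this touches every one of them.
    """
    total, start = 0, ""
    while True:
        keys = fetch_page(start, page_size + (1 if start else 0))
        if start:
            keys = keys[1:]        # the start key is inclusive: drop the repeat
        if not keys:
            return total
        total += len(keys)
        start = keys[-1]
```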
Just upgraded my cassandra gem today to b/cassandra fork and noticed
that the transport changed. I re-enabled TFramedTransport in
cassandra.yaml but my client no longer works. I keep receiving the
following error.
Thrift::ApplicationException: describe_keyspace failed: unknown result
from
Also fixed a bug another user found when
running with Ruby 1.9.
Summary: pull again, use master, have fun. If it still doesn't work,
please open an issue to me.
b
On Mon, Aug 16, 2010 at 2:13 PM, Mark wrote:
Just upgraded my cassandra gem today to b/cassandra fork and noticed that
the tra
On 8/16/10 6:19 PM, Benjamin Black wrote:
client = Cassandra.new('system', '127.0.0.1:9160')
Brand new download of beta-0.7.0-beta1
http://gist.github.com/528357
Which thrift/thrift_client versions are you using?
On 8/16/10 8:51 PM, Mark wrote:
On 8/16/10 6:19 PM, Benjamin Black wrote:
client = Cassandra.new('system', '127.0.0.1:9160')
Brand new download of beta-0.7.0-beta1
http://gist.github.com/528357
Which thrift/thrift_client versions are you using?
FYI also tested simi
someone comment on whether 0.7 beta1 is at Thrift interface
version 10.0.0 or 11.0.0?
b
On Mon, Aug 16, 2010 at 9:03 PM, Mark wrote:
On 8/16/10 8:51 PM, Mark wrote:
On 8/16/10 6:19 PM, Benjamin Black wrote:
client = Cassandra.new('system', '127.0.0.1:9160')
Brand new
On 8/17/10 5:44 PM, Benjamin Black wrote:
Updated code is now in my master branch, with the reversion to 10.0.0.
Please let me know of further trouble.
b
On Tue, Aug 17, 2010 at 8:31 AM, Mark wrote:
On 8/16/10 11:37 PM, Benjamin Black wrote:
I'm testing with the default cassandra
Are there any examples/tutorials on the web for reading from and
writing to Cassandra with Hadoop?
I found the example in contrib/word_count but I really can't make sense
of it... a tutorial/explanation would help.
just write to cassandra directly via
thrift. there is a built-in outputformat coming in 0.7 but it
still might change before 0.7 final - that will queue up changes
so it will write large blocks all at once.
On Aug 19, 2010, at 12:07 PM, Mark wrote:
> Are there any ex
at will queue up changes so it will write large blocks all at
once.
On Aug 19, 2010, at 12:07 PM, Mark wrote:
Are there any examples/tutorials on the web for reading/writing from Cassandra
into/from Hadoop?
I found the example in contrib/word_count but I really can't make sense of
it..
On 8/19/10 11:14 AM, Mark wrote:
On 8/19/10 10:23 AM, Jeremy Hanna wrote:
I would check out http://wiki.apache.org/cassandra/HadoopSupport for
more info. I'll try to explain a bit more here, but I don't think
there's a tutorial out there yet.
For input:
- configure your m
On 8/20/10 1:05 AM, Thorvaldsson Justus wrote:
I think you should try to do it some other way than iterate, it sounds
super suboptimal to me. Also the plugin option he was thinking of I
think is changing Cassandra sourcecode, kind of hard when Cassandra is
changing so fast but very possible.
Is there any way to remove drop column family/keyspace privileges?
On 8/21/10 4:36 PM, Benjamin Black wrote:
For reference, I learned this from reading the source:
thrift/CassandraServer.java
On Sat, Aug 21, 2010 at 4:19 PM, Mark wrote:
Is there anyway to remove drop column family/keyspace privileges?
It seems that SimpleAuthenticator out of the box is all
Using 0.7beta1 and I noticed one of my nodes was not responding.. wtf?
Went to restart and I got the following error. Any clues?
Exception encountered during startup.
java.lang.StackOverflowError
at java.util.Vector.ensureCapacityHelper(Vector.java:238)
at java.util.Vector.addElement(Ve
On 8/26/10 11:15 AM, Mark wrote:
Using 0.7beta1 and I noticed one of my nodes was not responding..
wtf? Went to restart and I got the following error. Any clues?
Exception encountered during startup.
java.lang.StackOverflowError
at java.util.Vector.ensureCapacityHelper(Vector.java:238
On 8/26/10 11:16 AM, Mark wrote:
On 8/26/10 11:15 AM, Mark wrote:
Using 0.7beta1 and I noticed one of my nodes was not responding..
wtf? Went to restart and I got the following error. Any clues?
Exception encountered during startup.
java.lang.StackOverflowError
at
On 8/26/10 11:45 AM, thelastpickle.com wrote:
Looks like this https://issues.apache.org/jira/browse/CASSANDRA-1435
Aaron
Sent from my iPad
On 27 Aug 2010, at 06:16, Mark wrote:
On 8/26/10 11:15 AM, Mark wrote:
Using 0.7beta1 and I noticed one of my nodes was not responding.. wtf? Went to
alized, which... you get the picture.
The stack trace is very similar to
https://issues.apache.org/jira/browse/CASSANDRA-1382, which is fixed
and will be included in the next beta.
Gary.
On Thu, Aug 26, 2010 at 13:15, Mark wrote:
Using 0.7beta1 and I noticed one of my nodes was not respondi
I have a 2 node cluster (testing the waters) w/ a replication factor
of 2. One node got completely screwed up (see any of my previous messages
from today) so I deleted the commit log and data directory. I restarted
the node and ran nodetool repair as described in
http://wiki.apache.org/cassand
On 8/26/10 3:03 PM, Aaron Morton wrote:
Check the logs for errors and run nodetool streams to see if it's
moving data around.
Aaron
On 27 Aug, 2010,at 09:53 AM, Mark wrote:
I have a 2 node cluster (testing the waters) w/ a replication factor
of 2. One node got completely screwed up
I will be load balancing between nodes using HAProxy. Is this recommended?
Also, is there some sort of ping/health-check URI available?
Thanks
On 8/28/10 11:20 AM, Benjamin Black wrote:
no and no.
On Sat, Aug 28, 2010 at 10:28 AM, Mark wrote:
I will be loadbalancing between nodes using HAProxy. Is this recommended?
Also is there a some sort of ping/health check uri available?
Thanks
any reason why load balancing client
On 8/28/10 11:20 AM, Benjamin Black wrote:
no and no.
On Sat, Aug 28, 2010 at 10:28 AM, Mark wrote:
I will be loadbalancing between nodes using HAProxy. Is this recommended?
Also is there a some sort of ping/health check uri available?
Thanks
Also, what would be a good way of
On 8/28/10 2:44 PM, Benjamin Black wrote:
On Sat, Aug 28, 2010 at 2:34 PM, Anthony Molinaro
wrote:
I think maybe he thought you meant put a layer between cassandra internal
communication.
No, I took the question to be about client connections.
There's no problem balancing client connection
Is there an easy way to retrieve all values from a CF.. similar to a
dump?
How about retrieving all columns for a particular key?
In the second use case a simple iteration would work using a start and
finish but how would this be accomplished across all keys for a
particular CF when you don'
How can one truncate or drop a column family in 0.6.x?
Thanks
I am using 0.6.5, so I guess it's as easy as draining and removing the data
files. Thanks
On 9/7/10 10:28 AM, Rob Coli wrote:
On 9/7/10 10:09 AM, Jonathan Ellis wrote:
flush, stop server, remove the data files, start server
As I understand it, there's a race here where a new Memtable can be
creat
Does anyone know of any good tutorials for using Pig with Cassandra?
I am trying do a basic load:
rows = LOAD 'cassandra://Foo/Bar' USING CassandraStorage();
but i keep getting this error.
ERROR 1070: Could not resolve CassandraStorage using imports: [,
org.apache.pig.builtin., org.apache.pi
I am trying to run the Cassandra pig example but I keep receiving...
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable
to open iterator for alias rows
at org.apache.pig.PigServer.openIterator(PigServer.java:521)
at
org.apache.pig.tools.grunt.GruntParser.processDum
How does one enable framed transport when using the pig loadfunc?
Thanks
0 at 10:32 AM, Mark wrote:
I am trying to run the Cassandra pig example but I keep receiving...
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to
open iterator for alias rows
at org.apache.pig.PigServer.openIterator(PigServer.java:5
hing else).
On Fri, Sep 24, 2010 at 2:14 PM, Mark wrote:
How does one enable framed transport when using the pig loadfunc?
Thanks
As the subject implies I am trying to dump Cassandra rows into Hadoop.
What is the easiest way for me to accomplish this? Thanks.
Should I be looking into Pig for something like this?
I tried adding a new node and rebalanced the ring via nodetool move but
ended up in a weird state. I blew away all data from 2 nodes (out of 3)
and manually set tokens, but it's completely unbalanced.
[r...@cassandra1 apache-cassandra]# bin/nodetool --host localhost --port
8080 ring
Address
Is there any way to use DIH to import from Cassandra? Thanks
es or indexes to know what has changed.
There is also the Lucandra project; not exactly what you're after, but
it may be of interest anyway: https://github.com/tjake/Lucandra
Hope that helps.
Aaron
On 30 Nov, 2010,at 05:04 AM, Mark wrote:
Is there any way to use DIH to import from Cassandra? Thanks
-XX:NativeMemoryTracking=summary and use jcmd to
check out native memory usage from the JVM's perspective.
-Mark
the memory?
Mark Bryant
Tel: +44 (0)114 2426766
www.sumo-digital.com
32 Jessops Riverside, 800 Brightside Lane, Sheffield, S9 2RX, United Kingdom
Why would I want to use alter table vs upserts with the new document format?
Mark Furlong
Sr. Database Administrator
mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043
e
types of loads? Second, any ideas what could be creating bottlenecks for
schema alteration?
Thanks!
--
Mark Bidewell
http://www.linkedin.com/in/markbidewell
the performance limits of the gossip protocol this history makes
restoring snapshots time-intensive.
Is there a way to, for lack of a better word, "rebase" the schema and
sstables of a table to the latest schema to expunge the history?
Thanks!
--
Mark Bidewell
http://www.linke
+1 to what Eric said, a queue is a classic C* anti-pattern. Something like
Kafka or RabbitMQ might fit your use case better.
Mark
On 24 May 2016 at 18:03, Eric Stevens wrote:
> It sounds like you're trying to build a queue in Cassandra, which is one
> of the classic anti-pattern us
sing c3/m3 instances with the commit log on the ephemeral storage and
data on st1 EBS volumes to be much more cost effective. It's something
to look into if you haven't already.
-Mark
On Fri, Jul 22, 2016 at 8:10 AM, Juho Mäkinen wrote:
> After a few days I've also tried disab