Awesome! Thanks!
-Karl
> On Sep 12, 2014, at 5:34 PM, Michael Shuler wrote:
>
>> On 09/12/2014 01:50 PM, Karl Rieb wrote:
>> Hi,
>>
>> Wondering when 2.0.10 will be available through the datastax apt repository?
>
> I'll have 2.0.10 deb/rpm packages in the repos on Monday, barring any issues.
On 09/12/2014 01:50 PM, Karl Rieb wrote:
> Hi,
>
> Wondering when 2.0.10 will be available through the datastax apt repository?

I'll have 2.0.10 deb/rpm packages in the repos on Monday, barring any
issues. You can certainly pull the identical cassandra deb package from
the Apache apt repository.
Hi,
Wondering when 2.0.10 will be available through the datastax apt repository?
-Karl
On Fri, Sep 12, 2014 at 10:37 AM, Robert Wille wrote:
> So, here’s my question. Should I roll out on 2.0 or 2.1? My code obviously
> works on 2.0 (although there are some 2.1 features I could take advantage
> of).
>
> The data I’m moving to Cassandra is very core to our product. It’s a few
> bi
I’m in a fairly unique position. Almost a year ago I developed code to migrate
part of our MySQL database to Cassandra. Shortly after 2.0.6 was released, I
was on the verge of rolling it out to live when my project got shelved, and my team
got put on a completely different product. In a month or two
On Fri, Sep 12, 2014 at 6:57 AM, Tom van den Berge wrote:
> Wouldn't it be far more efficient if a node that is rebuilding itself is
> responsible for not accepting reads until the rebuild is complete? E.g. by
> marking it as "Joining", similar to a node that is being bootstrapped?
>
Yes, and Ca
+1 for Redis.
It's really nice: good primitives, and you can do some really cool
stuff by chaining multiple atomic operations into larger atomic units
through Lua scripting.
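For illustration, here is a rough sketch of that pattern with redis-py;
the key names and the transfer logic are made up, just to show the idea
of turning several commands into one atomic Lua script:

# Sketch only: hypothetical "balance" keys, using redis-py.
import redis

r = redis.Redis(host="localhost", port=6379)

# The script runs atomically on the server, so check + decrement +
# increment behave as a single larger atomic operation.
TRANSFER = """
local available = tonumber(redis.call('GET', KEYS[1]) or '0')
local amount = tonumber(ARGV[1])
if available < amount then
    return 0
end
redis.call('DECRBY', KEYS[1], amount)
redis.call('INCRBY', KEYS[2], amount)
return 1
"""

transfer = r.register_script(TRANSFER)
ok = transfer(keys=["balance:alice", "balance:bob"], args=[25])
print("transferred" if ok else "insufficient funds")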
On Thu, Sep 11, 2014 at 12:26 PM, Robert Coli wrote:
> On Thu, Sep 11, 2014 at 8:30 AM, Danny Chan wrote:
>
>> Wha
Hi Oleg,
Connectors don't deal with HA; they rely on Spark for that, so neither
the Datastax connector, Stratio Deep, nor Calliope has anything to do
with Spark's HA. You should have previously configured Spark so that it
meets your high-availability needs. Furthermore, as I mentioned in a
pr
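Just to illustrate that separation of concerns (the hostnames below are
placeholders, and this assumes a ZooKeeper-backed standby master has
already been configured on the Spark side), the application only needs
to list all masters so it can fail over to whichever one is alive:

# PySpark sketch; the master URLs are assumptions, not real hosts.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("cassandra-job")
        .setMaster("spark://master1:7077,master2:7077"))
sc = SparkContext(conf=conf)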
Giving this some more thought, I think it's fair to say that using
LOCAL_ONE and LOCAL_QUORUM instead of ONE and QUORUM in this situation is
actually a workaround rather than a solution to this problem.
LOCAL_ONE and LOCAL_QUORUM were introduced to ensure that only the local DC
is used, which can
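For reference, this is roughly what that workaround looks like from a
client with the DataStax Python driver (the contact point, keyspace, and
table names here are placeholders):

# Sketch with the DataStax Python driver (cassandra-driver).
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")

# LOCAL_QUORUM only counts replicas in the coordinator's data center,
# so replicas in a remote DC (e.g. one that is still rebuilding) are
# never required to answer the read.
query = SimpleStatement(
    "SELECT * FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
rows = session.execute(query, (42,))
for row in rows:
    print(row)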