ranges will only be repaired once with incremental repair, which
is what -pr was intended to fix for full repair, by repairing all token ranges only
once instead of replication-factor times.
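As a rough illustration (a sketch only, not from the original reply; the host
addresses are made up and nodetool is assumed to be on PATH), a full,
non-incremental repair that covers every token range exactly once would run
-pr on each node in turn:

# Sketch only: run a full (non-incremental) primary-range repair on every node.
# Hosts are hypothetical; adjust for your cluster and JMX settings.
import subprocess

hosts = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

for host in hosts:
    # -pr restricts each node to the ranges it owns as primary, so across the
    # whole loop every token range is repaired exactly once.
    subprocess.run(["nodetool", "-h", host, "repair", "--full", "-pr"], check=True)

# With incremental repair (the default since 2.2), already-repaired sstables are
# skipped, so the redundancy that -pr was designed to avoid doesn't arise.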
Cheers,
On Mon, Oct 24, 2016 at 18:05, Sean Bridges
<sean.brid...@globalrelay.net> wrote:
Hey,
In the datastax documentation [1], should we use -pr with incremental repairs?
Thanks,
Sean
[1]
https://docs.datastax.com/en/cassandra/3.x/cassandra/operations/opsRepairNodesManualRepair.html
--
Sean Bridges
senior systems architect
Global Relay
sean.bridges@globalrelay.net
866.484.6630
Running incremental repair will leave some sstables marked as repaired and others
marked as unrepaired, which will never be compacted together.
You might want to flag all sstables as unrepaired before moving on, if
you do not intend to switch to incremental repair for now.
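If it helps, here is a minimal sketch of doing that with the sstablerepairedset
tool that ships with Cassandra (assumptions: the node is stopped, the data
lives under the default /var/lib/cassandra/data layout, and the tool is on PATH):

# Sketch only: mark every sstable as unrepaired so they all stay in one
# compaction pool. Run with the node stopped; paths follow the default layout.
import glob
import subprocess

data_files = glob.glob("/var/lib/cassandra/data/*/*/*-Data.db")
subprocess.run(
    ["sstablerepairedset", "--really-set", "--is-unrepaired", *data_files],
    check=True,
)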
Cheers,
On Wed, Oct 19, 2016 at 6:31 PM Sean Bridges
<sean.brid...@globalrelay.net> wrote:
Hey,
We are upgrading from cassandra 2.1 to cassandra 2.2.
With cassandra 2.1 we would periodically repair all nodes, using the -pr
flag.
With cassandra 2.2, the same repair takes a very long time, as cassandra
does an anti compaction after the repair. This anti compaction causes
most (all?) of our sstables to be rewritten.
We are using lightweight transactions, two datacenters, and DC-local
consistency levels.
There is a comment in CASSANDRA-5797,
"This would require manually truncating system.paxos when failing over."
Is that required? I don't see it documented anywhere else.
Thanks,
Sean
https://issues.apache.org/jira/browse/CASSANDRA-5797
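For reference, a lightweight transaction confined to the local datacenter looks
roughly like this with the DataStax Python driver (an illustrative sketch only;
the contact point, keyspace, and table names are hypothetical):

# Sketch only: an LWT kept inside the local DC. Contact point, keyspace and
# table are made up for illustration.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1"])
session = cluster.connect("my_keyspace")

stmt = SimpleStatement(
    "INSERT INTO locks (name, owner) VALUES (%s, %s) IF NOT EXISTS",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,         # commit/read phase
    serial_consistency_level=ConsistencyLevel.LOCAL_SERIAL,  # Paxos phase
)
row = session.execute(stmt, ("job-42", "worker-1")).one()
print("applied:", row[0])  # first column of an LWT result is the [applied] flag

The Paxos state behind such IF conditions is kept in the local system.paxos
table on each replica, which is what the CASSANDRA-5797 comment about
truncating system.paxos refers to.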
> On Thu, Oct 23, 2014 at 9:33 AM, Sean Bridges
> wrote:
>
>> The change from parallel to sequential is very dramatic. For a small
>> cluster with 3 nodes, using cassandra 2.0.10, a parallel repair takes 2
>> hours, and io throughput peaks at 6 mb/s. Sequential repair takes 40
>> hours, with average io around 27 mb/s. Should I file a jira?
>>
>> Sean
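For what it's worth (an aside, not part of the quoted messages): on the 2.0
line sequential repair became the default, and the earlier parallel behaviour
can still be requested with the -par flag, for example something like the
sketch below (the host address is made up, nodetool is assumed on PATH):

# Sketch only: force the pre-2.0 parallel behaviour on a 2.0.x node with -par.
import subprocess

subprocess.run(["nodetool", "-h", "10.0.0.1", "repair", "-par", "-pr"], check=True)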
On Wed, Oct 15, 2014 at 9:23 PM, Sean Bridges
wrote:
> Thanks Robert. Does the switch to sequential from parallel explain why IO
> increases? We see significantly higher IO with 2.0.10.
>
> The node
> The Summary.db files aren't particularly important, they're primarily an
> optimization for startup time.
>
> On Thu, Oct 16, 2014 at 12:20 PM, Sean Bridges
> wrote:
>
>> Hello,
>>
>> I thought an sstable was immutable once written to disk. Before
>> upgrading from 1.2.18 to 2.0.10 we took a snapshot of our sstables.
Hello,
I thought an sstable was immutable once written to disk. Before upgrading
from 1.2.18 to 2.0.10 we took a snapshot of our sstables. Now when I
compare the files in the snapshot dir and the original files, the Summary.db
files have a newer modified date, and the file sizes have changed.
the other replicas, because at least one replica in the
snapshot is not undergoing repair."
Sean
[1]
http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsRepair.html
On Wed, Oct 15, 2014 at 5:36 PM, Robert Coli wrote:
> On Wed, Oct 15, 2014 at 4:54 PM, Sean Bridges
> wrote:
Hello,
We upgraded a cassandra cluster from 1.2.18 to 2.0.10, and it looks like
repair is significantly more expensive now. Is this expected?
We schedule rolling repairs through the cluster. With 1.2.18 a repair
would take 3 hours or so. The first repair after the upgrade has been
going on for
>> Dan - I think you are right, because Cassandra doesn't wait until repair writes
>> are acked before the answer is returned. This is something we can fix.
>>
>> On Sun, Apr 17, 2011 at 12:05 AM, Sean Bridges
>> wrote:
>> > Tyler, your answer seems to contradict this email by Jonathan Ellis
> This is something we can fix.
>
> On Sun, Apr 17, 2011 at 12:05 AM, Sean Bridges wrote:
>> Tyler, your answer seems to contradict this email by Jonathan Ellis
>> [1]. In it Jonathan says,
>>
>> "The important guarantee this gives you is that once one quorum read
>
Tyler, your answer seems to contradict this email by Jonathan Ellis
[1]. In it Jonathan says,
"The important guarantee this gives you is that once one quorum read
sees the new value, all others will too. You can't see the newest
>> version, then see an older version on a subsequent write [sic, I
>> assume he means read].
If you are reading and writing at quorum, then what you are seeing
shouldn't happen. You shouldn't be able to read N+1 until N+1 has
been committed to a quorum of servers. At this point you should not
be able to read N anymore, since there is no quorum that contains N.
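The overlap argument behind that can be sketched in a few lines (illustrative
only, not driver code): any write quorum and any read quorum over the same
replica set must share at least one replica, which is why a quorum read cannot
miss a quorum-committed write.

# Illustrative sketch of the quorum-overlap argument (RF = 3, quorum = 2).
from itertools import combinations

rf = 3
quorum = rf // 2 + 1
replicas = range(rf)

# Every possible write quorum intersects every possible read quorum, so at
# least one replica seen by the read already holds the committed value.
assert all(
    set(w) & set(r)
    for w in combinations(replicas, quorum)
    for r in combinations(replicas, quorum)
)
print("every read quorum overlaps every write quorum")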
Dan - I think you are right, because Cassandra doesn't wait until repair writes
are acked before the answer is returned.
23, 2010 at 2:26 PM, Sean Bridges wrote:
>> We were running a load test against a single 0.6.2 cassandra node. 24
>> hours into the test, Cassandra appeared to be nearly frozen for 10
>> minutes. Our write rate went to almost 0, and we had a large number
>> of write timeouts.
We were running a load test against a single 0.6.2 cassandra node. 24
hours into the test, Cassandra appeared to be nearly frozen for 10
minutes. Our write rate went to almost 0, and we had a large number
of write timeouts. We weren't swapping or gc'ing at the time.
It looks like the problems
12:00 AM, gabriele renzi wrote:
> On Wed, May 26, 2010 at 8:00 PM, Sean Bridges
> wrote:
> > So after CASSANDRA-579, anti compaction won't be done on the source node,
> > and we can use more than 50% of the disk space if we use multiple column
> > families?
>
>
> for some
> background here: I was just about to start working on this one, but it won't
> make it in until 0.7.
>
>
> -----Original Message-----
> From: "Sean Bridges"
> Sent: Wednesday, May 26, 2010 11:50am
> To: user@cassandra.apache.org
> Subject: using m
We're investigating Cassandra, and we are looking for a way to get Cassandra
to use more than 50% of its data disks. Is this possible?
For major compactions, it looks like we can use more than 50% of the disk if
we use multiple similarly sized column families. If we had 10 column
families of the same size, a major compaction would only need free space for
one of them at a time.
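A quick back-of-the-envelope version of that argument (my own numbers, purely
illustrative): a major compaction rewrites one column family at a time, so the
free headroom needed is roughly the size of the largest single column family.

# Illustrative arithmetic only; the disk size and layout are made up.
disk_gb = 1000

def max_usable(num_cfs):
    # With data split evenly across num_cfs column families of size s each,
    # compaction headroom is one CF: num_cfs * s + s <= disk_gb,
    # so usable data is disk_gb * num_cfs / (num_cfs + 1).
    return disk_gb * num_cfs / (num_cfs + 1)

print(max_usable(1))   # 500.0 -> the familiar "50% of the disk" rule of thumb
print(max_usable(10))  # ~909.1 -> roughly 91% usable with 10 equal CFs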