Hi, all.
I have a question about "batch" commit log sync behavior with C* version
2.2.8.
Here's what I have done:
* set commitlog_sync to the "batch" mode as follows:
> commitlog_sync: batch
> commitlog_sync_batch_window_in_ms: 1
* ran a script which inserts the data to a table
* prepared
Hello,
I set up a two-node cluster on Azure, each node in its own DC (ping about 10 ms),
with inter-node connections (SSL port 7001) going through the external IPs, i.e.
listen_interface: eth0
broadcast_address: 1.1.1.1
The cluster starts, cqlsh can connect, and the stress tool survives a night of writes
with replic
Hi,
we have two Cassandra 2.1.15 clusters at work and are having some
trouble with repairs.
Each cluster has 9 nodes, and the amount of data is not gigantic but
some column families have 300+GB of data.
We tried to use `nodetool repair` for these tables but at the time we
tested it, it made the w
Hi experts!
It seems that we have data loss after a rolling update of our Cassandra
2.2.7 nodes.
The nodes failed to start with an exception:
IllegalStateException: empty rows returned when reading
system.schema_keyspaces (
https://issues.apache.org/jira/browse/CASSANDRA-12351)
*Workarou
Hi Vincent,
most people handle repair with :
- pain (by hand running nodetool commands)
- cassandra range repair :
https://github.com/BrianGallew/cassandra_range_repair
- Spotify Reaper
- and OpsCenter repair service for DSE users
Reaper is a good option I think and you should stick to it. If it
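For the "by hand" option above, here is a hedged sketch of a manual subrange repair; the keyspace/table names and token values are illustrative placeholders, not from this thread:

```shell
# Repair a single token subrange of one table; "mykeyspace"/"mytable"
# and the token boundaries are placeholders you'd compute from the ring.
nodetool repair -st -9223372036854775808 -et -4611686018427387904 mykeyspace mytable
```

Tools like Reaper automate exactly this splitting of the ring into subranges and scheduling them one by one.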
Thanks for the response.
We do break up repairs between tables, we also tried our best to have no
overlap between repair runs. Each repair has 1 segment (a purely
arbitrary number that seemed to help at the time). Some runs have an
intensity of 0.4, some as low as 0.05.
Still, sometimes one p
Oh right, that's what they advise :)
I'd say that you should skip the full repair phase in the migration
procedure as that will obviously fail, and just mark all sstables as
repaired (skip 1, 2 and 6).
Anyway you can't do better, so take a leap of faith there.
Intensity is already very low and 100
How can I get the size of a particular partition key belonging to an
sstable? Can we find it using the Index, Summary, or Statistics.db files?
Does reading a hexdump of these files help?
Thanks
Pranay.
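There is no built-in per-key size command in this era of Cassandra, but table-level partition size statistics are the usual starting point instead of hex-dumping sstable components. A hedged sketch; the keyspace/table names are placeholders:

```shell
# Per-table partition size stats ("Compacted partition maximum/mean bytes");
# "mykeyspace" and "mytable" are placeholder names.
nodetool cfstats mykeyspace.mytable

# Distribution of partition sizes and cell counts for the table:
nodetool cfhistograms mykeyspace mytable
```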
Ok, I think we'll give incremental repairs a try on a limited number of
CFs first and then if it goes well we'll progressively switch more CFs
to incremental.
I'm not sure I understand the problem with anticompaction and
validation running concurrently. As far as I can tell, right now when a
CF is
Hi Ben,
Thanks for your reply. We don't use timestamps in the primary key. We rely on
server-side timestamps generated by the coordinator, so no client-side functions
would help.
Yes, drifts can create problems too. But even if you ensure that nodes are
perfectly synced with NTP, you will surely mes
unsubscribe
Following https://issues.apache.org/jira/browse/CASSANDRA-9131. It is very
interesting to track how the timestamp has moved from the user, to the
server, then back to the user via the driver.
Next we will be accounting for the earths slowing rotation as the ice caps
melt :)
https://www.uwgb.edu
Azure has aggressively low keepalive settings for its networks. Ignore the
Mongo parts of this link and have a look at the OS settings they change.
https://docs.mongodb.com/ecosystem/platforms/windows-azure/
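The settings in question are the Linux TCP keepalive knobs. A hedged sketch; the values are commonly suggested for Azure's idle-connection timeouts, not quoted from the linked page:

```shell
# Start sending keepalives after 120s idle, then every 30s,
# and give up after 8 missed probes. Values are illustrative;
# tune them per the linked guidance.
sysctl -w net.ipv4.tcp_keepalive_time=120
sysctl -w net.ipv4.tcp_keepalive_intvl=30
sysctl -w net.ipv4.tcp_keepalive_probes=8
```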
*---Cliff Gilmore*
*Vanguard Solutions
3.3GB is already too high, and it surely doesn't help compactions perform
well. I know changing a data model is no easy thing to do, but
you should try to do something here.
Anticompaction is a special type of compaction and if an sstable is being
anticompacted, then any attempt to
If you need guaranteed strict ordering in a distributed system, I would not
use Cassandra; it does not provide this out of the box. I would look
to a system that uses Lamport or vector clocks. Based on your description
of how your systems runs at the moment (and how close your updates are
to
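To make the Lamport-clock suggestion above concrete, here is a minimal sketch of one; the class and method names are illustrative, not from any particular library:

```python
# Minimal Lamport clock sketch: a logical clock that gives a partial
# ordering of events across nodes without relying on wall-clock time.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event: advance the clock."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp an outgoing message."""
        return self.tick()

    def receive(self, msg_time):
        """Merge a received timestamp: take the max, then tick."""
        self.time = max(self.time, msg_time)
        return self.tick()

# Two nodes exchanging a message: the receiver's clock jumps past
# the sender's, so the receive event is ordered after the send.
a, b = LamportClock(), LamportClock()
a.tick(); a.tick()   # a.time == 2
t = a.send()         # t == 3
b.receive(t)         # b.time == 4
```

The key property is that a causally later event always gets a larger timestamp, which is what server-side wall-clock timestamps cannot guarantee under drift.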
Yeah that particular table is badly designed, I intend to fix it, when
the roadmap allows us to do it :)
What is the recommended maximum partition size?
Thanks for all the information.
On Thu, Oct 27, 2016, at 08:14 PM, Alexander Dejanovski wrote:
> 3.3GB is already too high, and it's surely no
I have the following table schema:
CREATE TABLE ticket_by_member (
  project_id text,
  member_id text,
  ticket_id text,
  ticket ticket,
  assigned_members list,
  votes list>,
  labels list>,
  PRIMARY KEY ( project_id, member_id, ticket_id )
);
I have a scenario wher
https://issues.apache.org/jira/browse/CASSANDRA-12654
On Thu, Oct 27, 2016 at 9:59 PM, Ali Akhtar wrote:
> I have the following table schema:
>
> CREATE TABLE ticket_by_member (
>   project_id text,
>   member_id text,
>   ticket_id text,
>   ticket ticket,
>   assigned_members list
I am interested in whether anyone has taken the approach of sharing the same
keystore across all the nodes, with the 3rd-party root/intermediate CA
existing only in the truststore. If so, please share your experience and
lessons learned. Would this impact client-to-node encryption as the
certificates used in
The "official" recommendation would be 100MB, but it's hard to give a
precise answer.
Keeping it under a GB seems like a good target.
A few patches are pushing the limits of partition sizes so we may soon be
more comfortable with big partitions.
Cheers
On Thu, Oct 27, 2016 at 9:28 PM, Vincent Rischm
I upgraded DSE 4.8.9 to 5.0.3, that is, from Cassandra 2.1.11 to 3.0.9
I used DSE 5.0.3 tarball installation. Cassandra cluster is up and running
OK and I am able to connect through DBeaver.
Tried a lot of things and cannot connect with cqlsh:
Connection error: ('Unable to connect to any servers'
If you go above ~1GB, the primary symptom you’ll see is a LOT of garbage
created on reads (CASSANDRA-9754 details this).
As redesigning data model is often expensive (engineering time, reloading data,
etc), one workaround is to tune your JVM to better handle situations where you
create a lot
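A hedged sketch of the kind of JVM tuning meant here, as cassandra-env.sh fragments; the flags and sizes are illustrative assumptions, not a recommendation from this thread:

```shell
# Illustrative CMS-era settings to better absorb the allocation bursts
# that large-partition reads create; adapt the sizes to your heap.
JVM_OPTS="$JVM_OPTS -Xmn2G"                      # bigger young generation
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=4"  # let short-lived garbage die young
```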
Hello Satoshi and the community,
I am also using commitlog_sync in batch mode for durability, but I had never
modified the commitlog_sync_batch_window_in_ms parameter,
so I wondered whether it is working or not.
As Satoshi said, I also changed commitlog_sync_batch_window_in_ms (to
1) and restarted C* and
issue
I mentioned during my Cassandra.yaml presentation at the summit that I
never saw anyone use these settings. Features that are off by default are
typically not covered well by tests. It sounds like it is not working.
Quick suggestion: go back in time maybe to a version like 1.2.X or 0.7 and
see if i
Hi Jacob,
there is no problem using the same certificate (whether issued by some
authority or self-signed) on all nodes, as long as it's present in the truststore.
The CN doesn't matter in this case; it can be any string you want.
Would this impact client-to-node encryption
No, but clients should either
Hi,
>size of a particular partition key
Can you please elaborate? A key can be just a number, a string, or several
values.
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.
On Thu, 27 Oct 2016 11:45:47 -0400, Pranay akula