tempting wasn't precisely the use case
I care about. What I'm following up with now was.
On Fri, Jun 19, 2015 at 8:22 PM, Mitch Gitman wrote:
> I checked the system.log for the Cassandra node that I did the jconsole
> JMX session against and which had the data to load. Lot of lo
I've inherited an integration of Cassandra's Codahale Metrics reporting
with Ganglia that looks sensible enough on the Metrics side. The
metrics-reporter-config.yaml points to a gmond.conf on the node. Excerpt:
ganglia:
  -
    period: 60
    timeunit: 'SECONDS'
    gmondConf: '/etc/ganglia/gmond.conf'
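For what it's worth, a reporter config like this only gets picked up if the
metrics-reporter agent is enabled at startup; a minimal sketch of the wiring in
cassandra-env.sh, assuming the yaml sits in the conf directory (the file name
here is an assumption, not quoted from the original post):

# cassandra-env.sh (sketch): point Cassandra at the reporter config
JVM_OPTS="$JVM_OPTS -Dcassandra.metricsReporterConfigFile=metrics-reporter-config.yaml"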
I just happened to run into a similar situation myself, and I can see it
comes down to bad schema design (and query design) on my part. What I wanted
to do was narrow down by a range on one clustering column and then by
another range on the next clustering column. Failing to adequately think
through
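To make the restriction concrete, here is a hedged sketch of the kind of query
CQL rejects (keyspace, table, and column names are invented for illustration):

# hypothetical table: PRIMARY KEY ((id), bucket, ts)
cqlsh -e "SELECT * FROM ks.events WHERE id = 1 AND bucket > 5 AND ts > 100;"
# Cassandra refuses this: once 'bucket' is restricted by a range (non-EQ),
# the following clustering column 'ts' cannot be restricted at all.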
I want to add an extra data point to this thread, having encountered much
the same problem. I'm using Apache Cassandra 3.10. I attempted to run an
incremental repair that was optimized to take advantage of some downtime
where the cluster is not fielding traffic and only repair each node's
primary pa
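For reference, a hedged sketch of the per-node invocation that approach implies
on 3.10, where incremental is the default repair mode (the keyspace name is a
placeholder):

# run on each node in turn during the quiet window;
# -pr limits the repair to the node's primary token ranges
nodetool repair -pr my_keyspace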
that particular JIRA ticket is coming from someone
reporting the same problem I'm seeing, and their experience indirectly
corroborates mine, or at least it doesn't contradict mine.
On Thu, Jul 27, 2017 at 10:26 AM, Michael Shuler
wrote:
> On 07/27/2017 12:10 PM, Mitch Gitman wrote:
I'm on Apache Cassandra 3.10. I'm interested in moving over to Reaper for
repairs, but in the meantime, I want to get nodetool repair working a
little more gracefully.
What I'm noticing is that, when I'm running a repair for the first time
with the --full option after a large initial load of data,
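For concreteness, the command in question is roughly of this shape (the
keyspace name is a placeholder, not the original command):

nodetool repair --full my_keyspace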
I'm using sstableloader to bulk-load a table from one cluster to another. I
can't just copy sstables because the clusters have different topologies.
While we're looking to upgrade soon to Cassandra 2.0.x, we're on Cassandra
1.2.19. The source data comes from a "nodetool snapshot."
Here's the comma
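As a point of reference, a typical sstableloader run against a snapshot
directory looks roughly like this (host and paths are placeholders, not the
original command):

# sstableloader expects a keyspace/table directory containing the sstables
sstableloader -d 10.0.0.1 /tmp/load/my_keyspace/my_table/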
already got this error on a 2.1 cluster because thrift was disabled. So
> you should check that thrift is enabled and accessible from the
> sstableloader process.
>
> Hope this helps
>
> Fabien
> On Jun 19, 2015 at 5:44 AM, "Mitch Gitman" wrote:
>
>> I'm using ss
s something wrong and fixable
about my particular cluster. On to exporting and re-importing data at the
proprietary application level. Life is too short.
On Fri, Jun 19, 2015 at 2:40 PM, Mitch Gitman wrote:
> Fabien, thanks for the reply. We do have Thrift enabled. From what I can
> tell
I'm reviving this thread because I'm looking for a non-hacky way to migrate
data from one cluster to another using nodetool snapshot and sstableloader
without having to preserve dropped columns in the new schema. In my view,
that's just cruft and confusion that keeps building.
The best idea I can
matter. You define which fields in the resulting file are loaded into which
> columns. You also won’t have the limitations and slowness of COPY TO/FROM.
>
> Sean Durity
>
> *From:* Mitch Gitman
> *Sent:* Friday, July 24, 2020 2:22 PM
> *To:* us
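The reply reads like a recommendation for a field-mapping export/load tool;
assuming something like DataStax Bulk Loader (dsbulk), which is a guess on my
part rather than anything named above, the round trip might look roughly like
this (keyspace, table, and mapping are placeholders):

# unload from the old cluster, then load into the new one with an explicit
# field-to-column mapping, leaving out any dropped columns
dsbulk unload -k old_ks -t my_table -url ./export
dsbulk load -k new_ks -t my_table -url ./export -m '0=id,1=value'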
With all the years I've been working with Cassandra, I'm embarrassed that I
have to ask this question.
We have some tables that are taking longer to repair than we're comfortable
with. We're on Cassandra 3.11, so we have to run full repairs as opposed to
incremental repairs, which to my understand
they're not part of the
application read path.
Thanks. Not having to do these repairs on a regular basis is a big win for
us.
On Thu, Nov 5, 2020 at 11:33 AM Jeff Jirsa wrote:
>
> > On Nov 5, 2020, at 10:18 AM, Mitch Gitman wrote:
> >
> Hi!
>
We're running Cassandra 3.11.6 on AWS EC2 instances. These clusters have
been running for a few years.
We're suddenly noticing that on one of our clusters the nodetool
command is failing on certain nodes but not on others.
The failure:
nodetool: Failed to connect to '...:7199' - SecurityException
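For reference, when JMX authentication is enabled on a node, nodetool has to be
handed credentials explicitly; a hedged sketch (user, password, and host are
placeholders):

# only applies if JMX auth is turned on for that node
nodetool -h 127.0.0.1 -u jmx_user -pw 'jmx_password' status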
upgrade schedule that your colleague is working on may well
>> be relevant. Is your entire cluster on 3.11.6 or are the failing hosts
>> possibly on a newer version?
>>
>> Abe
>>
>> On Feb 26, 2023, at 10:38, Mitch Gitman wrote:
>>
>> We