Great! Thank you for the answer and the link!
> On 4 Sep 2021, at 11:35, Erick Ramirez wrote:
>
> It isn't possible to change the tokens on a node once it is already part of
> the cluster. Cassandra won't allow you to do it because it will make the data
> already on disk unreadable. You'll ne
Hi,
We are currently running Cassandra 3.11.11 with the default value for
num_tokens: 256.
We want to migrate to Cassandra 4.0.0, which has the default value for
num_tokens set to 16.
Is it safe to migrate with the default values, i.e. can I leave it set to 16
when migrating to Cassandra 4.0.0 or
Did you think about using a Materialised View to generate what you want to
keep, and then use DSBulk to extract the data?
> On 17 Jan 2020, at 14:30 , adrien ruffie wrote:
>
> Sorry I come back to a quick question about the bulk loader ...
>
> https://www.datastax.com/blog/2018/05/introducing-
Thank you for your answer Kai.
On 17 Aug 2016, at 11:34, Kai Wang <dep...@gmail.com> wrote:
yes, you are correct.
On Tue, Aug 16, 2016 at 2:37 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
I’m using Cassandra 3.7.
In the documentation for sst
Hi,
I’m using Cassandra 3.7.
In the documentation for sstableloader I read the following:
<< Note: To get the best throughput from SSTable loading, you can use multiple
instances of sstableloader to stream across multiple machines. No hard limit
exists on the number of SSTables that sstableloa
To be clear, with the new tick-tock release scheme, 3.5 is designed to be a
stable release.
-- Jack Krupansky
On Thu, Apr 14, 2016 at 3:23 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
Could someone give his opinion on this?
What should be considered more stabl
Hi,
Could someone give his opinion on this?
What should be considered more stable, Cassandra 3.0.5 or Cassandra 3.5?
Thank you
Jean
> On 12 Apr 2016, at 07:00, Jean Tremblay wrote:
>
> Hi,
> Which version of Cassandra should be considered most stable in version 3?
> I s
Hi,
Which version of Cassandra should be considered most stable in version 3?
I see two main branches: the branch with version 3.0.* and the tick-tock one
3.*.*.
So basically my question is: which one is more stable, version 3.0.5 or version
3.3?
I know odd versions in tick-tock are bug fixes.
the brute force test and
Cassandra never logged any warnings.
Is this a valid test?
Ralf
On 24.03.2016, at 10:46, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Ralf,
Are you using protocol V4?
How do you measure if a tombstone was generated?
Thanks
Jean
Ralf,
Are you using protocol V4?
How do you measure if a tombstone was generated?
Thanks
Jean
On 24 Mar 2016, at 10:35, Ralf Steppacher <ralf.viva...@gmail.com> wrote:
How does this improvement apply to inserting JSON? The prepared statement has
exactly one parameter and it is always
Same for me. Only inserts not replacing old records.
On 24 Mar 2016, at 07:42, Ralf Steppacher <ralf.viva...@gmail.com> wrote:
Eric,
I am writing the whole record in a single INSERT INTO ... JSON. I am not
"insert-updating" over an existing record nor do I run any UPDATE statements.
R
Hi,
I also have loads of tombstones while only inserting new rows without ever
deleting rows.
My rows contain null columns and also collections.
How can I avoid the creation of these tombstones?
Thanks for your help.
On 24 Mar 2016, at 02:08, Steve Robenalt <sroben...@highwire.org> wr
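One way to avoid the tombstones described above is to never write nulls in the
first place: build each INSERT with only the columns that actually have values.
A minimal sketch, assuming the DataStax Java driver 2.1+ QueryBuilder and a
hypothetical keyspace ks_x with table my_table (neither appears in the posts):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.querybuilder.Insert;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class NullFreeInsert {
    // Columns left out of the INSERT are simply not written,
    // so they never produce tombstones.
    static Insert buildInsert(String key, String name, String email) {
        Insert insert = QueryBuilder.insertInto("ks_x", "my_table")
                .value("key", key);
        if (name != null) {
            insert.value("name", name);
        }
        if (email != null) {
            insert.value("email", email);
        }
        return insert;
    }

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // email is null here, so the "email" column is omitted entirely.
        session.execute(buildInsert("key-1", "Jean", null));
        cluster.close();
    }
}

The trade-off is that these statements are not prepared; for very high write
rates one can instead prepare one statement per combination of present columns.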
that there is a new
key space.
Thanks again for your feedback
Jean
On 27 Jan 2016, at 19:58, Robert Coli <rc...@eventbrite.com> wrote:
On Wed, Jan 27, 2016 at 6:49 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Since it takes me 2 days to load my
Hi,
I have a huge set of data, which takes about 2 days to bulk load on a Cassandra
3.0 cluster of 5 nodes. That is about 13 billion rows.
Quite often I need to reload this data, either for a new structure or because the data is reorganised.
There are clients reading from a given keyspace (KS-X).
Since it takes me 2 d
On Fri, Jan 15, 2016 at 4:00 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Thank you Sebastián f
On Thu, Jan 14, 2016 at 1:00 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
How can I restart?
It blocks with the error listed below.
Are my memory sett
r node and
perhaps what your schema looks like?
Thanks
On Thu, Jan 14, 2016 at 12:24 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Ok, I will open a ticket.
How could I restart my cluster without losing everything?
Would there be a better memory configuration to s
<ty...@datastax.com> wrote:
I don't think that's a known issue. Can you open a ticket at
https://issues.apache.org/jira/browse/CASSANDRA and attach your schema along
with the commitlog files and the mutation that was saved to /tmp?
On Thu, Jan 14, 2016 at 10:56 AM, Jean T
Hi,
I have a small Cassandra cluster with 5 nodes, having 16 GB of RAM.
I use Cassandra 3.1.1.
I use the following setup for the memory:
MAX_HEAP_SIZE="6G"
HEAP_NEWSIZE="496M"
I have been loading a lot of data in this cluster over the last 24 hours. The
system behaved I think very nicely. It wa
ady. Note however that
Cassandra 2.2 and 3.0 are quite recent and most companies AFAICT do not
consider them yet as production-ready.
Hope that helps,
Alexandre
On Tue, Dec 22, 2015 at 4:40 PM Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
Which Java Driver is sui
Hi,
Which Java Driver is suited for Cassandra 2.2.x?
I see datastax 3.0.0 beta1 and datastax 2.2.0 rc3...
Are they suited for production?
Is there anything better?
Thanks for your comments and replies.
Jean
I have the same problem.
When I bulk load my data, I have a problem with the Cassandra DataStax driver:
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>2.1.4</version>
</dependency>
With version 2.1.6 and also with version 2.1.7.1 I have lost records with no
error message whatsoever.
With version 2.1.4 I have no missing
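Silent record loss during a bulk load can also come from the client side, when
asynchronous writes are fired and their futures are never inspected, so failed
writes are simply dropped. A minimal sketch, assuming the DataStax Java driver
2.1.x and a hypothetical keyspace ks_x with table my_table, that waits on every
future so any error actually surfaces:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import java.util.ArrayList;
import java.util.List;

public class CheckedBulkLoad {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks_x");
        PreparedStatement ps = session.prepare(
                "INSERT INTO my_table (key, value) VALUES (?, ?)");

        List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
        for (int i = 0; i < 100000; i++) {
            futures.add(session.executeAsync(ps.bind("key-" + i, "value-" + i)));
        }
        // Wait for every write; a failed insert now throws here
        // instead of disappearing without a trace.
        for (ResultSetFuture f : futures) {
            f.getUninterruptibly();
        }
        cluster.close();
    }
}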
n both clusters.
Again, Alain, thanks for your help.
Kind regards
Jean
Anyway, see if you can give us more info related to this.
C*heers,
Alain
2015-08-18 14:40 GMT+02:00 Jean Tremblay <jean.tremb...@zen-innovations.com>:
No. I did not try.
I would like to understand what i
nodes, perhaps one at a time?
On Tue, Aug 18, 2015 at 3:59 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
I have a phenomenon I cannot explain, and I would like to understand.
I’m running Cassandra 2.1.8 on a cluster of 5 nodes.
I’m using replication factor 3, with
Hi,
I have a phenomenon I cannot explain, and I would like to understand.
I’m running Cassandra 2.1.8 on a cluster of 5 nodes.
I’m using replication factor 3, with most default settings.
Last week I ran nodetool status, which gave me on each node a load of about
200 GB.
Since then there was no
When you do a nodetool command and you don’t specify a hostname, it sends the
requests via JMX to the localhost node. If that node is down then the command
will not succeed.
In your case you are probably running the command from a machine which does not
have Cassandra running; in that case you need to s
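nodetool is essentially a thin JMX client, which is why it must talk to a host
that actually runs Cassandra. A minimal sketch of the same kind of check done
directly over JMX, assuming the default JMX port 7199 and the StorageService
MBean's LiveNodes attribute; the host address is a placeholder:

import java.util.List;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LiveNodesCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "192.168.2.100"; // placeholder node
        // The same endpoint nodetool uses when you pass -h <host>.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");
            List<?> liveNodes = (List<?>) mbs.getAttribute(storageService, "LiveNodes");
            System.out.println("Live nodes reported by " + host + ": " + liveNodes);
        } finally {
            connector.close();
        }
    }
}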
hat. Another solution would be to "replace" the node -->
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_replace_node_t.html
C*heers,
Alain
2015-06-25 17:07 GMT+02:00 Jean Tremblay <jean.tremb...@zen-innovations.com>:
Hi,
I am testing snapshot rest
Hi,
I am testing snapshot restore procedures in case of a major catastrophe on our
cluster. I’m using Cassandra 2.1.7 with RF:3
The scenario that I am trying to solve is how to quickly get one node back to
work after its disk failed and lost all its data assuming that the only thing I
have is
No, I did not.
On 24 Jun 2015, at 06:05, Jason Wee <peich...@gmail.com> wrote:
on the node 192.168.2.100, did you run repair after its status is UN?
On Wed, Jun 24, 2015 at 2:46 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Dear Alain,
Thank you fo
like the whole cluster is paralysed" --> what does that mean? Please be more
precise about this.
You should tell us what actions were taken before this occurred and what is not
working now, since a C* cluster in this state could run perfectly well. No SPOF.
C*heers
2015-06-23 16:10 GMT+02:00 Jean Tre
is paralysed.
The only solution I see is to temporarily remove that node.
Thanks for your comments.
On 23 Jun 2015, at 12:45, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
I have a cluster with 5 nodes running Cassandra 2.1.6.
I had to reboot a node. I kill
o I've opened
https://issues.apache.org/jira/browse/CASSANDRA-9636 to fix it.
Thanks,
Sam
On Tue, Jun 23, 2015 at 1:52 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi Sam,
You have a really good gut feeling.
I went to look at the query that I have been using for many months…
wrote:
Can you share the query that you're executing when you see the error and the
schema of the target table? It could be something related to CASSANDRA-9532.
On Tue, Jun 23, 2015 at 10:05 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
I’m using Datas
Hi,
I have a cluster with 5 nodes running Cassandra 2.1.6.
I had to reboot a node. I killed the cassandra process on that node. Rebooted
the machine, and restarted Cassandra.
~/apache-cassandra-DATA/data:321>nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State
Hi,
I’m using Datastax Java Driver V 2.1.6
I migrated my cluster to Cassandra V2.1.7
And now I have an error on my client that goes like:
2015-06-23 10:49:11.914 WARN 20955 --- [ I/O worker #14]
com.datastax.driver.core.RequestHandler : /192.168.2.201:9042 replied with
server error (java.lang
etz
On Mon, Jun 22, 2015 at 8:40 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
What is the best way to see if a repair is finished? Is there a JMX object or
is there a command to see if a repair is finished?
What happens if by mistake an operator starts a repair b
Hi,
What is the best way to see if a repair is finished? Is there a JMX object or
is there a command to see if a repair is finished?
What happens if by mistake an operator starts a repair before the previous one
has finished? Will they execute one after the other or at the same time?
T
pactions will remove tombstones
On Thu, Jun 18, 2015 at 11:46 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Perfect thank you.
So making a weekly "nodetool repair -pr” on all nodes one after the other will
repair my cluster. That is great.
If it does a compact
Hi,
I understand that we must repair the DB on a regular basis.
Now I also see that making a repair uses lots of resources in the cluster,
so I need to do this during the weekend because I really would like to have
high performance at least during the weekdays.
In the documentation I see th
1. Running repair will trigger compactions.
2. It will increase CPU utilization.
Run nodetool repair with the -pr option, so that it will repair only the range
that node is responsible for.
On Thu, Jun 18, 2015 at 10:50 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Thanks Jona
ll
manage it for you.
https://github.com/spotify/cassandra-reaper
On Thu, Jun 18, 2015 at 12:36 PM Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
I want to make repairs on my cluster on a regular basis, as suggested by the
documentation.
I want to do this in a wa
Hi,
I want to make repairs on my cluster on a regular basis, as suggested by the
documentation.
I want to do this in a way that the cluster still responds to read
requests.
So I understand that I should not use the -par switch for that, as it will do
the repair in parallel and consume all ava
Jun 15, 2015, at 10:50 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Dear all,
I identified a bit more closely the root cause of my missing data.
The problem is occurring when I use
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>2.1.6</version>
</dependency>
on my client against Cassandra 2
select * from TABLE
where token(key) > lastToken"
Thanks,
Bryan
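The token() scan quoted above can be driven from the Java driver as well. A
minimal sketch, assuming the DataStax Java driver 2.1.x, the default
Murmur3Partitioner (64-bit tokens), and a hypothetical table my_table with
partition key key:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class TokenScan {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks_x"); // hypothetical keyspace
        PreparedStatement ps = session.prepare(
                "SELECT key, token(key) FROM my_table WHERE token(key) > ? LIMIT 10000");

        // Murmur3 tokens range from -2^63 to 2^63 - 1; starting at MIN_VALUE
        // only skips a row whose token is exactly the minimum, which is unlikely.
        long lastToken = Long.MIN_VALUE;
        while (true) {
            ResultSet rs = session.execute(ps.bind(lastToken));
            int rows = 0;
            for (Row row : rs) {
                lastToken = row.getLong(1); // remember where this page ended
                rows++;
            }
            if (rows == 0) {
                break; // no rows beyond the last token: scan complete
            }
        }
        cluster.close();
    }
}

For a one-off scan, driver-side automatic paging (Statement.setFetchSize) is the
simpler option; the explicit token loop is mainly useful when the scan must be
resumable, as in the quoted suggestion.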
On Mon, Jun 15, 2015 at 12:50 PM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Dear all,
I identified a bit more closely the root cause of my missing data.
The problem is occurring when I use
com.d
.pythian.com
On Mon, Jun 15, 2015 at 10:54 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Hi,
I have reloaded the data in my cluster of 3 nodes RF: 2.
I have loaded about 2 billion rows in one table.
I use LeveledCompactionStrategy on my table.
I
ng is a really good idea but you also have to read a lot imho.
Good luck,
C*heers,
Alain
2015-06-15 11:13 GMT+02:00 Jean Tremblay <jean.tremb...@zen-innovations.com>:
Hi,
I have a cluster of 3 nodes RF: 2.
There are about 2 billion rows in one table.
I use LeveledCompactionStrateg
Hi,
I have a cluster of 3 nodes RF: 2.
There are about 2 billion rows in one table.
I use LeveledCompactionStrategy on my table.
I use version 2.1.6.
I use the default cassandra.yaml; only the IP address for seeds and the
throughput have been changed.
I have tested a scenario where one node crashe
Hi,
I have reloaded the data in my cluster of 3 nodes RF: 2.
I have loaded about 2 billion rows in one table.
I use LeveledCompactionStrategy on my table.
I use version 2.1.6.
I use the default cassandra.yaml; only the IP address for seeds and the
throughput have been changed.
I loaded my data with si
I have experienced similar results: OperationTimedOut after inserting many
millions of records on a 5-node cluster, using Cassandra 2.1.5.
I rolled back to 2.1.4 using exactly the same configuration as with 2.1.5,
and these timeouts went away… This is not the solution to your problem but just
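One general way to make such timeouts less likely, independent of the server
version, is to cap how many asynchronous writes are in flight at once. A minimal
sketch, assuming the DataStax Java driver 2.1.x (with its bundled Guava) and a
hypothetical keyspace ks_x with table my_table:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import java.util.concurrent.Semaphore;

public class ThrottledLoader {
    public static void main(String[] args) throws InterruptedException {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks_x");
        PreparedStatement ps = session.prepare(
                "INSERT INTO my_table (key, value) VALUES (?, ?)");

        final Semaphore inFlight = new Semaphore(256); // at most 256 concurrent writes
        for (int i = 0; i < 1000000; i++) {
            inFlight.acquire(); // block until a slot frees up
            ResultSetFuture future = session.executeAsync(ps.bind("key-" + i, "value-" + i));
            Futures.addCallback(future, new FutureCallback<ResultSet>() {
                @Override public void onSuccess(ResultSet rs) { inFlight.release(); }
                @Override public void onFailure(Throwable t) {
                    inFlight.release();
                    System.err.println("Write failed: " + t); // surface errors, don't lose them
                }
            });
        }
        inFlight.acquire(256); // drain: wait for the remaining writes to finish
        cluster.close();
    }
}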
; output from such.
Spend some time experimenting with those settings incrementally. Finding the
sweet spot, which is different for each workload, will make a huge difference in
overall performance.
On Thu, May 14, 2015 at 8:06 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
ferent for each workload will make a huge difference in
overall performance.
On Thu, May 14, 2015 at 8:06 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
>
> Hi,
>
> I’m using Cassandra 2.1.4 with a table using LeveledCompactionStrategy.
> Often I need
Hi,
I’m using Cassandra 2.1.4 with a table using LeveledCompactionStrategy.
Often I need to delete many rows and I want to make sure I don’t have too many
tombstones.
How does one get rid of tombstones in a table using LCS?
How can we monitor how many tombstones are around?
Thanks for your help
Hi,
Why does everyone say that Cassandra should not be used in production on Mac
OS X?
Why would this not work?
Is there anyone out there using OS X in production? What is your experience
with this?
Thanks
Jean
On Apr 5, 2015 1:40 AM, "Jean Tremblay" <jean.tremb...@zen-innovations.com> wrote:
Hi,
I have a cluster of 5 nodes. We use cassandra 2.1.3.
The 5 nodes use about 50-57% of the 1T SSD.
One node managed to compact all its data
Hi,
I have a cluster of 5 nodes. We use cassandra 2.1.3.
The 5 nodes use about 50-57% of the 1T SSD.
One node managed to compact all its data. During one compaction this node used
almost 100% of the drive. The other nodes refuse to continue compaction
claiming that there is not enough disk space