Using COPY ... TO you can export with the DELIMITER option; does that help?
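For example, a minimal sketch in cqlsh (keyspace, table, and file path are placeholders) exporting with a pipe delimiter and a header row:

    COPY my_keyspace.my_table TO '/tmp/my_table.csv' WITH DELIMITER='|' AND HEADER=true;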
> On Aug 15, 2017, at 9:01 PM, Harikrishnan A wrote:
>
> Thank you all
>
> Regards,
> Hari
Thank you all
Regards, Hari
On Tuesday, August 15, 2017 12:55 AM, Erick Ramirez wrote:
+1 to Jim and Tobin. cqlsh wasn't designed for what you're trying to achieve.
Cheers!
On Tue, Aug 15, 2017 at 1:34 AM, Tobin Landricombe wrote:
Can't change the delimiter (I'm on cqlsh 5.0.1). Best I can offer is
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlshExpand.html
I agree with Jeff, it’s not necessary to launch a new cluster for this
operation.
> On Aug 15, 2017, at 7:39 PM, Jeff Jirsa wrote:
Or just alter the keyspace replication strategy and remove the DSE-specific
strategies in favor of NetworkTopologyStrategy.
--
Jeff Jirsa
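For example, a minimal sketch of that change (keyspace name, DC name, and replication factor are placeholders; run a repair afterwards so the data matches the new strategy):

    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};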
> On Aug 15, 2017, at 7:26 PM, Erick Ramirez wrote:
Ioannis, it's not a straightforward process to migrate from DSE to COSS.
There are some parts of DSE which are not recognised by COSS, e.g. the
EverywhereStrategy replication strategy is only known to DSE.
You are better off standing up a new COSS 3.11 cluster and restoring app
keyspaces to the new cluster. Cheers!
Myron, it just means that while the node was down, one of the tables got
dropped. When you eventually brought the node back online and the commit
logs were getting replayed, it tried to replay a mutation for a table which
no longer exists. Cheers!
On Wed, Aug 16, 2017 at 5:56 AM, Myron A. Semack
Fay, it's normal to need to increase the max heap for sstableloader if you
have large data sets. Cheers!
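A minimal sketch of one way to do that (addresses and paths are placeholders; whether the bin/sstableloader wrapper honors the MAX_HEAP_SIZE environment variable depends on your version, so check the script first):

    # assumes bin/sstableloader picks up MAX_HEAP_SIZE; otherwise edit the -Xmx value in the script itself
    MAX_HEAP_SIZE=8G bin/sstableloader -d 10.0.0.1,10.0.0.2 /backups/my_keyspace/my_table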
On Wed, Aug 16, 2017 at 1:18 AM, Fay Hou [Storage Service] <fay...@coupang.com> wrote:
> We do snapshot and sstableloader. with sstableloader, it is ok to have
> different configuration of
Have you tried tracing (TRACING ON) the query? That would usually give you
clues as to where it's failing. Cheers!
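A minimal sketch in cqlsh (keyspace, table, and query are illustrative):

    TRACING ON;
    SELECT * FROM my_keyspace.my_table WHERE id = 0;
    TRACING OFF;

The trace lists each step with its source node and elapsed time, which usually shows where the query goes wrong.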
On Wed, Aug 16, 2017 at 12:03 AM, Vladimir Yudovin wrote:
Haven't done it for 5.1, but it went smoothly for earlier versions. If you're
not using any of the additional features of DSE, it should be OK. Just
change any custom replication strategies before migrating and also make
sure your yaml options are compatible.
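One quick way to spot custom strategies before migrating, a sketch assuming a 3.x system_schema keyspace:

    SELECT keyspace_name, replication FROM system_schema.keyspaces;

Any keyspace reporting a DSE-only class (e.g. EverywhereStrategy) needs to be switched to NetworkTopologyStrategy (or SimpleStrategy) before the cut-over.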
What does nodetool describecluster show?
A stab in the dark, but you could try nodetool resetlocalschema or a rolling
restart of the cluster if it's a schema issue.
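Illustrative commands for that check:

    nodetool describecluster     # all nodes should report the same schema version
    nodetool resetlocalschema    # run on a node whose schema version disagrees; it re-pulls the schema from peers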
Hi all,
We have set up a new cluster, DSE 5.1.2 (with Cassandra 3.11.0.1758), and we
want to migrate it to Apache Cassandra 3.11.0 without losing schema or
data.
Has anybody done it before?
Obviously we are going to test this, but it would be nice to hear if
somebody else has gone through with this.
We have a Cassandra 2.2.10 cluster of 9 nodes, hosted in AWS. One of the nodes
had a problem where it ran out of space on its root volume (NOT the Cassandra
volume, which holds the Cassandra data and commit logs). I freed up space on
the root volume and restarted the node.
We do snapshot and sstableloader. With sstableloader, it is OK for the
"stand by" cluster to have a different configuration (i.e. number of nodes).
However, there is an issue we ran into with sstableloader
(java.lang.OutOfMemoryError: GC overhead limit exceeded):
https://issues.apache.org/jira/
SASI is still experimental and has had many major problems, for example:
"nodetool repair breaks SASI index"
https://issues.apache.org/jira/browse/CASSANDRA-13403
"OOM when using SASI index"
https://issues.apache.org/jira/browse/CASSANDRA-12662
I would not use SASI indexes in production.
Fay
On T
Hi,
I recently encountered a strange issue.
Assume there is a table:
id PRIMARY KEY
indexed text
column text
CREATE CUSTOM INDEX ON table(indexed) USING '...SASIIndex'
I inserted a row like id=0, indexed='string1', column='just string'.
When I did SELECT * FROM table WHERE id=0 AND
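The message is cut off above; a reconstruction of the described schema follows (keyspace/table names, the id type, and the final query are guesses from the description, and the elided index class would be org.apache.cassandra.index.sasi.SASIIndex):

    CREATE TABLE ks.t (
        id int PRIMARY KEY,
        indexed text,
        column text
    );
    CREATE CUSTOM INDEX ON ks.t (indexed)
        USING 'org.apache.cassandra.index.sasi.SASIIndex';
    INSERT INTO ks.t (id, indexed, column) VALUES (0, 'string1', 'just string');
    SELECT * FROM ks.t WHERE id = 0 AND indexed = 'string1';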
Mutations get dropped because a node can't keep up with writes. If you
understand the Cassandra write path, writes are ACKed once the mutation is
appended to the commitlog and applied to the memtable, which is why it's very fast.
Knowing that, dropped mutations mean that the disk is not able to keep up
with the IO. Another wo
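A couple of illustrative checks for that:

    nodetool tpstats     # the "Dropped" section counts dropped MUTATION messages
    iostat -x 1          # sustained high utilisation/await on the data disk points at IO saturation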
Two common causes of interrupted streams are (a) network interruptions, or
(b) nodes becoming unresponsive, e.g. a GC pause during high load.
As far as the network is concerned, is there a firewall in the middle? If so,
it's quite common for firewalls to close sockets when they think the
connection is idle.
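If a firewall is killing idle sockets, one common mitigation (a sketch; streaming_socket_timeout_in_ms exists in the 2.x/3.x-era cassandra.yaml, so check your version and pick a value that suits you) is to make sure a streaming socket timeout is set, so a dead connection is detected and fails fast instead of hanging indefinitely:

    # cassandra.yaml, value in milliseconds (24 hours shown here)
    streaming_socket_timeout_in_ms: 86400000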
Not sure if these are what Jeff was referring to, but as a workaround you
can configure the following STCS compaction subproperties:
- min_threshold - set to 2 so that only a minimum of 2 similar-sized
sstables are required to trigger a minor compaction instead of the default 4
- tombstone_threshold - lower it from the default 0.2 so that sstables with a
high ratio of droppable tombstones get compacted on their own sooner
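A sketch of setting those subproperties (keyspace/table names and values are placeholders):

    ALTER TABLE my_keyspace.my_table WITH compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'min_threshold': 2,
        'tombstone_threshold': 0.1
    };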
A slight variation on Ben Slater's idea is to build the second cluster
like-for-like and assign the same tokens used by the original nodes. If
you restore the data onto the equivalent nodes with the same tokens, the
data will be accessible as normal. Cheers!
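An illustrative way to carry the tokens over (exact output format may differ by version; the yaml value is a placeholder):

    nodetool info --tokens    # on each original node, note its tokens

    # cassandra.yaml on the matching new node, before first start:
    initial_token: <comma-separated tokens copied from the original node>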
On Tue, Aug 8, 2017 at 7:08 AM, Robe
+1 to Jim and Tobin. cqlsh wasn't designed for what you're trying to
achieve. Cheers!
On Tue, Aug 15, 2017 at 1:34 AM, Tobin Landricombe wrote:
> Can't change the delimiter (I'm on cqlsh 5.0.1). Best I can offer is
> https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlshExpand.html
1) You should not perform any streaming operations (repair, bootstrap,
decommission) in the middle of an upgrade. Note that an upgrade is not
complete until you have completed upgradesstables on all nodes in the
cluster.
2) No streaming involved with writes so it's not an issue.
3) It doesn't mat
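For point 1, an illustrative per-node sequence once you are ready to upgrade that node:

    nodetool drain            # flush and stop accepting writes before shutting down the old version
    # stop Cassandra, upgrade the binaries, start Cassandra
    nodetool upgradesstables  # rewrite sstables in the new format once the node is back up

Only after every node is upgraded and upgradesstables has completed everywhere should repairs, bootstraps, or decommissions resume.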
I would discourage dropping to RF=2 because if you're using CL=*QUORUM, it
won't be able to tolerate a node outage (with RF=2, a quorum is still 2
replicas, so both must be available).
You mentioned a couple of days ago that there's an index file that is
corrupted on 10.40.17.114. Could you try moving out the sstable set
associated with that corrupt file and trying again?
Check what you have set for memtable_cleanup_threshold; if it's set too
low, more flushing is triggered. Cheers!
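For reference, a sketch of the relevant cassandra.yaml settings (values are placeholders; the default cleanup threshold is derived from the flush writer count, roughly 1 / (memtable_flush_writers + 1), so check your version's documentation):

    memtable_flush_writers: 2
    # memtable_cleanup_threshold: 0.11   # leaving this commented out keeps the computed default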
On Sat, Aug 12, 2017 at 5:05 AM, ZAIDI, ASAD A wrote:
> Hello Folks,
>
> I’m using Cassandra 2.2 on a 14-node cluster.
>
> Nowadays, I’m observing memtablepostflush