thank you
On Thu, Jul 16, 2020 at 12:29 PM Alex Ott wrote:

look into a series of the blog posts that I sent, I think that it should
be in the 4th post
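The gist there: unload with a custom query that also selects writetime()
and ttl() for the non-key columns, then load back with a query that
re-applies them via USING TIMESTAMP and TTL. A rough, untested sketch
(ks.tbl, pk, and val are placeholder names):

  # unload, capturing writetime and TTL of each non-key column
  dsbulk unload \
    -query "SELECT pk, val, writetime(val) AS w_val, ttl(val) AS t_val FROM ks.tbl" \
    -url /tmp/export

  # load back, re-applying the original timestamp and TTL
  # (rows that never had a TTL export t_val as null; check how your
  # Cassandra version treats a null TTL binding)
  dsbulk load \
    -query "INSERT INTO ks.tbl (pk, val) VALUES (:pk, :val) USING TIMESTAMP :w_val AND TTL :t_val" \
    -url /tmp/export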
On Thu, Jul 16, 2020 at 8:27 PM Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote:

okay, is there a way to export the TTL using CQLsh or DSBulk?
On Thu, Jul 16, 2020 at 11:20 AM Alex Ott wrote:

if you didn't export the TTL explicitly, and didn't load it back, then
you'll get non-expiring data.
On Thu, Jul 16, 2020 at 7:48 PM Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote:

I tried to verify the metadata. In the case of writetime, it is set to the
insert time, but the TTL value is showing as null. Is this expected? Does
this mean this record will never expire after the insert?
Is there any alternative to preserve the TTL?
In the new table, the data was inserted with cqlsh and DSBulk.

thank you
On Wed, Jul 15, 2020 at 1:11 PM Russell Spitzer wrote:

Alex is referring to the "writetime" and "ttl" values for each cell. Most
tools copy via CQL writes and don't, by default, copy those previous
writetime and ttl values; instead they just assign a new writetime that
matches the copy time rather than the initial insert time.
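A quick way to check what actually landed after a copy is to read the cell
metadata directly, e.g. (ks.tbl, pk, and val again placeholders; a null
ttl() means the cell has no expiration set):

  cqlsh -e "SELECT pk, val, writetime(val), ttl(val) FROM ks.tbl LIMIT 10;"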
Hello Alex,

> - use DSBulk - it's a very effective tool for unloading & loading data
> from/to Cassandra/DSE. Use zstd compression for offloaded data to save
> disk space (see blog links below for more details). But the *preserving
> metadata* could be a problem.

What exactly do you mean here by *preserving metadata*?
Thank you for the suggestions

On Tue, Jul 14, 2020 at 1:42 AM Alex Ott wrote:

CQLSH definitely won't work for that amount of data, so you need to use
other tools.

But before selecting them, you need to define requirements. For example:

1. Are you copying the data into tables with exactly the same structure?
2. Do you need to preserve metadata, like writetime & TTL?
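To illustrate the difference at scale (ks.tbl is a placeholder; the zstd
compression flag is available in recent DSBulk versions, so verify against
yours):

  # cqlsh COPY runs in a single client process and struggles with large tables
  cqlsh -e "COPY ks.tbl TO 'tbl.csv' WITH HEADER = true;"

  # DSBulk unloads in parallel and can compress the output to save disk space
  dsbulk unload -k ks -t tbl -url ./tbl_export --connector.csv.compression zstd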
I wouldn't say it's a good approach for that size. But you can try the
dsbulk approach too. Try to split the output into multiple files.

Best Regards,
Kiran M K
On Tue, Jul 14, 2020, 5:17 AM Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote:

Hello,

I would like to copy some data from one Cassandra cluster to another.