Sounds good.
Thanks for the explanation!
On Sun, Feb 18, 2018 at 5:15 PM, Rahul Singh
wrote:
> If you don’t have access to the file you don’t have access to the file.
> I’ve seen this issue several times. It’s the easiest low-hanging fruit to
> resolve. So figure it out and make sure that it’s C
If you don’t have access to the file, you don’t have access to the file. I’ve
seen this issue several times. It’s the easiest low-hanging fruit to resolve. So
figure it out and make sure ownership is cassandra.cassandra rather than root,
from the Data folder down, and either run as root or sudo it.
If it’s compacted
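In concrete terms, the fix being described might look like the following; the path is taken from the error later in the thread, and the target address is a placeholder:

```shell
# Hand ownership of the sstable directory back to the cassandra user
# (example path from this thread; adjust to your layout):
sudo chown -R cassandra:cassandra /data1/keyspace1/table1
# ...then run the loader either as root or via sudo:
sudo sstableloader -d <target_node_address> /data1/keyspace1/table1
```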
I'm not really sure which user I ran it as (root or cassandra), although I
don't understand why a permission issue would generate a FileNotFoundException?
And in general, what if a file is being streamed and gets compacted away before
the streaming ends? Does Cassandra know how to handle this?
Thanks!
Check permissions maybe? Who owns the files vs. who is running sstableloader.
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
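One quick way to make that check is to compare the file's owner against the current user. This is a sketch against a throwaway file; in practice you would point it at the real .db file:

```shell
# Create a stand-in file; swap in the real sstable path in practice.
f=$(mktemp)
# GNU stat first, BSD stat as a fallback:
owner=$(stat -c '%U' "$f" 2>/dev/null || stat -f '%Su' "$f")
me=$(id -un)
if [ "$owner" = "$me" ]; then
    echo "owner matches"
else
    echo "owner mismatch: file owned by $owner, running as $me"
fi
rm -f "$f"
```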
On Feb 18, 2018, 4:26 AM -0500, shalom sagges, wrote:
> Hi All,
>
> C* version 2.0.14.
>
> I was loading some data to another cluster using SSTableLoader. The stre
Hi All,
C* version 2.0.14.
I was loading some data to another cluster using SSTableLoader. The
streaming failed with the following error:
Streaming error occurred
java.lang.RuntimeException: java.io.FileNotFoundException:
/data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file
Hello,
It's about 2,500 sstables, worth 25 TB of data.
The -t parameter doesn't change anything; I tried both -t 1000 and -t 1.
Most probably I'm hitting some limitation on the target cluster.
I'm preparing to split sstables and run up to ten parallel sstableloader
sessions.
Regards,
Osman
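One way to sketch that split: distribute the sstables round-robin into N staging directories and point one sstableloader at each. The file names below are placeholders, and note that in a real run every component of an sstable (Index, Filter, Statistics, ...) has to travel with its Data file, and each staging directory has to follow the keyspace/table layout sstableloader expects:

```shell
# Round-robin the sstables into N staging directories, one loader per dir.
# NOTE: placeholder files are used here just to demonstrate the split;
# real sstables have companion files that must stay with their Data file.
N=10
src=$(mktemp -d)
for n in $(seq 1 25); do touch "$src/ks-tbl-jb-$n-Data.db"; done
i=0
for f in "$src"/*-Data.db; do
    d="$src/batch$((i % N))"
    mkdir -p "$d"
    ln -s "$f" "$d/"            # symlink, so nothing is copied
    i=$((i + 1))
done
ls -d "$src"/batch* | wc -l     # 10 batch directories
# for d in "$src"/batch*; do sstableloader -d <target_node> "$d" & done; wait
```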
On 11-10-2016 21:46, Rajath Subramanyam wrote:
How many sstables are you trying to load ? Running sstableloaders in
parallel will help. Did you try setting the "-t" parameter and see if you
are getting the expected throughput ?
- Rajath
Rajath Subramanyam
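For reference, the throttle is given in Mbit/s on the command line; a minimal sketch, with the target address and path as placeholders:

```shell
# Throttle this loader session to roughly 100 Mbit/s:
sstableloader -d 10.0.0.1 -t 100 /path/to/keyspace1/table1
```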
On Mon, Oct 10, 2016 at 2:02 PM, Osman YOZGATLIOGLU <
osman.y
Hello,
Thank you Adam and Rajath.
I'll split input sstables and run parallel jobs for each.
I tested this approach and ran 3 parallel sstableloader jobs without the -t
parameter.
I raised the stream_throughput_outbound_megabits_per_sec parameter from 200 to 600
Mbit/sec on all of the target nodes.
But each
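As a side note, that throughput cap can also be adjusted at runtime via nodetool, without restarting the node; the host below is a placeholder, and the yaml change is what makes it permanent:

```shell
# Runtime change (reverts on restart):
nodetool -h 10.0.0.1 setstreamthroughput 600
# Persistent equivalent in cassandra.yaml:
#   stream_throughput_outbound_megabits_per_sec: 600
```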
Hi Osman,
You cannot restart the streaming for just the failed nodes. You
can restart the sstableloader job itself. Compaction will eventually take
care of the redundant rows.
- Rajath
Rajath Subramanyam
On Sun, Oct 9, 2016 at 7:38 PM, Adam Hutson wrote:
It'll start over from the beginning.
On Sunday, October 9, 2016, Osman YOZGATLIOGLU <
osman.yozgatlio...@krontech.com> wrote:
> Hello,
>
> I have a running sstableloader job.
> Unfortunately some of the nodes restarted since the streaming began.
> I see streaming has stopped for those nodes.
> Can I restart
Hello,
I have a running sstableloader job.
Unfortunately some of the nodes have restarted since the streaming began.
I see that streaming has stopped for those nodes.
Can I restart that streaming somehow?
Or if I restart the sstableloader job, will it start from the beginning?
Regards,
Osman