I am getting the above error when I attempt to activate a baseline Ignite
cluster. I have stopped and started the nodes in the cluster, and I have
attempted to deactivate the cluster and reactivate it.
/home/ubuntu/apache-ignite-fabric-2.6.0-bin$ ./bin/control.sh --baseline
Control utility [ver. 2.6
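For reference, the cluster-state commands that ship with the Ignite 2.6 control utility can be sketched as follows (paths assume the default binary distribution layout, and the commands must be run against a reachable cluster):

```shell
# Show the current baseline topology:
./bin/control.sh --baseline

# Activate the cluster (required before persistence-backed caches are usable):
./bin/control.sh --activate

# Deactivate the cluster:
./bin/control.sh --deactivate

# Set the baseline to the server nodes of a given topology version:
./bin/control.sh --baseline version <topologyVersion>
```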
as you tried to do on client.
>
> As I have already said, when client joins, if cache is already started on
> cluster then client configuration will be thrown away and one received from
> server will be used instead.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 1
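The behavior described above (a client's cache configuration being discarded when the cache already exists on the cluster) can be illustrated with a rough Scala sketch; the cache name and config path are placeholders, not from the original thread:

```scala
import org.apache.ignite.Ignition
import org.apache.ignite.configuration.CacheConfiguration

// Join the cluster as a client node (placeholder Spring config path).
Ignition.setClientMode(true)
val ignite = Ignition.start("client-config.xml")

// Even if this configuration differs from the server's, getOrCreateCache
// returns the already-started cache with the server-side settings; the
// client-side CacheConfiguration is effectively ignored.
val cfg = new CacheConfiguration[String, String]("myCache")
val cache = ignite.getOrCreateCache(cfg)
```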
> ...the name is
> used).
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, 12 Feb 2019 at 06:39, Max Barrios :
>> Hi,
>>
>> This is the code fragment from MyIgniteTest.scala:
>>
>> val cassandraDataSource = new DataSource
>> c
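The Scala fragment above is cut off after the `DataSource` constructor; a minimal sketch of what such a setup typically looks like, using the `ignite-cassandra` module's data source class (contact points and credentials are placeholders, not the poster's actual values):

```scala
import org.apache.ignite.cache.store.cassandra.datasource.DataSource

// Placeholder connection details for illustration only.
val cassandraDataSource = new DataSource
cassandraDataSource.setContactPoints("10.0.0.1,10.0.0.2")
cassandraDataSource.setUser("cassandra")
cassandraDataSource.setPassword("cassandra")
```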
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 8 Feb 2019 at 23:01, Max Barrios :
>
>> Hi,
>>
>> Stack trace is below:
>>
>> [error] 19/02/07 22:17:14 ERROR GridDhtPartitionsExchangeFuture: Failed to
>> reinitialize local partitions (preloading will be stopped)
Thanks!
Max
On Fri, 8 Feb 2019 at 01:26, Ilya Kasnacheev wrote:
> Hello!
>
> Can you provide stack trace of the exception?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 8 Feb 2019 at 01:33, Max Barrios :
>
FYI I am using Ignite Fabric 2.6.0. We use Spark 2.3.2 / Hadoop 2.8.5 so
upgrading to Ignite 2.7.0 means upgrading Spark/Hadoop.
On Thu, 7 Feb 2019 at 11:58, Max Barrios wrote:
> As far as I know, yes. Where would/should I look to check?
>
> Also, do I need to copy the ignite
> bean/app context should not be checked when you do.
> Are you sure that it is the only cassandra cache store factory that gets
> used in actual caches?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 7 Feb 2019 at 05:59, Max Barrios :
>
I am running an application written in Scala that uses Spark and Ignite and
persists from a write-through cache to Cassandra. Following the Ignite base
concepts example code, you are supposed to set the CacheStoreFactory as such:
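The code sample referenced here did not survive the archive; a representative sketch of wiring a Cassandra cache store factory in Scala (type parameters, the cache name, and the `cassandraDataSource`/`persistenceSettings` values are assumptions, defined elsewhere in the poster's app):

```scala
import org.apache.ignite.configuration.CacheConfiguration
import org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory

// Hypothetical cache name and key/value types for illustration.
val cacheCfg = new CacheConfiguration[java.lang.Long, String]("personCache")
cacheCfg.setWriteThrough(true)

val storeFactory = new CassandraCacheStoreFactory[java.lang.Long, String]
storeFactory.setDataSource(cassandraDataSource)          // DataSource built earlier
storeFactory.setPersistenceSettings(persistenceSettings) // KeyValuePersistenceSettings
cacheCfg.setCacheStoreFactory(storeFactory)
```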
Yet, when I run the application I
> Does it work if you replace `ref` with just a value?
> Like
>
>
>
>
> Stan
>
> From: Max Barrios
> Sent: 12 December 2018, 23:51
> To: user@ignite.apache.org
> Subject: Amazon S3 Based Discovery NOT USING BasicAWSCredentials
>
> I am running Apache Ignite 2.6.0
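The XML Stan pasted did not survive extraction; generically, the `ref`-versus-value distinction in a Spring configuration looks like the following (the property name, bean class, and key values are illustrative, not from the original message):

```xml
<!-- Referencing a separately defined bean: -->
<property name="awsCredentials" ref="awsCredsBean"/>

<!-- Versus inlining the bean directly as a value: -->
<property name="awsCredentials">
    <bean class="com.amazonaws.auth.BasicAWSCredentials">
        <constructor-arg value="YOUR_ACCESS_KEY"/>
        <constructor-arg value="YOUR_SECRET_KEY"/>
    </bean>
</property>
```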
https://issues.apache.org/jira/browse/IGNITE-4530.
I have a Spark 2.2.0 app that writes an RDD to Ignite 2.6.0. It *works* in
local Spark (2.2.0) mode, accessing a remote Ignite 2.6.0 cluster.
In my ignite.xml, I am specifying AWS S3-based discovery, as my Ignite
cluster is running in AWS.
When I deploy this working-in-local-mode jar to a Spark 2.
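For context, an S3-based discovery section of ignite.xml typically looks like the following sketch (the bucket name and credentials bean are placeholders; the bucket must already exist):

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
                <!-- Placeholder bucket name. -->
                <property name="bucketName" value="my-ignite-discovery-bucket"/>
                <!-- References a credentials bean defined elsewhere. -->
                <property name="awsCredentials" ref="awsCredsBean"/>
            </bean>
        </property>
    </bean>
</property>
```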
I am running Apache Ignite 2.6.0 in AWS and am using S3-based discovery.
However, I DO NOT want to embed AWS access or secret keys in my ignite.xml.
My instances have the AWS EC2 Instance Metadata Service enabled, so the
creds can be loaded from there.
However, there's no guidance or documentation
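Assuming an Ignite release whose S3 IP finder exposes an `awsCredentialsProvider` property (this setter is not available in every version; the JIRA ticket mentioned elsewhere in this thread tracks related work), instance-profile credentials could be wired roughly like this:

```xml
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
    <!-- Assumes a release that supports the awsCredentialsProvider property. -->
    <property name="awsCredentialsProvider">
        <!-- Pulls credentials from the EC2 Instance Metadata Service. -->
        <bean class="com.amazonaws.auth.InstanceProfileCredentialsProvider">
            <constructor-arg value="false"/> <!-- no async refresh thread -->
        </bean>
    </property>
    <!-- Placeholder bucket name. -->
    <property name="bucketName" value="my-ignite-discovery-bucket"/>
</bean>
```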
Is there any way to restart Apache Ignite nodes via the command line? I want to
* load new configurations
* shutdown the cluster to upgrade my VM instances
* do general maintenance
and just can't find any documentation that shows me how to do this via the
command line. Similar devops actions with other
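The binary distribution has no single restart command; a common sequence, sketched here with placeholder paths and assuming a persistence-enabled cluster, is:

```shell
# Deactivate first so checkpoints complete cleanly:
./bin/control.sh --deactivate

# Stop a node: there is no dedicated "stop" command; the usual approach is to
# terminate the JVM process (SIGTERM triggers Ignite's graceful shutdown hooks).
kill <ignite-pid>   # placeholder pid

# Start again, optionally pointing at an updated configuration:
./bin/ignite.sh config/my-ignite.xml

# Reactivate once all baseline nodes have rejoined:
./bin/control.sh --activate
```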