Alexey Goncharuk created IGNITE-12490:
-
Summary: Service proxy throws "Service not found" exception right
after deploy
Key: IGNITE-12490
URL: https://issues.apache.org/jira/browse/IGNITE-12490
Project: Ignite
Well, this is exactly the case: the service is deployed from node A, the
proxy is created on node B, and a "service not found" exception gets thrown
to the user anyway. Perhaps the retry happens too fast?
Created a ticket [1].
[1] https://issues.apache.org/jira/browse/IGNITE-12490
Mon, Dec 23, 2019
Ivan Bessonov created IGNITE-12491:
--
Summary: Eliminate contention on ConcurrentHashMap.size()
Key: IGNITE-12491
URL: https://issues.apache.org/jira/browse/IGNITE-12491
Project: Ignite
Issue
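The contention IGNITE-12491 targets can often be avoided by not calling `ConcurrentHashMap.size()` on a hot path at all. A minimal sketch of one common mitigation (my illustration, not the actual Ignite fix): keep a striped `LongAdder` next to the map, so size reads never touch the map's internal counters.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Map wrapper that tracks its size in a striped LongAdder. */
public class CountedMap<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final LongAdder size = new LongAdder();

    public V put(K key, V val) {
        V prev = map.put(key, val);
        if (prev == null)
            size.increment(); // Only count newly added keys, not replacements.
        return prev;
    }

    public V remove(K key) {
        V prev = map.remove(key);
        if (prev != null)
            size.decrement();
        return prev;
    }

    /** Approximate size; never traverses or contends on the map itself. */
    public long size() {
        return size.sum();
    }
}
```

Like `ConcurrentHashMap.size()` itself, the result is only weakly consistent under concurrent updates, which is usually acceptable for metrics and threshold checks.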
I’ll take a look at the end of the week.
There is one more use-case:
* if you initiate deployment from node A, but get a proxy on node B
(which isn't the deployment initiator) to call the service on node A, it may fail
with "service not found"; this is expected behaviour because we didn't
provide such
Igniters, I'll try to compare the 2.8 release candidate vs 2.7.6.
The last 2.8 sha it was built from: 9d114f3137f92aebc2562a
I use yardstick benchmarks on 4 bare-metal machines, each with: 2x Xeon X5570, 96 GB RAM, 512 GB SSD,
2048 GB HDD, 10 Gb/s network.
1 for the client (driver) and 3 for servers.
These are the mappings for the graphs and the real yardstick
Actually, it would be great to resolve this somehow. I checked the rejected
messages and found one [1] related to a really important ticket. It was
not delivered to my inbox at all =(
[1]
http://apache-ignite-developers.2346864.n4.nabble.com/jira-Created-IGNITE-12259-Create-new-module-for-support-spring-5-
Hello!
I have merged your PR to master after some tweaks.
Regards,
--
Ilya Kasnacheev
Fri, Dec 20, 2019 at 09:44, Sunny Chan, CLSA:
> Sorry for taking so long, but it has dropped down my priority list :(
>
> I have now provided a github pull request for the logging changes I would
> like to
One more valuable opinion was missed [1] (at least from my inbox).
[1]
http://apache-ignite-developers.2346864.n4.nabble.com/Critical-worker-threads-liveness-checking-drawbacks-tp34783p34978.html
Tue, Dec 24, 2019 at 13:48, Ivan Pavlukhin:
>
> Actually, it would be great to resolve this somehow. I check
Hello!
It came to my attention that we output the data regions' configurations twice
when starting a node, but we never output the list of data regions (including
system ones, etc.) that were actually started.
First we have IgniteConfiguration printed (quiet=false):
2019-07-24 02:33:33.918[INFO ][Thread-139][o.a
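The missing piece could be a single summary line emitted once all regions have started. A hypothetical sketch of such a formatter (my illustration in plain Java; not actual Ignite logging code, and the region names below are just examples):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class RegionSummary {
    /** Formats started data regions as one log-friendly "name=maxSize" line. */
    public static String format(Map<String, Long> regionsToMaxMb) {
        return regionsToMaxMb.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue() + "MB")
            .collect(Collectors.joining(", ", "Data regions started: [", "]"));
    }
}
```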
I went through all such warnings in my inbox and they are all for
messages sent from the nabble portal [1]. Currently I have the following
guesses:
1. Something is wrong with content type.
2. Something is wrong with sender address (via portal).
[1] Sent from: http://apache-ignite-developers.2346864.n4.nab
I have rechecked TC two more times.
Going to merge to master if there are no objections here.
On Mon, Dec 23, 2019 at 1:44 PM Anton Vinogradov wrote:
> Igniters,
>
> One more PME optimization ready to be reviewed.
> I found a strange tx recovery delay caused by IGNITE_TX_SALVAGE_TIMEOUT.
> I've checked the
What should be the user fallback in this case? Retry infinitely? Is there a
way to wait for the proper deployment?
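If retrying is indeed the intended fallback, the user-side workaround would be a bounded retry loop around proxy acquisition. A hypothetical sketch in plain Java (the `Supplier` stands in for a call like `ignite.services().serviceProxy(...)`, which is not reproduced here; the bound and backoff values are assumptions):

```java
import java.util.function.Supplier;

public class ProxyRetry {
    /**
     * Retries a lookup that may fail transiently (e.g. with a
     * "service not found" error) until it succeeds or attempts run out.
     */
    public static <T> T retry(Supplier<T> lookup, int attempts, long backoffMs) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return lookup.get();
            } catch (RuntimeException e) {
                last = e; // Remember the failure and back off before retrying.
                try {
                    Thread.sleep(backoffMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break; // Stop retrying if interrupted.
                }
            }
        }
        throw last != null ? last : new IllegalStateException("no attempts made");
    }
}
```

Bounding the attempts avoids the "retry infinitely" problem raised above: after the budget is exhausted, the original exception is rethrown to the caller.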
Tue, Dec 24, 2019 at 12:41, Vyacheslav Daradur:
> I’ll take a look at the end of the week.
>
> There is one more use-case:
> * if you initiate deployment from node A, but getting p
I'm not sure that "user fallback" is the right term; this is not new
behaviour compared to the legacy implementation.
Our synchronous deployment guarantees that a deployment
initiator is able to start working with the service immediately after
the deployment has finished successfully.
For not the d
Ok, got it.
I agree that this is consistent with the old behavior, but this is the kind
of error we wanted to get rid of when we started the IEP. From the
user's perspective, even local deployment looks broken: if a compute job
is sent to a remote node after the service deployment, the job exec
> even the local deployment looks broken: if a compute job
> is sent to a remote node after the service deployment
This is a different case, and it is covered by retries:
* If you deploy a service from node A to node B, then take a proxy
from node A (the deployment initiator), it should NOT fail even if node
Ivan,
Probably the INFRA team can give advice or clear things up. Please try to
reach out to them by opening a ticket in Jira.
On Tuesday, December 24, 2019, Ivan Pavlukhin wrote:
> I went through all such warnings in my inbox and all they are for
> messages sent from nabble portal [1]. Currently I
Denis,
Thank you for the advice!
Also, one idea came to mind: since messages sent via the nabble portal might
get lost, can we disable sending messages via nabble altogether?
Tue, Dec 24, 2019 at 20:38, Denis Magda:
>
> Ivan,
>
> Probably, INFRA team can give advice or clear things out. Please try to
> reach t
Amelchev Nikita created IGNITE-12492:
Summary: TDE - Phase-2. Documentation.
Key: IGNITE-12492
URL: https://issues.apache.org/jira/browse/IGNITE-12492
Project: Ignite
Issue Type: Sub-task
Hello Igniters!
Nikolay has almost finished the PR review. Does anyone else want to look at
the changes? [1]
I implemented master key change management through the Java API and JMX. I
created issue [2] to implement the change through control.sh, which I
will do after the first one is merged.
[1] https://github.co
Hi Igniters,
I've detected some new issues on TeamCity to be handled. You are more than
welcome to help.
If your changes could have led to these failure(s): we're grateful that you
volunteered to make a contribution to this project, but things change and you
may no longer be able to finalize