Hi Paul,
It looks like we are waiting for the next segment to become available, but the
current segment is still being archived. I guess the wal-file-archiver thread performs I/O operations.
Sincerely,
Dmitriy Pavlov
Wed, Jul 25, 2018 at 19:14, Paul Anderson :
> Any ideas?
>
> [2018-07-17T20:17:57,167][WARN ]grid-timeout-work
Hi Denis,
I agree that data presence should be double-checked. If the check shows
that 1) the cache doesn't exist at all after restart and 2) we can find
two folders with binary_meta and wal, then this question is similar
to a recent question
http://apache-ignite-users.70518.x6.nabble.com/Sp
Hi Ray,
I’m also trying to reproduce this behaviour, but for 20M entries it
works fine on Ignite 2.2.
It is expected that the in-memory only mode works faster, because memory has a
write speed several orders of magnitude higher than the disk.
Which type of disk is installed in the servers? Is it
Hi Ray,
Thank you for your reply. In addition to the checkpoint marker, setting the page
size to 4K (already the default in newer versions of Ignite) and the WAL history
size to “1” may help reduce overhead and disk space used and make
loading a little faster.
I apologize if you already mentioned this, but a
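For illustration, a hedged sketch of these two settings using the DataStorageConfiguration API from newer Ignite 2.x releases (older versions expose the same knobs on PersistentStoreConfiguration; everything else here is an assumption):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StorageTuning {
    public static void main(String[] args) {
        DataStorageConfiguration ds = new DataStorageConfiguration()
            // 4 KB pages (already the default in newer Ignite versions).
            .setPageSize(4 * 1024)
            // Keep WAL segments for only one checkpoint to save disk space.
            .setWalHistorySize(1);

        ds.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        Ignition.start(new IgniteConfiguration().setDataStorageConfiguration(ds));
    }
}
```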
Sorry for the misprint. I meant thread dumps, of course.
Tue, Oct 17, 2017 at 18:16, Dmitry Pavlov :
> Hi Ray,
>
> Thank you for your reply. In addition to checkpoint marker, setting page
> size to 4K (already default in newer versions of Ignite) and WAL history
> size to value
Hi Ray,
Thank you for the thread dumps. 'Failed to wait for partition map exchange' is
related to rebalancing. What could be the reason for rebalancing: is it possible
that some nodes joined or left the topology? Data load itself can't cause
rebalancing; partitions are not moved if the cluster is stable.
If Persi
Hi Ray,
I plan to look at the dumps again.
I can note that eviction of records from memory to disk does not need any
additional configuration; it works automatically.
Therefore, yes, increasing the amount of RAM for the data region will make
records be evicted from memory less often.
Sincerely,
Dmitri
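As a hedged illustration of that point, a data region's RAM budget is a single configuration setting (the region name and the 8 GB figure below are assumptions):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionSizeExample {
    static IgniteConfiguration config() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")
            .setPersistenceEnabled(true)
            // More RAM for the region means records are pushed to disk less often.
            .setMaxSize(8L * 1024 * 1024 * 1024); // 8 GB

        return new IgniteConfiguration().setDataStorageConfiguration(
            new DataStorageConfiguration().setDefaultDataRegionConfiguration(region));
    }
}
```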
Hi Ray,
I checked the dumps, and from them it is clear that the client node cannot
provide more data load, since the cluster is already busy. But there is no
activity on the server node.
I suppose that the problem could be located on some other server node of
the three remaining. The logs you sen
Hi,
>data is apparently successfully rebalanced
Data rebalancing is a background process, and it was probably still in
progress. How did you check that the rebalance was complete?
If the rebalance is not complete, queries and value lookups will be
routed to the node holding the full data set.
Sincerely,
Dmi
Hi,
I tried to write code that executes the described scenario.
The results are as follows:
If I do not allow enough time to completely rebalance the partitions, then the
newly launched node will not have enough data for count(*).
If I do not wait for enough time to allow to distribute th
still reproduced?
Sincerely,
Pavlov Dmitry
Fri, Oct 20, 2017 at 14:26, Dmitry Pavlov :
> Hi Ray,
>
> I checked the dumps and from them it is clear that the client node can not
> provide more data load, since the cluster is already busy. But there is no
> activity on the ser
fully rebalanced
> making sql queries (count(*))
>
> 5. Stop server nodes
> 6. Restart server nodes
> 7. Doing sql queries (count(*)) returns less data
>
> —
> Denis
>
> > On Oct 23, 2017, at 5:11 AM, Dmitry Pavlov
> wrote:
> >
> > Hi,
> >
Hi Denis,
I had a short chat with Alex G.
You're right, it may be a bug. I'll prepare my reproducer and add it as a
test. I will also raise a ticket if count(*) gives an incorrect result.
Sincerely,
Dmitry Pavlov
Fri, Oct 27, 2017, 1:48 Denis Magda :
> Dmitriy,
>
> I don
Hi,
The AffinityKeyMapped annotation allows excluding some fields from the affinity
key calculation while still using them in the cache key. The field areaId was
already excluded from the affinity calculation for OrganizationKey by applying
the annotation. And for the second entity, Employee, the correct option is as follows:
public c
Hi,
I think in this case it is required to enrich EmployeeKey with areaId:
public class OrganizationKey {
    private int organizationId;

    @AffinityKeyMapped
    private int areaId;
}

public class EmployeeKey {
    private int employeeId;
    private int organizationId;

    @AffinityKeyMapped
    private int areaId;
}
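A hedged sketch of checking the colocation such keys give, assuming two caches named "orgs" and "employees" configured with the same affinity function (both names are assumptions); keys sharing an areaId should fall into the same partition:

```java
import org.apache.ignite.Ignite;

public class AffinityCheck {
    // Keys with equal areaId map to the same partition, so an organization
    // and its employees end up stored on the same node.
    static void check(Ignite ignite, Object orgKey, Object empKey) {
        int orgPart = ignite.affinity("orgs").partition(orgKey);
        int empPart = ignite.affinity("employees").partition(empKey);

        if (orgPart != empPart)
            throw new IllegalStateException("Keys are not colocated");
    }
}
```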
Hi,
Ignite will be able to read the data.
The community decided to maintain backward compatibility of the persistent store
at least for minor versions. There are also several tests checking this
compatibility.
Sincerely,
Dmitriy Pavlov
Mon, Oct 30, 2017 at 14:49, Paulus de B. :
> Hi, does anyone know if e.
Hi Andrey, should we limit the scan query to be executed only on the current node
here with the setLocal() method?
Mon, Oct 30, 2017 at 16:48, Andrey Mashenkov :
> As a workaround you can try to broadcast a task :
>
> Collection res =
> ignite.compute(ignite.cluster().forCacheNodes("mycache")).broadc
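Combining the broadcast workaround with a node-local scan might look like the sketch below; the cache name "mycache" comes from the snippet above, everything else is an assumption:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;

public class LocalScanBroadcast {
    static Collection<List<Object>> run(Ignite ignite) {
        return ignite.compute(ignite.cluster().forCacheNodes("mycache"))
            .broadcast(() -> {
                // Resolve the node-local Ignite instance inside the closure.
                Ignite local = Ignition.localIgnite();
                List<Object> vals = new ArrayList<>();

                // setLocal(true) restricts the scan to partitions on this node.
                for (Cache.Entry<Object, Object> e :
                        local.<Object, Object>cache("mycache")
                            .query(new ScanQuery<>().setLocal(true)))
                    vals.add(e.getValue());

                return vals;
            });
    }
}
```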
at 16:51, Dmitry Pavlov :
> Hi Denis,
>
> I had short chat with Alex G.
>
> You're right, It may be a bug. I'll prepare my reproducer and add is as
> test. Also I will raise the ticket if count(*) will give incorrect result.
>
> Sincerely,
> Dmitry Pavlov
I agree with Denis: if we don't have such a warning, we would have to continuously
warn users in wiki pages/blogs/presentations. It is simpler to warn from
code.
What do you think about issuing the warning only if size > 1? A HashMap with 1
item will not cause a deadlock. Moreover, there can be some custom single
dimir.
>
> On Tue, Oct 31, 2017 at 8:34 PM, Denis Magda wrote:
>
>> Here is a ticket for the improvement:
>> https://issues.apache.org/jira/browse/IGNITE-6804
>>
>> —
>> Denis
>>
>> > On Oct 31, 2017, at 3:55 AM, Dmitry Pavlov
>> wrote:
&
Hi John,
No, the WAL consists of several constant-sized, append-only files (segments).
Segments are rotated and deleted after (WAL_History_size) checkpoints.
The WAL is common to all caches.
If you are interested in low-level implementation details, you can see
them here in the draft wiki page for Ignite
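For illustration, a hedged sketch of the related knobs on DataStorageConfiguration (the numbers are arbitrary examples, not recommendations):

```java
import org.apache.ignite.configuration.DataStorageConfiguration;

public class WalLayout {
    static DataStorageConfiguration walConfig() {
        return new DataStorageConfiguration()
            // Number of constant-sized, append-only segment files to rotate.
            .setWalSegments(10)
            // Size of each segment file, 64 MB here.
            .setWalSegmentSize(64 * 1024 * 1024)
            // Segments become eligible for deletion after this many checkpoints.
            .setWalHistorySize(20);
    }
}
```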
Hi,
Previously, in real applications, I have needed such information: what the
remaining lifetime of an object is (at least for debugging and monitoring).
So I am also +1 for providing such information in the API. Otherwise, the user
has to duplicate this information in an entry field.
Sincerely,
Dmi
Hi Michail,
To avoid confusion between real evictions and the rotation of PDS-enabled region
pages to disk, we've decided to call the second process 'page
replacement'.
In future releases all messages related to purging pages to disk will say
'page replacement' instead of 'eviction'. Hope it helps to sep
Hi,
Ignite assumes there is another node up and running that locks the file
/home/apache-ignite-fabric-2.3.0-bin/work/db/node00-17cba8d
3-43b3-4e43-8546-d52ec3b20f02/lock
This protects a new Ignite node from using the same DB folder concurrently with
another running node.
Is the previous node's process still alive?
Could you try to delete the node-00-...\lock file, but only this file and only
if there is no Ignite process currently holding the lock?
Which OS do you use?
Tue, Feb 27, 2018 at 21:03, siva :
> Thanks for the replay,
> Yes,We have already tried like ,killing the ignite process and deleting
> ano
Hi Oleksandr,
Could you please check the Ignite logs for messages showing where the persistence
directory is located?
When Ignite is started from code, it is possible that Ignite cannot locate
IGNITE_WORK or IGNITE_HOME and creates its work files in a temp dir. The start
scripts always provide this setting, but in dev
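A hedged sketch of pinning the work directory from code so files cannot land in a temp dir (the path is an assumption):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WorkDirExample {
    public static void main(String[] args) {
        // An explicit work directory keeps persistence files in a known
        // place even when IGNITE_HOME is not set.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setWorkDirectory("/opt/ignite/work");

        Ignition.start(cfg);
    }
}
```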
Hi,
There was a well-known issue with native persistence and rebalancing
data between nodes; I can try to find its ID later.
How many nodes do you use?
Sincerely,
Dmitry Pavlov
Wed, Mar 7, 2018, 22:25 Subash Chaturanga :
> Hi team,
> With a cache having CreatedExpiryPolicy fo
not removed because of
this delay.
Sincerely
Dmitry Pavlov
Thu, Mar 8, 2018, 17:15 Subash Chaturanga :
> Ideally 3 nodes.
>
> So expiry inconsistency is a known issue ?
>
> But I reproduce the expiry inconsistency issue even in a single node with
> the steps mentioned e
Hi, I'm not sure I understand what a high watermark for persistence means.
Could you please explain?
Mon, Mar 12, 2018 at 19:43, Subash Chaturanga :
> Thank you very much for the responses.
>
> Will keep an eye on when it will be released. Any estimated release for
> the fix ?
>
> One more question
for persistence.
>
> On Tue, Mar 13, 2018 at 3:30 AM, Dmitry Pavlov
> wrote:
>
>> Hi, I'm not sure I understand what high watermark for persistence mean.
>> Could you please explain?
>>
>> Mon, Mar 12, 2018 at 19:43, Subash Chaturanga :
>>
>>
Hi,
It should work; data from 2.3 should be loadable by 2.4. Could you please
share the details?
Do you have logs and the entries saved in the persistent store?
Sincerely,
Dmitriy Pavlov
Thu, Mar 15, 2018 at 12:56, Mikael :
> Hi!
>
> Are persistent storage compatible between 2.3 and 2.4 ? I upgraded to
Hi Alexey,
It may be a serious issue. Could you recommend an expert here who can pick
this up?
Sincerely,
Dmitriy Pavlov
Thu, Mar 15, 2018 at 19:25, Arseny Kovalchuk :
> Hi, guys.
>
> I've got a reproducer for a problem which is generally reported as "Caused
> by: java.lang.IllegalStateException: F
Hi, I've used this feature as well; I used it to build a request-processing
system with decoupled components.
Topic-based messaging does not support persistence, so if persistent
messages are required, it is better to use Ignite with Kafka.
Please see also the related discussion on SO:
https://stac
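For illustration, a minimal hedged sketch of topic-based messaging; the topic name "requests" and the payload are assumptions:

```java
import org.apache.ignite.Ignite;

public class TopicMessagingExample {
    static void wire(Ignite ignite) {
        // Listen on this node; returning true keeps the listener active.
        ignite.message().localListen("requests", (nodeId, msg) -> {
            System.out.println("Got " + msg + " from node " + nodeId);
            return true;
        });

        // Fire-and-forget: messages are not persisted, so a message can be
        // lost if a node fails at the wrong moment.
        ignite.message().send("requests", "process-order-42");
    }
}
```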
Yes, you can have backup copies of data in caches.
At the same time, you can have several message listeners set up on several
nodes.
As for the topic message instance itself, there is only one copy of a message in
the cluster, so a particular message may be lost in some cases of node failure.
This is wh
Hi Arseny,
I've observed in the reproducer
ignite_version=2.3.0
Could you check whether it is reproducible in our freshest release, 2.4.0?
I'm not sure about the ticket number, but it is quite possible the issue is
already fixed.
Sincerely,
Dmitriy Pavlov
Thu, Mar 15, 2018 at 19:34, Dmitry Pavl
>
>
>
>
>
>
> On 20 Mar 2018, at 15:03, Petr Ivanov wrote:
>
>
>
> Not yet.
> Project is still under development, I will pass build to community after
> settling corresponding permissions and receiving QA report.
>
> Also — it is time to rise a matt
Hi Petr,
I've mentioned you in the ticket. Is it an obvious change, so we could apply
the patch?
Or could you advise a maintainer/expert here?
Sincerely,
Dmitriy Pavlov
Wed, Apr 18, 2018 at 7:27, Roman Shtykh :
> I had the same problem (pretty common not having -i option) and fixed it.
> Need a review.
Hi Raymond,
Was this question answered?
Sincerely,
Dmitriy Pavlov
Tue, May 1, 2018 at 0:20, Raymond Wilson :
> Cross posting to dev list for comment on cache interceptor availability on
> Ignite .Net client.
>
> -Original Message-
> From: Raymond Wilson [mailto:raymond_wil...@trimble.co
Cross-posting to user list.
Hi Folks,
Could you please comment?
Sincerely,
Dmitriy Pavlov
Mon, May 21, 2018 at 7:41, zhouxy1123 :
>
> hi , How dose ignite implement concurrent control in transaction?
> Since Ignite support Read Repeat isolation,so in a transaction lock
> protocol
> is tow phas
Hi David,
The described behaviour (when too much time is spent on locks) was fixed in
Ignite 2.5 by 3 or 4 optimization changes.
IMO the release will be published soon, so the new Ignite version should get a
performance boost in this case.
Yes, 'times 4' should be removed from the doc
https://apacheignite.readm
Hi,
Cross-posting to the user list; the dev list is intended for contribution-related
discussions.
Sincerely,
-- Forwarded message -
From: vbm
Date: Thu, Jun 7, 2018 at 10:55
Subject: Question on peer class loading
To:
Hi,
I have an Ignite server running with a 3rd party DB (MySQL). I
veral different Web App class loaders are enabled on clients.
Do you have a standalone reproducer you can share?
Best Regards,
Dmitry Pavlov
Fri, May 12, 2017 at 15:58, Ilya :
> Hi all!
>
> The question was originally asked (but not answered) on SO:
>
> http://stackoverflow.com/qu
load class at receiver?
Sincerely,
Dmitry Pavlov
Tue, May 16, 2017 at 12:17, Ilya :
> Hi Dmitry,
>
> Unfortunately, I've did not yet manage to reproduce this issue outside of
> our project.
>
> What do you mean by "GridDeploymentClassLoader is used for loading cl
icate can't
complete successfully because of the different class loaders used.
May I ask you to provide a log and/or stack trace of the original deployment
problem in the project? Additionally, you may enable the debug log level for
deployment.
Best Regards,
Dmitry Pavlov
Wed, May 17, 2017 at 11:39, Ilya :
&
, especially if you have success
reproducing the "Failed to deploy user message" issue.
Best Regards,
Dmitry Pavlov
Thu, May 18, 2017 at 19:37, Ilya :
> Hi Dmitry,
>
> Fair enough about the classloaders, but the stack trace looks like it
> comes from a server node. Why does
Hi Lukas,
I have created the improvement https://issues.apache.org/jira/browse/IGNITE-5288 to
consider this suggestion in future versions.
Best Regards,
Dmitry Pavlov
Wed, May 24, 2017 at 15:39, Lukas Lentner :
> Hi,
>
>
>
> When using Ignite 1.7 together with Excelsior JET Ahead-O
09 22 <+49%20176%2024770922>
>
> E-Mail: kont...@lukaslentner.de
>
> Website: www.LukasLentner.de
>
>
>
> IBAN:DE33 7019 0001 1810 17
>
> BIC: GENODEF1M01 (Münchner Bank)
>
>
>
> *From:* Dmitry Pavlov [mailto:dpavlov@gmail.com]
>
Hi Matt,
Ignite cache more or less corresponds to table from relational world.
As for caches number: Both ways are possible. In relational world, by the
way, you also can place different business objects into one table, but you
will have to introduce additional type field.
Similar for the
Hi,
Of cause Ignite cache is more powerful than just one relational table.
Cache is object oriented storage and can store a lot of data tables in one
cache record.
Selecting how many caches to use is design question: - one per each table,
- one per some business object (root for several tables),
> *From:* Dmitry Pavlov [mailto:dpavlov@gmail.com]
> *Sent:* Wednesday, May 24, 2017 20:23
>
>
>
Hi Megha,
Please note there was an improvement added for this metric:
https://issues.apache.org/jira/browse/IGNITE-5490
Best Regards,
Dmitry Pavlov
Thu, Jun 15, 2017 at 12:34, Megha Mittal :
> Hi,
>
> Thanks for your reply. It's quite possible that rebalancing might be taking
>
Hi Raymond,
The Ignite persistent store includes the consistentId parameter of the cluster
node in the folder name. It is required because it is possible that 2 nodes
are started on the same physical machine.
Using the same folder each time is ensured by this property,
ClusterNode.consistentId
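For illustration, a hedged sketch of pinning the ID explicitly (the value "node-1" is an assumption):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

public class ConsistentIdExample {
    static IgniteConfiguration config() {
        // Fixing consistentId makes the node reuse the same persistence
        // folder (work/db/<consistentId>) on every restart.
        return new IgniteConfiguration().setConsistentId("node-1");
    }
}
```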
a client only feature?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Raymond Wilson [mailto:raymond_wil...@trimble.com
> ]
> *Sent:* Tuesday, September 5, 2017 6:04 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Specifying location of persistent storage lo
Hi Raymond,
Since version 2.0, total memory usage is determined as the sum of the heap size
and the memory policies' max sizes (overall segment sizes). If it is not
configured, 80% of physical RAM is used for each node (before 2.2). In 2.2
this behaviour will be changed.
To run several nodes at one PC i
Hi John,
The process of checkpointing makes all dirty pages be written to disk. As a
result they become clean (non-dirty).
When checkpointing is not running, a dirty page can't be evicted (if the
persistent data storage mode is enabled). Only a clean page may be evicted
from memory.
At the same time, too
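A minimal sketch of the checkpointing knob this describes, assuming DataStorageConfiguration (the 3-minute value is illustrative, not a recommendation):

```java
import org.apache.ignite.configuration.DataStorageConfiguration;

public class CheckpointTuning {
    static DataStorageConfiguration checkpointConfig() {
        return new DataStorageConfiguration()
            // Run a checkpoint at least every 3 minutes so dirty pages are
            // regularly written back and become clean (and thus evictable).
            .setCheckpointFrequency(3 * 60 * 1000);
    }
}
```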
Hi John,
An overly large page will require Ignite to spend much time loading the page
from disk or writing it back during checkpointing. Ignite is able to change a
field value pointwise within a page in case of an update. In that case, if
too large a page is selected, one field update, for example 1 byte, wil