Denis Mekhanikov,
Currently metadata are fsync'ed on write. This might be the cause of
slow-downs in case of bursts of metadata writes.
I think removing fsync could help to mitigate performance issues with the
current implementation until a proper solution is implemented: moving
metadata to the metastore.
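To make the trade-off concrete, here is a minimal, illustrative sketch (not
Ignite's actual metadata writer; the class and method names are made up) of
fsync-per-write versus a deferred flush, using a plain FileChannel:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Illustrative only: not Ignite's actual metadata writer. */
public class MetadataWriterSketch {
    private final FileChannel ch;
    private final boolean fsyncOnWrite;

    MetadataWriterSketch(Path file, boolean fsyncOnWrite) throws IOException {
        this.ch = FileChannel.open(file,
            StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        this.fsyncOnWrite = fsyncOnWrite;
    }

    /** Writes one metadata record; with fsync, every write waits for the disk. */
    void write(ByteBuffer record) throws IOException {
        ch.write(record);

        if (fsyncOnWrite)
            ch.force(false); // Every registration in a burst pays this latency.
    }

    /** Without per-write fsync, durability is deferred to an explicit or background flush. */
    void flush() throws IOException {
        ch.force(false);
    }
}

With fsyncOnWrite enabled, each metadata registration waits for a full disk
sync, which is what makes burst registrations slow.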
Alexey, but in this case the customer needs to be informed that a whole-cluster
crash (for example, a power-off of a 1-node cluster) could lead to partial data
unavailability, and possibly to further index corruption.
1. Why does your metadata take up a substantial amount of space? Could it be a
context leak?
2. Could the metadata be compressed?
> Wed
Denis,
Several clarifying questions:
1. Do you have an idea why metadata registration takes so long? Are the
disks that slow? Is there that much data to write? Is there contention with
disk writes from other subsystems?
2. Do we need persistent metadata for in-memory caches? Or is that
accidental?
Generally, I think that
Maxim Muzafarov created IGNITE-12069:
Summary: Create cache shared preloader
Key: IGNITE-12069
URL: https://issues.apache.org/jira/browse/IGNITE-12069
Project: Ignite
Issue Type: Sub-task
Hi Igniters,
Finally, all dependent tickets are resolved and I've completed the
implementation of thin client transactions support. The patch [1] includes
the server-side implementation and the Java thin client-side implementation.
Changes to the thin client protocol and a top-level view of the
implementation also
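For context, a usage sketch of how a transaction from the Java thin client is
expected to look once the patch lands (the cache name and server address are
assumptions, and the cache is assumed to be TRANSACTIONAL):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.ClientTransaction;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientTxSketch {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // The cache is assumed to already exist with atomicityMode=TRANSACTIONAL.
            ClientCache<Integer, String> cache = client.cache("tx-cache");

            // Explicit transaction spanning several cache operations.
            try (ClientTransaction tx = client.transactions().txStart()) {
                cache.put(1, "one");
                cache.put(2, "two");

                tx.commit(); // Without commit(), close() rolls the transaction back.
            }
        }
    }
}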
Folks,
Thanks for showing interest in this issue!
Alexey,
> I think removing fsync could help to mitigate performance issues with current
> implementation
Is my understanding correct that if we remove fsync, then discovery won't be
blocked and data will be flushed to disk in the background, an
Igniters,
Eduard,
I've checked this failure. It seems to me that the issue is entirely
related to IGNITE-9562 [1].
I see the following problems here:
First,
The tests check for the thrown message "Encryption cannot be used
with disk page compression", but after change [1] the error me
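For reference, a minimal sketch of the kind of assertion that breaks when the
error wording changes; the test class and helper below are hypothetical, not
the actual failing tests:

import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

import org.apache.ignite.IgniteCheckedException;
import org.junit.Test;

/** Illustrative only: shows why asserting on an exact error message is brittle. */
public class EncryptionCompressionMessageTestSketch {
    @Test
    public void testEncryptionWithDiskPageCompressionIsRejected() {
        try {
            startGridWithEncryptionAndDiskPageCompression(); // Hypothetical helper.
            fail("Expected configuration validation to fail.");
        }
        catch (IgniteCheckedException e) {
            // Fails as soon as the wording of the validation message changes.
            assertTrue(e.getMessage().contains("Encryption cannot be used with disk page compression"));
        }
    }

    /** Hypothetical helper standing in for the real test fixture. */
    private void startGridWithEncryptionAndDiskPageCompression() throws IgniteCheckedException {
        throw new IgniteCheckedException("Encryption cannot be used with disk page compression");
    }
}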
Denis Chudov created IGNITE-12070:
-
Summary: Document the new ability to track system/user time of
transactions
Key: IGNITE-12070
URL: https://issues.apache.org/jira/browse/IGNITE-12070
Project: Ignit
Igniters,
Since the file transmission between Ignite nodes [2] has been merged
to the master branch (it is the first mandatory part of the file-based
rebalance procedure), I'd like to focus on the next step of the current
IEP-28: the process of creating snapshots of cache group partitions.
Previous
Dmitriy Pavlov created IGNITE-12071:
---
Summary: Test failures after IGNITE-9562 fix
Key: IGNITE-12071
URL: https://issues.apache.org/jira/browse/IGNITE-12071
Project: Ignite
Issue Type: Test
Hi Maxim, thank you for stepping in.
I've created a blocker for 2.7.6
https://issues.apache.org/jira/browse/IGNITE-12071. Hopefully, it will be
fixed soon. Otherwise, we should start a discussion about reverting.
Wed, 14 Aug 2019 at 15:00, Maxim Muzafarov:
> Igniters,
> Eduard,
>
> I've check
Eduard Shangareev created IGNITE-12072:
--
Summary: Starting node with extra cache in cache group causes
assertion error
Key: IGNITE-12072
URL: https://issues.apache.org/jira/browse/IGNITE-12072
Pro
Hello, Maxim.
I think backup is a great feature for Ignite.
Let's have it!
A few notes on it:
1. The backup directory should be taken from the node configuration (see the
sketch below).
2. Backups should be stored on the local node only.
An Ignite admin can write a shell script to move all backed-up partitions to
one storage location himself.
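A purely hypothetical sketch of notes 1 and 2 (Ignite has no such
configuration class today; the names below are illustrative only):

/**
 * Hypothetical sketch only, illustrating the notes above:
 * the backup directory comes from the node configuration, and
 * backed-up partitions stay on the local node.
 */
public class BackupConfigurationSketch {
    /** Local directory where backed-up partitions would be stored. */
    private String backupDir = "/var/lib/ignite/backups";

    public String getBackupDir() {
        return backupDir;
    }

    public BackupConfigurationSketch setBackupDir(String backupDir) {
        this.backupDir = backupDir;
        return this;
    }
}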
Hi, sorry, I've updated the description of
https://issues.apache.org/jira/browse/IGNITE-12071
The issue with the first 2 tests is a duplicate of
https://issues.apache.org/jira/browse/IGNITE-12059, which is already in PA state.
My ticket will be related to new IGFS suite failures,
https://lists.apache.org/thread.h
Guys,
thank you for your attention.
I am aware of these issues. It's already fixed in
https://issues.apache.org/jira/browse/IGNITE-12059. It needs to be reviewed and merged.
On Wed, Aug 14, 2019 at 4:35 PM Dmitriy Pavlov wrote:
> Hi Maxim, thank you for stepping in.
>
> I've created a blocker for 2.7.
Hi!
There are two JDK-internal things that are used by Ignite: Unsafe and the
sun.nio.ch package.
Neither of these is used by thin clients, so it's fine to use thin
clients without additional flags.
Denis
> On 13 Aug 2019, at 23:01, Shane Duan wrote:
>
> Hi Igniter,
>
> I understand that
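To illustrate Denis's point above, a minimal Java thin client example that
needs no extra JVM flags (the server address and cache name are assumptions):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientNoFlagsExample {
    public static void main(String[] args) {
        // Point this at any running Ignite server node.
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

        // The thin client talks over a socket protocol and does not rely on
        // Unsafe or sun.nio.ch, so no --add-exports/--add-opens flags are needed.
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("demo");

            cache.put(1, "hello");
            System.out.println(cache.get(1));
        }
    }
}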
Nikolay,
In my message above I described only the internal local BackupManager
for rebalance needs, but I also have some thoughts on a backup feature
for the whole Ignite cluster. I'll give you a detailed answer in the
appropriate discussion topic [1] a bit later.
[1]
http://apache-ignite-de
Maxim, thanks!
On Wed, 14/08/2019 at 18:26 +0300, Maxim Muzafarov wrote:
> Nikolay,
>
> In my message above I've described only internal local BackupManager
> for the rebalance needs, but for the backup feature of the whole
> Ignite cluster I also have some thoughts. I'll give you a detailed
> answe
Thanks for the confirmation!
On Wed, Aug 14, 2019 at 7:56 AM Denis Mekhanikov
wrote:
> Hi!
>
> There are two JDK internal things that are used by Ignite: Unsafe and
> sun.nio.ch package.
> None of these things are used by thin clients. So, it’s fine to use thin
> clients without additional flags
Denis Mekhanikov,
1. Yes, only on OS failures. In that case the data will be received from alive
nodes later.
2. Yes, with WAL mode FSYNC, writes to the metastore will be slow. But that mode
should not be used if you have more than two nodes in the grid, because it has a
huge impact on performance.
Wed, 14 Aug 201
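For readers following along, a minimal sketch of where the WAL mode discussed
above is set (the FSYNC value shown is the one Alexey warns against for larger
grids; LOG_ONLY is the default):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalModeExample {
    public static IgniteConfiguration config() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            // FSYNC forces every WAL write to hit the disk before the operation
            // completes; LOG_ONLY (the default) defers the physical flush.
            .setWalMode(WALMode.FSYNC);

        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
    }
}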
Alexey,
I still don't completely understand whether, by using the metastore, we are
going to stop using discovery for metadata registration or not. Could you
clarify that point?
Is it going to be a distributed metastore or a local one?
Are there any relevant JIRA tickets for this change?
Denis
> On 14
chin created IGNITE-12073:
-
Summary: The doc should mention IGNITE_UPDATE_NOTIFIER has no
effect if you're not the first node that started up
Key: IGNITE-12073
URL: https://issues.apache.org/jira/browse/IGNITE-12073
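For illustration, the property mentioned in the summary is typically set as a
JVM system property; per the ticket, it only takes effect on the first node
that starts up:

import org.apache.ignite.Ignition;

public class DisableUpdateNotifierExample {
    public static void main(String[] args) {
        // Equivalent to passing -DIGNITE_UPDATE_NOTIFIER=false on the command line.
        // Per the ticket summary, this has no effect unless this node is the
        // first one started in the cluster.
        System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");

        Ignition.start();
    }
}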
Hi Igniters,
I've detected some new issue on TeamCity to be handled. You are more than
welcome to help.
If your changes can lead to this failure(s): We're grateful that you were a
volunteer to make the contribution to this project, but things change and you
may no longer be able to finalize
chin created IGNITE-12074:
-
Summary: setFailureDetectionTimeout causes Critical system error
detected in log
Key: IGNITE-12074
URL: https://issues.apache.org/jira/browse/IGNITE-12074
Project: Ignite
Igniters,
It seems to me that this is my personal battle with the Ignite Javadoc.
I've prepared a PR with a one-liner fix [1] under the recently merged
issue [3] to handle this failure, and also re-ran the Javadoc suite [2]
on TC.
Can anyone take a look?
[1] https://github.com/apache/ignite/pull/6
I've merged it since Javadoc is very sensitive.
Once Javadoc is broken, it becomes a pain in the neck. It is never easy
to overcome, even as part of MTCGA activity.
Wed, 14 Aug 2019 at 21:00, Maxim Muzafarov:
> Igniters,
>
>
> It seems to me that this is my personal battle with the
>
>> 1. Yes, only on OS failures. In such case data will be received from alive
>> nodes later.
What would the behavior be in the case of a single node? I suppose someone
could end up with cache data but no schema to unmarshal it; what would happen
to grid operability in that case?
>
>> 2. Yes, for walmode=FSYNC wri