Hi Dmitry,
Thanks for the pointer to the MemoryPolicy. I added the following:

    cfg.MemoryConfiguration = new MemoryConfiguration()
    {
        SystemCacheMaxSize = (long)1 * 1024 * 1024 * 1024,
        DefaultMemoryPolicyName = "defaultPolicy",
        MemoryPolicies = new[]
        {
            new MemoryPolicyConfiguration
            {
                Name = "defaultPolicy",
                InitialSize = 128 * 1024 * 1024,   // 128 MB
                MaxSize = 1L * 1024 * 1024 * 1024  // 1 GB
            }
        }
    };

After running both servers the commit size peaked at 4 GB for each process (with ~430 MB actually allocated), which is a significant improvement, though it still seems higher than might be expected.

Thanks, Raymond.

*From:* Dmitry Pavlov [mailto:dpavlov....@gmail.com]
*Sent:* Thursday, September 7, 2017 10:22 PM
*To:* u...@ignite.apache.org; dev@ignite.apache.org
*Subject:* Re: Massive commit sizes for processes with local Ignite Grid servers

Hi Raymond,

Since version 2.0, total memory usage is determined as the sum of the heap size and the MaxSize of each memory policy (the overall segment sizes). If this is not configured, 80% of physical RAM is used for each node (before 2.2). In 2.2 this behaviour will be changed.

To run several nodes on one PC it may be necessary to manually set up the memory configuration and memory policy(ies).

Hi Igniters, esp. Pavel T., please share your thoughts: to which Java property is the value of SystemCacheMaxSize now mapped?

Sincerely,
Dmitriy Pavlov

P.S. Please see an example of the configuration at https://apacheignite-net.readme.io/docs/durable-memory

    MemoryPolicies = new[]
    {
        new MemoryPolicyConfiguration
        {
            Name = "defaultPolicy",
            MaxSize = 4L * 1024 * 1024 * 1024 // 4 GB
        }
    }

Thu, 7 Sep 2017 at 12:44, Raymond Wilson <raymond_wil...@trimble.com>:

I tried an experiment where I ran only two instances of the server locally; this is the result in the Task Manager:

*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Thursday, September 7, 2017 9:21 PM
*To:* u...@ignite.apache.org; 'dev@ignite.apache.org' <dev@ignite.apache.org>
*Subject:* Massive commit sizes for processes with local Ignite Grid servers

I'm running a set of four server applications on a local system to simulate a cluster. Each of the servers has the following memory configuration set:

    public override void ConfigureRaptorGrid(IgniteConfiguration cfg)
    {
        cfg.JvmInitialMemoryMb = 512;  // Set to minimum advised memory for the Ignite grid JVM of 512 MB
        cfg.JvmMaxMemoryMb = 1 * 1024; // Set max to 1 GB

        // Don't permit the Ignite node to use more than 1 GB RAM (handy when running locally...)
        cfg.MemoryConfiguration = new MemoryConfiguration()
        {
            SystemCacheMaxSize = (long)1 * 1024 * 1024 * 1024
        };
    }

The snap below is from the Windows 10 Task Manager, where I have included the Commit Size value. As can be seen, the four identical servers are using very large and wildly varying commit sizes.

Some Googling suggests this is due to the JVM allocating the largest contiguous block of virtual memory it can, but I would not have expected this size to be larger than the configured memory for the JVM (1 GB, plus memory from the wider process it is running in, though this is only a few hundred MB at most).

The result is that my local system reports ~50-60 GB of committed memory on a system with 16 GB of physical RAM, and I don't think it likes it!

Is there a way to configure the Ignite JVM to be a better citizen with respect to the committed size it requests from the host operating system?

Thanks, Raymond.
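For reference, below is a minimal sketch of the kind of bounded per-node configuration discussed in this thread: JVM heap caps from the original message combined with an explicit default memory policy as Dmitry suggested, so a locally started node is limited to roughly heap + policy MaxSize rather than the pre-2.2 default of 80% of physical RAM. It assumes the 2.1-era Ignite.NET durable-memory API; the class name, namespace placement, and exact sizes are illustrative assumptions, not taken from the thread.

    using Apache.Ignite.Core;
    // Assumption: in the 2.1-era Ignite.NET API, MemoryConfiguration and
    // MemoryPolicyConfiguration live in this namespace; adjust for your version.
    using Apache.Ignite.Core.Cache.Configuration;

    public static class BoundedNodeExample // hypothetical example class, not from the thread
    {
        // Builds a configuration intended to keep one local node bounded to roughly
        // JvmMaxMemoryMb (heap) + MaxSize of the default policy (off-heap data region),
        // instead of the pre-2.2 default of 80% of physical RAM per node.
        public static IgniteConfiguration BuildBoundedConfig()
        {
            return new IgniteConfiguration
            {
                JvmInitialMemoryMb = 512,  // minimum advised heap for the Ignite JVM
                JvmMaxMemoryMb = 1024,     // cap the JVM heap at 1 GB
                MemoryConfiguration = new MemoryConfiguration
                {
                    SystemCacheMaxSize = 1L * 1024 * 1024 * 1024, // bound the system cache region to 1 GB
                    DefaultMemoryPolicyName = "defaultPolicy",
                    MemoryPolicies = new[]
                    {
                        new MemoryPolicyConfiguration
                        {
                            Name = "defaultPolicy",
                            InitialSize = 128L * 1024 * 1024, // start the data region at 128 MB
                            MaxSize = 1L * 1024 * 1024 * 1024 // never let it grow past 1 GB
                        }
                    }
                }
            };
        }

        public static void Main()
        {
            // Each local server process would start its node with the bounded configuration.
            using (var ignite = Ignition.Start(BuildBoundedConfig()))
            {
                // Committed memory per process should now be on the order of
                // heap (1 GB) + data region (1 GB) + system cache, not 80% of RAM.
            }
        }
    }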