ptupitsyn commented on code in PR #6120:
URL: https://github.com/apache/ignite-3/pull/6120#discussion_r2174351626


##########
modules/platforms/dotnet/Apache.Ignite/Internal/Compute/Executor/JobLoadContextCache.cs:
##########
@@ -0,0 +1,241 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+namespace Apache.Ignite.Internal.Compute.Executor;
+
+using System;
+using System.Collections.Generic;
+using System.Diagnostics;
+using System.Diagnostics.CodeAnalysis;
+using System.Runtime.InteropServices;
+using System.Threading;
+using System.Threading.Tasks;
+
+/// <summary>
+/// Cache for job load contexts.
+/// <para />
+/// Every job load context is associated with an ordered set of deployment unit paths. For example, the sets [unit1, unit2]
+/// and [unit1, unit3] represent different sets of assemblies, so they will have different job load contexts. Unit order matters too.
+/// <para />
+/// Loading assemblies and resolving types can take from ~100us to several seconds, depending on the number of assemblies,
+/// type initializers (static constructors), and other factors.
+/// <para />
+/// If a deployment unit U is undeployed, all job load contexts with U in their set will be removed from the cache immediately.
+/// </summary>
+internal sealed class JobLoadContextCache : IDisposable
+{
+    private static readonly long TicksPerMs = Stopwatch.Frequency / 1000L;
+
+    private readonly int _ttlMs;
+
+    /** Main cache of job load contexts. */
+    private readonly Dictionary<DeploymentUnitPaths, (JobLoadContext Ctx, long Ts)> _jobLoadContextCache = new();
+
+    /** Additional map to quickly find all job load contexts that use a given deployment unit path. */
+    private readonly Dictionary<string, List<DeploymentUnitPaths>> _deploymentUnitSets = new();
+
+    private readonly SemaphoreSlim _cacheLock = new(1);
+
+    private readonly CancellationTokenSource _cts = new();
+
+    /// <summary>
+    /// Initializes a new instance of the <see cref="JobLoadContextCache"/> class.
+    /// </summary>
+    /// <param name="ttlMs">Cache entry time-to-live in milliseconds.</param>
+    /// <param name="cacheCleanupIntervalMs">Cache cleanup interval in milliseconds.</param>
+    internal JobLoadContextCache(int ttlMs = 30_000, int cacheCleanupIntervalMs = 5_000)
+    {
+        _ttlMs = ttlMs;
+
+        _ = StartCacheCleanupAsync(cacheCleanupIntervalMs);
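
[Editor's note: the doc comment above describes two ideas worth illustrating: the cache key is the *ordered* list of unit paths, and a reverse index lets an undeploy of one unit evict every dependent context immediately. The following Java sketch is illustrative only — all names are hypothetical and this is not the actual Ignite implementation, which is C#.]

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the caching scheme described above: contexts keyed by
// the ordered list of deployment unit paths, plus a reverse index from each
// unit path to all keys containing it, for immediate eviction on undeploy.
public class JobContextCacheSketch {
    private final Map<List<String>, String> cache = new HashMap<>();
    private final Map<String, List<List<String>>> unitToKeys = new HashMap<>();

    public void put(List<String> units, String ctx) {
        cache.put(units, ctx);
        for (String u : units) {
            unitToKeys.computeIfAbsent(u, k -> new ArrayList<>()).add(units);
        }
    }

    public void undeploy(String unit) {
        List<List<String>> keys = unitToKeys.remove(unit);
        if (keys != null) {
            keys.forEach(cache::remove); // evict every context that includes the unit
        }
    }

    public int size() {
        return cache.size();
    }

    public static void main(String[] args) {
        JobContextCacheSketch c = new JobContextCacheSketch();
        c.put(List.of("unit1", "unit2"), "ctxA");
        c.put(List.of("unit1", "unit3"), "ctxB"); // different set -> different context
        c.put(List.of("unit2", "unit1"), "ctxC"); // order matters -> distinct key
        System.out.println(c.size());             // 3
        c.undeploy("unit1");                      // evicts all three contexts
        System.out.println(c.size());             // 0
    }
}
```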

Review Comment:
   The timer-based approach cleans up memory when idle. E.g. when load comes in spikes, we might allocate a bunch of contexts, then clean them up after a delay and be ready for the next spike. With lazy expiration we keep everything until the next spike, which can be suboptimal in terms of GC and memory usage.
   
   Lazy expiration can also be beneficial when we get a cache hit. However, since a cache miss costs us a few ms at most, I'd prefer to optimize for memory here. Also, the timer-based approach is simpler, IMO.
   
   Thoughts?
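
[Editor's note: the trade-off the reviewer describes can be sketched in a few lines. This is a hedged, language-agnostic illustration in Java, not the PR's C# code; all names are hypothetical. A background task sweeps expired entries on a fixed interval, so memory is reclaimed after a load spike even if no further cache calls arrive — lazy expiration would evict only on the next access.]

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical timer-based TTL cleanup: a scheduled sweep removes entries
// older than ttlMs, independent of cache traffic.
public class TtlSweepSketch {
    private final Map<String, Long> timestamps = new ConcurrentHashMap<>();
    private final long ttlMs;

    public TtlSweepSketch(long ttlMs, long cleanupIntervalMs, ScheduledExecutorService scheduler) {
        this.ttlMs = ttlMs;
        scheduler.scheduleAtFixedRate(
                this::sweep, cleanupIntervalMs, cleanupIntervalMs, TimeUnit.MILLISECONDS);
    }

    public void put(String key) {
        timestamps.put(key, System.nanoTime());
    }

    public int size() {
        return timestamps.size();
    }

    void sweep() {
        long now = System.nanoTime();
        // Drop entries whose age exceeds the TTL, even if the cache is idle.
        timestamps.entrySet().removeIf(e -> (now - e.getValue()) / 1_000_000 >= ttlMs);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        TtlSweepSketch cache = new TtlSweepSketch(50, 20, scheduler);
        cache.put("jobCtx1");             // simulated load spike
        Thread.sleep(200);                // idle period: sweeps run with no cache traffic
        System.out.println(cache.size()); // expired entry was reclaimed while idle (prints 0)
        scheduler.shutdown();
    }
}
```

With lazy expiration the `sweep` logic would instead run inside `put`/`get`, so the expired entry would survive the whole idle period — exactly the memory behavior the comment argues against.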
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@ignite.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
