With EPC cgroups, the global reclamation function,
sgx_reclaim_pages_global(), can no longer reclaim from the global LRU,
as pages are now tracked in per-cgroup LRUs.

Create a wrapper, sgx_cgroup_reclaim_pages_global(), to invoke
sgx_cgroup_reclaim_pages(), passing in the root cgroup.  The wrapper will
scan and attempt to reclaim SGX_NR_TO_SCAN pages just like the current
global reclaim.

Note this simple implementation doesn't _exactly_ mimic the current
global EPC reclaim (which always tries to do the actual reclaim in a
batch of SGX_NR_TO_SCAN pages): in rare cases when the LRUs have fewer
than SGX_NR_TO_SCAN reclaimable pages, the actual reclaim of EPC pages
will be split into smaller batches _across_ multiple LRUs, each smaller
than SGX_NR_TO_SCAN pages.

A more precise way to mimic the current global EPC reclaim would be to
have a new function to only "scan" (or "isolate") SGX_NR_TO_SCAN pages
_across_ the given EPC cgroup _AND_ its descendants, and then do the
actual reclaim in one batch.  But this is unnecessarily complicated at
this stage to address such rare cases.

Signed-off-by: Haitao Huang <haitao.hu...@linux.intel.com>
Reviewed-by: Kai Huang <kai.hu...@intel.com>
Reviewed-by: Jarkko Sakkinen <jar...@kernel.org>
---
V17:
- Add comments in sgx_can_reclaim_global(). (Kai)
- Remove inaccurate comments in commit msg. (Kai)
---
 arch/x86/kernel/cpu/sgx/epc_cgroup.c | 12 ++++++++++++
 arch/x86/kernel/cpu/sgx/epc_cgroup.h |  2 ++
 arch/x86/kernel/cpu/sgx/main.c       | 14 ++++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/epc_cgroup.c b/arch/x86/kernel/cpu/sgx/epc_cgroup.c
index 7394f78dec49..c85ec1a6ebf0 100644
--- a/arch/x86/kernel/cpu/sgx/epc_cgroup.c
+++ b/arch/x86/kernel/cpu/sgx/epc_cgroup.c
@@ -255,6 +255,18 @@ void sgx_cgroup_reclaim_direct(void)
        sgx_put_cg(sgx_cg);
 }
 
+/**
+ * sgx_cgroup_reclaim_pages_global() - Perform one round of global reclamation.
+ *
+ * @charge_mm: The mm to be charged for the backing store of reclaimed pages.
+ *
+ * Try to scan and attempt reclamation from root cgroup for %SGX_NR_TO_SCAN pages.
+ */
+void sgx_cgroup_reclaim_pages_global(struct mm_struct *charge_mm)
+{
+       sgx_cgroup_reclaim_pages(&sgx_cg_root, charge_mm, SGX_NR_TO_SCAN);
+}
+
 /*
  * Asynchronous work flow to reclaim pages from the cgroup when the cgroup is
  * at/near its maximum capacity.
diff --git a/arch/x86/kernel/cpu/sgx/epc_cgroup.h b/arch/x86/kernel/cpu/sgx/epc_cgroup.h
index a530c9611332..08cee70d975d 100644
--- a/arch/x86/kernel/cpu/sgx/epc_cgroup.h
+++ b/arch/x86/kernel/cpu/sgx/epc_cgroup.h
@@ -36,6 +36,7 @@ static inline int __init sgx_cgroup_wq_init(void)
 static inline void __init sgx_cgroup_wq_deinit(void) { }
 
 static inline void sgx_cgroup_reclaim_direct(void) { }
+static inline void sgx_cgroup_reclaim_pages_global(struct mm_struct *charge_mm) { }
 
 #else /* CONFIG_CGROUP_MISC */
 
@@ -89,6 +90,7 @@ static inline void sgx_put_cg(struct sgx_cgroup *sgx_cg)
 int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg, enum sgx_reclaim reclaim);
 void sgx_cgroup_uncharge(struct sgx_cgroup *sgx_cg);
 void sgx_cgroup_reclaim_direct(void);
+void sgx_cgroup_reclaim_pages_global(struct mm_struct *charge_mm);
 void __init sgx_cgroup_init(void);
 int __init sgx_cgroup_wq_init(void);
 void __init sgx_cgroup_wq_deinit(void);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 6f293115b75e..24dbcbc0f596 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -42,6 +42,13 @@ static inline struct sgx_epc_lru_list *sgx_epc_page_lru(struct sgx_epc_page *epc
  */
 static inline bool sgx_can_reclaim_global(void)
 {
+       /*
+        * For now, all EPC pages are still tracked in the @sgx_global_lru, so
+        * only check @sgx_global_lru.
+        *
+        * Once EPC pages are tracked in the actual per-cgroup LRUs, replace
+        * this with sgx_cgroup_lru_empty(misc_cg_root()).
+        */
        return !list_empty(&sgx_global_lru.reclaimable);
 }
 
@@ -404,6 +411,13 @@ static bool sgx_should_reclaim_global(unsigned long watermark)
 
 static void sgx_reclaim_pages_global(struct mm_struct *charge_mm)
 {
+       /*
+        * For now, all EPC pages are still tracked in the @sgx_global_lru, so
+        * reclaim from it directly.
+        *
+        * Once EPC pages are tracked in the actual per-cgroup LRUs,
+        * sgx_cgroup_reclaim_pages_global() will be called instead.
+        */
        sgx_reclaim_pages(&sgx_global_lru, charge_mm);
 }
 
-- 
2.43.0

