| Description | In the Linux kernel, the following vulnerability has been resolved:  mm: zswap: fix crypto_free_acomp() deadlock in zswap_cpu_comp_dead()  Currently, zswap_cpu_comp_dead() calls crypto_free_acomp() while holding the per-CPU acomp_ctx mutex.  crypto_free_acomp() then holds scomp_lock (through crypto_exit_scomp_ops_async()).  On the other hand, crypto_alloc_acomp_node() holds scomp_lock (through crypto_scomp_init_tfm()) and then allocates memory.  If the allocation results in reclaim, we may attempt to hold the per-CPU acomp_ctx mutex.  The above dependencies can cause an ABBA deadlock, for example in the following scenario:  (1) Task A running on CPU #1:     crypto_alloc_acomp_node()       Holds scomp_lock       Enters reclaim       Reads per_cpu_ptr(pool->acomp_ctx, 1)  (2) Task A is descheduled  (3) CPU #1 goes offline     zswap_cpu_comp_dead(CPU #1)       Holds per_cpu_ptr(pool->acomp_ctx, 1)       Calls crypto_free_acomp()       Waits for scomp_lock  (4) Task A running on CPU #2:       Waits for per_cpu_ptr(pool->acomp_ctx, 1) // Read on CPU #1       DEADLOCK  Since there is no requirement to call crypto_free_acomp() with the per-CPU acomp_ctx mutex held in zswap_cpu_comp_dead(), move it after the mutex is unlocked.  Also move the acomp_request_free() and kfree() calls, both for consistency and to avoid any potential subtle locking dependencies in the future.  With this, only setting acomp_ctx fields to NULL occurs with the mutex held.  This mirrors zswap_cpu_comp_prepare(), which performs all allocations before taking the mutex and only initializes acomp_ctx fields while holding it.  Opportunistically, move the NULL check on acomp_ctx so that it takes place before the mutex dereference. |
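
For illustration, the minimal C sketch below shows the teardown ordering the fix describes: acomp_ctx resources are detached (fields set to NULL) while holding the per-CPU mutex, and the actual frees run only after the mutex is dropped, so crypto_free_acomp() can no longer wait on scomp_lock while the mutex is held. The struct layouts, the field names (req, acomp, buffer), and the hotplug callback signature are assumptions made to keep the example self-contained; this is not the verbatim mainline patch.

```c
#include <linux/err.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <crypto/acompress.h>

/* Assumed per-CPU compression context; field names are illustrative. */
struct acomp_ctx {
	struct crypto_acomp *acomp;
	struct acomp_req *req;
	u8 *buffer;
	struct mutex mutex;
};

/* Assumed pool layout; other fields elided. */
struct zswap_pool {
	struct acomp_ctx __percpu *acomp_ctx;
	struct hlist_node node;
};

static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
{
	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
	struct acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
	struct crypto_acomp *acomp;
	struct acomp_req *req;
	u8 *buffer;

	/* NULL check before any dereference of acomp_ctx (or its mutex). */
	if (IS_ERR_OR_NULL(acomp_ctx))
		return 0;

	/* Under the mutex, only detach the resources and NULL the fields. */
	mutex_lock(&acomp_ctx->mutex);
	req = acomp_ctx->req;
	acomp = acomp_ctx->acomp;
	buffer = acomp_ctx->buffer;
	acomp_ctx->req = NULL;
	acomp_ctx->acomp = NULL;
	acomp_ctx->buffer = NULL;
	mutex_unlock(&acomp_ctx->mutex);

	/*
	 * Free everything after unlocking: crypto_free_acomp() takes
	 * scomp_lock internally, so calling it here breaks the ABBA cycle
	 * with crypto_alloc_acomp_node() -> reclaim -> acomp_ctx mutex.
	 */
	if (!IS_ERR_OR_NULL(req))
		acomp_request_free(req);
	if (!IS_ERR_OR_NULL(acomp))
		crypto_free_acomp(acomp);
	kfree(buffer);

	return 0;
}
```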