Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, while queue_work() again makes use
of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
This patch continues the effort to refactor workqueue APIs, which began
with the changes introducing new workqueues and a new alloc_workqueue flag:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
Replace system_wq with system_percpu_wq, preserving the existing behavior.
The old wq (system_wq) will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://patch.msgid.link/20251105150826.248673-1-marco.crivellari@suse.com
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
* query.
*/
get_device(dev);
- queue_delayed_work(system_wq, &nvdimm->dwork, 0);
+ queue_delayed_work(system_percpu_wq, &nvdimm->dwork, 0);
}
return rc;
/* setup delayed work again */
tmo += 10;
- queue_delayed_work(system_wq, &nvdimm->dwork, tmo * HZ);
+ queue_delayed_work(system_percpu_wq, &nvdimm->dwork, tmo * HZ);
nvdimm->sec.overwrite_tmo = min(15U * 60U, tmo);
return;
}