author	Andrii Nakryiko <andriin@fb.com>	2020-05-13 22:51:37 -0700
committer	Alexei Starovoitov <ast@kernel.org>	2020-05-14 18:37:32 -0700
commit	c70f34a8ac66c2cb05593ef5760142e5f862a9b4 (patch)
tree	8c0045ceb528c231cc94e10534e305f9d196cc3c /kernel/bpf/task_iter.c
parent	0645f7eb6f6af78aba2bdd37ae776bd8754bc8f0 (diff)
bpf: Fix bpf_iter's task iterator logic
task_seq_get_next() might stop prematurely if get_pid_task() fails to get the
task_struct. Such a failure doesn't mean that there are no more tasks with
higher pids. Procfs's iteration algorithm (see next_tgid in fs/proc/base.c)
does a retry in such a case. After this fix, instead of stopping prematurely
after about 300 tasks on my server, the bpf_iter program now returns >4000,
which sounds much closer to reality.

Fixes: eaaacd23910f ("bpf: Add task and task/file iterator targets")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200514055137.1564581-1-andriin@fb.com
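For context on the retry the commit message borrows from procfs, below is a
minimal standalone sketch of that pattern, modeled loosely on next_tgid() in
fs/proc/base.c rather than copied from it. The wrapper name
procfs_style_next_task() is invented for illustration; find_ge_pid(),
pid_nr_ns(), pid_task() and get_task_struct() are existing kernel helpers.

#include <linux/pid.h>
#include <linux/pid_namespace.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

/* Illustrative sketch of the procfs-style retry: an allocated pid whose
 * task has already exited is skipped instead of terminating the walk.
 */
static struct task_struct *procfs_style_next_task(struct pid_namespace *ns,
						  int *nr)
{
	struct task_struct *task = NULL;
	struct pid *pid;

	rcu_read_lock();
retry:
	pid = find_ge_pid(*nr, ns);		/* next allocated pid >= *nr */
	if (pid) {
		*nr = pid_nr_ns(pid, ns);
		task = pid_task(pid, PIDTYPE_PID);
		if (!task) {
			/* pid number still allocated, task already gone */
			(*nr)++;
			goto retry;
		}
		get_task_struct(task);		/* pin it before dropping RCU */
	}
	rcu_read_unlock();

	return task;
}

The key point is the same as in the fix below: finding an allocated pid with
no live task behind it is not the end of the iteration.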
Diffstat (limited to 'kernel/bpf/task_iter.c')
-rw-r--r--	kernel/bpf/task_iter.c	8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index a9b7264dda08..4dbf2b6035f8 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -27,9 +27,15 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
 	struct pid *pid;
 
 	rcu_read_lock();
+retry:
 	pid = idr_get_next(&ns->idr, tid);
-	if (pid)
+	if (pid) {
 		task = get_pid_task(pid, PIDTYPE_PID);
+		if (!task) {
+			++*tid;
+			goto retry;
+		}
+	}
 	rcu_read_unlock();
 
 	return task;
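For readability, here is how task_seq_get_next() reads with the hunk above
applied. This is reconstructed from the diff context, not quoted from the
file: the NULL initialization of task, the u32 *tid parameter declaration,
and the file's surrounding includes are not visible in the hunk and are
assumptions.

static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
					     u32 *tid)	/* parameter declaration inferred */
{
	struct task_struct *task = NULL;	/* NULL init assumed: only set when a live task is found */
	struct pid *pid;

	rcu_read_lock();
retry:
	pid = idr_get_next(&ns->idr, tid);	/* next allocated pid number >= *tid */
	if (pid) {
		task = get_pid_task(pid, PIDTYPE_PID);
		if (!task) {
			/* The pid number is still allocated but the task is
			 * already gone; advance the cursor and keep scanning
			 * rather than ending the iteration early.
			 */
			++*tid;
			goto retry;
		}
	}
	rcu_read_unlock();

	return task;
}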