Merge branch 'enable-bpf-programs-to-declare-arrays-of-kptr-bpf_rb_root-and-bpf_list_head'

Kui-Feng Lee says:

====================
Enable BPF programs to declare arrays of kptr, bpf_rb_root, and bpf_list_head.

Some types, such as kptr, bpf_rb_root, and bpf_list_head, are treated
in a special way. Previously, these types could not be the type of a
field in a struct type used as the type of a global variable, nor the
type of a field in a struct type used within the value type of a map.
They could not be the type of array elements either. In other words,
they could only be the type of a global variable itself or of a direct
field in the value type of a map.
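
For instance, declarations along the following lines were rejected
before this series and are accepted with it (a sketch modeled on the
new cpumask selftests; the names are illustrative, not verbatim):

  struct bpf_cpumask __kptr *global_mask_array[2];  /* array of kptrs */

  struct kptr_nested {
          struct bpf_cpumask __kptr *mask;          /* kptr behind a struct field */
  };
  struct kptr_nested global_mask_nested[2];         /* array of such structs */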

The patch set enables the use of these special types in arrays and
struct fields, providing more flexibility. It recursively examines the
types of global variables and the value types of maps, including array
and struct types, to identify these special types and generate field
information for them.

For example, given

  ...
  struct task_struct __kptr *ptr[3];
  ...

the kernel will create 3 instances of "struct btf_field" in the
"btf_record" of the data section:

 [...,
  btf_field(offset=0x100, type=BPF_KPTR_REF),
  btf_field(offset=0x108, type=BPF_KPTR_REF),
  btf_field(offset=0x110, type=BPF_KPTR_REF),
  ...
 ]

It creates a record for each of the three elements. These three
records are identical except for their offsets.
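
Put differently, the record for element i is the record for element 0
with i * elem_size added to its offset, where elem_size is the size of
one element (8 bytes for a pointer here). A one-line sketch of the
rule applied by the new btf_repeat_fields() below:

  info[i].off = info[0].off + i * elem_size;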

Another example is

  ...
  struct A {
    ...
    struct task_struct __kptr *task;
    struct bpf_rb_root root;
    ...
  }

  struct A foo[2];

for which it will create 4 records:

 [...,
  btf_field(offset=0x7100, type=BPF_KPTR_REF),
  btf_field(offset=0x7108, type=BPF_RB_ROOT),
  btf_field(offset=0x7200, type=BPF_KPTR_REF),
  btf_field(offset=0x7208, type=BPF_RB_ROOT),
  ...
 ]

Assuming that the size of an element (struct A) is 0x100 and that
"foo" starts at 0x7000, the record includes two kptr entries at 0x7100
and 0x7200, and two rbtree root entries at 0x7108 and 0x7208.

All of this field information is flattened, for struct types, and
repeated, for arrays.
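
A compact sketch of that flatten-then-repeat order, mirroring the new
btf_find_nested_struct() in the diff below (here info[] holds the
records found inside one element, off is the element's offset within
its container, and elem_size stands in for the element size, t->size
in the actual code):

  for (i = 0; i < ret; i++)
          info[i].off += off;                  /* shift into the container */
  if (nelems > 1)
          btf_repeat_fields(info, ret, nelems - 1, elem_size);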
---
Changes from v6:

 - Return BPF_KPTR_REF from btf_get_field_type() only if var_type is
   not a struct type.

   - Pass btf and type to btf_get_field_type().

Changes from v5:

 - Ensure field->offset values of kptrs are advanced correctly from
   one nested struct or array to another.

Changes from v4:

 - Return -E2BIG for i == MAX_RESOLVE_DEPTH.

Changes from v3:

 - Refactor the common code of btf_find_struct_field() and
   btf_find_datasec_var().

 - Limit the number of levels when looking into nested struct types.

Changes from v2:

 - Support fields in nested struct type.

 - Remove nelems, and instead duplicate the field information with
   offset adjustments for arrays.

Changes from v1:

 - Move the check of element alignment out of btf_field_cmp() to
   btf_record_find().

 - Reorder the previous patch 4 "bpf: check_map_kptr_access() compute
   the offset from the reg state" so that it is now patch 7.

 - Reject BPF_RB_NODE and BPF_LIST_NODE with nelems > 1.

 - Rephrase the commit log of the patch "bpf: check_map_access() with
   the knowledge of arrays" to clarify the alignment on elements.

v6: https://lore.kernel.org/all/[email protected]/
v5: https://lore.kernel.org/all/[email protected]/
v4: https://lore.kernel.org/all/[email protected]/
v3: https://lore.kernel.org/all/[email protected]/
v2: https://lore.kernel.org/all/[email protected]/
v1: https://lore.kernel.org/bpf/[email protected]/

Kui-Feng Lee (9):
  bpf: Remove unnecessary checks on the offset of btf_field.
  bpf: Remove unnecessary call to btf_field_type_size().
  bpf: refactor btf_find_struct_field() and btf_find_datasec_var().
  bpf: create repeated fields for arrays.
  bpf: look into the types of the fields of a struct type recursively.
  bpf: limit the number of levels of a nested struct type.
  selftests/bpf: Test kptr arrays and kptrs in nested struct fields.
  selftests/bpf: Test global bpf_rb_root arrays and fields in nested
    struct types.
  selftests/bpf: Test global bpf_list_head arrays.

 kernel/bpf/btf.c                              | 310 ++++++++++++------
 kernel/bpf/verifier.c                         |   4 +-
 .../selftests/bpf/prog_tests/cpumask.c        |   5 +
 .../selftests/bpf/prog_tests/linked_list.c    |  12 +
 .../testing/selftests/bpf/prog_tests/rbtree.c |  47 +++
 .../selftests/bpf/progs/cpumask_success.c     | 171 ++++++++++
 .../testing/selftests/bpf/progs/linked_list.c |  42 +++
 tools/testing/selftests/bpf/progs/rbtree.c    |  77 +++++
 8 files changed, 558 insertions(+), 110 deletions(-)
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
Alexei Starovoitov committed Jun 4, 2024
2 parents 49784c7 + 43d50ff commit 49df001
Showing 8 changed files with 558 additions and 110 deletions.
kernel/bpf/btf.c: 310 changes (202 additions, 108 deletions)
@@ -3442,10 +3442,12 @@ btf_find_graph_root(const struct btf *btf, const struct btf_type *pt,
 		goto end;					\
 	}

-static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
+static int btf_get_field_type(const struct btf *btf, const struct btf_type *var_type,
+			      u32 field_mask, u32 *seen_mask,
 			      int *align, int *sz)
 {
 	int type = 0;
+	const char *name = __btf_name_by_offset(btf, var_type->name_off);

 	if (field_mask & BPF_SPIN_LOCK) {
 		if (!strcmp(name, "bpf_spin_lock")) {
@@ -3481,7 +3483,7 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
 	field_mask_test_name(BPF_REFCOUNT, "bpf_refcount");

 	/* Only return BPF_KPTR when all other types with matchable names fail */
-	if (field_mask & BPF_KPTR) {
+	if (field_mask & BPF_KPTR && !__btf_type_is_struct(var_type)) {
 		type = BPF_KPTR_REF;
 		goto end;
 	}
@@ -3494,140 +3496,232 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,

 #undef field_mask_test_name

+/* Repeat a number of fields for a specified number of times.
+ *
+ * Copy the fields starting from the first field and repeat them for
+ * repeat_cnt times. The fields are repeated by adding the offset of each
+ * field with
+ *   (i + 1) * elem_size
+ * where i is the repeat index and elem_size is the size of an element.
+ */
+static int btf_repeat_fields(struct btf_field_info *info,
+			     u32 field_cnt, u32 repeat_cnt, u32 elem_size)
+{
+	u32 i, j;
+	u32 cur;
+
+	/* Ensure not repeating fields that should not be repeated. */
+	for (i = 0; i < field_cnt; i++) {
+		switch (info[i].type) {
+		case BPF_KPTR_UNREF:
+		case BPF_KPTR_REF:
+		case BPF_KPTR_PERCPU:
+		case BPF_LIST_HEAD:
+		case BPF_RB_ROOT:
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+
+	cur = field_cnt;
+	for (i = 0; i < repeat_cnt; i++) {
+		memcpy(&info[cur], &info[0], field_cnt * sizeof(info[0]));
+		for (j = 0; j < field_cnt; j++)
+			info[cur++].off += (i + 1) * elem_size;
+	}
+
+	return 0;
+}
+
 static int btf_find_struct_field(const struct btf *btf,
 				 const struct btf_type *t, u32 field_mask,
-				 struct btf_field_info *info, int info_cnt)
+				 struct btf_field_info *info, int info_cnt,
+				 u32 level);
+
+/* Find special fields in the struct type of a field.
+ *
+ * This function is used to find fields of special types that is not a
+ * global variable or a direct field of a struct type. It also handles the
+ * repetition if it is the element type of an array.
+ */
+static int btf_find_nested_struct(const struct btf *btf, const struct btf_type *t,
+				  u32 off, u32 nelems,
+				  u32 field_mask, struct btf_field_info *info,
+				  int info_cnt, u32 level)
 {
-	int ret, idx = 0, align, sz, field_type;
-	const struct btf_member *member;
+	int ret, err, i;

+	level++;
+	if (level >= MAX_RESOLVE_DEPTH)
+		return -E2BIG;
+
+	ret = btf_find_struct_field(btf, t, field_mask, info, info_cnt, level);
+
+	if (ret <= 0)
+		return ret;
+
+	/* Shift the offsets of the nested struct fields to the offsets
+	 * related to the container.
+	 */
+	for (i = 0; i < ret; i++)
+		info[i].off += off;
+
+	if (nelems > 1) {
+		err = btf_repeat_fields(info, ret, nelems - 1, t->size);
+		if (err == 0)
+			ret *= nelems;
+		else
+			ret = err;
+	}
+
+	return ret;
+}
+
+static int btf_find_field_one(const struct btf *btf,
+			      const struct btf_type *var,
+			      const struct btf_type *var_type,
+			      int var_idx,
+			      u32 off, u32 expected_size,
+			      u32 field_mask, u32 *seen_mask,
+			      struct btf_field_info *info, int info_cnt,
+			      u32 level)
+{
+	int ret, align, sz, field_type;
 	struct btf_field_info tmp;
+	const struct btf_array *array;
+	u32 i, nelems = 1;
+
+	/* Walk into array types to find the element type and the number of
+	 * elements in the (flattened) array.
+	 */
+	for (i = 0; i < MAX_RESOLVE_DEPTH && btf_type_is_array(var_type); i++) {
+		array = btf_array(var_type);
+		nelems *= array->nelems;
+		var_type = btf_type_by_id(btf, array->type);
+	}
+	if (i == MAX_RESOLVE_DEPTH)
+		return -E2BIG;
+	if (nelems == 0)
+		return 0;
+
+	field_type = btf_get_field_type(btf, var_type,
+					field_mask, seen_mask, &align, &sz);
+	/* Look into variables of struct types */
+	if (!field_type && __btf_type_is_struct(var_type)) {
+		sz = var_type->size;
+		if (expected_size && expected_size != sz * nelems)
+			return 0;
+		ret = btf_find_nested_struct(btf, var_type, off, nelems, field_mask,
+					     &info[0], info_cnt, level);
+		return ret;
+	}
+
+	if (field_type == 0)
+		return 0;
+	if (field_type < 0)
+		return field_type;
+
+	if (expected_size && expected_size != sz * nelems)
+		return 0;
+	if (off % align)
+		return 0;
+
+	switch (field_type) {
+	case BPF_SPIN_LOCK:
+	case BPF_TIMER:
+	case BPF_WORKQUEUE:
+	case BPF_LIST_NODE:
+	case BPF_RB_NODE:
+	case BPF_REFCOUNT:
+		ret = btf_find_struct(btf, var_type, off, sz, field_type,
+				      info_cnt ? &info[0] : &tmp);
+		if (ret < 0)
+			return ret;
+		break;
+	case BPF_KPTR_UNREF:
+	case BPF_KPTR_REF:
+	case BPF_KPTR_PERCPU:
+		ret = btf_find_kptr(btf, var_type, off, sz,
+				    info_cnt ? &info[0] : &tmp);
+		if (ret < 0)
+			return ret;
+		break;
+	case BPF_LIST_HEAD:
+	case BPF_RB_ROOT:
+		ret = btf_find_graph_root(btf, var, var_type,
+					  var_idx, off, sz,
+					  info_cnt ? &info[0] : &tmp,
+					  field_type);
+		if (ret < 0)
+			return ret;
+		break;
+	default:
+		return -EFAULT;
+	}
+
+	if (ret == BTF_FIELD_IGNORE)
+		return 0;
+	if (nelems > info_cnt)
+		return -E2BIG;
+	if (nelems > 1) {
+		ret = btf_repeat_fields(info, 1, nelems - 1, sz);
+		if (ret < 0)
+			return ret;
+	}
+	return nelems;
+}
+
+static int btf_find_struct_field(const struct btf *btf,
+				 const struct btf_type *t, u32 field_mask,
+				 struct btf_field_info *info, int info_cnt,
+				 u32 level)
+{
+	int ret, idx = 0;
+	const struct btf_member *member;
 	u32 i, off, seen_mask = 0;

 	for_each_member(i, t, member) {
 		const struct btf_type *member_type = btf_type_by_id(btf,
 								    member->type);

-		field_type = btf_get_field_type(__btf_name_by_offset(btf, member_type->name_off),
-						field_mask, &seen_mask, &align, &sz);
-		if (field_type == 0)
-			continue;
-		if (field_type < 0)
-			return field_type;
-
 		off = __btf_member_bit_offset(t, member);
 		if (off % 8)
 			/* valid C code cannot generate such BTF */
 			return -EINVAL;
 		off /= 8;
-		if (off % align)
-			continue;

-		switch (field_type) {
-		case BPF_SPIN_LOCK:
-		case BPF_TIMER:
-		case BPF_WORKQUEUE:
-		case BPF_LIST_NODE:
-		case BPF_RB_NODE:
-		case BPF_REFCOUNT:
-			ret = btf_find_struct(btf, member_type, off, sz, field_type,
-					      idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_KPTR_UNREF:
-		case BPF_KPTR_REF:
-		case BPF_KPTR_PERCPU:
-			ret = btf_find_kptr(btf, member_type, off, sz,
-					    idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_LIST_HEAD:
-		case BPF_RB_ROOT:
-			ret = btf_find_graph_root(btf, t, member_type,
-						  i, off, sz,
-						  idx < info_cnt ? &info[idx] : &tmp,
-						  field_type);
-			if (ret < 0)
-				return ret;
-			break;
-		default:
-			return -EFAULT;
-		}
-
-		if (ret == BTF_FIELD_IGNORE)
-			continue;
-		if (idx >= info_cnt)
-			return -E2BIG;
-		++idx;
+		ret = btf_find_field_one(btf, t, member_type, i,
+					 off, 0,
+					 field_mask, &seen_mask,
+					 &info[idx], info_cnt - idx, level);
+		if (ret < 0)
+			return ret;
+		idx += ret;
 	}
 	return idx;
 }

 static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
 				u32 field_mask, struct btf_field_info *info,
-				int info_cnt)
+				int info_cnt, u32 level)
 {
-	int ret, idx = 0, align, sz, field_type;
+	int ret, idx = 0;
 	const struct btf_var_secinfo *vsi;
-	struct btf_field_info tmp;
 	u32 i, off, seen_mask = 0;

 	for_each_vsi(i, t, vsi) {
 		const struct btf_type *var = btf_type_by_id(btf, vsi->type);
 		const struct btf_type *var_type = btf_type_by_id(btf, var->type);

-		field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
-						field_mask, &seen_mask, &align, &sz);
-		if (field_type == 0)
-			continue;
-		if (field_type < 0)
-			return field_type;
-
 		off = vsi->offset;
-		if (vsi->size != sz)
-			continue;
-		if (off % align)
-			continue;
-
-		switch (field_type) {
-		case BPF_SPIN_LOCK:
-		case BPF_TIMER:
-		case BPF_WORKQUEUE:
-		case BPF_LIST_NODE:
-		case BPF_RB_NODE:
-		case BPF_REFCOUNT:
-			ret = btf_find_struct(btf, var_type, off, sz, field_type,
-					      idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_KPTR_UNREF:
-		case BPF_KPTR_REF:
-		case BPF_KPTR_PERCPU:
-			ret = btf_find_kptr(btf, var_type, off, sz,
-					    idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_LIST_HEAD:
-		case BPF_RB_ROOT:
-			ret = btf_find_graph_root(btf, var, var_type,
-						  -1, off, sz,
-						  idx < info_cnt ? &info[idx] : &tmp,
-						  field_type);
-			if (ret < 0)
-				return ret;
-			break;
-		default:
-			return -EFAULT;
-		}
-
-		if (ret == BTF_FIELD_IGNORE)
-			continue;
-		if (idx >= info_cnt)
-			return -E2BIG;
-		++idx;
+		ret = btf_find_field_one(btf, var, var_type, -1, off, vsi->size,
+					 field_mask, &seen_mask,
+					 &info[idx], info_cnt - idx,
+					 level);
+		if (ret < 0)
+			return ret;
+		idx += ret;
 	}
 	return idx;
 }
@@ -3637,9 +3731,9 @@ static int btf_find_field(const struct btf *btf, const struct btf_type *t,
 			  int info_cnt)
 {
 	if (__btf_type_is_struct(t))
-		return btf_find_struct_field(btf, t, field_mask, info, info_cnt);
+		return btf_find_struct_field(btf, t, field_mask, info, info_cnt, 0);
 	else if (btf_type_is_datasec(t))
-		return btf_find_datasec_var(btf, t, field_mask, info, info_cnt);
+		return btf_find_datasec_var(btf, t, field_mask, info, info_cnt, 0);
 	return -EINVAL;
 }

@@ -6693,7 +6787,7 @@ int btf_struct_access(struct bpf_verifier_log *log,
 	for (i = 0; i < rec->cnt; i++) {
 		struct btf_field *field = &rec->fields[i];
 		u32 offset = field->offset;
-		if (off < offset + btf_field_type_size(field->type) && offset < off + size) {
+		if (off < offset + field->size && offset < off + size) {
 			bpf_log(log,
 				"direct access to %s is disallowed\n",
 				btf_field_type_name(field->type));
kernel/bpf/verifier.c: 4 changes (2 additions, 2 deletions)
@@ -5448,7 +5448,7 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
 	 * this program. To check that [x1, x2) overlaps with [y1, y2),
 	 * it is sufficient to check x1 < y2 && y1 < x2.
 	 */
-	if (reg->smin_value + off < p + btf_field_type_size(field->type) &&
+	if (reg->smin_value + off < p + field->size &&
 	    p < reg->umax_value + off + size) {
 		switch (field->type) {
 		case BPF_KPTR_UNREF:
@@ -11640,7 +11640,7 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,

 	node_off = reg->off + reg->var_off.value;
 	field = reg_find_field_offset(reg, node_off, node_field_type);
-	if (!field || field->offset != node_off) {
+	if (!field) {
 		verbose(env, "%s not found at offset=%u\n", node_type_name, node_off);
 		return -EINVAL;
 	}
[Diffs for the six changed selftest files are not rendered on this page.]
