Daniel Borkmann says:

====================
pull-request: bpf-next 2022-02-09

We've added 126 non-merge commits during the last 16 day(s) which contain
a total of 201 files changed, 4049 insertions(+), 2215 deletions(-).

The main changes are:

1) Add custom BPF allocator for JITs that pack multiple programs into a huge
   page to reduce iTLB pressure, from Song Liu.

2) Add __user tagging support in vmlinux BTF and utilize it from BPF
   verifier when generating loads, from Yonghong Song.

3) Add a per-socket fast path check to guard against cgroup/BPF overhead when
   BPF is used by only some sockets, from Pavel Begunkov.

4) Continued libbpf deprecation work of APIs/features and removal of their
   usage from samples, selftests, libbpf & bpftool, from Andrii Nakryiko
   and various others.

5) Improve BPF instruction set documentation by adding byte swap
   instructions and cleaning up load/store section, from Christoph Hellwig.

6) Switch BPF preload infra to light skeleton and remove libbpf dependency
   from it, from Alexei Starovoitov.

7) Fix architecture-agnostic macros in libbpf for accessing syscall
   arguments from BPF progs for non-x86 architectures,
   from Ilya Leoshkevich.

8) Rework port members in struct bpf_sk_lookup and struct bpf_sock to be
   16-bit fields with anonymous zero padding, from Jakub Sitnicki.

9) Add new bpf_copy_from_user_task() helper to read memory from a different
   task than current. Add ability to create sleepable BPF iterator progs,
   from Kenny Yu.

10) Implement XSK batching for ice's zero-copy driver used by AF_XDP and
    utilize TX batching API from XSK buffer pool, from Maciej Fijalkowski.

11) Generate temporary netns names for BPF selftests to avoid naming
    collisions, from Hangbin Liu.

12) Implement bpf_core_types_are_compat() with limited recursion for
    in-kernel usage, from Matteo Croce.

13) Simplify pahole version detection and finally enable CONFIG_DEBUG_INFO_DWARF5
    to be selected with CONFIG_DEBUG_INFO_BTF, from Nathan Chancellor.

14) Misc minor fixes to libbpf and selftests from various folks.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (126 commits)
  selftests/bpf: Cover 4-byte load from remote_port in bpf_sk_lookup
  bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide
  libbpf: Fix compilation warning due to mismatched printf format
  selftests/bpf: Test BPF_KPROBE_SYSCALL macro
  libbpf: Add BPF_KPROBE_SYSCALL macro
  libbpf: Fix accessing the first syscall argument on s390
  libbpf: Fix accessing the first syscall argument on arm64
  libbpf: Allow overriding PT_REGS_PARM1{_CORE}_SYSCALL
  selftests/bpf: Skip test_bpf_syscall_macro's syscall_arg1 on arm64 and s390
  libbpf: Fix accessing syscall arguments on riscv
  libbpf: Fix riscv register names
  libbpf: Fix accessing syscall arguments on powerpc
  selftests/bpf: Use PT_REGS_SYSCALL_REGS in bpf_syscall_macro
  libbpf: Add PT_REGS_SYSCALL_REGS macro
  selftests/bpf: Fix an endianness issue in bpf_syscall_macro test
  bpf: Fix bpf_prog_pack build HPAGE_PMD_SIZE
  bpf: Fix leftover header->pages in sparc and powerpc code.
  libbpf: Fix signedness bug in btf_dump_array_data()
  selftests/bpf: Do not export subtest as standalone test
  bpf, x86_64: Fail gracefully on bpf_jit_binary_pack_finalize failures
  ...
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
kuba-moo committed Feb 10, 2022
2 parents 5cad527 + e531396 commit 1127170
Showing 201 changed files with 4,049 additions and 2,215 deletions.
13 changes: 13 additions & 0 deletions Documentation/bpf/btf.rst
@@ -503,6 +503,19 @@ valid index (starting from 0) pointing to a member or an argument.
* ``info.vlen``: 0
* ``type``: the type with ``btf_type_tag`` attribute

Currently, ``BTF_KIND_TYPE_TAG`` is only emitted for pointer types.
It has the following btf type chain:
::

ptr -> [type_tag]*
-> [const | volatile | restrict | typedef]*
-> base_type

Basically, a pointer type points to zero or more
type_tag entries, then zero or more const/volatile/restrict/typedef modifiers,
and finally the base type. The base type is one of the
int, ptr, array, struct, union, enum, func_proto and float types.
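
As an illustration only (not normative for the BTF format), a kernel-style
``__user`` annotation built on the clang ``btf_type_tag`` attribute produces
exactly such a chain. The function below is hypothetical and a clang version
with ``btf_type_tag`` support is assumed::

  /* Hypothetical example: for the parameter "src", the generated BTF
   * encodes the chain ptr -> type_tag("user") -> const -> void. */
  #define __user __attribute__((btf_type_tag("user")))

  long copy_from_app(void *dst, const void __user *src, unsigned long len);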

3. BTF Kernel API
=================

215 changes: 151 additions & 64 deletions Documentation/bpf/instruction-set.rst
Expand Up @@ -22,7 +22,13 @@ necessary across calls.
Instruction encoding
====================

eBPF uses 64-bit instructions with the following encoding:
eBPF has two instruction encodings:

* the basic instruction encoding, which uses 64 bits to encode an instruction
* the wide instruction encoding, which appends a second 64-bit immediate value
(imm64) after the basic instruction for a total of 128 bits.
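
In C terms, each basic instruction corresponds to one ``struct bpf_insn`` from
the kernel uapi headers, reproduced below for orientation only
(include/uapi/linux/bpf.h is authoritative)::

  struct bpf_insn {
          __u8    code;           /* opcode, 8 bits (LSB of the encoding) */
          __u8    dst_reg:4;      /* destination register number */
          __u8    src_reg:4;      /* source register number */
          __s16   off;            /* signed 16-bit offset */
          __s32   imm;            /* signed 32-bit immediate (32 MSB of the encoding) */
  };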

The basic instruction encoding looks as follows:

============= ======= =============== ==================== ============
32 bits (MSB) 16 bits 4 bits 4 bits 8 bits (LSB)
@@ -82,9 +88,9 @@ BPF_ALU uses 32-bit wide operands while BPF_ALU64 uses 64-bit wide operands for
otherwise identical operations.
The code field encodes the operation as below:

======== ===== ==========================
======== ===== =================================================
code value description
======== ===== ==========================
======== ===== =================================================
BPF_ADD 0x00 dst += src
BPF_SUB 0x10 dst -= src
BPF_MUL 0x20 dst \*= src
@@ -98,8 +104,8 @@ The code field encodes the operation as below:
BPF_XOR 0xa0 dst ^= src
BPF_MOV 0xb0 dst = src
BPF_ARSH 0xc0 sign extending shift right
BPF_END 0xd0 endianness conversion
======== ===== ==========================
BPF_END 0xd0 byte swap operations (see separate section below)
======== ===== =================================================

BPF_ADD | BPF_X | BPF_ALU means::

@@ -118,6 +124,42 @@ BPF_XOR | BPF_K | BPF_ALU64 means::
dst_reg = dst_reg ^ imm32
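
As a sketch of how such an instruction can be assembled in C using only the
uapi macros and ``struct bpf_insn`` (illustrative, not a prescribed API),
``BPF_ADD | BPF_X | BPF_ALU64``, i.e. ``dst_reg += src_reg``, could be
written as::

  #include <linux/bpf.h>

  /* dst_reg (R1) += src_reg (R2), 64-bit ALU operation */
  struct bpf_insn add64 = {
          .code    = BPF_ALU64 | BPF_ADD | BPF_X,
          .dst_reg = BPF_REG_1,
          .src_reg = BPF_REG_2,
          .off     = 0,
          .imm     = 0,
  };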


Byte swap instructions
----------------------

The byte swap instructions use an instruction class of ``BPF_ALU`` and a 4-bit
code field of ``BPF_END``.

The byte swap instructions operate on the destination register
only and do not use a separate source register or immediate value.

The 1-bit source operand field in the opcode is used to select what byte
order the operation converts from or to:

========= ===== =================================================
source value description
========= ===== =================================================
BPF_TO_LE 0x00 convert between host byte order and little endian
BPF_TO_BE 0x08 convert between host byte order and big endian
========= ===== =================================================

The imm field encodes the width of the swap operations. The following widths
are supported: 16, 32 and 64.

Examples:

``BPF_ALU | BPF_TO_LE | BPF_END`` with imm = 16 means::

dst_reg = htole16(dst_reg)

``BPF_ALU | BPF_TO_BE | BPF_END`` with imm = 64 means::

dst_reg = htobe64(dst_reg)

``BPF_FROM_LE`` and ``BPF_FROM_BE`` exist as aliases for ``BPF_TO_LE`` and
``BPF_TO_BE`` respectively.
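
As a comparable sketch using only the uapi definitions (illustrative, not
normative), the ``dst_reg = htobe64(dst_reg)`` example above corresponds to::

  #include <linux/bpf.h>

  /* byte swap R3 to big endian, operating on 64 bits */
  struct bpf_insn bswap_be64 = {
          .code    = BPF_ALU | BPF_END | BPF_TO_BE,
          .dst_reg = BPF_REG_3,
          .src_reg = 0,
          .off     = 0,
          .imm     = 64,          /* swap width in bits */
  };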


Jump instructions
-----------------

@@ -176,104 +218,149 @@ The mode modifier is one of:
============= ===== ====================================
mode modifier value description
============= ===== ====================================
BPF_IMM 0x00 used for 64-bit mov
BPF_ABS 0x20 legacy BPF packet access
BPF_IND 0x40 legacy BPF packet access
BPF_MEM 0x60 all normal load and store operations
BPF_IMM 0x00 64-bit immediate instructions
BPF_ABS 0x20 legacy BPF packet access (absolute)
BPF_IND 0x40 legacy BPF packet access (indirect)
BPF_MEM 0x60 regular load and store operations
BPF_ATOMIC 0xc0 atomic operations
============= ===== ====================================

BPF_MEM | <size> | BPF_STX means::

Regular load and store operations
---------------------------------

The ``BPF_MEM`` mode modifier is used to encode regular load and store
instructions that transfer data between a register and memory.

``BPF_MEM | <size> | BPF_STX`` means::

*(size *) (dst_reg + off) = src_reg

BPF_MEM | <size> | BPF_ST means::
``BPF_MEM | <size> | BPF_ST`` means::

*(size *) (dst_reg + off) = imm32

BPF_MEM | <size> | BPF_LDX means::
``BPF_MEM | <size> | BPF_LDX`` means::

dst_reg = *(size *) (src_reg + off)

Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW.
Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW``.
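
For example, a sketch of a 32-bit load ``dst_reg = *(u32 *)(src_reg + 8)``
built from the uapi definitions (illustrative only)::

  #include <linux/bpf.h>

  struct bpf_insn ldx_w = {
          .code    = BPF_LDX | BPF_MEM | BPF_W,
          .dst_reg = BPF_REG_0,   /* destination register */
          .src_reg = BPF_REG_1,   /* base address register */
          .off     = 8,           /* byte offset added to src_reg */
          .imm     = 0,
  };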

Atomic operations
-----------------

eBPF includes atomic operations, which use the immediate field for extra
encoding::
Atomic operations are operations that operate on memory and cannot be
interrupted or corrupted by other accesses to the same memory region from
other eBPF programs or from means outside of this specification.

All atomic operations supported by eBPF are encoded as store operations
that use the ``BPF_ATOMIC`` mode modifier as follows:

* ``BPF_ATOMIC | BPF_W | BPF_STX`` for 32-bit operations
* ``BPF_ATOMIC | BPF_DW | BPF_STX`` for 64-bit operations
* 8-bit and 16-bit wide atomic operations are not supported.

.imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
.imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
The imm field is used to encode the actual atomic operation.
Simple atomic operations use a subset of the values defined for the
arithmetic operations to encode the atomic operation:

The basic atomic operations supported are::
======== ===== ===========
imm value description
======== ===== ===========
BPF_ADD 0x00 atomic add
BPF_OR 0x40 atomic or
BPF_AND 0x50 atomic and
BPF_XOR 0xa0 atomic xor
======== ===== ===========

BPF_ADD
BPF_AND
BPF_OR
BPF_XOR

Each having equivalent semantics with the ``BPF_ADD`` example, that is: the
memory location addressed by ``dst_reg + off`` is atomically modified, with
``src_reg`` as the other operand. If the ``BPF_FETCH`` flag is set in the
immediate, then these operations also overwrite ``src_reg`` with the
value that was in memory before it was modified.
``BPF_ATOMIC | BPF_W | BPF_STX`` with imm = BPF_ADD means::

The more special operations are::
*(u32 *)(dst_reg + off16) += src_reg

BPF_XCHG
``BPF_ATOMIC | BPF_DW | BPF_STX`` with imm = BPF_ADD means::

This atomically exchanges ``src_reg`` with the value addressed by ``dst_reg +
off``. ::
*(u64 *)(dst_reg + off16) += src_reg

BPF_CMPXCHG
``BPF_XADD`` is a deprecated name for ``BPF_ATOMIC | BPF_ADD``.

This atomically compares the value addressed by ``dst_reg + off`` with
``R0``. If they match it is replaced with ``src_reg``. In either case, the
value that was there before is zero-extended and loaded back to ``R0``.
In addition to the simple atomic operations, there is also a modifier and
two complex atomic operations:

Note that 1 and 2 byte atomic operations are not supported.
=========== ================ ===========================
imm value description
=========== ================ ===========================
BPF_FETCH 0x01 modifier: return old value
BPF_XCHG 0xe0 | BPF_FETCH atomic exchange
BPF_CMPXCHG 0xf0 | BPF_FETCH atomic compare and exchange
=========== ================ ===========================

The ``BPF_FETCH`` modifier is optional for simple atomic operations, and
always set for the complex atomic operations. If the ``BPF_FETCH`` flag
is set, then the operation also overwrites ``src_reg`` with the value that
was in memory before it was modified.

The ``BPF_XCHG`` operation atomically exchanges ``src_reg`` with the value
addressed by ``dst_reg + off``.

The ``BPF_CMPXCHG`` operation atomically compares the value addressed by
``dst_reg + off`` with ``R0``. If they match, the value addressed by
``dst_reg + off`` is replaced with ``src_reg``. In either case, the
value that was at ``dst_reg + off`` before the operation is zero-extended
and loaded back to ``R0``.
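
As an encoding sketch using only the uapi macros (illustrative, not
normative), a 64-bit atomic fetch-add, i.e.
``src_reg = atomic_fetch_add((u64 *)(dst_reg + off), src_reg)``, could be
written as::

  #include <linux/bpf.h>

  struct bpf_insn fetch_add64 = {
          .code    = BPF_STX | BPF_ATOMIC | BPF_DW,
          .dst_reg = BPF_REG_2,            /* memory base address */
          .src_reg = BPF_REG_3,            /* operand, receives the old value */
          .off     = 0,
          .imm     = BPF_ADD | BPF_FETCH,  /* use BPF_XCHG or BPF_CMPXCHG here
                                            * for the complex operations */
  };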

Clang can generate atomic instructions by default when ``-mcpu=v3`` is
enabled. If a lower version for ``-mcpu`` is set, the only atomic instruction
Clang can generate is ``BPF_ADD`` *without* ``BPF_FETCH``. If you need to enable
the atomics features, while keeping a lower ``-mcpu`` version, you can use
``-Xclang -target-feature -Xclang +alu32``.

You may encounter ``BPF_XADD`` - this is a legacy name for ``BPF_ATOMIC``,
referring to the exclusive-add operation encoded when the immediate field is
zero.
64-bit immediate instructions
-----------------------------

16-byte instructions
--------------------
Instructions with the ``BPF_IMM`` mode modifier use the wide instruction
encoding for an extra imm64 value.

eBPF has one 16-byte instruction: ``BPF_LD | BPF_DW | BPF_IMM`` which consists
of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single
instruction that loads 64-bit immediate value into a dst_reg.
There is currently only one such instruction.

Packet access instructions
--------------------------
``BPF_LD | BPF_DW | BPF_IMM`` means::

eBPF has two non-generic instructions: (BPF_ABS | <size> | BPF_LD) and
(BPF_IND | <size> | BPF_LD) which are used to access packet data.
dst_reg = imm64
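
A sketch of the wide encoding using only the uapi ``struct bpf_insn``
(illustrative): the 64-bit immediate is split across two consecutive
instruction slots, with the lower 32 bits in the first ``imm`` field and the
upper 32 bits in the ``imm`` field of a second, otherwise all-zero slot::

  #include <linux/bpf.h>

  const __u64 value = 0x1122334455667788ULL;

  struct bpf_insn ld_imm64[2] = {
          { .code = BPF_LD | BPF_DW | BPF_IMM, .dst_reg = BPF_REG_1,
            .imm = (__u32)value },           /* lower 32 bits */
          { .imm = (__u32)(value >> 32) },   /* upper 32 bits  */
  };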

They had to be carried over from classic BPF to have strong performance of
socket filters running in eBPF interpreter. These instructions can only
be used when interpreter context is a pointer to ``struct sk_buff`` and
have seven implicit operands. Register R6 is an implicit input that must
contain pointer to sk_buff. Register R0 is an implicit output which contains
the data fetched from the packet. Registers R1-R5 are scratch registers
and must not be used to store the data across BPF_ABS | BPF_LD or
BPF_IND | BPF_LD instructions.

These instructions have implicit program exit condition as well. When
eBPF program is trying to access the data beyond the packet boundary,
the interpreter will abort the execution of the program. JIT compilers
therefore must preserve this property. src_reg and imm32 fields are
explicit inputs to these instructions.
Legacy BPF Packet access instructions
-------------------------------------

For example, BPF_IND | BPF_W | BPF_LD means::
eBPF has special instructions for access to packet data that have been
carried over from classic BPF to retain the performance of legacy socket
filters running in the eBPF interpreter.

R0 = ntohl(*(u32 *) (((struct sk_buff *) R6)->data + src_reg + imm32))
The instructions come in two forms: ``BPF_ABS | <size> | BPF_LD`` and
``BPF_IND | <size> | BPF_LD``.

and R1 - R5 are clobbered.
These instructions are used to access packet data and can only be used when
the program context is a pointer to a networking packet. ``BPF_ABS``
accesses packet data at an absolute offset specified by the immediate data
and ``BPF_IND`` accesses packet data at an offset that includes the value of
a register in addition to the immediate data.

These instructions have seven implicit operands:

* Register R6 is an implicit input that must contain a pointer to a
struct sk_buff.
* Register R0 is an implicit output which contains the data fetched from
the packet.
* Registers R1-R5 are scratch registers that are clobbered after a call to
``BPF_ABS | BPF_LD`` or ``BPF_IND | BPF_LD`` instructions.

These instructions have an implicit program exit condition as well. When an
eBPF program tries to access data beyond the packet boundary, the
program execution will be aborted.

``BPF_ABS | BPF_W | BPF_LD`` means::

R0 = ntohl(*(u32 *) (((struct sk_buff *) R6)->data + imm32))

``BPF_IND | BPF_W | BPF_LD`` means::

R0 = ntohl(*(u32 *) (((struct sk_buff *) R6)->data + src_reg + imm32))
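
As a final encoding sketch (uapi macros only, illustrative), the absolute form
that loads the word at packet offset 14 into R0, i.e.
``R0 = ntohl(*(u32 *)(((struct sk_buff *)R6)->data + 14))``, looks like::

  #include <linux/bpf.h>

  struct bpf_insn ld_abs_w = {
          .code = BPF_LD | BPF_ABS | BPF_W,
          .imm  = 14,     /* absolute byte offset into the packet */
  };
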
2 changes: 2 additions & 0 deletions MAINTAINERS
@@ -3523,6 +3523,8 @@ F: net/sched/act_bpf.c
F: net/sched/cls_bpf.c
F: samples/bpf/
F: scripts/bpf_doc.py
F: scripts/pahole-flags.sh
F: scripts/pahole-version.sh
F: tools/bpf/
F: tools/lib/bpf/
F: tools/testing/selftests/bpf/
5 changes: 5 additions & 0 deletions arch/arm64/net/bpf_jit_comp.c
Expand Up @@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
return prog;
}

bool bpf_jit_supports_kfunc_call(void)
{
return true;
}

u64 bpf_jit_alloc_exec_limit(void)
{
return VMALLOC_END - VMALLOC_START;
2 changes: 1 addition & 1 deletion arch/powerpc/net/bpf_jit_comp.c
Expand Up @@ -264,7 +264,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
fp->jited = 1;
fp->jited_len = proglen + FUNCTION_DESCR_SIZE;

bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + bpf_hdr->size);
if (!fp->is_func || extra_pass) {
bpf_jit_binary_lock_ro(bpf_hdr);
bpf_prog_fill_jited_linfo(fp, addrs);
2 changes: 1 addition & 1 deletion arch/sparc/net/bpf_jit_comp_64.c
Expand Up @@ -1599,7 +1599,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
if (bpf_jit_enable > 1)
bpf_jit_dump(prog->len, image_size, pass, ctx.image);

bpf_flush_icache(header, (u8 *)header + (header->pages * PAGE_SIZE));
bpf_flush_icache(header, (u8 *)header + header->size);

if (!prog->is_func || extra_pass) {
bpf_jit_binary_lock_ro(header);
1 change: 1 addition & 0 deletions arch/x86/Kconfig
Expand Up @@ -158,6 +158,7 @@ config X86
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
select HAVE_ARCH_HUGE_VMALLOC if HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if X86_64
1 change: 1 addition & 0 deletions arch/x86/include/asm/text-patching.h
Expand Up @@ -44,6 +44,7 @@ extern void text_poke_early(void *addr, const void *opcode, size_t len);
extern void *text_poke(void *addr, const void *opcode, size_t len);
extern void text_poke_sync(void);
extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
extern void *text_poke_copy(void *addr, const void *opcode, size_t len);
extern int poke_int3_handler(struct pt_regs *regs);
extern void text_poke_bp(void *addr, const void *opcode, size_t len, const void *emulate);
