2020-08-26  bpf: Fix a verifier failure with xor  (Yonghong Song)
bpf selftest test_progs/test_sk_assign failed with llvm 11 and llvm 12. Compared to llvm 10, llvm 11 and 12 generates xor instruction which is not handled properly in verifier. The following illustrates the problem: 16: (b4) w5 = 0 17: ... R5_w=inv0 ... ... 132: (a4) w5 ^= 1 133: ... R5_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) ... ... 37: (bc) w8 = w5 38: ... R5=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) ... ... 41: (bc) w3 = w8 42: ... R3_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) ... 45: (56) if w3 != 0x0 goto pc+1 ... R3_w=inv0 ... 46: (b7) r1 = 34 47: R1_w=inv34 R7=pkt(id=0,off=26,r=38,imm=0) 47: (0f) r7 += r1 48: R1_w=invP34 R3_w=inv0 R7_w=pkt(id=0,off=60,r=38,imm=0) 48: (b4) w9 = 0 49: R1_w=invP34 R3_w=inv0 R7_w=pkt(id=0,off=60,r=38,imm=0) 49: (69) r1 = *(u16 *)(r7 +0) invalid access to packet, off=60 size=2, R7(id=0,off=60,r=38) R7 offset is outside of the packet At above insn 132, w5 = 0, but after w5 ^= 1, we give a really conservative value of w5. At insn 45, in reality the condition should be always false. But due to conservative value for w3, the verifier evaluates it could be true and this later leads to verifier failure complaining potential packet out-of-bound access. This patch implemented proper XOR support in verifier. In the above example, we have: 132: R5=invP0 132: (a4) w5 ^= 1 133: R5_w=invP1 ... 37: (bc) w8 = w5 ... 41: (bc) w3 = w8 42: R3_w=invP1 ... 45: (56) if w3 != 0x0 goto pc+1 47: R3_w=invP1 ... processed 353 insns ... and the verifier can verify the program successfully. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200825064608.2017937-1-yhs@fb.com
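To picture the fix: the verifier tracks each register's known bits, and XOR of two fully known values should stay fully known. Below is a minimal stand-alone sketch of that idea in the style of the kernel's tristate numbers (tnums); the types and logic are simplified assumptions, not the actual patch.

    #include <stdint.h>
    #include <stdio.h>

    /* mask bits mark "unknown"; value holds the known bits */
    struct tnum { uint64_t value; uint64_t mask; };

    static struct tnum tnum_xor(struct tnum a, struct tnum b)
    {
        uint64_t v  = a.value ^ b.value;   /* xor of the known parts */
        uint64_t mu = a.mask | b.mask;     /* any unknown input bit stays unknown */

        return (struct tnum){ .value = v & ~mu, .mask = mu };
    }

    int main(void)
    {
        struct tnum w5  = { .value = 0, .mask = 0 };  /* w5 = 0, fully known */
        struct tnum one = { .value = 1, .mask = 0 };
        struct tnum r   = tnum_xor(w5, one);

        /* prints "value=1 mask=0": w5 ^= 1 can now be tracked as the constant 1 */
        printf("value=%llu mask=%llu\n",
               (unsigned long long)r.value, (unsigned long long)r.mask);
        return 0;
    }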
2020-08-26  libbpf: Fix unintentional success return code in bpf_object__load  (Alex Gartrell)
There are code paths where EINVAL is returned directly without setting errno. In that case, errno could be 0, which would mask the failure. For example, if a careless programmer set log_level to 10000 out of laziness, they would have to spend a long time trying to figure out why. Fixes: 4f33ddb4e3e2 ("libbpf: Propagate EPERM to caller on program load") Signed-off-by: Alex Gartrell <alexgartrell@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200826075549.1858580-1-alexgartrell@gmail.com
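A hedged caller-side illustration of why a missing errno matters; the object file name and error handling are hypothetical, not taken from libbpf documentation.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        struct bpf_object *obj = bpf_object__open_file("prog.o", NULL); /* hypothetical object */

        if (libbpf_get_error(obj))
            return 1;
        if (bpf_object__load(obj)) {
            /* without the fix, errno could still be 0 here even though the
             * load failed, so strerror() would misleadingly say "Success" */
            fprintf(stderr, "load failed: %s\n", strerror(errno));
            return 1;
        }
        return 0;
    }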
2020-08-26  Merge branch 'resolve_prog_type'  (Alexei Starovoitov)
Udip Pant says: ==================== This patch series adds changes in verifier to make decisions such as granting of read / write access or enforcement of return code status based on the program type of the target program while using dynamic program extension (of type BPF_PROG_TYPE_EXT). The BPF_PROG_TYPE_EXT type can be used to extend types such as XDP, SKB and others. Since the BPF_PROG_TYPE_EXT program type on itself is just a placeholder for those, we need this extended check for those extended programs to actually work with proper access, while using this option. Patch #1 includes changes in the verifier. Patch #2 adds selftests to verify write access on a packet for a valid extension program type Patch #3 adds selftests to verify proper check for the return code Patch #4 adds selftests to ensure access permissions and restrictions for some map types such sockmap. Changelogs: v2 -> v3: * more comprehensive resolution of the program type in the verifier based on the target program (and not just for the packet access) * selftests for checking return code and map access * Also moved this patch to 'bpf-next' from 'bpf' tree v1 -> v2: * extraction of the logic to resolve prog type into a separate method * selftests to check for packet access for a valid freplace prog ==================== Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-08-26  selftests/bpf: Test for map update access from within EXT programs  (Udip Pant)
This adds further tests to ensure access permissions and restrictions are applied properly for some map types such as sock-map. It also adds another negative test to assert that static functions cannot be replaced. In the 'unreliable' mode it still fails with the error 'tracing progs cannot use bpf_spin_lock yet' even with the change in the verifier. Signed-off-by: Udip Pant <udippant@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200825232003.2877030-5-udippant@fb.com
2020-08-26  selftests/bpf: Test for checking return code for the extended prog  (Udip Pant)
This adds a test to enforce the same return code check for the extended prog as is enforced for the target program. It asserts failure for a return code that is permitted without the patch in this series but restricted once the patch is applied. Signed-off-by: Udip Pant <udippant@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200825232003.2877030-4-udippant@fb.com
2020-08-26  selftests/bpf: Add test for freplace program with write access  (Udip Pant)
This adds a selftest that tests the behavior when a freplace program attempts a write access on a packet. The expectation is that read or write access is granted based on the program type of the linked (target) program and not of the program itself (which is of type BPF_PROG_TYPE_EXT). This test fails without the associated verifier patch. Signed-off-by: Udip Pant <udippant@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200825232003.2877030-3-udippant@fb.com
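A hedged sketch of the shape of such a program; the freplace target name, the XDP context, and the specific packet write are illustrative assumptions, not the actual selftest.

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    /* Extends a hypothetical XDP function "xdp_handler" in the target program;
     * the write below is only valid because the target's type (XDP) permits it. */
    SEC("freplace/xdp_handler")
    int new_xdp_handler(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;

        if ((void *)(eth + 1) > data_end)
            return XDP_DROP;
        eth->h_proto = bpf_htons(ETH_P_IP);  /* packet write access */
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";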
2020-08-26  bpf: verifier: Use target program's type for access verifications  (Udip Pant)
This patch adds changes in the verifier to make decisions such as granting of read / write access or enforcement of return code status based on the program type of the target program while using dynamic program extension (of type BPF_PROG_TYPE_EXT). The BPF_PROG_TYPE_EXT type can be used to extend types such as XDP, SKB and others. Since the BPF_PROG_TYPE_EXT program type by itself is just a placeholder for those, we need this extended check for those extended programs to actually work with proper access, while using this option. Specifically, it introduces the following changes: - may_access_direct_pkt_data: allow access to packet data based on the target prog - check_return_code: enforce return code based on the target prog (currently, this check is skipped for EXT program) - check_ld_abs: check for 'may_access_skb' based on the target prog - check_map_prog_compatibility: enforce the map compatibility check based on the target prog - may_update_sockmap: allow sockmap update based on the target prog Some other occurrences of prog->type are left as-is without replacing them with the 'resolved' type: - do_check_common() and check_attach_btf_id(): already have specific logic to handle the EXT prog type - jit_subprogs() and bpf_check(): not changed for jit compilation or while inferring env->ops The next few patches in this series include selftests for some of these cases. Signed-off-by: Udip Pant <udippant@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200825232003.2877030-2-udippant@fb.com
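A self-contained sketch of the 'resolve the effective program type' idea behind these checks; the enum values and field names below are simplified stand-ins, not the kernel's definitions.

    #include <stdio.h>

    enum prog_type { PROG_TYPE_XDP, PROG_TYPE_SCHED_CLS, PROG_TYPE_EXT };

    struct prog {
        enum prog_type type;
        const struct prog *target;   /* program being extended when type == PROG_TYPE_EXT */
    };

    /* EXT is only a placeholder: permission checks should use the target's type */
    static enum prog_type resolve_prog_type(const struct prog *prog)
    {
        return (prog->type == PROG_TYPE_EXT && prog->target) ?
                prog->target->type : prog->type;
    }

    int main(void)
    {
        struct prog xdp = { .type = PROG_TYPE_XDP };
        struct prog ext = { .type = PROG_TYPE_EXT, .target = &xdp };

        printf("%d\n", resolve_prog_type(&ext));  /* 0, i.e. PROG_TYPE_XDP */
        return 0;
    }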
2020-08-26  selftests/bpf: Fix spelling mistake "scoket" -> "socket"  (Colin Ian King)
There is a spelling mistake in a check error message. Fix it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200826085907.43095-1-colin.king@canonical.com
2020-08-26  selftests/bpf: Fix open call in trigger_fstat_events  (Jiri Olsa)
Alexei reported compile breakage on newer systems with following error: In file included from /usr/include/fcntl.h:290:0, 4814 from ./test_progs.h:29, 4815 from .../bpf-next/tools/testing/selftests/bpf/prog_tests/d_path.c:3: 4816In function ‘open’, 4817 inlined from ‘trigger_fstat_events’ at .../bpf-next/tools/testing/selftests/bpf/prog_tests/d_path.c:50:10, 4818 inlined from ‘test_d_path’ at .../bpf-next/tools/testing/selftests/bpf/prog_tests/d_path.c:119:6: 4819/usr/include/x86_64-linux-gnu/bits/fcntl2.h:50:4: error: call to ‘__open_missing_mode’ declared with attribute error: open with O_CREAT or O_TMPFILE in second argument needs 3 arguments 4820 __open_missing_mode (); 4821 ^~~~~~~~~~~~~~~~~~~~~~ We're missing permission bits as 3rd argument for open call with O_CREAT flag specified. Fixes: e4d1af4b16f8 ("selftests/bpf: Add test for d_path helper") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200826101845.747617-1-jolsa@kernel.org
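The fix is along these lines (the mode value is illustrative; the test's exact flags and mode may differ):

    /* open(2) with O_CREAT requires the third (mode) argument; 'path' comes from the test */
    int fd = open(path, O_CREAT | O_RDWR, 0644);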
2020-08-25  selftests/bpf: Add set test to resolve_btfids  (Jiri Olsa)
Adding a test for sets to resolve_btfids. We're checking that the testing set gets properly resolved and sorted. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-15-jolsa@kernel.org
2020-08-25  selftests/bpf: Add test for d_path helper  (Jiri Olsa)
Adding test for d_path helper which is pretty much copied from Wenbo Zhang's test for bpf_get_fd_path, which never made it in. The test is doing fstat/close on several fd types, and verifies we got the d_path helper working on kernel probes for vfs_getattr/filp_close functions. Original-patch-by: Wenbo Zhang <ethercflow@gmail.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-14-jolsa@kernel.org
2020-08-25  selftests/bpf: Add verifier test for d_path helper  (Jiri Olsa)
Adding verifier test for attaching tracing program and calling d_path helper from within and testing that it's allowed for dentry_open function and denied for 'd_path' function with appropriate error. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-13-jolsa@kernel.org
2020-08-25  bpf: Update .BTF_ids section in btf.rst with sets info  (Jiri Olsa)
Updating btf.rst doc with info about BTF_SET_START/END macros. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-12-jolsa@kernel.org
2020-08-25  bpf: Add d_path helper  (Jiri Olsa)
Adding a d_path helper function that returns the full path for a given 'struct path' object, which needs to be the kernel BTF 'path' object. The path is returned in the provided buffer 'buf' of size 'sz' and is zero terminated. bpf_d_path(&file->f_path, buf, size); The helper calls the d_path function directly, so there is only a limited set of functions it can be called from. Adding just a very modest set for the start. Also updating the bpf.h tools uapi header and adding 'path' to the bpf_helpers_doc.py script. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: KP Singh <kpsingh@google.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-11-jolsa@kernel.org
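A hedged sketch of calling the helper from one of the allow-listed functions; it assumes a BTF-generated vmlinux.h, and the buffer and message are illustrative rather than the selftest's code.

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char path_buf[256];   /* global scratch buffer (not per-cpu; fine for a sketch) */

    SEC("fentry/filp_close")
    int BPF_PROG(close_entry, struct file *file, void *id)
    {
        long len = bpf_d_path(&file->f_path, path_buf, sizeof(path_buf));

        if (len > 0)
            bpf_printk("closing %s", path_buf);
        return 0;
    }

    char _license[] SEC("license") = "GPL";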
2020-08-25  bpf: Add BTF_SET_START/END macros  (Jiri Olsa)
Adding support to define sorted set of BTF ID values. Following defines sorted set of BTF ID values: BTF_SET_START(btf_allowlist_d_path) BTF_ID(func, vfs_truncate) BTF_ID(func, vfs_fallocate) BTF_ID(func, dentry_open) BTF_ID(func, vfs_getattr) BTF_ID(func, filp_close) BTF_SET_END(btf_allowlist_d_path) It defines following 'struct btf_id_set' variable to access values and count: struct btf_id_set btf_allowlist_d_path; Adding 'allowed' callback to struct bpf_func_proto, to allow verifier the check on allowed callers. Adding btf_id_set_contains function, which will be used by allowed callbacks to verify the caller's BTF ID value is within allowed set. Also removing extra '\' in __BTF_ID_LIST macro. Added BTF_SET_START_GLOBAL macro for global sets. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-10-jolsa@kernel.org
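A hedged kernel-side sketch, not a quote of the patch, of how a helper's new 'allowed' callback could consult such a set to gate callers by their attach BTF ID:

    BTF_SET_START(btf_allowlist_d_path)
    BTF_ID(func, vfs_truncate)
    BTF_ID(func, filp_close)
    BTF_SET_END(btf_allowlist_d_path)

    static bool bpf_d_path_allowed(const struct bpf_prog *prog)
    {
        /* only programs attached to functions in the set may call the helper */
        return btf_id_set_contains(&btf_allowlist_d_path,
                                   prog->aux->attach_btf_id);
    }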
2020-08-25  bpf: Add btf_struct_ids_match function  (Jiri Olsa)
Adding a btf_struct_ids_match function to check if a given address provided by BTF object + offset is also the address of another nested BTF object. This allows passing an argument to a helper which is defined via parent BTF object + offset, like for bpf_d_path (added in following changes): SEC("fentry/filp_close") int BPF_PROG(prog_close, struct file *file, void *id) { ... ret = bpf_d_path(&file->f_path, ... The first bpf_d_path argument is held by the verifier as a BTF file object plus the offset of the f_path member. The btf_struct_ids_match function will walk the struct file object and check if there's a nested struct path object at the given offset. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-9-jolsa@kernel.org
2020-08-25  bpf: Factor btf_struct_access function  (Jiri Olsa)
Adding btf_struct_walk function that walks through the struct type + given offset and returns following values: enum bpf_struct_walk_result { /* < 0 error */ WALK_SCALAR = 0, WALK_PTR, WALK_STRUCT, }; WALK_SCALAR - when SCALAR_VALUE is found WALK_PTR - when pointer value is found, its ID is stored in 'next_btf_id' output param WALK_STRUCT - when nested struct object is found, its ID is stored in 'next_btf_id' output param It will be used in following patches to get all nested struct objects for given type and offset. The btf_struct_access now calls btf_struct_walk function, as long as it gets nested structs as return value. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-8-jolsa@kernel.org
2020-08-25  bpf: Remove recursion call in btf_struct_access  (Jiri Olsa)
Andrii suggested we can simply jump to the 'again' label instead of making a recursive call. Suggested-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-7-jolsa@kernel.org
2020-08-25  bpf: Add type_id pointer as argument to __btf_resolve_size  (Jiri Olsa)
Adding type_id pointer as argument to __btf_resolve_size to return also BTF ID of the resolved type. It will be used in following changes. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-6-jolsa@kernel.org
2020-08-25  bpf: Add elem_id pointer as argument to __btf_resolve_size  (Jiri Olsa)
If the resolved type is array, make btf_resolve_size return also ID of the elem type. It will be needed in following changes. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-5-jolsa@kernel.org
2020-08-25  bpf: Move btf_resolve_size into __btf_resolve_size  (Jiri Olsa)
Moving btf_resolve_size into __btf_resolve_size and keeping btf_resolve_size public with just first 3 arguments, because the rest of the arguments are not used by outside callers. Following changes are adding more arguments, which are not useful to outside callers. They will be added to the __btf_resolve_size function. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-4-jolsa@kernel.org
2020-08-25  tools resolve_btfids: Add support for set symbols  (Jiri Olsa)
The set symbol does not have the unique number suffix, so we need to give it a special parsing function. This was omitted in the first batch, because there was no set support yet, so it slipped in the testing. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-3-jolsa@kernel.org
2020-08-25  tools resolve_btfids: Add size check to get_id function  (Jiri Olsa)
To make sure we don't crash on malformed symbols. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825192124.710397-2-jolsa@kernel.org
2020-08-25  bpf: Disallow BPF_PRELOAD in allmodconfig builds  (Alexei Starovoitov)
The CC_CAN_LINK check verifies that the host compiler can link, but bpf_preload relies on libbpf, which in turn needs libelf to be present during linking. allmodconfig runs in odd setups with cross compilers and missing host libraries like libelf. Instead of extending kconfig with every possible library that bpf_preload might need, disallow building BPF_PRELOAD in such build-only configurations. Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-08-25  bpf: Add selftests for local_storage  (KP Singh)
inode_local_storage: * Hook to the file_open and inode_unlink LSM hooks. * Create and unlink a temporary file. * Store some information in the inode's bpf_local_storage during file_open. * Verify that this information exists when the file is unlinked. sk_local_storage: * Hook to the socket_post_create and socket_bind LSM hooks. * Open and bind a socket and set the sk_storage in the socket_post_create hook using the start_server helper. * Verify if the information is set in the socket_bind hook. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200825182919.1118197-8-kpsingh@chromium.org
2020-08-25  bpf: Allow local storage to be used from LSM programs  (KP Singh)
Adds support for both bpf_{sk, inode}_storage_{get, delete} to be used in LSM programs. These helpers are not used for tracing programs (currently) as their usage is tied to the life-cycle of the object and should only be used where the owning object won't be freed (when the owning object is passed as an argument to the LSM hook). Thus, they are safer to use in LSM hooks than in tracing. Usage of local storage in tracing programs will probably follow a per function based whitelist approach. Since the UAPI helper signature for bpf_sk_storage expects a bpf_sock, which leads to a compilation warning for LSM programs, it is also updated to accept a void * pointer instead. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200825182919.1118197-7-kpsingh@chromium.org
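A hedged sketch of the new usage from an LSM program; the map layout and hook mirror the surrounding description, while the names and the older sk-storage flag spelling are assumptions:

    #include "vmlinux.h"          /* assumed BTF-generated header */
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    struct {
        __uint(type, BPF_MAP_TYPE_INODE_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, __u32);
    } inode_storage_map SEC(".maps");

    SEC("lsm/file_open")
    int BPF_PROG(file_open, struct file *file)
    {
        __u32 *value;

        /* create-or-get this inode's storage; flag name is the pre-rename sk-storage one */
        value = bpf_inode_storage_get(&inode_storage_map, file->f_inode, 0,
                                      BPF_SK_STORAGE_GET_F_CREATE);
        if (value)
            *value = 1;    /* remember that this file was opened */
        return 0;
    }

    char _license[] SEC("license") = "GPL";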
2020-08-25  bpf: Implement bpf_local_storage for inodes  (KP Singh)
Similar to bpf_local_storage for sockets, add local storage for inodes. The life-cycle of storage is managed with the life-cycle of the inode. i.e. the storage is destroyed along with the owning inode. The BPF LSM allocates an __rcu pointer to the bpf_local_storage in the security blob which are now stackable and can co-exist with other LSMs. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200825182919.1118197-6-kpsingh@chromium.org
2020-08-25  bpf: Split bpf_local_storage to bpf_sk_storage  (KP Singh)
A purely mechanical change: bpf_sk_storage.c = bpf_sk_storage.c + bpf_local_storage.c bpf_sk_storage.h = bpf_sk_storage.h + bpf_local_storage.h Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200825182919.1118197-5-kpsingh@chromium.org
2020-08-25  bpf: Generalize bpf_sk_storage  (KP Singh)
Refactor the functionality in bpf_sk_storage.c so that concept of storage linked to kernel objects can be extended to other objects like inode, task_struct etc. Each new local storage will still be a separate map and provide its own set of helpers. This allows for future object specific extensions and still share a lot of the underlying implementation. This includes the changes suggested by Martin in: https://lore.kernel.org/bpf/20200725013047.4006241-1-kafai@fb.com/ adding new map operations to support bpf_local_storage maps: * storages for different kernel objects to optionally have different memory charging strategy (map_local_storage_charge, map_local_storage_uncharge) * Functionality to extract the storage pointer from a pointer to the owning object (map_owner_storage_ptr) Co-developed-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200825182919.1118197-4-kpsingh@chromium.org
2020-08-25  bpf: Generalize caching for sk_storage.  (KP Singh)
Provide the ability to define local storage caches on a per-object type basis. The caches and caching indices for different objects should not be inter-mixed as suggested in: https://lore.kernel.org/bpf/20200630193441.kdwnkestulg5erii@kafai-mbp.dhcp.thefacebook.com/ "Caching a sk-storage at idx=0 of a sk should not stop an inode-storage to be cached at the same idx of a inode." Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200825182919.1118197-3-kpsingh@chromium.org
2020-08-25  bpf: Renames in preparation for bpf_local_storage  (KP Singh)
A purely mechanical change to split the renaming from the actual generalization. Flags/consts: SK_STORAGE_CREATE_FLAG_MASK -> BPF_LOCAL_STORAGE_CREATE_FLAG_MASK, BPF_SK_STORAGE_CACHE_SIZE -> BPF_LOCAL_STORAGE_CACHE_SIZE, MAX_VALUE_SIZE -> BPF_LOCAL_STORAGE_MAX_VALUE_SIZE. Structs: bucket -> bpf_local_storage_map_bucket, bpf_sk_storage_map -> bpf_local_storage_map, bpf_sk_storage_data -> bpf_local_storage_data, bpf_sk_storage_elem -> bpf_local_storage_elem, bpf_sk_storage -> bpf_local_storage. The "sk" member in bpf_local_storage is also updated to "owner" in preparation for changing the type to void * in a subsequent patch. Functions: selem_linked_to_sk -> selem_linked_to_storage, selem_alloc -> bpf_selem_alloc, __selem_unlink_sk -> bpf_selem_unlink_storage_nolock, __selem_link_sk -> bpf_selem_link_storage_nolock, selem_unlink_sk -> __bpf_selem_unlink_storage, sk_storage_update -> bpf_local_storage_update, __sk_storage_lookup -> bpf_local_storage_lookup, bpf_sk_storage_map_free -> bpf_local_storage_map_free, bpf_sk_storage_map_alloc -> bpf_local_storage_map_alloc, bpf_sk_storage_map_alloc_check -> bpf_local_storage_map_alloc_check, bpf_sk_storage_map_check_btf -> bpf_local_storage_map_check_btf. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200825182919.1118197-2-kpsingh@chromium.org
2020-08-24  selftests/bpf: Enable tc verbose mode for test_sk_assign  (Yonghong Song)
Currently test_sk_assign fails the verifier with llvm11/llvm12. During debugging, I found the default verifier output is truncated like below: Verifier analysis: Skipped 2200 bytes, use 'verb' option for the full verbose log. [...] off=23,r=34,imm=0) R5=inv0 R6=ctx(id=0,off=0,imm=0) R7=pkt(id=0,off=0,r=34,imm=0) R10=fp0 80: (0f) r7 += r2 last_idx 80 first_idx 21 regs=4 stack=0 before 78: (16) if w3 == 0x11 goto pc+1 when I am using "./test_progs -vv -t assign". The reason is that tc verbose mode is not enabled. This patch enables tc verbose mode and the output looks like below: Verifier analysis: 0: (bf) r6 = r1 1: (b4) w0 = 2 2: (61) r1 = *(u32 *)(r6 +80) 3: (61) r7 = *(u32 *)(r6 +76) 4: (bf) r2 = r7 5: (07) r2 += 14 6: (2d) if r2 > r1 goto pc+61 R0_w=inv2 R1_w=pkt_end(id=0,off=0,imm=0) R2_w=pkt(id=0,off=14,r=14,imm=0) ... Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200824222807.100200-1-yhs@fb.com
2020-08-24  samples: bpf: Refactor tracepoint tracing programs with libbpf  (Daniel T. Lee)
To address the increasing fragmentation of the bpf loader programs, instead of using bpf_load.o, which is used in samples/bpf, this commit refactors the existing tracepoint tracing programs with the libbpf bpf loader. - Adding a tracepoint event and attaching a bpf program to it is done through bpf_program__attach(). - Instead of using the existing BPF MAP definition, the MAP definition has been refactored with the new BTF-defined MAP format. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200823085334.9413-4-danieltimlee@gmail.com
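For context, a hedged sketch of the two patterns the refactor moves to (includes omitted; all names are illustrative, not the samples' actual ones):

    /* BPF object side: BTF-defined map replacing the old struct bpf_map_def */
    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1024);
        __type(key, __u32);
        __type(value, __u64);
    } counts SEC(".maps");

    /* userspace loader side: open/load the object with libbpf and auto-attach
     * the program based on its section name */
    struct bpf_object *obj = bpf_object__open_file("tracex_kern.o", NULL);
    struct bpf_program *prog;

    bpf_object__load(obj);
    prog = bpf_object__find_program_by_name(obj, "my_prog");
    bpf_program__attach(prog);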
2020-08-24  samples: bpf: Refactor kprobe tracing programs with libbpf  (Daniel T. Lee)
To address the increasing fragmentation of the bpf loader programs, instead of using bpf_load.o, which is used in samples/bpf, this commit refactors the existing kprobe tracing programs with the libbpf bpf loader. - For kprobe events pointing to system calls, the SYSCALL() macro in trace_common.h is used. - Adding a kprobe event and attaching a bpf program to it is done through bpf_program__attach(). - Instead of using the existing BPF MAP definition, the MAP definition has been refactored with the new BTF-defined MAP format. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200823085334.9413-3-danieltimlee@gmail.com
2020-08-24  samples: bpf: Cleanup bpf_load.o from Makefile  (Daniel T. Lee)
Since commit cc7f641d637b ("samples: bpf: Refactor BPF map performance test with libbpf") omitted the removal of bpf_load.o from the Makefile, this commit removes the bpf_load.o rule for targets where bpf_load.o is not used. Fixes: cc7f641d637b ("samples: bpf: Refactor BPF map performance test with libbpf") Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200823085334.9413-2-danieltimlee@gmail.com
2020-08-24  selftests: bpf: Fix sockmap update nits  (Lorenz Bauer)
Address review by Yonghong, to bring the new tests in line with the usual code style. Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200824084523.13104-1-lmb@cloudflare.com
2020-08-24  libbpf: Fix type compatibility check copy-paste error  (Andrii Nakryiko)
Fix copy-paste error in types compatibility check. Local type is accidentally used instead of target type for the very first type check strictness check. This can result in potentially less strict candidate comparison. Fix the error. Fixes: 3fc32f40c402 ("libbpf: Implement type-based CO-RE relocations support") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200821225653.2180782-1-andriin@fb.com
2020-08-24  libbpf: Avoid false unuinitialized variable warning in bpf_core_apply_relo  (Andrii Nakryiko)
Some versions of GCC report uninitialized targ_spec usage. GCC is wrong, but let's avoid unnecessary warnings. Fixes: ddc7c3042614 ("libbpf: implement BPF CO-RE offset relocation algorithm") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200821225556.2178419-1-andriin@fb.com
2020-08-24  bpf: sk_lookup: Add user documentation  (Jakub Sitnicki)
Describe the purpose of BPF sk_lookup program, how it can be attached, when it gets invoked, and what information gets passed to it. Point the reader to examples and further documentation. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200821100226.403844-1-jakub@cloudflare.com
2020-08-24  docs: Correct subject prefix and update LLVM info  (Jianlin Lv)
bpf_devel_QA.rst:152 The subject prefix information is not accurate, it should be 'PATCH bpf-next v2' Also update LLVM version info and add information about ‘-DLLVM_TARGETS_TO_BUILD’ to prompt the developer to build the desired target. Signed-off-by: Jianlin Lv <Jianlin.Lv@arm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200821052817.46887-1-Jianlin.Lv@arm.com
2020-08-24  Merge branch 'bpf-tcp-header-opts'  (Alexei Starovoitov)
Martin KaFai Lau says: ==================== The earlier effort in BPF-TCP-CC allows the TCP Congestion Control algorithm to be written in BPF. It opens up opportunities to allow a faster turnaround time in testing/releasing new congestion control ideas to production environment. The same flexibility can be extended to writing TCP header option. It is not uncommon that people want to test new TCP header option to improve the TCP performance. Another use case is for data-center that has a more controlled environment and has more flexibility in putting header options for internal traffic only. This patch set introduces the necessary BPF logic and API to allow bpf program to write and parse header options. There are also some changes to TCP and they are mostly to provide the needed sk and skb info to the bpf program to make decision. Patch 9 is the main patch and has more details on the API and design. The set includes an example which sends the max delay ack in the BPF TCP header option and the receiving side can then adjust its RTO accordingly. v5: - Move some of the comments from git commit message to the UAPI bpf.h in patch 9 - Some variable clean up in the tests (patch 11). v4: - Since bpf-next is currently closed, tag the set with RFC to keep the review cadence - Separate tcp changes in its own patches (5, 6, 7). It is a bit tricky since most of the tcp changes is to call out the bpf prog to write and parse the header. The write and parse callout has been modularized into a few bpf_skops_* function in v3. This revision (v4) tries to move those bpf_skops_* functions into separate TCP patches. However, they will be half implemented to highlight the changes to the TCP stack, mainly: - when the bpf prog will be called in the TCP stack and - what information needs to pump through the TCP stack to the actual bpf prog callsite. The bpf_skops_* functions will be fully implemented in patch 9 together with other bpf pieces. - Use struct_size() in patch 1 (Eric) - Add saw_unknown to struct tcp_options_received in patch 4 (Eric) v3: - Add kdoc for tcp_make_synack (Jakub Kicinski) - Add BPF_WRITE_HDR_TCP_CURRENT_MSS and BPF_WRITE_HDR_TCP_SYNACK_COOKIE in bpf.h to give a clearer meaning to sock_ops->args[0] when writing header option. - Rename BPF_SOCK_OPS_PARSE_UNKWN_HDR_OPT_CB_FLAG to BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG v2: - Instead of limiting the bpf prog to write experimental option (kind:254, magic:0xeB9F), this revision allows the bpf prog to write any TCP header option through the bpf_store_hdr_opt() helper. That will allow different bpf-progs to write its own option and the helper will guarantee there is no duplication. - Add bpf_load_hdr_opt() helper to search a particular option by kind. Some of the get_syn logic is refactored to bpf_sock_ops_get_syn(). - Since bpf prog is no longer limited to option (254, 0xeB9F), the TCP_SKB_CB(skb)->bpf_hdr_opt_off is no longer needed. Instead, when there is any option kernel cannot recognize, the bpf prog will be called if the BPF_SOCK_OPS_PARSE_UNKWN_HDR_OPT_CB_FLAG is set. [ The "unknown_opt" is learned in tcp_parse_options() in patch 4. ] - Add BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG. If this flag is set, the bpf-prog will be called on all tcp packet received at an established sk. It will be useful to ensure a previously written header option is received by the peer. e.g. The latter test is using this on the active-side during syncookie. 
- The test_tcp_hdr_options.c is adjusted accordingly to test writing both experimental and regular TCP header option. - The test_misc_tcp_hdr_options.c is added to mainly test different cases on the new helpers. - Break up the TCP_BPF_RTO_MIN and TCP_BPF_DELACK_MAX into two patches. - Directly store the tcp_hdrlen in "struct saved_syn" instead of going back to the tcp header to obtain it by "th->doff * 4" - Add a new optval(==2) for setsockopt(TCP_SAVE_SYN) such that it will also store the mac header (patch 9). ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-08-24  tcp: bpf: Optionally store mac header in TCP_SAVE_SYN  (Martin KaFai Lau)
This patch is adapted from Eric's patch in an earlier discussion [1]. The TCP_SAVE_SYN currently only stores the network header and tcp header. This patch allows it to optionally store the mac header also if the setsockopt's optval is 2. It requires one more bit for the "save_syn" bit field in tcp_sock. This patch achieves this by moving the syn_smc bit next to the is_mptcp. The syn_smc is currently used with the TCP experimental option. Since syn_smc is only used when CONFIG_SMC is enabled, this patch also puts the "IS_ENABLED(CONFIG_SMC)" around it like the is_mptcp did with "IS_ENABLED(CONFIG_MPTCP)". The mac_hdrlen is also stored in the "struct saved_syn" to allow a quick offset from the bpf prog if it chooses to start getting from the network header or the tcp header. [1]: https://lore.kernel.org/netdev/CANn89iLJNWh6bkH7DNhy_kmcAexuUCccqERqe7z2QsvPhGrYPQ@mail.gmail.com/ Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/bpf/20200820190123.2886935-1-kafai@fb.com
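A hedged userspace sketch of the new optval; the value 2 comes from the commit text, error handling is omitted:

    int save = 2;   /* 1 = save IP + TCP headers (existing behavior), 2 = also save the mac header */

    setsockopt(fd, IPPROTO_TCP, TCP_SAVE_SYN, &save, sizeof(save));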
2020-08-24  bpf: selftests: Tcp header options  (Martin KaFai Lau)
This patch adds tests for the new bpf tcp header option feature. test_tcp_hdr_options.c: - It tests header option writing and parsing in 3WHS: regular connection establishment, fastopen, and syncookie. - In syncookie, the passive side's bpf prog is asking the active side to resend its bpf header option by specifying a RESEND bit in the outgoing SYNACK. handle_active_estab() and write_nodata_opt() has some details. - handle_passive_estab() has comments on fastopen. - It also has test for header writing and parsing in FIN packet. - Most of the tests is writing an experimental option 254 with magic 0xeB9F. - The no_exprm_estab() also tests writing a regular TCP option without any magic. test_misc_tcp_options.c: - It is an one directional test. Active side writes option and passive side parses option. The focus is to exercise the new helpers and API. - Testing the new helper: bpf_load_hdr_opt() and bpf_store_hdr_opt(). - Testing the bpf_getsockopt(TCP_BPF_SYN). - Negative tests for the above helpers. - Testing the sock_ops->skb_data. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820190117.2886749-1-kafai@fb.com
2020-08-24  bpf: selftests: Add fastopen_connect to network_helpers  (Martin KaFai Lau)
This patch adds a fastopen_connect() helper which will be used in a later test. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820190111.2886196-1-kafai@fb.com
2020-08-24  bpf: tcp: Allow bpf prog to write and parse TCP header option  (Martin KaFai Lau)
[ Note: The TCP changes here is mainly to implement the bpf pieces into the bpf_skops_*() functions introduced in the earlier patches. ] The earlier effort in BPF-TCP-CC allows the TCP Congestion Control algorithm to be written in BPF. It opens up opportunities to allow a faster turnaround time in testing/releasing new congestion control ideas to production environment. The same flexibility can be extended to writing TCP header option. It is not uncommon that people want to test new TCP header option to improve the TCP performance. Another use case is for data-center that has a more controlled environment and has more flexibility in putting header options for internal only use. For example, we want to test the idea in putting maximum delay ACK in TCP header option which is similar to a draft RFC proposal [1]. This patch introduces the necessary BPF API and use them in the TCP stack to allow BPF_PROG_TYPE_SOCK_OPS program to parse and write TCP header options. It currently supports most of the TCP packet except RST. Supported TCP header option: ─────────────────────────── This patch allows the bpf-prog to write any option kind. Different bpf-progs can write its own option by calling the new helper bpf_store_hdr_opt(). The helper will ensure there is no duplicated option in the header. By allowing bpf-prog to write any option kind, this gives a lot of flexibility to the bpf-prog. Different bpf-prog can write its own option kind. It could also allow the bpf-prog to support a recently standardized option on an older kernel. Sockops Callback Flags: ────────────────────── The bpf program will only be called to parse/write tcp header option if the following newly added callback flags are enabled in tp->bpf_sock_ops_cb_flags: BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG A few words on the PARSE CB flags. When the above PARSE CB flags are turned on, the bpf-prog will be called on packets received at a sk that has at least reached the ESTABLISHED state. The parsing of the SYN-SYNACK-ACK will be discussed in the "3 Way HandShake" section. The default is off for all of the above new CB flags, i.e. the bpf prog will not be called to parse or write bpf hdr option. There are details comment on these new cb flags in the UAPI bpf.h. sock_ops->skb_data and bpf_load_hdr_opt() ───────────────────────────────────────── sock_ops->skb_data and sock_ops->skb_data_end covers the whole TCP header and its options. They are read only. The new bpf_load_hdr_opt() helps to read a particular option "kind" from the skb_data. Please refer to the comment in UAPI bpf.h. It has details on what skb_data contains under different sock_ops->op. 3 Way HandShake ─────────────── The bpf-prog can learn if it is sending SYN or SYNACK by reading the sock_ops->skb_tcp_flags. * Passive side When writing SYNACK (i.e. sock_ops->op == BPF_SOCK_OPS_WRITE_HDR_OPT_CB), the received SYN skb will be available to the bpf prog. The bpf prog can use the SYN skb (which may carry the header option sent from the remote bpf prog) to decide what bpf header option should be written to the outgoing SYNACK skb. The SYN packet can be obtained by getsockopt(TCP_BPF_SYN*). More on this later. Also, the bpf prog can learn if it is in syncookie mode (by checking sock_ops->args[0] == BPF_WRITE_HDR_TCP_SYNACK_COOKIE). The bpf prog can store the received SYN pkt by using the existing bpf_setsockopt(TCP_SAVE_SYN). The example in a later patch does it. 
[ Note that the fullsock here is a listen sk, bpf_sk_storage is not very useful here since the listen sk will be shared by many concurrent connection requests. Extending bpf_sk_storage support to request_sock will add weight to the minisock and it is not necessary better than storing the whole ~100 bytes SYN pkt. ] When the connection is established, the bpf prog will be called in the existing PASSIVE_ESTABLISHED_CB callback. At that time, the bpf prog can get the header option from the saved syn and then apply the needed operation to the newly established socket. The later patch will use the max delay ack specified in the SYN header and set the RTO of this newly established connection as an example. The received ACK (that concludes the 3WHS) will also be available to the bpf prog during PASSIVE_ESTABLISHED_CB through the sock_ops->skb_data. It could be useful in syncookie scenario. More on this later. There is an existing getsockopt "TCP_SAVED_SYN" to return the whole saved syn pkt which includes the IP[46] header and the TCP header. A few "TCP_BPF_SYN*" getsockopt has been added to allow specifying where to start getting from, e.g. starting from TCP header, or from IP[46] header. The new getsockopt(TCP_BPF_SYN*) will also know where it can get the SYN's packet from: - (a) the just received syn (available when the bpf prog is writing SYNACK) and it is the only way to get SYN during syncookie mode. or - (b) the saved syn (available in PASSIVE_ESTABLISHED_CB and also other existing CB). The bpf prog does not need to know where the SYN pkt is coming from. The getsockopt(TCP_BPF_SYN*) will hide this details. Similarly, a flags "BPF_LOAD_HDR_OPT_TCP_SYN" is also added to bpf_load_hdr_opt() to read a particular header option from the SYN packet. * Fastopen Fastopen should work the same as the regular non fastopen case. This is a test in a later patch. * Syncookie For syncookie, the later example patch asks the active side's bpf prog to resend the header options in ACK. The server can use bpf_load_hdr_opt() to look at the options in this received ACK during PASSIVE_ESTABLISHED_CB. * Active side The bpf prog will get a chance to write the bpf header option in the SYN packet during WRITE_HDR_OPT_CB. The received SYNACK pkt will also be available to the bpf prog during the existing ACTIVE_ESTABLISHED_CB callback through the sock_ops->skb_data and bpf_load_hdr_opt(). * Turn off header CB flags after 3WHS If the bpf prog does not need to write/parse header options beyond the 3WHS, the bpf prog can clear the bpf_sock_ops_cb_flags to avoid being called for header options. Or the bpf-prog can select to leave the UNKNOWN_HDR_OPT_CB_FLAG on so that the kernel will only call it when there is option that the kernel cannot handle. [1]: draft-wang-tcpm-low-latency-opt-00 https://tools.ietf.org/html/draft-wang-tcpm-low-latency-opt-00 Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820190104.2885895-1-kafai@fb.com
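A hedged sketch of the reserve-then-write flow from a sockops program; bpf_reserve_hdr_opt() is the companion helper in the merged API, and the option layout and callback handling here are illustrative assumptions (they also presume the write-header CB flag was already set), not the selftest's code:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("sockops")
    int write_tcp_opt(struct bpf_sock_ops *skops)
    {
        /* experimental option: kind 254, total length 4, magic 0xeB9F (layout illustrative) */
        struct { __u8 kind, len; __u16 magic; } opt = { 254, 4, 0xeB9F };

        switch (skops->op) {
        case BPF_SOCK_OPS_HDR_OPT_LEN_CB:
            /* first pass: reserve room for the option bytes */
            bpf_reserve_hdr_opt(skops, sizeof(opt), 0);
            break;
        case BPF_SOCK_OPS_WRITE_HDR_OPT_CB:
            /* second pass: write the option into the outgoing header */
            bpf_store_hdr_opt(skops, &opt, sizeof(opt), 0);
            break;
        }
        return 1;
    }

    char _license[] SEC("license") = "GPL";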
2020-08-24  bpf: sock_ops: Change some members of sock_ops_kern from u32 to u8  (Martin KaFai Lau)
A later patch needs to add a few pointers and a few u8 to sock_ops_kern. Hence, this patch saves some spaces by moving some of the existing members from u32 to u8 so that the later patch can still fit everything in a cacheline. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820190058.2885640-1-kafai@fb.com
2020-08-24  bpf: tcp: Add bpf_skops_hdr_opt_len() and bpf_skops_write_hdr_opt()  (Martin KaFai Lau)
The bpf prog needs to parse the SYN header to learn what options have been sent by the peer's bpf-prog before writing its options into SYNACK. This patch adds a "syn_skb" arg to tcp_make_synack() and send_synack(). This syn_skb will eventually be made available (as read-only) to the bpf prog. This will be the only SYN packet available to the bpf prog during syncookie. For other regular cases, the bpf prog can also use the saved_syn. When writing options, the bpf prog will first be called to tell the kernel its required number of bytes. It is done by the new bpf_skops_hdr_opt_len(). The bpf prog will only be called when the new BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG is set in tp->bpf_sock_ops_cb_flags. When the bpf prog returns, the kernel will know how many bytes are needed and then update the "*remaining" arg accordingly. 4 byte alignment will be included in the "*remaining" before this function returns. The 4 byte aligned number of bytes will also be stored into the opts->bpf_opt_len. "bpf_opt_len" is a newly added member to the struct tcp_out_options. Then the new bpf_skops_write_hdr_opt() will call the bpf prog to write the header options. The bpf prog is only called if it has reserved spaces before (opts->bpf_opt_len > 0). The bpf prog is the last one getting a chance to reserve header space and writing the header option. These two functions are half implemented to highlight the changes in TCP stack. The actual codes preparing the bpf running context and invoking the bpf prog will be added in the later patch with other necessary bpf pieces. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/bpf/20200820190052.2885316-1-kafai@fb.com
2020-08-24  bpf: tcp: Add bpf_skops_parse_hdr()  (Martin KaFai Lau)
The patch adds a function bpf_skops_parse_hdr(). It will call the bpf prog to parse the TCP header received at a tcp_sock that has at least reached the ESTABLISHED state. For the packets received during the 3WHS (SYN, SYNACK and ACK), the received skb will be available to the bpf prog during the callback in bpf_skops_established() introduced in the previous patch and in the bpf_skops_write_hdr_opt() that will be added in the next patch. Calling bpf prog to parse header is controlled by two new flags in tp->bpf_sock_ops_cb_flags: BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG and BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG. When BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG is set, the bpf prog will only be called when there is unknown option in the TCP header. When BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG is set, the bpf prog will be called on all received TCP header. This function is half implemented to highlight the changes in TCP stack. The actual codes preparing the bpf running context and invoking the bpf prog will be added in the later patch with other necessary bpf pieces. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/bpf/20200820190046.2885054-1-kafai@fb.com
2020-08-24  bpf: tcp: Add bpf_skops_established()  (Martin KaFai Lau)
In tcp_init_transfer(), it currently calls the bpf prog to give it a chance to handle the just "ESTABLISHED" event (e.g. do setsockopt on the newly established sk). Right now, it is done by calling the general purpose tcp_call_bpf(). In the later patch, it also needs to pass the just-received skb which concludes the 3 way handshake. E.g. the SYNACK received at the active side. The bpf prog can then learn some specific header options written by the peer's bpf-prog and potentially do setsockopt on the newly established sk. Thus, instead of reusing the general purpose tcp_call_bpf(), a new function bpf_skops_established() is added to allow passing the "skb" to the bpf prog. The actual skb passing from bpf_skops_established() to the bpf prog will happen together in a later patch which has the necessary bpf pieces. A "skb" arg is also added to tcp_init_transfer() such that it can then be passed to bpf_skops_established(). Calling the new bpf_skops_established() instead of tcp_call_bpf() should be a noop in this patch. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190039.2884750-1-kafai@fb.com
2020-08-24  tcp: Add saw_unknown to struct tcp_options_received  (Martin KaFai Lau)
In a later patch, the bpf prog only wants to be called to handle a header option if that particular header option cannot be handled by the kernel. This unknown option could be written by the peer's bpf-prog. It could also be a new standard option that the running kernel does not support but a bpf-prog can handle. This patch adds a "saw_unknown" bit to "struct tcp_options_received" and uses an existing one-byte hole to do that. "saw_unknown" will be set in tcp_parse_options() if it sees an option that the kernel cannot handle. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190033.2884430-1-kafai@fb.com