@@ -502,12 +502,12 @@ HELPERS

Also, be aware that the newer helper
**bpf_perf_event_read_value**\ () is recommended over
- **bpf_perf_event_read*\ () in general. The latter has some ABI
+ **bpf_perf_event_read**\ () in general. The latter has some ABI
quirks where error and counter value are used as a return code
(which is wrong to do since ranges may overlap). This issue is
- fixed with bpf_perf_event_read_value(), which at the same time
- provides more features over the **bpf_perf_event_read**\ ()
- interface. Please refer to the description of
+ fixed with **bpf_perf_event_read_value**\ (), which at the same
+ time provides more features over the **bpf_perf_event_read**\
+ () interface. Please refer to the description of
**bpf_perf_event_read_value**\ () for details.
Return
The value of the perf event counter read from the map, or a
@@ -1036,7 +1036,7 @@ HELPERS
Return
0

- **int bpf_setsockopt(struct bpf_sock_ops_kern \***\ *bpf_socket*\ **, int** *level*\ **, int** *optname*\ **, char \***\ *optval*\ **, int** *optlen*\ **)**
+ **int bpf_setsockopt(struct bpf_sock_ops \***\ *bpf_socket*\ **, int** *level*\ **, int** *optname*\ **, char \***\ *optval*\ **, int** *optlen*\ **)**
Description
Emulate a call to **setsockopt()** on the socket associated to
*bpf_socket*, which must be a full socket. The *level* at
@@ -1110,7 +1110,7 @@ HELPERS
Return
**SK_PASS** on success, or **SK_DROP** on error.

- **int bpf_sock_map_update(struct bpf_sock_ops_kern \***\ *skops*\ **, struct bpf_map \***\ *map*\ **, void \***\ *key*\ **, u64** *flags*\ **)**
+ **int bpf_sock_map_update(struct bpf_sock_ops \***\ *skops*\ **, struct bpf_map \***\ *map*\ **, void \***\ *key*\ **, u64** *flags*\ **)**
Description
Add an entry to, or update a *map* referencing sockets. The
*skops* is used as a new value for the entry associated to
@@ -1208,7 +1208,7 @@ HELPERS
Return
0 on success, or a negative error in case of failure.

- **int bpf_perf_prog_read_value(struct bpf_perf_event_data_kern \***\ *ctx*\ **, struct bpf_perf_event_value \***\ *buf*\ **, u32** *buf_size*\ **)**
+ **int bpf_perf_prog_read_value(struct bpf_perf_event_data \***\ *ctx*\ **, struct bpf_perf_event_value \***\ *buf*\ **, u32** *buf_size*\ **)**
Description
For an eBPF program attached to a perf event, retrieve the
value of the event counter associated to *ctx* and store it in
@@ -1219,7 +1219,7 @@ HELPERS
Return
0 on success, or a negative error in case of failure.

- **int bpf_getsockopt(struct bpf_sock_ops_kern \***\ *bpf_socket*\ **, int** *level*\ **, int** *optname*\ **, char \***\ *optval*\ **, int** *optlen*\ **)**
+ **int bpf_getsockopt(struct bpf_sock_ops \***\ *bpf_socket*\ **, int** *level*\ **, int** *optname*\ **, char \***\ *optval*\ **, int** *optlen*\ **)**
Description
Emulate a call to **getsockopt()** on the socket associated to
*bpf_socket*, which must be a full socket. The *level* at
@@ -1263,7 +1263,7 @@ HELPERS
Return
0

- **int bpf_sock_ops_cb_flags_set(struct bpf_sock_ops_kern \***\ *bpf_sock*\ **, int** *argval*\ **)**
+ **int bpf_sock_ops_cb_flags_set(struct bpf_sock_ops \***\ *bpf_sock*\ **, int** *argval*\ **)**
Description
Attempt to set the value of the **bpf_sock_ops_cb_flags** field
for the full TCP socket associated to *bpf_sock_ops* to
@@ -1396,7 +1396,7 @@ HELPERS
Return
0 on success, or a negative error in case of failure.

- **int bpf_bind(struct bpf_sock_addr_kern \***\ *ctx*\ **, struct sockaddr \***\ *addr*\ **, int** *addr_len*\ **)**
+ **int bpf_bind(struct bpf_sock_addr \***\ *ctx*\ **, struct sockaddr \***\ *addr*\ **, int** *addr_len*\ **)**
Description
Bind the socket associated to *ctx* to the address pointed by
*addr*, of length *addr_len*. This allows for making outgoing
@@ -1443,6 +1443,40 @@ HELPERS
Return
0 on success, or a negative error in case of failure.

+ **int bpf_get_stack(struct pt_regs \***\ *regs*\ **, void \***\ *buf*\ **, u32** *size*\ **, u64** *flags*\ **)**
+ Description
+ Return a user or a kernel stack in the buffer provided by the
+ bpf program. To achieve this, the helper needs *ctx*, which is
+ a pointer to the context on which the tracing program is
+ executed. To store the stacktrace, the bpf program provides
+ *buf* with a nonnegative *size*.
+
+ The last argument, *flags*, holds the number of stack frames to
+ skip (from 0 to 255), masked with
+ **BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
+ the following flags:
+
+ **BPF_F_USER_STACK**
+ Collect a user space stack instead of a kernel stack.
+ **BPF_F_USER_BUILD_ID**
+ Collect buildid+offset instead of ips for user stack,
+ only valid if **BPF_F_USER_STACK** is also specified.
+
+ **bpf_get_stack**\ () can collect up to
+ **PERF_MAX_STACK_DEPTH** kernel and user frames, subject
+ to a sufficiently large buffer size. Note that
+ this limit can be controlled with the **sysctl** program, and
+ that it should be manually increased in order to profile long
+ user stacks (such as stacks for Java programs). To do so, use:
+
+ ::
+
+ # sysctl kernel.perf_event_max_stack=<new value>
+
+ Return
+ a non-negative value equal to or less than size on success, or
+ a negative error in case of failure.
+

EXAMPLES
========