Fix a bunch of typos (iovisor#2693)
Fix a bunch of typos in man pages, docs, tools, tests, src and examples.
mika authored and yonghong-song committed Jan 9, 2020
1 parent c707a55 commit c14d02a
Showing 45 changed files with 67 additions and 67 deletions.
2 changes: 1 addition & 1 deletion docs/reference_guide.md
@@ -1374,7 +1374,7 @@ BPF_PERF_OUTPUT(events);
[...]
```
-In Python, you can either let bcc generate the data structure from C declaration automatically (recommanded):
+In Python, you can either let bcc generate the data structure from C declaration automatically (recommended):
```Python
def print_event(cpu, data, size):
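For context, the guide section touched here documents bcc's BPF_PERF_OUTPUT / perf-buffer pattern. A minimal, self-contained sketch of that pattern (not the exact excerpt; the traced event and its fields are hypothetical):
```Python
from bcc import BPF

# Hypothetical example: bcc generates the Python-side data structure from the
# C declaration of struct data_t automatically.
prog = r"""
#include <linux/sched.h>
struct data_t { u32 pid; char comm[TASK_COMM_LEN]; };
BPF_PERF_OUTPUT(events);
int trace_exec(struct pt_regs *ctx) {
    struct data_t data = {};
    data.pid = bpf_get_current_pid_tgid() >> 32;
    bpf_get_current_comm(&data.comm, sizeof(data.comm));
    events.perf_submit(ctx, &data, sizeof(data));
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

def print_event(cpu, data, size):
    event = b["events"].event(data)   # auto-generated structure
    print("%d %s" % (event.pid, event.comm.decode()))

b["events"].open_perf_buffer(print_event)
while True:
    b.perf_buffer_poll()
```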
4 changes: 2 additions & 2 deletions examples/cpp/TCPSendStack.cc
@@ -101,7 +101,7 @@ int main(int argc, char** argv) {
for (auto sym : syms)
std::cout << " " << sym << std::endl;
} else {
-// -EFAULT normally means the stack is not availiable and not an error
+// -EFAULT normally means the stack is not available and not an error
if (it.first.kernel_stack != -EFAULT) {
lost_stacks++;
std::cout << " [Lost Kernel Stack" << it.first.kernel_stack << "]"
@@ -114,7 +114,7 @@ int main(int argc, char** argv) {
for (auto sym : syms)
std::cout << " " << sym << std::endl;
} else {
-// -EFAULT normally means the stack is not availiable and not an error
+// -EFAULT normally means the stack is not available and not an error
if (it.first.user_stack != -EFAULT) {
lost_stacks++;
std::cout << " [Lost User Stack " << it.first.user_stack << "]"
2 changes: 1 addition & 1 deletion examples/networking/http_filter/http-parse-complete.c
@@ -27,7 +27,7 @@ BPF_HASH(sessions, struct Key, struct Leaf, 1024);
AND ALL the other packets having same (src_ip,dst_ip,src_port,dst_port)
this means belonging to the same "session"
this additional check avoids url truncation, if url is too long
-userspace script, if necessary, reassembles urls splitted in 2 or more packets.
+userspace script, if necessary, reassembles urls split in 2 or more packets.
if the program is loaded as PROG_TYPE_SOCKET_FILTER
and attached to a socket
return 0 -> DROP the packet
6 changes: 3 additions & 3 deletions examples/networking/http_filter/http-parse-complete.py
@@ -254,13 +254,13 @@ def help():
#check if the packet belong to a session saved in bpf_sessions
if (current_Key in bpf_sessions):
#check id the packet belong to a session saved in local_dictionary
-#(local_dictionary mantains HTTP GET/POST url not printed yet because splitted in N packets)
+#(local_dictionary maintains HTTP GET/POST url not printed yet because split in N packets)
if (binascii.hexlify(current_Key) in local_dictionary):
#first part of the HTTP GET/POST url is already present in local dictionary (prev_payload_string)
prev_payload_string = local_dictionary[binascii.hexlify(current_Key)]
#looking for CR+LF in current packet.
if (crlf in payload_string):
-#last packet. containing last part of HTTP GET/POST url splitted in N packets.
+#last packet. containing last part of HTTP GET/POST url split in N packets.
#append current payload
prev_payload_string += payload_string
#print HTTP GET/POST url
@@ -272,7 +272,7 @@ def help():
except:
print ("error deleting from map or dictionary")
else:
-#NOT last packet. containing part of HTTP GET/POST url splitted in N packets.
+#NOT last packet. containing part of HTTP GET/POST url split in N packets.
#append current payload
prev_payload_string += payload_string
#check if not size exceeding (usually HTTP GET/POST url < 8K )
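The reassembly logic these comments describe is easier to follow outside the packet-handling loop; a simplified restatement in plain Python (names and the 8K limit are illustrative — the real script keys the buffer on the BPF session Key and also cleans up the bpf_sessions map):
```Python
import binascii

local_dictionary = {}   # session key -> partial HTTP GET/POST url
CRLF = b"\r\n"

def on_http_payload(session_key, payload):
    """Buffer URL fragments per session until CR+LF marks the last packet."""
    key = binascii.hexlify(session_key)
    buffered = local_dictionary.pop(key, b"") + payload
    if CRLF in buffered:
        # last packet: the complete request line is now available, print it
        print(buffered.split(CRLF)[0].decode(errors="replace"))
    elif len(buffered) < 8192:
        # not the last packet: keep buffering (usually HTTP GET/POST url < 8K)
        local_dictionary[key] = buffered
```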
2 changes: 1 addition & 1 deletion examples/tracing/dddos.py
@@ -43,7 +43,7 @@
* timestamp between 2 successive packets is so small
* (which is not like regular applications behaviour).
* This script looks for this difference in time and if it sees
-* more than MAX_NB_PACKETS succesive packets with a difference
+* more than MAX_NB_PACKETS successive packets with a difference
* of timestamp between each one of them less than
* LEGAL_DIFF_TIMESTAMP_PACKETS ns,
* ------------------ It Triggers an ALERT -----------------
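The alert rule described in this comment block can be restated compactly; a user-space sketch of the same idea (constants and names are illustrative — the real tool maintains these counters in BPF C and uses kernel timestamps):
```Python
MAX_NB_PACKETS = 1000                    # illustrative threshold
LEGAL_DIFF_TIMESTAMP_PACKETS = 1000000   # 1 ms, in nanoseconds

nb_suspicious = 0
last_ts_ns = None

def on_packet(ts_ns):
    """Flag bursts of packets whose inter-arrival time is suspiciously small."""
    global nb_suspicious, last_ts_ns
    if last_ts_ns is not None and (ts_ns - last_ts_ns) < LEGAL_DIFF_TIMESTAMP_PACKETS:
        nb_suspicious += 1
        if nb_suspicious > MAX_NB_PACKETS:
            print("------------------ It Triggers an ALERT -----------------")
    else:
        nb_suspicious = 0
    last_ts_ns = ts_ns
```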
4 changes: 2 additions & 2 deletions man/man8/compactsnoop.8
@@ -145,11 +145,11 @@ output - internal to compaction
.PP
.in +8n
"complete" (COMPACT_COMPLETE): The full zone was compacted scanned but wasn't
-successfull to compact suitable pages.
+successful to compact suitable pages.
.PP
.in +8n
"partial_skipped" (COMPACT_PARTIAL_SKIPPED): direct compaction has scanned part
-of the zone but wasn't successfull to compact suitable pages.
+of the zone but wasn't successful to compact suitable pages.
.PP
.in +8n
"contended" (COMPACT_CONTENDED): compaction terminated prematurely due to lock
4 changes: 2 additions & 2 deletions man/man8/criticalstat.8
@@ -5,10 +5,10 @@ criticalstat \- A tracer to find and report long atomic critical sections in ker
.B criticalstat [\-h] [\-p] [\-i] [\-d DURATION]
.SH DESCRIPTION

-criticalstat traces and reports occurences of atomic critical sections in the
+criticalstat traces and reports occurrences of atomic critical sections in the
kernel with useful stacktraces showing the origin of them. Such critical
sections frequently occur due to use of spinlocks, or if interrupts or
-preemption were explicity disabled by a driver. IRQ routines in Linux are also
+preemption were explicitly disabled by a driver. IRQ routines in Linux are also
executed with interrupts disabled. There are many reasons. Such critical
sections are a source of long latency/responsive issues for real-time systems.

2 changes: 1 addition & 1 deletion man/man8/dbslower.8
@@ -11,7 +11,7 @@ those that exceed a latency (query time) threshold. By default a threshold of
This uses User Statically-Defined Tracing (USDT) probes, a feature added to
MySQL and PostgreSQL for DTrace support, but which may not be enabled on a
given installation. See requirements.
-Alternativly, MySQL queries can be traced without the USDT support using the
+Alternatively, MySQL queries can be traced without the USDT support using the
-x option.

Since this uses BPF, only the root user can use this tool.
2 changes: 1 addition & 1 deletion man/man8/filetop.8
@@ -9,7 +9,7 @@ This is top for files.
This traces file reads and writes, and prints a per-file summary every interval
(by default, 1 second). By default the summary is sorted on the highest read
throughput (Kbytes). Sorting order can be changed via -s option. By default only
-IO on regular files is shown. The -a option will list all file types (sokets,
+IO on regular files is shown. The -a option will list all file types (sockets,
FIFOs, etc).

This uses in-kernel eBPF maps to store per process summaries for efficiency.
2 changes: 1 addition & 1 deletion man/man8/runqlen.8
@@ -51,7 +51,7 @@ Print run queue occupancy every second:
#
.B runqlen \-O 1
.TP
-Print run queue occupancy, with timetamps, for each CPU:
+Print run queue occupancy, with timestamps, for each CPU:
#
.B runqlen \-COT 1
.SH FIELDS
2 changes: 1 addition & 1 deletion src/cc/api/BPF.cc
@@ -854,7 +854,7 @@ StatusTuple USDT::init() {
for (auto& p : ctx->probes_) {
if (p->provider_ == provider_ && p->name_ == name_) {
// Take ownership of the probe that we are interested in, and avoid it
-// being destrcuted when we destruct the USDT::Context instance
+// being destructed when we destruct the USDT::Context instance
probe_ = std::unique_ptr<void, std::function<void(void*)>>(p.release(),
deleter);
p.swap(ctx->probes_.back());
2 changes: 1 addition & 1 deletion src/cc/bcc_elf.h
@@ -56,7 +56,7 @@ int bcc_elf_foreach_load_section(const char *path,
bcc_elf_load_sectioncb callback,
void *payload);
// Iterate over symbol table of a binary module
-// Parameter "option" points to a bcc_symbol_option struct to indicate wheather
+// Parameter "option" points to a bcc_symbol_option struct to indicate whether
// and how to use debuginfo file, and what types of symbols to load.
// Returns -1 on error, and 0 on success or stopped by callback
int bcc_elf_foreach_sym(const char *path, bcc_elf_symcb callback, void *option,
2 changes: 1 addition & 1 deletion src/cc/export/helpers.h
@@ -104,7 +104,7 @@ BPF_TABLE(_table_type, _key_type, _leaf_type, _name, _max_entries); \
__attribute__((section("maps/export"))) \
struct _name##_table_t __##_name

-// define a table that is shared accross the programs in the same namespace
+// define a table that is shared across the programs in the same namespace
#define BPF_TABLE_SHARED(_table_type, _key_type, _leaf_type, _name, _max_entries) \
BPF_TABLE(_table_type, _key_type, _leaf_type, _name, _max_entries); \
__attribute__((section("maps/shared"))) \
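For readers unfamiliar with the macro whose comment is fixed here: BPF_TABLE_SHARED places the map in the "maps/shared" section so other programs loaded in the same namespace can reference it. A minimal declaration-side sketch from the Python front end (program text and table names are hypothetical, not from this commit):
```Python
from bcc import BPF

# Hypothetical program: one table private to this program and one exported for
# sharing; a second program in the same namespace would typically reference the
# shared table by declaring it with BPF_TABLE("extern", ...).
prog = r"""
BPF_TABLE("hash", u32, u64, private_counts, 1024);
BPF_TABLE_SHARED("hash", u32, u64, shared_counts, 1024);

int do_count(struct pt_regs *ctx) {
    u32 key = 0;
    private_counts.increment(key);
    shared_counts.increment(key);
    return 0;
}
"""
b = BPF(text=prog)
```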
2 changes: 1 addition & 1 deletion src/cc/frontends/b/codegen_llvm.cc
@@ -399,7 +399,7 @@ StatusTuple CodegenLLVM::visit_packet_expr_node(PacketExprNode *n) {
expr_ = B.CreateCall(load_fn, vector<Value *>({skb_ptr8, skb_hdr_offset,
B.getInt64(bit_offset & 0x7), B.getInt64(bit_width)}));
// this generates extra trunc insns whereas the bpf.load fns already
-// trunc the values internally in the bpf interpeter
+// trunc the values internally in the bpf interpreter
//expr_ = B.CreateTrunc(pop_expr(), B.getIntNTy(bit_width));
}
} else {
4 changes: 2 additions & 2 deletions src/cc/frontends/b/parser.yy
@@ -213,9 +213,9 @@ block
;

enter_varscope : /* empty */ { $$ = parser.scopes_->enter_var_scope(); } ;
-exit_varscope : /* emtpy */ { $$ = parser.scopes_->exit_var_scope(); } ;
+exit_varscope : /* empty */ { $$ = parser.scopes_->exit_var_scope(); } ;
enter_statescope : /* empty */ { $$ = parser.scopes_->enter_state_scope(); } ;
-exit_statescope : /* emtpy */ { $$ = parser.scopes_->exit_state_scope(); } ;
+exit_statescope : /* empty */ { $$ = parser.scopes_->exit_state_scope(); } ;

struct_decl
: TSTRUCT ident TLBRACE struct_decl_stmts TRBRACE
2 changes: 1 addition & 1 deletion src/cc/frontends/p4/compiler/ebpfTable.py
@@ -136,7 +136,7 @@ def serializeType(self, serializer, keyTypeName):
# Sort fields in decreasing size; this will ensure that
# there is no padding.
# Padding may cause the ebpf verification to fail,
-# since padding fields are not initalized
+# since padding fields are not initialized
fieldOrder = sorted(
self.fields, key=EbpfTableKey.fieldRank, reverse=True)
for f in fieldOrder:
2 changes: 1 addition & 1 deletion src/cc/libbpf.c
@@ -516,7 +516,7 @@ int bcc_prog_load_xattr(struct bpf_load_program_attr *attr, int prog_len,

if (attr->log_level > 0) {
if (log_buf_size > 0) {
-// Use user-provided log buffer if availiable.
+// Use user-provided log buffer if available.
log_buf[0] = 0;
attr_log_buf = log_buf;
attr_log_buf_size = log_buf_size;
2 changes: 1 addition & 1 deletion src/cc/libbpf.h
@@ -59,7 +59,7 @@ int bpf_get_next_key(int fd, void *key, void *next_key);
* it will not to any additional memory allocation.
* - Otherwise, it will allocate an internal temporary buffer for log message
* printing, and continue to attempt increase that allocated buffer size if
-* initial attemp was insufficient in size.
+* initial attempt was insufficient in size.
*/
int bcc_prog_load(enum bpf_prog_type prog_type, const char *name,
const struct bpf_insn *insns, int prog_len,
2 changes: 1 addition & 1 deletion tests/cc/test_perf_event.cc
@@ -25,7 +25,7 @@
TEST_CASE("test read perf event", "[bpf_perf_event]") {
// The basic bpf_perf_event_read is supported since Kernel 4.3. However in that
// version it only supported HARDWARE and RAW events. On the other hand, our
-// tests running on Jenkins won't have availiable HARDWARE counters since they
+// tests running on Jenkins won't have available HARDWARE counters since they
// are running on VMs. The support of other types of events such as SOFTWARE are
// only added since Kernel 4.13, hence we can only run the test since that.
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 13, 0)
4 changes: 2 additions & 2 deletions tests/lua/luaunit.lua
@@ -36,7 +36,7 @@ M.VERBOSITY_VERBOSE = 20
-- set EXPORT_ASSERT_TO_GLOBALS to have all asserts visible as global values
-- EXPORT_ASSERT_TO_GLOBALS = true

--- we need to keep a copy of the script args before it is overriden
+-- we need to keep a copy of the script args before it is overridden
local cmdline_argv = rawget(_G, "arg")

M.FAILURE_PREFIX = 'LuaUnit test FAILURE: ' -- prefix string for failed tests
@@ -2136,7 +2136,7 @@ end
end
-- class LuaUnit

--- For compatbility with LuaUnit v2
+-- For compatibility with LuaUnit v2
M.run = M.LuaUnit.run
M.Run = M.LuaUnit.run

2 changes: 1 addition & 1 deletion tests/python/include/folly/tracing/StaticTracepoint-ELF.h
@@ -52,7 +52,7 @@
#define FOLLY_SDT_ARGSIZE(x) (FOLLY_SDT_ISARRAY(x) ? sizeof(void*) : sizeof(x))

// Format of each probe arguments as operand.
-// Size of the arugment tagged with FOLLY_SDT_Sn, with "n" constraint.
+// Size of the argument tagged with FOLLY_SDT_Sn, with "n" constraint.
// Value of the argument tagged with FOLLY_SDT_An, with configured constraint.
#define FOLLY_SDT_ARG(n, x) \
[FOLLY_SDT_S##n] "n" ((size_t)FOLLY_SDT_ARGSIZE(x)), \
2 changes: 1 addition & 1 deletion tools/cachetop.py
@@ -107,7 +107,7 @@ def get_processes_stats(
misses = (apcl + apd)

# rtaccess is the read hit % during the sample period.
-# wtaccess is the write hit % during the smaple period.
+# wtaccess is the write hit % during the sample period.
if mpa > 0:
rtaccess = float(mpa) / (access + misses)
if apcl > 0:
2 changes: 1 addition & 1 deletion tools/capable.py
@@ -189,7 +189,7 @@ def __getattr__(self, name):
"TIME", "UID", "PID", "COMM", "CAP", "NAME", "AUDIT"))

def stack_id_err(stack_id):
-# -EFAULT in get_stackid normally means the stack-trace is not availible,
+# -EFAULT in get_stackid normally means the stack-trace is not available,
# Such as getting kernel stack trace in userspace code
return (stack_id < 0) and (stack_id != -errno.EFAULT)

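The stack_id_err helper fixed here also appears in klockstat, offcputime, and offwaketime below; a short sketch of how it is typically used when printing stacks (table and variable names are illustrative, following the pattern shown in old/profile.py later in this diff):
```Python
import errno

def stack_id_err(stack_id):
    # -EFAULT just means the stack trace is unavailable (e.g. requesting a
    # kernel stack for a purely user-space event); any other negative value
    # is a real error
    return (stack_id < 0) and (stack_id != -errno.EFAULT)

def print_kernel_stack(b, stack_traces, stack_id):
    if stack_id_err(stack_id):
        print("    [Missed Kernel Stack]")
    elif stack_id >= 0:
        for addr in stack_traces.walk(stack_id):
            print("    %s" % b.ksym(addr))
```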
4 changes: 2 additions & 2 deletions tools/compactsnoop.py
@@ -329,10 +329,10 @@ def compact_result_to_str(status):
# COMPACT_CONTINUE: compaction should continue to another pageblock
4: "continue",
# COMPACT_COMPLETE: The full zone was compacted scanned but wasn't
-# successfull to compact suitable pages.
+# successful to compact suitable pages.
5: "complete",
# COMPACT_PARTIAL_SKIPPED: direct compaction has scanned part of the
-# zone but wasn't successfull to compact suitable pages.
+# zone but wasn't successful to compact suitable pages.
6: "partial_skipped",
# COMPACT_CONTENDED: compaction terminated prematurely due to lock
# contentions
4 changes: 2 additions & 2 deletions tools/compactsnoop_example.txt
@@ -65,9 +65,9 @@ or (kernel 4.7 and above)
3: "no_suitable_page",
# COMPACT_CONTINUE: compaction should continue to another pageblock
4: "continue",
-# COMPACT_COMPLETE: The full zone was compacted scanned but wasn't successfull to compact suitable pages.
+# COMPACT_COMPLETE: The full zone was compacted scanned but wasn't successful to compact suitable pages.
5: "complete",
-# COMPACT_PARTIAL_SKIPPED: direct compaction has scanned part of the zone but wasn't successfull to compact suitable pages.
+# COMPACT_PARTIAL_SKIPPED: direct compaction has scanned part of the zone but wasn't successful to compact suitable pages.
6: "partial_skipped",
# COMPACT_CONTENDED: compaction terminated prematurely due to lock contentions
7: "contended",
4 changes: 2 additions & 2 deletions tools/criticalstat_example.txt
@@ -1,9 +1,9 @@
Demonstrations of criticalstat: Find long atomic critical sections in the kernel.

-criticalstat traces and reports occurences of atomic critical sections in the
+criticalstat traces and reports occurrences of atomic critical sections in the
kernel with useful stacktraces showing the origin of them. Such critical
sections frequently occur due to use of spinlocks, or if interrupts or
-preemption were explicity disabled by a driver. IRQ routines in Linux are also
+preemption were explicitly disabled by a driver. IRQ routines in Linux are also
executed with interrupts disabled. There are many reasons. Such critical
sections are a source of long latency/responsive issues for real-time systems.

12 changes: 6 additions & 6 deletions tools/inject.py
@@ -75,7 +75,7 @@ def _get_if_top(self):
else:
early_pred = "bpf_get_prandom_u32() > %s" % str(int((1<<32)*Probe.probability))
# init the map
-# dont do an early exit here so the singular case works automatically
+# don't do an early exit here so the singular case works automatically
# have an early exit for probability option
enter = """
/*
@@ -112,7 +112,7 @@ def _get_heading(self):
self.func_name = self.event + ("_entry" if self.is_entry else "_exit")
func_sig = "struct pt_regs *ctx"

-# assume theres something in there, no guarantee its well formed
+# assume there's something in there, no guarantee its well formed
if right > left + 1 and self.is_entry:
func_sig += ", " + self.func[left + 1:right]

@@ -209,13 +209,13 @@ def _generate_bottom(self):
pred = self.preds[0][0]
text = self._get_heading() + """
{
-u32 overriden = 0;
+u32 overridden = 0;
int zero = 0;
u32* val;
val = count.lookup(&zero);
if (val)
-overriden = *val;
+overridden = *val;
/*
* preparation for predicate, if necessary
@@ -224,7 +224,7 @@ def _generate_bottom(self):
/*
* If this is the only call in the chain and predicate passes
*/
-if (%s == 1 && %s && overriden < %s) {
+if (%s == 1 && %s && overridden < %s) {
count.increment(zero);
bpf_override_return(ctx, %s);
return 0;
@@ -239,7 +239,7 @@ def _generate_bottom(self):
/*
* If all conds have been met and predicate passes
*/
-if (p->conds_met == %s && %s && overriden < %s) {
+if (p->conds_met == %s && %s && overridden < %s) {
count.increment(zero);
bpf_override_return(ctx, %s);
}
2 changes: 1 addition & 1 deletion tools/klockstat.py
@@ -47,7 +47,7 @@ def positive_nonzero_int(val):
return ival

def stack_id_err(stack_id):
-# -EFAULT in get_stackid normally means the stack-trace is not availible,
+# -EFAULT in get_stackid normally means the stack-trace is not available,
# Such as getting kernel stack trace in userspace code
return (stack_id < 0) and (stack_id != -errno.EFAULT)

2 changes: 1 addition & 1 deletion tools/nfsslower.py
@@ -22,7 +22,7 @@
#
# This tool uses kprobes to instrument the kernel for entry and exit
# information, in the future a preferred way would be to use tracepoints.
-# Currently there are'nt any tracepoints available for nfs_read_file,
+# Currently there aren't any tracepoints available for nfs_read_file,
# nfs_write_file and nfs_open_file, nfs_getattr does have entry and exit
# tracepoints but we chose to use kprobes for consistency
#
2 changes: 1 addition & 1 deletion tools/offcputime.py
@@ -36,7 +36,7 @@ def positive_nonzero_int(val):
return ival

def stack_id_err(stack_id):
-# -EFAULT in get_stackid normally means the stack-trace is not availible,
+# -EFAULT in get_stackid normally means the stack-trace is not available,
# Such as getting kernel stack trace in userspace code
return (stack_id < 0) and (stack_id != -errno.EFAULT)

2 changes: 1 addition & 1 deletion tools/offwaketime.py
@@ -36,7 +36,7 @@ def positive_nonzero_int(val):
return ival

def stack_id_err(stack_id):
-# -EFAULT in get_stackid normally means the stack-trace is not availible,
+# -EFAULT in get_stackid normally means the stack-trace is not available,
# Such as getting kernel stack trace in userspace code
return (stack_id < 0) and (stack_id != -errno.EFAULT)

4 changes: 2 additions & 2 deletions tools/old/profile.py
@@ -19,7 +19,7 @@
# wrong, note that the first line is the IP, and then the (skipped) stack.
#
# Note: if another perf-based sampling session is active, the output may become
-# polluted with their events. On older kernels, the ouptut may also become
+# polluted with their events. On older kernels, the output may also become
# polluted with tracing sessions (when the kprobe is used instead of the
# tracepoint). If this becomes a problem, logic can be added to filter events.
#
@@ -300,7 +300,7 @@ def aksym(addr):
counts = b.get_table("counts")
stack_traces = b.get_table("stack_traces")
for k, v in sorted(counts.items(), key=lambda counts: counts[1].value):
-# handle get_stackid erorrs
+# handle get_stackid errors
if (not args.user_stacks_only and k.kernel_stack_id < 0 and
k.kernel_stack_id != -errno.EFAULT) or \
(not args.kernel_stacks_only and k.user_stack_id < 0 and