author      Ian Rogers <irogers@google.com>                 2020-05-15 15:17:32 -0700
committer   Arnaldo Carvalho de Melo <acme@redhat.com>      2020-05-28 10:03:26 -0300
commit      ded80bda8bc9bb65a344b79b36d5acf45a907b25 (patch)
tree        ec45adaf10e85930775375755e4d171ab94b3c96 /tools/perf/tests/pmu-events.c
parent      eee19501926d98c894f15c1644b815c7d1fbd187 (diff)
perf expr: Migrate expr ids table to a hashmap
Use a hashmap between a char* string and a double* value. While bpf's
hashmap entries are size_t in size, we can't guarantee sizeof(size_t) >=
sizeof(double). Avoid a memory allocation when gathering ids by making
0.0 a special value encoded as NULL.
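The encoding can be sketched with a pair of hypothetical helpers (illustration only, not the actual expr.c code): a non-zero value is boxed in a heap-allocated double whose address goes into the map's pointer-sized value slot, while 0.0 is stored as a NULL value pointer, so gathering ids (where every value starts out as 0.0) never allocates.

    #include <errno.h>
    #include <stdlib.h>

    /*
     * Illustration only: box a double so its address fits in the hashmap's
     * pointer-sized value slot.  0.0 is encoded as NULL so the common
     * "id gathered, no value yet" case needs no allocation.
     */
    static int val_box(double val, double **slot)
    {
            if (val == 0.0) {
                    *slot = NULL;
                    return 0;
            }
            *slot = malloc(sizeof(**slot));
            if (!*slot)
                    return -ENOMEM;
            **slot = val;
            return 0;
    }

    static double val_unbox(const double *slot)
    {
            return slot ? *slot : 0.0;      /* NULL decodes back to 0.0 */
    }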
Original map suggestion by Andi Kleen:
https://lore.kernel.org/lkml/20200224210308.GQ160988@tassilo.jf.intel.com/
and seconded by Jiri Olsa:
https://lore.kernel.org/lkml/20200423112915.GH1136647@krava/
Committer notes:
There are fixes that need to land upstream before we can use libbpf's
headers; for now, use our copy unconditionally, since the data structures
at this point are exactly the same, so no problem.
When the fixes for libbpf's hashmap land upstream, we can fix this up.
Testing it:
Building with LIBBPF=1, i.e. the default:
$ perf -vv | grep -i bpf
bpf: [ on ] # HAVE_LIBBPF_SUPPORT
$ nm ~/bin/perf | grep -i libbpf_ | wc -l
39
$ nm ~/bin/perf | grep -i hashmap_ | wc -l
17
$
Explicitly building without LIBBPF:
$ perf -vv | grep -i bpf
bpf: [ OFF ] # HAVE_LIBBPF_SUPPORT
$
$ nm ~/bin/perf | grep -i libbpf_ | wc -l
0
$ nm ~/bin/perf | grep -i hashmap_ | wc -l
9
$
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrii Nakryiko <andriin@fb.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Kim Phillips <kim.phillips@amd.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: bpf@vger.kernel.org
Cc: kp singh <kpsingh@chromium.org>
Cc: netdev@vger.kernel.org
Link: http://lore.kernel.org/lkml/20200515221732.44078-8-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Diffstat (limited to 'tools/perf/tests/pmu-events.c')
-rw-r--r--   tools/perf/tests/pmu-events.c   25
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/tools/perf/tests/pmu-events.c b/tools/perf/tests/pmu-events.c
index 6baff670525c..ab64b4a4e284 100644
--- a/tools/perf/tests/pmu-events.c
+++ b/tools/perf/tests/pmu-events.c
@@ -437,8 +437,6 @@ static int test_parsing(void)
         struct pmu_events_map *map;
         struct pmu_event *pe;
         int i, j, k;
-        const char **ids;
-        int idnum;
         int ret = 0;
         struct expr_parse_ctx ctx;
         double result;
@@ -450,29 +448,34 @@ static int test_parsing(void)
                         break;
                 j = 0;
                 for (;;) {
+                        struct hashmap_entry *cur;
+                        size_t bkt;
+
                         pe = &map->table[j++];
                         if (!pe->name && !pe->metric_group && !pe->metric_name)
                                 break;
                         if (!pe->metric_expr)
                                 continue;
-                        if (expr__find_other(pe->metric_expr, NULL,
-                                                &ids, &idnum, 0) < 0) {
+                        expr__ctx_init(&ctx);
+                        if (expr__find_other(pe->metric_expr, NULL, &ctx, 0)
+                                  < 0) {
                                 expr_failure("Parse other failed", map, pe);
                                 ret++;
                                 continue;
                         }
-                        expr__ctx_init(&ctx);

                         /*
                          * Add all ids with a made up value. The value may
                          * trigger divide by zero when subtracted and so try to
                          * make them unique.
                          */
-                        for (k = 0; k < idnum; k++)
-                                expr__add_id(&ctx, ids[k], k + 1);
+                        k = 1;
+                        hashmap__for_each_entry((&ctx.ids), cur, bkt)
+                                expr__add_id(&ctx, strdup(cur->key), k++);

-                        for (k = 0; k < idnum; k++) {
-                                if (check_parse_id(ids[k], map == cpus_map, pe))
+                        hashmap__for_each_entry((&ctx.ids), cur, bkt) {
+                                if (check_parse_id(cur->key, map == cpus_map,
+                                                   pe))
                                         ret++;
                         }

@@ -480,9 +483,7 @@ static int test_parsing(void)
                                 expr_failure("Parse failed", map, pe);
                                 ret++;
                         }
-                        for (k = 0; k < idnum; k++)
-                                zfree(&ids[k]);
-                        free(ids);
+                        expr__ctx_clear(&ctx);
                 }
         }
         /* TODO: fail when not ok */
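For reference, the lifecycle of the context after this change, reduced to only the calls visible in the hunks above, looks roughly like the sketch below. The wrapper function name and header paths are assumptions, error handling is trimmed, and it is meant to be built inside the tools/perf tree.

    #include <string.h>

    #include "util/expr.h"          /* struct expr_parse_ctx, expr__* (assumed path) */
    #include "util/hashmap.h"       /* hashmap__for_each_entry (assumed path) */

    /* Gather the ids referenced by a metric expression and give each a dummy value. */
    static int gather_ids(const char *metric_expr)
    {
            struct expr_parse_ctx ctx;
            struct hashmap_entry *cur;
            size_t bkt;
            int k = 1;

            expr__ctx_init(&ctx);
            if (expr__find_other(metric_expr, NULL, &ctx, 0) < 0)
                    return -1;

            /* ctx.ids now maps each id string to a (boxed) value */
            hashmap__for_each_entry((&ctx.ids), cur, bkt)
                    expr__add_id(&ctx, strdup(cur->key), k++);

            expr__ctx_clear(&ctx);  /* frees the keys and any boxed values */
            return 0;
    }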