changed CHANGELOG.md
 
@@ -1,3 +1,29 @@
+ # 0.8.0 (2017-05-07)
+
+ Another smaller release that focuses on adding type specs and structs in appropriate places, along with fixing a couple of small bugs.
+
+ ## Features (User Facing)
+ * Providing an unrecognized configuration option (say `runNtime` instead of `runtime`) will now raise an exception
+ * Durations in the configuration will now be scaled appropriately (minutes, microseconds etc.)
+ * Major functions are type specced for your viewing pleasure in the docs and your dialyzer pleasure at type check time.
+
+ ## Bugfixes (User Facing)
+ * In 0.7.0 statistics generation might time out when millions of run times were captured, as computing them took longer than 5 seconds. This is fixed by waiting infinitely - thanks @devonestes for the [report](https://github.com/PragTob/benchee/issues/71).
+ * Unintended line break in the fast function warning removed
+ * All necessary dependencies added to `:applications` (deep_merge was missing)
+
+ ## Breaking Changes (User Facing)
+ * Dropped support for Elixir 1.2; Elixir 1.3+ is now required
+ * `Benchee.Config` was renamed to `Benchee.Configuration` (important when you use the more verbose API or used it in a plugin)
+
+ ## Features (Plugins)
+ * Major public interfacing functions are now typespecced!
+ * A couple of major data structures are now proper structs, e.g. `Benchee.Suite`, `Benchee.Configuration`, `Benchee.Statistics`
+
+ ## Breaking Changes (Plugins)
+ * The `config` key is now `configuration` to go along with the Configuration name change
+ * As `Benchee.Configuration` is a proper struct now, arbitrary keys don't end up in it anymore. Custom data for plugins should be passed in through `formatter_options` or `assigns`. Existing plugin keys (`csv`, `json`, `html` and `console`) are automatically put into the `formatter_options` key space for now.
+
# 0.7.0 (April 23, 2017)

Smaller convenience features in here - the biggest part of work went into breaking reports in [benchee_html](https://github.com/PragTob/benchee_html) apart :)
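The plugin-facing breaking change above may be easier to see as a short before/after sketch. This is illustrative only - the job function is made up, and the `html: [file: ...]` option is borrowed from the README's `benchee_html` example:

```elixir
# Before 0.8.0: plugin options lived at the top level of the config
Benchee.run(
  %{"my job" => fn -> Enum.sort(Enum.shuffle(1..1_000)) end},
  formatters: [&Benchee.Formatters.HTML.output/1],
  html: [file: "samples_output/my.html"]
)

# From 0.8.0 on: custom plugin data goes under formatter_options
# (known keys like :html are still moved there automatically for now)
Benchee.run(
  %{"my job" => fn -> Enum.sort(Enum.shuffle(1..1_000)) end},
  formatters: [&Benchee.Formatters.HTML.output/1],
  formatter_options: [html: [file: "samples_output/my.html"]]
)
```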
changed README.md

@@ -48,7 +48,7 @@ The aforementioned [plugins](#plugins) like [benchee_html](https://github.com/Pr
## Features

* first runs the functions for a given warmup time without recording the results, to simulate a _"warm"_ running system
- * plugin/extensible friendly architecture so you can use different formatters to generate CSV and more
+ * plugin/extensible friendly architecture so you can use different formatters to generate CSV, HTML and more
* well tested
* well documented
* execute benchmark jobs in parallel to gather more results in the same time, or simulate a system under load

@@ -64,7 +64,7 @@ Provides you with the following **statistical data**:

Benchee does not:

- * Keep results of previous runs and compare them, if you want that have a look at [benchfella](https://github.com/alco/benchfella) or [bmark](https://github.com/joekain/bmark)
+ * Keep results of previous runs and compare them (yet); if you want that, have a look at [benchfella](https://github.com/alco/benchfella) or [bmark](https://github.com/joekain/bmark) until benchee gets that feature :)

Benchee only has a small runtime dependency on `deep_merge` for merging configuration and is aimed at being the core benchmarking logic. Further functionality is provided through plugins that then pull in dependencies, such as HTML generation and CSV export. Check out the [available plugins](#plugins)!

@@ -74,7 +74,7 @@ Add benchee to your list of dependencies in `mix.exs`:

```elixir
defp deps do
- [{:benchee, "~> 0.6", only: :dev}]
+ [{:benchee, "~> 0.8", only: :dev}]
end
```

@@ -131,13 +131,13 @@ Benchee takes a wealth of configuration options, in the most common `Benchee.run
Benchee.run(%{"some function" => fn -> magic end}, print: [benchmarking: false])
```

- The available options are the following (also documented in [hexdocs](https://hexdocs.pm/benchee/Benchee.Config.html#init/1)).
+ The available options are the following (also documented in [hexdocs](https://hexdocs.pm/benchee/Benchee.Configuration.html#init/1)).

- * `warmup` - the time in seconds for which a benchmark should be run without measuring times before real measurements start. This simulates a _"warm"_ running system. Defaults to 2.
- * `time` - the time in seconds for how long each individual benchmark should be run and measured. Defaults to 5.
- * `inputs` - a map from descriptive input names to some different input, your benchmarking jobs will then be run with each of these inputs. For this to work your benchmarking function gets the current input passed in as an argument into the function. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs)
- * `parallel` - each the function of each job will be executed in `parallel` number processes. If `parallel` is `4` then 4 processes will be spawned that all execute the _same_ function for the given time. When these finish/the time is up 4 new processes will be spawned for the next job/function. This gives you more data in the same time, but also puts a load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking). Defaults to 1 (no parallel execution).
- * `formatters` - list of formatter functions you'd like to run to output the benchmarking results of the suite when using `Benchee.run/2`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter calling `Benchee.Formatters.Console.output/1`. See [Formatters](#formatters)
+ * `warmup` - the time in seconds for which a benchmarking job should be run without measuring times before real measurements start. This simulates a _"warm"_ running system. Defaults to 2.
+ * `time` - the time in seconds for how long each individual benchmarking job should be run and measured. Defaults to 5.
+ * `inputs` - a map from descriptive input names to some different input, your benchmarking jobs will then be run with each of these inputs. For this to work your benchmarking function gets the current input passed in as an argument into the function. Defaults to `nil`, aka no input specified and functions are called without an argument. See [Inputs](#inputs).
+ * `parallel` - the function of each benchmarking job will be executed in `parallel` processes. If `parallel: 4`, then 4 processes will be spawned that all execute the _same_ function for the given time. When these finish/the time is up, 4 new processes will be spawned for the next job/function. This gives you more data in the same time, but also puts a load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking). Defaults to 1 (no parallel execution).
+ * `formatters` - list of formatter functions you'd like to run to output the benchmarking results of the suite when using `Benchee.run/2`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter calling `Benchee.Formatters.Console.output/1`. See [Formatters](#formatters).
* `print` - a map from atoms to `true` or `false` to configure if the output identified by the atom will be printed during the standard Benchee benchmarking process. All options are enabled by default (true). Options are:
* `:benchmarking` - print when Benchee starts benchmarking a new job (Benchmarking name ..)
* `:configuration` - a summary of configured benchmarking options including estimated total run time is printed before benchmarking starts

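The options above can be combined freely in one `Benchee.run/2` call. A small sketch, assuming a hypothetical sorting job (durations are in seconds, per the option docs):

```elixir
# Illustrative only: the job name, function and values are made up
list = Enum.shuffle(1..1_000)

Benchee.run(
  %{"Enum.sort" => fn -> Enum.sort(list) end},
  warmup: 1,                      # 1 second of unmeasured warmup
  time: 3,                        # 3 seconds of measured runs
  parallel: 2,                    # two processes run the job simultaneously
  print: [fast_warning: false]    # silence the "too fast" warnings
)
```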
@@ -161,7 +161,7 @@ The available options are the following (also documented in [hexdocs](https://he

### Inputs

- `:inputs` is a very useful configuration that allows you to run the same benchmarking with different inputs. Functions can have different performance characteristics on differently shaped inputs be that structure or input size.
+ `:inputs` is a very useful configuration that allows you to run the same benchmarking jobs with different inputs. Functions can have different performance characteristics on differently shaped inputs - be that structure or input size.

One of such cases is comparing tail-recursive and body-recursive implementations of `map`. More information in the [repository with the benchmark](https://github.com/PragTob/elixir_playground/blob/master/bench/tco_blog_post_focussed_inputs.exs) and the [blog post](https://pragtob.wordpress.com/2016/06/16/tail-call-optimization-in-elixir-erlang-not-as-efficient-and-important-as-you-probably-think/).

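The `:inputs` mechanic can be sketched like this: each job runs once per input, and the current input is passed to the benchmarking function as its argument. The job names and sizes below are made up for illustration:

```elixir
# Hypothetical example: two implementations benchmarked against two input sizes
Benchee.run(
  %{
    "flat_map" =>
      fn input -> Enum.flat_map(input, fn i -> [i, i * 2] end) end,
    "map |> flatten" =>
      fn input -> input |> Enum.map(fn i -> [i, i * 2] end) |> List.flatten() end
  },
  inputs: %{
    "Small" => Enum.to_list(1..1_000),
    "Big"   => Enum.to_list(1..100_000)
  }
)
```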
@@ -232,6 +232,37 @@ As you can see, the tail-recursive approach is significantly faster for the _Big

Therefore, I **highly recommend** using this feature and checking different realistically structured and sized inputs for the functions you benchmark!

+ ### Interpreted vs. Compiled Code
+
+ As benchmarks are written in `.exs` files on the top level, the code is interpreted and not compiled. This is suboptimal as benchmarks should be **as close to the production system as possible** - otherwise benchmarks will lie to you. E.g. benchmarks _should not_ be executed within an `iex` session.
+
+ In the examples above, all benchmarking functions directly call into code that is defined in another module and hence compiled, so that isn't a problem - it is also recommended, as you can test the functions you benchmark to see that they actually do what you want them to.
+
+ When writing _more complex_ benchmarking jobs (longer, more invocations, lots of pattern matches...) that doesn't hold true and it is **recommended to wrap the benchmark job definition in a module so that they are compiled:**
+
+ ```elixir
+ defmodule MyBenchmark do
+   def benches do
+     %{"x" => fn -> ... end,
+       "y" => fn -> ... end}
+   end
+ end
+
+ Benchee.run(MyBenchmark.benches())
+ ```
+
+ or even:
+
+ ```elixir
+ defmodule MyBenchmark do
+   def benchmark do
+     Benchee.run(...)
+   end
+ end
+
+ MyBenchmark.benchmark()
+ ```
+

### Formatters

Among all the configuration options, one that you probably want to use are the formatters. Formatters are functions that take one argument (the benchmarking suite with all its results) and then generate some output. You can specify multiple formatters to run for the benchmarking run.

@@ -250,7 +281,7 @@ Benchee.run(%{
  &Benchee.Formatters.HTML.output/1,
  &Benchee.Formatters.Console.output/1
],
- html: [file: "samples_output/my.html"],
+ formatter_options: [html: [file: "samples_output/my.html"]],
)

```
@@ -288,13 +319,14 @@ Benchee.init(time: 3)
This is a take on the _functional transformation_ of data applied to benchmarks here:

1. Configure the benchmarking suite to be run
- 2. Define the functions to be benchmarked
- 3. Run n benchmarks with the given configuration gathering raw run times per function
- 4. Generate statistics based on the raw run times
- 5. Format the statistics in a suitable way
- 6. Output the formatted statistics
+ 2. Gather system data
+ 3. Define the functions to be benchmarked
+ 4. Run n benchmarks with the given configuration gathering raw run times per function
+ 5. Generate statistics based on the raw run times
+ 6. Format the statistics in a suitable way
+ 7. Output the formatted statistics

- This is also part of the **official API** and allows for more **fine grained control**. (It's also what Benchee does internally when you use `Benchee.run/2`).
+ This is also part of the **official API** and allows for more **fine grained control**. (It's also what benchee does internally when you use `Benchee.run/2`).

Do you just want to have all the raw run times? Just work with the result of `Benchee.measure/1`! Just want to have the calculated statistics and use your own formatting? Grab the result of `Benchee.statistics/1`! Or, maybe you want to write to a file or send an HTTP post to some online service? Just use the `Benchee.Formatters.Console.format/1` and then send the result where you want.

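The seven steps of the updated list map onto the verbose pipeline API roughly like this. A sketch only - the job is hypothetical, and how you consume the output of `Benchee.Formatters.Console.format/1` is up to you:

```elixir
# One pipeline stage per step of the functional transformation
Benchee.init(time: 3, warmup: 1)                     # 1. configure the suite
|> Benchee.system()                                   # 2. gather system data
|> Benchee.benchmark("sort", fn ->                    # 3. define the jobs
     Enum.sort(Enum.shuffle(1..1_000))
   end)
|> Benchee.measure()                                  # 4. record raw run times
|> Benchee.statistics()                               # 5. compute statistics
|> Benchee.Formatters.Console.format()                # 6. format the statistics
|> IO.puts()                                          # 7. output wherever you like
```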
@@ -302,7 +334,7 @@ This way Benchee should be flexible enough to suit your needs and be extended at

## Plugins

- Packages that work with Benchee to provide additional functionality.
+ Packages that work with benchee to provide additional functionality.

* [benchee_html](//github.com/PragTob/benchee_html) - generate HTML including a data table and many different graphs with the possibility to export individual graphs as PNG :)
* [benchee_csv](//github.com/PragTob/benchee_csv) - generate CSV from your Benchee benchmark results so you can import them into your favorite spreadsheet tool and make fancy graphs

@@ -324,7 +356,7 @@ If you're into watching videos of conference talks and also want to learn more a

## Contributing

- Contributions to Benchee are very welcome! Bug reports, documentation, spelling corrections, whole features, feature ideas, bugfixes, new plugins, fancy graphics... all of those (and probably more) are much appreciated contributions!
+ Contributions to benchee are very welcome! Bug reports, documentation, spelling corrections, whole features, feature ideas, bugfixes, new plugins, fancy graphics... all of those (and probably more) are much appreciated contributions!

Please respect the [Code of Conduct](//github.com/PragTob/benchee/blob/master/CODE_OF_CONDUCT.md).

@@ -341,4 +373,5 @@ A couple of (hopefully) helpful points:

* `mix deps.get` to install dependencies
* `mix test` to run tests or `mix test.watch` to run them continuously while you change files
+ * `mix dialyzer` to run dialyzer for type checking, might take a while on the first invocation (try building the PLTs first with `mix dialyzer --plt`)
* `mix credo` or `mix credo --strict` to find code style problems (not too strict with the 80 width limit for sample output in the docs)
changed hex_metadata.config

@@ -2,18 +2,18 @@
{<<"build_tools">>,[<<"mix">>]}.
{<<"description">>,
<<"Versatile (micro) benchmarking that is extensible. Get statistics such as:\naverage, iterations per second, standard deviation and the median.">>}.
- {<<"elixir">>,<<"~> 1.2">>}.
+ {<<"elixir">>,<<"~> 1.3">>}.
{<<"files">>,
[<<"lib/benchee.ex">>,<<"lib/benchee/benchmark.ex">>,
- <<"lib/benchee/config.ex">>,<<"lib/benchee/conversion/count.ex">>,
+ <<"lib/benchee/configuration.ex">>,<<"lib/benchee/conversion/count.ex">>,
<<"lib/benchee/conversion/deviation_percent.ex">>,
<<"lib/benchee/conversion/duration.ex">>,
<<"lib/benchee/conversion/format.ex">>,
<<"lib/benchee/conversion/scale.ex">>,<<"lib/benchee/conversion/unit.ex">>,
<<"lib/benchee/formatters/console.ex">>,
<<"lib/benchee/output/benchmark_printer.ex">>,
- <<"lib/benchee/statistics.ex">>,<<"lib/benchee/system.ex">>,
- <<"lib/benchee/utility/deep_convert.ex">>,
+ <<"lib/benchee/statistics.ex">>,<<"lib/benchee/suite.ex">>,
+ <<"lib/benchee/system.ex">>,<<"lib/benchee/utility/deep_convert.ex">>,
<<"lib/benchee/utility/file_creation.ex">>,
<<"lib/benchee/utility/map_value.ex">>,
<<"lib/benchee/utility/repeat_n.ex">>,<<"mix.exs">>,<<"README.md">>,

@@ -29,4 +29,4 @@
{<<"name">>,<<"deep_merge">>},
{<<"optional">>,false},
{<<"requirement">>,<<"~> 0.1">>}]]}.
- {<<"version">>,<<"0.7.0">>}.
+ {<<"version">>,<<"0.8.0">>}.
changed lib/benchee.ex

@@ -12,8 +12,8 @@ defmodule Benchee do

* jobs - a map from descriptive benchmark job name to a function to be
executed and benchmarked
- * config - configuration options to alter what Benchee does, see
- `Benchee.Config.init/1` for documentation of the available options.
+ * configuration - configuration options to alter what Benchee does, see
+ `Benchee.Configuration.init/1` for documentation of the available options.

## Examples

@@ -46,15 +46,16 @@ defmodule Benchee do
|> Benchee.statistics
end

- defp output_results(suite = %{config: %{formatters: formatters}}) do
+ defp output_results(suite = %{configuration: %{formatters: formatters}}) do
Enum.each formatters, fn(output_function) ->
output_function.(suite)
end
+
suite
end

- defdelegate init(), to: Benchee.Config
- defdelegate init(config), to: Benchee.Config
+ defdelegate init(), to: Benchee.Configuration
+ defdelegate init(config), to: Benchee.Configuration
defdelegate system(suite), to: Benchee.System
defdelegate measure(suite), to: Benchee.Benchmark
defdelegate measure(suite, printer), to: Benchee.Benchmark
changed lib/benchee/benchmark.ex

@@ -7,18 +7,22 @@ defmodule Benchee.Benchmark do

alias Benchee.Utility.RepeatN
alias Benchee.Output.BenchmarkPrinter, as: Printer
+ alias Benchee.Suite
+
+ @type name :: String.t

@doc """
Adds the given function and its associated name to the benchmarking jobs to
be run in this benchmarking suite as a tuple `{name, function}` to the list
under the `:jobs` key.
"""
- def benchmark(suite = %{jobs: jobs}, name, function, printer \\ Printer) do
+ @spec benchmark(Suite.t, name, fun, module) :: Suite.t
+ def benchmark(suite = %Suite{jobs: jobs}, name, function, printer \\ Printer) do
if Map.has_key?(jobs, name) do
printer.duplicate_benchmark_warning name
suite
else
- %{suite | jobs: Map.put(jobs, name, function)}
+ %Suite{suite | jobs: Map.put(jobs, name, function)}
end
end

@@ -36,11 +40,12 @@ defmodule Benchee.Benchmark do
There will be `parallel` processes spawned exeuting the benchmark job in
parallel.
"""
- def measure(suite = %{jobs: jobs, config: config}, printer \\ Printer) do
+ @spec measure(Suite.t, module) :: Suite.t
+ def measure(suite = %Suite{jobs: jobs, configuration: config}, printer \\ Printer) do
printer.configuration_information(suite)
run_times = record_runtimes(jobs, config, printer)

- Map.put suite, :run_times, run_times
+ %Suite{suite | run_times: run_times}
end

@no_input :__no_input

@@ -88,7 +93,7 @@ defmodule Benchee.Benchmark do
print: %{fast_warning: fast_warning}},
printer) do
pmap 1..parallel, fn ->
- run_warmup function, input, warmup, printer
+ _ = run_warmup function, input, warmup, printer
measure_runtimes function, input, time, fast_warning, printer
end
end

removed lib/benchee/config.ex

@@ -1,190 +0,0 @@
- defmodule Benchee.Config do
- @moduledoc """
- Functions to handle the configuration of Benchee, exposes `init` function.
- """
-
- alias Benchee.Conversion.Duration
- alias Benchee.Utility.DeepConvert
-
- @doc """
- Returns the initial benchmark configuration for Benchee, composed of defaults
- and an optional custom configuration.
-
- Configuration times are given in seconds, but are converted to microseconds
- internally.
-
- Possible options:
-
- * `time` - total run time in seconds of a single benchmark (determines
- how often it is executed). Defaults to 5.
- * `warmup` - the time in seconds for which the benchmarking function
- should be run without gathering results. Defaults to 2.
- * `inputs` - a map from descriptive input names to some different input,
- your benchmarking jobs will then be run with each of these inputs. For this
- to work your benchmarking function gets the current input passed in as an
- argument into the function. Defaults to `nil`, aka no input specified and
- functions are called without an argument.
- * `parallel` - each the function of each job will be executed in
- `parallel` number processes. If `parallel` is `4` then 4 processes will be
- spawned that all execute the _same_ function for the given time. When these
- finish/the time is up 4 new processes will be spawned for the next
- job/function. This gives you more data in the same time, but also puts a
- load on the system interfering with benchmark results. For more on the pros
- and cons of parallel benchmarking [check the
- wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking).
- Defaults to 1 (no parallel execution).
- * `formatters` - list of formatter functions you'd like to run to output the
- benchmarking results of the suite when using `Benchee.run/2`. Functions need
- to accept one argument (which is the benchmarking suite with all data) and
- then use that to produce output. Used for plugins. Defaults to the builtin
- console formatter calling `Benchee.Formatters.Console.output/1`.
- * `print` - a map from atoms to `true` or `false` to configure if the
- output identified by the atom will be printed. All options are enabled by
- default (true). Options are:
- * `:benchmarking` - print when Benchee starts benchmarking a new job
- (Benchmarking name ..)
- * `:configuration` - a summary of configured benchmarking options
- including estimated total run time is printed before benchmarking starts
- * `:fast_warning` - warnings are displayed if functions are executed
- too fast leading to inaccurate measures
- * `console` - options for the built-in console formatter. Like the
- `print` options the boolean options are also enabled by default:
- * `:comparison` - if the comparison of the different benchmarking jobs
- (x times slower than) is shown (true/false)
- * `:unit_scaling` - the strategy for choosing a unit for durations and
- counts. When scaling a value, Benchee finds the "best fit" unit (the
- largest unit for which the result is at least 1). For example, 1_200_000
- scales to `1.2 M`, while `800_000` scales to `800 K`. The `unit_scaling`
- strategy determines how Benchee chooses the best fit unit for an entire
- list of values, when the individual values in the list may have different
- best fit units. There are four strategies, defaulting to `:best`:
- * `:best` - the most frequent best fit unit will be used, a tie
- will result in the larger unit being selected.
- * `:largest` - the largest best fit unit will be used (i.e. thousand
- and seconds if values are large enough).
- * `:smallest` - the smallest best fit unit will be used (i.e.
- millisecond and one)
- * `:none` - no unit scaling will occur. Durations will be displayed
- in microseconds, and counts will be displayed in ones (this is
- equivalent to the behaviour Benchee had pre 0.5.0)
-
- ## Examples
-
- iex> Benchee.init
- %{
- config:
- %{
- parallel: 1,
- time: 5_000_000,
- warmup: 2_000_000,
- inputs: nil,
- formatters: [&Benchee.Formatters.Console.output/1],
- print: %{
- benchmarking: true,
- fast_warning: true,
- configuration: true
- },
- console: %{ comparison: true, unit_scaling: :best }
- },
- jobs: %{}
- }
-
- iex> Benchee.init time: 1, warmup: 0.2
- %{
- config:
- %{
- parallel: 1,
- time: 1_000_000,
- warmup: 200_000.0,
- inputs: nil,
- formatters: [&Benchee.Formatters.Console.output/1],
- print: %{
- benchmarking: true,
- fast_warning: true,
- configuration: true
- },
- console: %{ comparison: true, unit_scaling: :best }
- },
- jobs: %{}
- }
-
- iex> Benchee.init %{time: 1, warmup: 0.2}
- %{
- config:
- %{
- parallel: 1,
- time: 1_000_000,
- warmup: 200_000.0,
- inputs: nil,
- formatters: [&Benchee.Formatters.Console.output/1],
- print: %{
- benchmarking: true,
- fast_warning: true,
- configuration: true
- },
- console: %{ comparison: true, unit_scaling: :best }
- },
- jobs: %{}
- }
-
- iex> Benchee.init(
- ...> parallel: 2,
- ...> time: 1,
- ...> warmup: 0.2,
- ...> formatters: [&IO.puts/2],
- ...> print: [fast_warning: false],
- ...> console: [unit_scaling: :smallest],
- ...> inputs: %{"Small" => 5, "Big" => 9999})
- %{
- config:
- %{
- parallel: 2,
- time: 1_000_000,
- warmup: 200_000.0,
- inputs: %{"Small" => 5, "Big" => 9999},
- formatters: [&IO.puts/2],
- print: %{
- benchmarking: true,
- fast_warning: false,
- configuration: true
- },
- console: %{ comparison: true, unit_scaling: :smallest }
- },
- jobs: %{}
- }
- """
- @default_config %{
- parallel: 1,
- time: 5,
- warmup: 2,
- formatters: [&Benchee.Formatters.Console.output/1],
- inputs: nil,
- print: %{
- benchmarking: true,
- configuration: true,
- fast_warning: true
- },
- console: %{
- comparison: true,
- unit_scaling: :best
- }
- }
- @time_keys [:time, :warmup]
- def init(config \\ %{}) do
- map_config = DeepConvert.to_map(config)
- config = @default_config
- |> DeepMerge.deep_merge(map_config)
- |> convert_time_to_micro_s
- :ok = :timer.start
- %{config: config, jobs: %{}}
- end
-
- defp convert_time_to_micro_s(config) do
- Enum.reduce @time_keys, config, fn(key, new_config) ->
- {_, new_config} = Map.get_and_update! new_config, key, fn(seconds) ->
- {seconds, Duration.microseconds({seconds, :second})}
- end
- new_config
- end
- end
- end
added lib/benchee/configuration.ex

@@ -0,0 +1,261 @@
+ defmodule Benchee.Configuration do
+ @moduledoc """
+ Functions to handle the configuration of Benchee, exposes `init` function.
+ """
+
+ alias Benchee.Conversion.Duration
+ alias Benchee.Utility.DeepConvert
+ alias Benchee.Suite
+
+ defstruct [
+ parallel: 1,
+ time: 5,
+ warmup: 2,
+ formatters: [&Benchee.Formatters.Console.output/1],
+ print: %{
+ benchmarking: true,
+ configuration: true,
+ fast_warning: true
+ },
+ inputs: nil,
+ # formatters should end up here but known once are still picked up at
+ # the top level for now
+ formatter_options: %{
+ console: %{
+ comparison: true,
+ unit_scaling: :best
+ }
+ },
+ # If you/your plugin/whatever needs it your data can go here
+ assigns: %{}
+ ]
+
+ @type t :: %__MODULE__{
+ parallel: integer,
+ time: number,
+ warmup: number,
+ formatters: [((Suite.t) -> Suite.t)],
+ print: map,
+ inputs: %{Suite.key => any} | nil,
+ formatter_options: map,
+ assigns: map
+ }
+
+ @type user_configuration :: map | keyword
+ @time_keys [:time, :warmup]
+
+ @doc """
+ Returns the initial benchmark configuration for Benchee, composed of defaults
+ and an optional custom configuration.
+
+ Configuration times are given in seconds, but are converted to microseconds
+ internally.
+
+ Possible options:
+
+ * `time` - total run time in seconds of a single benchmark (determines
+ how often it is executed). Defaults to 5.
+ * `warmup` - the time in seconds for which the benchmarking function
+ should be run without gathering results. Defaults to 2.
+ * `inputs` - a map from descriptive input names to some different input,
+ your benchmarking jobs will then be run with each of these inputs. For this
+ to work your benchmarking function gets the current input passed in as an
+ argument into the function. Defaults to `nil`, aka no input specified and
+ functions are called without an argument.
+ * `parallel` - each the function of each job will be executed in
+ `parallel` number processes. If `parallel` is `4` then 4 processes will be
+ spawned that all execute the _same_ function for the given time. When these
+ finish/the time is up 4 new processes will be spawned for the next
+ job/function. This gives you more data in the same time, but also puts a
+ load on the system interfering with benchmark results. For more on the pros
+ and cons of parallel benchmarking [check the
+ wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking).
+ Defaults to 1 (no parallel execution).
+ * `formatters` - list of formatter functions you'd like to run to output the
+ benchmarking results of the suite when using `Benchee.run/2`. Functions need
+ to accept one argument (which is the benchmarking suite with all data) and
+ then use that to produce output. Used for plugins. Defaults to the builtin
+ console formatter calling `Benchee.Formatters.Console.output/1`.
+ * `print` - a map from atoms to `true` or `false` to configure if the
+ output identified by the atom will be printed. All options are enabled by
+ default (true). Options are:
+ * `:benchmarking` - print when Benchee starts benchmarking a new job
+ (Benchmarking name ..)
+ * `:configuration` - a summary of configured benchmarking options
+ including estimated total run time is printed before benchmarking starts
+ * `:fast_warning` - warnings are displayed if functions are executed
+ too fast leading to inaccurate measures
+ * `console` - options for the built-in console formatter. Like the
+ `print` options the boolean options are also enabled by default:
+ * `:comparison` - if the comparison of the different benchmarking jobs
+ (x times slower than) is shown (true/false)
+ * `:unit_scaling` - the strategy for choosing a unit for durations and
+ counts. When scaling a value, Benchee finds the "best fit" unit (the
+ largest unit for which the result is at least 1). For example, 1_200_000
+ scales to `1.2 M`, while `800_000` scales to `800 K`. The `unit_scaling`
+ strategy determines how Benchee chooses the best fit unit for an entire
+ list of values, when the individual values in the list may have different
+ best fit units. There are four strategies, defaulting to `:best`:
+ * `:best` - the most frequent best fit unit will be used, a tie
+ will result in the larger unit being selected.
+ * `:largest` - the largest best fit unit will be used (i.e. thousand
+ and seconds if values are large enough).
+ * `:smallest` - the smallest best fit unit will be used (i.e.
+ millisecond and one)
+ * `:none` - no unit scaling will occur. Durations will be displayed
+ in microseconds, and counts will be displayed in ones (this is
+ equivalent to the behaviour Benchee had pre 0.5.0)
+
+ ## Examples
+
+ iex> Benchee.init
+ %Benchee.Suite{
+ configuration:
+ %Benchee.Configuration{
+ parallel: 1,
+ time: 5_000_000,
+ warmup: 2_000_000,
+ inputs: nil,
+ formatters: [&Benchee.Formatters.Console.output/1],
+ print: %{
+ benchmarking: true,
+ fast_warning: true,
+ configuration: true
+ },
+ formatter_options: %{
+ console: %{ comparison: true, unit_scaling: :best }
+ },
+ assigns: %{}
+ },
+ jobs: %{},
+ run_times: nil,
+ statistics: nil,
+ system: nil
+ }
+
+ iex> Benchee.init time: 1, warmup: 0.2
+ %Benchee.Suite{
+ configuration:
+ %Benchee.Configuration{
+ parallel: 1,
+ time: 1_000_000,
+ warmup: 200_000.0,
+ inputs: nil,
+ formatters: [&Benchee.Formatters.Console.output/1],
+ print: %{
+ benchmarking: true,
+ fast_warning: true,
+ configuration: true
+ },
+ formatter_options: %{
+ console: %{ comparison: true, unit_scaling: :best }
+ },
+ assigns: %{}
+ },
+ jobs: %{},
+ run_times: nil,
+ statistics: nil,
+ system: nil
+ }
+
+ iex> Benchee.init %{time: 1, warmup: 0.2}
+ %Benchee.Suite{
+ configuration:
+ %Benchee.Configuration{
+ parallel: 1,
+ time: 1_000_000,
+ warmup: 200_000.0,
+ inputs: nil,
+ formatters: [&Benchee.Formatters.Console.output/1],
+ print: %{
+ benchmarking: true,
+ fast_warning: true,
+ configuration: true
+ },
+ formatter_options: %{
+ console: %{ comparison: true, unit_scaling: :best }
+ },
+ assigns: %{}
+ },
+ jobs: %{},
+ run_times: nil,
+ statistics: nil,
+ system: nil
+ }
+
+ iex> Benchee.init(
+ ...> parallel: 2,
+ ...> time: 1,
+ ...> warmup: 0.2,
+ ...> formatters: [&IO.puts/2],
+ ...> print: [fast_warning: false],
+ ...> console: [unit_scaling: :smallest],
+ ...> inputs: %{"Small" => 5, "Big" => 9999},
+ ...> formatter_options: [some: "option"])
+ %Benchee.Suite{
+ configuration:
+ %Benchee.Configuration{
+ parallel: 2,
+ time: 1_000_000,
+ warmup: 200_000.0,
+ inputs: %{"Small" => 5, "Big" => 9999},
+ formatters: [&IO.puts/2],
+ print: %{
+ benchmarking: true,
+ fast_warning: false,
+ configuration: true
+ },
+ formatter_options: %{
209
+ console: %{ comparison: true, unit_scaling: :smallest },
210
+ some: "option"
211
+ },
212
+ assigns: %{}
213
+ },
214
+ jobs: %{},
215
+ run_times: nil,
216
+ statistics: nil,
217
+ system: nil
218
+ }
219
+ """
220
+ @spec init(user_configuration) :: Suite.t
221
+ def init(config \\ %{}) do
222
+ map_config = config
223
+ |> DeepConvert.to_map
224
+ |> translate_formatter_keys
225
+
226
+ config = %Benchee.Configuration{}
227
+ |> DeepMerge.deep_merge(map_config)
228
+ |> convert_time_to_micro_s
229
+
230
+ :ok = :timer.start
231
+
232
+ %Suite{configuration: config}
233
+ end
234
+
235
+ # backwards compatible translation of formatter keys to go into
236
+ # formatter_options now
237
+ @formatter_keys [:console, :csv, :json, :html]
238
+ defp translate_formatter_keys(config) do
239
+ {formatter_options, config} = Map.split(config, @formatter_keys)
240
+ DeepMerge.deep_merge(%{formatter_options: formatter_options}, config)
241
+ end
242
+
243
+ defp convert_time_to_micro_s(config) do
244
+ Enum.reduce @time_keys, config, fn(key, new_config) ->
245
+ {_, new_config} = Map.get_and_update! new_config, key, fn(seconds) ->
246
+ {seconds, Duration.microseconds({seconds, :second})}
247
+ end
248
+ new_config
249
+ end
250
+ end
251
+ end
252
+
253
+ defimpl DeepMerge.Resolver, for: Benchee.Configuration do
254
+ def resolve(_original, override = %{__struct__: Benchee.Configuration}, _) do
255
+ override
256
+ end
257
+ def resolve(original, override, resolver) when is_map(override) do
258
+ merged = Map.merge(original, override, resolver)
259
+ struct! Benchee.Configuration, Map.to_list(merged)
260
+ end
261
+ end
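The `translate_formatter_keys/1` step above can be seen in isolation. Below is a minimal sketch of the idea using plain maps; note that the real code deep-merges into any pre-existing `formatter_options` via `DeepMerge.deep_merge/2`, whereas this sketch uses a plain `Map.put/3`, and the example config map is made up for illustration:

```elixir
# Split legacy top-level formatter keys (:console, :csv, :json, :html)
# off the config map and nest them under :formatter_options.
formatter_keys = [:console, :csv, :json, :html]

config = %{time: 5, console: %{comparison: true}, csv: %{file: "out.csv"}}

{formatter_options, rest} = Map.split(config, formatter_keys)
translated = Map.put(rest, :formatter_options, formatter_options)
# translated is
#   %{time: 5,
#     formatter_options: %{console: %{comparison: true},
#                          csv: %{file: "out.csv"}}}
```

This is what keeps existing plugin configuration like `csv: [file: "bench.csv"]` working after the `formatter_options` rename.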
changed lib/benchee/conversion/duration.ex
 
@@ -26,8 +26,8 @@ defmodule Benchee.Conversion.Duration do
    minute: %Unit{
      name: :minute,
      magnitude: @microseconds_per_minute,
-     label: "m",
-     long: "Minutes"
+     label: "min",
+     long: "Minutes"
    },
    second: %Unit{
      name: :second,

@@ -45,7 +45,7 @@ defmodule Benchee.Conversion.Duration do
      name: :microsecond,
      magnitude: 1,
      label: "μs",
-     long: "Microseconds"
+     long: "Microseconds"
    }
  }

@@ -203,7 +203,10 @@ defmodule Benchee.Conversion.Duration do
      iex> Benchee.Conversion.Duration.format({45.6789, :millisecond})
      "45.68 ms"

-     iex> Benchee.Conversion.Duration.format({45.6789, %Benchee.Conversion.Unit{long: "Milliseconds", magnitude: 1000, label: "ms"}})
+     iex> Benchee.Conversion.Duration.format {45.6789,
+     ...>   %Benchee.Conversion.Unit{
+     ...>     long: "Milliseconds", magnitude: 1000, label: "ms"}
+     ...> }
      "45.68 ms"

  """
changed lib/benchee/conversion/format.ex
 
@@ -74,6 +74,7 @@ defmodule Benchee.Conversion.Format do
    label
  end

+ defp float_precision(float) when float == 0, do: 1
  defp float_precision(float) when float < 0.01, do: 5
  defp float_precision(float) when float < 0.1, do: 4
  defp float_precision(float) when float < 0.2, do: 3
changed lib/benchee/conversion/scale.ex
 
@@ -13,10 +13,6 @@ defmodule Benchee.Conversion.Scale do
  @type any_unit :: unit | unit_atom
  @type scaled_number :: {number, unit}

- # In 1.3, this could be declared as `keyword`, but use a custom type so it
- # will also compile in 1.2
- @type options ::[{atom, atom}]
-
  @doc """
  Scales a number in a domain's base unit to an equivalent value in the best
  fit unit. Results are a `{number, unit}` tuple. See `Benchee.Conversion.Count` and

@@ -36,7 +32,7 @@ defmodule Benchee.Conversion.Scale do
  "Best fit" is the most common unit, or (in case of tie) the largest of the
  most common units.
  """
- @callback best(list, options) :: unit
+ @callback best(list, keyword) :: unit

  @doc """
  Returns the base_unit in which Benchee takes its measurements, which in
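The `:best` rule described in the scale docs (most frequent best fit unit, ties going to the larger unit) can be sketched standalone. The unit list and magnitudes below are simplified assumptions for illustration, not Benchee's real `Unit` structs or its actual implementation:

```elixir
# Simplified units: {name, magnitude} pairs for scaling counts.
units = [one: 1, thousand: 1_000, million: 1_000_000]

# Best fit for a single value: the largest unit where value / magnitude >= 1.
best_fit = fn(value) ->
  units
  |> Enum.reverse
  |> Enum.find({:one, 1}, fn({_name, magnitude}) -> value / magnitude >= 1 end)
end

# Best fit for a whole list: most frequent single-value best fit,
# breaking ties toward the larger magnitude.
best = fn(values) ->
  values
  |> Enum.map(best_fit)
  |> Enum.group_by(fn(unit) -> unit end)
  |> Enum.max_by(fn({{_name, magnitude}, occurrences}) ->
       {length(occurrences), magnitude}
     end)
  |> elem(0)
end

best.([5_000, 7_500, 800])
# => {:thousand, 1000} - two values fit :thousand best, one fits :one
```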
changed lib/benchee/formatters/console.ex
 
@@ -4,8 +4,11 @@ defmodule Benchee.Formatters.Console do
  output through `IO.puts` on the console.
  """

- alias Benchee.Statistics
- alias Benchee.Conversion.{Count, Duration, DeviationPercent}
+ alias Benchee.{Statistics, Suite}
+ alias Benchee.Conversion.{Count, Duration, Unit, DeviationPercent}
+
+ @type job_statistics :: {Suite.key, Statistics.t}
+ @type unit_per_statistic :: %{atom => Unit.t}

  @default_label_width 4 # Length of column header
  @ips_width 13

@@ -17,10 +20,13 @@ defmodule Benchee.Formatters.Console do
  Formats the benchmark statistics using `Benchee.Formatters.Console.format/1`
  and then prints it out directly to the console using `IO.puts/2`
  """
- def output(suite) do
+ @spec output(Suite.t) :: Suite.t
+ def output(suite = %Suite{}) do
    suite
    |> format
    |> IO.write
+
+   suite
  end

  @doc """

@@ -33,9 +39,17 @@ defmodule Benchee.Formatters.Console do
  ## Examples

  ```
- iex> jobs = %{ "My Job" =>%{average: 200.0, ips: 5000.0,std_dev_ratio: 0.1, median: 190.0}, "Job 2" => %{average: 400.0, ips: 2500.0, std_dev_ratio: 0.2, median: 390.0}}
+ iex> jobs = %{ "My Job" => %Benchee.Statistics{average: 200.0, ips: 5000.0, std_dev_ratio: 0.1, median: 190.0}, "Job 2" => %Benchee.Statistics{average: 400.0, ips: 2500.0, std_dev_ratio: 0.2, median: 390.0}}
  iex> inputs = %{"My input" => jobs}
- iex> Benchee.Formatters.Console.format(%{statistics: inputs, config: %{console: %{comparison: false, unit_scaling: :best}}})
+ iex> suite = %Benchee.Suite{
+ ...>   statistics: inputs,
+ ...>   configuration: %Benchee.Configuration{
+ ...>     formatter_options: %{
+ ...>       console: %{comparison: false, unit_scaling: :best}
+ ...>     }
+ ...>   }
+ ...> }
+ iex> Benchee.Formatters.Console.format(suite)
  [["\n##### With input My input #####", "\nName ips average deviation median\n",
  "My Job 5.00 K 200.00 μs ±10.00% 190.00 μs\n",
  "Job 2 2.50 K 400.00 μs ±20.00% 390.00 μs\n"]]

@@ -43,11 +57,12 @@ defmodule Benchee.Formatters.Console do
  ```

  """
- def format(%{statistics: jobs_per_input, config: %{console: config}}) do
-   jobs_per_input
-   |> Enum.map(fn({input, jobs_stats}) ->
-     [input_header(input) | format_jobs(jobs_stats, config)]
-   end)
+ @spec format(Suite.t) :: [any]
+ def format(%Suite{statistics: jobs_per_input,
+                   configuration: %{formatter_options: %{console: config}}}) do
+   Enum.map(jobs_per_input, fn({input, jobs_stats}) ->
+     [input_header(input) | format_jobs(jobs_stats, config)]
+   end)
  end

  defp input_header(input) do

@@ -64,7 +79,7 @@ defmodule Benchee.Formatters.Console do
  ## Examples

  ```
- iex> jobs = %{ "My Job" =>%{average: 200.0, ips: 5000.0,std_dev_ratio: 0.1, median: 190.0}, "Job 2" => %{average: 400.0, ips: 2500.0, std_dev_ratio: 0.2, median: 390.0}}
+ iex> jobs = %{ "My Job" => %Benchee.Statistics{average: 200.0, ips: 5000.0, std_dev_ratio: 0.1, median: 190.0}, "Job 2" => %Benchee.Statistics{average: 400.0, ips: 2500.0, std_dev_ratio: 0.2, median: 390.0}}
  iex> Benchee.Formatters.Console.format_jobs(jobs, %{comparison: false, unit_scaling: :best})
  ["\nName ips average deviation median\n",
  "My Job 5.00 K 200.00 μs ±10.00% 190.00 μs\n",

@@ -109,14 +124,8 @@ defmodule Benchee.Formatters.Console do
    measurements =
      jobs
      |> Enum.flat_map(fn({_name, job}) -> Map.to_list(job) end)
-     # TODO: Simplify when dropping support for 1.2
-     # For compatibility with Elixir 1.2. In 1.3, the following group-reduce-map
-     # can b replaced by a single call to `group_by/3`
-     # Enum.group_by(fn({stat_name, _}) -> stat_name end, fn({_, value}) -> value end)
-     |> Enum.group_by(fn({stat_name, _value}) -> stat_name end)
-     |> Enum.reduce(%{}, fn({stat_name, occurrences}, acc) ->
-       Map.put(acc, stat_name, Enum.map(occurrences, fn({_stat_name, value}) -> value end))
-     end)
+     |> Enum.group_by(fn({stat_name, _}) -> stat_name end,
+                      fn({_, value}) -> value end)

    %{
      run_time: Duration.best(measurements.average, strategy: scaling_strategy),

@@ -124,7 +133,8 @@ defmodule Benchee.Formatters.Console do
    }
  end

- defp format_jobs({name, %{average: average,
+ @spec format_jobs(job_statistics, unit_per_statistic, integer) :: String.t
+ defp format_jobs({name, %Statistics{average: average,
                            ips: ips,
                            std_dev_ratio: std_dev_ratio,
                            median: median}

@@ -152,6 +162,7 @@ defmodule Benchee.Formatters.Console do
    DeviationPercent.format(std_dev_ratio)
  end

+ @spec comparison_report([job_statistics], unit_per_statistic, integer, map) :: [String.t]
  defp comparison_report([_reference], _, _, _config) do
    [] # No need for a comparison when only one benchmark was run
  end

@@ -172,6 +183,7 @@ defmodule Benchee.Formatters.Console do
    |> to_string
  end

+ @spec comparisons({any, Statistics.t}, %{atom => Unit.t}, integer, [{any, Statistics.t}]) :: [String.t]
  defp comparisons({_, reference_stats}, units, label_width, jobs_to_compare) do
    Enum.map jobs_to_compare, fn(job = {_, job_stats}) ->
      format_comparison(job, units, label_width, (reference_stats.ips / job_stats.ips))
changed lib/benchee/output/benchmark_printer.ex
 
@@ -1,7 +1,5 @@
defmodule Benchee.Output.BenchmarkPrinter do
- @moduledoc """
- Printing happening during the Benchmark stage.
- """
+ @moduledoc false

  alias Benchee.Conversion.Duration

@@ -18,10 +16,10 @@ defmodule Benchee.Output.BenchmarkPrinter do
  Prints general information such as system information and estimated
  benchmarking time.
  """
- def configuration_information(%{config: %{print: %{configuration: false}}}) do
+ def configuration_information(%{configuration: %{print: %{configuration: false}}}) do
    nil
  end
- def configuration_information(%{jobs: jobs, system: sys, config: config}) do
+ def configuration_information(%{jobs: jobs, system: sys, configuration: config}) do
    system_information(sys)
    suite_information(jobs, config)
  end

@@ -35,19 +33,19 @@ defmodule Benchee.Output.BenchmarkPrinter do
                          time: time,
                          warmup: warmup,
                          inputs: inputs}) do
-   warmup_seconds = time_precision Duration.scale(warmup, :second)
-   time_seconds = time_precision Duration.scale(time, :second)
    job_count = map_size jobs
-   exec_time = warmup_seconds + time_seconds
-   total_time = time_precision(job_count * inputs_count(inputs) * exec_time)
+   exec_time = warmup + time
+   total_time = job_count * inputs_count(inputs) * exec_time

-   IO.puts "Benchmark suite executing with the following configuration:"
-   IO.puts "warmup: #{warmup_seconds}s"
-   IO.puts "time: #{time_seconds}s"
-   IO.puts "parallel: #{parallel}"
-   IO.puts "inputs: #{inputs_out(inputs)}"
-   IO.puts "Estimated total run time: #{total_time}s"
-   IO.puts ""
+   IO.puts """
+   Benchmark suite executing with the following configuration:
+   warmup: #{Duration.format(warmup)}
+   time: #{Duration.format(time)}
+   parallel: #{parallel}
+   inputs: #{inputs_out(inputs)}
+   Estimated total run time: #{Duration.format(total_time)}
+
+   """
  end

  defp inputs_count(nil), do: 1 # no input specified still executes

@@ -60,11 +58,6 @@ defmodule Benchee.Output.BenchmarkPrinter do
    |> Enum.join(", ")
  end

- @round_precision 2
- defp time_precision(float) do
-   Float.round(float, @round_precision)
- end
-
  @doc """
  Prints a notice which job is currently being benchmarked.
  """

@@ -80,8 +73,7 @@ defmodule Benchee.Output.BenchmarkPrinter do
    IO.puts """
    Warning: The function you are trying to benchmark is super fast, making measures more unreliable! See: https://github.com/PragTob/benchee/wiki/Benchee-Warnings#fast-execution-warning

-   You may disable this warning by passing print: [fast_warning: false] as
-   configuration options.
+   You may disable this warning by passing print: [fast_warning: false] as configuration options.
    """
  end
changed lib/benchee/statistics.ex
 
@@ -4,7 +4,24 @@ defmodule Benchee.Statistics do
  times and then compute statistics like the average and the standard deviation.
  """

- alias Benchee.{Statistics, Conversion.Duration}
+ defstruct [:average, :ips, :std_dev, :std_dev_ratio, :std_dev_ips, :median,
+            :minimum, :maximum, :sample_size]
+
+ @type t :: %__MODULE__{
+   average: float,
+   ips: float,
+   std_dev: float,
+   std_dev_ratio: float,
+   std_dev_ips: float,
+   median: number,
+   minimum: number,
+   maximum: number,
+   sample_size: integer
+ }
+
+ @type samples :: [number]
+
+ alias Benchee.{Statistics, Conversion.Duration, Suite}
  import Benchee.Utility.MapValues
  require Integer

@@ -37,12 +54,14 @@ defmodule Benchee.Statistics do
  ## Examples

      iex> run_times = [200, 400, 400, 400, 500, 500, 700, 900]
-     iex> suite = %{run_times: %{"Input" => %{"My Job" => run_times}}}
+     iex> suite = %Benchee.Suite{
+     ...>   run_times: %{"Input" => %{"My Job" => run_times}}
+     ...> }
      iex> Benchee.Statistics.statistics(suite)
-     %{
+     %Benchee.Suite{
        statistics: %{
          "Input" => %{
-           "My Job" => %{
+           "My Job" => %Benchee.Statistics{
              average: 500.0,
              ips: 2000.0,
              std_dev: 200.0,

@@ -59,15 +78,19 @@ defmodule Benchee.Statistics do
          "Input" => %{
            "My Job" => [200, 400, 400, 400, 500, 500, 700, 900]
          }
-       }
+       },
+       configuration: nil,
+       jobs: %{ },
+       system: nil
      }

  """
- def statistics(suite = %{run_times: run_times_per_input}) do
+ @spec statistics(Suite.t) :: Suite.t
+ def statistics(suite = %Suite{run_times: run_times_per_input}) do
    statistics = run_times_per_input
                 |> p_map_values(&Statistics.job_statistics/1)

-   Map.put suite, :statistics, statistics
+   %Suite{suite | statistics: statistics}
  end

  @doc """

@@ -78,7 +101,8 @@ defmodule Benchee.Statistics do

      iex> run_times = [200, 400, 400, 400, 500, 500, 700, 900]
      iex> Benchee.Statistics.job_statistics(run_times)
-     %{average: 500.0,
+     %Benchee.Statistics{
+       average: 500.0,
        ips: 2000.0,
        std_dev: 200.0,
        std_dev_ratio: 0.4,

@@ -86,9 +110,11 @@ defmodule Benchee.Statistics do
        median: 450.0,
        minimum: 200,
        maximum: 900,
-       sample_size: 8}
+       sample_size: 8
+     }

  """
+ @spec job_statistics(samples) :: __MODULE__.t
  def job_statistics(run_times) do
    total_time = Enum.sum(run_times)
    iterations = Enum.count(run_times)

@@ -101,7 +127,7 @@ defmodule Benchee.Statistics do
    minimum = Enum.min run_times
    maximum = Enum.max run_times

-   %{
+   %__MODULE__{
      average: average,
      ips: ips,
      std_dev: deviation,
added lib/benchee/suite.ex
 
@@ -0,0 +1,45 @@
+ defmodule Benchee.Suite do
+   @moduledoc """
+   Main benchee data structure that aggregates the results from every step.
+
+   Different layers of the benchmarking rely on different data being present
+   here. For instance for `Benchee.Statistics.statistics/1` to work the
+   `run_times` key needs to be filled with the results from
+   `Benchee.Benchmark.measure/1`.
+
+   Formatters can then use the data to display all of the results and the
+   configuration.
+   """
+   defstruct [
+     :configuration,
+     :system,
+     :run_times,
+     :statistics,
+     jobs: %{}
+   ]
+
+   @type optional_map :: map | nil
+   @type key :: atom | String.t
+   @type benchmark_function :: (() -> any) | ((any) -> any)
+   @type t :: %__MODULE__{
+     configuration: Benchee.Configuration.t | nil,
+     system: optional_map,
+     run_times: %{key => %{key => [integer]}} | nil,
+     statistics: %{key => %{key => Benchee.Statistics.t}} | nil,
+     jobs: %{key => benchmark_function}
+   }
+ end
+
+ defimpl DeepMerge.Resolver, for: Benchee.Suite do
+   def resolve(original, override = %{__struct__: Benchee.Suite}, resolver) do
+     cleaned_override = override
+                        |> Map.from_struct
+                        |> Enum.reject(fn({_key, value}) -> is_nil(value) end)
+                        |> Map.new
+
+     Map.merge(original, cleaned_override, resolver)
+   end
+   def resolve(original, override, resolver) when is_map(override) do
+     Map.merge(original, override, resolver)
+   end
+ end
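The custom resolver's job is to let a later pipeline step merge a partially filled `Suite` over an earlier one without `nil` fields erasing data that was already gathered. The nil-stripping step can be sketched on its own with a plain map (the field values here are made up for illustration):

```elixir
# An override where only :system has been filled in; merging it verbatim
# over an earlier suite would reset the other keys back to nil.
override = %{configuration: nil, system: %{elixir: "1.4.2"}, statistics: nil}

cleaned = override
          |> Enum.reject(fn({_key, value}) -> is_nil(value) end)
          |> Map.new
# cleaned is %{system: %{elixir: "1.4.2"}},
# so a subsequent Map.merge/3 only touches :system
```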
changed lib/benchee/system.ex
 
@@ -3,12 +3,15 @@ defmodule Benchee.System do
  Provides information about the system the benchmarks are run on.
  """

+ alias Benchee.Suite
+
  @doc """
  Adds system information to the suite (currently elixir and erlang versions).
  """
- def system(suite) do
+ @spec system(Suite.t) :: Suite.t
+ def system(suite = %Suite{}) do
    versions = %{elixir: elixir(), erlang: erlang()}
-   Map.put suite, :system, versions
+   %Suite{suite | system: versions}
  end

  @doc """

@@ -23,7 +26,7 @@ defmodule Benchee.System do
    otp_release = :erlang.system_info(:otp_release)
    file = Path.join([:code.root_dir, "releases", otp_release, "OTP_VERSION"])
    case File.read(file) do
-     {:ok, version} -> String.strip(version)
+     {:ok, version} -> String.trim(version)
      {:error, reason} ->
        IO.puts "Error trying to determine erlang version #{reason}"
    end
changed lib/benchee/utility/map_value.ex
 
@@ -42,7 +42,7 @@ defmodule Benchee.Utility.MapValues do
42
42
{key, Task.async(fn -> do_map_values(child_map, function) end)}
43
43
end)
44
44
|> Enum.map(fn({key, task}) ->
45
- {key, Task.await(task)}
45
+ {key, Task.await(task, :infinity)}
46
46
end)
47
47
|> Map.new
48
48
end
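The `:infinity` timeout above is the fix for the statistics timeout bug noted in the changelog: `Task.await/1` defaults to a 5000 ms timeout, so suites that captured millions of run times could crash while the statistics tasks were still working. The same parallel map-over-values pattern can be sketched standalone (this is a simplified illustration, not Benchee's `p_map_values`, which additionally recurses into nested maps):

```elixir
# Compute something for every value of a map in parallel, waiting as long
# as it takes (Task.await/1 alone would give up after 5_000 ms).
parallel_map_values = fn(map, fun) ->
  map
  |> Enum.map(fn({key, value}) -> {key, Task.async(fn -> fun.(value) end)} end)
  |> Enum.map(fn({key, task}) -> {key, Task.await(task, :infinity)} end)
  |> Map.new
end

parallel_map_values.(%{"a" => [1, 2, 3], "b" => [4, 5]}, &Enum.sum/1)
# => %{"a" => 6, "b" => 9}
```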
changed mix.exs
 
@@ -1,13 +1,13 @@
defmodule Benchee.Mixfile do
  use Mix.Project

- @version "0.7.0"
+ @version "0.8.0"

  def project do
    [
      app: :benchee,
      version: @version,
-     elixir: "~> 1.2",
+     elixir: "~> 1.3",
      elixirc_paths: elixirc_paths(Mix.env),
      consolidate_protocols: true,
      build_embedded: Mix.env == :prod,

@@ -20,6 +20,10 @@ defmodule Benchee.Mixfile do
      "coveralls": :test, "coveralls.detail": :test,
      "coveralls.post": :test, "coveralls.html": :test,
      "coveralls.travis": :test],
+     dialyzer: [
+       flags:
+         [:unmatched_returns, :error_handling, :race_conditions, :underspecs]
+     ],
      name: "Benchee",
      source_url: "https://github.com/PragTob/benchee",
      description: """

@@ -33,7 +37,7 @@ defmodule Benchee.Mixfile do
  defp elixirc_paths(_), do: ["lib"]

  def application do
-   [applications: [:logger]]
+   [applications: [:logger, :deep_merge]]
  end

  defp deps do

@@ -42,9 +46,10 @@ defmodule Benchee.Mixfile do
      {:mix_test_watch, "~> 0.2", only: :dev},
      {:credo, "~> 0.4", only: :dev},
      {:ex_doc, "~> 0.11", only: :dev},
-     {:earmark, "~> 1.0.1", only: :dev},
+     {:earmark, "~> 1.0", only: :dev},
      {:excoveralls, "~> 0.6.1", only: :test},
-     {:inch_ex, "~> 0.5", only: :docs}
+     {:inch_ex, "~> 0.5", only: :docs},
+     {:dialyxir, "~> 0.5", only: :dev, runtime: false}
    ]
  end