changed
CHANGELOG.md
|
@@ -1,3 +1,36 @@
|
1
|
+ # 0.6.0 (November 30, 2016)
|
2
|
+
|
3
|
+ One of the biggest releases yet. Great stuff in here - `Benchee.run/2` now has a more Elixir-like API, with the jobs as the primary argument and the optional options as the second argument, now also given as the more idiomatic keyword list!
|
4
|
+
|
5
|
+ The biggest feature apart from that is the possibility to use multiple inputs - which you all should use now, as many functions behave differently with bigger, smaller or differently shaped inputs. Apart from that, a bulk of work has gone into making and supporting [benchee_html](https://github.com/PragTob/benchee_html)!
|
6
|
+
|
7
|
+ ## Features (User Facing)
|
8
|
+
|
9
|
+ * New `:inputs` configuration key that allows you to specify a map from input name to input value, so that each defined benchmarking job is then executed with every input. For this to work, the benchmarking function is called with the appropriate `input` as an argument. See [`samples/multiple_inputs.exs`](https://github.com/PragTob/benchee/blob/master/samples/multiple_inputs.exs) for an example. [#21](https://github.com/PragTob/benchee/issues/21)
|
10
|
+ * The high-level `Benchee.run/2` is now more idiomatic Elixir: it takes the map of jobs as the first argument and a keyword list of options as the second (and last) argument. The old way of passing the config as a map in the first argument and the jobs as the second still works, **but might be deprecated later on** [#47](https://github.com/PragTob/benchee/issues/47)
|
11
|
+ * Along with that, `Benchee.init/1` now also accepts keyword lists, of course
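To make the new calling convention concrete, a minimal sketch - the job bodies here are placeholders, not taken from the release notes:

```elixir
jobs = %{
  "addition" => fn -> 1 + 1 end,
  "sleep"    => fn -> :timer.sleep(1) end
}

# New, idiomatic style: jobs map first, options as a keyword list
Benchee.run(jobs, time: 3, warmup: 1)

# Pre-0.6.0 style (config map first) - still works, but may be deprecated
Benchee.run(%{time: 3}, jobs)
```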
|
12
|
+
|
13
|
+ ## Breaking Changes (User Facing)
|
14
|
+
|
15
|
+ * The old way of providing the jobs as a list of tuples has been removed; please switch to using a map from strings to functions
|
16
|
+
|
17
|
+ ## Features (Plugins)
|
18
|
+
|
19
|
+ * `Benchee.Utility.FileCreation` module to help with creating files from a map where multiple input names (or other descriptors) map to content, along with an `interleave` function that produces the correct file names, especially when the `:__no_input` marker is used
|
20
|
+ * `Benchee.System` is available to retrieve Elixir and Erlang versions, but it's
|
21
|
+ also already added to the suite during `Benchee.run/2`
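A hypothetical sketch of how the file name helper might be used - the exact `interleave` signature and the resulting name are assumptions for illustration, not taken from the docs:

```elixir
# Assumed signature: combine a base file name with an input name so that
# each input gets its own output file
Benchee.Utility.FileCreation.interleave("my.html", "Big Input")
# expected to produce an input-specific name along the lines of "my_big_input.html"
```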
|
22
|
+
|
23
|
+ ## Breaking Changes (Plugins)
|
24
|
+
|
25
|
+ * The structure of the output from `Benchee.Benchmark.measure/1` to `Benchee.Statistics.statistics/1` has changed to accommodate the new inputs feature: there is now an additional level, a map in which the input name points to the appropriate results of the jobs. When there are no inputs, the key is the value returned by `Benchee.Benchmark.no_input/0`.
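A sketch of the new nesting - shape only, the run time values are placeholders:

```elixir
# With inputs configured, the input name is the outer key:
%{
  "Small Input" => %{"my job" => [33, 35, 34]},
  "Big Input"   => %{"my job" => [1024, 1001, 998]}
}

# Without inputs there is a single outer key, Benchee.Benchmark.no_input/0:
%{Benchee.Benchmark.no_input() => %{"my job" => [33, 35, 34]}}
```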
|
26
|
+
|
27
|
+ ## Bugfixes
|
28
|
+
|
29
|
+ * Prewarming (discarding the first result due to some timer issues) during run time was removed, as it should already happen during the warmup period and would discard actually useful results, especially for longer running macro benchmarks.
|
30
|
+ * When the execution time of a benchmarking job exceeds the given `:time`, it will now execute exactly once (it used to execute twice) [#49](https://github.com/PragTob/benchee/issues/49)
|
31
|
+ * `run_times` are now in the order they were recorded (used to be reversed) - important when one wants to graph or inspect them to spot anomalies during benchmarking
|
32
|
+ * Remove Elixir 1.4.0-rc.0 warnings
|
33
|
+
|
1
34
|
# 0.5.0 (October 13, 2016)
|
2
35
|
|
3
36
|
This release focuses on scaling units to more appropriate sizes. Instead of always working with base one for counts and microseconds those values are scaled accordingly to thousands, milliseconds for better readability. This work was mostly done by new contributor @wasnotrice.
|
changed
README.md
|
@@ -1,45 +1,82 @@
|
1
1
|
# Benchee [![Hex Version](https://img.shields.io/hexpm/v/benchee.svg)](https://hex.pm/packages/benchee) [![docs](https://img.shields.io/badge/docs-hexpm-blue.svg)](https://hexdocs.pm/benchee/) [![Inline docs](https://inch-ci.org/github/PragTob/benchee.svg)](https://inch-ci.org/github/PragTob/benchee) [![Build Status](https://travis-ci.org/PragTob/benchee.svg?branch=master)](https://travis-ci.org/PragTob/benchee)
|
2
2
|
|
3
|
- Library for easy and nice (micro) benchmarking in Elixir. It allows you to compare the performance of different pieces of code and functions at a glance. Benchee is also versatile and extensible, relying only on functions - no macros!
|
3
|
+ Library for easy and nice (micro) benchmarking in Elixir. It allows you to compare the performance of different pieces of code at a glance. Benchee is also versatile and extensible, relying only on functions - no macros! There are also a bunch of [plugins](#plugins) to draw pretty graphs and more!
|
4
4
|
|
5
|
- Somewhat inspired by [benchmark-ips](https://github.com/evanphx/benchmark-ips) from the ruby world, but a very different interface and a functional spin.
|
5
|
+ Benchee has a nice and concise main interface, and its behavior can be altered through lots of [configuration options](#configuration):
|
6
6
|
|
7
|
- General features:
|
7
|
+ ```elixir
|
8
|
+ list = Enum.to_list(1..10_000)
|
9
|
+ map_fun = fn(i) -> [i, i * i] end
|
10
|
+
|
11
|
+ Benchee.run(%{
|
12
|
+ "flat_map" => fn -> Enum.flat_map(list, map_fun) end,
|
13
|
+ "map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten end
|
14
|
+ }, time: 3)
|
15
|
+ ```
|
16
|
+
|
17
|
+ Produces the following output on the console:
|
18
|
+
|
19
|
+ ```
|
20
|
+ tobi@happy ~/github/benchee $ mix run samples/run.exs
|
21
|
+ Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
|
22
|
+ Elixir 1.3.4
|
23
|
+ Benchmark suite executing with the following configuration:
|
24
|
+ warmup: 2.0s
|
25
|
+ time: 3.0s
|
26
|
+ parallel: 1
|
27
|
+ inputs: none specified
|
28
|
+ Estimated total run time: 10.0s
|
29
|
+
|
30
|
+ Benchmarking flat_map...
|
31
|
+ Benchmarking map.flatten...
|
32
|
+
|
33
|
+ Name ips average deviation median
|
34
|
+ map.flatten 1.04 K 0.96 ms ±21.82% 0.90 ms
|
35
|
+ flat_map 0.66 K 1.51 ms ±16.98% 1.50 ms
|
36
|
+
|
37
|
+ Comparison:
|
38
|
+ map.flatten 1.04 K
|
39
|
+ flat_map 0.66 K - 1.56x slower
|
40
|
+ ```
|
41
|
+
|
42
|
+ The aforementioned [plugins](#plugins) like [benchee_html](https://github.com/PragTob/benchee_html) make it possible to generate nice looking [HTML reports](https://www.pragtob.info/benchee/flat_map.html) and export graphs as PNG images, like this IPS comparison chart with standard deviation:
|
43
|
+
|
44
|
+ ![flat_map_ips](https://www.pragtob.info/benchee/images/flat_map_ips.png)
|
45
|
+
|
46
|
+ ## Features
|
8
47
|
|
9
48
|
* first runs the functions for a given warmup time without recording the results, to simulate a _"warm"_ running system
|
10
|
- * plugin/extensible friendly architecture so you can use different formatters to generate CSV or whatever
|
49
|
+ * plugin/extensible friendly architecture so you can use different formatters to generate CSV and more
|
11
50
|
* well tested
|
12
51
|
* well documented
|
13
52
|
* execute benchmark jobs in parallel to gather more results in the same time, or simulate a system under load
|
14
|
- * nicely formatted console output
|
15
|
- * provides you with **lots of statistics** - check the next list
|
53
|
+ * nicely formatted console output with values scaled to appropriate units
|
54
|
+ * provides you with lots of statistics - check the next list
|
16
55
|
|
17
|
- Provides you with the following statistical data:
|
56
|
+ Provides you with the following **statistical data**:
|
18
57
|
|
19
58
|
* **average** - average execution time (the lower the better)
|
20
|
- * **ips** - iterations per second, how often can the given function be executed within one second (the higher the better)
|
59
|
+ * **ips** - iterations per second, aka how often the given function can be executed within one second (the higher the better)
|
21
60
|
* **deviation** - standard deviation (how much do the results vary), given as a percentage of the average (raw absolute values also available)
|
22
|
- * **median** - when all measured times are sorted, this is the middle value (or average of the two middle values when the number of samples is even). More stable than the average and somewhat more likely to be a typical value you see.
|
61
|
+ * **median** - when all measured times are sorted, this is the middle value (or average of the two middle values when the number of samples is even). More stable than the average and somewhat more likely to be a typical value you see. (the lower the better)
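The median definition above can be sketched directly in plain Elixir - an illustration of the statistic, not Benchee's internal implementation:

```elixir
median = fn times ->
  sorted = Enum.sort(times)
  len = length(sorted)
  mid = div(len, 2)

  if rem(len, 2) == 1 do
    # odd number of samples: the middle value
    Enum.at(sorted, mid)
  else
    # even number of samples: average of the two middle values
    (Enum.at(sorted, mid - 1) + Enum.at(sorted, mid)) / 2
  end
end

median.([5, 1, 9, 3])  # => 4.0 (average of 3 and 5)
median.([5, 1, 9])     # => 5
```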
|
23
62
|
|
24
63
|
Benchee does not:
|
25
64
|
|
26
|
- * Keep results of previous and compare them, if you want that have a look at [benchfella](https://github.com/alco/benchfella) or [bmark](https://github.com/joekain/bmark)
|
65
|
+ * Keep results of previous runs and compare them, if you want that have a look at [benchfella](https://github.com/alco/benchfella) or [bmark](https://github.com/joekain/bmark)
|
27
66
|
|
28
|
- Benchee has no runtime dependencies and is aimed at being the core benchmarking logic. Further functionality is provided through plugins that then pull in dependencies, such as CSV export. Check out the [available plugins](#plugins)!
|
67
|
+ Benchee only has a small runtime dependency on `deep_merge` for merging configuration and is aimed at being the core benchmarking logic. Further functionality is provided through plugins that then pull in dependencies, such as HTML generation and CSV export. Check out the [available plugins](#plugins)!
|
29
68
|
|
30
69
|
## Installation
|
31
70
|
|
32
|
- When [available in Hex](https://hex.pm/docs/publish), the package can be installed as:
|
33
|
-
|
34
71
|
Add benchee to your list of dependencies in `mix.exs`:
|
35
72
|
|
36
73
|
```elixir
|
37
74
|
def deps do
|
38
|
- [{:benchee, "~> 0.5", only: :dev}]
|
75
|
+ [{:benchee, "~> 0.6", only: :dev}]
|
39
76
|
end
|
40
77
|
```
|
41
78
|
|
42
|
- Install via `mix deps.get` and then happy benchmarking as described in Usage :)
|
79
|
+ Install via `mix deps.get` and then happy benchmarking as described in [Usage](#usage) :)
|
43
80
|
|
44
81
|
## Usage
|
45
82
|
|
|
@@ -49,17 +86,56 @@ After installing just write a little Elixir benchmarking script:
|
49
86
|
list = Enum.to_list(1..10_000)
|
50
87
|
map_fun = fn(i) -> [i, i * i] end
|
51
88
|
|
52
|
- Benchee.run(%{time: 3}, %{
|
89
|
+ Benchee.run(%{
|
53
90
|
"flat_map" => fn -> Enum.flat_map(list, map_fun) end,
|
54
|
- "map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten end})
|
91
|
+ "map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten end
|
92
|
+ }, time: 3)
|
55
93
|
```
|
56
94
|
|
57
|
- First configuration options are passed:
|
95
|
+ This produces the following output:
|
96
|
+
|
97
|
+ ```
|
98
|
+ tobi@happy ~/github/benchee $ mix run samples/run.exs
|
99
|
+ Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
|
100
|
+ Elixir 1.3.4
|
101
|
+ Benchmark suite executing with the following configuration:
|
102
|
+ warmup: 2.0s
|
103
|
+ time: 3.0s
|
104
|
+ parallel: 1
|
105
|
+ inputs: none specified
|
106
|
+ Estimated total run time: 10.0s
|
107
|
+
|
108
|
+ Benchmarking flat_map...
|
109
|
+ Benchmarking map.flatten...
|
110
|
+
|
111
|
+ Name ips average deviation median
|
112
|
+ map.flatten 1.27 K 0.79 ms ±15.34% 0.76 ms
|
113
|
+ flat_map 0.85 K 1.18 ms ±6.00% 1.23 ms
|
114
|
+
|
115
|
+ Comparison:
|
116
|
+ map.flatten 1.27 K
|
117
|
+ flat_map 0.85 K - 1.49x slower
|
118
|
+ ```
|
119
|
+
|
120
|
+ See [Features](#features) for a description of the different statistical values and what they mean.
|
121
|
+
|
122
|
+ If you're looking to see how to make something specific work, please refer to the [samples](https://github.com/PragTob/benchee/tree/master/samples) directory. Also, especially if you want to extend Benchee, check out the [hexdocs](https://hexdocs.pm/benchee/api-reference.html).
|
123
|
+
|
124
|
+ ### Configuration
|
125
|
+
|
126
|
+ Benchee takes a wealth of configuration options; in the most common `Benchee.run/2` interface these are passed as the second argument in the form of an optional keyword list:
|
127
|
+
|
128
|
+ ```elixir
|
129
|
+ Benchee.run(%{"some function" => fn -> magic end}, print: [benchmarking: false])
|
130
|
+ ```
|
131
|
+
|
132
|
+ The available options are the following (also documented in [hexdocs](https://hexdocs.pm/benchee/Benchee.Config.html#init/1)).
|
58
133
|
|
59
134
|
* `warmup` - the time in seconds for which a benchmark should be run without measuring times before real measurements start. This simulates a _"warm"_ running system. Defaults to 2.
|
60
135
|
* `time` - the time in seconds for how long each individual benchmark should be run and measured. Defaults to 5.
|
136
|
+ * `inputs` - a map from descriptive input names to input values; your benchmarking jobs will then be run with each of these inputs. For this to work, your benchmarking function gets the current input passed in as an argument. Defaults to `nil`, aka no input specified, in which case functions are called without an argument. See [Inputs](#inputs)
|
61
137
|
* `parallel` - each job will be executed in `parallel` number processes. Gives you more data in the same time, but also puts a load on the system interfering with benchmark results. For more on the pros and cons of parallel benchmarking [check the wiki](https://github.com/PragTob/benchee/wiki/Parallel-Benchmarking). Defaults to 1.
|
62
|
- * `formatters` - list of formatter functions you'd like to run to output the benchmarking results of the suite when using `Benchee.run/2`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter calling `Benchee.Formatters.Console.output/1`.
|
138
|
+ * `formatters` - list of formatter functions you'd like to run to output the benchmarking results of the suite when using `Benchee.run/2`. Functions need to accept one argument (which is the benchmarking suite with all data) and then use that to produce output. Used for plugins. Defaults to the builtin console formatter calling `Benchee.Formatters.Console.output/1`. See [Formatters](#formatters)
|
63
139
|
* `print` - a map from atoms to `true` or `false` to configure if the output identified by the atom will be printed during the standard Benchee benchmarking process. All options are enabled by default (true). Options are:
|
64
140
|
* `:benchmarking` - print when Benchee starts benchmarking a new job (Benchmarking name ..)
|
65
141
|
* `:configuration` - a summary of configured benchmarking options including estimated total run time is printed before benchmarking starts
|
|
@@ -79,35 +155,105 @@ First configuration options are passed:
|
79
155
|
and seconds if values are large enough)
|
80
156
|
* `:smallest` - the smallest best fit unit will be used (i.e. millisecond
|
81
157
|
and one)
|
82
|
- * `:none` - no unit scaling will occur. Durations will be displayed in
|
83
|
- microseconds, and counts will be displayed in ones (this is equivalent to
|
84
|
- the behaviour Benchee had pre 0.5.0)
|
158
|
+ * `:none` - no unit scaling will occur. Durations will be displayed in microseconds, and counts will be displayed in ones (this is equivalent to the behaviour Benchee had pre 0.5.0)
|
85
159
|
|
86
|
- Running this script produces an output like:
|
160
|
+ ### Inputs
|
161
|
+
|
162
|
+ `:inputs` is a very useful configuration option that allows you to run the same benchmarks with different inputs. Functions can have different performance characteristics on differently shaped inputs, be that structure or input size.
|
163
|
+
|
164
|
+ One such case is comparing tail-recursive and body-recursive implementations of `map`. More information is in the [repository with the benchmark](https://github.com/PragTob/elixir_playground/blob/master/bench/tco_blog_post_focussed_inputs.exs) and the [blog post](https://pragtob.wordpress.com/2016/06/16/tail-call-optimization-in-elixir-erlang-not-as-efficient-and-important-as-you-probably-think/).
|
165
|
+
|
166
|
+ ```elixir
|
167
|
+ map_fun = fn(i) -> i + 1 end
|
168
|
+ inputs = %{
|
169
|
+ "Small (1 Thousand)" => Enum.to_list(1..1_000),
|
170
|
+ "Middle (100 Thousand)" => Enum.to_list(1..100_000),
|
171
|
+ "Big (10 Million)" => Enum.to_list(1..10_000_000),
|
172
|
+ }
|
173
|
+
|
174
|
+ Benchee.run %{
|
175
|
+ "map tail-recursive" =>
|
176
|
+ fn(list) -> MyMap.map_tco(list, map_fun) end,
|
177
|
+ "stdlib map" =>
|
178
|
+ fn(list) -> Enum.map(list, map_fun) end,
|
179
|
+ "map simple body-recursive" =>
|
180
|
+ fn(list) -> MyMap.map_body(list, map_fun) end,
|
181
|
+ "map tail-recursive different argument order" =>
|
182
|
+ fn(list) -> MyMap.map_tco_arg_order(list, map_fun) end
|
183
|
+ }, time: 15, warmup: 5, inputs: inputs
|
184
|
+ ```
|
185
|
+
|
186
|
+ Omitting some of the output, this produces the following results:
|
87
187
|
|
88
188
|
```
|
89
|
- tobi@happy ~/github/benchee $ mix run samples/run.exs
|
90
|
- Erlang/OTP 19 [erts-8.0] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
|
91
|
- Elixir 1.3.2
|
92
|
- Benchmark suite executing with the following configuration:
|
93
|
- warmup: 2.0s
|
94
|
- time: 3.0s
|
95
|
- parallel: 1
|
96
|
- Estimated total run time: 10.0s
|
97
|
-
|
98
|
- Benchmarking flat_map...
|
99
|
- Benchmarking map.flatten...
|
100
|
-
|
101
|
- Name ips average deviation median
|
102
|
- map.flatten 989.80 1.01 ms (±12.63%) 0.99 ms
|
103
|
- flat_map 647.35 1.54 ms (±10.54%) 1.56 ms
|
189
|
+ ##### With input Big (10 Million) #####
|
190
|
+ Name ips average deviation median
|
191
|
+ map tail-recursive different argument order 5.09 196.48 ms ±9.70% 191.18 ms
|
192
|
+ map tail-recursive 3.86 258.84 ms ±22.05% 246.03 ms
|
193
|
+ stdlib map 2.87 348.36 ms ±9.02% 345.21 ms
|
194
|
+ map simple body-recursive 2.85 350.80 ms ±9.03% 349.33 ms
|
104
195
|
|
105
196
|
Comparison:
|
106
|
- map.flatten 989.80
|
107
|
- flat_map 647.35 - 1.53x slower
|
197
|
+ map tail-recursive different argument order 5.09
|
198
|
+ map tail-recursive 3.86 - 1.32x slower
|
199
|
+ stdlib map 2.87 - 1.77x slower
|
200
|
+ map simple body-recursive 2.85 - 1.79x slower
|
201
|
+
|
202
|
+ ##### With input Middle (100 Thousand) #####
|
203
|
+ Name ips average deviation median
|
204
|
+ stdlib map 584.79 1.71 ms ±16.20% 1.67 ms
|
205
|
+ map simple body-recursive 581.89 1.72 ms ±15.38% 1.68 ms
|
206
|
+ map tail-recursive different argument order 531.09 1.88 ms ±17.41% 1.95 ms
|
207
|
+ map tail-recursive 471.64 2.12 ms ±18.93% 2.13 ms
|
208
|
+
|
209
|
+ Comparison:
|
210
|
+ stdlib map 584.79
|
211
|
+ map simple body-recursive 581.89 - 1.00x slower
|
212
|
+ map tail-recursive different argument order 531.09 - 1.10x slower
|
213
|
+ map tail-recursive 471.64 - 1.24x slower
|
214
|
+
|
215
|
+ ##### With input Small (1 Thousand) #####
|
216
|
+ Name ips average deviation median
|
217
|
+ stdlib map 66.10 K 15.13 μs ±58.17% 15.00 μs
|
218
|
+ map tail-recursive different argument order 62.46 K 16.01 μs ±31.43% 15.00 μs
|
219
|
+ map simple body-recursive 62.35 K 16.04 μs ±60.37% 15.00 μs
|
220
|
+ map tail-recursive 55.68 K 17.96 μs ±30.32% 17.00 μs
|
221
|
+
|
222
|
+ Comparison:
|
223
|
+ stdlib map 66.10 K
|
224
|
+ map tail-recursive different argument order 62.46 K - 1.06x slower
|
225
|
+ map simple body-recursive 62.35 K - 1.06x slower
|
226
|
+ map tail-recursive 55.68 K - 1.19x slower
|
108
227
|
```
|
109
228
|
|
110
|
- See the general description for the meaning of the different statistics.
|
229
|
+ As you can see, the tail-recursive approach is significantly faster for the _Big_ 10 Million input while body recursion outperforms it or performs just as well on the _Middle_ and _Small_ inputs.
|
230
|
+
|
231
|
+ Therefore, I **highly recommend** using this feature and checking different realistically structured and sized inputs for the functions you benchmark!
|
232
|
+
|
233
|
+ ### Formatters
|
234
|
+
|
235
|
+ Among all the configuration options, one that you probably want to use is `formatters`. Formatters are functions that take one argument (the benchmarking suite with all its results) and then generate some output. You can specify multiple formatters to run for one benchmarking run.
|
236
|
+
|
237
|
+ So if you are using the [HTML plugin](https://github.com/PragTob/benchee_html) and want to run both the console formatter and the HTML formatter, it looks like this (after you installed the plugin, of course):
|
238
|
+
|
239
|
+ ```elixir
|
240
|
+ list = Enum.to_list(1..10_000)
|
241
|
+ map_fun = fn(i) -> [i, i * i] end
|
242
|
+
|
243
|
+ Benchee.run(%{
|
244
|
+ "flat_map" => fn -> Enum.flat_map(list, map_fun) end,
|
245
|
+ "map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten end
|
246
|
+ },
|
247
|
+ formatters: [
|
248
|
+ &Benchee.Formatters.HTML.output/1,
|
249
|
+ &Benchee.Formatters.Console.output/1
|
250
|
+ ],
|
251
|
+ html: [file: "samples_output/my.html"],
|
252
|
+ )
|
253
|
+
|
254
|
+ ```
|
255
|
+
|
256
|
+ ### More expanded/verbose usage
|
111
257
|
|
112
258
|
It is important to note that the benchmarking code shown before is the convenience interface. The same benchmark in its more verbose form looks like this:
|
113
259
|
|
|
@@ -115,7 +261,7 @@ It is important to note that the benchmarking code shown before is the convenien
|
115
261
|
list = Enum.to_list(1..10_000)
|
116
262
|
map_fun = fn(i) -> [i, i * i] end
|
117
263
|
|
118
|
- Benchee.init(%{time: 3})
|
264
|
+ Benchee.init(time: 3)
|
119
265
|
|> Benchee.benchmark("flat_map", fn -> Enum.flat_map(list, map_fun) end)
|
120
266
|
|> Benchee.benchmark("map.flatten",
|
121
267
|
fn -> list |> Enum.map(map_fun) |> List.flatten end)
|
|
@@ -127,54 +273,36 @@ Benchee.init(%{time: 3})
|
127
273
|
This is a take on the _functional transformation_ of data applied to benchmarks here:
|
128
274
|
|
129
275
|
1. Configure the benchmarking suite to be run
|
130
|
- 2. run n benchmarks with the given configuration gathering raw run times per function (done in 2 steps, gathering the benchmarks and then running them with `Benchee.measure`)
|
131
|
- 3. Generate statistics based on the raw run times
|
132
|
- 4. Format the statistics in a suitable way
|
133
|
- 5. Output the formatted statistics
|
276
|
+ 2. Define the functions to be benchmarked
|
277
|
+ 3. Run n benchmarks with the given configuration gathering raw run times per function
|
278
|
+ 4. Generate statistics based on the raw run times
|
279
|
+ 5. Format the statistics in a suitable way
|
280
|
+ 6. Output the formatted statistics
|
134
281
|
|
135
|
- This is also part of the official API and allows for more fine grained control.
|
136
|
- Do you just want to have all the raw run times? Grab them before `Benchee.statistics`! Just want to have the calculated statistics and use your own formatting? Grab the result of `Benchee.statistics`! Or, maybe you want to write to a file or send an HTTP post to some online service? Just replace the `IO.puts`.
|
282
|
+ This is also part of the **official API** and allows for more **fine-grained control**. (It's also what Benchee does internally when you use `Benchee.run/2`.)
|
283
|
+
|
284
|
+ Do you just want to have all the raw run times? Just work with the result of `Benchee.measure/1`! Just want the calculated statistics with your own formatting? Grab the result of `Benchee.statistics/1`! Or maybe you want to write to a file or send an HTTP POST to some online service? Just use `Benchee.Formatters.Console.format/1` and then send the result where you want.
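For example, a sketch of grabbing the console output and writing it to a file instead of printing it - this assumes `list` and `map_fun` are defined as in the verbose example above, and treats the exact return shape of `format/1` as an assumption (`IO.iodata_to_binary/1` covers both a string and a list of strings):

```elixir
formatted =
  Benchee.init(time: 3)
  |> Benchee.benchmark("flat_map", fn -> Enum.flat_map(list, map_fun) end)
  |> Benchee.measure
  |> Benchee.statistics
  |> Benchee.Formatters.Console.format

# Send the formatted report wherever you want instead of IO.puts
File.write!("results.txt", IO.iodata_to_binary(formatted))
```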
|
137
285
|
|
138
286
|
This way Benchee should be flexible enough to suit your needs and be extended at will. Have a look at the [available plugins](#plugins).
|
139
287
|
|
140
|
- For more example usages and benchmarks have a look at the [`samples`](https://github.com/PragTob/benchee/tree/master/samples) directory!
|
141
|
-
|
142
|
- ## Formatters
|
143
|
-
|
144
|
- Among all the configuration options, one that you probably want to use are the formatters. Formatters are functions that take one argument (the benchmarking suite with all its results) and then generate some output. You can specify multiple formatters to run for the benchmarking run.
|
145
|
-
|
146
|
- So if you are using the [CSV plugin](https://github.com/PragTob/benchee_csv) and you want to run both the console formatter and the CSV formatter this looks like this:
|
147
|
-
|
148
|
- ```elixir
|
149
|
- list = Enum.to_list(1..10_000)
|
150
|
- map_fun = fn(i) -> [i, i * i] end
|
151
|
-
|
152
|
- Benchee.run(
|
153
|
- %{
|
154
|
- formatters: [
|
155
|
- &Benchee.Formatters.CSV.output/1,
|
156
|
- &Benchee.Formatters.Console.output/1
|
157
|
- ],
|
158
|
- csv: %{file: "my.csv"}
|
159
|
- },
|
160
|
- %{
|
161
|
- "flat_map" => fn -> Enum.flat_map(list, map_fun) end,
|
162
|
- "map.flatten" => fn -> list |> Enum.map(map_fun) |> List.flatten end
|
163
|
- })
|
164
|
- ```
|
165
|
-
|
166
288
|
## Plugins
|
167
289
|
|
168
290
|
Packages that work with Benchee to provide additional functionality.
|
169
291
|
|
170
|
- * [BencheeCSV](//github.com/PragTob/benchee_csv) - generate CSV from your Benchee benchmark results so you can import them into your favorite spreadsheet tool and make fancy graphs
|
292
|
+ * [benchee_html](//github.com/PragTob/benchee_html) - generate HTML including a data table and many different graphs with the possibility to export individual graphs as PNG :)
|
293
|
+ * [benchee_csv](//github.com/PragTob/benchee_csv) - generate CSV from your Benchee benchmark results so you can import them into your favorite spreadsheet tool and make fancy graphs
|
294
|
+ * [benchee_json](//github.com/PragTob/benchee_json) - export suite results as JSON to feed anywhere or feed it to your JavaScript and make magic happen :)
|
171
295
|
|
172
|
- (You didn't really expect to find tons of plugins here when the library was just released, did you? ;) )
|
296
|
+ With the HTML plugin, for instance, you can get fancy graphs like this boxplot (a normal bar chart is there as well):
|
297
|
+
|
298
|
+ ![boxplot](https://www.pragtob.info/benchee/images/boxplot.png)
|
173
299
|
|
174
300
|
## Contributing
|
175
301
|
|
176
302
|
Contributions to Benchee are very welcome! Bug reports, documentation, spelling corrections, whole features, feature ideas, bugfixes, new plugins, fancy graphics... all of those (and probably more) are much appreciated contributions!
|
177
303
|
|
304
|
+ Please respect the [Code of Conduct](//github.com/PragTob/benchee/blob/master/CODE_OF_CONDUCT.md).
|
305
|
+
|
178
306
|
You can get started with a look at the [open issues](https://github.com/PragTob/benchee/issues).
|
179
307
|
|
180
308
|
A couple of (hopefully) helpful points:
|
changed
hex_metadata.config
|
@@ -10,8 +10,11 @@
|
10
10
|
<<"lib/benchee/conversion/duration.ex">>,
|
11
11
|
<<"lib/benchee/conversion/format.ex">>,
|
12
12
|
<<"lib/benchee/conversion/scale.ex">>,<<"lib/benchee/conversion/unit.ex">>,
|
13
|
- <<"lib/benchee/formatters/console.ex">>,<<"lib/benchee/repeat_n.ex">>,
|
14
|
- <<"lib/benchee/statistics.ex">>,<<"mix.exs">>,<<"README.md">>,
|
13
|
+ <<"lib/benchee/formatters/console.ex">>,<<"lib/benchee/statistics.ex">>,
|
14
|
+ <<"lib/benchee/system.ex">>,<<"lib/benchee/utility/deep_convert.ex">>,
|
15
|
+ <<"lib/benchee/utility/file_creation.ex">>,
|
16
|
+ <<"lib/benchee/utility/map_value.ex">>,
|
17
|
+ <<"lib/benchee/utility/repeat_n.ex">>,<<"mix.exs">>,<<"README.md">>,
|
15
18
|
<<"LICENSE.md">>,<<"CHANGELOG.md">>]}.
|
16
19
|
{<<"licenses">>,[<<"MIT">>]}.
|
17
20
|
{<<"links">>,
|
|
@@ -19,5 +22,9 @@
|
19
22
|
{<<"github">>,<<"https://github.com/PragTob/benchee">>}]}.
|
20
23
|
{<<"maintainers">>,[<<"Tobias Pfeiffer">>]}.
|
21
24
|
{<<"name">>,<<"benchee">>}.
|
22
|
- {<<"requirements">>,[]}.
|
23
|
- {<<"version">>,<<"0.5.0">>}.
|
25
|
+ {<<"requirements">>,
|
26
|
+ [[{<<"app">>,<<"deep_merge">>},
|
27
|
+ {<<"name">>,<<"deep_merge">>},
|
28
|
+ {<<"optional">>,false},
|
29
|
+ {<<"requirement">>,<<"~> 0.1">>}]]}.
|
30
|
+ {<<"version">>,<<"0.6.0">>}.
|
changed
lib/benchee.ex
|
@@ -7,32 +7,42 @@ defmodule Benchee do
|
7
7
|
alias Benchee.{Statistics, Config, Benchmark}
|
8
8
|
|
9
9
|
@doc """
|
10
|
- High level interface that runs the given benchmarks and prints the results on
|
11
|
- the console. It is given an optional config and an array of tuples
|
12
|
- of names and functions to benchmark. For configuration options see the
|
13
|
- documentation of `Benchee.Config.init/1`.
|
10
|
+ Run benchmark jobs defined by a map and optionally provide configuration
|
11
|
+ options.
|
12
|
+
|
13
|
+ Runs the given benchmarks and prints the results on the console.
|
14
|
+
|
15
|
+ * jobs - a map from descriptive benchmark job name to a function to be
|
16
|
+ executed and benchmarked
|
17
|
+ * config - configuration options to alter what Benchee does, see
|
18
|
+ `Benchee.Config.init/1` for documentation of the available options.
|
14
19
|
|
15
20
|
## Examples
|
16
21
|
|
17
|
- Benchee.run(%{time: 3},
|
18
|
- %{"My Benchmark" => fn -> 1 + 1 end,
|
19
|
- "My other benchmrk" => fn -> "1" ++ "1" end})
|
22
|
+ Benchee.run(%{"My Benchmark" => fn -> 1 + 1 end,
|
23
|
+ "My other benchmrk" => fn -> "1" ++ "1" end}, time: 3)
|
20
24
|
# Prints a summary of the benchmark to the console
|
21
25
|
|
22
26
|
"""
|
23
|
- def run(config \\ %{}, jobs)
|
24
|
- def run(config, jobs) when is_list(jobs) do
|
25
|
- map_jobs = Enum.into jobs, %{}
|
26
|
- run(config, map_jobs)
|
27
|
+ def run(jobs, config \\ [])
|
28
|
+ def run(jobs, config) when is_list(config) do
|
29
|
+ do_run(jobs, config)
|
27
30
|
end
|
28
|
- def run(config, jobs) do
|
29
|
- suite = run_benchmarks config, jobs
|
30
|
- output_results suite
|
31
|
+ def run(config, jobs) when is_map(jobs) do
|
32
|
+ # pre 0.6.0 way of passing in the config first and as a map
|
33
|
+ do_run(jobs, config)
|
31
34
|
end
|
32
35
|
|
33
|
- defp run_benchmarks(config, jobs) do
|
36
|
+ defp do_run(jobs, config) do
|
37
|
+ suite = run_benchmarks jobs, config
|
38
|
+ output_results suite
|
39
|
+ suite
|
40
|
+ end
|
41
|
+
|
42
|
+ defp run_benchmarks(jobs, config) do
|
34
43
|
config
|
35
44
|
|> Benchee.init
|
45
|
+ |> Benchee.System.system
|
36
46
|
|> Map.put(:jobs, jobs)
|
37
47
|
|> Benchee.measure
|
38
48
|
|> Statistics.statistics
|
changed lib/benchee/benchmark.ex

@@ -5,7 +5,7 @@ defmodule Benchee.Benchmark do
   Exposes `benchmark` function.
   """
 
-  alias Benchee.RepeatN
+  alias Benchee.Utility.RepeatN
   alias Benchee.Conversion.Duration
 
   @doc """
@@ -38,10 +38,8 @@ defmodule Benchee.Benchmark do
   """
   def measure(suite = %{jobs: jobs, config: config}) do
     print_configuration_information(jobs, config)
-    run_times =
-      jobs
-      |> Enum.map(fn(job) -> measure_job(job, config) end)
-      |> Map.new
+    run_times = record_runtimes(jobs, config)
+
     Map.put suite, :run_times, run_times
   end
 
@@ -49,7 +47,7 @@ defmodule Benchee.Benchmark do
     nil
   end
   defp print_configuration_information(jobs, config) do
-    print_system_information
+    print_system_information()
     print_suite_information(jobs, config)
   end
 
@@ -60,28 +58,76 @@ defmodule Benchee.Benchmark do
 
   defp print_suite_information(jobs, %{parallel: parallel,
                                        time:     time,
-                                       warmup:   warmup}) do
+                                       warmup:   warmup,
+                                       inputs:   inputs}) do
     warmup_seconds = time_precision Duration.scale(warmup, :second)
     time_seconds   = time_precision Duration.scale(time, :second)
     job_count      = map_size jobs
-    total_time     = time_precision(job_count * (warmup_seconds + time_seconds))
+    exec_time      = warmup_seconds + time_seconds
+    total_time     = time_precision(job_count * inputs_count(inputs) * exec_time)
 
     IO.puts "Benchmark suite executing with the following configuration:"
     IO.puts "warmup: #{warmup_seconds}s"
     IO.puts "time: #{time_seconds}s"
     IO.puts "parallel: #{parallel}"
+    IO.puts "inputs: #{inputs_out(inputs)}"
     IO.puts "Estimated total run time: #{total_time}s"
     IO.puts ""
   end
 
+  defp inputs_count(nil),    do: 1 # no input specified still executes
+  defp inputs_count(inputs), do: map_size(inputs)
+
+  defp inputs_out(nil), do: "none specified"
+  defp inputs_out(inputs) do
+    inputs
+    |> Map.keys
+    |> Enum.join(", ")
+  end
+
   @round_precision 2
   defp time_precision(float) do
     Float.round(float, @round_precision)
   end
 
-  defp measure_job({name, function}, config) do
+  @no_input :__no_input
+  @no_input_marker {@no_input, @no_input}
+
+  @doc """
+  Key in the result for when there were no inputs given.
+  """
+  def no_input, do: @no_input
+
+  defp record_runtimes(jobs, config = %{inputs: nil}) do
+    [runtimes_for_input(@no_input_marker, jobs, config)]
+    |> Map.new
+  end
+  defp record_runtimes(jobs, config = %{inputs: inputs}) do
+    inputs
+    |> Enum.map(fn(input) -> runtimes_for_input(input, jobs, config) end)
+    |> Map.new
+  end
+
+  defp runtimes_for_input({input_name, input}, jobs, config) do
+    print_input_information(input_name)
+
+    results = jobs
+              |> Enum.map(fn(job) -> measure_job(job, input, config) end)
+              |> Map.new
+
+    {input_name, results}
+  end
+
+  defp print_input_information(@no_input) do
+    # noop
+  end
+  defp print_input_information(input_name) do
+    IO.puts "\nBenchmarking with input #{input_name}:"
+  end
+
+  defp measure_job({name, function}, input, config) do
     print_benchmarking name, config
-    job_run_times = parallel_benchmark function, config
+    job_run_times = parallel_benchmark function, input, config
     {name, job_run_times}
   end
 
@@ -93,13 +139,14 @@ defmodule Benchee.Benchmark do
   end
 
   defp parallel_benchmark(function,
+                          input,
                           %{parallel: parallel,
                             time:     time,
                             warmup:   warmup,
                             print:    %{fast_warning: fast_warning}}) do
     pmap 1..parallel, fn ->
-      run_warmup function, warmup
-      measure_runtimes function, time, fast_warning
+      run_warmup function, input, warmup
+      measure_runtimes function, input, time, fast_warning
     end
   end
 
@@ -110,55 +157,47 @@ defmodule Benchee.Benchmark do
     |> List.flatten
   end
 
-  defp run_warmup(function, time) do
-    measure_runtimes(function, time, false)
+  defp run_warmup(function, input, time) do
+    measure_runtimes(function, input, time, false)
   end
 
-  defp measure_runtimes(function, time, display_fast_warning)
-  defp measure_runtimes(_function, 0, _) do
+  defp measure_runtimes(function, input, time, display_fast_warning)
+  defp measure_runtimes(_function, _input, 0, _) do
     []
   end
 
-  defp measure_runtimes(function, time, display_fast_warning) do
-    finish_time = current_time + time
+  defp measure_runtimes(function, input, time, display_fast_warning) do
+    finish_time = current_time() + time
     :erlang.garbage_collect
-    {n, initial_run_time} = determine_n_times(function, display_fast_warning)
-    do_benchmark(finish_time, function, [initial_run_time], n)
+    {n, initial_run_time} = determine_n_times(function, input, display_fast_warning)
+    do_benchmark(finish_time, function, input, [initial_run_time], n, current_time())
   end
 
   defp current_time do
     :erlang.system_time :micro_seconds
   end
 
-  # testing has shown that sometimes the first call is significantly slower
-  # than the second (like 2 vs 800) so prewarm one time.
-  defp prewarm(function, n \\ 1) do
-    measure_call(function, n)
-  end
-
   # If a function executes way too fast measurements are too unreliable and
   # with too high variance. Therefore determine an n how often it should be
   # executed in the measurement cycle.
   @minimum_execution_time 10
   @times_multiplicator 10
-  defp determine_n_times(function, display_fast_warning) do
-    prewarm function
-    run_time = measure_call function
+  defp determine_n_times(function, input, display_fast_warning) do
+    run_time = measure_call function, input
     if run_time >= @minimum_execution_time do
       {1, run_time}
     else
-      if display_fast_warning, do: print_fast_warning
-      try_n_times(function, @times_multiplicator)
+      if display_fast_warning, do: print_fast_warning()
+      try_n_times(function, input, @times_multiplicator)
     end
   end
 
-  defp try_n_times(function, n) do
-    prewarm function, n
-    run_time = measure_call_n_times function, n
+  defp try_n_times(function, input, n) do
+    run_time = measure_call_n_times function, input, n
     if run_time >= @minimum_execution_time do
       {n, run_time / n}
     else
-      try_n_times(function, n * @times_multiplicator)
+      try_n_times(function, input, n * @times_multiplicator)
     end
   end
 
@@ -169,30 +208,46 @@ defmodule Benchee.Benchmark do
     IO.puts @fast_warning
   end
 
-  defp do_benchmark(finish_time, function, run_times, n, now \\ 0)
-  defp do_benchmark(finish_time, _, run_times, _n, now) when now > finish_time do
-    run_times
+  defp do_benchmark(finish_time, function, input, run_times, n, now)
+  defp do_benchmark(finish_time, _, _, run_times, _n, now)
+       when now > finish_time do
+    Enum.reverse run_times # restore correct order important for graphing
   end
-  defp do_benchmark(finish_time, function, run_times, n, _now) do
-    run_time = measure_call(function, n)
+  defp do_benchmark(finish_time, function, input, run_times, n, _now) do
+    run_time = measure_call(function, input, n)
     updated_run_times = [run_time | run_times]
-    do_benchmark(finish_time, function, updated_run_times, n, current_time())
+    do_benchmark(finish_time, function, input,
+                 updated_run_times, n, current_time())
   end
 
-  defp measure_call(function, n \\ 1)
-  defp measure_call(function, 1) do
+  defp measure_call(function, input, n \\ 1)
+  defp measure_call(function, @no_input, 1) do
     {microseconds, _return_value} = :timer.tc function
     microseconds
   end
-  defp measure_call(function, n) do
-    measure_call_n_times(function, n) / n
+  defp measure_call(function, input, 1) do
+    {microseconds, _return_value} = :timer.tc function, [input]
+    microseconds
+  end
+  defp measure_call(function, input, n) do
+    measure_call_n_times(function, input, n) / n
   end
 
-  defp measure_call_n_times(function, n) do
+  defp measure_call_n_times(function, @no_input, n) do
     {microseconds, _return_value} = :timer.tc fn ->
       RepeatN.repeat_n(function, n)
     end
+    microseconds
+  end
+  defp measure_call_n_times(function, input, n) do
+    call_with_arg = fn ->
+      function.(input)
+    end
+    {microseconds, _return_value} = :timer.tc fn ->
+      RepeatN.repeat_n(call_with_arg, n)
+    end
 
     microseconds
   end
 end
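The repeat-n measurement that `determine_n_times/3` and `measure_call_n_times/3` implement can be sketched standalone: time `n` repetitions of a function with `:timer.tc/1`, divide by `n`, and grow `n` until the measured block runs long enough to trust. `RepeatTiming` and its threshold values are illustrative stand-ins of my own, not Benchee's tuned internals:

```elixir
defmodule RepeatTiming do
  @minimum_execution_time 10   # microseconds; same idea as Benchee's threshold
  @times_multiplicator 10

  # Times `n` back-to-back calls of a zero-arity function, returning total µs.
  def time_n(function, n) do
    {microseconds, _return_value} = :timer.tc(fn -> do_repeat(function, n) end)
    microseconds
  end

  defp do_repeat(_function, 0), do: :ok
  defp do_repeat(function, n) do
    function.()
    do_repeat(function, n - 1)
  end

  # Grows n until total runtime crosses the minimum, then returns
  # {n, average microseconds per call}.
  def determine_n_times(function, n \\ 1) do
    run_time = time_n(function, n)
    if run_time >= @minimum_execution_time do
      {n, run_time / n}
    else
      determine_n_times(function, n * @times_multiplicator)
    end
  end
end
```

For a very fast function the returned `n` ends up well above 1, which is exactly why the per-call average stays usable where a single timed call would round to zero.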
changed lib/benchee/config.ex

@@ -4,10 +4,12 @@ defmodule Benchee.Config do
   """
 
   alias Benchee.Conversion.Duration
+  alias Benchee.Utility.DeepConvert
 
   @doc """
   Returns the initial benchmark configuration for Benchee, composed of defaults
   and an optional custom configuration.
+
   Configuration times are given in seconds, but are converted to microseconds
   internally.
 
@@ -17,6 +19,11 @@ defmodule Benchee.Config do
     how often it is executed). Defaults to 5.
   * `warmup` - the time in seconds for which the benchmarking function
     should be run without gathering results. Defaults to 2.
+  * `inputs` - a map from descriptive input names to some different input,
+    your benchmarking jobs will then be run with each of these inputs. For this
+    to work your benchmarking function gets the current input passed in as an
+    argument into the function. Defaults to `nil`, aka no input specified and
+    functions are called without an argument.
   * `parallel` - each job will be executed in `parallel` number processes.
     Gives you more data in the same time, but also puts a load on the system
     interfering with benchmark results. Defaults to 1.
 
@@ -64,6 +71,26 @@ defmodule Benchee.Config do
           parallel: 1,
           time: 5_000_000,
           warmup: 2_000_000,
+          inputs: nil,
+          formatters: [&Benchee.Formatters.Console.output/1],
+          print: %{
+            benchmarking: true,
+            fast_warning: true,
+            configuration: true
+          },
+          console: %{ comparison: true, unit_scaling: :best }
+        },
+        jobs: %{}
+      }
+
+      iex> Benchee.init time: 1, warmup: 0.2
+      %{
+        config:
+          %{
+            parallel: 1,
+            time: 1_000_000,
+            warmup: 200_000.0,
+            inputs: nil,
           formatters: [&Benchee.Formatters.Console.output/1],
           print: %{
             benchmarking: true,
 
@@ -82,6 +109,7 @@ defmodule Benchee.Config do
           parallel: 1,
           time: 1_000_000,
           warmup: 200_000.0,
+          inputs: nil,
           formatters: [&Benchee.Formatters.Console.output/1],
           print: %{
             benchmarking: true,
 
@@ -93,13 +121,21 @@ defmodule Benchee.Config do
         jobs: %{}
       }
 
-      iex> Benchee.init %{parallel: 2, time: 1, warmup: 0.2, formatters: [&IO.puts/2], print: %{fast_warning: false}, console: %{unit_scaling: :smallest}}
+      iex> Benchee.init(
+      ...>   parallel: 2,
+      ...>   time: 1,
+      ...>   warmup: 0.2,
+      ...>   formatters: [&IO.puts/2],
+      ...>   print: [fast_warning: false],
+      ...>   console: [unit_scaling: :smallest],
+      ...>   inputs: %{"Small" => 5, "Big" => 9999})
       %{
         config:
           %{
             parallel: 2,
             time: 1_000_000,
             warmup: 200_000.0,
+            inputs: %{"Small" => 5, "Big" => 9999},
             formatters: [&IO.puts/2],
             print: %{
               benchmarking: true,
 
@@ -116,6 +152,7 @@ defmodule Benchee.Config do
     time: 5,
     warmup: 2,
     formatters: [&Benchee.Formatters.Console.output/1],
+    inputs: nil,
     print: %{
       benchmarking: true,
       configuration: true,
 
@@ -128,11 +165,11 @@ defmodule Benchee.Config do
   }
   @time_keys [:time, :warmup]
   def init(config \\ %{}) do
-    print = print_config config
-    console = console_config config
-    config = convert_time_to_micro_s(Map.merge(@default_config, config))
-    config = %{config | print: print, console: console}
-    :ok = :timer.start
+    map_config = DeepConvert.to_map(config)
+    config = @default_config
+             |> DeepMerge.deep_merge(map_config)
+             |> convert_time_to_micro_s
+    :ok = :timer.start
     %{config: config, jobs: %{}}
   end
 
@@ -144,18 +181,4 @@ defmodule Benchee.Config do
       new_config
     end
   end
-
-  defp print_config(%{print: config}) do
-    Map.merge @default_config.print, config
-  end
-  defp print_config(_no_print_config) do
-    @default_config.print
-  end
-
-  defp console_config(%{console: config}) do
-    Map.merge @default_config.console, config
-  end
-  defp console_config(_no_console_config) do
-    @default_config.console
-  end
 end
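`init/1` now leans on the `deep_merge` package (`DeepMerge.deep_merge/2`) instead of the removed per-key `print_config`/`console_config` helpers. The recursive merge it relies on can be sketched with `Map.merge/3`; `ConfigMerge` is a simplified stand-in of my own, not the library's implementation:

```elixir
defmodule ConfigMerge do
  # Recursively merges user overrides into defaults: where both sides hold
  # maps the merge recurses, otherwise the override value wins.
  def deep_merge(defaults, override) do
    Map.merge(defaults, override, &resolve/3)
  end

  defp resolve(_key, default, override)
       when is_map(default) and is_map(override) do
    deep_merge(default, override)
  end
  defp resolve(_key, _default, override), do: override
end

defaults = %{time: 5, print: %{fast_warning: true, benchmarking: true}}
merged   = ConfigMerge.deep_merge(defaults, %{time: 1, print: %{fast_warning: false}})
```

This is why a partial `print: [fast_warning: false]` no longer wipes out the other `print` defaults the way a flat `Map.merge/2` would.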
changed lib/benchee/formatters/console.ex

@@ -20,31 +20,67 @@ defmodule Benchee.Formatters.Console do
   def output(suite) do
     suite
     |> format
-    |> IO.puts
+    |> IO.write
   end
 
   @doc """
   Formats the benchmark statistics to a report suitable for output on the CLI.
 
+  Returns a list of lists, where each list element is a group belonging to one
+  specific input. So if there only was one (or no) input given through `:inputs`
+  then there's just one list inside.
+
   ## Examples
 
   ```
   iex> jobs = %{"My Job" => %{average: 200.0, ips: 5000.0, std_dev_ratio: 0.1, median: 190.0}, "Job 2" => %{average: 400.0, ips: 2500.0, std_dev_ratio: 0.2, median: 390.0}}
-  iex> Benchee.Formatters.Console.format(%{statistics: jobs, config: %{console: %{comparison: false, unit_scaling: :best}}})
-  ["\nName ips average deviation median\n",
+  iex> inputs = %{"My input" => jobs}
+  iex> Benchee.Formatters.Console.format(%{statistics: inputs, config: %{console: %{comparison: false, unit_scaling: :best}}})
+  [["\n##### With input My input #####", "\nName ips average deviation median\n",
   "My Job 5.00 K 200.00 μs ±10.00% 190.00 μs\n",
-  "Job 2 2.50 K 400.00 μs ±20.00% 390.00 μs"]
+  "Job 2 2.50 K 400.00 μs ±20.00% 390.00 μs\n"]]
 
   ```
 
   """
-  def format(%{statistics: job_stats, config: %{console: config}}) do
+  def format(%{statistics: jobs_per_input, config: %{console: config}}) do
+    jobs_per_input
+    |> Enum.map(fn({input, jobs_stats}) ->
+         [input_header(input) | format_jobs(jobs_stats, config)]
+       end)
+  end
+
+  defp input_header(input) do
+    no_input_marker = Benchee.Benchmark.no_input
+    case input do
+      ^no_input_marker -> ""
+      _                -> "\n##### With input #{input} #####"
+    end
+  end
+
+  @doc """
+  Formats the job statistics to a report suitable for output on the CLI.
+
+  ## Examples
+
+  ```
+  iex> jobs = %{"My Job" => %{average: 200.0, ips: 5000.0, std_dev_ratio: 0.1, median: 190.0}, "Job 2" => %{average: 400.0, ips: 2500.0, std_dev_ratio: 0.2, median: 390.0}}
+  iex> Benchee.Formatters.Console.format_jobs(jobs, %{comparison: false, unit_scaling: :best})
+  ["\nName ips average deviation median\n",
+  "My Job 5.00 K 200.00 μs ±10.00% 190.00 μs\n",
+  "Job 2 2.50 K 400.00 μs ±20.00% 390.00 μs\n"]
+
+  ```
+
+  """
+  def format_jobs(job_stats, config) do
     sorted_stats = Statistics.sort(job_stats)
     units = units(sorted_stats, config)
     label_width = label_width job_stats
-    [column_descriptors(label_width) | job_reports(sorted_stats, units, label_width)
+
+    [column_descriptors(label_width) |
+      job_reports(sorted_stats, units, label_width)
       ++ comparison_report(sorted_stats, units, label_width, config)]
-    |> remove_last_blank_line
   end
 
   defp column_descriptors(label_width) do
 
@@ -64,7 +100,7 @@ defmodule Benchee.Formatters.Console do
   end
 
   defp job_reports(jobs, units, label_width) do
-    Enum.map(jobs, fn(job) -> format_job job, units, label_width end)
+    Enum.map(jobs, fn(job) -> format_jobs job, units, label_width end)
   end
 
   defp units(jobs, %{unit_scaling: scaling_strategy}) do
 
@@ -88,7 +124,7 @@ defmodule Benchee.Formatters.Console do
     }
   end
 
-  defp format_job({name, %{average: average,
+  defp format_jobs({name, %{average: average,
                            ips: ips,
                            std_dev_ratio: std_dev_ratio,
                            median: median}
 
@@ -124,7 +160,7 @@ defmodule Benchee.Formatters.Console do
   end
   defp comparison_report([reference | other_jobs], units, label_width, _config) do
     [
-      comparison_descriptor,
+      comparison_descriptor(),
       reference_report(reference, units, label_width) |
      comparisons(reference, units, label_width, other_jobs)
     ]
 
@@ -151,12 +187,4 @@ defmodule Benchee.Formatters.Console do
   defp comparison_descriptor do
     "\nComparison: \n"
   end
-
-  defp remove_last_blank_line([head]) do
-    [String.rstrip(head)]
-  end
-  defp remove_last_blank_line([head | tail]) do
-    [head | remove_last_blank_line(tail)]
-  end
-
 end
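The `input_header/1` function above tells the `:__no_input` marker apart from real input names with a pinned match in a `case`. The pattern is easy to exercise in isolation; `InputHeader` is a sketch (the marker atom comes from the diff, the module name is mine):

```elixir
defmodule InputHeader do
  @no_input :__no_input

  # Empty header when no input was configured, a banner line otherwise.
  def input_header(input) do
    no_input_marker = @no_input
    case input do
      ^no_input_marker -> ""
      _ -> "\n##### With input #{input} #####"
    end
  end
end
```

Without the pin (`^`) the first clause would rebind `no_input_marker` and match every input, so every report would get an empty header.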
removed lib/benchee/repeat_n.ex

@@ -1,22 +0,0 @@
-defmodule Benchee.RepeatN do
-  @moduledoc """
-  Simple helper module that can easily make a function call repeat n times.
-  Which is significantly faster than Enum.each/list comprehension.
-
-  Check out the benchmark in `samples/repeat_n.exs`
-  """
-
-  @doc """
-  Calls the given function n times.
-  """
-  def repeat_n(_function, 0) do
-    # noop
-  end
-  def repeat_n(function, 1) do
-    function.()
-  end
-  def repeat_n(function, count) do
-    function.()
-    repeat_n(function, count - 1)
-  end
-end
changed lib/benchee/statistics.ex

@@ -5,6 +5,7 @@ defmodule Benchee.Statistics do
   """
 
   alias Benchee.{Statistics, Conversion.Duration}
+  import Benchee.Utility.MapValues
   require Integer
 
   @doc """
@@ -36,33 +37,35 @@ defmodule Benchee.Statistics do
   ## Examples
 
       iex> run_times = [200, 400, 400, 400, 500, 500, 700, 900]
-      iex> suite = %{run_times: %{"My Job" => run_times}}
+      iex> suite = %{run_times: %{"Input" => %{"My Job" => run_times}}}
       iex> Benchee.Statistics.statistics(suite)
       %{
        statistics: %{
-          "My Job" => %{
-            average:       500.0,
-            ips:           2000.0,
-            std_dev:       200.0,
-            std_dev_ratio: 0.4,
-            std_dev_ips:   800.0,
-            median:        450.0,
-            minimum:       200,
-            maximum:       900,
-            sample_size:   8
+          "Input" => %{
+            "My Job" => %{
+              average:       500.0,
+              ips:           2000.0,
+              std_dev:       200.0,
+              std_dev_ratio: 0.4,
+              std_dev_ips:   800.0,
+              median:        450.0,
+              minimum:       200,
+              maximum:       900,
+              sample_size:   8
+            }
          }
        },
-        run_times: %{"My Job" => [200, 400, 400, 400, 500, 500, 700, 900]}
+        run_times: %{
+          "Input" => %{
+            "My Job" => [200, 400, 400, 400, 500, 500, 700, 900]
+          }
+        }
      }
 
  """
-  def statistics(suite = %{run_times: run_times}) do
-    statistics =
-      run_times
-      |> Enum.map(fn({name, job_run_times}) ->
-           {name, Statistics.job_statistics(job_run_times)}
-         end)
-      |> Map.new
+  def statistics(suite = %{run_times: run_times_per_input}) do
+    statistics = run_times_per_input
+                 |> map_values(&Statistics.job_statistics/1)
 
    Map.put suite, :statistics, statistics
  end
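The doctest values above can be checked by hand. This sketch recomputes average, ips, standard deviation, and the deviation ratio for the sample run times; the formulas are inferred from the expected doctest output, not copied from `Statistics.job_statistics/1`:

```elixir
run_times = [200, 400, 400, 400, 500, 500, 700, 900]

sample_size = length(run_times)
average = Enum.sum(run_times) / sample_size   # microseconds per call
ips = 1_000_000 / average                     # iterations per second

# Population variance: mean of squared deviations from the average.
variance =
  run_times
  |> Enum.map(fn time -> :math.pow(time - average, 2) end)
  |> Enum.sum()
  |> Kernel./(sample_size)

std_dev = :math.sqrt(variance)
std_dev_ratio = std_dev / average
```

Running the arithmetic reproduces the doctest: an average of 500.0 µs, 2000.0 ips, a standard deviation of 200.0, and a ratio of 0.4.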
added lib/benchee/system.ex

@@ -0,0 +1,32 @@
+defmodule Benchee.System do
+  @moduledoc """
+  Provides information about the system the benchmarks are run on.
+  """
+
+  @doc """
+  Adds system information to the suite (currently elixir and erlang versions).
+  """
+  def system(suite) do
+    versions = %{elixir: elixir(), erlang: erlang()}
+    Map.put suite, :system, versions
+  end
+
+  @doc """
+  Returns current Elixir version in use.
+  """
+  def elixir, do: System.version
+
+  @doc """
+  Returns the current erlang/otp version in use.
+  """
+  def erlang do
+    otp_release = :erlang.system_info(:otp_release)
+    file = Path.join([:code.root_dir, "releases", otp_release, "OTP_VERSION"])
+    case File.read(file) do
+      {:ok, version} -> String.strip(version)
+      {:error, reason} ->
+        IO.puts "Error trying to determine erlang version #{reason}"
+    end
+  end
+
+end
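The new `Benchee.System.system/1` pipeline step only annotates the suite map with version strings. A minimal sketch of the same shape, using `System.version/0` for Elixir and `:erlang.system_info(:otp_release)` for the OTP release (the OTP_VERSION file read from the module above is skipped here):

```elixir
suite = %{config: %{}, jobs: %{}}

versions = %{
  elixir: System.version(),
  # system_info returns a charlist, so convert it to a binary
  erlang: List.to_string(:erlang.system_info(:otp_release))
}

suite = Map.put(suite, :system, versions)
```

Note that `:otp_release` yields only the major release (e.g. "19"), which is why the real module goes on to read the OTP_VERSION file for the full version.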
added lib/benchee/utility/deep_convert.ex

@@ -0,0 +1,34 @@
+defmodule Benchee.Utility.DeepConvert do
+  @moduledoc false
+
+  @doc """
+  Converts a deep keywordlist to the corresponding deep map.
+
+  ## Examples
+
+      iex> Benchee.Utility.DeepConvert.to_map([a: 1, b: 2])
+      %{a: 1, b: 2}
+
+      iex> Benchee.Utility.DeepConvert.to_map([a: [b: 2], c: [d: 3, e: 4, e: 5]])
+      %{a: %{b: 2}, c: %{d: 3, e: 5}}
+
+      iex> Benchee.Utility.DeepConvert.to_map([a: [b: 2], c: [1, 2, 3], d: []])
+      %{a: %{b: 2}, c: [1, 2, 3], d: []}
+
+      iex> Benchee.Utility.DeepConvert.to_map(%{a: %{b: 2}, c: %{d: 3, e: 5}})
+      %{a: %{b: 2}, c: %{d: 3, e: 5}}
+
+      iex> Benchee.Utility.DeepConvert.to_map([])
+      %{}
+  """
+  def to_map([]), do: %{}
+  def to_map(structure), do: do_to_map(structure)
+
+  defp do_to_map(kwlist = [{_key, _value} | _tail]) do
+    kwlist
+    |> Enum.map(fn({key, value}) -> {key, do_to_map(value)} end)
+    |> Map.new
+  end
+  defp do_to_map(no_list), do: no_list
+
+end
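The subtle edge cases of that conversion (duplicate keyword keys collapse to the last value, plain lists and nested empty lists pass through untouched) can be demonstrated with a condensed re-implementation; `KwToMap` is my own sketch mirroring the module above, not the module itself:

```elixir
defmodule KwToMap do
  # A top-level empty list is treated as "no options", i.e. an empty map.
  def to_map([]), do: %{}
  def to_map(structure), do: convert(structure)

  # Keyword lists become maps recursively; Map collection means a
  # duplicated key keeps its last value. Everything else passes through,
  # so nested plain lists (and nested []) survive unchanged.
  defp convert(kwlist = [{_key, _value} | _tail]) do
    Enum.into(kwlist, %{}, fn {key, value} -> {key, convert(value)} end)
  end
  defp convert(other), do: other
end
```

The top-level/nested split is deliberate: only the outermost `[]` means "empty config", a nested `[]` is kept as a list value.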
added lib/benchee/utility/file_creation.ex

@@ -0,0 +1,84 @@
+defmodule Benchee.Utility.FileCreation do
+  @moduledoc """
+  Methods to create files used in plugins.
+  """
+
+  @doc """
+  Opens a file for writing for each key/value pair, interleaves the file name
+  and calls the function with file, content and filename.
+
+  Uses `Benchee.Utility.FileCreation.interleave/2` to get the base filename and
+  the given keys together to one nice file name, then creates these files and
+  calls the function with the file and the content from the given map so that
+  data can be written to the file.
+
+  If a directory is specified, it creates the directory.
+
+  Expects:
+
+  * names_to_content - a map from input name to contents that should go into
+    the corresponding file
+  * filename - the base file name as desired by the user
+  * function - a function that is then called for every file with the
+    associated file content from the map
+
+  ## Examples
+
+      # Just writes the contents to a file
+      Benchee.Utility.FileCreation.each(%{"My Input" => "_awesome html content_"},
+        "my.html",
+        fn(file, content) -> IO.write(file, content) end)
+  """
+  def each(names_to_content, filename, function \\ &default_each/3) do
+    create_directory filename
+    Enum.each names_to_content, fn({input_name, content}) ->
+      input_filename = interleave(filename, input_name)
+      File.open input_filename, [:write, :utf8], fn(file) ->
+        function.(file, content, input_filename)
+      end
+    end
+  end
+
+  defp default_each(file, content, input_filename) do
+    :ok = IO.write file, content
+    IO.puts "Generated #{input_filename}"
+  end
+
+  defp create_directory(filename) do
+    directory = Path.dirname filename
+    File.mkdir_p! directory
+  end
+
+  @doc """
+  Gets file name/path and the input name together.
+
+  Handles the special no_input key to do no work at all.
+
+  ## Examples
+
+      iex> Benchee.Utility.FileCreation.interleave("abc.csv", "hello")
+      "abc_hello.csv"
+
+      iex> Benchee.Utility.FileCreation.interleave("abc.csv", "Big Input")
+      "abc_big_input.csv"
+
+      iex> Benchee.Utility.FileCreation.interleave("bench/abc.csv", "Big Input")
+      "bench/abc_big_input.csv"
+
+      iex> marker = Benchee.Benchmark.no_input
+      iex> Benchee.Utility.FileCreation.interleave("abc.csv", marker)
+      "abc.csv"
+  """
+  def interleave(filename, name) do
+    Path.rootname(filename) <> to_filename(name) <> Path.extname(filename)
+  end
+
+  defp to_filename(name_string) do
+    no_input = Benchee.Benchmark.no_input
+    case name_string do
+      ^no_input -> ""
+      _ ->
+        String.downcase("_" <> String.replace(name_string, " ", "_"))
+    end
+  end
+end
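The `interleave/2` naming scheme splices a downcased, underscored input name between the file root and its extension, and leaves the filename alone for the no-input marker. A self-contained sketch of the same idea (the `Interleave` module and its hard-coded `:__no_input` atom are stand-ins, since the real code asks `Benchee.Benchmark.no_input` for the marker):

```elixir
defmodule Interleave do
  @no_input :__no_input

  # The no-input marker leaves the filename untouched.
  def interleave(filename, @no_input), do: filename
  # Otherwise: root <> "_" <> downcased_underscored_name <> extension.
  def interleave(filename, name) do
    suffix = String.downcase("_" <> String.replace(name, " ", "_"))
    Path.rootname(filename) <> suffix <> Path.extname(filename)
  end
end
```

Using `Path.rootname/1` and `Path.extname/1` keeps directory components intact, which is why `"bench/abc.csv"` becomes `"bench/abc_big_input.csv"` rather than mangling the path.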
added lib/benchee/utility/map_value.ex

@@ -0,0 +1,31 @@
+defmodule Benchee.Utility.MapValues do
+  @moduledoc false
+
+  @doc """
+  Map values of a map keeping the keys intact.
+
+  ## Examples
+
+      iex> Benchee.Utility.MapValues.map_values(%{a: %{b: 2, c: 0}},
+      ...>   fn(value) -> value + 1 end)
+      %{a: %{b: 3, c: 1}}
+
+      iex> Benchee.Utility.MapValues.map_values(%{a: %{b: 2, c: 0}, d: %{e: 2}},
+      ...>   fn(value) -> value + 1 end)
+      %{a: %{b: 3, c: 1}, d: %{e: 3}}
+  """
+  require IEx
+  def map_values(map, function) do
+    map
+    |> Enum.map(fn({key, child_map}) ->
+         {key, do_map_values(child_map, function)}
+       end)
+    |> Map.new
+  end
+
+  defp do_map_values(child_map, function) do
+    child_map
+    |> Enum.map(fn({key, value}) -> {key, function.(value)} end)
+    |> Map.new
+  end
+end
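This two-level traversal is what lets `Statistics.statistics/1` apply `job_statistics/1` to every inner run-time list while preserving the input-name → job-name nesting. A standalone sketch of the same shape (the anonymous `map_values` here is my condensed version, not the module above):

```elixir
# Apply `function` to the innermost values of a two-level map, keeping keys.
map_values = fn map, function ->
  Enum.into(map, %{}, fn {key, child_map} ->
    {key, Enum.into(child_map, %{}, fn {k, v} -> {k, function.(v)} end)}
  end)
end

run_times = %{"Input" => %{"My Job" => [200, 400, 600]}}
averages = map_values.(run_times, fn times -> Enum.sum(times) / length(times) end)
```

The outer and inner keys pass through untouched; only the leaf values are rewritten.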
added lib/benchee/utility/repeat_n.ex

@@ -0,0 +1,17 @@
+defmodule Benchee.Utility.RepeatN do
+  @moduledoc false
+
+  @doc """
+  Calls the given function n times.
+  """
+  def repeat_n(_function, 0) do
+    # noop
+  end
+  def repeat_n(function, 1) do
+    function.()
+  end
+  def repeat_n(function, count) do
+    function.()
+    repeat_n(function, count - 1)
+  end
+end
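The repeat helper keeps the timed hot path to a bare recursive call, avoiding `Enum` or list-comprehension overhead. A quick check that the recursion really fires exactly `n` times, counting side effects through an `Agent` (the `Repeat` module re-states the helper so the sketch is self-contained):

```elixir
defmodule Repeat do
  # Calls the given zero-arity function n times via direct recursion.
  def repeat_n(_function, 0), do: :ok
  def repeat_n(function, n) do
    function.()
    repeat_n(function, n - 1)
  end
end

# Count invocations with a side effect: increment shared state per call.
{:ok, counter} = Agent.start_link(fn -> 0 end)
Repeat.repeat_n(fn -> Agent.update(counter, &(&1 + 1)) end, 5)
calls = Agent.get(counter, & &1)
```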
changed mix.exs

@@ -1,7 +1,7 @@
 defmodule Benchee.Mixfile do
   use Mix.Project
 
-  @version "0.5.0"
+  @version "0.6.0"
 
   def project do
     [
@@ -12,9 +12,9 @@ defmodule Benchee.Mixfile do
       consolidate_protocols: true,
       build_embedded: Mix.env == :prod,
       start_permanent: Mix.env == :prod,
-      deps: deps,
+      deps: deps(),
       docs: [source_ref: @version],
-      package: package,
+      package: package(),
       name: "Benchee",
       source_url: "https://github.com/PragTob/benchee",
       description: """
@@ -33,11 +33,12 @@ defmodule Benchee.Mixfile do
 
   defp deps do
     [
-      {:mix_test_watch, "~> 0.2", only: :dev},
-      {:credo,          "~> 0.4", only: :dev},
-      {:ex_doc,         "~> 0.11", only: :dev},
-      {:earmark,        "~> 1.0.1", only: :dev},
-      {:inch_ex,        "~> 0.5", only: :docs}
+      {:deep_merge,     "~> 0.1"},
+      {:mix_test_watch, "~> 0.2", only: :dev},
+      {:credo,          "~> 0.4", only: :dev},
+      {:ex_doc,         "~> 0.11", only: :dev},
+      {:earmark,        "~> 1.0.1", only: :dev},
+      {:inch_ex,        "~> 0.5", only: :docs}
     ]
   end