
# Performance observations

These results were generated by running `pnpm bench` on a 2020 MacBook Air with 16 GB of RAM.

- D = dot
- P = pug
- S = susskind
- V = vanilla


### 1

--- DATA
D 358715 ops/s
P 8200 ops/s
S 9780165 ops/s
V 26860525 ops/s

--- GRAPH
D --------------------
P --------------------
S #######-------------
V ####################


### 10

--- DATA
D 354720 ops/s
P 8380 ops/s
S 2375825 ops/s
V 14111795 ops/s

--- GRAPH
D --------------------
P --------------------
S ###-----------------
V ####################


### 100

--- DATA
D 320405 ops/s
P 8330 ops/s
S 279930 ops/s
V 2764145 ops/s

--- GRAPH
D ##------------------
P --------------------
S ##------------------
V ####################


### 1000

--- DATA
D 164310 ops/s
P 8080 ops/s
S 28270 ops/s
V 309735 ops/s

--- GRAPH
D ##########----------
P --------------------
S #-------------------
V ####################


### 10000

--- DATA
D 26315 ops/s
P 6270 ops/s
S 2640 ops/s
V 29695 ops/s

--- GRAPH
D ###################-
P ####----------------
S #-------------------
V ####################


### 100000

--- DATA
D 2660 ops/s
P 1130 ops/s
S 230 ops/s
V 2730 ops/s

--- GRAPH
D ####################
P #########-----------
S #-------------------
V ####################

## The problem

As you can see from the results, Susskind is the fastest library when there are fewer than roughly 80 items to render.

However, its performance falls off sharply as the number of items grows. With 10000 items it is even slower than Pug.

I suspect the main reason is function call overhead. All the other libraries are essentially RegExp matchers. Matching has its own overhead, as the 1- and 10-item results show, where Susskind is the fastest library. Once that tipping point is crossed, however, the matching overhead appears to be much smaller than the cost of composing the output through function calls.
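To make the suspicion concrete, here is a minimal sketch of the two styles. The helpers are made up for illustration and are not the actual Susskind, dot, or Pug APIs:

```ts
type Item = { name: string };

// Composition via functions: every item passes through several small calls,
// so the call overhead grows with the number of items.
const li = (text: string): string => `<li>${text}</li>`;
const ul = (children: string[]): string => `<ul>${children.join("")}</ul>`;
const renderComposed = (items: Item[]): string =>
  ul(items.map((item) => li(item.name)));

// Vanilla: a single function, a single loop, plain string concatenation.
// Nothing extra is called per item, which is why it stays fast at 10000+ items.
const renderVanilla = (items: Item[]): string => {
  let html = "<ul>";
  for (const item of items) html += `<li>${item.name}</li>`;
  return html + "</ul>";
};
```

With a handful of items the difference between the two is negligible; with tens of thousands, the per-item calls in `renderComposed` start to dominate the runtime.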

## The solution

You might have noticed that vanilla is always the fastest solution.

It is extremely fast with a small number of items and still a little faster than dot with a large number of items. It keeps its lead over the other solutions even when the number of items exceeds one million.

One solution would be compile-time macros similar to those found in Rust. However, the TypeScript team has no intention of adding them. There are hacks for intercepting the TS compiler, but that is not a good solution.
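For reference, such a compiler hack usually takes the form of a custom transformer plugged into the compiler's emit pipeline. A minimal sketch, assuming a made-up `staticHtml` marker function:

```ts
import * as ts from "typescript";

// Hypothetical transformer: unwraps calls like staticHtml("<p>hi</p>") into the
// plain string literal, removing the runtime call entirely.
const inlineStaticHtml: ts.TransformerFactory<ts.SourceFile> = (context) => {
  const visit: ts.Visitor = (node) => {
    if (
      ts.isCallExpression(node) &&
      ts.isIdentifier(node.expression) &&
      node.expression.text === "staticHtml" &&
      node.arguments.length === 1 &&
      ts.isStringLiteral(node.arguments[0])
    ) {
      return node.arguments[0]; // keep only the already-static argument
    }
    return ts.visitEachChild(node, visit, context);
  };
  return (sourceFile) => ts.visitEachChild(sourceFile, visit, context);
};
```

Plain `tsc` offers no official way to load a transformer like this; you have to go through the programmatic compiler API or a patched compiler, which is exactly why it feels like a hack.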

Another solution would be a partial evaluator like Prepack. However, it is a dead project, so it is not a good solution either.

The last option would be something like a Babel plugin. However, as of writing, I have no idea how to implement something like that.
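For a rough idea of the shape such a plugin could take, here is a sketch that inlines calls to a made-up `el(tag, ...children)` helper into plain string concatenation. It is not the Susskind API, only an illustration of the rewrite:

```ts
import type { PluginObj } from "@babel/core";
import * as t from "@babel/types";

// Hypothetical sketch: rewrite el("li", name) into "<li>" + name + "</li>".
export default function inlineEl(): PluginObj {
  return {
    name: "inline-el",
    visitor: {
      CallExpression(path) {
        const { callee, arguments: args } = path.node;
        if (!t.isIdentifier(callee, { name: "el" })) return;
        const [tag, ...children] = args;
        if (!t.isStringLiteral(tag)) return; // bail out on dynamic tags
        if (!children.every((c) => t.isExpression(c))) return; // no spreads etc.

        const parts: t.Expression[] = [
          t.stringLiteral(`<${tag.value}>`),
          ...(children as t.Expression[]),
          t.stringLiteral(`</${tag.value}>`),
        ];
        path.replaceWith(
          parts.reduce((acc, part) => t.binaryExpression("+", acc, part))
        );
      },
    },
  };
}
```

Because Babel re-traverses replaced nodes, nested `el()` calls inside the children would get inlined as well, so a fully static template collapses into a single concatenation expression.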

Potentially, if I were able to get rid of all unnecessary function calls, I could end up with near-vanilla code, meaning near-vanilla performance, while retaining modularity and lazy runtime evaluation where needed.