Merge branch 'master' into Connor1996-patch-1
Hoverbear committed Dec 20, 2018
2 parents fbd7230 + d6939c1 commit 872f84a
Showing 10 changed files with 347 additions and 201 deletions.
2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,2 @@
node_modules/
package-lock.json
7 changes: 7 additions & 0 deletions .markdownlint.json
@@ -0,0 +1,7 @@
{
"default": true,
"MD013": {
"code_blocks": false
},
"MD034": false
}
6 changes: 6 additions & 0 deletions .travis.yml
@@ -0,0 +1,6 @@
language: node_js
node_js:
- 10
cache: npm
script:
- npm run lint
41 changes: 30 additions & 11 deletions README.md
@@ -1,28 +1,47 @@
# TiKV RFCs

Many changes, including bug fixes and documentation improvements, can be
implemented and reviewed via the normal GitHub pull request workflow.

Some changes, though, are "substantial", and we ask that these be put through a
bit of a design process and produce a consensus among the TiKV community.

The "RFC" (request for comments) process is intended to provide a consistent
and controlled path for new features to enter the project, so that all
stakeholders can be confident about the direction the project is evolving in.

## How to submit an RFC

1. Copy `template.md` into `text/YYYY-MM-DD-my-feature.md`.
2. Write the document and fill in the blanks.
3. Submit a pull request.

## Timeline of an RFC

1. An RFC is submitted as a PR.
2. Discussion takes place, and the text is revised in response.
3. The PR is merged or closed when at least two project maintainers reach
   consensus.

## Style of an RFC

We follow the lint rules listed in
[markdownlint](https://github.com/DavidAnson/markdownlint/blob/master/doc/Rules.md).

Run the lints (you must have [Node.js](https://nodejs.org) installed):

```bash
# Install linters first with: npm install
npm run lint
```

## License

This content is licensed under the Apache License, Version 2.0,
([LICENSE](LICENSE) or https://www.apache.org/licenses/LICENSE-2.0).

## Contributions

Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall
be dual licensed as above, without any additional terms or conditions.
10 changes: 10 additions & 0 deletions package.json
@@ -0,0 +1,10 @@
{
"name": "rfcs",
"version": "0.0.0",
"devDependencies": {
"markdownlint-cli": "^0.13.0"
},
"scripts": {
"lint": "markdownlint text/*.md"
}
}
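Based on the `scripts` entry above, the lints can be exercised locally as sketched below. This is a usage sketch of the repository's own config, assuming Node.js and npm are installed; exact linter output depends on your environment.

```shell
# Install devDependencies (markdownlint-cli) as declared in package.json
npm install

# Run the lint script defined above, i.e. `markdownlint text/*.md`
npm run lint
```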
22 changes: 13 additions & 9 deletions template.md
@@ -1,29 +1,33 @@
# Title

## Summary

One para explanation of the proposal.

## Motivation

Why are we doing this? What use cases does it support? What is the expected
outcome?

## Detailed design

This is the bulk of the RFC. Explain the design in enough detail that:

- It is reasonably clear how the feature would be implemented.
- Corner cases are dissected by example.
- It is clear how the feature is used.

## Drawbacks

Why should we not do this?

## Alternatives

- Why is this design the best in the space of possible designs?
- What other designs have been considered and what is the rationale for not choosing them?
- What other designs have been considered and what is the rationale for not
choosing them?
- What is the impact of not doing this?

## Unresolved questions

What parts of the design are still to be determined?
113 changes: 61 additions & 52 deletions text/2017-12-22-read-pool.md
@@ -1,28 +1,32 @@
# Read Pool

## Summary

This RFC proposes a new thread pool that supports async operations, which is
called `ReadPool`. It is designed to be specifically used for all sorts of read
operations like Kv Get, Kv Scan and Coprocessor Read to improve performance.

## Motivation

Currently, all KV operations that stay inside `Storage` are executed on the
same thread pool. This leads to bad operation isolation, i.e. heavy writes will
cause slow reads. In fact, these read operations can be executed on other
threads so that they are not blocked. For Coprocessor operations, we have an
end-point worker thread to handle async snapshots. This is a bottleneck when
there are heavy coprocessor requests.

By introducing a standalone thread pool that supports async operations (called
ReadPool in this RFC), we can fix both problems. For KV operations, read and
write operations will be executed on different thread pools, so that they won't
affect each other. For Coprocessor operations, since ReadPool supports async
operations, the end-point worker is no longer necessary and the bottleneck no
longer exists.

In this RFC, there will be a ReadPool for KV operations and another ReadPool
for Coprocessor operations, so that they won't block each other. In the future,
it may be possible to merge them together.

## Detailed design

The ReadPool provides these features:

(items 1 and 2 are collapsed in the diff view)

3. Support max task threshold as a load control mechanism.

The ReadPool can be implemented via two levels. The lower level is a new
facility called `util::FuturePool`, which is a simple encapsulation over
[CpuPool](https://docs.rs/futures-cpupool/) that supports feature 2. The higher
level is `server::ReadPool`, which assembles multiple `FuturePool`s and
supports features 1 and 3. In this way, `FuturePool` becomes a common facility
that can be reused in the future to run other kinds of async operations that
need contexts, not just limited to the scenario listed in this RFC.

### FuturePool

To support periodic ticking, this RFC defines:

```rust
pub trait Context: fmt::Debug + Send {
    // ...trait methods collapsed in the diff view...
}
```

The `FuturePool` user can provide a customized `impl Context` so that some code
can be executed periodically within the context of the thread pool's thread.

The `FuturePool` is defined as follows:

```rust
pub struct FuturePool<T: Context + 'static> {
    // ...fields collapsed in the diff view...
}
```

The logic for deciding whether to call `on_tick` is as follows: when each task
finishes running, we check how much time has elapsed. If the elapsed time since
the last tick is longer than our tick interval, we call `on_tick`. To do this
in a clean way, we can wrap the logic into a `ContextDelegator`:

```rust
struct ContextDelegator<T: Context> {
    // ...fields collapsed in the diff view...
}

impl<T: Context> ContextDelegator<T> {
    // ...methods collapsed in the diff view...
}
```

`FuturePool` users should pass a `Future` to the `FuturePool` to execute, just
like `CpuPool`. However, to obtain access to the `Context` inside the `Future`,
the provided `Future` should be wrapped in a closure which provides access to
the `Context` via a parameter. In the future, the parameter may live across
multiple `Future`s, which may run on different threads and different
`Context`s. Thus, the parameter is a `Context` accessor instead of a specific
`Context`.

As a result, this RFC further introduces the following struct, which provides
access to the current `Context` based on the running thread:

```rust
pub struct ContextDelegators<T: Context> {
    // ...fields collapsed in the diff view...
}

impl<T: Context> ContextDelegators<T> {
    // ...methods collapsed in the diff view...
}
```

The `FuturePool` provides a `spawn` interface to execute a `Future` while
providing `ContextDelegators`:

```rust
impl<T: Context + 'static> FuturePool<T> {
    // ...`spawn` body collapsed in the diff view...
}
```

### ReadPool

`ReadPool` is implemented over `FuturePool` as follows:

```rust
pub struct ReadPool<T: Context + 'static> {
    // ...fields collapsed in the diff view...
}
```

It also provides an interface similar to `FuturePool::spawn`, which specifies
the priority and returns an error when the `FuturePool` is full:

```rust
pub enum Priority {
    // ...variants collapsed in the diff view...
}

impl<T: futurepool::Context + 'static> ReadPool<T> {
    // ...methods collapsed in the diff view...
}
```

## Drawbacks

The `on_tick` implementation in `FuturePool` is not perfect. It is driven by
task completion, which means it will not be executed when there is no task.
This is acceptable since we are going to use `on_tick` to report metrics.

## Alternatives

We can implement our own future pool instead of utilizing `CpuPool`. In this
way, we could have a true `on_tick`. However, this is complex, and a true
`on_tick` is not a feature we need.

We can implement our own async model instead of utilizing `Future`. However,
there are no benefits compared to the current solution.

## Unresolved questions

None.