update docs
Reviewed By: rbgirshick

Differential Revision: D21658811

fbshipit-source-id: 18688a2bcefc10a188ef2e1ebd066381a68fbb7f
ppwwyyxx authored and facebook-github-bot committed May 20, 2020
1 parent c213a6b commit 2e4a59d
Showing 9 changed files with 36 additions and 28 deletions.
4 changes: 3 additions & 1 deletion .flake8
@@ -6,4 +6,6 @@ ignore = W503, E203, E221, C901, C408, E741
max-line-length = 100
max-complexity = 18
select = B,C,E,F,W,T4,B9
-exclude = build,__init__.py
+exclude = build
+per-file-ignores =
+**/__init__.py:F401,F403
25 changes: 18 additions & 7 deletions INSTALL.md
@@ -35,28 +35,27 @@ old build first. You often need to rebuild detectron2 after reinstalling PyTorch

### Install Pre-Built Detectron2 (Linux only)
```
-# for CUDA 10.1:
-python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html
+# for CUDA 10.2:
+python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/index.html
```
-You can replace cu101 with "cu{100,92}" or "cpu".
+For other cuda versions, replace cu102 with "cu{101,92}" or "cpu".
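For reference, here is a small sketch (not part of the official instructions) that derives the matching wheel index from the local PyTorch build; `torch.version.cuda` is `None` for CPU-only builds:

```python
import torch

# CUDA version the installed PyTorch was built with, e.g. "10.2"; None for CPU-only builds
cuda = torch.version.cuda
tag = "cpu" if cuda is None else "cu" + cuda.replace(".", "")
print(
    "python -m pip install detectron2 -f "
    f"https://dl.fbaipublicfiles.com/detectron2/wheels/{tag}/index.html"
)
```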

Note that:
1. Such installation has to be used with certain version of official PyTorch release.
See [releases](https://github.com/facebookresearch/detectron2/releases) for requirements.
It will not work with a different version of PyTorch or a non-official build of PyTorch.
The CUDA version used by PyTorch and detectron2 has to match as well.
2. Such installation is out-of-date w.r.t. master branch of detectron2. It may not be
compatible with the master branch of a research project that uses detectron2 (e.g. those in
[projects](projects) or [meshrcnn](https://github.com/facebookresearch/meshrcnn/)).
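One quick way to check that the installed PyTorch and detectron2 agree (a hedged sketch; `collect_env_info` is the environment report shipped in `detectron2.utils.collect_env`):

```python
import torch
from detectron2.utils.collect_env import collect_env_info

# both must match what the pre-built wheel was compiled against
print(torch.__version__, torch.version.cuda)
# full environment report, also useful when filing issues
print(collect_env_info())
```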

### Common Installation Issues

If you met issues using the pre-built detectron2, please uninstall it and try building it from source.

Click each issue for its solutions:

<details>
<summary>
-Undefined torch/aten/caffe2 symbols, or segmentation fault immediately when running the library.
+Undefined torch/aten/caffe2 symbols; missing torch dynamic libraries; segmentation fault immediately when using detectron2.
</summary>
<br/>

@@ -144,7 +143,7 @@ Two possibilities:

<details>
<summary>
-Undefined CUDA symbols; cannot open libcudart.so; other nvcc failures.
+Undefined CUDA symbols; cannot open libcudart.so
</summary>
<br/>
The version of NVCC you use to build detectron2 or torchvision does
@@ -161,6 +160,18 @@ to match your local CUDA installation, or install a different version of CUDA to
</details>


+<details>
+<summary>
+C++ compilation errors from NVCC
+</summary>
+<br/>
+1. NVCC version has to match the CUDA version of your PyTorch.
+
+2. NVCC has compatibility issues with certain versions of gcc. You may need a different
+version of gcc. The version used by PyTorch can be found by `print(torch.__config__.show())`.
+</details>
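For convenience, the check mentioned above as a runnable snippet; compare the reported compiler and CUDA versions with your local `gcc --version` and `nvcc --version`:

```python
import torch

# prints, among other things, the GCC and CUDA versions PyTorch was compiled with
print(torch.__config__.show())
```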


<details>
<summary>
"ImportError: cannot import name '_C'".
2 changes: 1 addition & 1 deletion detectron2/config/defaults.py
@@ -28,7 +28,7 @@
_C.MODEL.DEVICE = "cuda"
_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN"

-# Path (possibly with schema like catalog:https:// or detectron2:https://) to a checkpoint file
+# Path (a file path, or URL like detectron2:https://.., https:https://..) to a checkpoint file
# to be loaded to the model. You can find available models in the model zoo.
_C.MODEL.WEIGHTS = ""

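As an aside (an editor's sketch, not part of this diff), one way to fill `MODEL.WEIGHTS` is through the `detectron2.model_zoo` helpers; the config path below is only an example:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
# MODEL.WEIGHTS accepts a local file path or a URL (detectron2:https://, https:https://, ...)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
```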
9 changes: 5 additions & 4 deletions detectron2/data/transforms/transform_gen.py
@@ -446,7 +446,7 @@ def get_transform(self, img):

class RandomSaturation(TransformGen):
"""
-Randomly transforms image saturation.
+Randomly transforms saturation of an RGB image.
Saturation intensity is uniformly sampled in (intensity_min, intensity_max).
- intensity < 1 will reduce saturation (make the image more grayscale)
@@ -466,15 +466,16 @@ def __init__(self, intensity_min, intensity_max):
self._init(locals())

def get_transform(self, img):
assert img.shape[-1] == 3, "Saturation only works on RGB images"
assert img.shape[-1] == 3, "RandomSaturation only works on RGB images"
w = np.random.uniform(self.intensity_min, self.intensity_max)
grayscale = img.dot([0.299, 0.587, 0.114])[:, :, np.newaxis]
return BlendTransform(src_image=grayscale, src_weight=1 - w, dst_weight=w)


class RandomLighting(TransformGen):
"""
-Randomly transforms image color using fixed PCA over ImageNet.
+The "lighting" augmentation described in AlexNet, using fixed PCA over ImageNet.
+Inputs are assumed to be RGB images.
The degree of color jittering is randomly sampled via a normal distribution,
with standard deviation given by the scale parameter.
@@ -493,7 +494,7 @@ def __init__(self, scale):
self.eigen_vals = np.array([0.2175, 0.0188, 0.0045])

def get_transform(self, img):
assert img.shape[-1] == 3, "Saturation only works on RGB images"
assert img.shape[-1] == 3, "RandomLighting only works on RGB images"
weights = np.random.normal(scale=self.scale, size=3)
return BlendTransform(
src_image=self.eigen_vecs.dot(weights * self.eigen_vals), src_weight=1.0, dst_weight=1.0
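As a usage note (an editor's sketch, not part of this diff), both generators can be applied to an HWC RGB array through `get_transform(...).apply_image(...)`; the sizes and parameters below are illustrative:

```python
import numpy as np
from detectron2.data.transforms import RandomLighting, RandomSaturation

rgb = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # HxWx3, RGB channel order

saturated = RandomSaturation(intensity_min=0.8, intensity_max=1.2).get_transform(rgb).apply_image(rgb)
jittered = RandomLighting(scale=0.1).get_transform(rgb).apply_image(rgb)
```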
8 changes: 1 addition & 7 deletions detectron2/modeling/__init__.py
@@ -1,6 +1,4 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import torch
-
from detectron2.layers import ShapeSpec

from .anchor_generator import build_anchor_generator, ANCHOR_GENERATOR_REGISTRY
@@ -48,9 +46,5 @@
)
from .test_time_augmentation import DatasetMapperTTA, GeneralizedRCNNWithTTA

_EXCLUDE = {"torch", "ShapeSpec"}
_EXCLUDE = {"ShapeSpec"}
__all__ = [k for k in globals().keys() if k not in _EXCLUDE and not k.startswith("_")]
-
-assert (
-torch.Tensor([1]) == torch.Tensor([2])
-).dtype == torch.bool, "Your Pytorch is too old. Please update to contain https://github.com/pytorch/pytorch/pull/21113"
8 changes: 4 additions & 4 deletions docs/tutorials/models.md
@@ -14,7 +14,7 @@ and how to use the `model` object.
### Load/Save a Checkpoint
```python
from detectron2.checkpoint import DetectionCheckpointer
-DetectionCheckpointer(model).load(file_path) # load a file to model
+DetectionCheckpointer(model).load(file_path_or_url) # load a file, usually from cfg.MODEL.WEIGHTS

checkpointer = DetectionCheckpointer(model, save_dir="output")
checkpointer.save("model_999") # save to output/model_999.pth
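# (editor's sketch, not in the original doc) to resume training, assuming the
# resume_or_load helper: load the latest checkpoint in save_dir if one exists,
# otherwise fall back to cfg.MODEL.WEIGHTS
checkpointer.resume_or_load(cfg.MODEL.WEIGHTS, resume=True)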
@@ -76,9 +76,9 @@ The dict may contain the following keys:
+ "proposal_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing P proposal boxes.
+ "objectness_logits": `Tensor`, a vector of P scores, one for each proposal.
* "height", "width": the **desired** output height and width, which is not necessarily the same
-as the height or width of the `image` input field.
-For example, the `image` input field might be a resized image,
-but you may want the outputs to be in **original** resolution.
+as the height or width of the `image` field.
+For example, the `image` field contains the resized image, if resize is used as a preprocessing step.
+But you may want the outputs to be in **original** resolution.

If provided, the model will produce output in this resolution,
rather than in the resolution of the `image` as input into the model. This is more efficient and accurate.
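For illustration (a hedged sketch, with `model`, `image_chw`, `orig_h` and `orig_w` assumed to exist), an inference call that requests outputs in the original resolution:

```python
import torch

# image_chw: the preprocessed CHW tensor (e.g. resized); (orig_h, orig_w): the original image size
inputs = [{"image": image_chw, "height": orig_h, "width": orig_w}]
with torch.no_grad():
    outputs = model(inputs)  # outputs[0]["instances"] is reported in (orig_h, orig_w) resolution
```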
1 change: 1 addition & 0 deletions docs/tutorials/write-models.md
@@ -13,6 +13,7 @@ from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec
@BACKBONE_REGISTRY.register()
class ToyBackBone(Backbone):
def __init__(self, cfg, input_shape):
+super().__init__()
# create your own backbone
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=16, padding=3)

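As a hedged follow-up sketch (not part of this diff), once `forward()` and `output_shape()` are implemented, the registered backbone can be selected purely from the config:

```python
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.MODEL.BACKBONE.NAME = "ToyBackBone"  # the name registered above
model = build_model(cfg)  # build_model picks the backbone up from the registry
```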
2 changes: 1 addition & 1 deletion projects/PointRend/README.md
@@ -11,7 +11,7 @@ Alexander Kirillov, Yuxin Wu, Kaiming He, Ross Girshick
In this repository, we release code for PointRend in Detectron2. PointRend can be flexibly applied to both instance and semantic segmentation tasks by building on top of existing state-of-the-art models.

## Installation
-Install Detectron 2 following [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md). You are ready to go!
+Install Detectron2 following [the instructions](https://detectron2.readthedocs.io/tutorials/install.html). You are ready to go!

## Quick start and visualization

5 changes: 2 additions & 3 deletions tools/train_net.py
@@ -44,9 +44,8 @@ class Trainer(DefaultTrainer):
"""
We use the "DefaultTrainer" which contains pre-defined default logic for
standard training workflow. They may not work for you, especially if you
-are working on a new research project. In that case you can use the cleaner
-"SimpleTrainer", or write your own training loop. You can use
-"tools/plain_train_net.py" as an example.
+are working on a new research project. In that case you can write your
+own training loop. You can use "tools/plain_train_net.py" as an example.
"""

@classmethod
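For readers who do write their own loop, a minimal sketch in the spirit of tools/plain_train_net.py (names and schedule are illustrative, not the actual file):

```python
from detectron2.data import build_detection_train_loader
from detectron2.modeling import build_model
from detectron2.solver import build_optimizer


def minimal_train(cfg, max_iter=1000):
    model = build_model(cfg)
    model.train()
    optimizer = build_optimizer(cfg, model)
    data_loader = build_detection_train_loader(cfg)
    for _, data in zip(range(max_iter), data_loader):
        loss_dict = model(data)  # in training mode, detectron2 models return a dict of losses
        losses = sum(loss_dict.values())
        optimizer.zero_grad()
        losses.backward()
        optimizer.step()
```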
