Merge pull request #5537 from shelhamer/docs-grooming
[docs] groom Caffe site
shelhamer committed Apr 14, 2017
2 parents 2e33792 + 8b8f2dd commit 946c9b8
Showing 14 changed files with 50 additions and 126 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTORS.md
@@ -1,6 +1,6 @@
# Contributors

-Caffe is developed by a core set of BVLC members and the open-source community.
+Caffe is developed by a core set of BAIR members and the open-source community.

We thank all of our [contributors](https://github.com/BVLC/caffe/graphs/contributors)!

6 changes: 3 additions & 3 deletions README.md
@@ -4,13 +4,13 @@
[![License](https://img.shields.io/badge/license-BSD-blue.svg)](LICENSE)

Caffe is a deep learning framework made with expression, speed, and modularity in mind.
-It is developed by the Berkeley Vision and Learning Center ([BVLC](http:https://bvlc.eecs.berkeley.edu)) and community contributors.
+It is developed by Berkeley AI Research ([BAIR](http:https://bair.berkeley.edu))/The Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the [project site](http:https://caffe.berkeleyvision.org) for all the details like

- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)
- [Tutorial Documentation](http:https://caffe.berkeleyvision.org/tutorial/)
-- [BVLC reference models](http:https://caffe.berkeleyvision.org/model_zoo.html) and the [community model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)
+- [BAIR reference models](http:https://caffe.berkeleyvision.org/model_zoo.html) and the [community model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)
- [Installation instructions](http:https://caffe.berkeleyvision.org/installation.html)

and step-by-step examples.
@@ -25,7 +25,7 @@ Happy brewing!
## License and Citation

Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).
-The BVLC reference models are released for unrestricted use.
+The BAIR/BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

2 changes: 1 addition & 1 deletion docs/_layouts/default.html
@@ -36,7 +36,7 @@
<header>
<h1 class="header"><a href="/">Caffe</a></h1>
<p class="header">
-Deep learning framework by the <a class="header name" href="http:https://bvlc.eecs.berkeley.edu/">BVLC</a>
+Deep learning framework by <a class="header name" href="http:https://bair.berkeley.edu/">BAIR</a>
</p>
<p class="header">
Created by
4 changes: 2 additions & 2 deletions docs/development.md
@@ -4,7 +4,7 @@ title: Developing and Contributing
# Development and Contributing

Caffe is developed with active participation of the community.<br>
-The [BVLC](http:https://bvlc.eecs.berkeley.edu/) brewers welcome all contributions!
+The [BAIR](http:https://bair.berkeley.edu/)/BVLC brewers welcome all contributions!

The exact details of contributions are recorded by versioning and cited in our [acknowledgements](http:https://caffe.berkeleyvision.org/#acknowledgements).
This method is impartial and always up-to-date.
@@ -37,7 +37,7 @@ We absolutely appreciate any contribution to this effort!

The `master` branch receives all new development including community contributions.
We try to keep it in a reliable state, but it is the bleeding edge, and things do get broken every now and then.
-BVLC maintainers will periodically make releases by marking stable checkpoints as tags and maintenance branches. [Past releases](https://github.com/BVLC/caffe/releases) are catalogued online.
+BAIR maintainers will periodically make releases by marking stable checkpoints as tags and maintenance branches. [Past releases](https://github.com/BVLC/caffe/releases) are catalogued online.

#### Issues & Pull Request Protocol

47 changes: 21 additions & 26 deletions docs/index.md
@@ -5,7 +5,7 @@ title: Deep Learning Framework
# Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind.
-It is developed by the Berkeley Vision and Learning Center ([BVLC](http:https://bvlc.eecs.berkeley.edu)) and by community contributors.
+It is developed by Berkeley AI Research ([BAIR](http:https://bair.berkeley.edu)) and by community contributors.
[Yangqing Jia](http:https://daggerfs.com) created the project during his PhD at UC Berkeley.
Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).

@@ -23,40 +23,34 @@ Thanks to these contributors the framework tracks the state-of-the-art in both c

**Speed** makes Caffe perfect for research experiments and industry deployment.
Caffe can process **over 60M images per day** with a single NVIDIA K40 GPU\*.
-That's 1 ms/image for inference and 4 ms/image for learning.
-We believe that Caffe is the fastest convnet implementation available.
+That's 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still.
+We believe that Caffe is among the fastest convnet implementations available.

**Community**: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.
Join our community of brewers on the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) and [Github](https://github.com/BVLC/caffe/).

<p class="footnote" markdown="1">
-\* With the ILSVRC2012-winning [SuperVision](http:https://www.image-net.org/challenges/LSVRC/2012/supervision.pdf) model and caching IO.
-Consult performance [details](/performance_hardware.html).
+\* With the ILSVRC2012-winning [SuperVision](http:https://www.image-net.org/challenges/LSVRC/2012/supervision.pdf) model and prefetching IO.
</p>

## Documentation

-- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)<br>
-Tutorial presentation.
+- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p) and [Caffe in a Day](https://docs.google.com/presentation/d/1HxGdeq8MPktHaPb-rlmYYQ723iWzq9ur6Gjo71YiG0Y/edit#slide=id.gc2fcdcce7_216_0)<br>
+Tutorial presentation of the framework and a full-day crash course.
- [Tutorial Documentation](/tutorial)<br>
Practical guide and framework reference.
- [arXiv / ACM MM '14 paper](http:https://arxiv.org/abs/1408.5093)<br>
A 4-page report for the ACM Multimedia Open Source competition (arXiv:1408.5093v1).
- [Installation instructions](/installation.html)<br>
Tested on Ubuntu, Red Hat, OS X.
* [Model Zoo](/model_zoo.html)<br>
-BVLC suggests a standard distribution format for Caffe models, and provides trained models.
+BAIR suggests a standard distribution format for Caffe models, and provides trained models.
* [Developing & Contributing](/development.html)<br>
Guidelines for development and contributing to Caffe.
* [API Documentation](/doxygen/annotated.html)<br>
Developer documentation automagically generated from code comments.

-### Examples
-
-{% assign examples = site.pages | where:'category','example' | sort: 'priority' %}
-{% for page in examples %}
-- <div><a href="{{page.url}}">{{page.title}}</a><br>{{page.description}}</div>
-{% endfor %}
+* [Benchmarking](https://docs.google.com/spreadsheets/d/1Yp4rqHpT7mKxOPbpzYeUfEFLnELDAgxSSBQKp5uKDGQ/edit#gid=0)<br>
+Comparison of inference and learning for different networks and GPUs.

### Notebook Examples

@@ -65,6 +59,13 @@ Developer documentation automagically generated from code comments.
- <div><a href="http:https://nbviewer.ipython.org/github/BVLC/caffe/blob/master/{{page.original_path}}">{{page.title}}</a><br>{{page.description}}</div>
{% endfor %}

+### Command Line Examples
+
+{% assign examples = site.pages | where:'category','example' | sort: 'priority' %}
+{% for page in examples %}
+- <div><a href="{{page.url}}">{{page.title}}</a><br>{{page.description}}</div>
+{% endfor %}

## Citing Caffe

Please cite Caffe in your publications if it helps your research:
@@ -76,31 +77,25 @@ Please cite Caffe in your publications if it helps your research:
Year = {2014}
}

-If you do publish a paper where Caffe helped your research, we encourage you to update the [publications wiki](https://github.com/BVLC/caffe/wiki/Publications).
-Citations are also tracked automatically by [Google Scholar](http:https://scholar.google.com/scholar?oi=bibs&hl=en&cites=17333247995453974016).
+If you do publish a paper where Caffe helped your research, we encourage you to cite the framework for tracking by [Google Scholar](https://scholar.google.com/citations?view_op=view_citation&hl=en&citation_for_view=-ltRSM0AAAAJ:u5HHmVD_uO8C).

## Contacting Us

Join the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) to ask questions and discuss methods and models. This is where we talk about usage, installation, and applications.

Framework development discussions and thorough bug reports are collected on [Issues](https://github.com/BVLC/caffe/issues).

Contact [caffe-dev](mailto:[email protected]) if you have a confidential proposal for the framework *and the ability to act on it*.
Requests for features, explanations, or personal help will be ignored; post to [caffe-users](https://groups.google.com/forum/#!forum/caffe-users) instead.

The core Caffe developers offer [consulting services](mailto:[email protected]) for appropriate projects.

## Acknowledgements

-The BVLC Caffe developers would like to thank NVIDIA for GPU donation, A9 and Amazon Web Services for a research grant in support of Caffe development and reproducible research in deep learning, and BVLC PI [Trevor Darrell](http:https://www.eecs.berkeley.edu/~trevor/) for guidance.
+The BAIR Caffe developers would like to thank NVIDIA for GPU donation, A9 and Amazon Web Services for a research grant in support of Caffe development and reproducible research in deep learning, and BAIR PI [Trevor Darrell](http:https://www.eecs.berkeley.edu/~trevor/) for guidance.

-The BVLC members who have contributed to Caffe are (alphabetical by first name):
-[Eric Tzeng](https://github.com/erictzeng), [Evan Shelhamer](http:https://imaginarynumber.net/), [Jeff Donahue](http:https://jeffdonahue.com/), [Jon Long](https://github.com/longjon), [Ross Girshick](http:https://www.cs.berkeley.edu/~rbg/), [Sergey Karayev](http:https://sergeykarayev.com/), [Sergio Guadarrama](http:https://www.eecs.berkeley.edu/~sguada/), and [Yangqing Jia](http:https://daggerfs.com/).
+The BAIR members who have contributed to Caffe are (alphabetical by first name):
+[Carl Doersch](http:https://www.carldoersch.com/), [Eric Tzeng](https://github.com/erictzeng), [Evan Shelhamer](http:https://imaginarynumber.net/), [Jeff Donahue](http:https://jeffdonahue.com/), [Jon Long](https://github.com/longjon), [Philipp Krähenbühl](http:https://www.philkr.net/), [Ronghang Hu](http:https://ronghanghu.com/), [Ross Girshick](http:https://www.cs.berkeley.edu/~rbg/), [Sergey Karayev](http:https://sergeykarayev.com/), [Sergio Guadarrama](http:https://www.eecs.berkeley.edu/~sguada/), [Takuya Narihira](https://github.com/tnarihi), and [Yangqing Jia](http:https://daggerfs.com/).

The open-source community plays an important and growing role in Caffe's development.
Check out the Github [project pulse](https://github.com/BVLC/caffe/pulse) for recent activity and the [contributors](https://github.com/BVLC/caffe/graphs/contributors) for the full list.

We sincerely appreciate your interest and contributions!
If you'd like to contribute, please read the [developing & contributing](development.html) guide.

-Yangqing would like to give a personal thanks to the NVIDIA Academic program for providing GPUs, [Oriol Vinyals](http:https://www1.icsi.berkeley.edu/~vinyals/) for discussions along the journey, and BVLC PI [Trevor Darrell](http:https://www.eecs.berkeley.edu/~trevor/) for advice.
+Yangqing would like to give a personal thanks to the NVIDIA Academic program for providing GPUs, [Oriol Vinyals](http:https://www1.icsi.berkeley.edu/~vinyals/) for discussions along the journey, and BAIR PI [Trevor Darrell](http:https://www.eecs.berkeley.edu/~trevor/) for advice.
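
A quick sanity check on the speed claim in the groomed index page: at 1 ms per image, a full day of inference covers 86.4M images, so "over 60M images per day" works out to roughly 1.44 ms of wall time per image. A minimal sketch of the arithmetic, in Python:

```python
# Back-of-envelope check of the "over 60M images per day" claim (K40 GPU).
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s

def images_per_day(ms_per_image):
    """Images processed in one day at a fixed per-image latency."""
    return SECONDS_PER_DAY * 1000.0 / ms_per_image

print(f"{images_per_day(1.0):,.0f}")  # 86,400,000 at 1 ms/image (inference)
print(f"{images_per_day(4.0):,.0f}")  # 21,600,000 at 4 ms/image (learning)
# The quoted 60M/day falls between the two: roughly 1.44 ms per image end to end.
```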
24 changes: 13 additions & 11 deletions docs/model_zoo.md
@@ -3,7 +3,7 @@ title: Model Zoo
---
# Caffe Model Zoo

-Lots of researchers and engineers have made Caffe models for different tasks with all kinds of architectures and data.
+Lots of researchers and engineers have made Caffe models for different tasks with all kinds of architectures and data: check out the [model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)!
These models are learned and applied for problems ranging from simple regression, to large-scale visual classification, to Siamese networks for image similarity, to speech and robotics applications.

To help share these models, we introduce the model zoo framework:
@@ -14,17 +14,17 @@ To help share these models, we introduce the model zoo framework:

## Where to get trained models

-First of all, we bundle BVLC-trained models for unrestricted, out of the box use.
+First of all, we bundle BAIR-trained models for unrestricted, out of the box use.
<br>
-See the [BVLC model license](#bvlc-model-license) for details.
+See the [BAIR model license](#bair-model-license) for details.
Each one of these can be downloaded by running `scripts/download_model_binary.py <dirname>` where `<dirname>` is specified below:

-- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version as described in [ImageNet classification with deep convolutional neural networks](http:https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Jeff Donahue @jeffdonahue)
-- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in [ImageNet classification with deep convolutional neural networks](http:https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Evan Shelhamer @shelhamer)
-- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn) as described by Girshick et al. in CVPR 2014. (Trained by Ross Girshick @rbgirshick)
-- **BVLC GoogLeNet** in `models/bvlc_googlenet`: GoogLeNet trained on ILSVRC 2012, almost exactly as described in [Going Deeper with Convolutions](http:https://arxiv.org/abs/1409.4842) by Szegedy et al. in ILSVRC 2014. (Trained by Sergio Guadarrama @sguada)
+- **BAIR Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version as described in [ImageNet classification with deep convolutional neural networks](http:https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Jeff Donahue @jeffdonahue)
+- **BAIR AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in [ImageNet classification with deep convolutional neural networks](http:https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Evan Shelhamer @shelhamer)
+- **BAIR Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn) as described by Girshick et al. in CVPR 2014. (Trained by Ross Girshick @rbgirshick)
+- **BAIR GoogLeNet** in `models/bvlc_googlenet`: GoogLeNet trained on ILSVRC 2012, almost exactly as described in [Going Deeper with Convolutions](http:https://arxiv.org/abs/1409.4842) by Szegedy et al. in ILSVRC 2014. (Trained by Sergio Guadarrama @sguada)

-**Community models** made by Caffe users are posted to a publicly editable [wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).
+**Community models** made by Caffe users are posted to a publicly editable [model zoo wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).
These models are subject to conditions of their respective authors such as citation and license.
Thank you for sharing your models!

@@ -42,6 +42,8 @@ A caffe model is distributed as a directory containing:
- License information.
- [optional] Other helpful scripts.

+This simple format can be handled through bundled scripts or manually if need be.
+
### Hosting model info

Github Gist is a good format for model info distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.
@@ -55,14 +57,14 @@ Downloading model info is done just as easily with `scripts/download_model_from_
### Hosting trained models

It is up to the user where to host the `.caffemodel` file.
-We host our BVLC-provided models on our own server.
+We host our BAIR-provided models on our own server.
Dropbox also works fine (tip: make sure that `?dl=1` is appended to the end of the URL).

`scripts/download_model_binary.py <dirname>` downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms SHA1.

-## BVLC model license
+## BAIR model license

-The Caffe models bundled by the BVLC are released for unrestricted use.
+The Caffe models bundled by the BAIR are released for unrestricted use.

These models are trained on data from the [ImageNet project](http:https://www.image-net.org/) and training data includes internet photos that may be subject to copyright.

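
The download flow the model zoo page describes (fetch the `.caffemodel` from the URL given in the `<dirname>/readme.md` frontmatter, then confirm its SHA1) can be sketched in a few lines of Python. This is an illustrative sketch, not the bundled `download_model_binary.py` itself; the URL and digest in the usage comment are placeholders:

```python
import hashlib
import urllib.request

def fetch_and_verify(url, sha1_expected, dest):
    """Fetch a .caffemodel and confirm its SHA1 digest, mirroring the
    download-then-verify behavior described for the bundled script."""
    urllib.request.urlretrieve(url, dest)
    digest = hashlib.sha1()
    with open(dest, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != sha1_expected:
        raise ValueError("SHA1 mismatch: download is corrupt or out of date")
    return dest

# Hypothetical usage; the real URL and SHA1 come from <dirname>/readme.md:
# fetch_and_verify("http://example.org/some_model.caffemodel",
#                  "0123456789abcdef0123456789abcdef01234567",
#                  "some_model.caffemodel")
```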
4 changes: 2 additions & 2 deletions docs/multigpu.md
@@ -13,7 +13,7 @@ The GPUs to be used for training can be set with the "-gpu" flag on the command
# Hardware Configuration Assumptions

The current implementation uses a tree reduction strategy. e.g. if there are 4 GPUs in the system, 0:1, 2:3 will exchange gradients, then 0:2 (top of the tree) will exchange gradients, 0 will calculate
-updated model, 0\-\>2, and then 0\-\>1, 2\-\>3.
+updated model, 0\-\>2, and then 0\-\>1, 2\-\>3.

For best performance, P2P DMA access between devices is needed. Without P2P access, for example crossing PCIe root complex, data is copied through host and effective exchange bandwidth is greatly reduced.

@@ -23,4 +23,4 @@ Current implementation has a "soft" assumption that the devices being used are h

# Scaling Performance

-Performance is **heavily** dependent on the PCIe topology of the system, the configuration of the neural network you are training, and the speed of each of the layers. Systems like the DIGITS DevBox have an optimized PCIe topology (X99-E WS chipset). In general, scaling on 2 GPUs tends to be ~1.8X on average for networks like AlexNet, CaffeNet, VGG, GoogleNet. 4 GPUs begins to have falloff in scaling. Generally with "weak scaling" where the batchsize increases with the number of GPUs you will see 3.5x scaling or so. With "strong scaling", the system can become communication bound, especially with layer performance optimizations like those in [cuDNNv3](http:https://nvidia.com/cudnn), and you will likely see closer to mid 2.x scaling in performance. Networks that have heavy computation compared to the number of parameters tend to have the best scaling performance.
+Performance is **heavily** dependent on the PCIe topology of the system, the configuration of the neural network you are training, and the speed of each of the layers. Systems like the DIGITS DevBox have an optimized PCIe topology (X99-E WS chipset). In general, scaling on 2 GPUs tends to be ~1.8X on average for networks like AlexNet, CaffeNet, VGG, GoogleNet. 4 GPUs begins to have falloff in scaling. Generally with "weak scaling" where the batchsize increases with the number of GPUs you will see 3.5x scaling or so. With "strong scaling", the system can become communication bound, especially with layer performance optimizations like those in [cuDNNv3](http:https://nvidia.com/cudnn), and you will likely see closer to mid 2.x scaling in performance. Networks that have heavy computation compared to the number of parameters tend to have the best scaling performance.
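
The tree-reduction schedule in the multi-GPU notes (0:1 and 2:3 exchange gradients, then 0:2 at the top of the tree, with the updated model broadcast back down) can be made concrete with a small sketch. It assumes the power-of-two, homogeneous-device setup described above and illustrates only the pairing order, not Caffe's actual P2P implementation:

```python
def tree_reduction_rounds(num_gpus):
    """Pairing schedule for a power-of-two tree reduction over GPU ids.
    For 4 GPUs this yields [[(0, 1), (2, 3)], [(0, 2)]]: pairs 0:1 and 2:3
    exchange gradients, then 0:2 at the top of the tree. GPU 0 computes the
    updated model, and the broadcast reverses the rounds (0->2, then 0->1
    and 2->3)."""
    rounds, step = [], 1
    while step < num_gpus:
        rounds.append([(i, i + step) for i in range(0, num_gpus, 2 * step)])
        step *= 2
    return rounds

print(tree_reduction_rounds(4))  # [[(0, 1), (2, 3)], [(0, 2)]]
```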