
add vgg and so on
WAMAWAMA committed Nov 6, 2022
1 parent a553a1c commit 413808e
Showing 9 changed files with 716 additions and 75 deletions.
66 changes: 41 additions & 25 deletions README.md
@@ -1,12 +1,15 @@
# ωαмα m⚙️dules
A PyTorch-based module library for building 1D/2D/3D networks flexibly ~

(🚧 still under construction, but the current modules already work)

Highlights (*Simple-to-use & Function-rich!*)
- No complex class inheritance or nesting, and the forward process is shown succinctly
- No complex input parameters, but output as many features as possible for fast reuse
- No dimension restriction, 1D or 2D or 3D networks are all supported
*A PyTorch module library for building 1D/2D/3D networks flexibly ~*

Highlights (*Simple-to-use & Function-rich!*)
- Simple code that shows all forward processes succinctly
- Output as many features as possible for fast reuse
- Support 1D / 2D / 3D networks
- Easy to integrate with any other networks
- 🚀 Pretrained weights (both 2D and 3D): 20+ `2D networks` and 30+ `3D networks`

## 1. Installation
- 🔥1.1 [`wama_modules`](https://github.com/WAMAWAMA/wama_modules) (*Basic*)
@@ -36,11 +39,18 @@ Install *transformers* with ↓
pip install transformers
```

## 2. Update list
- 2022/11/5: Open-sourced the code, version `v0.0.1-beta`
- ...


## 3. Main modules and network architectures
Here is an overview of this repo.

## 2. How to build a network modularly?
## 4. Guideline 1: Build networks modularly
How to build a network modularly?

The paradigm of building networks:
The answer is a paradigm for building networks:

***'Design architecture according to tasks, pick modules according to architecture'***

@@ -52,17 +62,8 @@ So, network architectures for different tasks can be viewed modularly, such as:
- a multi-task net for classification and segmentation = encoder + decoder + cls_head + seg_head
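The composition above can be sketched in plain PyTorch. This is a minimal illustration of the paradigm, not the `wama_modules` API: all module and class names below (`TinyEncoder`, `MultiTaskNet`) are hypothetical.

```python
# Minimal sketch (not the wama_modules API): a multi-task network
# assembled from an encoder, a decoder, and two task heads.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return [f1, f2]  # multi-scale features for reuse

class MultiTaskNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = TinyEncoder()
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Linear(32, n_classes)    # classification head
        self.seg_head = nn.Conv2d(16, 1, 1)         # segmentation head

    def forward(self, x):
        f1, f2 = self.encoder(x)
        logits = self.cls_head(f2.mean(dim=(2, 3)))  # global-pool then classify
        mask = self.seg_head(self.decoder(f2))       # decode then segment
        return logits, mask

net = MultiTaskNet()
logits, mask = net(torch.ones([2, 3, 64, 64]))
```

Swapping the heads (or dropping the decoder) yields the other architectures in the list, which is the point of the modular paradigm.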


## 3. Main modules
- resblock?
- dense block
- decoder block
- transformer block
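As one concrete illustration of the blocks listed above, here is a minimal DenseNet-style block in plain PyTorch. It is a hedged sketch, not the library's implementation; `DenseBlock` and its parameters are assumptions for illustration.

```python
# Minimal DenseNet-style block sketch (plain PyTorch, not the library's code):
# each layer's output is concatenated onto the running feature map.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(in_channels + i * growth_rate, growth_rate, 3, padding=1)
            for i in range(n_layers)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, torch.relu(layer(x))], dim=1)  # dense connectivity
        return x

block = DenseBlock(16)
out = block(torch.ones([1, 16, 8, 8]))  # output channels: 16 + 3*8 = 40
```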


## 4. Examples


Build a 3D resnet50
For example, build a 3D resnet50


@@ -80,7 +81,7 @@ input = torch.ones([3,3,128,128])



More demos are shown below ↓ (Click to view codes), or you can visit the `demo` folder for more demo codes
More demos ↓ (click to view the code), or visit the `demo` folder for more demo code
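To show how a 2D ResNet generalizes to 3D, here is a hedged sketch of a 3D residual bottleneck in plain PyTorch: swap `Conv2d`/`BatchNorm2d` for `Conv3d`/`BatchNorm3d` and feed 5D tensors. `Bottleneck3D` is a hypothetical name, not the library's class.

```python
# Hedged sketch: a 2D residual bottleneck generalized to 3D by swapping
# Conv2d/BatchNorm2d for Conv3d/BatchNorm3d (not wama_modules code).
import torch
import torch.nn as nn

class Bottleneck3D(nn.Module):
    def __init__(self, in_channels, mid_channels, out_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_channels, mid_channels, 1), nn.BatchNorm3d(mid_channels), nn.ReLU(),
            nn.Conv3d(mid_channels, mid_channels, 3, padding=1), nn.BatchNorm3d(mid_channels), nn.ReLU(),
            nn.Conv3d(mid_channels, out_channels, 1), nn.BatchNorm3d(out_channels),
        )
        self.skip = (nn.Conv3d(in_channels, out_channels, 1)
                     if in_channels != out_channels else nn.Identity())

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))  # residual connection

block = Bottleneck3D(8, 4, 16)
out = block(torch.ones([1, 8, 8, 16, 16]))  # 5D input: [B, C, D, H, W]
```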



@@ -337,11 +338,16 @@ input = torch.ones([3,3,128,128])



## 5. All modules (or functions)
## 5. Guideline 2: Use pretrained weights
How to use pretrained weights?
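A common pattern for reusing pretrained weights is partial loading: copy only the parameters whose names and shapes match, so an encoder can be initialized from a checkpoint even when the heads differ. The sketch below is an assumption about how this could be done in plain PyTorch, not the library's documented method; `load_matched_weights` is a hypothetical helper, and `pretrained_state` would normally come from `torch.load()`.

```python
# Hedged sketch of partial weight loading (hypothetical helper, plain PyTorch).
import torch
import torch.nn as nn

def load_matched_weights(model, pretrained_state):
    own_state = model.state_dict()
    matched = {k: v for k, v in pretrained_state.items()
               if k in own_state and v.shape == own_state[k].shape}
    own_state.update(matched)
    model.load_state_dict(own_state)
    return list(matched.keys())  # report which parameters were actually loaded

net_a = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 4, 1))
net_b = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 2, 1))  # different head
loaded = load_matched_weights(net_b, net_a.state_dict())
# only the first layer matches; the mismatched head keeps its own init
```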



### 5.1 `wama_modules.BaseModule`
## 6. All modules and functions

#### 5.1.1 Pooling
### 6.1 `wama_modules.BaseModule`

#### 6.1.1 Pooling
- `GlobalAvgPool` Global average pooling
- `GlobalMaxPool` Global maximum pooling
- `GlobalMaxAvgPool` GlobalMaxAvgPool = (GlobalAvgPool + GlobalMaxPool) / 2.
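The three pooling operators above can be written dimension-agnostically: reducing over every axis after `[B, C]` works identically for 1D, 2D, and 3D feature maps. A minimal sketch (function names mirror the list, but the bodies are assumptions, not the library's code):

```python
# Hedged sketch of dimension-agnostic global pooling (plain PyTorch).
import torch

def global_avg_pool(x):
    return x.mean(dim=tuple(range(2, x.dim())))  # reduce every axis after [B, C]

def global_max_pool(x):
    return x.flatten(2).max(dim=2).values        # flatten spatial axes, then max

def global_max_avg_pool(x):
    return (global_avg_pool(x) + global_max_pool(x)) / 2

f1d = torch.rand([3, 12, 100])        # [B, C, L]
f3d = torch.rand([3, 12, 8, 16, 16])  # [B, C, D, H, W]
```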
Expand Down Expand Up @@ -415,7 +421,7 @@ print(inputs3D.shape, GAMP(inputs3D).shape)
```
</details>

### 5.2 `wama_modules.utils`
### 6.2 `wama_modules.utils`
- `resizeTensor` scales a torch tensor, similar to scipy's zoom
- `tensor2array` transforms a tensor to an ndarray
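The two utilities above can be approximated with standard PyTorch calls. This is a hedged sketch under that assumption (the snake_case names and mode-selection logic are mine, not necessarily the library's):

```python
# Hedged sketch: resizing via F.interpolate, ndarray conversion via
# detach/cpu/numpy (plain PyTorch, not wama_modules.utils itself).
import torch
import torch.nn.functional as F

def resize_tensor(x, size):
    # pick the interpolation mode from dimensionality:
    # 3D input -> linear, 4D -> bilinear, 5D -> trilinear
    mode = {3: 'linear', 4: 'bilinear', 5: 'trilinear'}[x.dim()]
    return F.interpolate(x, size=size, mode=mode, align_corners=False)

def tensor_to_array(x):
    return x.detach().cpu().numpy()

img = torch.rand([1, 3, 64, 64])
resized = resize_tensor(img, (128, 128))
arr = tensor_to_array(resized)
```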

@@ -428,7 +434,7 @@ print(inputs3D.shape, GAMP(inputs3D).shape)
</details>


### 5.3 `wama_modules.Attention`
### 6.3 `wama_modules.Attention`
- `SCSEModule`
- `NonLocal`
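For reference, SCSE combines concurrent spatial and channel squeeze-excitation (after Roy et al.). The 2D sketch below is a common formulation, not necessarily this library's implementation; `SCSE2D` is a hypothetical name.

```python
# Hedged SCSE sketch: channel gate (cSE) + spatial gate (sSE), outputs summed.
import torch
import torch.nn as nn

class SCSE2D(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(   # cSE: squeeze spatially, excite channels
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(   # sSE: squeeze channels, excite pixels
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel_gate(x) + x * self.spatial_gate(x)

scse = SCSE2D(16)
out = scse(torch.rand([2, 16, 32, 32]))  # shape is preserved
```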

@@ -442,6 +448,9 @@ print(inputs3D.shape, GAMP(inputs3D).shape)


### 5.4 `wama_modules.Encoder`
- `VGGEncoder`
- `ResNetEncoder`
- `DenseNetEncoder`
- `???`
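The encoder contract implied by the demos (run stages in order, return a feature list `f_list` for a decoder or head to consume) can be sketched in plain PyTorch. `TinyVGGEncoder` is a hypothetical stand-in, not `VGGEncoder`'s actual code.

```python
# Hedged sketch of a multi-scale encoder that returns one feature map per stage.
import torch
import torch.nn as nn

class TinyVGGEncoder(nn.Module):
    def __init__(self, channels=(3, 8, 16, 32)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                          nn.MaxPool2d(2))
            for c_in, c_out in zip(channels[:-1], channels[1:])
        ])

    def forward(self, x):
        f_list = []
        for stage in self.stages:
            x = stage(x)
            f_list.append(x)  # keep every scale for skip connections
        return f_list

encoder = TinyVGGEncoder()
f_list = encoder(torch.ones([2, 3, 64, 64]))
```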

<details>
@@ -527,5 +536,12 @@ _ = [print(i.shape) for i in f_listB]
</details>


## 6. Acknowledgment
Thanks to ......
## 7. Acknowledgment
Thanks to:
1) https://github.com/ZhugeKongan/torch-template-for-deep-learning
2) PyTorch ViT implementations
3) smp (segmentation_models_pytorch)
4) transformers (Hugging Face)
5) MedicalNet

Empty file added requirements.txt
Empty file.
28 changes: 18 additions & 10 deletions setup.py
Original file line number Diff line number Diff line change
@@ -1,20 +1,28 @@
from setuptools import find_packages, setup
import io
import os
import sys

here = os.path.abspath(os.path.dirname(__file__))

# What packages are required for this module to be executed?
try:
with open(os.path.join(here, "requirements.txt"), encoding="utf-8") as f:
REQUIRED = f.read().split("\n")
except OSError:
REQUIRED = []


setup(
name='wama_modules',
version='1.0.0',
version='0.0.1-beta',
description='Nothing',
author='wamawama',
author_email='[email protected]',
python_requires=">=3.6.0",
url='https://github.com/WAMAWAMA/wama_modules',
# packages=find_packages(exclude=("tests", "docs", "images")),
packages=find_packages(),
# If your package is a single module, use this instead of 'packages':
# py_modules=['mypackage'],
# entry_points={
# 'console_scripts': ['mycli=mymodule:cli'],
# },
install_requires=[],
packages=find_packages(exclude=("demo", "docs", "images")),
install_requires=REQUIRED,
license="MIT",
)
)

8 changes: 7 additions & 1 deletion tmp.py
Original file line number Diff line number Diff line change
@@ -4,4 +4,10 @@
from torchvision.models.resnet import Bottleneck

from wama_modules.BaseModule import GlobalAvgPool
BasicBlock = BasicBlock()
BasicBlock = GlobalAvgPool()
print(1)
print(1)
print(1)
print(1)
print(1)
print(1)
2 changes: 1 addition & 1 deletion wama_modules/Attention.py
Original file line number Diff line number Diff line change
@@ -192,4 +192,4 @@ def forward(self, x):



STN
# STN
5 changes: 2 additions & 3 deletions wama_modules/BaseModule.py
Original file line number Diff line number Diff line change
@@ -290,7 +290,7 @@ def __init__(self, in_channels, out_channels, block_num=2, norm='bn', active='re
self.block_num = block_num
self.dim = dim

print('VGGStage stage contains ', block_num, ' blocks')
# print('VGGStage stage contains ', block_num, ' blocks')

# build the blocks
self.block_list = nn.ModuleList([])
@@ -434,7 +434,6 @@ def forward(self, x):
class ResStage(nn.Module):
"""
a ResStage contains multiple ResBlocks
"""
def __init__(self, type, in_channels, middle_channels, out_channels, block_num=2, norm='bn', active='relu', gn_c=8, dim=2):
super().__init__()
@@ -445,7 +444,7 @@ def __init__(self, type, in_channels, middle_channels, out_channels, block_num=2
self.block_num = block_num
self.dim = dim

print('ResStage stage contains ', block_num, ' blocks')
# print('ResStage stage contains ', block_num, ' blocks')

# build the blocks
if type == '33':
10 changes: 10 additions & 0 deletions wama_modules/Decoder.py
Original file line number Diff line number Diff line change
@@ -121,3 +121,13 @@ def forward(self, f_list):
# try this https://blog.csdn.net/m0_51436734/article/details/124073901


# NestedUNet









