
Can model be defined by nn.Sequential or does it need to use nn.ModuleList #285

Open
alecda573 opened this issue Mar 3, 2023 · 5 comments


@alecda573

I am trying to understand what happens in your forward_once method here:

```python
def forward_once(self, x, augment=False, verbose=False):
    img_size = x.shape[-2:]  # height, width
    yolo_out, out = [], []
    if verbose:
        print('0', x.shape)
        str = ''

    # Augment images (inference and test only)
    if augment:  # https://github.com/ultralytics/yolov3/issues/931
        nb = x.shape[0]  # batch size
        s = [0.83, 0.67]  # scales
        x = torch.cat((x,
                       torch_utils.scale_img(x.flip(3), s[0]),  # flip-lr and scale
                       torch_utils.scale_img(x, s[1]),  # scale
                       ), 0)

    for i, module in enumerate(self.module_list):
        name = module.__class__.__name__
        if name in ['WeightedFeatureFusion', 'FeatureConcat', 'FeatureConcat2', 'FeatureConcat3', 'FeatureConcat_l', 'ScaleChannel', 'ShiftChannel', 'ShiftChannel2D', 'ControlChannel', 'ControlChannel2D', 'AlternateChannel', 'AlternateChannel2D', 'SelectChannel', 'SelectChannel2D', 'ScaleSpatial']:  # sum, concat
            if verbose:
                l = [i - 1] + module.layers  # layers
                sh = [list(x.shape)] + [list(out[i].shape) for i in module.layers]  # shapes
                str = ' >> ' + ' + '.join(['layer %g %s' % x for x in zip(l, sh)])
            x = module(x, out)  # WeightedFeatureFusion(), FeatureConcat()
        elif name in ['ImplicitA', 'ImplicitM', 'ImplicitC', 'Implicit2DA', 'Implicit2DM', 'Implicit2DC']:
            x = module()
        elif name == 'YOLOLayer':
            yolo_out.append(module(x, out))
        elif name == 'JDELayer':
            yolo_out.append(module(x, out))
        else:  # run module directly, i.e. mtype = 'convolutional', 'upsample', 'maxpool', 'batchnorm2d' etc.
            x = module(x)

        out.append(x if self.routs[i] else [])
        if verbose:
            print('%g/%g %s -' % (i, len(self.module_list), name), list(x.shape), str)
            str = ''

    if self.training:  # train
        return yolo_out
    elif ONNX_EXPORT:  # export
        x = [torch.cat(x, 0) for x in zip(*yolo_out)]
        return x[0], torch.cat(x[1:3], 1)  # scores, boxes: 3780x80, 3780x4
    else:  # inference or test
        x, p = zip(*yolo_out)  # inference output, training output
        x = torch.cat(x, 1)  # cat yolo outputs
        if augment:  # de-augment results
            x = torch.split(x, nb, dim=0)
            x[1][..., :4] /= s[0]  # scale
            x[1][..., 0] = img_size[1] - x[1][..., 0]  # flip lr
            x[2][..., :4] /= s[1]  # scale
            x = torch.cat(x, 1)
        return x, p
```

and why you chose to loop through an nn.ModuleList object instead of using an nn.Sequential object.
Could this easily be supported with an nn.Sequential object?

@Crazylov3

You couldn't use a shortcut if you used nn.Sequential :)

@alecda573

@Crazylov3 hey, thanks for responding so quickly! Can you explain what the purpose of the shortcut is? So there is no way to replicate a shortcut using the nn.Sequential class?

@Crazylov3

The main purpose of the shortcut (also known as skip connection) in deep neural networks is to help with the flow of information and improve gradient flow during training. In particular, it helps to address the problem of vanishing gradients, which can occur when training very deep neural networks. The idea behind the shortcut is to create a direct connection between the input and output of a block of layers, allowing information to flow directly from one layer to another without having to pass through several intermediate layers. This can help to preserve information and gradients as they propagate through the network, which can lead to more stable and efficient training.
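The limitation above can be seen in a minimal sketch (the module names here are hypothetical, not from this repo): nn.Sequential only pipes each module's output into the next module, so a skip connection that adds a block's input to its output must live either inside a custom nn.Module, or in an explicit loop over an nn.ModuleList that keeps earlier outputs around, as `forward_once` does with its `out` list.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convs with a skip connection: output = input + f(input).

    This cannot be expressed by chaining plain layers in nn.Sequential,
    because the addition needs access to the block's original input.
    """
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(torch.relu(self.conv1(x)))  # shortcut add

x = torch.randn(1, 8, 16, 16)
y = ResidualBlock(8)(x)
print(tuple(y.shape))  # (1, 8, 16, 16) -- shape preserved by the skip
```

The same idea scales up: `forward_once` appends each routed output to `out`, so fusion/concat modules can reach back to any earlier layer, which a flat nn.Sequential cannot do.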

@alecda573

@Crazylov3 so it seems that in the config files the shortcut appears after two consecutive conv layers. Could this be replaced in the config files by the Bottleneck block seen in the yolov7 repo here: https://github.com/WongKinYiu/yolov7/blob/main/models/common.py
and then one could use nn.Sequential in place of nn.ModuleList?

@Crazylov3

In general, you can use a shortcut wherever you want. If you only have one config, it is easy to implement the shortcut inside a block and then chain those blocks with nn.Sequential(). However, in this case there are many config files, and using nn.Sequential() for all of them would be hard in implementation terms. The purpose of their implementation is code reuse (one source file for all configs).
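For a single fixed architecture, that folding-into-blocks approach might look like the following hedged sketch (the `ConvConvShortcut` block is illustrative, not part of this repo; it mirrors the "two convs then shortcut" pattern from the cfg files):

```python
import torch
import torch.nn as nn

class ConvConvShortcut(nn.Module):
    """Two convs followed by a shortcut add, as a self-contained block."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))  # shortcut hidden inside the block

# Because each skip is internal to its block, the blocks themselves can be
# chained with nn.Sequential -- but this model is hard-coded, whereas the
# ModuleList loop in forward_once can build *any* cfg file from one codebase.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    ConvConvShortcut(8),
    ConvConvShortcut(8),
)
out = model(torch.randn(1, 3, 32, 32))
print(tuple(out.shape))  # (1, 8, 32, 32)
```

This is the trade-off Crazylov3 describes: nn.Sequential works once the shortcuts are packaged into blocks for one known config, while the ModuleList-plus-loop design lets one piece of code interpret many config files.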
