Hi. First of all, thank you for this amazing package. This is my first time writing a GitHub issue, so I apologize for any incorrect formatting or phrasing.
My question is about the architecture of the 3D U-Net. In the original design, each (unpadded) convolution reduces the size of the tensor by (2, 2, 2). Working through the arithmetic:

Let x := (((i − 4)/2 − 4)/2 − 4)/2 and o := (((x − 4)·2 − 4)·2 − 4)·2 − 4; then o = i − 88.

So the output tensor is smaller than the input by 88 in each spatial dimension. I believe this reasoning and calculation are correct. The same equation also shows that the input size should be of the form 8n + 4, not 8n (otherwise the intermediate divisions by 2 are not exact).
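To make the arithmetic concrete, here is a minimal sketch (plain Python, not NiftyNet code) that walks an input size through three conv-conv/pool steps, the bottom convolutions, and three upsample/conv-conv steps, all with valid padding:

```python
# Minimal sketch of the size arithmetic for a 3-level, valid-padding 3D U-Net:
# two 3x3x3 convolutions per level (-4) and 2x pooling (/2) on the way down,
# 2x upsampling (*2) and two more convolutions (-4) per level on the way up.
def valid_unet_output_size(i):
    x = i
    for _ in range(3):       # analysis path: conv, conv (-4), then pool (/2)
        x = (x - 4) // 2
    x -= 4                   # bottom level: conv, conv (-4)
    for _ in range(3):       # synthesis path: upsample (*2), then conv, conv (-4)
        x = x * 2 - 4
    return x

for i in (92, 100, 108):     # inputs of the form 8n + 4
    print(i, valid_unet_output_size(i))  # prints i - 88 in every case
```

For inputs that are not of the form 8n + 4 the intermediate divisions are no longer exact, which is where that constraint comes from.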
However, in your implementation every convolution uses padding='SAME', so the convolutions do not reduce the tensor size and the output should therefore have the same spatial size as the input. Yet at the last step you crop the tensor by 44 to reduce the size (layer name: 'crop-88').
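For comparison, here is a small sketch using tf.keras (not NiftyNet's own layer classes) showing the difference I mean: a 'same'-padded 3x3x3 convolution keeps the spatial size, whereas a 'valid'-padded one shrinks each dimension by 2.

```python
import tensorflow as tf

x = tf.zeros([1, 32, 32, 32, 1])                         # NDHWC input volume
same_out = tf.keras.layers.Conv3D(8, 3, padding='same')(x)
valid_out = tf.keras.layers.Conv3D(8, 3, padding='valid')(x)

print(same_out.shape)   # (1, 32, 32, 32, 8) -- size preserved
print(valid_out.shape)  # (1, 30, 30, 30, 8) -- reduced by 2 per dimension
```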
I'm not sure whether your implementation and the original design embody the same idea and have the same effect. In short, the 'crop-88' layer does not make sense to me.