
Difference in deploy.prototxt and train_val.prototxt in FaceRecognition #33

deepakcrk opened this issue Mar 27, 2018 · 2 comments

deepakcrk commented Mar 27, 2018

I am seeing a difference between deploy.prototxt and train_val.prototxt in FaceRecognition.
In deploy.prototxt, the ReLU and InnerProduct layers are missing. What is the reason for this?

The following layers are present in train_val.prototxt but not in deploy.prototxt:

layer {
  name: "relu6_1"
  type: "ReLU"
  bottom: "deepid_1"
  top: "deepid_1"
}

layer {
  name: "fc8_1"
  type: "InnerProduct"
  bottom: "deepid_1"
  top: "fc8_1"
  param {
    name: "fc8_w"
    lr_mult: 1
    decay_mult: 1
  }
  param {
    name: "fc8_b"
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 10575
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

@jagadeesh09

Hi @deepakcrk

This code is implemented according to the DeepID paper. During the training phase, the network learns a 160-dimensional face representation vector by classifying a given face into one of the 10,575 identity classes (the `num_output: 10575` in `fc8_1`). After training, that 160-dimensional vector is used as the face representation vector, or face embedding. So at deployment time there is no need for the last InnerProduct layer. I hope this clears your doubt.
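To make the train/deploy split concrete, here is a minimal sketch in plain Python. All shapes and weight values are toy placeholders (not the actual Caffe model): a small embedding stands in for the 160-d `deepid_1` vector, and a small class count stands in for the 10,575 identities. The training path stacks the ReLU and `fc8_1` classifier head on top of the embedding; the deploy path stops before those two layers, as in deploy.prototxt.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def inner_product(weights, bias, v):
    # weights: list of rows, one row per output unit
    return [sum(w_i * x_i for w_i, x_i in zip(row, v)) + b
            for row, b in zip(weights, bias)]

EMBED_DIM = 4        # stands in for the 160-d "deepid_1" embedding
NUM_CLASSES = 6      # stands in for the 10,575 training identities

# hypothetical "learned" parameters, fixed here for illustration
W_embed = [[0.1 * (i + j) for j in range(3)] for i in range(EMBED_DIM)]
b_embed = [0.0] * EMBED_DIM
W_fc8 = [[0.01 * (i - j) for j in range(EMBED_DIM)] for i in range(NUM_CLASSES)]
b_fc8 = [0.0] * NUM_CLASSES

face = [0.5, -0.2, 0.8]  # stand-in for preprocessed input features

# train_val path: embedding -> relu6_1 -> fc8_1 logits over identities
deepid_1 = inner_product(W_embed, b_embed, face)
fc8_logits = inner_product(W_fc8, b_fc8, relu(deepid_1))

# deploy path: stop at the embedding; relu6_1 and fc8_1 are
# training-only layers, so the deploy net never computes them
embedding = inner_product(W_embed, b_embed, face)

print(len(embedding))   # embedding dimension, analogous to 160
print(len(fc8_logits))  # class count, analogous to 10575
```

At deploy time only the embedding is needed (e.g. for comparing two faces by distance), so the classifier head and its 10,575-way weight matrix can simply be omitted from the graph.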


ysh329 commented Aug 9, 2018

@jagadeesh09 Thanks
