
Suppose there are parameters in the network that I would like to change manually in pycaffe, rather than have updated automatically by the solver. For example, suppose we would like to penalize dense activations; this can be implemented as an additional loss layer. Over the course of training, we would like to change the strength of this penalty by multiplying the loss by a coefficient that evolves according to a pre-specified schedule. What would be a good way to do this in caffe? Is it possible to specify this in the prototxt definition? In the pycaffe interface?
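For concreteness, here is a sketch of the kind of schedule I have in mind (the function name and constants are just placeholders):

    # Placeholder schedule: strength of the sparsity penalty at iteration `it`.
    def penalty_coeff(it, base=1e-4, ramp_iters=10000):
        # Linearly ramp the coefficient from 0 up to `base`, then hold it.
        return base * min(1.0, float(it) / ramp_iters)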

Update: I suppose setting lr_mult and decay_mult to 0 might be a solution, but it seems like a clumsy one. Maybe a DummyDataLayer providing the parameters as a blob would be a better option, but there is so little documentation that it is quite a struggle to write for someone new to caffe.

Comments:
  • It's unclear what you are trying to do. Can you please clarify your question?
  • Thanks for pointing that out. I have added an example.

1 Answer


Maybe this is a trivial question, but just in case someone else is interested, here is the implementation I ended up using successfully.

In the layer's proto definition, set lr_mult and decay_mult to 0, which means the solver will neither learn nor decay the parameters. Use a filler to set their initial values. To change the parameters from Python during training, use a statement like net.params['name'][index].data[...] = something (note that the attribute is net.params, not net.param). A sketch of both pieces is below.
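As a minimal illustration, the layer and blob names here (penalty_scale, sparsity_raw) are hypothetical, and I am assuming the coefficient is held as the single scalar parameter of a Scale layer applied to the raw penalty:

    # Hypothetical Scale layer whose single scalar parameter is the penalty
    # coefficient. lr_mult/decay_mult of 0 freeze it against the solver.
    layer {
      name: "penalty_scale"
      type: "Scale"
      bottom: "sparsity_raw"   # unweighted sparsity penalty computed upstream
      top: "sparsity_loss"
      loss_weight: 1           # expose the scaled output as a loss
      param { lr_mult: 0 decay_mult: 0 }
      scale_param {
        num_axes: 0                              # a single scalar multiplier
        filler { type: "constant" value: 0.0 }   # initial coefficient
      }
    }

In Python, the coefficient can then be overwritten between solver steps (the solver config path and the schedule constants are assumptions):

    import caffe

    solver = caffe.SGDSolver('solver.prototxt')  # assumed solver config
    net = solver.net

    for it in range(100000):
        # Pre-specified schedule (assumed): linearly ramp the penalty to 1e-4.
        coeff = 1e-4 * min(1.0, it / 10000.0)
        # Overwrite the frozen Scale parameter; index 0 is its only blob.
        net.params['penalty_scale'][0].data[...] = coeff
        solver.step(1)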

