Tags: modern-fortran/neural-fortran

v0.17.0

Intrinsic `pack` replaced by pointers in `get_params` and `get_gradients` (#183)

* Replace intrinsic pack by pointers

* Dense layer: remove an avoidable reshape

* conv2d: avoid intrinsics pack and reshape

* replace a reshape by a pointer

* clean conv2d_layer_submodule

---------

Co-authored-by: Vandenplas, Jeremie <[email protected]>
Co-authored-by: milancurcic <[email protected]>
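
The change above swaps copy-based flattening via the `pack` intrinsic for pointer views of the layer parameters. A minimal sketch of the idea, with illustrative names rather than the library's actual code: a rank-remapping pointer exposes a 2D weight array as a flat 1D array without allocating or copying.

```fortran
! Sketch: flattening a 2D array with pack() (copies) vs a rank-remapping
! pointer (no copy). Names are illustrative, not neural-fortran's code.
program pack_vs_pointer
  implicit none
  real, allocatable, target :: weights(:,:)
  real, pointer :: w_flat(:)
  real, allocatable :: w_copy(:)

  allocate(weights(3, 2))
  call random_number(weights)

  ! Old approach: pack() allocates a new array and copies every element.
  w_copy = pack(weights, .true.)

  ! New approach: a rank-remapping pointer reuses the same storage.
  w_flat(1:size(weights)) => weights

  print *, 'copy and view agree:', all(w_copy == w_flat)
end program pack_vs_pointer
```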

v0.16.1

Replacement of a matmul + use of merge (#181)

* dense_layer: replace a matmul(reshape) by a do concurrent

* nf_activation: replace some where statements by merge intrinsic

* Set correct size for self%gradient in dense_layer

* remove some unneeded pack()

* Remove notes on -fno-frontend-optimize (no longer necessary)

* Bump patch version

---------

Co-authored-by: Vandenplas, Jeremie <[email protected]>
Co-authored-by: milancurcic <[email protected]>
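
A minimal sketch of the two idioms this release adopts, with assumed array shapes and names (not the library's code): an outer product written as `do concurrent` instead of `matmul` over reshaped rank-1 arrays, and a ReLU derivative written with the `merge` intrinsic instead of a `where` block.

```fortran
! Sketch: do concurrent replacing matmul(reshape), and merge replacing where.
program matmul_vs_do_concurrent
  implicit none
  integer, parameter :: n = 3, m = 4
  real :: input(n), upstream(m), dw_matmul(n, m), dw_loop(n, m)
  real :: x(5), relu_prime(5)
  integer :: i, j

  call random_number(input)
  call random_number(upstream)

  ! Outer product via matmul of reshaped rank-2 arrays (creates temporaries).
  dw_matmul = matmul(reshape(input, [n, 1]), reshape(upstream, [1, m]))

  ! Same outer product as a do concurrent loop, with no temporaries.
  do concurrent (i = 1:n, j = 1:m)
    dw_loop(i, j) = input(i) * upstream(j)
  end do

  print *, 'outer products agree:', all(abs(dw_matmul - dw_loop) < 1e-6)

  ! ReLU derivative as a merge() expression instead of a where block.
  x = [-2., -1., 0., 1., 2.]
  relu_prime = merge(1., 0., x > 0)
  print *, 'relu prime:', relu_prime
end program matmul_vs_do_concurrent
```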

v0.16.0

Addition of the Loss derived type and of the MSE loss function (#175)

* Addition of the abstract DT loss_type and of the DT quadratic

* Support of the loss_type for the derivative loss function

* Addition of the MSE loss function

* add documentation

* Test program placeholder

* Add loss test to CMake config

* Minimal test for expected values

* Bump version and copyright years

---------

Co-authored-by: Vandenplas, Jeremie <[email protected]>
Co-authored-by: milancurcic <[email protected]>
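
A sketch of the pattern this release introduces, using illustrative names rather than the library's actual API: an abstract loss derived type with a deferred evaluation procedure and a concrete mean squared error implementation.

```fortran
! Sketch of an abstract loss type with a deferred eval and an MSE extension.
module loss_sketch
  implicit none

  type, abstract :: loss_type
  contains
    procedure(loss_interface), deferred :: eval
  end type loss_type

  abstract interface
    pure function loss_interface(self, true, predicted) result(res)
      import :: loss_type
      class(loss_type), intent(in) :: self
      real, intent(in) :: true(:), predicted(:)
      real :: res
    end function loss_interface
  end interface

  type, extends(loss_type) :: mse
  contains
    procedure :: eval => mse_eval
  end type mse

contains

  pure function mse_eval(self, true, predicted) result(res)
    ! MSE = sum((y - yhat)**2) / n
    class(mse), intent(in) :: self
    real, intent(in) :: true(:), predicted(:)
    real :: res
    res = sum((true - predicted)**2) / size(true)
  end function mse_eval

end module loss_sketch

program test_loss
  use loss_sketch, only: mse
  implicit none
  type(mse) :: loss
  print *, loss % eval([1., 2., 3.], [1., 2., 4.])  ! expect 1/3
end program test_loss
```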

v0.15.1

safeguard Box-Muller normal random number generation against u=0.0 (#158)

* safeguard Box-Muller normal random number generation against u=0.0

* Docstrings and formatting

* Bump patch version

---------

Co-authored-by: milancurcic <[email protected]>
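
The fix above addresses the fact that `random_number` can return exactly 0, and `log(0.0)` in the Box-Muller transform yields -Infinity. A sketch of one way to safeguard it (illustrative, not necessarily the library's exact approach): redraw u until it is strictly positive.

```fortran
! Sketch: Box-Muller transform with u redrawn so that log(u) is finite.
program box_muller_safeguard
  implicit none
  real, parameter :: pi = 4 * atan(1.)
  real :: u, v, z

  call random_number(u)
  do while (u == 0)
    call random_number(u)  ! redraw: u must be in (0, 1) for log(u)
  end do
  call random_number(v)

  z = sqrt(-2 * log(u)) * cos(2 * pi * v)  ! standard normal sample
  print *, 'normal sample:', z
end program box_muller_safeguard
```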

v0.15.0

Adagrad Optimizer Implementation (#154)

* Adagrad Implementation

* Resolved comments

* Added test for adagrad

* Comment

* Fix L2 penalty and learning rate decay

* Add Adagrad to the list in README

* Bump minor version

---------

Co-authored-by: milancurcic <[email protected]>
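
A sketch of the Adagrad update rule this release adds, with illustrative names rather than the library's API; the L2 penalty and learning rate decay mentioned above are included as assumed textbook variants.

```fortran
! Sketch: Adagrad accumulates squared gradients and divides each step by
! the square root of that accumulator, so frequently-updated parameters
! get smaller steps. Toy quadratic loss for illustration.
program adagrad_sketch
  implicit none
  real, parameter :: learning_rate = 0.01, epsilon = 1e-8
  real, parameter :: weight_decay_l2 = 1e-4, learning_rate_decay = 1e-3
  real :: w(3), g(3), sum_squares(3), lr
  integer :: t

  w = [0.5, -0.3, 0.1]
  sum_squares = 0

  do t = 1, 100
    g = 2 * w                        ! gradient of a toy quadratic loss
    g = g + weight_decay_l2 * w      ! L2 penalty folded into the gradient
    lr = learning_rate / (1 + (t - 1) * learning_rate_decay)  ! decayed rate
    sum_squares = sum_squares + g**2
    w = w - lr * g / (sqrt(sum_squares) + epsilon)
  end do

  print *, 'weights after 100 Adagrad steps:', w
end program adagrad_sketch
```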

v0.14.0

Added Adam optimizer implementation (#150)

* Added Adam optimizer implementation

* called rms from optimizers module

* Minor fixes to improve adam performance

* Suggested Changes

* Remove dead code; format to <=80 columns

* Use associate instead of explicit allocation for m_hat and v_hat; formatting

* Added convergence test for Adam

* AdamW: Adam with weight decay modification

* AdamW Modifications

* Fixed failing test

* Add notes; clean up; make more internal parameters private

* AdamW changes

* flexible weight decay regularization

* Formatting

---------

Co-authored-by: milancurcic <[email protected]>
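
A sketch of the Adam update with decoupled weight decay (AdamW) that this release adds; variable names and the toy loss are illustrative, not the library's API. The `associate` construct for the bias-corrected moments echoes the commit above that replaced explicit allocation of m_hat and v_hat.

```fortran
! Sketch: Adam moment estimates with bias correction, plus AdamW-style
! decoupled weight decay applied directly to the parameters.
program adam_sketch
  implicit none
  real, parameter :: lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8
  real, parameter :: weight_decay = 0.01   ! AdamW decoupled decay
  real :: w(3), g(3), m(3), v(3)
  integer :: t

  w = [0.5, -0.3, 0.1]
  m = 0
  v = 0

  do t = 1, 100
    g = 2 * w                                   ! gradient of a toy loss
    m = beta1 * m + (1 - beta1) * g             ! first moment estimate
    v = beta2 * v + (1 - beta2) * g**2          ! second moment estimate
    associate (m_hat => m / (1 - beta1**t), &   ! bias-corrected moments
               v_hat => v / (1 - beta2**t))
      w = w - lr * (m_hat / (sqrt(v_hat) + eps) + weight_decay * w)
    end associate
  end do

  print *, 'weights after 100 AdamW steps:', w
end program adam_sketch
```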

v0.13.0

Added Momentum and Nesterov modifications (#148)

* Added Momentum and Nesterov modifications

* Resolved dummy argument error

* Changes in update formulas

* Corrected formulae, velocity allocation changes

* Added concrete implementation of RMSProp

* Report RMSE every 10% of num_epochs; Fix xtest calculation

* Initialize networks with same weights; larger batch; larger test array

* Start putting RMS and velocity structures in place; yet to be allocated and initialized

* WIP: SGD and RMSprop optimizers plumbing at the network % update level

* Added get_gradients() method (draft)

* Clean up formatting and docstrings

* Flush gradients to zero; code compiles but segfaults

* Set default value for batch_size; tests pass in debug mode but segfault in optimized mode

* Update learning rates in simple and sine examples because the default changed

* Added draft test suite for optimizers

* Store the optimizer as a member of the network type

* Don't print to stdout; indentation

* Added convergence tests

* Resolved comments

* Clean up

* Import RMSProp

* Remove old code

* Add optimizer support notes

---------

Co-authored-by: milancurcic <[email protected]>
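
A sketch of the update rules added here (classical momentum, Nesterov momentum, and RMSProp) on a toy quadratic loss; names are illustrative, not the library's API.

```fortran
! Sketch: SGD with (Nesterov) momentum, then RMSProp, on the same toy loss.
program momentum_sketch
  implicit none
  real, parameter :: lr = 0.01, momentum = 0.9, rho = 0.9, eps = 1e-8
  logical, parameter :: nesterov = .true.
  real :: w(3), g(3), velocity(3), rms(3)
  integer :: t

  w = [0.5, -0.3, 0.1]
  velocity = 0
  rms = 0

  do t = 1, 100
    g = 2 * w                                  ! gradient of a toy loss
    velocity = momentum * velocity - lr * g    ! momentum-accumulated step
    if (nesterov) then
      w = w + momentum * velocity - lr * g     ! look-ahead (Nesterov) update
    else
      w = w + velocity                         ! classical momentum update
    end if
  end do
  print *, 'weights after momentum SGD:', w

  ! RMSProp on a fresh copy of the same problem.
  w = [0.5, -0.3, 0.1]
  do t = 1, 100
    g = 2 * w
    rms = rho * rms + (1 - rho) * g**2         ! running mean of g**2
    w = w - lr * g / sqrt(rms + eps)
  end do
  print *, 'weights after RMSProp:', w
end program momentum_sketch
```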

v0.12.0

Class-based activation functions (#126)

* Implementation of activation_function class for 1d activations

* 3d activations implemented using activation_function type

* get_name function added to the activation_function type

* Activation_function instances are now passed to constructors

* removal of redundant use statements

* Small fix to make the test build

* Tidy up and formatting

* Formatting

* Set alpha defaults from Keras

* Enable leaky ReLU

* Add tests for setting alpha values to parametric activations (ELU and leaky ReLU)

* Bump version

---------

Co-authored-by: milancurcic <[email protected]>
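
A sketch of the class-based pattern this release moves to, with illustrative names rather than the library's actual API: an abstract activation type with deferred procedures, and a parametric leaky ReLU whose instance (with its alpha) is what gets passed around.

```fortran
! Sketch: abstract activation type with deferred eval/eval_prime and a
! parametric leaky ReLU extension that stores its own alpha.
module activation_sketch
  implicit none

  type, abstract :: activation_function
  contains
    procedure(eval_interface), deferred :: eval
    procedure(eval_interface), deferred :: eval_prime
  end type activation_function

  abstract interface
    pure function eval_interface(self, x) result(res)
      import :: activation_function
      class(activation_function), intent(in) :: self
      real, intent(in) :: x(:)
      real :: res(size(x))
    end function eval_interface
  end interface

  type, extends(activation_function) :: leaky_relu
    real :: alpha = 0.3   ! slope for x < 0; an assumed default
  contains
    procedure :: eval => leaky_relu_eval
    procedure :: eval_prime => leaky_relu_prime
  end type leaky_relu

contains

  pure function leaky_relu_eval(self, x) result(res)
    class(leaky_relu), intent(in) :: self
    real, intent(in) :: x(:)
    real :: res(size(x))
    res = merge(x, self % alpha * x, x > 0)
  end function leaky_relu_eval

  pure function leaky_relu_prime(self, x) result(res)
    class(leaky_relu), intent(in) :: self
    real, intent(in) :: x(:)
    real :: res(size(x))
    res = merge(1., self % alpha, x > 0)
  end function leaky_relu_prime

end module activation_sketch

program test_activation
  use activation_sketch, only: leaky_relu
  implicit none
  type(leaky_relu) :: act
  act = leaky_relu(alpha=0.01)   ! the instance is what a layer constructor would receive
  print *, act % eval([-2., 0., 3.])
end program test_activation
```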

v0.11.0

Update contributors (#120)

v0.10.0

Get set network parameters (#111)

* Add an example to get/set network parameters

* Bump version

* Add get_num_params, get_params, and set_params implementations

Co-authored-by: Christopher Zapart <[email protected]>

* Make get_params() a function; make set_params() a subroutine

Co-authored-by: Christopher Zapart <[email protected]>

* Tidy up example

* Make layer % get_num_parameters() elemental

* Begin test suite for getting and setting network params

* Simplify network % get_num_params()

* Warn on stderr if the user attempts to set_params to a non-zero param layer

* Skip no-op layer % set_params() calls

* Test getting and setting parameters

* Check that the size of the parameters matches that of the layer

* Print number of parameters in layer % print_info()

* Tidy up the example

Co-authored-by: Christopher Zapart <[email protected]>
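
A usage sketch of the get/set parameter workflow these commits describe, based on the names in the messages above (get_num_params, get_params as a function, set_params as a subroutine); the network construction details are assumed here for illustration and may differ from the examples shipped with the library.

```fortran
! Sketch: read all network parameters into a flat vector, modify them,
! and write them back.
program get_set_params_example
  use nf, only: dense, input, network
  implicit none
  type(network) :: net
  real, allocatable :: params(:)

  net = network([input(3), dense(5), dense(2)])

  print *, 'number of parameters:', net % get_num_params()

  ! get_params() returns a flat vector of all weights and biases;
  ! set_params() writes such a vector back into the layers.
  params = net % get_params()
  params = params * 0.5          ! e.g. scale every parameter
  call net % set_params(params)
end program get_set_params_example
```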