Address closed review issues #40

Merged · 26 commits · Jun 28, 2024
37 changes: 37 additions & 0 deletions CHANGELOG
@@ -15,6 +15,43 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Removed

## [1.3.2] - 2024-06-28

### Added
- Added conv1d_layer_type
- Added avgpool1d_layer_type
- Added maxpool1d_layer_type
- Added input2d_layer_type
- Added statement of need to README

### Changed

### Fixed
- Fix reference to fpm install in README
- Fixed unallocated lr_decay reference
- Fixed ifort and ifx compiler reference to temporary arrays
- Fixed temporary array reference in tests
- Fixed epsilon undefined value in mod_loss
- Fixed project link in CMakeLists.txt

### Removed

## [1.3.1] - 2024-05-25

### Added

### Changed
- Add reference to gcc 12.3 compatibility in README

### Fixed
- Fix attempt to assign size of random seed in random_setup
- Fix attempt to assign size of random seed in test_network
- Fix attempt to assign size of random seed in example/simple
- Fix abstract interface use in base_layer_type
- Add appropriate references to neural-fortran in CONTRIBUTING.md and in associated procedures and modules

### Removed

## [1.3.0] - 2024-03-13

### Added
9 changes: 7 additions & 2 deletions CMakeLists.txt
@@ -15,7 +15,7 @@ project(athena NONE)
set( LIB_NAME ${PROJECT_NAME} )
set( PROJECT_DESCRIPTION
"Fortran neural network" )
set( PROJECT_URL "https://https://git.exeter.ac.uk/hepplestone/athena" )
set( PROJECT_URL "https://github.com/nedtaylor/athena" )
set( CMAKE_CONFIGURATION_TYPES "Release" "Parallel" "Serial" "Dev" "Debug" "Parallel_Dev"
CACHE STRING "List of configurations types." )
set( CMAKE_BUILD_TYPE "Release"
@@ -97,13 +97,16 @@ set(LIB_FILES
mod_batchnorm1d_layer.f90
mod_batchnorm2d_layer.f90
mod_batchnorm3d_layer.f90
mod_conv1d_layer.f90
mod_conv2d_layer.f90
mod_conv3d_layer.f90
mod_dropout_layer.f90
mod_dropblock2d_layer.f90
mod_dropblock3d_layer.f90
mod_avgpool1d_layer.f90
mod_avgpool2d_layer.f90
mod_avgpool3d_layer.f90
mod_maxpool1d_layer.f90
mod_maxpool2d_layer.f90
mod_maxpool3d_layer.f90
mod_full_layer.f90
@@ -112,6 +115,7 @@
mod_flatten3d_layer.f90
mod_flatten4d_layer.f90
mod_input1d_layer.f90
mod_input2d_layer.f90
mod_input3d_layer.f90
mod_input4d_layer.f90
mod_container_layer.f90
@@ -221,6 +225,7 @@ if(BUILD_TESTS)
endif()

# add coverage compiler flags
if (CMAKE_BUILD_TYPE MATCHES "Debug*" OR CMAKE_BUILD_TYPE MATCHES "Dev*")
if ( ( CMAKE_Fortran_COMPILER MATCHES ".*gfortran.*" OR CMAKE_Fortran_COMPILER MATCHES ".*gcc.*" ) AND
( CMAKE_BUILD_TYPE MATCHES "Debug*" OR CMAKE_BUILD_TYPE MATCHES "Dev*" ) )
append_coverage_compiler_flags()
endif()
2 changes: 2 additions & 0 deletions CONTRIBUTING.md
@@ -2,6 +2,8 @@

This document outlines the organisation of the athena codebase to help guide code contributors.

This document has been copied from the neural-fortran repository and used as a template for this version. The original document can be found here: https://github.com/modern-fortran/neural-fortran/blob/main/CONTRIBUTING.md

## Overall code organization

The source code organisation follows the usual `fpm` convention:
33 changes: 28 additions & 5 deletions README.md
@@ -25,6 +25,28 @@ It was decided that this project should be migrated to allow for better communit

---

## Statement of need

The ATHENA library leverages Fortran's strong support for array arithmetic and its compatibility with parallel and high-performance computing resources.
Additionally, many improvements have been made available since Fortran 95, most notably in Fortran 2018 (Reid 2018) (with more to come in Fortran 2023), alongside continued development by the Fortran Standards committee.
All of this provides a clear incentive to develop further libraries and frameworks focused on providing machine learning capabilities to the Fortran community.

While existing Fortran-based libraries, such as neural-fortran (Curcic 2019), address many aspects of neural networks,
ATHENA provides implementations of some well-known features not currently available within other libraries; these features include batch normalisation, regularisation layers (such as dropout and dropblock), and average pooling layers.
Additionally, the library provides support for 1, 2, and 3D input data for most features currently implemented, including 1, 2, and 3D data for convolutional layers.
Finally, the ATHENA library supports many convolution options, including various padding types and strides.

One of the primary intended applications of ATHENA is in materials science, which heavily utilises convolutional and graph neural networks for learning based on charge densities and atomic structures.
Given the unique data structure of atomic configurations, specifically their graph-based nature, a specialised API must be developed to accommodate these needs.

### References
- Reid, J. (2018). The new features of Fortran 2018. SIGPLAN Fortran Forum, 37(1), 5–43. https://doi.org/10.1145/3206214.3206215
- Curcic, M. (2019). A parallel Fortran framework for neural networks and deep learning. SIGPLAN Fortran Forum, 38(1), 4–21. https://doi.org/10.1145/3323057.3323059


Documentation
-----

ATHENA is distributed with the following directories:

| Directory | Description |
@@ -35,9 +57,6 @@
| _test/_ | A set of test programs to check functionality of the library works after compilation |


Documentation
-----

For extended details on the functionality of this library, please check out the [wiki](https://github.com/nedtaylor/athena/wiki)

**NOTE: No manual document currently exists. One will be included at a later date**
@@ -63,6 +82,10 @@ The library has been developed and tested using the following compilers:
- ifort -- Intel 2021.10.0.20230609
- ifx -- IntelLLVM 2023.2.0

#### Tested compilers
The library has also been tested with and found to support the following compilers:
- gfortran -- gcc 12.3

### Building with fpm

The library is set up to work with the Fortran Package Manager (fpm).
@@ -76,7 +99,7 @@ Run the following command in the repository main directory:

To check whether ATHENA has installed correctly and that the compilation works as expected, the following command can be run:
```
fpm test
fpm test --profile release
```

This runs a set of test programs (found within the test/ directory) to ensure the expected output occurs when layers and networks are set up.
@@ -133,7 +156,7 @@ Using fpm, the examples are built alongside the library. To list all available e
To run a particular example, execute the following command:

```
fpm run --example [NAME]
fpm run --example [NAME] --profile release
```

where [_NAME_] is the name of the example found in the list.
2 changes: 1 addition & 1 deletion example/mnist/src/main.f90
@@ -71,7 +71,7 @@ program mnist_example
!!!-----------------------------------------------------------------------------
!!! initialise random seed
!!!-----------------------------------------------------------------------------
call random_setup(seed, num_seed=1, restart=.false.)
call random_setup(seed, restart=.false.)


!!!-----------------------------------------------------------------------------
2 changes: 1 addition & 1 deletion example/mnist_3D/src/main.f90
@@ -72,7 +72,7 @@ program mnist_test
!!!-----------------------------------------------------------------------------
!!! initialise random seed
!!!-----------------------------------------------------------------------------
call random_setup(seed, num_seed=1, restart=.false.)
call random_setup(seed, restart=.false.)


!!!-----------------------------------------------------------------------------
2 changes: 1 addition & 1 deletion example/mnist_bn/src/main.f90
@@ -71,7 +71,7 @@ program mnist_example
!!!-----------------------------------------------------------------------------
!!! initialise random seed
!!!-----------------------------------------------------------------------------
call random_setup(seed, num_seed=1, restart=.false.)
call random_setup(seed, restart=.false.)


!!!-----------------------------------------------------------------------------
2 changes: 1 addition & 1 deletion example/mnist_drop/src/main.f90
@@ -71,7 +71,7 @@ program mnist_test
!!!-----------------------------------------------------------------------------
!!! initialise random seed
!!!-----------------------------------------------------------------------------
call random_setup(seed, num_seed=1, restart=.false.)
call random_setup(seed, restart=.false.)


!!!-----------------------------------------------------------------------------
6 changes: 4 additions & 2 deletions example/simple/src/main.f90
@@ -1,3 +1,6 @@
!! This file contains a modified version of the "simple" example found in ...
!! ... neural fortran:
!! https://github.com/modern-fortran/neural-fortran/blob/main/example/simple.f90
program simple
use athena
use constants_mnist, only: real12, pi
@@ -16,9 +19,8 @@ program simple


!! set random seed
seed_size = 8
call random_seed(size=seed_size)
seed = [1,1,1,1,1,1,1,1]
allocate(seed(seed_size), source = 1)
call random_seed(put=seed)

write(*,*) "Simple function approximation using a fully-connected neural network"
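The `example/simple` change above replaces a hard-coded eight-element seed with one sized at run time: the length of the seed array used by the `random_seed` intrinsic is processor-dependent, so a fixed-size array can fail under other compilers. A minimal standalone sketch of the pattern (hypothetical program name, arbitrary fill value):

```fortran
program seed_demo
  implicit none
  integer :: seed_size
  integer, allocatable :: seed(:)

  !! query the processor-dependent seed length rather than hard-coding it
  call random_seed(size=seed_size)

  !! allocate the seed array to the queried length, filled with a fixed value
  allocate(seed(seed_size), source=1)

  !! seed the intrinsic pseudo-random generator for reproducible runs
  call random_seed(put=seed)
end program seed_demo
```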
3 changes: 3 additions & 0 deletions example/sine/src/main.f90
@@ -1,3 +1,6 @@
!! This file contains a modified version of the "sine" example found in ...
!! ... neural fortran:
!! https://github.com/modern-fortran/neural-fortran/blob/main/example/sine.f90
program sine
use athena
use constants_mnist, only: real12, pi
2 changes: 1 addition & 1 deletion fpm.toml
@@ -1,5 +1,5 @@
name = "athena"
version = "1.3.0"
version = "1.3.2"
license = "MIT"
author = "Ned Thaddeus Taylor"
maintainer = "[email protected]"
4 changes: 4 additions & 0 deletions src/athena.f90
@@ -49,6 +49,7 @@ module athena

!! input layer types
use input1d_layer, only: input1d_layer_type
use input2d_layer, only: input2d_layer_type
use input3d_layer, only: input3d_layer_type
use input4d_layer, only: input4d_layer_type

@@ -58,6 +59,7 @@
use batchnorm3d_layer, only: batchnorm3d_layer_type, read_batchnorm3d_layer

!! convolution layer types
use conv1d_layer, only: conv1d_layer_type, read_conv1d_layer
use conv2d_layer, only: conv2d_layer_type, read_conv2d_layer
use conv3d_layer, only: conv3d_layer_type, read_conv3d_layer

@@ -67,8 +69,10 @@
use dropblock3d_layer, only: dropblock3d_layer_type, read_dropblock3d_layer

!! pooling layer types
use avgpool1d_layer, only: avgpool1d_layer_type, read_avgpool1d_layer
use avgpool2d_layer, only: avgpool2d_layer_type, read_avgpool2d_layer
use avgpool3d_layer, only: avgpool3d_layer_type, read_avgpool3d_layer
use maxpool1d_layer, only: maxpool1d_layer_type, read_maxpool1d_layer
use maxpool2d_layer, only: maxpool2d_layer_type, read_maxpool2d_layer
use maxpool3d_layer, only: maxpool3d_layer_type, read_maxpool3d_layer
