FengleiFan/Duality

ArXiv

Network Equivalency

This part includes implementations of eight equivalent networks derived in light of the extended De Morgan's law. The eight networks are shown in Figure 1. The purpose of this experiment is to evaluate whether the performances of these eight networks are close to one another.

Figure 1. Eight equivalent networks in light of the extended De Morgan's law.
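The equivalent networks above are obtained by rewriting activations through duality identities. As a minimal illustration of this flavor of identity (the paper's extended De Morgan's law itself is more general), the following sketch numerically checks the exact ReLU duality $\mathrm{ReLU}(x) = x + \mathrm{ReLU}(-x)$, which lets a network swap a unit for its mirrored counterpart without changing the computed function:

```python
# Numerical check of a ReLU duality identity of the kind used to derive
# equivalent networks: ReLU(x) = x + ReLU(-x).
# (Illustrative only; the extended De Morgan's law in the paper is more general.)

def relu(x):
    return max(x, 0.0)

for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    lhs = relu(x)
    rhs = x + relu(-x)
    assert abs(lhs - rhs) < 1e-12
```

Because the identity is exact, swapping one side for the other anywhere in a network leaves its input-output map unchanged; only the architecture drawing differs.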

Folders

NetworkEquivalence: this directory contains the code for the eight equivalent networks. The dataset used is the breast cancer dataset.

Running Experiments

Please run the following commands from the repository root:

>> python NetworkEquivalence/NetworkEquivalency_I.py    
>> python NetworkEquivalence/NetworkEquivalency_II.py 
>> python NetworkEquivalence/NetworkEquivalency_III.py 

Robustness

In this part, we compare the robustness of a deep quadratic network and a wide quadratic network constructed as shown in Figure 2.

Figure 2. The width and depth equivalence for networks of quadratic neurons. In this construction, a deep network is to implement the continued fraction of a polynomial, and a wide network reflects the factorization of the polynomial.
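The width-depth duality above rests on one polynomial admitting both a "wide" factorized evaluation and a "deep" nested evaluation. The sketch below checks this numerically on a toy cubic; the roots and coefficients are illustrative assumptions, and Horner's nested form stands in here for the paper's continued-fraction construction:

```python
# Toy check that one polynomial admits both a "wide" (factorized) and a
# "deep" (nested) evaluation, mirroring the width-depth duality.
# Roots and coefficients are illustrative; Horner's scheme stands in for
# the continued-fraction form used in the paper.

roots = [1.0, 2.0, 3.0]            # p(x) = (x - 1)(x - 2)(x - 3)
coeffs = [1.0, -6.0, 11.0, -6.0]   # expanded: x^3 - 6x^2 + 11x - 6

def wide(x):
    # factorized form: one "layer" of parallel factors
    out = 1.0
    for r in roots:
        out *= (x - r)
    return out

def deep(x):
    # Horner's nested form: sequential (deep) composition of cheap steps
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

for x in [-1.0, 0.5, 2.5, 4.0]:
    assert abs(wide(x) - deep(x)) < 1e-9
```

Both evaluations compute the same function, but the wide form uses independent factors while the deep form chains them sequentially, which is exactly the trade-off the robustness comparison probes.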

Experimental Design

We first preprocessed the MNIST dataset using image deskewing and dimensionality reduction. Image deskewing (https://fsix.github.io/mnist/Deskewing.html) straightens digits that were written at a slant. Mathematically, skewing is modeled as an affine transformation $Image^{'} = A(Image)+b$: the center of mass of the image is computed to estimate the required offset, and the covariance matrix is estimated to measure how strongly the image is skewed. The center and covariance matrix are then used to apply the inverse affine transformation, which is referred to as deskewing. Next, we used t-SNE to reduce the dimensionality of the MNIST images from $28\times 28$ to $2$, yielding a two-dimensional embedding space.
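The moment-based deskewing step can be sketched as follows: the skew $\alpha$ is estimated from the image's second-order moments as $\mathrm{cov}(x, y) / \mathrm{var}(y)$, and the inverse shear $x \mapsto x - \alpha (y - c_y)$ undoes it. The tiny weighted point cloud standing in for an image here is an illustrative assumption:

```python
# Sketch of moment-based deskewing (in the spirit of the linked MNIST
# deskewing note): estimate the skew from second-order moments, then apply
# the inverse shear. The point cloud standing in for an image is illustrative.

def skew_estimate(points):
    # points: list of (x, y, intensity) samples
    total = sum(v for _, _, v in points)
    cx = sum(x * v for x, y, v in points) / total   # center of mass (x)
    cy = sum(y * v for x, y, v in points) / total   # center of mass (y)
    mu11 = sum((x - cx) * (y - cy) * v for x, y, v in points) / total
    mu02 = sum((y - cy) ** 2 * v for x, y, v in points) / total
    return mu11 / mu02   # skew alpha = cov(x, y) / var(y)

def deskew(points):
    alpha = skew_estimate(points)
    total = sum(v for _, _, v in points)
    cy = sum(y * v for x, y, v in points) / total
    # inverse shear: shift each x back in proportion to its vertical offset
    return [(x - alpha * (y - cy), y, v) for x, y, v in points]

# A symmetric cloud sheared by a known amount is recovered exactly:
square = [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 1.0), (1, 1, 1.0)]
sheared = [(x + 0.7 * y, y, v) for x, y, v in square]
assert abs(skew_estimate(sheared) - 0.7) < 1e-9
assert abs(skew_estimate(deskew(sheared))) < 1e-9
```

On real MNIST images the same estimate is computed over pixel intensities, and the shear is applied with interpolation rather than by moving exact points.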

Furthermore, we used the following four popular adversarial attack methods to evaluate the robustness of the deep learning models: (1) the fast gradient method; (2) the fast gradient sign method (FSGM in the script names); (3) the iterative fast gradient sign method (I-FSGM); and (4) DeepFool.

Folders

Robustness: this directory contains the code for training the deep and wide quadratic networks. The dataset used is MNIST.

Running Experiments

From the repository root, run the following commands to train a wide quadratic network, a deep quadratic network, and a deep ReLU network, respectively.

>> python Robustness/MNIST_QuadraticTrain_wide.py    
>> python Robustness/MNIST_QuadraticTrain_deep.py 
>> python Robustness/MNIST_QuadraticTrain_deep_RELU.py 

Lastly, run the following commands to test the robustness of the already-trained wide and deep networks under the three attack methods.

>> python Robustness/IFSGM_wide.py    
>> python Robustness/IFSGM_deep.py 
>> python Robustness/IFSGM_deep_relu.py 
>> python Robustness/FSGM_wide.py    
>> python Robustness/FSGM_deep.py 
>> python Robustness/FSGM_deep_relu.py 
>> python Robustness/DF_wide.py    
>> python Robustness/DF_deep.py 
>> python Robustness/DF_deep_relu.py 
