KAN Experiment on an Unknown Function #276
Hi, the shape of the labels looks a bit suspicious. It should be (N, 1) instead of (N,).
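To illustrate why an (N,) label array is suspicious: if predictions have shape (N, 1) and labels have shape (N,), broadcasting silently produces an (N, N) matrix inside the loss instead of elementwise residuals. A minimal NumPy sketch of the pitfall (illustrative only, not the thread's actual code):

```python
import numpy as np

y = np.arange(5, dtype=float)     # shape (5,)  -- 1-D labels, the suspicious case
y_col = y.reshape(-1, 1)          # shape (5, 1) -- one column per sample, as losses expect

assert y.shape == (5,) and y_col.shape == (5, 1)

# Broadcasting pitfall: (N,) minus (N, 1) silently yields an (N, N) matrix,
# so a mean-squared loss computed on it is wrong without raising any error.
diff = y - y_col
assert diff.shape == (5, 5)
```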
Glad it works!
Thanks a lot for the help! I will continue to experiment with KANs; they are promising!
Did you find an approximation of the unknown function you were looking for? I would also like to start with KANs, and I would appreciate your feedback if possible.
Hi @BKS00, sorry for my late response; I've been out of my country for a conference. Yes, it worked! The algorithm found a formulation for my case.
Glad to know! Could you share your results? I find it really interesting.
Hi @BKS00, I used the script and data that I've shared here, nothing new actually. I've been able to derive a formulation, but the problem is that the "seed" argument is not working even though I am not pruning the model. Therefore, I am getting different formulations from some of the runs. I am going to do some more work to solve the problem.
@fbkoroglu Sometimes I find adding this line at the beginning helps reproducibility. It looks like torch's LBFGS uses some operations that are non-deterministic even with seeding.
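The exact line suggested above is not shown in the thread. As a hedged sketch, a common way to seed every RNG source at once (the helper name and structure are my own; `torch.use_deterministic_algorithms` is the standard switch for flagging non-deterministic ops such as those inside LBFGS):

```python
import random


def set_global_seed(seed: int) -> None:
    """Seed every available RNG source; a hypothetical convenience helper."""
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        # Ask torch to warn on non-deterministic ops instead of silently using them.
        torch.use_deterministic_algorithms(True, warn_only=True)
    except ImportError:
        pass
```

Calling `set_global_seed(0)` once at the top of the script makes subsequent draws repeatable across runs, as far as the libraries allow.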
I tried to use it. The R^2 for the trained model is very good, and I tested the KAN model using Monte Carlo Simulation (MCS) and compared the result with the one I obtained using FEM-MCS. They were almost the same, indicating that everything was good. However, when I used the derived symbolic function, the result I got was quite wrong. I evaluated the symbolic function on the same inputs as in the model training dataset to calculate the outputs manually and compared them with the FEM results. I realized that there is a difference of up to 10 times, and the average is around 3 times. Do you have an idea how to solve this issue? I am adding two plots here to illustrate the situation. By the way, thank you a lot for your suggestions so far. @BKS00 you might also be interested in this issue.
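The "up to 10 times, around 3 times on average" mismatch described above can be quantified with a small helper (a hypothetical, plain-Python sketch, not the thread's actual evaluation code):

```python
def ratio_stats(reference, predicted):
    """Max and mean |predicted / reference| ratio, skipping zero references."""
    ratios = [abs(p / r) for r, p in zip(reference, predicted) if r != 0]
    return max(ratios), sum(ratios) / len(ratios)


# Example: symbolic outputs that are 2x and 10x the FEM reference values.
worst, average = ratio_stats([1.0, 2.0], [2.0, 20.0])
assert worst == 10.0 and average == 6.0
```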
@fbkoroglu how big is your KAN, and is there any activation whose symbolic fit is quite off (low R^2)? It would be great if you could include what is printed after you call auto_symbolic().
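For reference, the R^2 reported per activation is the standard coefficient of determination; a minimal stdlib sketch (I am not claiming this is pykan's exact implementation):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot


# A perfect symbolic fit gives R^2 = 1.0; values well below 1 flag a bad fit.
assert r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 1.0
```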
Dear @KindXiaoming, thank you for your fast response! My KAN is [3,3,1], so it is not that big, and I am trying several grid sizes. I have the following lines for grid trials and training:
I am using the "default" library for the symbolic fit. As you can see, I tried to construct the KAN as simply as possible. I did no pruning because all of my inputs are needed in the final model. Consequently, I have the following R^2 values for the symbolic fitting, which seem okay:
After auto_symbolic, did you do further training? There are a few affine transformations that need to be fixed via further training. What is the loss before auto_symbolic, right after auto_symbolic, and after further training?
I needed to update my KAN version since I had missed the major improvements while I was abroad. Now I have managed to come up with a solid symbolic function based on your suggestions. Thank you for your advice! I am very happy with the losses and the estimation capacity of KANs, as well as the illustrations! I have two final questions before closing this issue. The first: sometimes I come across cases where one of my inputs diminishes, even though by engineering judgement I know that the input must appear in the final equation. As far as I understand from the other issues, "lock" is no longer supported, so what should I do in such cases? The second question is again about reproducibility. I just read issue #318 and checked hellokan.ipynb, but I could not see any difference from what I am doing. The line you mention in that issue is Thanks in advance!
Hello everyone, I solved the reproducibility issue and realized that it was my own mistake: I had simply forgotten to set a seed for the train/test split. Now I am able to get reproducible results. Since my first question covers a rare case, I think it is not that important, so I am closing this issue. Thank you so much for your help @KindXiaoming
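The fix described above, seeding the train/test split, can be sketched in plain Python (a hypothetical helper; most frameworks expose an equivalent `seed`/`random_state` argument on their split utilities):

```python
import random


def seeded_split(samples, test_frac=0.2, seed=42):
    """Shuffle indices with a dedicated, seeded RNG so the split is reproducible."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)  # dedicated RNG: global seeding elsewhere can't affect it
    n_test = int(len(samples) * test_frac)
    test = [samples[i] for i in idx[:n_test]]
    train = [samples[i] for i in idx[n_test:]]
    return train, test


# Same seed -> identical split on every run, which is what makes results reproducible.
a = seeded_split(list(range(100)), seed=7)
b = seeded_split(list(range(100)), seed=7)
assert a == b
```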
Hello everyone,
I have been attempting to use KAN to derive an explicit symbolic function for a dataset obtained from finite element analysis results. Despite my efforts, I have not observed any improvement in the train/test losses. I have experimented with various depths and widths, but the results remain unchanged. As I am relatively new to KANs, like many others here, I would greatly appreciate any assistance or guidance on this matter. I am sharing my code and dataset here, as well as one of the models that I trained and its corresponding loss history.
Thank you
seedDataset.xlsx