Using asserts in test cases #6
Comments
Agreed. I will have a look at the test cases - and any help is very welcome, too :).
Thanks Patrick. I will have a look as well.
Just updated.
To move on with this issue, I believe we need a dedicated test case per classifier. I just added a new package in the tests directory with three classes. Have a look here. One thing I noticed: these classifiers usually return a different test accuracy every time they are run. Is that normal? I tried to add some tolerance to the assertions, but I am not sure that is the right approach. What are your thoughts here?
Looks great! Good question. WEASEL has a random component, as it uses liblinear. I have set a seed for liblinear to avoid the problem, but I also noticed that WEASEL's accuracy sometimes differs on the beef dataset. I have to look into that. Maybe a workaround would be to assert a minimum accuracy (a higher accuracy is then fine)? BOSS, BOSS VS and Shotgun should return the same result on every run, as no random component is involved. One more thing: maybe the (default) parameters should be set explicitly for the test runs?
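The minimum-accuracy workaround could look like the following framework-free sketch. The `evalClassifier` stub and the numbers are placeholders for illustration; a real test would call the actual classifier's evaluation code instead.

```java
// Sketch of a minimum-accuracy assertion for a classifier with a random
// component (e.g. WEASEL). A score above the floor always passes, so small
// run-to-run fluctuations upward do not break the test.
public class MinAccuracyCheck {

    // Stand-in for training and evaluating a classifier; returns accuracy.
    static double evalClassifier() {
        return 0.87; // placeholder value for illustration
    }

    // Fails the test if accuracy drops below the floor.
    static void assertMinAccuracy(double accuracy, double minAccuracy) {
        if (accuracy < minAccuracy) {
            throw new AssertionError(
                "accuracy " + accuracy + " below minimum " + minAccuracy);
        }
    }

    public static void main(String[] args) {
        double acc = evalClassifier();
        assertMinAccuracy(acc, 0.80);
        System.out.println("min-accuracy check passed: " + acc);
    }
}
```

The floor should be set just below the worst accuracy observed across seeded runs, so a genuine regression still trips it.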
Really pleased to see so many bugs and issues being fixed. I am happy to test this further, provide more test cases, and help resolve issues to ensure the stability of the algorithms in this fantastic library. Test runs must be as consistent as possible so that no bugs are hidden, or could be hidden, when testing the code after each refactoring or addition. Your last sentence regarding default parameters gives me the impression that we would disable the optimisation code in favor of consistency. That feels like a bit of a hack. How about we pass in seeds for any random number generation to ensure consistency?
Thanks a lot for pointing out these issues! And finally, there is one more bugfix for WEASEL; it should produce consistent results now. There is a problem with too-small window sizes - I cannot say why yet. Maybe it is a problem with the Fourier transform, but I can avoid it by using large enough windows for training. Oh, by "default parameters" I meant explicitly setting the parameters used when initializing the classifier:
Otherwise the accuracy might change once there is a change to maxF in a future commit.
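The idea of pinning parameters in a test can be sketched as below. The `Params` class and the field `windowSize` are stand-ins invented for illustration; only `maxF` is named in the discussion, and none of this is the actual SFA API.

```java
// Sketch: a test fixes the classifier parameters it was validated with,
// instead of relying on library defaults. If a future commit changes the
// default maxF, the test's expected accuracy stays valid. The Params class
// here is hypothetical, not the real SFA classifier constructor.
public class ExplicitParams {

    static class Params {
        final int maxF;       // word length; name taken from the discussion
        final int windowSize; // hypothetical second parameter for illustration

        Params(int maxF, int windowSize) {
            this.maxF = maxF;
            this.windowSize = windowSize;
        }
    }

    public static void main(String[] args) {
        // Pin the values the expected test results were computed with.
        Params p = new Params(6, 8);
        System.out.println("maxF=" + p.maxF + " windowSize=" + p.windowSize);
    }
}
```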
With this pull request, all test-case results are asserted. I feel we need to add one for
Sure, I agree that this is necessary. I am on leave for one week now, and will have a look into it once I am back.
Thank you very much Patrick. Enjoy the leave.
Thank you. I wrote a test case for MFT: MFTTest. It tests whether the MFT always returns the same Fourier coefficients as a DFT. I found a bug when handling l > windowsize, so I had to correct the expected results in the test cases.
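The property MFTTest checks can be illustrated from scratch: a momentary (sliding) Fourier transform updates each coefficient incrementally from the previous window, and the result must match a direct DFT of every window. This is a self-contained sketch of that invariant, not the library's MFT implementation.

```java
// Checks that an incrementally updated sliding DFT coefficient equals the
// directly computed DFT coefficient for every window position.
public class SlidingDftCheck {

    // Direct DFT coefficient k of the window data[start .. start+n-1].
    static double[] directCoeff(double[] data, int start, int n, int k) {
        double re = 0, im = 0;
        for (int j = 0; j < n; j++) {
            double angle = -2 * Math.PI * k * j / n;
            re += data[start + j] * Math.cos(angle);
            im += data[start + j] * Math.sin(angle);
        }
        return new double[] { re, im };
    }

    // Coefficient k for all windows, computed incrementally via
    // X_{t+1} = e^{2*pi*i*k/n} * (X_t - x[t] + x[t+n]).
    static double[][] slidingCoeff(double[] data, int n, int k) {
        int windows = data.length - n + 1;
        double[][] out = new double[windows][2];
        out[0] = directCoeff(data, 0, n, k); // first window computed directly
        double rot = 2 * Math.PI * k / n;
        double c = Math.cos(rot), s = Math.sin(rot);
        for (int t = 1; t < windows; t++) {
            double re = out[t - 1][0] - data[t - 1] + data[t - 1 + n];
            double im = out[t - 1][1];
            out[t][0] = re * c - im * s; // complex multiply by e^{i*rot}
            out[t][1] = re * s + im * c;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] data = { 1, 3, -2, 5, 0, 4, -1, 2, 6, -3 };
        int n = 4;
        for (int k = 0; k < n; k++) {
            double[][] inc = slidingCoeff(data, n, k);
            for (int t = 0; t + n <= data.length; t++) {
                double[] dir = directCoeff(data, t, n, k);
                if (Math.abs(inc[t][0] - dir[0]) > 1e-9
                        || Math.abs(inc[t][1] - dir[1]) > 1e-9) {
                    throw new AssertionError("mismatch at t=" + t + ", k=" + k);
                }
            }
        }
        System.out.println("sliding DFT matches direct DFT");
    }
}
```

A tolerance such as 1e-9 is needed because the incremental path accumulates floating-point rounding differently from the direct sum, exactly the kind of exact-equality pitfall a coefficient test has to account for.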
Thank you, Patrick. It seems every time we write a test case, a bug shows up and gets fixed. Shouldn't we decouple the test of method
Yes, that is a good idea. I did add another test case for
In the test cases, no assertions are used. Printing alone is insufficient to prove that a test case works properly. A good example of using assertions is here. Have a look at assertEquals and assertTrue.
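For reference, the assertion style requested above can be sketched without any framework. In JUnit the equivalents are assertEquals (with a delta argument for doubles) and assertTrue; the helpers and the accuracy value below are placeholders that just mimic that behavior.

```java
// Minimal, framework-free sketch of assertion-based testing instead of
// printing. The helpers mirror JUnit's assertEquals(expected, actual, delta)
// and assertTrue(condition).
public class AssertionSketch {

    static void assertEquals(double expected, double actual, double delta) {
        if (Math.abs(expected - actual) > delta) {
            throw new AssertionError(
                "expected " + expected + " but was " + actual);
        }
    }

    static void assertTrue(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    public static void main(String[] args) {
        double accuracy = 0.9333; // placeholder result for illustration
        // Instead of printing the accuracy, fail loudly when it is wrong.
        assertEquals(0.9333, accuracy, 1e-4);
        assertTrue(accuracy > 0.5, "accuracy should beat random guessing");
        System.out.println("all assertions passed");
    }
}
```

The key difference from printing: a wrong value stops the build, so a regression cannot slip through a log nobody reads.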