diff --git a/libsvm-3.21/COPYRIGHT b/libsvm-3.21/COPYRIGHT
new file mode 100644
index 0000000..5fe2f22
--- /dev/null
+++ b/libsvm-3.21/COPYRIGHT
@@ -0,0 +1,31 @@
+
+Copyright (c) 2000-2014 Chih-Chung Chang and Chih-Jen Lin
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions
+are met:
+
+1. Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright
+notice, this list of conditions and the following disclaimer in the
+documentation and/or other materials provided with the distribution.
+
+3. Neither name of copyright holders nor the names of its contributors
+may be used to endorse or promote products derived from this software
+without specific prior written permission.
+
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR
+CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/libsvm-3.21/FAQ.html b/libsvm-3.21/FAQ.html
new file mode 100644
index 0000000..42a175a
--- /dev/null
+++ b/libsvm-3.21/FAQ.html
@@ -0,0 +1,2166 @@
+
+
+
+
+
+All Questions (84)
+
+
+
+
+Some courses which have used libsvm as a tool
+Some applications/tools which have used libsvm
+Where can I find documents/videos of libsvm ?
+Where are change log and earlier versions?
+How to cite LIBSVM?
+I would like to use libsvm in my software. Is there any license problem?
+Is there a repository of additional tools based on libsvm?
+On unix machines, I got "error in loading shared libraries" or "cannot open shared object file." What happened ?
+I have modified the source and would like to build the graphic interface "svm-toy" on MS windows. How should I do it ?
+I am an MS windows user but why only one (svm-toy) of those precompiled .exe actually runs ?
+What is the difference between "." and "*" output during training?
+Why occasionally the program (including MATLAB or other interfaces) crashes and gives a segmentation fault?
+How to build a dynamic library (.dll file) on MS windows?
+On some systems (e.g., Ubuntu), compiling LIBSVM gives many warning messages. Is this a problem and how to disable the warning message?
+In LIBSVM, why you don't use certain C/C++ library functions to make the code shorter?
+Why sometimes not all attributes of a data appear in the training/model files ?
+What if my data are non-numerical ?
+Why do you consider sparse format ? Will the training of dense data be much slower ?
+Why sometimes the last line of my data is not read by svm-train?
+Is there a program to check if my data are in the correct format?
+May I put comments in data files?
+How to convert other data formats to LIBSVM format?
+The output of training C-SVM is like the following. What do they mean?
+Can you explain more about the model file?
+Should I use float or double to store numbers in the cache ?
+Does libsvm have special treatments for linear SVM?
+The number of free support vectors is large. What should I do?
+Should I scale training and testing data in a similar way?
+On windows sometimes svm-scale.exe generates some non-ASCII data not good for training/prediction?
+Does it make a big difference if I scale each attribute to [0,1] instead of [-1,1]?
+The prediction rate is low. How could I improve it?
+My data are unbalanced. Could libsvm handle such problems?
+What is the difference between nu-SVC and C-SVC?
+The program keeps running (without showing any output). What should I do?
+The program keeps running (with output, i.e. many dots). What should I do?
+The training time is too long. What should I do?
+Does shrinking always help?
+How do I get the decision value(s)?
+How do I get the distance between a point and the hyperplane?
+On 32-bit machines, if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"
+How do I disable screen output of svm-train?
+I would like to use my own kernel. Any example? In svm.cpp, there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?
+What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method?
+I would like to solve L2-loss SVM (i.e., error term is quadratic). How should I modify the code ?
+In one-class SVM, parameter nu should be an upper bound of the training error rate. Why sometimes I get a training error rate bigger than nu?
+Why the code gives NaN (not a number) results?
+Why the sign of predicted labels and decision values are sometimes reversed?
+I don't know class labels of test data. What should I put in the first column of the test file?
+How can I use OpenMP to parallelize LIBSVM on a multicore/shared-memory computer?
+How could I know which training instances are support vectors?
+Why sv_indices (indices of support vectors) are not stored in the saved model file?
+After doing cross validation, why there is no model file outputted ?
+Why my cross-validation results are different from those in the Practical Guide?
+On some systems CV accuracy is the same in several runs. How could I use different data partitions? In other words, how do I set random seed in LIBSVM?
+Why on windows sometimes grid.py fails?
+Why grid.py/easy.py sometimes generates the following warning message?
+How do I choose the kernel?
+How does LIBSVM perform parameter selection for multi-class problems?
+How do I choose parameters for one-class SVM as training data are in only one class?
+Instead of grid.py, what if I would like to conduct parameter selection using other programming languages?
+Why training a probability model (i.e., -b 1) takes a longer time?
+Why using the -b option does not give me better accuracy?
+Why using svm-predict -b 0 and -b 1 gives different accuracy values?
+How can I save images drawn by svm-toy?
+I press the "load" button to load data points but why svm-toy does not draw them ?
+I would like svm-toy to handle more than three classes of data, what should I do ?
+What is the difference between Java version and C++ version of libsvm?
+Is the Java version significantly slower than the C++ version?
+While training I get the following error message: java.lang.OutOfMemoryError. What is wrong?
+Why you have the main source file svm.m4 and then transform it to svm.java?
+Except the python-C++ interface provided, could I use Jython to call libsvm ?
+I compile the MATLAB interface without problem, but why errors occur while running it?
+On 64bit Windows I compile the MATLAB interface without problem, but why errors occur while running it?
+Does the MATLAB interface provide a function to do scaling?
+How could I use MATLAB interface for parameter selection?
+I use MATLAB parallel programming toolbox on a multi-core environment for parameter selection. Why the program is even slower?
+How to use LIBSVM with OpenMP under MATLAB/Octave?
+How could I generate the primal variable w of linear SVM?
+Is there an OCTAVE interface for libsvm?
+How to handle the name conflict between svmtrain in the libsvm matlab interface and that in MATLAB bioinformatics toolbox?
+On Windows I got an error message "Invalid MEX-file: Specific module not found" when running the pre-built MATLAB interface in the windows sub-directory. What should I do?
+LIBSVM supports 1-vs-1 multi-class classification. If instead I would like to use 1-vs-rest, how to implement it using MATLAB interface?
+I tried to install matlab interface on mac, but failed. What should I do?
+I tried to install octave interface on windows, but failed. What should I do?
+
+
+
+
+
+
+
+Q: Some courses which have used libsvm as a tool
+
+
+Institute for Computer Science,
+Faculty of Applied Science, University of Freiburg, Germany
+
+
+Division of Mathematics and Computer Science.
+Faculteit der Exacte Wetenschappen
+Vrije Universiteit, The Netherlands.
+
+
+Electrical and Computer Engineering Department,
+University of Wisconsin-Madison
+
+
+
+Technion (Israel Institute of Technology), Israel.
+
+
+Computer and Information Sciences Dept., University of Florida
+
+
+The Institute of Computer Science,
+University of Nairobi, Kenya.
+
+
+Applied Mathematics and Computer Science, University of Iceland.
+
+
+SVM tutorial in machine learning
+summer school, University of Chicago, 2005.
+
+
+
+[Go Top]
+
+
+Q: Some applications/tools which have used libsvm
+
+(and maybe liblinear).
+
+
+[Go Top]
+
+
+Q: Where can I find documents/videos of libsvm ?
+
+
+
+
+
+Official implementation document:
+
+C.-C. Chang and
+C.-J. Lin.
+LIBSVM
+: a library for support vector machines.
+ACM Transactions on Intelligent
+Systems and Technology, 2:27:1--27:27, 2011.
+pdf , ps.gz ,
+ACM digital lib .
+
+
+ Instructions for using LIBSVM are in the README files in the main directory and some sub-directories.
+
+README in the main directory: details all options, data format, and library calls.
+
+tools/README: parameter selection and other tools
+
+A guide for beginners:
+
+C.-W. Hsu, C.-C. Chang, and
+C.-J. Lin.
+
+A practical guide to support vector classification
+
+ An introductory video
+for windows users.
+
+
+
+[Go Top]
+
+
+Q: Where are change log and earlier versions?
+
+See the change log .
+
+
+You can download earlier versions
+here .
+
+[Go Top]
+
+
+Q: How to cite LIBSVM?
+
+
+Please cite the following paper:
+
+Chih-Chung Chang and Chih-Jen Lin, LIBSVM
+: a library for support vector machines.
+ACM Transactions on Intelligent Systems and Technology, 2:27:1--27:27, 2011.
+Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
+
+The bibtex format is
+
+@article{CC01a,
+ author = {Chang, Chih-Chung and Lin, Chih-Jen},
+ title = {{LIBSVM}: A library for support vector machines},
+ journal = {ACM Transactions on Intelligent Systems and Technology},
+ volume = {2},
+ issue = {3},
+ year = {2011},
+ pages = {27:1--27:27},
+ note = {Software available at \url{http://www.csie.ntu.edu.tw/~cjlin/libsvm}}
+}
+
+
+[Go Top]
+
+
+Q: I would like to use libsvm in my software. Is there any license problem?
+
+
+We have "the modified BSD license,"
+so it is very easy to
+use libsvm in your software.
+Please check the COPYRIGHT file in detail. Basically
+you need to
+
+
+Clearly indicate that LIBSVM is used.
+
+
+Retain the LIBSVM COPYRIGHT file in your software.
+
+
+It can also be used in commercial products.
+
+[Go Top]
+
+
+Q: Is there a repository of additional tools based on libsvm?
+
+
+Yes, see libsvm
+tools
+
+[Go Top]
+
+
+Q: On unix machines, I got "error in loading shared libraries" or "cannot open shared object file." What happened ?
+
+
+
+This usually happens if you compile the code
+on one machine and run it on another which has incompatible
+libraries.
+Try to recompile the program on that machine or use static linking.
+
+[Go Top]
+
+
+Q: I have modified the source and would like to build the graphic interface "svm-toy" on MS windows. How should I do it ?
+
+
+
+Build it as a project by choosing "Win32 Project."
+On the other hand, for "svm-train" and "svm-predict"
+you want to choose "Win32 Console Project."
+After libsvm 2.5, you can also use the file Makefile.win.
+See details in README.
+
+
+
+If you are not using Makefile.win and see the following
+link error
+
+LIBCMTD.lib(wwincrt0.obj) : error LNK2001: unresolved external symbol
+_wWinMain@16
+
+you may have selected a wrong project type.
+
+[Go Top]
+
+
+Q: I am an MS windows user but why only one (svm-toy) of those precompiled .exe actually runs ?
+
+
+
+You need to open a command window
+and type svmtrain.exe to see all options.
+Some examples are in the README file.
+
+[Go Top]
+
+
+Q: What is the difference between "." and "*" output during training?
+
+
+
+"." means every 1,000 iterations (or every #data
+iterations if your #data is less than 1,000).
+"*" means that after iterations of using
+a smaller shrunk problem,
+we reset to use the whole set. See the
+implementation document for details.
+
+[Go Top]
+
+
+Q: Why occasionally the program (including MATLAB or other interfaces) crashes and gives a segmentation fault?
+
+
+
+Very likely the program consumes more memory than the
+operating system can provide. Try a smaller data set and see if the
+program still crashes.
+
+[Go Top]
+
+
+Q: How to build a dynamic library (.dll file) on MS windows?
+
+
+
+The easiest way is to use Makefile.win.
+See details in README.
+
+Alternatively, you can use Visual C++. Here is
+the example using Visual Studio 2013:
+
+Create a Win32 empty DLL project and set (in Project->$Project_Name
+Properties...->Configuration) to "Release."
+ For how to create a new dynamic-link library, please refer to
+http://msdn2.microsoft.com/en-us/library/ms235636(VS.80).aspx
+
+ Add svm.cpp, svm.h to your project.
+ Add __WIN32__ and _CRT_SECURE_NO_DEPRECATE to Preprocessor definitions (in
+Project->$Project_Name Properties...->C/C++->Preprocessor)
+ Set Create/Use Precompiled Header to Not Using Precompiled Headers
+(in Project->$Project_Name Properties...->C/C++->Precompiled Headers)
+ Set the path for the Module-Definition File svm.def (in
+Project->$Project_Name Properties...->Linker->input
+ Build the DLL.
+ Rename the dll file to libsvm.dll and move it to the correct path.
+
+
+
+
+[Go Top]
+
+
+Q: On some systems (e.g., Ubuntu), compiling LIBSVM gives many warning messages. Is this a problem and how to disable the warning message?
+
+
+
+If you are using a version before 3.18, probably you see
+a warning message like
+
+svm.cpp:2730: warning: ignoring return value of int fscanf(FILE*, const char*, ...), declared with attribute warn_unused_result
+
+This is not a problem; see this page for more
+details of ubuntu systems.
+To disable the warning message you can replace
+
+CFLAGS = -Wall -Wconversion -O3 -fPIC
+
+with
+
+CFLAGS = -Wall -Wconversion -O3 -fPIC -U_FORTIFY_SOURCE
+
+in Makefile.
+ After version 3.18, we have a better setting so that such warning messages do not appear.
+
+[Go Top]
+
+
+Q: In LIBSVM, why you don't use certain C/C++ library functions to make the code shorter?
+
+
+
+For portability, we use only features defined in ISO C89. Note that features in ISO C99 may not be available everywhere.
+Even the newest gcc lacks some features in C99 (see http://gcc.gnu.org/c99status.html for details).
+If the situation changes in the future,
+we might consider using these newer features.
+
+[Go Top]
+
+
+Q: Why sometimes not all attributes of a data appear in the training/model files ?
+
+
+libsvm uses the so called "sparse" format where zero
+values do not need to be stored. Hence an instance with attribute values
+
+1 0 2 0
+
+is represented as
+
+1:1 3:2
+
+
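The sparse encoding can be sketched in a few lines of Python (to_libsvm_line is a hypothetical helper for illustration, not part of libsvm):

```python
def to_libsvm_line(label, values):
    """Format one dense instance as a libsvm data line.
    Indices are 1-based and zero-valued attributes are dropped."""
    feats = " ".join(f"{i}:{v:g}" for i, v in enumerate(values, start=1) if v != 0)
    return f"{label} {feats}".strip()
```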
+[Go Top]
+
+
+Q: What if my data are non-numerical ?
+
+
+Currently libsvm supports only numerical data.
+You may have to change non-numerical data to
+numerical. For example, you can use several
+binary attributes to represent a categorical
+attribute.
+
+[Go Top]
+
+
+Q: Why do you consider sparse format ? Will the training of dense data be much slower ?
+
+
+This is a controversial issue. The kernel
+evaluation (i.e. inner product) of sparse vectors is slower,
+so the total training time can be two or three times
+that of using the dense format.
+However, we cannot support only dense format as then we CANNOT
+handle extremely sparse cases. Simplicity of the code is another
+concern. Right now we decide to support
+the sparse format only.
+
+[Go Top]
+
+
+Q: Why sometimes the last line of my data is not read by svm-train?
+
+
+
+We assume that you have '\n' at the end of
+each line. So please press enter at the end
+of your last line.
+
+[Go Top]
+
+
+Q: Is there a program to check if my data are in the correct format?
+
+
+
+The svm-train program in libsvm conducts only a simple check of the input data. To do a
+detailed check, after libsvm 2.85, you can use the python script tools/checkdata.py. See tools/README for details.
+
+[Go Top]
+
+
+Q: May I put comments in data files?
+
+
+
+We don't officially support this. But, currently LIBSVM
+is able to process data in the following
+format:
+
+1 1:2 2:1 # your comments
+
+Note that the character ":" should not appear in your
+comments.
+
+
+[Go Top]
+
+
+Q: How to convert other data formats to LIBSVM format?
+
+
+
+It depends on your data format. A simple way is to use
+libsvmwrite in the libsvm matlab/octave interface.
+
+Take a CSV (comma-separated values) file
+in UCI machine learning repository as an example.
+We download SPECTF.train .
+Labels are in the first column. The following steps produce
+a file in the libsvm format.
+
+matlab> SPECTF = csvread('SPECTF.train'); % read a csv file
+matlab> labels = SPECTF(:, 1); % labels from the 1st column
+matlab> features = SPECTF(:, 2:end);
+matlab> features_sparse = sparse(features); % features must be in a sparse matrix
+matlab> libsvmwrite('SPECTFlibsvm.train', labels, features_sparse);
+
+The transformed data are stored in SPECTFlibsvm.train.
+
+
+Alternatively, you can use convert.c
+to convert CSV format to libsvm format.
+
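If MATLAB/Octave is not at hand, the same conversion can be sketched in plain Python (assuming, as with SPECTF.train, that labels are in the first column; csv_to_libsvm is a hypothetical helper, not a libsvm tool):

```python
import csv
import io

def csv_to_libsvm(csv_text):
    """Convert label-first CSV rows to libsvm format, dropping zero features."""
    lines = []
    for row in csv.reader(io.StringIO(csv_text)):
        label, feats = row[0], row[1:]
        pairs = " ".join(f"{i}:{v}" for i, v in enumerate(feats, start=1)
                         if float(v) != 0)
        lines.append(f"{label} {pairs}".strip())
    return "\n".join(lines)
```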
+[Go Top]
+
+
+Q: The output of training C-SVM is like the following. What do they mean?
+
+ optimization finished, #iter = 219
+ nu = 0.431030
+ obj = -100.877286, rho = 0.424632
+ nSV = 132, nBSV = 107
+ Total nSV = 132
+
+obj is the optimal objective value of the dual SVM problem.
+rho is the bias term in the decision function
+sgn(w^Tx - rho).
+nSV and nBSV are number of support vectors and bounded support
+vectors (i.e., alpha_i = C). nu-svm is a somewhat equivalent
+form of C-SVM where C is replaced by nu. nu simply shows the
+corresponding parameter. More details are in
+
+libsvm document .
+
+[Go Top]
+
+
+Q: Can you explain more about the model file?
+
+
+
+In the model file, after parameters and other information such as labels, each line represents a support vector.
+Support vectors are listed in the order of "labels" shown earlier.
+(i.e., those from the first class in the "labels" list are
+grouped first, and so on.)
+If k is the total number of classes,
+in front of a support vector in class j, there are
+k-1 coefficients
+y*alpha, where the alpha values are the dual solutions of the
+following two-class problems:
+
+1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
+
+and y=1 in first j-1 coefficients, y=-1 in the remaining
+k-j coefficients.
+
+For example, if there are 4 classes, the file looks like:
+
+
++-+-+-+--------------------+
+|1|1|1| |
+|v|v|v| SVs from class 1 |
+|2|3|4| |
++-+-+-+--------------------+
+|1|2|2| |
+|v|v|v| SVs from class 2 |
+|2|3|4| |
++-+-+-+--------------------+
+|1|2|3| |
+|v|v|v| SVs from class 3 |
+|3|3|4| |
++-+-+-+--------------------+
+|1|2|3| |
+|v|v|v| SVs from class 4 |
+|4|4|4| |
++-+-+-+--------------------+
+
+See also
+ an illustration using
+MATLAB/OCTAVE.
+
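As a sanity check of this layout, the k-1 two-class problems that a support vector of class j participates in can be listed with a small Python sketch (hypothetical helper; classes are numbered 1..k as above):

```python
def pairwise_problems(j, k):
    """Two-class problems involving class j, in the model-file column order:
    1 vs j, ..., (j-1) vs j, then j vs (j+1), ..., j vs k."""
    return [(i, j) for i in range(1, j)] + [(j, i) for i in range(j + 1, k + 1)]
```

Every class contributes exactly k-1 coefficient columns, matching the diagram above.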
+[Go Top]
+
+
+Q: Should I use float or double to store numbers in the cache ?
+
+
+
+We use float as the default so that you can store more numbers
+in the cache.
+In general this is good enough, but for a few difficult
+cases (e.g., very large C) where solutions are huge
+numbers, the numerical precision of float may not be
+enough.
+
+[Go Top]
+
+
+Q: Does libsvm have special treatments for linear SVM?
+
+
+
+
+No, libsvm solves linear/nonlinear SVMs in the
+same way.
+Some tricks may save training/testing time if the
+linear kernel is used,
+so libsvm is NOT particularly efficient for linear SVM,
+especially when
+C is large and
+the number of data is much larger
+than the number of attributes.
+
+
+ Please also see our SVM guide
+on the discussion of using RBF and linear
+kernels.
+
+[Go Top]
+
+
+Q: The number of free support vectors is large. What should I do?
+
+
+This usually happens when the data are overfitted.
+If attributes of your data are in large ranges,
+try to scale them. Then the region
+of appropriate parameters may be larger.
+Note that there is a scale program
+in libsvm.
+
+[Go Top]
+
+
+Q: Should I scale training and testing data in a similar way?
+
+
+Yes, you can do the following:
+
+> svm-scale -s scaling_parameters train_data > scaled_train_data
+> svm-scale -r scaling_parameters test_data > scaled_test_data
+
+
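The idea behind -s/-r above is simply to learn per-attribute ranges from the training data and reuse them on the test data. A minimal Python sketch of that behavior (hypothetical helpers, mimicking svm-scale's default [-1,1] target range):

```python
def fit_scaling(train, lower=-1.0, upper=1.0):
    """Learn per-attribute (min, max) from training data only (like -s)."""
    cols = list(zip(*train))
    return [(min(c), max(c)) for c in cols], lower, upper

def apply_scaling(data, params):
    """Apply previously stored ranges to any data set (like -r)."""
    ranges, lower, upper = params
    return [[lower + (upper - lower) * (x - mn) / (mx - mn) if mx > mn else lower
             for x, (mn, mx) in zip(row, ranges)]
            for row in data]
```

Test attributes scaled with the training ranges may fall outside [-1,1]; that is expected and harmless.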
+[Go Top]
+
+
+Q: On windows sometimes svm-scale.exe generates some non-ASCII data not good for training/prediction?
+
+
+In general this does not happen, but we have observed that in some rare
+situations the output of svm-scale.exe directed to a file (by ">")
+has the wrong encoding. That is, the file is not an ASCII file, so it cannot be
+used for training/prediction. Please let us know if this happens, as at this moment
+we don't clearly see how to fix the problem.
+
+[Go Top]
+
+
+Q: Does it make a big difference if I scale each attribute to [0,1] instead of [-1,1]?
+
+
+
+For the linear scaling method, if the RBF kernel is
+used and parameter selection is conducted, there
+is no difference. Assume Mi and mi are
+respectively the maximal and minimal values of the
+ith attribute. Scaling to [0,1] means
+
+ x'=(x-mi)/(Mi-mi)
+
+For [-1,1],
+
+ x''=2(x-mi)/(Mi-mi)-1.
+
+In the RBF kernel,
+
+ x'-y'=(x-y)/(Mi-mi), x''-y''=2(x-y)/(Mi-mi).
+
+Hence, since squared distances are four times larger after
+[-1,1] scaling, using (C,g) on the [0,1]-scaled data is the
+same as (C,g/4) on the [-1,1]-scaled data.
+
+ Though the performance is the same, the computational
+time may be different. For data with many zero entries,
+[0,1]-scaling keeps the sparsity of input data and hence
+may save the time.
+
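The equivalence can be checked numerically: mapping [0,1] data to [-1,1] doubles every coordinate difference, so squared distances in the RBF kernel grow by a factor of 4 and gamma must shrink by the same factor (a quick sketch, not libsvm code):

```python
import math

def rbf(g, x, y):
    """RBF kernel exp(-g * |x - y|^2)."""
    return math.exp(-g * sum((a - b) ** 2 for a, b in zip(x, y)))

x01, y01 = (0.2, 0.9), (0.7, 0.1)        # two points already scaled to [0,1]
x11 = tuple(2 * v - 1 for v in x01)      # the same points mapped to [-1,1]
y11 = tuple(2 * v - 1 for v in y01)

g = 0.8
k01 = rbf(g, x01, y01)       # kernel value on [0,1]-scaled data
k11 = rbf(g / 4, x11, y11)   # identical value with gamma divided by 4
```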
+[Go Top]
+
+
+Q: The prediction rate is low. How could I improve it?
+
+
+Try to use the model selection tool grid.py in the tools
+directory to find
+out good parameters. To see the importance of model selection,
+please
+see our guide for beginners:
+
+A practical guide to support vector
+classification
+
+
+[Go Top]
+
+
+Q: My data are unbalanced. Could libsvm handle such problems?
+
+
+Yes, there is the -wi option. For example, if you use
+
+> svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
+
+
+the penalty for class "-1" is larger.
+Note that this -w option is for C-SVC only.
+
+[Go Top]
+
+
+Q: What is the difference between nu-SVC and C-SVC?
+
+
+Basically they are the same thing but with different
+parameters. The range of C is from zero to infinity,
+but nu is always in [0,1]. A nice property
+of nu is that it is related to the ratio of
+support vectors and the ratio of the training
+error.
+
+[Go Top]
+
+
+Q: The program keeps running (without showing any output). What should I do?
+
+
+You may want to check your data. Each training/testing
+instance must be on one line and cannot be split
+across lines. In addition, you have to remove empty lines.
+
+[Go Top]
+
+
+Q: The program keeps running (with output, i.e. many dots). What should I do?
+
+
+In theory libsvm guarantees to converge.
+Therefore, this means you are
+handling ill-conditioned situations
+(e.g. too large/small parameters) so numerical
+difficulties occur.
+
+You may get better numerical stability by replacing
+
+typedef float Qfloat;
+
+in svm.cpp with
+
+typedef double Qfloat;
+
+That is, elements in the kernel cache are stored
+in double instead of single. However, this means fewer elements
+can be put in the kernel cache.
+
+[Go Top]
+
+
+Q: The training time is too long. What should I do?
+
+
+For large problems, please specify enough cache size (i.e.,
+-m).
+Slow convergence may happen for some difficult cases (e.g. -c is large).
+You can try to use a looser stopping tolerance with -e.
+If that still doesn't work, you may train only a subset of the data.
+You can use the program subset.py in the directory "tools"
+to obtain a random subset.
+
+
+If you have extremely large data and face this difficulty, please
+contact us. We will be happy to discuss possible solutions.
+
+
+When using large -e, you may want to check if -h 0 (no shrinking) or -h 1 (shrinking) is faster.
+See a related question below.
+
+
+[Go Top]
+
+
+Q: Does shrinking always help?
+
+
+If the number of iterations is high, then shrinking
+often helps.
+However, if the number of iterations is small
+(e.g., you specify a large -e), then
+probably using -h 0 (no shrinking) is better.
+See the
+implementation document for details.
+
+[Go Top]
+
+
+Q: How do I get the decision value(s)?
+
+
+We print out decision values for regression. For classification,
+we solve several binary SVMs for multi-class cases. You
+can easily obtain values by calling the subroutine
+svm_predict_values. Their corresponding labels
+can be obtained from svm_get_labels.
+Details are in
+README of libsvm package.
+
+
+If you are using MATLAB/OCTAVE interface, svmpredict can directly
+give you decision values. Please see matlab/README for details.
+
+
+We do not recommend the following. But if you would
+like to get values for
+TWO-class classification with labels +1 and -1
+(note: +1 and -1 but not things like 5 and 10)
+in the easiest way, simply add
+
+ printf("%f\n", dec_values[0]*model->label[0]);
+
+after the line
+
+ svm_predict_values(model, x, dec_values);
+
+of the file svm.cpp.
+Positive (negative)
+decision values correspond to data predicted as +1 (-1).
+
+
+
+[Go Top]
+
+
+Q: How do I get the distance between a point and the hyperplane?
+
+
+The distance is |decision_value| / |w|.
+We have |w|^2 = w^Tw = alpha^T Q alpha = 2*(dual_obj + sum alpha_i).
+Thus in svm.cpp please find the place
+where we calculate the dual objective value
+(i.e., the subroutine Solve())
+and add a statement to print w^Tw.
+
+More precisely, here is what you need to do
+
+Search for "calculate objective value" in svm.cpp
+
+ In that place, si->obj is the variable for the objective value
+
+ Add a for loop to calculate the sum of alpha
+
+ Calculate 2*(si->obj + sum of alpha) and print the square root of it. You now get |w|. You
+need to recompile the code
+
+ Check an earlier FAQ on printing decision values. You
+need to recompile the code
+
+
+Then print decision value divided by the |w| value obtained earlier.
+
+
+
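Once |w| is printed, the rest of the arithmetic can be done outside svm.cpp. A small Python sketch of the two formulas above (hypothetical helpers; dual_obj stands for the si->obj value):

```python
import math

def w_norm_from_dual(dual_obj, alphas):
    """|w| recovered from the dual solution: |w|^2 = 2 * (dual_obj + sum(alpha))."""
    return math.sqrt(2 * (dual_obj + sum(alphas)))

def distance_to_hyperplane(decision_value, w_norm):
    """Geometric distance of a point from the separating hyperplane."""
    return abs(decision_value) / w_norm
```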
+[Go Top]
+
+
+Q: On 32-bit machines, if I use a large cache (i.e. large -m) on a linux machine, why sometimes I get "segmentation fault ?"
+
+
+
+On 32-bit machines, the maximum addressable
+memory is 4GB. The Linux kernel uses 3:1
+split which means user space is 3G and
+kernel space is 1G. Although there are
+3G user space, the maximum dynamic allocation
+memory is 2G. So, if you specify -m near 2G,
+the memory will be exhausted. And svm-train
+will fail when it asks for more memory.
+For more details, please read
+
+this article .
+
+The easiest solution is to switch to a
+ 64-bit machine.
+Otherwise, there are two ways to solve this. If your
+machine supports Intel's PAE (Physical Address
+Extension), you can turn on the option HIGHMEM64G
+in Linux kernel which uses 4G:4G split for
+kernel and user space. If you don't, you can
+try a software `tub' which can eliminate the 2G
+boundary for dynamic allocated memory. The `tub'
+is available at
+http://www.bitwagon.com/tub.html .
+
+
+
+
+[Go Top]
+
+
+Q: How do I disable screen output of svm-train?
+
+
+For command-line users, use the -q option:
+
+> ./svm-train -q heart_scale
+
+
+For library users, set the global variable
+
+extern void (*svm_print_string) (const char *);
+
+to specify the output format. You can disable the output by the following steps:
+
+
+Declare a function to output nothing:
+
+void print_null(const char *s) {}
+
+
+
+Assign the output function of libsvm by
+
+svm_print_string = &print_null;
+
+
+
+Finally, a way used in earlier libsvm
+is by updating svm.cpp from
+
+#if 1
+void info(const char *fmt,...)
+
+to
+
+#if 0
+void info(const char *fmt,...)
+
+
+[Go Top]
+
+
+Q: I would like to use my own kernel. Any example? In svm.cpp, there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify ?
+
+
+An example is "LIBSVM for string data" in LIBSVM Tools.
+
+The reason why we have two functions is as follows.
+For the RBF kernel exp(-g |xi - xj|^2), if we calculate
+xi - xj first and then the norm square, there are 3n operations.
+Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) +|xj|^2))
+and by calculating all |xi|^2 in the beginning,
+the number of operations is reduced to 2n.
+This is for the training. For prediction we cannot
+do this, so a regular subroutine using 3n operations is
+needed.
+
+The easiest way to have your own kernel is
+to put the same code in these two
+subroutines by replacing any kernel.
+
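The 2n-versus-3n argument above can be illustrated directly: both expansions of the RBF kernel give the same value, but the second reuses cached squared norms (a Python sketch for illustration only; libsvm's actual routines are k_function() and kernel_function() in svm.cpp):

```python
import math

def rbf_direct(g, x, y):
    """3n-style: form x - y, then take its squared norm."""
    return math.exp(-g * sum((a - b) ** 2 for a, b in zip(x, y)))

def rbf_cached(g, x, y, x_sq, y_sq):
    """2n-style: |x|^2 and |y|^2 precomputed once; only a dot product per pair."""
    dot = sum(a * b for a, b in zip(x, y))
    return math.exp(-g * (x_sq - 2 * dot + y_sq))
```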
+[Go Top]
+
+
+Q: What method does libsvm use for multi-class SVM ? Why don't you use the "1-against-the rest" method?
+
+
+It is one-against-one. We chose it after doing the following
+comparison:
+C.-W. Hsu and C.-J. Lin.
+
+A comparison of methods
+for multi-class support vector machines
+ ,
+IEEE Transactions on Neural Networks , 13(2002), 415-425.
+
+
+"1-against-the rest" is a good method whose performance
+is comparable to "1-against-1." We do the latter
+simply because its training time is shorter.
+
+[Go Top]
+
+
+Q: I would like to solve L2-loss SVM (i.e., error term is quadratic). How should I modify the code ?
+
+
+It is extremely easy. Taking c-svc for example, to solve
+
+min_w w^Tw/2 + C \sum max(0, 1- (y_i w^Tx_i+b))^2,
+
+only two
+places of svm.cpp have to be changed.
+First, modify the following line of
+solve_c_svc from
+
+ s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
+ alpha, Cp, Cn, param->eps, si, param->shrinking);
+
+to
+
+ s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y,
+ alpha, INF, INF, param->eps, si, param->shrinking);
+
+Second, in the class of SVC_Q, declare C as
+a private variable:
+
+ double C;
+
+In the constructor replace
+
+ for(int i=0;i<prob.l;i++)
+ QD[i]= (Qfloat)(this->*kernel_function)(i,i);
+
+with
+
+ this->C = param.C;
+ for(int i=0;i<prob.l;i++)
+ QD[i]= (Qfloat)(this->*kernel_function)(i,i)+0.5/C;
+
+Then in the subroutine get_Q, after the for loop, add
+
+ if(i >= start && i < len)
+ data[i] += 0.5/C;
+
+
+
+For one-class svm, the modification is exactly the same. For SVR, you don't need an if statement like the above. Instead, you only need a simple assignment:
+
+ data[real_i] += 0.5/C;
+
+
+
+
+For large linear L2-loss SVM, please use
+LIBLINEAR .
+
+[Go Top]
+
+
+Q: In one-class SVM, parameter nu should be an upper bound of the training error rate. Why sometimes I get a training error rate bigger than nu?
+
+
+
+At optimum, some training instances should satisfy
+w^Tx - rho = 0. However, numerically they may be slightly
+smaller than zero.
+Then they are wrongly counted
+as training errors. You can use a smaller stopping tolerance
+(by the -e option) to make this problem less serious.
+
+
+This issue does not occur for nu-SVC for
+two-class classification.
+We have that
+
+nu is an upper bound on the ratio of training points
+on the wrong side of the hyperplane, and
+ therefore, nu is also an upper bound on the training error rate.
+
+Numerical issues occur in calculating the first case
+because some training points satisfying y(w^Tx + b) - rho = 0
+become negative.
+However, we have no numerical problems for the second case because
+we compare y(w^Tx + b) and 0 for counting training errors.
+
+[Go Top]
+
+
+Q: Why the code gives NaN (not a number) results?
+
+
+This rarely happens, but a few users have reported the problem.
+It seems that their
+computers for training libsvm have the VPN client
+running. The VPN software has some bugs and causes this
+problem. Please try to close or disconnect the VPN client.
+
+[Go Top]
+
+
+Q: Why are the signs of predicted labels and decision values sometimes reversed?
+
+
+
+This situation may occur in versions before 3.17.
+Nothing is wrong. Very likely you have two labels +1/-1 and the first instance in your data
+has label -1. We give the following explanation.
+
+
+Internally class labels are ordered by their first occurrence in the training set. For a k-class data, internally labels
+are 0, ..., k-1, and each two-class SVM considers pair
+(i, j) with i < j. Then class i is treated as positive (+1)
+and j as negative (-1).
+For example, if the data set has labels +5/+10 and +10 appears
+first, then internally the +5 versus +10 SVM problem
+has +10 as positive (+1) and +5 as negative (-1).
+
+
+By this setting, if you have labels +1 and -1,
+it's possible that internally they correspond to -1 and +1,
+respectively. Some new users have been confused about
+this, so after version 3.17, if the data set has only
+two labels +1 and -1,
+internally we ensure +1 to be before -1. Then class +1
+is always treated as positive in the SVM problem.
+Note that this is for two-class data only.
+
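The ordering rule above can be sketched in a few lines of C++. This is an illustrative sketch only, not libsvm's actual code; the function name internal_label_order is made up for this example.

```cpp
#include <algorithm>
#include <vector>

// Sketch of how internal class labels follow their first occurrence in
// the training set, with the post-3.17 special case that for binary
// +1/-1 data, +1 is always placed first. Illustration only.
std::vector<int> internal_label_order(const std::vector<int>& y)
{
    std::vector<int> labels;  // distinct labels, in order of first occurrence
    for (int yi : y)
        if (std::find(labels.begin(), labels.end(), yi) == labels.end())
            labels.push_back(yi);
    // Special case: binary +1/-1 data gets +1 as the first (positive)
    // class, so decision values keep the expected sign.
    if (labels.size() == 2 && labels[0] == -1 && labels[1] == 1)
        std::swap(labels[0], labels[1]);
    return labels;
}
```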
+[Go Top]
+
+
+Q: I don't know class labels of test data. What should I put in the first column of the test file?
+
+Any value is ok. In this situation, what you will use is the output file of svm-predict, which gives predicted class labels.
+
+
+
+[Go Top]
+
+
+Q: How can I use OpenMP to parallelize LIBSVM on a multicore/shared-memory computer?
+
+
+It is very easy if you are using GCC 4.2
+or later.
+
+
+In Makefile, add -fopenmp to CFLAGS.
+
+
+In class SVC_Q of svm.cpp, modify the for loop
+of get_Q to:
+
+#pragma omp parallel for private(j) schedule(guided)
+ for(j=start;j<len;j++)
+
+ In the subroutine svm_predict_values of svm.cpp, add one line to the for loop:
+
+#pragma omp parallel for private(i) schedule(guided)
+ for(i=0;i<l;i++)
+ kvalue[i] = Kernel::k_function(x,model->SV[i],model->param);
+
+For regression, you need to modify
+class SVR_Q instead. The loop in svm_predict_values
+is also different because you need
+a reduction clause for the variable sum:
+
+#pragma omp parallel for private(i) reduction(+:sum) schedule(guided)
+ for(i=0;i<model->l;i++)
+ sum += sv_coef[i] * Kernel::k_function(x,model->SV[i],model->param);
+
+
+ Then rebuild the package. Kernel evaluations in training/testing will be parallelized. An example of running this modification on
+an 8-core machine using the data set
+real-sim :
+
+
+8 cores:
+
+%setenv OMP_NUM_THREADS 8
+%time svm-train -c 8 -g 0.5 -m 1000 real-sim
+175.90sec
+
+1 core:
+
+%setenv OMP_NUM_THREADS 1
+%time svm-train -c 8 -g 0.5 -m 1000 real-sim
+588.89sec
+
+For this data, kernel evaluations take 91% of training time. In the above example, we assume you use csh. For bash, use
+
+export OMP_NUM_THREADS=8
+
+instead.
+
+ For Python interface, you need to add the -lgomp link option:
+
+$(CXX) -lgomp -shared -dynamiclib svm.o -o libsvm.so.$(SHVER)
+
+
+ For MS Windows, you need to add /openmp in CFLAGS of Makefile.win
+
+
+[Go Top]
+
+
+Q: How could I know which training instances are support vectors?
+
+
+
+It's very simple. Since version 3.13, you can use the function
+
+void svm_get_sv_indices(const struct svm_model *model, int *sv_indices)
+
+to get indices of support vectors. For example, in svm-train.c, after
+
+ model = svm_train(&prob, ¶m);
+
+you can add
+
+ int nr_sv = svm_get_nr_sv(model);
+ int *sv_indices = Malloc(int, nr_sv);
+ svm_get_sv_indices(model, sv_indices);
+ for (int i=0; i<nr_sv; i++)
+ printf("instance %d is a support vector\n", sv_indices[i]);
+
+
+ If you use the MATLAB interface, you can directly check
+
+model.sv_indices
+
+
+[Go Top]
+
+
+Q: Why are sv_indices (indices of support vectors) not stored in the saved model file?
+
+
+
+Although sv_indices is a member of the model structure
+to
+indicate support vectors in the training set,
+we do not store its contents in the model file.
+The model file is mainly used later for
+prediction, so it is basically independent
+of the training data. Thus
+storing sv_indices is not necessary.
+Users should find support vectors right after
+the training process. See the previous FAQ.
+
+[Go Top]
+
+
+Q: After doing cross validation, why is there no model file output?
+
+
+Cross validation is used for selecting good parameters.
+After finding them, re-train on the whole
+data set without the -v option.
+
+[Go Top]
+
+
+Q: Why are my cross-validation results different from those in the Practical Guide?
+
+
+
+Due to random partitions of
+the data, on different systems CV accuracy values
+may be different.
+
+[Go Top]
+
+
+Q: On some systems CV accuracy is the same in several runs. How could I use different data partitions? In other words, how do I set random seed in LIBSVM?
+
+
+If you use the GNU C library,
+the default seed 1 is used. Thus you always
+get the same result when running svm-train -v.
+To have different seeds, you can add the following code
+in svm-train.c:
+
+#include <time.h>
+
+and in the beginning of main(),
+
+srand(time(0));
+
+Alternatively, if you are not using GNU C library
+and would like to use a fixed seed, you can have
+
+srand(1);
+
+
+
+For Java, the random number generator
+is initialized using the time information.
+So results of two CV runs are different.
+To fix the seed, after version 3.1 (released
+in mid 2011), you can add
+
+svm.rand.setSeed(0);
+
+in the main() function of svm_train.java.
+
+
+If you use CV to select parameters, it is recommended to use identical folds
+under different parameters. In this case, you can consider fixing the seed.
+
+[Go Top]
+
+
+Q: Why does grid.py sometimes fail on Windows?
+
+
+
+This problem shouldn't happen after version
+2.85. If you are using earlier versions,
+please download the latest one.
+
+
+
+[Go Top]
+
+
+Q: Why do grid.py/easy.py sometimes generate the following warning message?
+
+
+Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
+Notice: cannot contour non grid data!
+
+Nothing is wrong; please disregard the
+message. It comes from gnuplot when drawing
+the contour.
+
+[Go Top]
+
+
+Q: How do I choose the kernel?
+
+
+
+In general we suggest you try the RBF kernel first.
+A recent result by Keerthi and Lin
+(
+download paper here )
+shows that if RBF is used with model selection,
+then there is no need to consider the linear kernel.
+The kernel matrix using the sigmoid kernel may not be positive definite,
+and in general its accuracy is not better than RBF
+(see the paper by Lin and Lin;
+download paper here ).
+Polynomial kernels are OK, but if a high degree is used,
+numerical difficulties tend to happen
+(think of the dth power of a number: a base less than 1 goes to 0
+and a base greater than 1 goes to infinity).
+
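The parenthetical remark can be checked numerically. The following sketch (illustrative only, not libsvm code) shows how a high power d drives bases below 1 toward 0 and bases above 1 toward overflow:

```cpp
#include <cmath>

// For a polynomial kernel (gamma*u'*v + coef0)^degree, the base is
// raised to the d-th power; a base below 1 vanishes and a base above 1
// blows up as the degree grows, causing numerical difficulties.
double dth_power(double base, int degree)
{
    return std::pow(base, degree);
}
```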
+[Go Top]
+
+
+Q: How does LIBSVM perform parameter selection for multi-class problems?
+
+
+
+LIBSVM implements "one-against-one" multi-class method, so there are
+k(k-1)/2 binary models, where k is the number of classes.
+
+
+We can consider two ways to conduct parameter selection.
+
+
+
+For any two classes of data, a parameter selection procedure is conducted. Finally,
+each decision function has its own optimal parameters.
+
+
+The same parameters are used for all k(k-1)/2 binary classification problems.
+We select parameters that achieve the highest overall performance.
+
+
+
+Each has its own advantages. A
+single parameter set may not be uniformly good for all k(k-1)/2 decision functions.
+However, as the overall accuracy is the final consideration, one parameter set
+for one decision function may lead to over-fitting. In the paper
+
+Chen, Lin, and Schölkopf,
+
+A tutorial on nu-support vector machines.
+
+Applied Stochastic Models in Business and Industry, 21(2005), 111-136,
+
+
+they have experimentally
+shown that the two methods give similar performance.
+Therefore, currently the parameter selection in LIBSVM
+takes the second approach by considering the same parameters for
+all k(k-1)/2 models.
+
+[Go Top]
+
+
+Q: How do I choose parameters for one-class SVM as training data are in only one class?
+
+
+Have a pre-specified true positive rate in mind, and then search for
+parameters which achieve a similar cross-validation accuracy.
+
+[Go Top]
+
+
+Q: Instead of grid.py, what if I would like to conduct parameter selection using other programming languages?
+
+
+For MATLAB, please see another question in FAQ.
+
+
+For using shell scripts, please check the code written by Bjarte Johansen
+
+[Go Top]
+
+
+Q: Why does training a probability model (i.e., -b 1) take a longer time?
+
+
+To construct this probability model, we internally conduct a
+cross validation, which is more time consuming than
+a regular training.
+Hence, in general you should do parameter selection first without
+-b 1. Use -b 1 only after good parameters have been
+selected. In other words, avoid using -b 1 and -v
+together.
+
+[Go Top]
+
+
+Q: Why does using the -b option not give me better accuracy?
+
+
+There is absolutely no reason the probability outputs should guarantee
+better accuracy. The main purpose of this option is
+to provide probability estimates, not to boost
+prediction accuracy. From our experience,
+after proper parameter selection, results with
+and without -b generally have similar accuracy. Occasionally there
+are some differences.
+It is not recommended to compare the two under
+just a fixed parameter
+set, as more differences will be observed.
+
+[Go Top]
+
+
+Q: Why does using svm-predict with -b 0 and -b 1 give different accuracy values?
+
+
+Let's just consider two-class classification here. After probability information is obtained in training,
+we do not have
+
+prob >= 0.5 if and only if decision value >= 0.
+
+So predictions may be different with -b 0 and 1.
+
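A small sketch of why the two can disagree, assuming a Platt-style sigmoid P(f) = 1/(1+exp(A*f+B)) of the decision value f with fitted constants A and B (the constants below are made up for illustration):

```cpp
#include <cmath>

// Platt-style mapping from a decision value f to a probability.
// When the fitted intercept B is nonzero, P(f) >= 0.5 is no longer
// equivalent to f >= 0, so -b 0 and -b 1 predictions can differ.
double platt_probability(double f, double A, double B)
{
    return 1.0 / (1.0 + std::exp(A * f + B));
}
```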
+[Go Top]
+
+
+Q: How can I save images drawn by svm-toy?
+
+
+For Microsoft windows, first press the "print screen" key on the keyboard.
+Open "Microsoft Paint"
+(included in Windows)
+and press "ctrl-v." Then you can clip
+the part of picture which you want.
+For X windows, you can
+use the program "xv" or "import" to grab the picture of the svm-toy window.
+
+[Go Top]
+
+
+Q: I press the "load" button to load data points, but why does svm-toy not draw them?
+
+
+The program svm-toy assumes both attributes (i.e. x-axis and y-axis
+values) are in (0,1). Hence you want to scale your
+data to between a small positive number and
+a number less than but very close to 1.
+Moreover, class labels must be 1, 2, or 3
+(not 1.0, 2.0 or anything else).
+
+[Go Top]
+
+
+Q: I would like svm-toy to handle more than three classes of data. What should I do?
+
+
+Taking windows/svm-toy.cpp as an example, you need to
+modify it; the difference
+from the original file is as follows (for five classes of
+data):
+
+30,32c30
+< RGB(200,0,200),
+< RGB(0,160,0),
+< RGB(160,0,0)
+---
+> RGB(200,0,200)
+39c37
+< HBRUSH brush1, brush2, brush3, brush4, brush5;
+---
+> HBRUSH brush1, brush2, brush3;
+113,114d110
+< brush4 = CreateSolidBrush(colors[7]);
+< brush5 = CreateSolidBrush(colors[8]);
+155,157c151
+< else if(v==3) return brush3;
+< else if(v==4) return brush4;
+< else return brush5;
+---
+> else return brush3;
+325d318
+< int colornum = 5;
+327c320
+< svm_node *x_space = new svm_node[colornum * prob.l];
+---
+> svm_node *x_space = new svm_node[3 * prob.l];
+333,338c326,331
+< x_space[colornum * i].index = 1;
+< x_space[colornum * i].value = q->x;
+< x_space[colornum * i + 1].index = 2;
+< x_space[colornum * i + 1].value = q->y;
+< x_space[colornum * i + 2].index = -1;
+< prob.x[i] = &x_space[colornum * i];
+---
+> x_space[3 * i].index = 1;
+> x_space[3 * i].value = q->x;
+> x_space[3 * i + 1].index = 2;
+> x_space[3 * i + 1].value = q->y;
+> x_space[3 * i + 2].index = -1;
+> prob.x[i] = &x_space[3 * i];
+397c390
+< if(current_value > 5) current_value = 1;
+---
+> if(current_value > 3) current_value = 1;
+
+
+[Go Top]
+
+
+Q: What is the difference between Java version and C++ version of libsvm?
+
+
+They are the same thing. We just rewrote the C++ code
+in Java.
+
+[Go Top]
+
+
+Q: Is the Java version significantly slower than the C++ version?
+
+
+This depends on the VM you use. We have seen good
+VMs that make the Java version quite competitive with
+the C++ code (though still slower).
+
+[Go Top]
+
+
+Q: While training I get the following error message: java.lang.OutOfMemoryError. What is wrong?
+
+
+You should try to increase the maximum Java heap size.
+For example,
+
+java -Xmx2048m -classpath libsvm.jar svm_train ...
+
+sets the maximum heap size to 2048M.
+
+[Go Top]
+
+
+Q: Why do you have the main source file svm.m4 and then transform it to svm.java?
+
+
+Unlike C, Java does not have a built-in preprocessor.
+However, we need some macros (see the first 3 lines of svm.m4).
+
+
+
+[Go Top]
+
+
+Q: Besides the provided Python-C++ interface, could I use Jython to call libsvm?
+
+ Yes, here are some examples:
+
+
+$ export CLASSPATH=$CLASSPATH:~/libsvm-2.91/java/libsvm.jar
+$ ./jython
+Jython 2.1a3 on java1.3.0 (JIT: jitc)
+Type "copyright", "credits" or "license" for more information.
+>>> from libsvm import *
+>>> dir()
+['__doc__', '__name__', 'svm', 'svm_model', 'svm_node', 'svm_parameter',
+'svm_problem']
+>>> x1 = [svm_node(index=1,value=1)]
+>>> x2 = [svm_node(index=1,value=-1)]
+>>> param = svm_parameter(svm_type=0,kernel_type=2,gamma=1,cache_size=40,eps=0.001,C=1,nr_weight=0,shrinking=1)
+>>> prob = svm_problem(l=2,y=[1,-1],x=[x1,x2])
+>>> model = svm.svm_train(prob,param)
+*
+optimization finished, #iter = 1
+nu = 1.0
+obj = -1.018315639346838, rho = 0.0
+nSV = 2, nBSV = 2
+Total nSV = 2
+>>> svm.svm_predict(model,x1)
+1.0
+>>> svm.svm_predict(model,x2)
+-1.0
+>>> svm.svm_save_model("test.model",model)
+
+
+
+
+[Go Top]
+
+
+Q: I compile the MATLAB interface without problem, but why errors occur while running it?
+
+
+Your compiler version may not be supported by/compatible with MATLAB.
+Please check this MATLAB page first and then specify the version
+number. For example, if g++ X.Y is supported, replace
+
+CXX = g++
+
+in the Makefile with
+
+CXX = g++-X.Y
+
+
+[Go Top]
+
+
+Q: On 64bit Windows I compile the MATLAB interface without problem, but why errors occur while running it?
+
+
+
+
+Please make sure that you use
+the -largeArrayDims option in make.m. For example,
+
+mex -largeArrayDims -O -c svm.cpp
+
+
+Moreover, if you use Microsoft Visual Studio,
+it is probably not properly installed.
+See the explanation
+here .
+
+[Go Top]
+
+
+Q: Does the MATLAB interface provide a function to do scaling?
+
+
+It is extremely easy to do scaling under MATLAB.
+The following one-line code scales each feature to the range
+[0,1]:
+
+(data - repmat(min(data,[],1),size(data,1),1))*spdiags(1./(max(data,[],1)-min(data,[],1))',0,size(data,2),size(data,2))
+
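For comparison, the same per-feature scaling to [0,1] as a plain C++ sketch (dense data; illustration only, not part of the MATLAB interface):

```cpp
#include <vector>

// Scale each feature (column) of a dense data matrix to [0,1],
// mirroring the MATLAB one-liner above. Constant columns map to 0.
void minmax_scale(std::vector<std::vector<double>>& data)
{
    if (data.empty()) return;
    const std::size_t nfeat = data[0].size();
    for (std::size_t j = 0; j < nfeat; ++j) {
        double lo = data[0][j], hi = data[0][j];
        for (const auto& row : data) {
            if (row[j] < lo) lo = row[j];
            if (row[j] > hi) hi = row[j];
        }
        const double range = hi - lo;
        for (auto& row : data)
            row[j] = (range > 0.0) ? (row[j] - lo) / range : 0.0;
    }
}
```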
+
+[Go Top]
+
+
+Q: How could I use MATLAB interface for parameter selection?
+
+
+One can do this by a simple loop.
+See the following example:
+
+bestcv = 0;
+for log2c = -1:3,
+ for log2g = -4:1,
+ cmd = ['-v 5 -c ', num2str(2^log2c), ' -g ', num2str(2^log2g)];
+ cv = svmtrain(heart_scale_label, heart_scale_inst, cmd);
+ if (cv >= bestcv),
+ bestcv = cv; bestc = 2^log2c; bestg = 2^log2g;
+ end
+ fprintf('%g %g %g (best c=%g, g=%g, rate=%g)\n', log2c, log2g, cv, bestc, bestg, bestcv);
+ end
+end
+
+You may adjust the parameter range in the above loops.
+
+[Go Top]
+
+
+Q: I use the MATLAB parallel programming toolbox on a multi-core environment for parameter selection. Why is the program even slower?
+
+
+Fabrizio Lacalandra of the University of Pisa reported this issue.
+It seems the problem is caused by the screen output.
+If you disable the info function
+using #if 0, then the problem
+may be solved.
+
+[Go Top]
+
+
+Q: How to use LIBSVM with OpenMP under MATLAB/Octave?
+
+
+
+First, you must modify svm.cpp. Check the following faq,
+
+How can I use OpenMP to parallelize LIBSVM on a multicore/shared-memory computer?
+
+
+To build the MATLAB/Octave interface, we recommend using make.m .
+You must append '-fopenmp' to CXXFLAGS and add '-lgomp' to mex options in make.m .
+See details below.
+
+
+For MATLAB users, the modified code is:
+
+mex CFLAGS="\$CFLAGS -std=c99" CXXFLAGS="\$CXXFLAGS -fopenmp" -largeArrayDims -I.. -lgomp svmtrain.c ../svm.cpp svm_model_matlab.c
+mex CFLAGS="\$CFLAGS -std=c99" CXXFLAGS="\$CXXFLAGS -fopenmp" -largeArrayDims -I.. -lgomp svmpredict.c ../svm.cpp svm_model_matlab.c
+
+
+
+For Octave users, the modified code is:
+
+setenv('CXXFLAGS', '-fopenmp')
+mex -I.. -lgomp svmtrain.c ../svm.cpp svm_model_matlab.c
+mex -I.. -lgomp svmpredict.c ../svm.cpp svm_model_matlab.c
+
+
+
+If make.m fails under matlab and you use Makefile to compile the codes,
+you must modify two files:
+
+
+You must append '-fopenmp' to CFLAGS in ../Makefile for C/C++ codes:
+
+CFLAGS = -Wall -Wconversion -O3 -fPIC -fopenmp -I$(MATLABDIR)/extern/include -I..
+
+and add '-lgomp' to MEX_OPTION in Makefile for the matlab/octave interface:
+
+MEX_OPTION += -lgomp
+
+
+
+ To run the code, you must specify the number of threads. For
+ example, before executing matlab/octave, you run
+
+> export OMP_NUM_THREADS=8
+> matlab
+
+Here we assume Bash is used. Unfortunately, we do not know yet
+how to specify the number of threads within MATLAB/Octave. Our
+experiments show that
+
+>> setenv('OMP_NUM_THREADS', '8');
+
+does not work. Please contact us if you
+see how to solve this problem. On the other hand, you can
+specify the number of threads in the source code (thanks
+to comments from Ricardo Santiago-mozos):
+
+#pragma omp parallel for private(i) num_threads(8)
+
+
+[Go Top]
+
+
+Q: How could I generate the primal variable w of linear SVM?
+
+
+Let's start from the binary class and
+assume you have two labels -1 and +1.
+After obtaining the model from calling svmtrain,
+do the following to have w and b:
+
+w = model.SVs' * model.sv_coef;
+b = -model.rho;
+
+if model.Label(1) == -1
+ w = -w;
+ b = -b;
+end
+
+If you do regression or one-class SVM, then the if statement is not needed.
+
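The same computation as a plain C++ sketch for the binary case (dense support vectors; illustrative only, not part of the MATLAB interface):

```cpp
#include <vector>

// Compute the primal w = sum_i sv_coef[i] * SV_i for the binary case,
// mirroring w = model.SVs' * model.sv_coef above. Dense vectors only;
// the sign flip for model.Label(1) == -1 is left to the caller.
std::vector<double> primal_w(const std::vector<std::vector<double>>& SVs,
                             const std::vector<double>& sv_coef)
{
    std::vector<double> w(SVs.empty() ? 0 : SVs[0].size(), 0.0);
    for (std::size_t i = 0; i < SVs.size(); ++i)
        for (std::size_t j = 0; j < w.size(); ++j)
            w[j] += sv_coef[i] * SVs[i][j];
    return w;
}
```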
+ For multi-class SVM, we illustrate the setting
+in the following example of running the iris
+data, which has 3 classes
+
+> [y, x] = libsvmread('../../htdocs/libsvmtools/datasets/multiclass/iris.scale');
+> m = svmtrain(y, x, '-t 0')
+
+m =
+
+ Parameters: [5x1 double]
+ nr_class: 3
+ totalSV: 42
+ rho: [3x1 double]
+ Label: [3x1 double]
+ ProbA: []
+ ProbB: []
+ nSV: [3x1 double]
+ sv_coef: [42x2 double]
+ SVs: [42x4 double]
+
+sv_coef is like:
+
++-+-+--------------------+
+|1|1| |
+|v|v| SVs from class 1 |
+|2|3| |
++-+-+--------------------+
+|1|2| |
+|v|v| SVs from class 2 |
+|2|3| |
++-+-+--------------------+
+|1|2| |
+|v|v| SVs from class 3 |
+|3|3| |
++-+-+--------------------+
+
+so we need to check nSV for each class.
+
+> m.nSV
+
+ans =
+
+ 3
+ 21
+ 18
+
+Suppose the goal is to find the vector w of classes
+1 vs 3. Then
+y_i alpha_i of training 1 vs 3 are
+
+> coef = [m.sv_coef(1:3,2); m.sv_coef(25:42,1)];
+
+and SVs are:
+
+> SVs = [m.SVs(1:3,:); m.SVs(25:42,:)];
+
+Hence, w is
+
+> w = SVs'*coef;
+
+For rho,
+
+> m.rho
+
+ans =
+
+ 1.1465
+ 0.3682
+ -1.9969
+> b = -m.rho(2);
+
+because rho is arranged by 1vs2 1vs3 2vs3.
+
+
+
+
+[Go Top]
+
+
+Q: Is there an OCTAVE interface for libsvm?
+
+
+Yes, after libsvm 2.86, the MATLAB interface
+works on OCTAVE as well. Please use make.m by typing
+
+>> make
+
+under OCTAVE.
+
+[Go Top]
+
+
+Q: How to handle the name conflict between svmtrain in the libsvm matlab interface and that in MATLAB bioinformatics toolbox?
+
+
+The easiest way is to rename the svmtrain binary
+file (e.g., svmtrain.mexw32 on 32-bit windows)
+to a different
+name (e.g., svmtrain2.mexw32).
+
+[Go Top]
+
+
+Q: On Windows I got an error message "Invalid MEX-file: Specific module not found" when running the pre-built MATLAB interface in the windows sub-directory. What should I do?
+
+
+
+The error usually happens
+when there are missing runtime components
+such as MSVCR100.dll on your Windows platform.
+You can use tools such as
+Dependency
+Walker to find missing library files.
+
+
+For example, if the pre-built MEX files are compiled by
+Visual C++ 2010,
+you must have installed
+Microsoft Visual C++ Redistributable Package 2010
+(vcredist_x86.exe). You can easily find the freely
+available file from Microsoft's web site.
+
+
+For 64bit Windows, the situation is similar. If
+the pre-built files are by
+Visual C++ 2008, then you must have
+Microsoft Visual C++ Redistributable Package 2008
+(vcredist_x64.exe).
+
+[Go Top]
+
+
+Q: LIBSVM supports 1-vs-1 multi-class classification. If instead I would like to use 1-vs-rest, how to implement it using MATLAB interface?
+
+
+
+Please use code in the following directory . The following example shows how to
+train and test the problem dna (training and testing ).
+
+
+Load, train and predict data:
+
+[trainY trainX] = libsvmread('./dna.scale');
+[testY testX] = libsvmread('./dna.scale.t');
+model = ovrtrain(trainY, trainX, '-c 8 -g 4');
+[pred ac decv] = ovrpredict(testY, testX, model);
+fprintf('Accuracy = %g%%\n', ac * 100);
+
+Conduct CV on a grid of parameters
+
+bestcv = 0;
+for log2c = -1:2:3,
+ for log2g = -4:2:1,
+ cmd = ['-q -c ', num2str(2^log2c), ' -g ', num2str(2^log2g)];
+ cv = get_cv_ac(trainY, trainX, cmd, 3);
+ if (cv >= bestcv),
+ bestcv = cv; bestc = 2^log2c; bestg = 2^log2g;
+ end
+ fprintf('%g %g %g (best c=%g, g=%g, rate=%g)\n', log2c, log2g, cv, bestc, bestg, bestcv);
+ end
+end
+
+
+[Go Top]
+
+
+Q: I tried to install matlab interface on mac, but failed. What should I do?
+
+
+
+We assume that in a matlab command window you change directory to libsvm/matlab and type
+
+>> make
+
+We discuss the following situations.
+
+
+An error message like "libsvmread.c:1:19: fatal error:
+stdio.h: No such file or directory" appears.
+
+
+Reason: "make" looks for a C++ compiler, but
+no compiler is found. To get one, you can
+
+ Install XCode offered by Apple Inc.
+ Install XCode Command Line Tools.
+
+
+
+
+On OS X with Xcode 4.2+, I got an error message like "llvm-gcc-4.2:
+command not found."
+
+
+Reason: Since Apple Inc. only ships llvm-gcc instead of gcc-4.2,
+llvm-gcc-4.2 cannot be found.
+
+
+If you are using Xcode 4.2-4.6,
+a related solution is offered at
+http://www.mathworks.com/matlabcentral/answers/94092 .
+
+
+On the other hand, for Xcode 5 (including Xcode 4.2-4.6), in a Matlab command window, enter
+
+
+Please also ensure that SDKROOT corresponds to the SDK version you are using.
+
+
+
+Other errors: you may check http://www.mathworks.com/matlabcentral/answers/94092 .
+
+
+
+[Go Top]
+
+
+Q: I tried to install octave interface on windows, but failed. What should I do?
+
+
+
+This may be because Octave's math.h file does not
+refer to the correct location of Visual Studio's math.h.
+Please see this nice page for detailed
+instructions.
+
+[Go Top]
+
+
+LIBSVM home page
+
+
+
diff --git a/libsvm-3.21/Makefile b/libsvm-3.21/Makefile
new file mode 100644
index 0000000..db6ab34
--- /dev/null
+++ b/libsvm-3.21/Makefile
@@ -0,0 +1,25 @@
+CXX ?= g++
+CFLAGS = -Wall -Wconversion -O3 -fPIC
+SHVER = 2
+OS = $(shell uname)
+
+all: svm-train svm-predict svm-scale
+
+lib: svm.o
+ if [ "$(OS)" = "Darwin" ]; then \
+ SHARED_LIB_FLAG="-dynamiclib -Wl,-install_name,libsvm.so.$(SHVER)"; \
+ else \
+ SHARED_LIB_FLAG="-shared -Wl,-soname,libsvm.so.$(SHVER)"; \
+ fi; \
+ $(CXX) $${SHARED_LIB_FLAG} svm.o -o libsvm.so.$(SHVER)
+
+svm-predict: svm-predict.c svm.o
+ $(CXX) $(CFLAGS) svm-predict.c svm.o -o svm-predict -lm
+svm-train: svm-train.c svm.o
+ $(CXX) $(CFLAGS) svm-train.c svm.o -o svm-train -lm
+svm-scale: svm-scale.c
+ $(CXX) $(CFLAGS) svm-scale.c -o svm-scale
+svm.o: svm.cpp svm.h
+ $(CXX) $(CFLAGS) -c svm.cpp
+clean:
+ rm -f *~ svm.o svm-train svm-predict svm-scale libsvm.so.$(SHVER)
diff --git a/libsvm-3.21/Makefile.win b/libsvm-3.21/Makefile.win
new file mode 100644
index 0000000..b1d3570
--- /dev/null
+++ b/libsvm-3.21/Makefile.win
@@ -0,0 +1,33 @@
+#You must ensure nmake.exe, cl.exe, link.exe are in system path.
+#VCVARS64.bat
+#Under dosbox prompt
+#nmake -f Makefile.win
+
+##########################################
+CXX = cl.exe
+CFLAGS = /nologo /O2 /EHsc /I. /D _WIN64 /D _CRT_SECURE_NO_DEPRECATE
+TARGET = windows
+
+all: $(TARGET)\svm-train.exe $(TARGET)\svm-predict.exe $(TARGET)\svm-scale.exe $(TARGET)\svm-toy.exe lib
+
+$(TARGET)\svm-predict.exe: svm.h svm-predict.c svm.obj
+ $(CXX) $(CFLAGS) svm-predict.c svm.obj -Fe$(TARGET)\svm-predict.exe
+
+$(TARGET)\svm-train.exe: svm.h svm-train.c svm.obj
+ $(CXX) $(CFLAGS) svm-train.c svm.obj -Fe$(TARGET)\svm-train.exe
+
+$(TARGET)\svm-scale.exe: svm.h svm-scale.c
+ $(CXX) $(CFLAGS) svm-scale.c -Fe$(TARGET)\svm-scale.exe
+
+$(TARGET)\svm-toy.exe: svm.h svm.obj svm-toy\windows\svm-toy.cpp
+ $(CXX) $(CFLAGS) svm-toy\windows\svm-toy.cpp svm.obj user32.lib gdi32.lib comdlg32.lib -Fe$(TARGET)\svm-toy.exe
+
+svm.obj: svm.cpp svm.h
+ $(CXX) $(CFLAGS) -c svm.cpp
+
+lib: svm.cpp svm.h svm.def
+ $(CXX) $(CFLAGS) -LD svm.cpp -Fe$(TARGET)\libsvm -link -DEF:svm.def
+
+clean:
+ -erase /Q *.obj $(TARGET)\*.exe $(TARGET)\*.dll $(TARGET)\*.exp $(TARGET)\*.lib
+
diff --git a/libsvm-3.21/README b/libsvm-3.21/README
new file mode 100644
index 0000000..7b78382
--- /dev/null
+++ b/libsvm-3.21/README
@@ -0,0 +1,770 @@
+Libsvm is simple, easy-to-use, and efficient software for SVM
+classification and regression. It solves C-SVM classification, nu-SVM
+classification, one-class-SVM, epsilon-SVM regression, and nu-SVM
+regression. It also provides an automatic model selection tool for
+C-SVM classification. This document explains the use of libsvm.
+
+Libsvm is available at
+http://www.csie.ntu.edu.tw/~cjlin/libsvm
+Please read the COPYRIGHT file before using libsvm.
+
+Table of Contents
+=================
+
+- Quick Start
+- Installation and Data Format
+- `svm-train' Usage
+- `svm-predict' Usage
+- `svm-scale' Usage
+- Tips on Practical Use
+- Examples
+- Precomputed Kernels
+- Library Usage
+- Java Version
+- Building Windows Binaries
+- Additional Tools: Sub-sampling, Parameter Selection, Format checking, etc.
+- MATLAB/OCTAVE Interface
+- Python Interface
+- Additional Information
+
+Quick Start
+===========
+
+If you are new to SVM and the data is not large, please go to the
+`tools' directory and use easy.py after installation. It does
+everything automatically -- from data scaling to parameter selection.
+
+Usage: easy.py training_file [testing_file]
+
+More information about parameter selection can be found in
+`tools/README.'
+
+Installation and Data Format
+============================
+
+On Unix systems, type `make' to build the `svm-train' and `svm-predict'
+programs. Run them without arguments to show their usage.
+
+On other systems, consult `Makefile' to build them (e.g., see
+'Building Windows binaries' in this file) or use the pre-built
+binaries (Windows binaries are in the directory `windows').
+
+The format of training and testing data files is:
+
+<label> <index1>:<value1> <index2>:<value2> ...
+.
+.
+.
+
+Each line contains an instance and is ended by a '\n' character. For
+classification, <label> is an integer indicating the class label
+(multi-class is supported). For regression, <label> is the target
+value which can be any real number. For one-class SVM, it's not used,
+so it can be any number. The pair <index>:<value> gives a feature
+(attribute) value: <index> is an integer starting from 1 and <value>
+is a real number. The only exception is the precomputed kernel, where
+<index> starts from 0; see the section on precomputed kernels. Indices
+must be in ASCENDING order. Labels in the testing file are only used
+to calculate accuracy or errors. If they are unknown, just fill the
+first column with any numbers.
+
+A sample classification data included in this package is
+`heart_scale'. To check if your data is in a correct form, use
+`tools/checkdata.py' (details in `tools/README').
+
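A minimal C++ sketch of parsing one line of this sparse format (illustration only; the real parsing code is in svm-train.c, and the names Feature and parse_libsvm_line are made up here):

```cpp
#include <sstream>
#include <string>
#include <vector>

// One <index>:<value> pair of the sparse LIBSVM format.
struct Feature { int index; double value; };

// Parse "<label> <index1>:<value1> <index2>:<value2> ...".
// Returns false on malformed input. Illustrative sketch only.
bool parse_libsvm_line(const std::string& line, double& label,
                       std::vector<Feature>& feats)
{
    std::istringstream in(line);
    if (!(in >> label)) return false;
    std::string tok;
    while (in >> tok) {
        const std::size_t colon = tok.find(':');
        if (colon == std::string::npos) return false;
        feats.push_back({std::stoi(tok.substr(0, colon)),
                         std::stod(tok.substr(colon + 1))});
    }
    return true;
}
```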
+Type `svm-train heart_scale', and the program will read the training
+data and output the model file `heart_scale.model'. If you have a test
+set called heart_scale.t, then type `svm-predict heart_scale.t
+heart_scale.model output' to see the prediction accuracy. The `output'
+file contains the predicted class labels.
+
+For classification, if training data are in only one class (i.e., all
+labels are the same), then `svm-train' issues a warning message:
+`Warning: training data in only one class. See README for details,'
+which means the training data is very unbalanced. The label in the
+training data is directly returned when testing.
+
+There are some other useful programs in this package.
+
+svm-scale:
+
+ This is a tool for scaling input data files.
+
+svm-toy:
+
+ This is a simple graphical interface which shows how SVM
+ separates data in a plane. You can click in the window to
+ draw data points. Use "change" button to choose class
+ 1, 2 or 3 (i.e., up to three classes are supported), "load"
+ button to load data from a file, "save" button to save data to
+ a file, "run" button to obtain an SVM model, and "clear"
+ button to clear the window.
+
+ You can enter options in the bottom of the window, the syntax of
+ options is the same as `svm-train'.
+
+ Note that "load" and "save" consider dense data format both in
+ classification and the regression cases. For classification,
+ each data point has one label (the color) that must be 1, 2,
+ or 3 and two attributes (x-axis and y-axis values) in
+ [0,1). For regression, each data point has one target value
+ (y-axis) and one attribute (x-axis values) in [0, 1).
+
+ Type `make' in respective directories to build them.
+
+ You need Qt library to build the Qt version.
+ (available from http://www.trolltech.com)
+
+ You need GTK+ library to build the GTK version.
+ (available from http://www.gtk.org)
+
+ The pre-built Windows binaries are in the `windows'
+ directory. We use Visual C++ on a 32-bit machine, so the
+ maximal cache size is 2GB.
+
+`svm-train' Usage
+=================
+
+Usage: svm-train [options] training_set_file [model_file]
+options:
+-s svm_type : set type of SVM (default 0)
+ 0 -- C-SVC (multi-class classification)
+ 1 -- nu-SVC (multi-class classification)
+ 2 -- one-class SVM
+ 3 -- epsilon-SVR (regression)
+ 4 -- nu-SVR (regression)
+-t kernel_type : set type of kernel function (default 2)
+ 0 -- linear: u'*v
+ 1 -- polynomial: (gamma*u'*v + coef0)^degree
+ 2 -- radial basis function: exp(-gamma*|u-v|^2)
+ 3 -- sigmoid: tanh(gamma*u'*v + coef0)
+ 4 -- precomputed kernel (kernel values in training_set_file)
+-d degree : set degree in kernel function (default 3)
+-g gamma : set gamma in kernel function (default 1/num_features)
+-r coef0 : set coef0 in kernel function (default 0)
+-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
+-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
+-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
+-m cachesize : set cache memory size in MB (default 100)
+-e epsilon : set tolerance of termination criterion (default 0.001)
+-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
+-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
+-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
+-v n: n-fold cross validation mode
+-q : quiet mode (no outputs)
+
+
+In the -g option, num_features means the number of attributes in the input data.
+
+The -v option randomly splits the data into n parts and calculates cross
+validation accuracy/mean squared error on them.
+
+See libsvm FAQ for the meaning of outputs.
+
+`svm-predict' Usage
+===================
+
+Usage: svm-predict [options] test_file model_file output_file
+options:
+-b probability_estimates: whether to predict probability estimates, 0 or 1 (default 0); for one-class SVM only 0 is supported
+
+model_file is the model file generated by svm-train.
+test_file is the test data you want to predict.
+svm-predict will produce output in the output_file.
+
+`svm-scale' Usage
+=================
+
+Usage: svm-scale [options] data_filename
+options:
+-l lower : x scaling lower limit (default -1)
+-u upper : x scaling upper limit (default +1)
+-y y_lower y_upper : y scaling limits (default: no y scaling)
+-s save_filename : save scaling parameters to save_filename
+-r restore_filename : restore scaling parameters from restore_filename
+
+See 'Examples' in this file for examples.
+
+Tips on Practical Use
+=====================
+
+* Scale your data. For example, scale each attribute to [0,1] or [-1,+1].
+* For C-SVC, consider using the model selection tool in the tools directory.
+* nu in nu-SVC/one-class-SVM/nu-SVR approximates the fraction of training
+ errors and support vectors.
+* If data for classification are unbalanced (e.g. many positive and
+ few negative), try different penalty parameters C by -wi (see
+ examples below).
+* Specify larger cache size (i.e., larger -m) for huge problems.
+
+Examples
+========
+
+> svm-scale -l -1 -u 1 -s range train > train.scale
+> svm-scale -r range test > test.scale
+
+Scale each feature of the training data to be in [-1,1]. Scaling
+factors are stored in the file range and then used for scaling the
+test data.
+
+> svm-train -s 0 -c 5 -t 2 -g 0.5 -e 0.1 data_file
+
+Train a classifier with RBF kernel exp(-0.5|u-v|^2), C=5, and
+stopping tolerance 0.1.
+
+> svm-train -s 3 -p 0.1 -t 0 data_file
+
+Solve SVM regression with linear kernel u'v and epsilon=0.1
+in the loss function.
+
+> svm-train -c 10 -w1 1 -w-2 5 -w4 2 data_file
+
+Train a classifier with penalty 10 = 1 * 10 for class 1, penalty 50 =
+5 * 10 for class -2, and penalty 20 = 2 * 10 for class 4.
+
+> svm-train -s 0 -c 100 -g 0.1 -v 5 data_file
+
+Do five-fold cross validation for the classifier using
+the parameters C = 100 and gamma = 0.1.
+
+> svm-train -s 0 -b 1 data_file
+> svm-predict -b 1 test_file data_file.model output_file
+
+Obtain a model with probability information and predict test data with
+probability estimates.
+
+Precomputed Kernels
+===================
+
+Users may precompute kernel values and input them as training and
+testing files. Then libsvm does not need the original
+training/testing sets.
+
+Assume there are L training instances x1, ..., xL, and
+let K(x, y) be the kernel value of two instances x and y.
+The input formats are:
+
+New training instance for xi:
+
+ 0:i 1:K(xi,x1) ... L:K(xi,xL)
+
+New testing instance for any x:
+
+ 0:? 1:K(x,x1) ... L:K(x,xL)
+
+That is, in the training file the first column must be the "ID" of
+xi. In testing, ? can be any value.
+
+All kernel values including ZEROs must be explicitly provided. Any
+permutation or random subsets of the training/testing files are also
+valid (see examples below).
+
+Note: the format is slightly different from the precomputed kernel
+package released in libsvmtools earlier.
+
+Examples:
+
+ Assume the original training data has three four-feature
+ instances and testing data has one instance:
+
+ 15 1:1 2:1 3:1 4:1
+ 45 2:3 4:3
+ 25 3:1
+
+ 15 1:1 3:1
+
+ If the linear kernel is used, we have the following new
+ training/testing sets:
+
+ 15 0:1 1:4 2:6 3:1
+ 45 0:2 1:6 2:18 3:0
+ 25 0:3 1:1 2:0 3:1
+
+ 15 0:? 1:2 2:0 3:1
+
+ ? can be any value.
+
+ Any subset of the above training file is also valid. For example,
+
+ 25 0:3 1:1 2:0 3:1
+ 45 0:2 1:6 2:18 3:0
+
+ implies that the kernel matrix is
+
+ [K(2,2) K(2,3)] = [18 0]
+ [K(3,2) K(3,3)] = [0 1]
+
+Library Usage
+=============
+
+These functions and structures are declared in the header file
+`svm.h'. You need to #include "svm.h" in your C/C++ source files and
+link your program with `svm.cpp'. You can see `svm-train.c' and
+`svm-predict.c' for examples showing how to use them. We define
+LIBSVM_VERSION and declare `extern int libsvm_version;' in svm.h, so
+you can check the version number.
+
+Before you classify test data, you need to construct an SVM model
+(`svm_model') using training data. A model can also be saved in
+a file for later use. Once an SVM model is available, you can use it
+to classify new data.
+
+- Function: struct svm_model *svm_train(const struct svm_problem *prob,
+ const struct svm_parameter *param);
+
+ This function constructs and returns an SVM model according to
+ the given training data and parameters.
+
+ struct svm_problem describes the problem:
+
+ struct svm_problem
+ {
+ int l;
+ double *y;
+ struct svm_node **x;
+ };
+
+	where `l' is the number of training data, and `y' is an array
+	containing their target values (integers in classification, real
+	numbers in regression). `x' is an array of pointers, each of which
+	points to a sparse representation (array of svm_node) of one
+	training vector.
+
+ For example, if we have the following training data:
+
+ LABEL ATTR1 ATTR2 ATTR3 ATTR4 ATTR5
+ ----- ----- ----- ----- ----- -----
+ 1 0 0.1 0.2 0 0
+ 2 0 0.1 0.3 -1.2 0
+ 1 0.4 0 0 0 0
+ 2 0 0.1 0 1.4 0.5
+ 3 -0.1 -0.2 0.1 1.1 0.1
+
+ then the components of svm_problem are:
+
+ l = 5
+
+ y -> 1 2 1 2 3
+
+ x -> [ ] -> (2,0.1) (3,0.2) (-1,?)
+ [ ] -> (2,0.1) (3,0.3) (4,-1.2) (-1,?)
+ [ ] -> (1,0.4) (-1,?)
+ [ ] -> (2,0.1) (4,1.4) (5,0.5) (-1,?)
+ [ ] -> (1,-0.1) (2,-0.2) (3,0.1) (4,1.1) (5,0.1) (-1,?)
+
+ where (index,value) is stored in the structure `svm_node':
+
+ struct svm_node
+ {
+ int index;
+ double value;
+ };
+
+ index = -1 indicates the end of one vector. Note that indices must
+ be in ASCENDING order.
+
+ struct svm_parameter describes the parameters of an SVM model:
+
+ struct svm_parameter
+ {
+ int svm_type;
+ int kernel_type;
+ int degree; /* for poly */
+ double gamma; /* for poly/rbf/sigmoid */
+ double coef0; /* for poly/sigmoid */
+
+ /* these are for training only */
+ double cache_size; /* in MB */
+ double eps; /* stopping criteria */
+ double C; /* for C_SVC, EPSILON_SVR, and NU_SVR */
+ int nr_weight; /* for C_SVC */
+ int *weight_label; /* for C_SVC */
+ double* weight; /* for C_SVC */
+ double nu; /* for NU_SVC, ONE_CLASS, and NU_SVR */
+ double p; /* for EPSILON_SVR */
+ int shrinking; /* use the shrinking heuristics */
+ int probability; /* do probability estimates */
+ };
+
+ svm_type can be one of C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR, NU_SVR.
+
+ C_SVC: C-SVM classification
+ NU_SVC: nu-SVM classification
+ ONE_CLASS: one-class-SVM
+ EPSILON_SVR: epsilon-SVM regression
+ NU_SVR: nu-SVM regression
+
+    kernel_type can be one of LINEAR, POLY, RBF, SIGMOID, PRECOMPUTED.
+
+ LINEAR: u'*v
+ POLY: (gamma*u'*v + coef0)^degree
+ RBF: exp(-gamma*|u-v|^2)
+ SIGMOID: tanh(gamma*u'*v + coef0)
+ PRECOMPUTED: kernel values in training_set_file
+
+ cache_size is the size of the kernel cache, specified in megabytes.
+    C is the cost of constraint violation.
+ eps is the stopping criterion. (we usually use 0.00001 in nu-SVC,
+ 0.001 in others). nu is the parameter in nu-SVM, nu-SVR, and
+ one-class-SVM. p is the epsilon in epsilon-insensitive loss function
+ of epsilon-SVM regression. shrinking = 1 means shrinking is conducted;
+ = 0 otherwise. probability = 1 means model with probability
+ information is obtained; = 0 otherwise.
+
+    nr_weight, weight_label, and weight are used to change the penalty
+    for some classes (if the weight for a class is not changed, it is
+    set to 1). This is useful for training a classifier on unbalanced
+    input data or with asymmetric misclassification costs.
+
+ nr_weight is the number of elements in the array weight_label and
+ weight. Each weight[i] corresponds to weight_label[i], meaning that
+ the penalty of class weight_label[i] is scaled by a factor of weight[i].
+
+ If you do not want to change penalty for any of the classes,
+ just set nr_weight to 0.
+
+    *NOTE* Because svm_model contains pointers to svm_problem, you
+    cannot free the memory used by svm_problem while you are still
+    using the svm_model produced by svm_train().
+
+ *NOTE* To avoid wrong parameters, svm_check_parameter() should be
+ called before svm_train().
+
+ struct svm_model stores the model obtained from the training procedure.
+ It is not recommended to directly access entries in this structure.
+ Programmers should use the interface functions to get the values.
+
+ struct svm_model
+ {
+ struct svm_parameter param; /* parameter */
+ int nr_class; /* number of classes, = 2 in regression/one class svm */
+ int l; /* total #SV */
+ struct svm_node **SV; /* SVs (SV[l]) */
+ double **sv_coef; /* coefficients for SVs in decision functions (sv_coef[k-1][l]) */
+ double *rho; /* constants in decision functions (rho[k*(k-1)/2]) */
+ double *probA; /* pairwise probability information */
+ double *probB;
+		int *sv_indices;        /* sv_indices[0,...,nSV-1] are values in [1,...,num_training_data] to indicate SVs in the training set */
+
+ /* for classification only */
+
+ int *label; /* label of each class (label[k]) */
+ int *nSV; /* number of SVs for each class (nSV[k]) */
+ /* nSV[0] + nSV[1] + ... + nSV[k-1] = l */
+ /* XXX */
+ int free_sv; /* 1 if svm_model is created by svm_load_model*/
+ /* 0 if svm_model is created by svm_train */
+ };
+
+ param describes the parameters used to obtain the model.
+
+ nr_class is the number of classes. It is 2 for regression and one-class SVM.
+
+ l is the number of support vectors. SV and sv_coef are support
+ vectors and the corresponding coefficients, respectively. Assume there are
+ k classes. For data in class j, the corresponding sv_coef includes (k-1) y*alpha vectors,
+ where alpha's are solutions of the following two class problems:
+ 1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
+ and y=1 for the first j-1 vectors, while y=-1 for the remaining k-j
+ vectors. For example, if there are 4 classes, sv_coef and SV are like:
+
+ +-+-+-+--------------------+
+ |1|1|1| |
+ |v|v|v| SVs from class 1 |
+ |2|3|4| |
+ +-+-+-+--------------------+
+ |1|2|2| |
+ |v|v|v| SVs from class 2 |
+ |2|3|4| |
+ +-+-+-+--------------------+
+ |1|2|3| |
+ |v|v|v| SVs from class 3 |
+ |3|3|4| |
+ +-+-+-+--------------------+
+ |1|2|3| |
+ |v|v|v| SVs from class 4 |
+ |4|4|4| |
+ +-+-+-+--------------------+
+
+ See svm_train() for an example of assigning values to sv_coef.
+
+ rho is the bias term (-b). probA and probB are parameters used in
+ probability outputs. If there are k classes, there are k*(k-1)/2
+ binary problems as well as rho, probA, and probB values. They are
+ aligned in the order of binary problems:
+ 1 vs 2, 1 vs 3, ..., 1 vs k, 2 vs 3, ..., 2 vs k, ..., k-1 vs k.
+
+    sv_indices[0,...,nSV-1] are values in [1,...,num_training_data] to
+    indicate support vectors in the training set.
+
+ label contains labels in the training data.
+
+ nSV is the number of support vectors in each class.
+
+ free_sv is a flag used to determine whether the space of SV should
+ be released in free_model_content(struct svm_model*) and
+ free_and_destroy_model(struct svm_model**). If the model is
+ generated by svm_train(), then SV points to data in svm_problem
+ and should not be removed. For example, free_sv is 0 if svm_model
+ is created by svm_train, but is 1 if created by svm_load_model.
+
+- Function: double svm_predict(const struct svm_model *model,
+ const struct svm_node *x);
+
+ This function does classification or regression on a test vector x
+ given a model.
+
+ For a classification model, the predicted class for x is returned.
+ For a regression model, the function value of x calculated using
+    the model is returned. For a one-class model, +1 or -1 is
+    returned.
+
+- Function: void svm_cross_validation(const struct svm_problem *prob,
+ const struct svm_parameter *param, int nr_fold, double *target);
+
+    This function conducts cross validation. Data are separated into
+    nr_fold folds. Under the given parameters, each fold is sequentially
+    validated using the model trained on the remaining folds. Predicted
+    labels (of all prob's instances) in the validation process are
+    stored in the array called target.
+
+    The format of prob is the same as that for svm_train().
+
+- Function: int svm_get_svm_type(const struct svm_model *model);
+
+ This function gives svm_type of the model. Possible values of
+ svm_type are defined in svm.h.
+
+- Function: int svm_get_nr_class(const svm_model *model);
+
+ For a classification model, this function gives the number of
+    classes. For a regression or one-class model, 2 is returned.
+
+- Function: void svm_get_labels(const svm_model *model, int* label)
+
+ For a classification model, this function outputs the name of
+ labels into an array called label. For regression and one-class
+ models, label is unchanged.
+
+- Function: void svm_get_sv_indices(const struct svm_model *model, int *sv_indices)
+
+ This function outputs indices of support vectors into an array called sv_indices.
+ The size of sv_indices is the number of support vectors and can be obtained by calling svm_get_nr_sv.
+    Each sv_indices[i] is in the range of [1, ..., num_training_data].
+
+- Function: int svm_get_nr_sv(const struct svm_model *model)
+
+    This function gives the total number of support vectors.
+
+- Function: double svm_get_svr_probability(const struct svm_model *model);
+
+ For a regression model with probability information, this function
+    outputs a value sigma > 0. For test data, we consider the
+    probability model: target value = predicted value + z, where z
+    follows a Laplace distribution with density e^(-|z|/sigma)/(2*sigma).
+
+ If the model is not for svr or does not contain required
+ information, 0 is returned.
+
+- Function: double svm_predict_values(const svm_model *model,
+ const svm_node *x, double* dec_values)
+
+ This function gives decision values on a test vector x given a
+ model, and return the predicted label (classification) or
+ the function value (regression).
+
+ For a classification model with nr_class classes, this function
+ gives nr_class*(nr_class-1)/2 decision values in the array
+ dec_values, where nr_class can be obtained from the function
+ svm_get_nr_class. The order is label[0] vs. label[1], ...,
+ label[0] vs. label[nr_class-1], label[1] vs. label[2], ...,
+ label[nr_class-2] vs. label[nr_class-1], where label can be
+ obtained from the function svm_get_labels. The returned value is
+ the predicted class for x. Note that when nr_class = 1, this
+ function does not give any decision value.
+
+ For a regression model, dec_values[0] and the returned value are
+ both the function value of x calculated using the model. For a
+ one-class model, dec_values[0] is the decision value of x, while
+ the returned value is +1/-1.
+
+- Function: double svm_predict_probability(const struct svm_model *model,
+ const struct svm_node *x, double* prob_estimates);
+
+ This function does classification or regression on a test vector x
+ given a model with probability information.
+
+ For a classification model with probability information, this
+ function gives nr_class probability estimates in the array
+ prob_estimates. nr_class can be obtained from the function
+ svm_get_nr_class. The class with the highest probability is
+ returned. For regression/one-class SVM, the array prob_estimates
+ is unchanged and the returned value is the same as that of
+ svm_predict.
+
+- Function: const char *svm_check_parameter(const struct svm_problem *prob,
+ const struct svm_parameter *param);
+
+ This function checks whether the parameters are within the feasible
+ range of the problem. This function should be called before calling
+ svm_train() and svm_cross_validation(). It returns NULL if the
+ parameters are feasible, otherwise an error message is returned.
+
+- Function: int svm_check_probability_model(const struct svm_model *model);
+
+ This function checks whether the model contains required
+ information to do probability estimates. If so, it returns
+ +1. Otherwise, 0 is returned. This function should be called
+ before calling svm_get_svr_probability and
+ svm_predict_probability.
+
+- Function: int svm_save_model(const char *model_file_name,
+ const struct svm_model *model);
+
+ This function saves a model to a file; returns 0 on success, or -1
+ if an error occurs.
+
+- Function: struct svm_model *svm_load_model(const char *model_file_name);
+
+ This function returns a pointer to the model read from the file,
+ or a null pointer if the model could not be loaded.
+
+- Function: void svm_free_model_content(struct svm_model *model_ptr);
+
+ This function frees the memory used by the entries in a model structure.
+
+- Function: void svm_free_and_destroy_model(struct svm_model **model_ptr_ptr);
+
+ This function frees the memory used by a model and destroys the model
+ structure. It is equivalent to svm_destroy_model, which
+ is deprecated after version 3.0.
+
+- Function: void svm_destroy_param(struct svm_parameter *param);
+
+ This function frees the memory used by a parameter set.
+
+- Function: void svm_set_print_string_function(void (*print_func)(const char *));
+
+    Users can specify their own output format by providing a function. Use
+        svm_set_print_string_function(NULL);
+    for default printing to stdout.
+
+Java Version
+============
+
+The pre-compiled java class archive `libsvm.jar' and its source files are
+in the java directory. To run the programs, use
+
+java -classpath libsvm.jar svm_train
+java -classpath libsvm.jar svm_predict
+java -classpath libsvm.jar svm_toy
+java -classpath libsvm.jar svm_scale
+
+Note that you need Java 1.5 (5.0) or above to run it.
+
+You may need to add the Java runtime library (like classes.zip) to the
+classpath, and you may need to increase the maximum Java heap size.
+
+Library usages are similar to the C version. These functions are available:
+
+public class svm {
+ public static final int LIBSVM_VERSION=321;
+ public static svm_model svm_train(svm_problem prob, svm_parameter param);
+ public static void svm_cross_validation(svm_problem prob, svm_parameter param, int nr_fold, double[] target);
+ public static int svm_get_svm_type(svm_model model);
+ public static int svm_get_nr_class(svm_model model);
+ public static void svm_get_labels(svm_model model, int[] label);
+ public static void svm_get_sv_indices(svm_model model, int[] indices);
+ public static int svm_get_nr_sv(svm_model model);
+ public static double svm_get_svr_probability(svm_model model);
+ public static double svm_predict_values(svm_model model, svm_node[] x, double[] dec_values);
+ public static double svm_predict(svm_model model, svm_node[] x);
+ public static double svm_predict_probability(svm_model model, svm_node[] x, double[] prob_estimates);
+ public static void svm_save_model(String model_file_name, svm_model model) throws IOException
+ public static svm_model svm_load_model(String model_file_name) throws IOException
+ public static String svm_check_parameter(svm_problem prob, svm_parameter param);
+ public static int svm_check_probability_model(svm_model model);
+ public static void svm_set_print_string_function(svm_print_interface print_func);
+}
+
+The library is in the "libsvm" package.
+Note that in the Java version, svm_node[] is not terminated by a node whose index = -1.
+
+Users can specify their output format by
+
+ your_print_func = new svm_print_interface()
+ {
+ public void print(String s)
+ {
+ // your own format
+ }
+ };
+ svm.svm_set_print_string_function(your_print_func);
+
+Building Windows Binaries
+=========================
+
+Windows binaries are available in the directory `windows'. To re-build
+them via Visual C++, use the following steps:
+
+1. Open a DOS command box (or Visual Studio Command Prompt) and change
+to the libsvm directory. If the environment variables of VC++ have not
+been set, type
+
+""C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64\vcvars64.bat""
+
+You may have to modify the above command according to which version of
+VC++ you have and where it is installed.
+
+2. Type
+
+nmake -f Makefile.win clean all
+
+3. (optional) To build shared library libsvm.dll, type
+
+nmake -f Makefile.win lib
+
+4. (optional) To build 32-bit windows binaries, you must
+	(1) Run "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\vcvars32.bat" instead of vcvars64.bat
+	(2) Change CFLAGS in Makefile.win: /D _WIN64 to /D _WIN32
+
+Another way is to build them from Visual C++ environment. See details
+in libsvm FAQ.
+
+Additional Tools: Sub-sampling, Parameter Selection, Format checking, etc.
+============================================================================
+
+See the README file in the tools directory.
+
+MATLAB/OCTAVE Interface
+=======================
+
+Please check the file README in the directory `matlab'.
+
+Python Interface
+================
+
+See the README file in python directory.
+
+Additional Information
+======================
+
+If you find LIBSVM helpful, please cite it as
+
+Chih-Chung Chang and Chih-Jen Lin, LIBSVM : a library for support
+vector machines. ACM Transactions on Intelligent Systems and
+Technology, 2:27:1--27:27, 2011. Software available at
+http://www.csie.ntu.edu.tw/~cjlin/libsvm
+
+LIBSVM implementation document is available at
+http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf
+
+For any questions and comments, please email cjlin@csie.ntu.edu.tw
+
+Acknowledgments:
+This work was supported in part by the National Science
+Council of Taiwan via the grant NSC 89-2213-E-002-013.
+The authors thank their group members and users
+for many helpful discussions and comments. They are listed in
+http://www.csie.ntu.edu.tw/~cjlin/libsvm/acknowledgements
+
diff --git a/libsvm-3.21/heart_scale b/libsvm-3.21/heart_scale
new file mode 100644
index 0000000..23bac94
--- /dev/null
+++ b/libsvm-3.21/heart_scale
@@ -0,0 +1,270 @@
++1 1:0.708333 2:1 3:1 4:-0.320755 5:-0.105023 6:-1 7:1 8:-0.419847 9:-1 10:-0.225806 12:1 13:-1
+-1 1:0.583333 2:-1 3:0.333333 4:-0.603774 5:1 6:-1 7:1 8:0.358779 9:-1 10:-0.483871 12:-1 13:1
++1 1:0.166667 2:1 3:-0.333333 4:-0.433962 5:-0.383562 6:-1 7:-1 8:0.0687023 9:-1 10:-0.903226 11:-1 12:-1 13:1
+-1 1:0.458333 2:1 3:1 4:-0.358491 5:-0.374429 6:-1 7:-1 8:-0.480916 9:1 10:-0.935484 12:-0.333333 13:1
+-1 1:0.875 2:-1 3:-0.333333 4:-0.509434 5:-0.347032 6:-1 7:1 8:-0.236641 9:1 10:-0.935484 11:-1 12:-0.333333 13:-1
+-1 1:0.5 2:1 3:1 4:-0.509434 5:-0.767123 6:-1 7:-1 8:0.0534351 9:-1 10:-0.870968 11:-1 12:-1 13:1
++1 1:0.125 2:1 3:0.333333 4:-0.320755 5:-0.406393 6:1 7:1 8:0.0839695 9:1 10:-0.806452 12:-0.333333 13:0.5
++1 1:0.25 2:1 3:1 4:-0.698113 5:-0.484018 6:-1 7:1 8:0.0839695 9:1 10:-0.612903 12:-0.333333 13:1
++1 1:0.291667 2:1 3:1 4:-0.132075 5:-0.237443 6:-1 7:1 8:0.51145 9:-1 10:-0.612903 12:0.333333 13:1
++1 1:0.416667 2:-1 3:1 4:0.0566038 5:0.283105 6:-1 7:1 8:0.267176 9:-1 10:0.290323 12:1 13:1
+-1 1:0.25 2:1 3:1 4:-0.226415 5:-0.506849 6:-1 7:-1 8:0.374046 9:-1 10:-0.83871 12:-1 13:1
+-1 2:1 3:1 4:-0.0943396 5:-0.543379 6:-1 7:1 8:-0.389313 9:1 10:-1 11:-1 12:-1 13:1
+-1 1:-0.375 2:1 3:0.333333 4:-0.132075 5:-0.502283 6:-1 7:1 8:0.664122 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.333333 2:1 3:-1 4:-0.245283 5:-0.506849 6:-1 7:-1 8:0.129771 9:-1 10:-0.16129 12:0.333333 13:-1
+-1 1:0.166667 2:-1 3:1 4:-0.358491 5:-0.191781 6:-1 7:1 8:0.343511 9:-1 10:-1 11:-1 12:-0.333333 13:-1
+-1 1:0.75 2:-1 3:1 4:-0.660377 5:-0.894977 6:-1 7:-1 8:-0.175573 9:-1 10:-0.483871 12:-1 13:-1
++1 1:-0.291667 2:1 3:1 4:-0.132075 5:-0.155251 6:-1 7:-1 8:-0.251908 9:1 10:-0.419355 12:0.333333 13:1
++1 2:1 3:1 4:-0.132075 5:-0.648402 6:1 7:1 8:0.282443 9:1 11:1 12:-1 13:1
+-1 1:0.458333 2:1 3:-1 4:-0.698113 5:-0.611872 6:-1 7:1 8:0.114504 9:1 10:-0.419355 12:-1 13:-1
+-1 1:-0.541667 2:1 3:-1 4:-0.132075 5:-0.666667 6:-1 7:-1 8:0.633588 9:1 10:-0.548387 11:-1 12:-1 13:1
++1 1:0.583333 2:1 3:1 4:-0.509434 5:-0.52968 6:-1 7:1 8:-0.114504 9:1 10:-0.16129 12:0.333333 13:1
+-1 1:-0.208333 2:1 3:-0.333333 4:-0.320755 5:-0.456621 6:-1 7:1 8:0.664122 9:-1 10:-0.935484 12:-1 13:-1
+-1 1:-0.416667 2:1 3:1 4:-0.603774 5:-0.191781 6:-1 7:-1 8:0.679389 9:-1 10:-0.612903 12:-1 13:-1
+-1 1:-0.25 2:1 3:1 4:-0.660377 5:-0.643836 6:-1 7:-1 8:0.0992366 9:-1 10:-0.967742 11:-1 12:-1 13:-1
+-1 1:0.0416667 2:-1 3:-0.333333 4:-0.283019 5:-0.260274 6:1 7:1 8:0.343511 9:1 10:-1 11:-1 12:-0.333333 13:-1
+-1 1:-0.208333 2:-1 3:0.333333 4:-0.320755 5:-0.319635 6:-1 7:-1 8:0.0381679 9:-1 10:-0.935484 11:-1 12:-1 13:-1
+-1 1:-0.291667 2:-1 3:1 4:-0.169811 5:-0.465753 6:-1 7:1 8:0.236641 9:1 10:-1 12:-1 13:-1
+-1 1:-0.0833333 2:-1 3:0.333333 4:-0.509434 5:-0.228311 6:-1 7:1 8:0.312977 9:-1 10:-0.806452 11:-1 12:-1 13:-1
++1 1:0.208333 2:1 3:0.333333 4:-0.660377 5:-0.525114 6:-1 7:1 8:0.435115 9:-1 10:-0.193548 12:-0.333333 13:1
+-1 1:0.75 2:-1 3:0.333333 4:-0.698113 5:-0.365297 6:1 7:1 8:-0.0992366 9:-1 10:-1 11:-1 12:-0.333333 13:-1
++1 1:0.166667 2:1 3:0.333333 4:-0.358491 5:-0.52968 6:-1 7:1 8:0.206107 9:-1 10:-0.870968 12:-0.333333 13:1
+-1 1:0.541667 2:1 3:1 4:0.245283 5:-0.534247 6:-1 7:1 8:0.0229008 9:-1 10:-0.258065 11:-1 12:-1 13:0.5
+-1 1:-0.666667 2:-1 3:0.333333 4:-0.509434 5:-0.593607 6:-1 7:-1 8:0.51145 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.25 2:1 3:1 4:0.433962 5:-0.086758 6:-1 7:1 8:0.0534351 9:1 10:0.0967742 11:1 12:-1 13:1
++1 1:-0.125 2:1 3:1 4:-0.0566038 5:-0.6621 6:-1 7:1 8:-0.160305 9:1 10:-0.709677 12:-1 13:1
++1 1:-0.208333 2:1 3:1 4:-0.320755 5:-0.406393 6:1 7:1 8:0.206107 9:1 10:-1 11:-1 12:0.333333 13:1
++1 1:0.333333 2:1 3:1 4:-0.132075 5:-0.630137 6:-1 7:1 8:0.0229008 9:1 10:-0.387097 11:-1 12:-0.333333 13:1
++1 1:0.25 2:1 3:-1 4:0.245283 5:-0.328767 6:-1 7:1 8:-0.175573 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.458333 2:1 3:0.333333 4:-0.320755 5:-0.753425 6:-1 7:-1 8:0.206107 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.208333 2:1 3:1 4:-0.471698 5:-0.561644 6:-1 7:1 8:0.755725 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:-0.541667 2:1 3:1 4:0.0943396 5:-0.557078 6:-1 7:-1 8:0.679389 9:-1 10:-1 11:-1 12:-1 13:1
+-1 1:0.375 2:-1 3:1 4:-0.433962 5:-0.621005 6:-1 7:-1 8:0.40458 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.375 2:1 3:0.333333 4:-0.320755 5:-0.511416 6:-1 7:-1 8:0.648855 9:1 10:-0.870968 11:-1 12:-1 13:-1
+-1 1:-0.291667 2:1 3:-0.333333 4:-0.867925 5:-0.675799 6:1 7:-1 8:0.29771 9:-1 10:-1 11:-1 12:-1 13:1
++1 1:0.25 2:1 3:0.333333 4:-0.396226 5:-0.579909 6:1 7:-1 8:-0.0381679 9:-1 10:-0.290323 12:-0.333333 13:0.5
+-1 1:0.208333 2:1 3:0.333333 4:-0.132075 5:-0.611872 6:1 7:1 8:0.435115 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:-0.166667 2:1 3:0.333333 4:-0.54717 5:-0.894977 6:-1 7:1 8:-0.160305 9:-1 10:-0.741935 11:-1 12:1 13:-1
++1 1:-0.375 2:1 3:1 4:-0.698113 5:-0.675799 6:-1 7:1 8:0.618321 9:-1 10:-1 11:-1 12:-0.333333 13:-1
++1 1:0.541667 2:1 3:-0.333333 4:0.245283 5:-0.452055 6:-1 7:-1 8:-0.251908 9:1 10:-1 12:1 13:0.5
++1 1:0.5 2:-1 3:1 4:0.0566038 5:-0.547945 6:-1 7:1 8:-0.343511 9:-1 10:-0.677419 12:1 13:1
++1 1:-0.458333 2:1 3:1 4:-0.207547 5:-0.136986 6:-1 7:-1 8:-0.175573 9:1 10:-0.419355 12:-1 13:0.5
+-1 1:-0.0416667 2:1 3:-0.333333 4:-0.358491 5:-0.639269 6:1 7:-1 8:0.725191 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:0.5 2:-1 3:0.333333 4:-0.132075 5:0.328767 6:1 7:1 8:0.312977 9:-1 10:-0.741935 11:-1 12:-0.333333 13:-1
+-1 1:0.416667 2:-1 3:-0.333333 4:-0.132075 5:-0.684932 6:-1 7:-1 8:0.648855 9:-1 10:-1 11:-1 12:0.333333 13:-1
+-1 1:-0.333333 2:-1 3:-0.333333 4:-0.320755 5:-0.506849 6:-1 7:1 8:0.587786 9:-1 10:-0.806452 12:-1 13:-1
+-1 1:-0.5 2:-1 3:-0.333333 4:-0.792453 5:-0.671233 6:-1 7:-1 8:0.480916 9:-1 10:-1 11:-1 12:-0.333333 13:-1
++1 1:0.333333 2:1 3:1 4:-0.169811 5:-0.817352 6:-1 7:1 8:-0.175573 9:1 10:0.16129 12:-0.333333 13:-1
+-1 1:0.291667 2:-1 3:0.333333 4:-0.509434 5:-0.762557 6:1 7:-1 8:-0.618321 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.25 2:-1 3:1 4:0.509434 5:-0.438356 6:-1 7:-1 8:0.0992366 9:1 10:-1 12:-1 13:-1
++1 1:0.375 2:1 3:-0.333333 4:-0.509434 5:-0.292237 6:-1 7:1 8:-0.51145 9:-1 10:-0.548387 12:-0.333333 13:1
+-1 1:0.166667 2:1 3:0.333333 4:0.0566038 5:-1 6:1 7:-1 8:0.557252 9:-1 10:-0.935484 11:-1 12:-0.333333 13:1
++1 1:-0.0833333 2:-1 3:1 4:-0.320755 5:-0.182648 6:-1 7:-1 8:0.0839695 9:1 10:-0.612903 12:-1 13:1
+-1 1:-0.375 2:1 3:0.333333 4:-0.509434 5:-0.543379 6:-1 7:-1 8:0.496183 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:0.291667 2:-1 3:-1 4:0.0566038 5:-0.479452 6:-1 7:-1 8:0.526718 9:-1 10:-0.709677 11:-1 12:-1 13:-1
+-1 1:0.416667 2:1 3:-1 4:-0.0377358 5:-0.511416 6:1 7:1 8:0.206107 9:-1 10:-0.258065 11:1 12:-1 13:0.5
++1 1:0.166667 2:1 3:1 4:0.0566038 5:-0.315068 6:-1 7:1 8:-0.374046 9:1 10:-0.806452 12:-0.333333 13:0.5
+-1 1:-0.0833333 2:1 3:1 4:-0.132075 5:-0.383562 6:-1 7:1 8:0.755725 9:1 10:-1 11:-1 12:-1 13:-1
++1 1:0.208333 2:-1 3:-0.333333 4:-0.207547 5:-0.118721 6:1 7:1 8:0.236641 9:-1 10:-1 11:-1 12:0.333333 13:-1
+-1 1:-0.375 2:-1 3:0.333333 4:-0.54717 5:-0.47032 6:-1 7:-1 8:0.19084 9:-1 10:-0.903226 12:-0.333333 13:-1
++1 1:-0.25 2:1 3:0.333333 4:-0.735849 5:-0.465753 6:-1 7:-1 8:0.236641 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.333333 2:1 3:1 4:-0.509434 5:-0.388128 6:-1 7:-1 8:0.0534351 9:1 10:0.16129 12:-0.333333 13:1
+-1 1:0.166667 2:-1 3:1 4:-0.509434 5:0.0410959 6:-1 7:-1 8:0.40458 9:1 10:-0.806452 11:-1 12:-1 13:-1
+-1 1:0.708333 2:1 3:-0.333333 4:0.169811 5:-0.456621 6:-1 7:1 8:0.0992366 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:0.958333 2:-1 3:0.333333 4:-0.132075 5:-0.675799 6:-1 8:-0.312977 9:-1 10:-0.645161 12:-1 13:-1
+-1 1:0.583333 2:-1 3:1 4:-0.773585 5:-0.557078 6:-1 7:-1 8:0.0839695 9:-1 10:-0.903226 11:-1 12:0.333333 13:-1
++1 1:-0.333333 2:1 3:1 4:-0.0943396 5:-0.164384 6:-1 7:1 8:0.160305 9:1 10:-1 12:1 13:1
+-1 1:-0.333333 2:1 3:1 4:-0.811321 5:-0.625571 6:-1 7:1 8:0.175573 9:1 10:-0.0322581 12:-1 13:-1
+-1 1:-0.583333 2:-1 3:0.333333 4:-1 5:-0.666667 6:-1 7:-1 8:0.648855 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.458333 2:-1 3:0.333333 4:-0.509434 5:-0.621005 6:-1 7:-1 8:0.557252 9:-1 10:-1 12:-1 13:-1
+-1 1:0.125 2:1 3:-0.333333 4:-0.509434 5:-0.497717 6:-1 7:-1 8:0.633588 9:-1 10:-0.741935 11:-1 12:-1 13:-1
++1 1:0.208333 2:1 3:1 4:-0.0188679 5:-0.579909 6:-1 7:-1 8:-0.480916 9:-1 10:-0.354839 12:-0.333333 13:1
++1 1:-0.75 2:1 3:1 4:-0.509434 5:-0.671233 6:-1 7:-1 8:-0.0992366 9:1 10:-0.483871 12:-1 13:1
++1 1:0.208333 2:1 3:1 4:0.0566038 5:-0.342466 6:-1 7:1 8:-0.389313 9:1 10:-0.741935 11:-1 12:-1 13:1
+-1 1:-0.5 2:1 3:0.333333 4:-0.320755 5:-0.598174 6:-1 7:1 8:0.480916 9:-1 10:-0.354839 12:-1 13:-1
+-1 1:0.166667 2:1 3:1 4:-0.698113 5:-0.657534 6:-1 7:-1 8:-0.160305 9:1 10:-0.516129 12:-1 13:0.5
+-1 1:-0.458333 2:1 3:-1 4:0.0188679 5:-0.461187 6:-1 7:1 8:0.633588 9:-1 10:-0.741935 11:-1 12:0.333333 13:-1
+-1 1:0.375 2:1 3:-0.333333 4:-0.358491 5:-0.625571 6:1 7:1 8:0.0534351 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:0.25 2:1 3:-1 4:0.584906 5:-0.342466 6:-1 7:1 8:0.129771 9:-1 10:0.354839 11:1 12:-1 13:1
+-1 1:-0.5 2:-1 3:-0.333333 4:-0.396226 5:-0.178082 6:-1 7:-1 8:0.40458 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:-0.125 2:1 3:1 4:0.0566038 5:-0.465753 6:-1 7:1 8:-0.129771 9:-1 10:-0.16129 12:-1 13:1
+-1 1:0.25 2:1 3:-0.333333 4:-0.132075 5:-0.56621 6:-1 7:-1 8:0.419847 9:1 10:-1 11:-1 12:-1 13:-1
++1 1:0.333333 2:-1 3:1 4:-0.320755 5:-0.0684932 6:-1 7:1 8:0.496183 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.0416667 2:1 3:1 4:-0.433962 5:-0.360731 6:-1 7:1 8:-0.419847 9:1 10:-0.290323 12:-0.333333 13:1
++1 1:0.0416667 2:1 3:1 4:-0.698113 5:-0.634703 6:-1 7:1 8:-0.435115 9:1 10:-1 12:-0.333333 13:-1
++1 1:-0.0416667 2:1 3:1 4:-0.415094 5:-0.607306 6:-1 7:-1 8:0.480916 9:-1 10:-0.677419 11:-1 12:0.333333 13:1
++1 1:-0.25 2:1 3:1 4:-0.698113 5:-0.319635 6:-1 7:1 8:-0.282443 9:1 10:-0.677419 12:-0.333333 13:-1
+-1 1:0.541667 2:1 3:1 4:-0.509434 5:-0.196347 6:-1 7:1 8:0.221374 9:-1 10:-0.870968 12:-1 13:-1
++1 1:0.208333 2:1 3:1 4:-0.886792 5:-0.506849 6:-1 7:-1 8:0.29771 9:-1 10:-0.967742 11:-1 12:-0.333333 13:1
+-1 1:0.458333 2:-1 3:0.333333 4:-0.132075 5:-0.146119 6:-1 7:-1 8:-0.0534351 9:-1 10:-0.935484 11:-1 12:-1 13:1
+-1 1:-0.125 2:-1 3:-0.333333 4:-0.509434 5:-0.461187 6:-1 7:-1 8:0.389313 9:-1 10:-0.645161 11:-1 12:-1 13:-1
+-1 1:-0.375 2:-1 3:0.333333 4:-0.735849 5:-0.931507 6:-1 7:-1 8:0.587786 9:-1 10:-0.806452 12:-1 13:-1
++1 1:0.583333 2:1 3:1 4:-0.509434 5:-0.493151 6:-1 7:-1 8:-1 9:-1 10:-0.677419 12:-1 13:-1
+-1 1:-0.166667 2:-1 3:1 4:-0.320755 5:-0.347032 6:-1 7:-1 8:0.40458 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.166667 2:1 3:1 4:0.339623 5:-0.255708 6:1 7:1 8:-0.19084 9:-1 10:-0.677419 12:1 13:1
++1 1:0.416667 2:1 3:1 4:-0.320755 5:-0.415525 6:-1 7:1 8:0.160305 9:-1 10:-0.548387 12:-0.333333 13:1
++1 1:-0.208333 2:1 3:1 4:-0.433962 5:-0.324201 6:-1 7:1 8:0.450382 9:-1 10:-0.83871 12:-1 13:1
+-1 1:-0.0833333 2:1 3:0.333333 4:-0.886792 5:-0.561644 6:-1 7:-1 8:0.0992366 9:1 10:-0.612903 12:-1 13:-1
++1 1:0.291667 2:-1 3:1 4:0.0566038 5:-0.39726 6:-1 7:1 8:0.312977 9:-1 10:-0.16129 12:0.333333 13:1
++1 1:0.25 2:1 3:1 4:-0.132075 5:-0.767123 6:-1 7:-1 8:0.389313 9:1 10:-1 11:-1 12:-0.333333 13:1
+-1 1:-0.333333 2:-1 3:-0.333333 4:-0.660377 5:-0.844749 6:-1 7:-1 8:0.0229008 9:-1 10:-1 12:-1 13:-1
++1 1:0.0833333 2:-1 3:1 4:0.622642 5:-0.0821918 6:-1 8:-0.29771 9:1 10:0.0967742 12:-1 13:-1
+-1 1:-0.5 2:1 3:-0.333333 4:-0.698113 5:-0.502283 6:-1 7:-1 8:0.251908 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.291667 2:-1 3:1 4:0.207547 5:-0.182648 6:-1 7:1 8:0.374046 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:0.0416667 2:-1 3:0.333333 4:-0.226415 5:-0.187215 6:1 7:-1 8:0.51145 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.458333 2:1 3:-0.333333 4:-0.509434 5:-0.228311 6:-1 7:-1 8:0.389313 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.166667 2:-1 3:-0.333333 4:-0.245283 5:-0.3379 6:-1 7:-1 8:0.389313 9:-1 10:-1 12:-1 13:-1
++1 1:-0.291667 2:1 3:1 4:-0.509434 5:-0.438356 6:-1 7:1 8:0.114504 9:-1 10:-0.741935 11:-1 12:-1 13:1
++1 1:0.125 2:-1 3:1 4:1 5:-0.260274 6:1 7:1 8:-0.0534351 9:1 10:0.290323 11:1 12:0.333333 13:1
+-1 1:0.541667 2:-1 3:-1 4:0.0566038 5:-0.543379 6:-1 7:-1 8:-0.343511 9:-1 10:-0.16129 11:1 12:-1 13:-1
++1 1:0.125 2:1 3:1 4:-0.320755 5:-0.283105 6:1 7:1 8:-0.51145 9:1 10:-0.483871 11:1 12:-1 13:1
++1 1:-0.166667 2:1 3:0.333333 4:-0.509434 5:-0.716895 6:-1 7:-1 8:0.0381679 9:-1 10:-0.354839 12:1 13:1
++1 1:0.0416667 2:1 3:1 4:-0.471698 5:-0.269406 6:-1 7:1 8:-0.312977 9:1 10:0.0322581 12:0.333333 13:-1
++1 1:0.166667 2:1 3:1 4:0.0943396 5:-0.324201 6:-1 7:-1 8:-0.740458 9:1 10:-0.612903 12:-0.333333 13:1
+-1 1:0.5 2:-1 3:0.333333 4:0.245283 5:0.0684932 6:-1 7:1 8:0.221374 9:-1 10:-0.741935 11:-1 12:-1 13:-1
+-1 1:0.0416667 2:1 3:0.333333 4:-0.415094 5:-0.328767 6:-1 7:1 8:0.236641 9:-1 10:-0.83871 11:1 12:-0.333333 13:-1
+-1 1:0.0416667 2:-1 3:0.333333 4:0.245283 5:-0.657534 6:-1 7:-1 8:0.40458 9:-1 10:-1 11:-1 12:-0.333333 13:-1
++1 1:0.375 2:1 3:1 4:-0.509434 5:-0.356164 6:-1 7:-1 8:-0.572519 9:1 10:-0.419355 12:0.333333 13:1
+-1 1:-0.0416667 2:-1 3:0.333333 4:-0.207547 5:-0.680365 6:-1 7:1 8:0.496183 9:-1 10:-0.967742 12:-1 13:-1
+-1 1:-0.0416667 2:1 3:-0.333333 4:-0.245283 5:-0.657534 6:-1 7:-1 8:0.328244 9:-1 10:-0.741935 11:-1 12:-0.333333 13:-1
++1 1:0.291667 2:1 3:1 4:-0.566038 5:-0.525114 6:1 7:-1 8:0.358779 9:1 10:-0.548387 11:-1 12:0.333333 13:1
++1 1:0.416667 2:-1 3:1 4:-0.735849 5:-0.347032 6:-1 7:-1 8:0.496183 9:1 10:-0.419355 12:0.333333 13:-1
++1 1:0.541667 2:1 3:1 4:-0.660377 5:-0.607306 6:-1 7:1 8:-0.0687023 9:1 10:-0.967742 11:-1 12:-0.333333 13:-1
+-1 1:-0.458333 2:1 3:1 4:-0.132075 5:-0.543379 6:-1 7:-1 8:0.633588 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.458333 2:1 3:1 4:-0.509434 5:-0.452055 6:-1 7:1 8:-0.618321 9:1 10:-0.290323 11:1 12:-0.333333 13:-1
+-1 1:0.0416667 2:1 3:0.333333 4:0.0566038 5:-0.515982 6:-1 7:1 8:0.435115 9:-1 10:-0.483871 11:-1 12:-1 13:1
+-1 1:-0.291667 2:-1 3:0.333333 4:-0.0943396 5:-0.767123 6:-1 7:1 8:0.358779 9:1 10:-0.548387 11:1 12:-1 13:-1
+-1 1:0.583333 2:-1 3:0.333333 4:0.0943396 5:-0.310502 6:-1 7:-1 8:0.541985 9:-1 10:-1 11:-1 12:-0.333333 13:-1
++1 1:0.125 2:1 3:1 4:-0.415094 5:-0.438356 6:1 7:1 8:0.114504 9:1 10:-0.612903 12:-0.333333 13:-1
+-1 1:-0.791667 2:-1 3:-0.333333 4:-0.54717 5:-0.616438 6:-1 7:-1 8:0.847328 9:-1 10:-0.774194 11:-1 12:-1 13:-1
+-1 1:0.166667 2:1 3:1 4:-0.283019 5:-0.630137 6:-1 7:-1 8:0.480916 9:1 10:-1 11:-1 12:-1 13:1
++1 1:0.458333 2:1 3:1 4:-0.0377358 5:-0.607306 6:-1 7:1 8:-0.0687023 9:-1 10:-0.354839 12:0.333333 13:0.5
+-1 1:0.25 2:1 3:1 4:-0.169811 5:-0.3379 6:-1 7:1 8:0.694656 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:-0.125 2:1 3:0.333333 4:-0.132075 5:-0.511416 6:-1 7:-1 8:0.40458 9:-1 10:-0.806452 12:-0.333333 13:1
+-1 1:-0.0833333 2:1 3:-1 4:-0.415094 5:-0.60274 6:-1 7:1 8:-0.175573 9:1 10:-0.548387 11:-1 12:-0.333333 13:-1
++1 1:0.0416667 2:1 3:-0.333333 4:0.849057 5:-0.283105 6:-1 7:1 8:0.89313 9:-1 10:-1 11:-1 12:-0.333333 13:1
++1 2:1 3:1 4:-0.45283 5:-0.287671 6:-1 7:-1 8:-0.633588 9:1 10:-0.354839 12:0.333333 13:1
++1 1:-0.0416667 2:1 3:1 4:-0.660377 5:-0.525114 6:-1 7:-1 8:0.358779 9:-1 10:-1 11:-1 12:-0.333333 13:-1
++1 1:-0.541667 2:1 3:1 4:-0.698113 5:-0.812785 6:-1 7:1 8:-0.343511 9:1 10:-0.354839 12:-1 13:1
++1 1:0.208333 2:1 3:0.333333 4:-0.283019 5:-0.552511 6:-1 7:1 8:0.557252 9:-1 10:0.0322581 11:-1 12:0.333333 13:1
+-1 1:-0.5 2:-1 3:0.333333 4:-0.660377 5:-0.351598 6:-1 7:1 8:0.541985 9:1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.5 2:1 3:0.333333 4:-0.660377 5:-0.43379 6:-1 7:-1 8:0.648855 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.125 2:-1 3:0.333333 4:-0.509434 5:-0.575342 6:-1 7:-1 8:0.328244 9:-1 10:-0.483871 12:-1 13:-1
+-1 1:0.0416667 2:-1 3:0.333333 4:-0.735849 5:-0.356164 6:-1 7:1 8:0.465649 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:0.458333 2:-1 3:1 4:-0.320755 5:-0.191781 6:-1 7:-1 8:-0.221374 9:-1 10:-0.354839 12:0.333333 13:-1
+-1 1:-0.0833333 2:-1 3:0.333333 4:-0.320755 5:-0.406393 6:-1 7:1 8:0.19084 9:-1 10:-0.83871 11:-1 12:-1 13:-1
+-1 1:-0.291667 2:-1 3:-0.333333 4:-0.792453 5:-0.643836 6:-1 7:-1 8:0.541985 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.0833333 2:1 3:1 4:-0.132075 5:-0.584475 6:-1 7:-1 8:-0.389313 9:1 10:0.806452 11:1 12:-1 13:1
+-1 1:-0.333333 2:1 3:-0.333333 4:-0.358491 5:-0.16895 6:-1 7:1 8:0.51145 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:0.125 2:1 3:-1 4:-0.509434 5:-0.694064 6:-1 7:1 8:0.389313 9:-1 10:-0.387097 12:-1 13:1
++1 1:0.541667 2:-1 3:1 4:0.584906 5:-0.534247 6:1 7:-1 8:0.435115 9:1 10:-0.677419 12:0.333333 13:1
++1 1:-0.625 2:1 3:-1 4:-0.509434 5:-0.520548 6:-1 7:-1 8:0.694656 9:1 10:0.225806 12:-1 13:1
++1 1:0.375 2:-1 3:1 4:0.0566038 5:-0.461187 6:-1 7:-1 8:0.267176 9:1 10:-0.548387 12:-1 13:-1
+-1 1:0.0833333 2:1 3:-0.333333 4:-0.320755 5:-0.378995 6:-1 7:-1 8:0.282443 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.208333 2:1 3:1 4:-0.358491 5:-0.392694 6:-1 7:1 8:-0.0992366 9:1 10:-0.0322581 12:0.333333 13:1
+-1 1:-0.416667 2:1 3:1 4:-0.698113 5:-0.611872 6:-1 7:-1 8:0.374046 9:-1 10:-1 11:-1 12:-1 13:1
+-1 1:0.458333 2:-1 3:1 4:0.622642 5:-0.0913242 6:-1 7:-1 8:0.267176 9:1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.125 2:-1 3:1 4:-0.698113 5:-0.415525 6:-1 7:1 8:0.343511 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 2:1 3:0.333333 4:-0.320755 5:-0.675799 6:1 7:1 8:0.236641 9:-1 10:-0.612903 11:1 12:-1 13:-1
+-1 1:-0.333333 2:-1 3:1 4:-0.169811 5:-0.497717 6:-1 7:1 8:0.236641 9:1 10:-0.935484 12:-1 13:-1
++1 1:0.5 2:1 3:-1 4:-0.169811 5:-0.287671 6:1 7:1 8:0.572519 9:-1 10:-0.548387 12:-0.333333 13:-1
+-1 1:0.666667 2:1 3:-1 4:0.245283 5:-0.506849 6:1 7:1 8:-0.0839695 9:-1 10:-0.967742 12:-0.333333 13:-1
++1 1:0.666667 2:1 3:0.333333 4:-0.132075 5:-0.415525 6:-1 7:1 8:0.145038 9:-1 10:-0.354839 12:1 13:1
++1 1:0.583333 2:1 3:1 4:-0.886792 5:-0.210046 6:-1 7:1 8:-0.175573 9:1 10:-0.709677 12:0.333333 13:-1
+-1 1:0.625 2:-1 3:0.333333 4:-0.509434 5:-0.611872 6:-1 7:1 8:-0.328244 9:-1 10:-0.516129 12:-1 13:-1
+-1 1:-0.791667 2:1 3:-1 4:-0.54717 5:-0.744292 6:-1 7:1 8:0.572519 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.375 2:-1 3:1 4:-0.169811 5:-0.232877 6:1 7:-1 8:-0.465649 9:-1 10:-0.387097 12:1 13:-1
++1 1:-0.0833333 2:1 3:1 4:-0.132075 5:-0.214612 6:-1 7:-1 8:-0.221374 9:1 10:0.354839 12:1 13:1
++1 1:-0.291667 2:1 3:0.333333 4:0.0566038 5:-0.520548 6:-1 7:-1 8:0.160305 9:-1 10:0.16129 12:-1 13:-1
++1 1:0.583333 2:1 3:1 4:-0.415094 5:-0.415525 6:1 7:-1 8:0.40458 9:-1 10:-0.935484 12:0.333333 13:1
+-1 1:-0.125 2:1 3:0.333333 4:-0.339623 5:-0.680365 6:-1 7:-1 8:0.40458 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.458333 2:1 3:0.333333 4:-0.509434 5:-0.479452 6:1 7:-1 8:0.877863 9:-1 10:-0.741935 11:1 12:-1 13:1
++1 1:0.125 2:-1 3:1 4:-0.245283 5:0.292237 6:-1 7:1 8:0.206107 9:1 10:-0.387097 12:0.333333 13:1
++1 1:-0.5 2:1 3:1 4:-0.698113 5:-0.789954 6:-1 7:1 8:0.328244 9:-1 10:-1 11:-1 12:-1 13:1
+-1 1:-0.458333 2:-1 3:1 4:-0.849057 5:-0.365297 6:-1 7:1 8:-0.221374 9:-1 10:-0.806452 12:-1 13:-1
+-1 2:1 3:0.333333 4:-0.320755 5:-0.452055 6:1 7:1 8:0.557252 9:-1 10:-1 11:-1 12:1 13:-1
+-1 1:-0.416667 2:1 3:0.333333 4:-0.320755 5:-0.136986 6:-1 7:-1 8:0.389313 9:-1 10:-0.387097 11:-1 12:-0.333333 13:-1
++1 1:0.125 2:1 3:1 4:-0.283019 5:-0.73516 6:-1 7:1 8:-0.480916 9:1 10:-0.322581 12:-0.333333 13:0.5
+-1 1:-0.0416667 2:1 3:1 4:-0.735849 5:-0.511416 6:1 7:-1 8:0.160305 9:-1 10:-0.967742 11:-1 12:1 13:1
+-1 1:0.375 2:-1 3:1 4:-0.132075 5:0.223744 6:-1 7:1 8:0.312977 9:-1 10:-0.612903 12:-1 13:-1
++1 1:0.708333 2:1 3:0.333333 4:0.245283 5:-0.347032 6:-1 7:-1 8:-0.374046 9:1 10:-0.0645161 12:-0.333333 13:1
+-1 1:0.0416667 2:1 3:1 4:-0.132075 5:-0.484018 6:-1 7:-1 8:0.358779 9:-1 10:-0.612903 11:-1 12:-1 13:-1
++1 1:0.708333 2:1 3:1 4:-0.0377358 5:-0.780822 6:-1 7:-1 8:-0.175573 9:1 10:-0.16129 11:1 12:-1 13:1
+-1 1:0.0416667 2:1 3:-0.333333 4:-0.735849 5:-0.164384 6:-1 7:-1 8:0.29771 9:-1 10:-1 11:-1 12:-1 13:1
++1 1:-0.75 2:1 3:1 4:-0.396226 5:-0.287671 6:-1 7:1 8:0.29771 9:1 10:-1 11:-1 12:-1 13:1
+-1 1:-0.208333 2:1 3:0.333333 4:-0.433962 5:-0.410959 6:1 7:-1 8:0.587786 9:-1 10:-1 11:-1 12:0.333333 13:-1
+-1 1:0.0833333 2:-1 3:-0.333333 4:-0.226415 5:-0.43379 6:-1 7:1 8:0.374046 9:-1 10:-0.548387 12:-1 13:-1
+-1 1:0.208333 2:-1 3:1 4:-0.886792 5:-0.442922 6:-1 7:1 8:-0.221374 9:-1 10:-0.677419 12:-1 13:-1
+-1 1:0.0416667 2:-1 3:0.333333 4:-0.698113 5:-0.598174 6:-1 7:-1 8:0.328244 9:-1 10:-0.483871 12:-1 13:-1
+-1 1:0.666667 2:-1 3:-1 4:-0.132075 5:-0.484018 6:-1 7:-1 8:0.221374 9:-1 10:-0.419355 11:-1 12:0.333333 13:-1
++1 1:1 2:1 3:1 4:-0.415094 5:-0.187215 6:-1 7:1 8:0.389313 9:1 10:-1 11:-1 12:1 13:-1
+-1 1:0.625 2:1 3:0.333333 4:-0.54717 5:-0.310502 6:-1 7:-1 8:0.221374 9:-1 10:-0.677419 11:-1 12:-0.333333 13:1
++1 1:0.208333 2:1 3:1 4:-0.415094 5:-0.205479 6:-1 7:1 8:0.526718 9:-1 10:-1 11:-1 12:0.333333 13:1
++1 1:0.291667 2:1 3:1 4:-0.415094 5:-0.39726 6:-1 7:1 8:0.0687023 9:1 10:-0.0967742 12:-0.333333 13:1
++1 1:-0.0833333 2:1 3:1 4:-0.132075 5:-0.210046 6:-1 7:-1 8:0.557252 9:1 10:-0.483871 11:-1 12:-1 13:1
++1 1:0.0833333 2:1 3:1 4:0.245283 5:-0.255708 6:-1 7:1 8:0.129771 9:1 10:-0.741935 12:-0.333333 13:1
+-1 1:-0.0416667 2:1 3:-1 4:0.0943396 5:-0.214612 6:1 7:-1 8:0.633588 9:-1 10:-0.612903 12:-1 13:1
+-1 1:0.291667 2:-1 3:0.333333 4:-0.849057 5:-0.123288 6:-1 7:-1 8:0.358779 9:-1 10:-1 11:-1 12:-0.333333 13:-1
+-1 1:0.208333 2:1 3:0.333333 4:-0.792453 5:-0.479452 6:-1 7:1 8:0.267176 9:1 10:-0.806452 12:-1 13:1
++1 1:0.458333 2:1 3:0.333333 4:-0.415094 5:-0.164384 6:-1 7:-1 8:-0.0839695 9:1 10:-0.419355 12:-1 13:1
+-1 1:-0.666667 2:1 3:0.333333 4:-0.320755 5:-0.43379 6:-1 7:-1 8:0.770992 9:-1 10:0.129032 11:1 12:-1 13:-1
++1 1:0.25 2:1 3:-1 4:0.433962 5:-0.260274 6:-1 7:1 8:0.343511 9:-1 10:-0.935484 12:-1 13:1
+-1 1:-0.0833333 2:1 3:0.333333 4:-0.415094 5:-0.456621 6:1 7:1 8:0.450382 9:-1 10:-0.225806 12:-1 13:-1
+-1 1:-0.416667 2:-1 3:0.333333 4:-0.471698 5:-0.60274 6:-1 7:-1 8:0.435115 9:-1 10:-0.935484 12:-1 13:-1
++1 1:0.208333 2:1 3:1 4:-0.358491 5:-0.589041 6:-1 7:1 8:-0.0839695 9:1 10:-0.290323 12:1 13:1
+-1 1:-1 2:1 3:-0.333333 4:-0.320755 5:-0.643836 6:-1 7:1 8:1 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.5 2:-1 3:-0.333333 4:-0.320755 5:-0.643836 6:-1 7:1 8:0.541985 9:-1 10:-0.548387 11:-1 12:-1 13:-1
+-1 1:0.416667 2:-1 3:0.333333 4:-0.226415 5:-0.424658 6:-1 7:1 8:0.541985 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.0833333 2:1 3:0.333333 4:-1 5:-0.538813 6:-1 7:-1 8:0.267176 9:1 10:-1 11:-1 12:-0.333333 13:1
+-1 1:0.0416667 2:1 3:0.333333 4:-0.509434 5:-0.39726 6:-1 7:1 8:0.160305 9:-1 10:-0.870968 12:-1 13:1
+-1 1:-0.375 2:1 3:-0.333333 4:-0.509434 5:-0.570776 6:-1 7:-1 8:0.51145 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.0416667 2:1 3:1 4:-0.698113 5:-0.484018 6:-1 7:-1 8:-0.160305 9:1 10:-0.0967742 12:-0.333333 13:1
++1 1:0.5 2:1 3:1 4:-0.226415 5:-0.415525 6:-1 7:1 8:-0.145038 9:-1 10:-0.0967742 12:-0.333333 13:1
+-1 1:0.166667 2:1 3:0.333333 4:0.0566038 5:-0.808219 6:-1 7:-1 8:0.572519 9:-1 10:-0.483871 11:-1 12:-1 13:-1
++1 1:0.416667 2:1 3:1 4:-0.320755 5:-0.0684932 6:1 7:1 8:-0.0687023 9:1 10:-0.419355 11:-1 12:1 13:1
+-1 1:-0.75 2:-1 3:1 4:-0.169811 5:-0.739726 6:-1 7:-1 8:0.694656 9:-1 10:-0.548387 11:-1 12:-1 13:-1
+-1 1:-0.5 2:1 3:-0.333333 4:-0.226415 5:-0.648402 6:-1 7:-1 8:-0.0687023 9:-1 10:-1 12:-1 13:0.5
++1 1:0.375 2:-1 3:0.333333 4:-0.320755 5:-0.374429 6:-1 7:-1 8:-0.603053 9:-1 10:-0.612903 12:-0.333333 13:1
++1 1:-0.416667 2:-1 3:1 4:-0.283019 5:-0.0182648 6:1 7:1 8:-0.00763359 9:1 10:-0.0322581 12:-1 13:1
+-1 1:0.208333 2:-1 3:-1 4:0.0566038 5:-0.283105 6:1 7:1 8:0.389313 9:-1 10:-0.677419 11:-1 12:-1 13:-1
+-1 1:-0.0416667 2:1 3:-1 4:-0.54717 5:-0.726027 6:-1 7:1 8:0.816794 9:-1 10:-1 12:-1 13:0.5
++1 1:0.333333 2:-1 3:1 4:-0.0377358 5:-0.173516 6:-1 7:1 8:0.145038 9:1 10:-0.677419 12:-1 13:1
++1 1:-0.583333 2:1 3:1 4:-0.54717 5:-0.575342 6:-1 7:-1 8:0.0534351 9:-1 10:-0.612903 12:-1 13:1
+-1 1:-0.333333 2:1 3:1 4:-0.603774 5:-0.388128 6:-1 7:1 8:0.740458 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:-0.0416667 2:1 3:1 4:-0.358491 5:-0.410959 6:-1 7:-1 8:0.374046 9:1 10:-1 11:-1 12:-0.333333 13:1
+-1 1:0.375 2:1 3:0.333333 4:-0.320755 5:-0.520548 6:-1 7:-1 8:0.145038 9:-1 10:-0.419355 12:1 13:1
++1 1:0.375 2:-1 3:1 4:0.245283 5:-0.826484 6:-1 7:1 8:0.129771 9:-1 10:1 11:1 12:1 13:1
+-1 2:-1 3:1 4:-0.169811 5:-0.506849 6:-1 7:1 8:0.358779 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:-0.416667 2:1 3:1 4:-0.509434 5:-0.767123 6:-1 7:1 8:-0.251908 9:1 10:-0.193548 12:-1 13:1
+-1 1:-0.25 2:1 3:0.333333 4:-0.169811 5:-0.401826 6:-1 7:1 8:0.29771 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.0416667 2:1 3:-0.333333 4:-0.509434 5:-0.0913242 6:-1 7:-1 8:0.541985 9:-1 10:-0.935484 11:-1 12:-1 13:-1
++1 1:0.625 2:1 3:0.333333 4:0.622642 5:-0.324201 6:1 7:1 8:0.206107 9:1 10:-0.483871 12:-1 13:1
+-1 1:-0.583333 2:1 3:0.333333 4:-0.132075 5:-0.109589 6:-1 7:1 8:0.694656 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 2:-1 3:1 4:-0.320755 5:-0.369863 6:-1 7:1 8:0.0992366 9:-1 10:-0.870968 12:-1 13:-1
++1 1:0.375 2:-1 3:1 4:-0.132075 5:-0.351598 6:-1 7:1 8:0.358779 9:-1 10:0.16129 11:1 12:0.333333 13:-1
+-1 1:-0.0833333 2:-1 3:0.333333 4:-0.132075 5:-0.16895 6:-1 7:1 8:0.0839695 9:-1 10:-0.516129 11:-1 12:-0.333333 13:-1
++1 1:0.291667 2:1 3:1 4:-0.320755 5:-0.420091 6:-1 7:-1 8:0.114504 9:1 10:-0.548387 11:-1 12:-0.333333 13:1
++1 1:0.5 2:1 3:1 4:-0.698113 5:-0.442922 6:-1 7:1 8:0.328244 9:-1 10:-0.806452 11:-1 12:0.333333 13:0.5
+-1 1:0.5 2:-1 3:0.333333 4:0.150943 5:-0.347032 6:-1 7:-1 8:0.175573 9:-1 10:-0.741935 11:-1 12:-1 13:-1
++1 1:0.291667 2:1 3:0.333333 4:-0.132075 5:-0.730594 6:-1 7:1 8:0.282443 9:-1 10:-0.0322581 12:-1 13:-1
++1 1:0.291667 2:1 3:1 4:-0.0377358 5:-0.287671 6:-1 7:1 8:0.0839695 9:1 10:-0.0967742 12:0.333333 13:1
++1 1:0.0416667 2:1 3:1 4:-0.509434 5:-0.716895 6:-1 7:-1 8:-0.358779 9:-1 10:-0.548387 12:-0.333333 13:1
+-1 1:-0.375 2:1 3:-0.333333 4:-0.320755 5:-0.575342 6:-1 7:1 8:0.78626 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:-0.375 2:1 3:1 4:-0.660377 5:-0.251142 6:-1 7:1 8:0.251908 9:-1 10:-1 11:-1 12:-0.333333 13:-1
+-1 1:-0.0833333 2:1 3:0.333333 4:-0.698113 5:-0.776256 6:-1 7:-1 8:-0.206107 9:-1 10:-0.806452 11:-1 12:-1 13:-1
+-1 1:0.25 2:1 3:0.333333 4:0.0566038 5:-0.607306 6:1 7:-1 8:0.312977 9:-1 10:-0.483871 11:-1 12:-1 13:-1
+-1 1:0.75 2:-1 3:-0.333333 4:0.245283 5:-0.196347 6:-1 7:-1 8:0.389313 9:-1 10:-0.870968 11:-1 12:0.333333 13:-1
+-1 1:0.333333 2:1 3:0.333333 4:0.0566038 5:-0.465753 6:1 7:-1 8:0.00763359 9:1 10:-0.677419 12:-1 13:-1
++1 1:0.0833333 2:1 3:1 4:-0.283019 5:0.0365297 6:-1 7:-1 8:-0.0687023 9:1 10:-0.612903 12:-0.333333 13:1
++1 1:0.458333 2:1 3:0.333333 4:-0.132075 5:-0.0456621 6:-1 7:-1 8:0.328244 9:-1 10:-1 11:-1 12:-1 13:-1
+-1 1:-0.416667 2:1 3:1 4:0.0566038 5:-0.447489 6:-1 7:-1 8:0.526718 9:-1 10:-0.516129 11:-1 12:-1 13:-1
+-1 1:0.208333 2:-1 3:0.333333 4:-0.509434 5:-0.0228311 6:-1 7:-1 8:0.541985 9:-1 10:-1 11:-1 12:-1 13:-1
++1 1:0.291667 2:1 3:1 4:-0.320755 5:-0.634703 6:-1 7:1 8:-0.0687023 9:1 10:-0.225806 12:0.333333 13:1
++1 1:0.208333 2:1 3:-0.333333 4:-0.509434 5:-0.278539 6:-1 7:1 8:0.358779 9:-1 10:-0.419355 12:-1 13:-1
+-1 1:-0.166667 2:1 3:-0.333333 4:-0.320755 5:-0.360731 6:-1 7:-1 8:0.526718 9:-1 10:-0.806452 11:-1 12:-1 13:-1
++1 1:-0.208333 2:1 3:-0.333333 4:-0.698113 5:-0.52968 6:-1 7:-1 8:0.480916 9:-1 10:-0.677419 11:1 12:-1 13:1
+-1 1:-0.0416667 2:1 3:0.333333 4:0.471698 5:-0.666667 6:1 7:-1 8:0.389313 9:-1 10:-0.83871 11:-1 12:-1 13:1
+-1 1:-0.375 2:1 3:-0.333333 4:-0.509434 5:-0.374429 6:-1 7:-1 8:0.557252 9:-1 10:-1 11:-1 12:-1 13:1
+-1 1:0.125 2:-1 3:-0.333333 4:-0.132075 5:-0.232877 6:-1 7:1 8:0.251908 9:-1 10:-0.580645 12:-1 13:-1
+-1 1:0.166667 2:1 3:1 4:-0.132075 5:-0.69863 6:-1 7:-1 8:0.175573 9:-1 10:-0.870968 12:-1 13:0.5
++1 1:0.583333 2:1 3:1 4:0.245283 5:-0.269406 6:-1 7:1 8:-0.435115 9:1 10:-0.516129 12:1 13:-1
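The `+1`/`-1` lines above are training examples in LIBSVM's sparse text format: a label followed by space-separated `index:value` pairs with strictly increasing 1-based indices; omitted indices mean a value of 0 (here the features are scaled to [-1,1], so e.g. a missing `11:` entry). As a rough sketch only — the class name `SparseLine` is invented and is not part of the libsvm distribution — one way to parse such a line in Java:

```java
import java.util.*;

// Minimal sketch (not libsvm code) of parsing one LIBSVM-format line:
// "<label> <index>:<value> <index>:<value> ...".
// Indices are 1-based and strictly increasing; omitted indices mean value 0.
class SparseLine {
    final double label;
    final int[] index;     // sorted, 1-based feature indices
    final double[] value;  // corresponding feature values

    SparseLine(String line) {
        // treat ':' as a delimiter too, so index:value splits into two tokens
        StringTokenizer st = new StringTokenizer(line, " \t:");
        label = Double.parseDouble(st.nextToken());
        int n = st.countTokens() / 2;  // remaining tokens come in index:value pairs
        index = new int[n];
        value = new double[n];
        for (int i = 0; i < n; i++) {
            index[i] = Integer.parseInt(st.nextToken());
            value[i] = Double.parseDouble(st.nextToken());
        }
    }
}
```

Tokenizing on both whitespace and `:` mirrors how the distribution's own command-line tools read these files, but the class above is illustrative only.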
diff --git a/libsvm-3.21/java/Makefile b/libsvm-3.21/java/Makefile
new file mode 100644
index 0000000..08d31bd
--- /dev/null
+++ b/libsvm-3.21/java/Makefile
@@ -0,0 +1,25 @@
+.SUFFIXES: .class .java
+FILES = libsvm/svm.class libsvm/svm_model.class libsvm/svm_node.class \
+ libsvm/svm_parameter.class libsvm/svm_problem.class \
+ libsvm/svm_print_interface.class \
+ svm_train.class svm_predict.class svm_toy.class svm_scale.class
+
+#JAVAC = jikes
+JAVAC_FLAGS = -target 1.5 -source 1.5
+JAVAC = javac
+# JAVAC_FLAGS =
+
+all: $(FILES)
+ jar cvf libsvm.jar *.class libsvm/*.class
+
+.java.class:
+ $(JAVAC) $(JAVAC_FLAGS) $<
+
+libsvm/svm.java: libsvm/svm.m4
+ m4 libsvm/svm.m4 > libsvm/svm.java
+
+clean:
+ rm -f libsvm/*.class *.class *.jar libsvm/*~ *~ libsvm/svm.java
+
+dist: clean all
+ rm *.class libsvm/*.class
diff --git a/libsvm-3.21/java/libsvm.jar b/libsvm-3.21/java/libsvm.jar
new file mode 100644
index 0000000..3714a99
Binary files /dev/null and b/libsvm-3.21/java/libsvm.jar differ
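The `svm.java` added below stores each example as an index-sorted `svm_node[]`, and its `dot` helper multiplies two such vectors in a single merge-style pass, skipping indices present in only one vector (whose implicit value is 0). A standalone sketch of that merge technique, with the node pairs simplified to parallel arrays and the class name `SparseDot` invented for illustration:

```java
// Sketch of the merge-style sparse dot product used by libsvm's Kernel.dot:
// both vectors are stored as (index, value) pairs with strictly increasing
// indices, so a single linear pass over both arrays suffices.
class SparseDot {
    static double dot(int[] xi, double[] xv, int[] yi, double[] yv) {
        double sum = 0;
        int i = 0, j = 0;
        while (i < xi.length && j < yi.length) {
            if (xi[i] == yi[j])
                sum += xv[i++] * yv[j++]; // indices match: accumulate product
            else if (xi[i] > yi[j])
                ++j;                      // y has a feature x lacks (x's value is 0)
            else
                ++i;                      // x has a feature y lacks
        }
        return sum;
    }
}
```

Because both index arrays are sorted, the cost is O(len(x) + len(y)) regardless of the nominal feature dimension, which is what makes the sparse format above practical.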
diff --git a/libsvm-3.21/java/libsvm/svm.java b/libsvm-3.21/java/libsvm/svm.java
new file mode 100644
index 0000000..442bbce
--- /dev/null
+++ b/libsvm-3.21/java/libsvm/svm.java
@@ -0,0 +1,2849 @@
+
+
+
+
+
+package libsvm;
+import java.io.*;
+import java.util.*;
+
+//
+// Kernel Cache
+//
+// l is the number of total data items
+// size is the cache size limit in bytes
+//
+class Cache {
+ private final int l;
+ private long size;
+ private final class head_t
+ {
+ head_t prev, next; // a circular list
+ float[] data;
+ int len; // data[0,len) is cached in this entry
+ }
+ private final head_t[] head;
+ private head_t lru_head;
+
+ Cache(int l_, long size_)
+ {
+ l = l_;
+ size = size_;
+ head = new head_t[l];
+ for(int i=0;i<l;i++) head[i] = new head_t();
+ size /= 4;
+ size -= l * (16/4); // sizeof(head_t) == 16
+ size = Math.max(size, 2* (long) l); // cache must be large enough for two columns
+ lru_head = new head_t();
+ lru_head.next = lru_head.prev = lru_head;
+ }
+
+ private void lru_delete(head_t h)
+ {
+ // delete from current location
+ h.prev.next = h.next;
+ h.next.prev = h.prev;
+ }
+
+ private void lru_insert(head_t h)
+ {
+ // insert to last position
+ h.next = lru_head;
+ h.prev = lru_head.prev;
+ h.prev.next = h;
+ h.next.prev = h;
+ }
+
+ // request data [0,len)
+ // return some position p where [p,len) need to be filled
+ // (p >= len if nothing needs to be filled)
+ // java: simulate pointer using single-element array
+ int get_data(int index, float[][] data, int len)
+ {
+ head_t h = head[index];
+ if(h.len > 0) lru_delete(h);
+ int more = len - h.len;
+
+ if(more > 0)
+ {
+ // free old space
+ while(size < more)
+ {
+ head_t old = lru_head.next;
+ lru_delete(old);
+ size += old.len;
+ old.data = null;
+ old.len = 0;
+ }
+
+ // allocate new space
+ float[] new_data = new float[len];
+ if(h.data != null) System.arraycopy(h.data,0,new_data,0,h.len);
+ h.data = new_data;
+ size -= more;
+ do {int _=h.len; h.len=len; len=_;} while(false);
+ }
+
+ lru_insert(h);
+ data[0] = h.data;
+ return len;
+ }
+
+ void swap_index(int i, int j)
+ {
+ if(i==j) return;
+
+ if(head[i].len > 0) lru_delete(head[i]);
+ if(head[j].len > 0) lru_delete(head[j]);
+ do {float[] _=head[i].data; head[i].data=head[j].data; head[j].data=_;} while(false);
+ do {int _=head[i].len; head[i].len=head[j].len; head[j].len=_;} while(false);
+ if(head[i].len > 0) lru_insert(head[i]);
+ if(head[j].len > 0) lru_insert(head[j]);
+
+ if(i>j) do {int _=i; i=j; j=_;} while(false);
+ for(head_t h = lru_head.next; h!=lru_head; h=h.next)
+ {
+ if(h.len > i)
+ {
+ if(h.len > j)
+ do {float _=h.data[i]; h.data[i]=h.data[j]; h.data[j]=_;} while(false);
+ else
+ {
+ // give up
+ lru_delete(h);
+ size += h.len;
+ h.data = null;
+ h.len = 0;
+ }
+ }
+ }
+ }
+}
+
+//
+// Kernel evaluation
+//
+// the static method k_function is for doing single kernel evaluation
+// the constructor of Kernel prepares to calculate the l*l kernel matrix
+// the member function get_Q is for getting one column from the Q Matrix
+//
+abstract class QMatrix {
+ abstract float[] get_Q(int column, int len);
+ abstract double[] get_QD();
+ abstract void swap_index(int i, int j);
+};
+
+abstract class Kernel extends QMatrix {
+ private svm_node[][] x;
+ private final double[] x_square;
+
+ // svm_parameter
+ private final int kernel_type;
+ private final int degree;
+ private final double gamma;
+ private final double coef0;
+
+ abstract float[] get_Q(int column, int len);
+ abstract double[] get_QD();
+
+ void swap_index(int i, int j)
+ {
+ do {svm_node[] _=x[i]; x[i]=x[j]; x[j]=_;} while(false);
+ if(x_square != null) do {double _=x_square[i]; x_square[i]=x_square[j]; x_square[j]=_;} while(false);
+ }
+
+ private static double powi(double base, int times)
+ {
+ double tmp = base, ret = 1.0;
+
+ for(int t=times; t>0; t/=2)
+ {
+ if(t%2==1) ret*=tmp;
+ tmp = tmp * tmp;
+ }
+ return ret;
+ }
+
+ double kernel_function(int i, int j)
+ {
+ switch(kernel_type)
+ {
+ case svm_parameter.LINEAR:
+ return dot(x[i],x[j]);
+ case svm_parameter.POLY:
+ return powi(gamma*dot(x[i],x[j])+coef0,degree);
+ case svm_parameter.RBF:
+ return Math.exp(-gamma*(x_square[i]+x_square[j]-2*dot(x[i],x[j])));
+ case svm_parameter.SIGMOID:
+ return Math.tanh(gamma*dot(x[i],x[j])+coef0);
+ case svm_parameter.PRECOMPUTED:
+ return x[i][(int)(x[j][0].value)].value;
+ default:
+ return 0; // java
+ }
+ }
+
+ Kernel(int l, svm_node[][] x_, svm_parameter param)
+ {
+ this.kernel_type = param.kernel_type;
+ this.degree = param.degree;
+ this.gamma = param.gamma;
+ this.coef0 = param.coef0;
+
+ x = (svm_node[][])x_.clone();
+
+ if(kernel_type == svm_parameter.RBF)
+ {
+ x_square = new double[l];
+ for(int i=0;i<l;i++)
+ x_square[i] = dot(x[i],x[i]);
+ }
+ else x_square = null;
+ }
+
+ static double dot(svm_node[] x, svm_node[] y)
+ {
+ double sum = 0;
+ int xlen = x.length;
+ int ylen = y.length;
+ int i = 0;
+ int j = 0;
+ while(i < xlen && j < ylen)
+ {
+ if(x[i].index == y[j].index)
+ sum += x[i++].value * y[j++].value;
+ else
+ {
+ if(x[i].index > y[j].index)
+ ++j;
+ else
+ ++i;
+ }
+ }
+ return sum;
+ }
+
+ static double k_function(svm_node[] x, svm_node[] y,
+ svm_parameter param)
+ {
+ switch(param.kernel_type)
+ {
+ case svm_parameter.LINEAR:
+ return dot(x,y);
+ case svm_parameter.POLY:
+ return powi(param.gamma*dot(x,y)+param.coef0,param.degree);
+ case svm_parameter.RBF:
+ {
+ double sum = 0;
+ int xlen = x.length;
+ int ylen = y.length;
+ int i = 0;
+ int j = 0;
+ while(i < xlen && j < ylen)
+ {
+ if(x[i].index == y[j].index)
+ {
+ double d = x[i++].value - y[j++].value;
+ sum += d*d;
+ }
+ else if(x[i].index > y[j].index)
+ {
+ sum += y[j].value * y[j].value;
+ ++j;
+ }
+ else
+ {
+ sum += x[i].value * x[i].value;
+ ++i;
+ }
+ }
+
+ while(i < xlen)
+ {
+ sum += x[i].value * x[i].value;
+ ++i;
+ }
+
+ while(j < ylen)
+ {
+ sum += y[j].value * y[j].value;
+ ++j;
+ }
+
+ return Math.exp(-param.gamma*sum);
+ }
+ case svm_parameter.SIGMOID:
+ return Math.tanh(param.gamma*dot(x,y)+param.coef0);
+ case svm_parameter.PRECOMPUTED:
+ return x[(int)(y[0].value)].value;
+ default:
+ return 0; // java
+ }
+ }
+}
+
+// An SMO algorithm in Fan et al., JMLR 6(2005), p. 1889--1918
+// Solves:
+//
+// min 0.5(\alpha^T Q \alpha) + p^T \alpha
+//
+// y^T \alpha = \delta
+// y_i = +1 or -1
+// 0 <= alpha_i <= Cp for y_i = 1
+// 0 <= alpha_i <= Cn for y_i = -1
+//
+// Given:
+//
+// Q, p, y, Cp, Cn, and an initial feasible point \alpha
+// l is the size of vectors and matrices
+// eps is the stopping tolerance
+//
+// solution will be put in \alpha, objective value will be put in obj
+//
+class Solver {
+ int active_size;
+ byte[] y;
+ double[] G; // gradient of objective function
+ static final byte LOWER_BOUND = 0;
+ static final byte UPPER_BOUND = 1;
+ static final byte FREE = 2;
+ byte[] alpha_status; // LOWER_BOUND, UPPER_BOUND, FREE
+ double[] alpha;
+ QMatrix Q;
+ double[] QD;
+ double eps;
+ double Cp,Cn;
+ double[] p;
+ int[] active_set;
+ double[] G_bar; // gradient, if we treat free variables as 0
+ int l;
+ boolean unshrink; // XXX
+
+ static final double INF = java.lang.Double.POSITIVE_INFINITY;
+
+ double get_C(int i)
+ {
+ return (y[i] > 0)? Cp : Cn;
+ }
+ void update_alpha_status(int i)
+ {
+ if(alpha[i] >= get_C(i))
+ alpha_status[i] = UPPER_BOUND;
+ else if(alpha[i] <= 0)
+ alpha_status[i] = LOWER_BOUND;
+ else alpha_status[i] = FREE;
+ }
+ boolean is_upper_bound(int i) { return alpha_status[i] == UPPER_BOUND; }
+ boolean is_lower_bound(int i) { return alpha_status[i] == LOWER_BOUND; }
+ boolean is_free(int i) { return alpha_status[i] == FREE; }
+
+ // java: information about solution except alpha,
+ // because we cannot return multiple values otherwise...
+ static class SolutionInfo {
+ double obj;
+ double rho;
+ double upper_bound_p;
+ double upper_bound_n;
+ double r; // for Solver_NU
+ }
+
+ void swap_index(int i, int j)
+ {
+ Q.swap_index(i,j);
+ do {byte _=y[i]; y[i]=y[j]; y[j]=_;} while(false);
+ do {double _=G[i]; G[i]=G[j]; G[j]=_;} while(false);
+ do {byte _=alpha_status[i]; alpha_status[i]=alpha_status[j]; alpha_status[j]=_;} while(false);
+ do {double _=alpha[i]; alpha[i]=alpha[j]; alpha[j]=_;} while(false);
+ do {double _=p[i]; p[i]=p[j]; p[j]=_;} while(false);
+ do {int _=active_set[i]; active_set[i]=active_set[j]; active_set[j]=_;} while(false);
+ do {double _=G_bar[i]; G_bar[i]=G_bar[j]; G_bar[j]=_;} while(false);
+ }
+
+ void reconstruct_gradient()
+ {
+ // reconstruct inactive elements of G from G_bar and free variables
+
+ if(active_size == l) return;
+
+ int i,j;
+ int nr_free = 0;
+
+ for(j=active_size;j<l;j++)
+ G[j] = G_bar[j] + p[j];
+
+ for(j=0;j<active_size;j++)
+ if(is_free(j))
+ nr_free++;
+
+ if(2*nr_free < active_size)
+ svm.info("\nWARNING: using -h 0 may be faster\n");
+
+ if (nr_free*l > 2*active_size*(l-active_size))
+ {
+ for(i=active_size;i<l;i++)
+ {
+ float[] Q_i = Q.get_Q(i,active_size);
+ for(j=0;j<active_size;j++)
+ if(is_free(j))
+ G[i] += alpha[j] * Q_i[j];
+ }
+ }
+ else
+ {
+ for(i=0;i<active_size;i++)
+ if(is_free(i))
+ {
+ float[] Q_i = Q.get_Q(i,l);
+ double alpha_i = alpha[i];
+ for(j=active_size;j<l;j++)
+ G[j] += alpha_i * Q_i[j];
+ }
+ }
+ }
+
+ void Solve(int l, QMatrix Q, double[] p_, byte[] y_,
+ double[] alpha_, double Cp, double Cn, double eps, SolutionInfo si, int shrinking)
+ {
+ this.l = l;
+ this.Q = Q;
+ QD = Q.get_QD();
+ p = (double[])p_.clone();
+ y = (byte[])y_.clone();
+ alpha = (double[])alpha_.clone();
+ this.Cp = Cp;
+ this.Cn = Cn;
+ this.eps = eps;
+ this.unshrink = false;
+
+ // initialize alpha_status
+ {
+ alpha_status = new byte[l];
+ for(int i=0;i<l;i++)
+ update_alpha_status(i);
+ }
+
+ // initialize active set (for shrinking)
+ {
+ active_set = new int[l];
+ for(int i=0;i<l;i++)
+ active_set[i] = i;
+ active_size = l;
+ }
+
+ // initialize gradient
+ {
+ G = new double[l];
+ G_bar = new double[l];
+ int i;
+ for(i=0;i<l;i++)
+ {
+ G[i] = p[i];
+ G_bar[i] = 0;
+ }
+ for(i=0;i<l;i++)
+ if(!is_lower_bound(i))
+ {
+ float[] Q_i = Q.get_Q(i,l);
+ double alpha_i = alpha[i];
+ int j;
+ for(j=0;j<l;j++)
+ G[j] += alpha_i*Q_i[j];
+ if(is_upper_bound(i))
+ for(j=0;j<l;j++)
+ G_bar[j] += get_C(i) * Q_i[j];
+ }
+ }
+
+ // optimization step
+
+ int iter = 0;
+ int max_iter = Math.max(10000000, l>Integer.MAX_VALUE/100 ? Integer.MAX_VALUE : 100*l);
+ int counter = Math.min(l,1000)+1;
+ int[] working_set = new int[2];
+
+ while(iter < max_iter)
+ {
+ // show progress and do shrinking
+
+ if(--counter == 0)
+ {
+ counter = Math.min(l,1000);
+ if(shrinking!=0) do_shrinking();
+ svm.info(".");
+ }
+
+ if(select_working_set(working_set)!=0)
+ {
+ // reconstruct the whole gradient
+ reconstruct_gradient();
+ // reset active set size and check
+ active_size = l;
+ svm.info("*");
+ if(select_working_set(working_set)!=0)
+ break;
+ else
+ counter = 1; // do shrinking next iteration
+ }
+
+ int i = working_set[0];
+ int j = working_set[1];
+
+ ++iter;
+
+ // update alpha[i] and alpha[j], handle bounds carefully
+
+ float[] Q_i = Q.get_Q(i,active_size);
+ float[] Q_j = Q.get_Q(j,active_size);
+
+ double C_i = get_C(i);
+ double C_j = get_C(j);
+
+ double old_alpha_i = alpha[i];
+ double old_alpha_j = alpha[j];
+
+ if(y[i]!=y[j])
+ {
+ double quad_coef = QD[i]+QD[j]+2*Q_i[j];
+ if (quad_coef <= 0)
+ quad_coef = 1e-12;
+ double delta = (-G[i]-G[j])/quad_coef;
+ double diff = alpha[i] - alpha[j];
+ alpha[i] += delta;
+ alpha[j] += delta;
+
+ if(diff > 0)
+ {
+ if(alpha[j] < 0)
+ {
+ alpha[j] = 0;
+ alpha[i] = diff;
+ }
+ }
+ else
+ {
+ if(alpha[i] < 0)
+ {
+ alpha[i] = 0;
+ alpha[j] = -diff;
+ }
+ }
+ if(diff > C_i - C_j)
+ {
+ if(alpha[i] > C_i)
+ {
+ alpha[i] = C_i;
+ alpha[j] = C_i - diff;
+ }
+ }
+ else
+ {
+ if(alpha[j] > C_j)
+ {
+ alpha[j] = C_j;
+ alpha[i] = C_j + diff;
+ }
+ }
+ }
+ else
+ {
+ double quad_coef = QD[i]+QD[j]-2*Q_i[j];
+ if (quad_coef <= 0)
+ quad_coef = 1e-12;
+ double delta = (G[i]-G[j])/quad_coef;
+ double sum = alpha[i] + alpha[j];
+ alpha[i] -= delta;
+ alpha[j] += delta;
+
+ if(sum > C_i)
+ {
+ if(alpha[i] > C_i)
+ {
+ alpha[i] = C_i;
+ alpha[j] = sum - C_i;
+ }
+ }
+ else
+ {
+ if(alpha[j] < 0)
+ {
+ alpha[j] = 0;
+ alpha[i] = sum;
+ }
+ }
+ if(sum > C_j)
+ {
+ if(alpha[j] > C_j)
+ {
+ alpha[j] = C_j;
+ alpha[i] = sum - C_j;
+ }
+ }
+ else
+ {
+ if(alpha[i] < 0)
+ {
+ alpha[i] = 0;
+ alpha[j] = sum;
+ }
+ }
+ }
+
+ // update G
+
+ double delta_alpha_i = alpha[i] - old_alpha_i;
+ double delta_alpha_j = alpha[j] - old_alpha_j;
+
+ for(int k=0;k<active_size;k++)
+ {
+ G[k] += Q_i[k]*delta_alpha_i + Q_j[k]*delta_alpha_j;
+ }
+
+ // update G_bar
+
+ {
+ boolean ui = is_upper_bound(i);
+ boolean uj = is_upper_bound(j);
+ update_alpha_status(i);
+ update_alpha_status(j);
+ int k;
+ if(ui != is_upper_bound(i))
+ {
+ Q_i = Q.get_Q(i,l);
+ if(ui)
+ for(k=0;k<l;k++)
+ G_bar[k] -= C_i * Q_i[k];
+ else
+ for(k=0;k<l;k++)
+ G_bar[k] += C_i * Q_i[k];
+ }
+
+ if(uj != is_upper_bound(j))
+ {
+ Q_j = Q.get_Q(j,l);
+ if(uj)
+ for(k=0;k<l;k++)
+ G_bar[k] -= C_j * Q_j[k];
+ else
+ for(k=0;k<l;k++)
+ G_bar[k] += C_j * Q_j[k];
+ }
+ }
+ }
+
+ if(iter >= max_iter)
+ {
+ if(active_size < l)
+ {
+ // reconstruct the whole gradient to calculate objective value
+ reconstruct_gradient();
+ active_size = l;
+ svm.info("*");
+ }
+ System.err.print("\nWARNING: reaching max number of iterations\n");
+ }
+
+ // calculate rho
+
+ si.rho = calculate_rho();
+
+ // calculate objective value
+ {
+ double v = 0;
+ int i;
+ for(i=0;i<l;i++)
+ v += alpha[i] * (G[i] + p[i]);
+
+ si.obj = v/2;
+ }
+
+ // put back the solution
+ {
+ for(int i=0;i<l;i++)
+ alpha_[active_set[i]] = alpha[i];
+ }
+
+ si.upper_bound_p = Cp;
+ si.upper_bound_n = Cn;
+
+ svm.info("\noptimization finished, #iter = "+iter+"\n");
+ }
+
+ // return 1 if already optimal, return 0 otherwise
+ int select_working_set(int[] working_set)
+ {
+ // return i,j such that
+ // i: maximizes -y_i * grad(f)_i, i in I_up(\alpha)
+ // j: minimizes the decrease of obj value
+ // (if quadratic coefficient <= 0, replace it with tau)
+ // -y_j*grad(f)_j < -y_i*grad(f)_i, j in I_low(\alpha)
+
+ double Gmax = -INF;
+ double Gmax2 = -INF;
+ int Gmax_idx = -1;
+ int Gmin_idx = -1;
+ double obj_diff_min = INF;
+
+ for(int t=0;t<active_size;t++)
+ if(y[t]==+1)
+ {
+ if(!is_upper_bound(t))
+ if(-G[t] >= Gmax)
+ {
+ Gmax = -G[t];
+ Gmax_idx = t;
+ }
+ }
+ else
+ {
+ if(!is_lower_bound(t))
+ if(G[t] >= Gmax)
+ {
+ Gmax = G[t];
+ Gmax_idx = t;
+ }
+ }
+
+ int i = Gmax_idx;
+ float[] Q_i = null;
+ if(i != -1) // null Q_i not accessed: Gmax=-INF if i=-1
+ Q_i = Q.get_Q(i,active_size);
+
+ for(int j=0;j<active_size;j++)
+ {
+ if(y[j]==+1)
+ {
+ if (!is_lower_bound(j))
+ {
+ double grad_diff=Gmax+G[j];
+ if (G[j] >= Gmax2)
+ Gmax2 = G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[i]+QD[j]-2.0*y[i]*Q_i[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/1e-12;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ else
+ {
+ if (!is_upper_bound(j))
+ {
+ double grad_diff= Gmax-G[j];
+ if (-G[j] >= Gmax2)
+ Gmax2 = -G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[i]+QD[j]+2.0*y[i]*Q_i[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/1e-12;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ }
+
+ if(Gmax+Gmax2 < eps || Gmin_idx == -1)
+ return 1;
+
+ working_set[0] = Gmax_idx;
+ working_set[1] = Gmin_idx;
+ return 0;
+ }
+
+ private boolean be_shrunk(int i, double Gmax1, double Gmax2)
+ {
+ if(is_upper_bound(i))
+ {
+ if(y[i]==+1)
+ return(-G[i] > Gmax1);
+ else
+ return(-G[i] > Gmax2);
+ }
+ else if(is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ return(G[i] > Gmax2);
+ else
+ return(G[i] > Gmax1);
+ }
+ else
+ return(false);
+ }
+
+ void do_shrinking()
+ {
+ int i;
+ double Gmax1 = -INF; // max { -y_i * grad(f)_i | i in I_up(\alpha) }
+ double Gmax2 = -INF; // max { y_i * grad(f)_i | i in I_low(\alpha) }
+
+ // find maximal violating pair first
+ for(i=0;i<active_size;i++)
+ {
+ if(y[i]==+1)
+ {
+ if(!is_upper_bound(i))
+ {
+ if(-G[i] >= Gmax1)
+ Gmax1 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(G[i] >= Gmax2)
+ Gmax2 = G[i];
+ }
+ }
+ else
+ {
+ if(!is_upper_bound(i))
+ {
+ if(-G[i] >= Gmax2)
+ Gmax2 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(G[i] >= Gmax1)
+ Gmax1 = G[i];
+ }
+ }
+ }
+
+ if(unshrink == false && Gmax1 + Gmax2 <= eps*10)
+ {
+ unshrink = true;
+ reconstruct_gradient();
+ active_size = l;
+ }
+
+ for(i=0;i<active_size;i++)
+ if (be_shrunk(i, Gmax1, Gmax2))
+ {
+ active_size--;
+ while (active_size > i)
+ {
+ if (!be_shrunk(active_size, Gmax1, Gmax2))
+ {
+ swap_index(i,active_size);
+ break;
+ }
+ active_size--;
+ }
+ }
+ }
+
+ double calculate_rho()
+ {
+ double r;
+ int nr_free = 0;
+ double ub = INF, lb = -INF, sum_free = 0;
+ for(int i=0;i<active_size;i++)
+ {
+ double yG = y[i]*G[i];
+
+ if(is_lower_bound(i))
+ {
+ if(y[i] > 0)
+ ub = Math.min(ub,yG);
+ else
+ lb = Math.max(lb,yG);
+ }
+ else if(is_upper_bound(i))
+ {
+ if(y[i] < 0)
+ ub = Math.min(ub,yG);
+ else
+ lb = Math.max(lb,yG);
+ }
+ else
+ {
+ ++nr_free;
+ sum_free += yG;
+ }
+ }
+
+ if(nr_free>0)
+ r = sum_free/nr_free;
+ else
+ r = (ub+lb)/2;
+
+ return r;
+ }
+
+}
+
+//
+// Solver for nu-svm classification and regression
+//
+// additional constraint: e^T \alpha = constant
+//
+final class Solver_NU extends Solver
+{
+ private SolutionInfo si;
+
+ void Solve(int l, QMatrix Q, double[] p, byte[] y,
+ double[] alpha, double Cp, double Cn, double eps,
+ SolutionInfo si, int shrinking)
+ {
+ this.si = si;
+ super.Solve(l,Q,p,y,alpha,Cp,Cn,eps,si,shrinking);
+ }
+
+ // return 1 if already optimal, return 0 otherwise
+ int select_working_set(int[] working_set)
+ {
+ // return i,j such that y_i = y_j and
+ // i: maximizes -y_i * grad(f)_i, i in I_up(\alpha)
+ // j: minimizes the decrease of obj value
+ // (if quadratic coefficient <= 0, replace it with tau)
+ // -y_j*grad(f)_j < -y_i*grad(f)_i, j in I_low(\alpha)
+
+ double Gmaxp = -INF;
+ double Gmaxp2 = -INF;
+ int Gmaxp_idx = -1;
+
+ double Gmaxn = -INF;
+ double Gmaxn2 = -INF;
+ int Gmaxn_idx = -1;
+
+ int Gmin_idx = -1;
+ double obj_diff_min = INF;
+
+	for(int t=0;t<active_size;t++)
+		if(y[t]==+1)
+		{
+			if(!is_upper_bound(t))
+				if(-G[t] >= Gmaxp)
+ {
+ Gmaxp = -G[t];
+ Gmaxp_idx = t;
+ }
+ }
+ else
+ {
+ if(!is_lower_bound(t))
+ if(G[t] >= Gmaxn)
+ {
+ Gmaxn = G[t];
+ Gmaxn_idx = t;
+ }
+ }
+
+ int ip = Gmaxp_idx;
+ int in = Gmaxn_idx;
+ float[] Q_ip = null;
+ float[] Q_in = null;
+ if(ip != -1) // null Q_ip not accessed: Gmaxp=-INF if ip=-1
+ Q_ip = Q.get_Q(ip,active_size);
+ if(in != -1)
+ Q_in = Q.get_Q(in,active_size);
+
+	for(int j=0;j<active_size;j++)
+	{
+		if(y[j]==+1)
+		{
+			if (!is_lower_bound(j))
+			{
+				double grad_diff=Gmaxp+G[j];
+				if (G[j] >= Gmaxp2)
+ Gmaxp2 = G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[ip]+QD[j]-2*Q_ip[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/1e-12;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ else
+ {
+ if (!is_upper_bound(j))
+ {
+ double grad_diff=Gmaxn-G[j];
+ if (-G[j] >= Gmaxn2)
+ Gmaxn2 = -G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[in]+QD[j]-2*Q_in[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/1e-12;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ }
+
+ if(Math.max(Gmaxp+Gmaxp2,Gmaxn+Gmaxn2) < eps || Gmin_idx == -1)
+ return 1;
+
+ if(y[Gmin_idx] == +1)
+ working_set[0] = Gmaxp_idx;
+ else
+ working_set[0] = Gmaxn_idx;
+ working_set[1] = Gmin_idx;
+
+ return 0;
+ }
+
+ private boolean be_shrunk(int i, double Gmax1, double Gmax2, double Gmax3, double Gmax4)
+ {
+ if(is_upper_bound(i))
+ {
+ if(y[i]==+1)
+ return(-G[i] > Gmax1);
+ else
+ return(-G[i] > Gmax4);
+ }
+ else if(is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ return(G[i] > Gmax2);
+ else
+ return(G[i] > Gmax3);
+ }
+ else
+ return(false);
+ }
+
+ void do_shrinking()
+ {
+ double Gmax1 = -INF; // max { -y_i * grad(f)_i | y_i = +1, i in I_up(\alpha) }
+ double Gmax2 = -INF; // max { y_i * grad(f)_i | y_i = +1, i in I_low(\alpha) }
+ double Gmax3 = -INF; // max { -y_i * grad(f)_i | y_i = -1, i in I_up(\alpha) }
+ double Gmax4 = -INF; // max { y_i * grad(f)_i | y_i = -1, i in I_low(\alpha) }
+
+ // find maximal violating pair first
+ int i;
+	for(i=0;i<active_size;i++)
+	{
+		if(!is_upper_bound(i))
+		{
+			if(y[i]==+1)
+			{
+				if(-G[i] > Gmax1) Gmax1 = -G[i];
+ }
+ else if(-G[i] > Gmax4) Gmax4 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ {
+ if(G[i] > Gmax2) Gmax2 = G[i];
+ }
+ else if(G[i] > Gmax3) Gmax3 = G[i];
+ }
+ }
+
+ if(unshrink == false && Math.max(Gmax1+Gmax2,Gmax3+Gmax4) <= eps*10)
+ {
+ unshrink = true;
+ reconstruct_gradient();
+ active_size = l;
+ }
+
+	for(i=0;i<active_size;i++)
+		if (be_shrunk(i, Gmax1, Gmax2, Gmax3, Gmax4))
+		{
+			active_size--;
+			while(active_size > i)
+			{
+ {
+ if (!be_shrunk(active_size, Gmax1, Gmax2, Gmax3, Gmax4))
+ {
+ swap_index(i,active_size);
+ break;
+ }
+ active_size--;
+ }
+ }
+ }
+
+ double calculate_rho()
+ {
+ int nr_free1 = 0,nr_free2 = 0;
+ double ub1 = INF, ub2 = INF;
+ double lb1 = -INF, lb2 = -INF;
+ double sum_free1 = 0, sum_free2 = 0;
+
+	for(int i=0;i<active_size;i++)
+	{
+		if(y[i]==+1)
+		{
+			if(is_upper_bound(i))
+				lb1 = Math.max(lb1,G[i]);
+			else if(is_lower_bound(i))
+				ub1 = Math.min(ub1,G[i]);
+			else
+			{
+				++nr_free1;
+				sum_free1 += G[i];
+			}
+		}
+		else
+		{
+			if(is_upper_bound(i))
+				lb2 = Math.max(lb2,G[i]);
+			else if(is_lower_bound(i))
+				ub2 = Math.min(ub2,G[i]);
+			else
+			{
+				++nr_free2;
+				sum_free2 += G[i];
+			}
+		}
+	}
+
+	double r1,r2;
+	if(nr_free1 > 0)
+ r1 = sum_free1/nr_free1;
+ else
+ r1 = (ub1+lb1)/2;
+
+ if(nr_free2 > 0)
+ r2 = sum_free2/nr_free2;
+ else
+ r2 = (ub2+lb2)/2;
+
+ si.r = (r1+r2)/2;
+ return (r1-r2)/2;
+ }
+}
+
+//
+// Q matrices for various formulations
+//
+class SVC_Q extends Kernel
+{
+ private final byte[] y;
+ private final Cache cache;
+ private final double[] QD;
+
+ SVC_Q(svm_problem prob, svm_parameter param, byte[] y_)
+ {
+ super(prob.l, prob.x, param);
+ y = (byte[])y_.clone();
+ cache = new Cache(prob.l,(long)(param.cache_size*(1<<20)));
+ QD = new double[prob.l];
+ for(int i=0;i 0) y[i] = +1; else y[i] = -1;
+ }
+
+ Solver s = new Solver();
+ s.Solve(l, new SVC_Q(prob,param,y), minus_ones, y,
+ alpha, Cp, Cn, param.eps, si, param.shrinking);
+
+ double sum_alpha=0;
+ for(i=0;i0)
+ y[i] = +1;
+ else
+ y[i] = -1;
+
+ double sum_pos = nu*l/2;
+ double sum_neg = nu*l/2;
+
+ for(i=0;i 0)
+ {
+ ++nSV;
+ if(prob.y[i] > 0)
+ {
+ if(Math.abs(alpha[i]) >= si.upper_bound_p)
+ ++nBSV;
+ }
+ else
+ {
+ if(Math.abs(alpha[i]) >= si.upper_bound_n)
+ ++nBSV;
+ }
+ }
+ }
+
+ svm.info("nSV = "+nSV+", nBSV = "+nBSV+"\n");
+
+ decision_function f = new decision_function();
+ f.alpha = alpha;
+ f.rho = si.rho;
+ return f;
+ }
+
+	// Platt's binary SVM Probabilistic Output: an improvement from Lin et al.
+ private static void sigmoid_train(int l, double[] dec_values, double[] labels,
+ double[] probAB)
+ {
+ double A, B;
+ double prior1=0, prior0 = 0;
+ int i;
+
+	for (i=0;i<l;i++)
+		if (labels[i] > 0) prior1+=1;
+ else prior0+=1;
+
+ int max_iter=100; // Maximal number of iterations
+ double min_step=1e-10; // Minimal step taken in line search
+ double sigma=1e-12; // For numerically strict PD of Hessian
+ double eps=1e-5;
+ double hiTarget=(prior1+1.0)/(prior1+2.0);
+ double loTarget=1/(prior0+2.0);
+ double[] t= new double[l];
+ double fApB,p,q,h11,h22,h21,g1,g2,det,dA,dB,gd,stepsize;
+ double newA,newB,newf,d1,d2;
+ int iter;
+
+ // Initial Point and Initial Fun Value
+ A=0.0; B=Math.log((prior0+1.0)/(prior1+1.0));
+ double fval = 0.0;
+
+	for (i=0;i<l;i++)
+	{
+		if (labels[i]>0) t[i]=hiTarget;
+ else t[i]=loTarget;
+ fApB = dec_values[i]*A+B;
+ if (fApB>=0)
+ fval += t[i]*fApB + Math.log(1+Math.exp(-fApB));
+ else
+ fval += (t[i] - 1)*fApB +Math.log(1+Math.exp(fApB));
+ }
+	for (iter=0;iter<max_iter;iter++)
+	{
+		// Update Gradient and Hessian (use H' = H + sigma I)
+		h11=sigma; // numerically ensures strict PD
+		h22=sigma;
+		h21=0.0;g1=0.0;g2=0.0;
+		for (i=0;i<l;i++)
+		{
+			fApB = dec_values[i]*A+B;
+			if (fApB >= 0)
+ {
+ p=Math.exp(-fApB)/(1.0+Math.exp(-fApB));
+ q=1.0/(1.0+Math.exp(-fApB));
+ }
+ else
+ {
+ p=1.0/(1.0+Math.exp(fApB));
+ q=Math.exp(fApB)/(1.0+Math.exp(fApB));
+ }
+ d2=p*q;
+ h11+=dec_values[i]*dec_values[i]*d2;
+ h22+=d2;
+ h21+=dec_values[i]*d2;
+ d1=t[i]-p;
+ g1+=dec_values[i]*d1;
+ g2+=d1;
+ }
+
+ // Stopping Criteria
+		// Stopping Criteria
+		if (Math.abs(g1)<eps && Math.abs(g2)<eps)
+			break;
+
+		// Finding Newton direction: -inv(H') * g
+		det=h11*h22-h21*h21;
+		dA=-(h22*g1 - h21 * g2) / det;
+		dB=-(-h21*g1+ h11 * g2) / det;
+		gd=g1*dA+g2*dB;
+
+		stepsize = 1; // Line Search
+		while (stepsize >= min_step)
+ {
+ newA = A + stepsize * dA;
+ newB = B + stepsize * dB;
+
+ // New function value
+ newf = 0.0;
+			for (i=0;i<l;i++)
+			{
+				fApB = dec_values[i]*newA+newB;
+				if (fApB >= 0)
+ newf += t[i]*fApB + Math.log(1+Math.exp(-fApB));
+ else
+ newf += (t[i] - 1)*fApB +Math.log(1+Math.exp(fApB));
+ }
+ // Check sufficient decrease
+			if (newf<fval+0.0001*stepsize*gd)
+			{
+				A=newA;B=newB;fval=newf;
+				break; // Sufficient decrease satisfied
+			}
+			else
+				stepsize = stepsize / 2.0;
+		}
+
+		if (stepsize < min_step)
+		{
+			svm.info("Line search fails in two-class probability estimates\n");
+			break;
+		}
+	}
+
+	if (iter>=max_iter)
+ svm.info("Reaching maximal iterations in two-class probability estimates\n");
+ probAB[0]=A;probAB[1]=B;
+ }
+
+ private static double sigmoid_predict(double decision_value, double A, double B)
+ {
+ double fApB = decision_value*A+B;
+ if (fApB >= 0)
+ return Math.exp(-fApB)/(1.0+Math.exp(-fApB));
+ else
+ return 1.0/(1+Math.exp(fApB)) ;
+ }
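The two-branch evaluation in `sigmoid_predict` above exists purely for numerical safety: Platt scaling maps a decision value f to P(y=+1|f) = 1/(1+exp(A*f+B)), and branching on the sign of A*f+B keeps `Math.exp` from overflowing for large magnitudes. A minimal standalone sketch (the class name `PlattSigmoidDemo` is hypothetical, not part of libsvm):

```java
// Sketch of libsvm's numerically stable Platt-scaling sigmoid.
// P(y=+1 | f) = 1/(1+exp(A*f+B)); the two branches avoid exp() overflow.
public class PlattSigmoidDemo {
    // Same two-branch evaluation as sigmoid_predict in svm.java.
    static double sigmoidPredict(double decisionValue, double A, double B) {
        double fApB = decisionValue * A + B;
        if (fApB >= 0)
            return Math.exp(-fApB) / (1.0 + Math.exp(-fApB));
        else
            return 1.0 / (1.0 + Math.exp(fApB));
    }

    public static void main(String[] args) {
        // With A=-1, B=0 this reduces to the standard logistic function.
        System.out.println(sigmoidPredict(0.0, -1.0, 0.0));    // 0.5
        System.out.println(sigmoidPredict(1000.0, -1.0, 0.0)); // ~1.0, no overflow
    }
}
```

A naive single-branch `1/(1+Math.exp(fApB))` would return NaN-free results too in Java, but the intermediate `exp` can overflow to infinity; the branched form keeps the exponent argument non-positive in both cases.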
+
+ // Method 2 from the multiclass_prob paper by Wu, Lin, and Weng
+ private static void multiclass_probability(int k, double[][] r, double[] p)
+ {
+ int t,j;
+ int iter = 0, max_iter=Math.max(100,k);
+ double[][] Q=new double[k][k];
+ double[] Qp=new double[k];
+ double pQp, eps=0.005/k;
+
+	for (t=0;t<k;t++)
+	{
+		p[t]=1.0/k; // Valid if k = 1
+		Q[t][t]=0;
+		for (j=0;j<t;j++)
+		{
+			Q[t][t]+=r[j][t]*r[j][t];
+			Q[t][j]=Q[j][t];
+		}
+		for (j=t+1;j<k;j++)
+		{
+			Q[t][t]+=r[j][t]*r[j][t];
+			Q[t][j]=-r[j][t]*r[t][j];
+		}
+	}
+	for (iter=0;iter<max_iter;iter++)
+	{
+		// stopping condition, recalculate QP,pQP for numerical accuracy
+		pQp=0;
+		for (t=0;t<k;t++)
+		{
+			Qp[t]=0;
+			for (j=0;j<k;j++)
+				Qp[t]+=Q[t][j]*p[j];
+			pQp+=p[t]*Qp[t];
+		}
+		double max_error=0;
+		for (t=0;t<k;t++)
+		{
+			double error=Math.abs(Qp[t]-pQp);
+			if (error>max_error)
+ max_error=error;
+ }
+		if (max_error<eps) break;
+
+		for (t=0;t<k;t++)
+		{
+			double diff=(-Qp[t]+pQp)/Q[t][t];
+			p[t]+=diff;
+			pQp=(pQp+diff*(diff*Q[t][t]+2*Qp[t]))/(1+diff)/(1+diff);
+			for (j=0;j<k;j++)
+			{
+				Qp[j]=(Qp[j]+diff*(Q[t][j]-Q[t][t]*p[j]))/(1+diff);
+				p[j]/=(1+diff);
+			}
+		}
+	}
+	if (iter>=max_iter)
+ svm.info("Exceeds max_iter in multiclass_prob\n");
+ }
+
+ // Cross-validation decision values for probability estimates
+ private static void svm_binary_svc_probability(svm_problem prob, svm_parameter param, double Cp, double Cn, double[] probAB)
+ {
+ int i;
+ int nr_fold = 5;
+ int[] perm = new int[prob.l];
+ double[] dec_values = new double[prob.l];
+
+ // random shuffle
+	for(i=0;i<prob.l;i++) perm[i]=i;
+	for(i=0;i<prob.l;i++)
+	{
+		int j = i+rand.nextInt(prob.l-i);
+		do {int _=perm[i]; perm[i]=perm[j]; perm[j]=_;} while(false);
+	}
+	for(i=0;i<nr_fold;i++)
+	{
+		int begin = i*prob.l/nr_fold;
+		int end = (i+1)*prob.l/nr_fold;
+		int j,k;
+		svm_problem subprob = new svm_problem();
+
+		subprob.l = prob.l-(end-begin);
+		subprob.x = new svm_node[subprob.l][];
+		subprob.y = new double[subprob.l];
+
+		k=0;
+		for(j=0;j<begin;j++)
+		{
+			subprob.x[k] = prob.x[perm[j]];
+			subprob.y[k] = prob.y[perm[j]];
+			++k;
+		}
+		for(j=end;j<prob.l;j++)
+		{
+			subprob.x[k] = prob.x[perm[j]];
+			subprob.y[k] = prob.y[perm[j]];
+			++k;
+		}
+		int p_count=0,n_count=0;
+		for(j=0;j<k;j++)
+			if(subprob.y[j]>0)
+ p_count++;
+ else
+ n_count++;
+
+ if(p_count==0 && n_count==0)
+			for(j=begin;j<end;j++)
+				dec_values[perm[j]] = 0;
+		else if(p_count > 0 && n_count == 0)
+			for(j=begin;j<end;j++)
+				dec_values[perm[j]] = 1;
+		else if(p_count == 0 && n_count > 0)
+			for(j=begin;j<end;j++)
+				dec_values[perm[j]] = -1;
+		else
+		{
+			svm_parameter subparam = (svm_parameter)param.clone();
+			subparam.probability=0;
+			subparam.C=1.0;
+			subparam.nr_weight=2;
+			subparam.weight_label = new int[2];
+			subparam.weight = new double[2];
+			subparam.weight_label[0]=+1;
+			subparam.weight_label[1]=-1;
+			subparam.weight[0]=Cp;
+			subparam.weight[1]=Cn;
+			svm_model submodel = svm_train(subprob,subparam);
+			for(j=begin;j<end;j++)
+			{
+				double[] dec_value=new double[1];
+				svm_predict_values(submodel,prob.x[perm[j]],dec_value);
+				dec_values[perm[j]]=dec_value[0];
+				// ensure +1 -1 order; reason not using CV subroutine
+				dec_values[perm[j]] *= submodel.label[0];
+			}
+		}
+	}
+	sigmoid_train(prob.l,dec_values,prob.y,probAB);
+	}
+
+	// Return parameter of a Laplace distribution
+	private static double svm_svr_probability(svm_problem prob, svm_parameter param)
+	{
+		int i;
+		int nr_fold = 5;
+		double[] ymv = new double[prob.l];
+		double mae = 0;
+
+		svm_parameter newparam = (svm_parameter)param.clone();
+		newparam.probability = 0;
+		svm_cross_validation(prob,newparam,nr_fold,ymv);
+		for(i=0;i<prob.l;i++)
+		{
+			ymv[i]=prob.y[i]-ymv[i];
+			mae += Math.abs(ymv[i]);
+		}
+		mae /= prob.l;
+		double std=Math.sqrt(2*mae*mae);
+		int count=0;
+		mae=0;
+		for(i=0;i<prob.l;i++)
+			if (Math.abs(ymv[i]) > 5*std)
+ count=count+1;
+ else
+ mae+=Math.abs(ymv[i]);
+ mae /= (prob.l-count);
+ svm.info("Prob. model for test data: target value = predicted value + z,\nz: Laplace distribution e^(-|z|/sigma)/(2sigma),sigma="+mae+"\n");
+ return mae;
+ }
+
+ // label: label name, start: begin of each class, count: #data of classes, perm: indices to the original data
+ // perm, length l, must be allocated before calling this subroutine
+ private static void svm_group_classes(svm_problem prob, int[] nr_class_ret, int[][] label_ret, int[][] start_ret, int[][] count_ret, int[] perm)
+ {
+ int l = prob.l;
+ int max_nr_class = 16;
+ int nr_class = 0;
+ int[] label = new int[max_nr_class];
+ int[] count = new int[max_nr_class];
+ int[] data_label = new int[l];
+ int i;
+
+ for(i=0;i 0) ++nSV;
+ model.l = nSV;
+ model.SV = new svm_node[nSV][];
+ model.sv_coef[0] = new double[nSV];
+ model.sv_indices = new int[nSV];
+ int j = 0;
+ for(i=0;i 0)
+ {
+ model.SV[j] = prob.x[i];
+ model.sv_coef[0][j] = f.alpha[i];
+ model.sv_indices[j] = i+1;
+ ++j;
+ }
+ }
+ else
+ {
+ // classification
+ int l = prob.l;
+ int[] tmp_nr_class = new int[1];
+ int[][] tmp_label = new int[1][];
+ int[][] tmp_start = new int[1][];
+ int[][] tmp_count = new int[1][];
+ int[] perm = new int[l];
+
+ // group training data of the same class
+ svm_group_classes(prob,tmp_nr_class,tmp_label,tmp_start,tmp_count,perm);
+ int nr_class = tmp_nr_class[0];
+ int[] label = tmp_label[0];
+ int[] start = tmp_start[0];
+ int[] count = tmp_count[0];
+
+ if(nr_class == 1)
+ svm.info("WARNING: training data in only one class. See README for details.\n");
+
+ svm_node[][] x = new svm_node[l][];
+ int i;
+ for(i=0;i 0)
+ nonzero[si+k] = true;
+					for(k=0;k<cj;k++)
+						if(!nonzero[sj+k] && Math.abs(f[p].alpha[ci+k]) > 0)
+ nonzero[sj+k] = true;
+ ++p;
+ }
+
+ // build output
+
+ model.nr_class = nr_class;
+
+ model.label = new int[nr_class];
+ for(i=0;i some folds may have zero elements
+ if((param.svm_type == svm_parameter.C_SVC ||
+ param.svm_type == svm_parameter.NU_SVC) && nr_fold < l)
+ {
+ int[] tmp_nr_class = new int[1];
+ int[][] tmp_label = new int[1][];
+ int[][] tmp_start = new int[1][];
+ int[][] tmp_count = new int[1][];
+
+ svm_group_classes(prob,tmp_nr_class,tmp_label,tmp_start,tmp_count,perm);
+
+ int nr_class = tmp_nr_class[0];
+ int[] start = tmp_start[0];
+ int[] count = tmp_count[0];
+
+ // random shuffle and then data grouped by fold using the array perm
+ int[] fold_count = new int[nr_fold];
+ int c;
+ int[] index = new int[l];
+ for(i=0;i0)?1:-1;
+ else
+ return sum;
+ }
+ else
+ {
+ int nr_class = model.nr_class;
+ int l = model.l;
+
+ double[] kvalue = new double[l];
+			for(i=0;i<l;i++)
+				kvalue[i] = Kernel.k_function(x,model.SV[i],model.param);
+
+			int[] start = new int[nr_class];
+			start[0] = 0;
+			for(i=1;i<nr_class;i++)
+				start[i] = start[i-1]+model.nSV[i-1];
+
+			int[] vote = new int[nr_class];
+			for(i=0;i<nr_class;i++)
+				vote[i] = 0;
+
+			int p=0;
+			for(i=0;i<nr_class;i++)
+				for(int j=i+1;j<nr_class;j++)
+				{
+					double sum = 0;
+					int si = start[i];
+					int sj = start[j];
+					int ci = model.nSV[i];
+					int cj = model.nSV[j];
+
+					int k;
+					double[] coef1 = model.sv_coef[j-1];
+					double[] coef2 = model.sv_coef[i];
+					for(k=0;k<ci;k++)
+						sum += coef1[si+k] * kvalue[si+k];
+					for(k=0;k<cj;k++)
+						sum += coef2[sj+k] * kvalue[sj+k];
+					sum -= model.rho[p];
+					dec_values[p] = sum;
+
+					if (dec_values[p] > 0)
+ ++vote[i];
+ else
+ ++vote[j];
+ p++;
+ }
+
+ int vote_max_idx = 0;
+			for(i=1;i<nr_class;i++)
+				if(vote[i] > vote[vote_max_idx])
+ vote_max_idx = i;
+
+ return model.label[vote_max_idx];
+ }
+ }
+
+ public static double svm_predict(svm_model model, svm_node[] x)
+ {
+ int nr_class = model.nr_class;
+ double[] dec_values;
+ if(model.param.svm_type == svm_parameter.ONE_CLASS ||
+ model.param.svm_type == svm_parameter.EPSILON_SVR ||
+ model.param.svm_type == svm_parameter.NU_SVR)
+ dec_values = new double[1];
+ else
+ dec_values = new double[nr_class*(nr_class-1)/2];
+ double pred_result = svm_predict_values(model, x, dec_values);
+ return pred_result;
+ }
+
+ public static double svm_predict_probability(svm_model model, svm_node[] x, double[] prob_estimates)
+ {
+ if ((model.param.svm_type == svm_parameter.C_SVC || model.param.svm_type == svm_parameter.NU_SVC) &&
+ model.probA!=null && model.probB!=null)
+ {
+ int i;
+ int nr_class = model.nr_class;
+ double[] dec_values = new double[nr_class*(nr_class-1)/2];
+ svm_predict_values(model, x, dec_values);
+
+ double min_prob=1e-7;
+ double[][] pairwise_prob=new double[nr_class][nr_class];
+
+ int k=0;
+			for(i=0;i<nr_class;i++)
+				for(int j=i+1;j<nr_class;j++)
+				{
+					pairwise_prob[i][j]=Math.min(Math.max(sigmoid_predict(dec_values[k],model.probA[k],model.probB[k]),min_prob),1-min_prob);
+					pairwise_prob[j][i]=1-pairwise_prob[i][j];
+					k++;
+				}
+			multiclass_probability(nr_class,pairwise_prob,prob_estimates);
+
+			int prob_max_idx = 0;
+			for(i=1;i<nr_class;i++)
+				if(prob_estimates[i] > prob_estimates[prob_max_idx])
+ prob_max_idx = i;
+ return model.label[prob_max_idx];
+ }
+ else
+ return svm_predict(model, x);
+ }
+
+ static final String svm_type_table[] =
+ {
+ "c_svc","nu_svc","one_class","epsilon_svr","nu_svr",
+ };
+
+ static final String kernel_type_table[]=
+ {
+ "linear","polynomial","rbf","sigmoid","precomputed"
+ };
+
+ public static void svm_save_model(String model_file_name, svm_model model) throws IOException
+ {
+ DataOutputStream fp = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(model_file_name)));
+
+ svm_parameter param = model.param;
+
+ fp.writeBytes("svm_type "+svm_type_table[param.svm_type]+"\n");
+ fp.writeBytes("kernel_type "+kernel_type_table[param.kernel_type]+"\n");
+
+ if(param.kernel_type == svm_parameter.POLY)
+ fp.writeBytes("degree "+param.degree+"\n");
+
+ if(param.kernel_type == svm_parameter.POLY ||
+ param.kernel_type == svm_parameter.RBF ||
+ param.kernel_type == svm_parameter.SIGMOID)
+ fp.writeBytes("gamma "+param.gamma+"\n");
+
+ if(param.kernel_type == svm_parameter.POLY ||
+ param.kernel_type == svm_parameter.SIGMOID)
+ fp.writeBytes("coef0 "+param.coef0+"\n");
+
+ int nr_class = model.nr_class;
+ int l = model.l;
+ fp.writeBytes("nr_class "+nr_class+"\n");
+ fp.writeBytes("total_sv "+l+"\n");
+
+ {
+ fp.writeBytes("rho");
+ for(int i=0;i 1)
+ return "nu <= 0 or nu > 1";
+
+ if(svm_type == svm_parameter.EPSILON_SVR)
+ if(param.p < 0)
+ return "p < 0";
+
+ if(param.shrinking != 0 &&
+ param.shrinking != 1)
+ return "shrinking != 0 and shrinking != 1";
+
+ if(param.probability != 0 &&
+ param.probability != 1)
+ return "probability != 0 and probability != 1";
+
+ if(param.probability == 1 &&
+ svm_type == svm_parameter.ONE_CLASS)
+ return "one-class SVM probability output not supported yet";
+
+ // check whether nu-svc is feasible
+
+ if(svm_type == svm_parameter.NU_SVC)
+ {
+ int l = prob.l;
+ int max_nr_class = 16;
+ int nr_class = 0;
+ int[] label = new int[max_nr_class];
+ int[] count = new int[max_nr_class];
+
+ int i;
+			for(i=0;i<l;i++)
+			{
+				int this_label = (int)prob.y[i];
+				int j;
+				for(j=0;j<nr_class;j++)
+					if(this_label == label[j])
+					{
+						++count[j];
+						break;
+					}
+
+				if(j == nr_class)
+				{
+					if(nr_class == max_nr_class)
+					{
+						max_nr_class *= 2;
+						int[] new_data = new int[max_nr_class];
+						System.arraycopy(label,0,new_data,0,label.length);
+						label = new_data;
+						new_data = new int[max_nr_class];
+						System.arraycopy(count,0,new_data,0,count.length);
+						count = new_data;
+					}
+					label[nr_class] = this_label;
+					count[nr_class] = 1;
+					++nr_class;
+				}
+			}
+
+			for(i=0;i<nr_class;i++)
+			{
+				int n1 = count[i];
+				for(int j=i+1;j<nr_class;j++)
+				{
+					int n2 = count[j];
+					if(param.nu*(n1+n2)/2 > Math.min(n1,n2))
+ return "specified nu is infeasible";
+ }
+ }
+ }
+
+ return null;
+ }
+
+ public static int svm_check_probability_model(svm_model model)
+ {
+ if (((model.param.svm_type == svm_parameter.C_SVC || model.param.svm_type == svm_parameter.NU_SVC) &&
+ model.probA!=null && model.probB!=null) ||
+ ((model.param.svm_type == svm_parameter.EPSILON_SVR || model.param.svm_type == svm_parameter.NU_SVR) &&
+ model.probA!=null))
+ return 1;
+ else
+ return 0;
+ }
+
+ public static void svm_set_print_string_function(svm_print_interface print_func)
+ {
+ if (print_func == null)
+ svm_print_string = svm_print_stdout;
+ else
+ svm_print_string = print_func;
+ }
+}
diff --git a/libsvm-3.21/java/libsvm/svm.m4 b/libsvm-3.21/java/libsvm/svm.m4
new file mode 100644
index 0000000..5dca654
--- /dev/null
+++ b/libsvm-3.21/java/libsvm/svm.m4
@@ -0,0 +1,2849 @@
+define(`swap',`do {$1 _=$2; $2=$3; $3=_;} while(false)')
+define(`Qfloat',`float')
+define(`SIZE_OF_QFLOAT',4)
+define(`TAU',1e-12)
+changecom(`//',`')
+package libsvm;
+import java.io.*;
+import java.util.*;
+
+//
+// Kernel Cache
+//
+// l is the number of total data items
+// size is the cache size limit in bytes
+//
+class Cache {
+ private final int l;
+ private long size;
+ private final class head_t
+ {
+		head_t prev, next; // a circular list
+ Qfloat[] data;
+ int len; // data[0,len) is cached in this entry
+ }
+ private final head_t[] head;
+ private head_t lru_head;
+
+ Cache(int l_, long size_)
+ {
+ l = l_;
+ size = size_;
+ head = new head_t[l];
+		for(int i=0;i<l;i++) head[i] = new head_t();
+		size /= SIZE_OF_QFLOAT;
+		size -= l * (16/SIZE_OF_QFLOAT); // sizeof(head_t) == 16
+		size = Math.max(size, 2* (long) l); // cache must be large enough for two columns
+		lru_head = new head_t();
+		lru_head.next = lru_head.prev = lru_head;
+	}
+
+	private void lru_delete(head_t h)
+	{
+		// delete from current location
+		h.prev.next = h.next;
+		h.next.prev = h.prev;
+	}
+
+	private void lru_insert(head_t h)
+	{
+		// insert to last position
+		h.next = lru_head;
+		h.prev = lru_head.prev;
+		h.prev.next = h;
+		h.next.prev = h;
+	}
+
+	// request data [0,len)
+	// return some position p where [p,len) need to be filled
+	// (p >= len if nothing needs to be filled)
+ // java: simulate pointer using single-element array
+ int get_data(int index, Qfloat[][] data, int len)
+ {
+ head_t h = head[index];
+ if(h.len > 0) lru_delete(h);
+ int more = len - h.len;
+
+ if(more > 0)
+ {
+ // free old space
+ while(size < more)
+ {
+ head_t old = lru_head.next;
+ lru_delete(old);
+ size += old.len;
+ old.data = null;
+ old.len = 0;
+ }
+
+ // allocate new space
+ Qfloat[] new_data = new Qfloat[len];
+ if(h.data != null) System.arraycopy(h.data,0,new_data,0,h.len);
+ h.data = new_data;
+ size -= more;
+ swap(int,h.len,len);
+ }
+
+ lru_insert(h);
+ data[0] = h.data;
+ return len;
+ }
+
+ void swap_index(int i, int j)
+ {
+ if(i==j) return;
+
+ if(head[i].len > 0) lru_delete(head[i]);
+ if(head[j].len > 0) lru_delete(head[j]);
+ swap(Qfloat[],head[i].data,head[j].data);
+ swap(int,head[i].len,head[j].len);
+ if(head[i].len > 0) lru_insert(head[i]);
+ if(head[j].len > 0) lru_insert(head[j]);
+
+ if(i>j) swap(int,i,j);
+ for(head_t h = lru_head.next; h!=lru_head; h=h.next)
+ {
+ if(h.len > i)
+ {
+ if(h.len > j)
+ swap(Qfloat,h.data[i],h.data[j]);
+ else
+ {
+ // give up
+ lru_delete(h);
+ size += h.len;
+ h.data = null;
+ h.len = 0;
+ }
+ }
+ }
+ }
+}
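The `Cache` class above hand-rolls a least-recently-used eviction policy with a circular doubly linked list so that kernel columns evicted first are the ones untouched longest. The same policy can be sketched with `LinkedHashMap` in access-order mode (a simplification for illustration only; the class name `LruSketch` is hypothetical and libsvm deliberately avoids per-entry object overhead by managing its own list):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the LRU eviction policy that libsvm's Cache implements
// by hand: when capacity is exceeded, the least-recently-used entry goes.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruSketch(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry once over capacity
    }

    public static void main(String[] args) {
        LruSketch<Integer, String> cache = new LruSketch<>(2);
        cache.put(1, "col1");
        cache.put(2, "col2");
        cache.get(1);          // touch column 1; column 2 becomes the LRU entry
        cache.put(3, "col3");  // evicts column 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```

The hand-written version additionally supports partial columns (`len`) and `swap_index`, which a generic map cannot express; that is why the real `Cache` tracks byte budgets rather than entry counts.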
+
+//
+// Kernel evaluation
+//
+// the static method k_function is for doing single kernel evaluation
+// the constructor of Kernel prepares to calculate the l*l kernel matrix
+// the member function get_Q is for getting one column from the Q Matrix
+//
+abstract class QMatrix {
+ abstract Qfloat[] get_Q(int column, int len);
+ abstract double[] get_QD();
+ abstract void swap_index(int i, int j);
+};
+
+abstract class Kernel extends QMatrix {
+ private svm_node[][] x;
+ private final double[] x_square;
+
+ // svm_parameter
+ private final int kernel_type;
+ private final int degree;
+ private final double gamma;
+ private final double coef0;
+
+ abstract Qfloat[] get_Q(int column, int len);
+ abstract double[] get_QD();
+
+ void swap_index(int i, int j)
+ {
+ swap(svm_node[],x[i],x[j]);
+ if(x_square != null) swap(double,x_square[i],x_square[j]);
+ }
+
+ private static double powi(double base, int times)
+ {
+ double tmp = base, ret = 1.0;
+
+ for(int t=times; t>0; t/=2)
+ {
+ if(t%2==1) ret*=tmp;
+ tmp = tmp * tmp;
+ }
+ return ret;
+ }
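`powi` above is exponentiation by squaring: it consumes one bit of the integer exponent per loop iteration, so a degree-d polynomial kernel costs O(log d) multiplications instead of a `Math.pow` call through transcendental routines. A standalone copy for illustration (the class name `PowiDemo` is hypothetical):

```java
// Exponentiation by squaring, as used by libsvm's polynomial kernel.
public class PowiDemo {
    static double powi(double base, int times) {
        double tmp = base, ret = 1.0;
        // Each iteration handles one bit of the exponent; tmp holds base^(2^k).
        for (int t = times; t > 0; t /= 2) {
            if (t % 2 == 1) ret *= tmp; // multiply in base^(2^k) when bit k is set
            tmp = tmp * tmp;
        }
        return ret;
    }

    public static void main(String[] args) {
        System.out.println(powi(2.0, 10)); // 1024.0
        System.out.println(powi(3.0, 0));  // 1.0
    }
}
```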
+
+ double kernel_function(int i, int j)
+ {
+ switch(kernel_type)
+ {
+ case svm_parameter.LINEAR:
+ return dot(x[i],x[j]);
+ case svm_parameter.POLY:
+ return powi(gamma*dot(x[i],x[j])+coef0,degree);
+ case svm_parameter.RBF:
+ return Math.exp(-gamma*(x_square[i]+x_square[j]-2*dot(x[i],x[j])));
+ case svm_parameter.SIGMOID:
+ return Math.tanh(gamma*dot(x[i],x[j])+coef0);
+ case svm_parameter.PRECOMPUTED:
+ return x[i][(int)(x[j][0].value)].value;
+ default:
+ return 0; // java
+ }
+ }
+
+ Kernel(int l, svm_node[][] x_, svm_parameter param)
+ {
+ this.kernel_type = param.kernel_type;
+ this.degree = param.degree;
+ this.gamma = param.gamma;
+ this.coef0 = param.coef0;
+
+ x = (svm_node[][])x_.clone();
+
+ if(kernel_type == svm_parameter.RBF)
+ {
+ x_square = new double[l];
+			for(int i=0;i<l;i++)
+				x_square[i] = dot(x[i],x[i]);
+		}
+		else x_square = null;
+	}
+
+	static double dot(svm_node[] x, svm_node[] y)
+	{
+		double sum = 0;
+		int xlen = x.length;
+		int ylen = y.length;
+		int i = 0;
+		int j = 0;
+		while(i < xlen && j < ylen)
+		{
+			if(x[i].index == y[j].index)
+				sum += x[i++].value * y[j++].value;
+			else
+			{
+				if(x[i].index > y[j].index)
+ ++j;
+ else
+ ++i;
+ }
+ }
+ return sum;
+ }
+
+ static double k_function(svm_node[] x, svm_node[] y,
+ svm_parameter param)
+ {
+ switch(param.kernel_type)
+ {
+ case svm_parameter.LINEAR:
+ return dot(x,y);
+ case svm_parameter.POLY:
+ return powi(param.gamma*dot(x,y)+param.coef0,param.degree);
+ case svm_parameter.RBF:
+ {
+ double sum = 0;
+ int xlen = x.length;
+ int ylen = y.length;
+ int i = 0;
+ int j = 0;
+ while(i < xlen && j < ylen)
+ {
+ if(x[i].index == y[j].index)
+ {
+ double d = x[i++].value - y[j++].value;
+ sum += d*d;
+ }
+ else if(x[i].index > y[j].index)
+ {
+ sum += y[j].value * y[j].value;
+ ++j;
+ }
+ else
+ {
+ sum += x[i].value * x[i].value;
+ ++i;
+ }
+ }
+
+ while(i < xlen)
+ {
+ sum += x[i].value * x[i].value;
+ ++i;
+ }
+
+ while(j < ylen)
+ {
+ sum += y[j].value * y[j].value;
+ ++j;
+ }
+
+ return Math.exp(-param.gamma*sum);
+ }
+ case svm_parameter.SIGMOID:
+ return Math.tanh(param.gamma*dot(x,y)+param.coef0);
+ case svm_parameter.PRECOMPUTED:
+ return x[(int)(y[0].value)].value;
+ default:
+ return 0; // java
+ }
+ }
+}
+
+// An SMO algorithm in Fan et al., JMLR 6(2005), p. 1889--1918
+// Solves:
+//
+// min 0.5(\alpha^T Q \alpha) + p^T \alpha
+//
+// y^T \alpha = \delta
+// y_i = +1 or -1
+// 0 <= alpha_i <= Cp for y_i = 1
+// 0 <= alpha_i <= Cn for y_i = -1
+//
+// Given:
+//
+// Q, p, y, Cp, Cn, and an initial feasible point \alpha
+// l is the size of vectors and matrices
+// eps is the stopping tolerance
+//
+// solution will be put in \alpha, objective value will be put in obj
+//
+class Solver {
+ int active_size;
+ byte[] y;
+ double[] G; // gradient of objective function
+ static final byte LOWER_BOUND = 0;
+ static final byte UPPER_BOUND = 1;
+ static final byte FREE = 2;
+ byte[] alpha_status; // LOWER_BOUND, UPPER_BOUND, FREE
+ double[] alpha;
+ QMatrix Q;
+ double[] QD;
+ double eps;
+ double Cp,Cn;
+ double[] p;
+ int[] active_set;
+ double[] G_bar; // gradient, if we treat free variables as 0
+ int l;
+ boolean unshrink; // XXX
+
+ static final double INF = java.lang.Double.POSITIVE_INFINITY;
+
+ double get_C(int i)
+ {
+ return (y[i] > 0)? Cp : Cn;
+ }
+ void update_alpha_status(int i)
+ {
+ if(alpha[i] >= get_C(i))
+ alpha_status[i] = UPPER_BOUND;
+ else if(alpha[i] <= 0)
+ alpha_status[i] = LOWER_BOUND;
+ else alpha_status[i] = FREE;
+ }
+ boolean is_upper_bound(int i) { return alpha_status[i] == UPPER_BOUND; }
+ boolean is_lower_bound(int i) { return alpha_status[i] == LOWER_BOUND; }
+ boolean is_free(int i) { return alpha_status[i] == FREE; }
+
+ // java: information about solution except alpha,
+ // because we cannot return multiple values otherwise...
+ static class SolutionInfo {
+ double obj;
+ double rho;
+ double upper_bound_p;
+ double upper_bound_n;
+ double r; // for Solver_NU
+ }
+
+ void swap_index(int i, int j)
+ {
+ Q.swap_index(i,j);
+ swap(byte, y[i],y[j]);
+ swap(double, G[i],G[j]);
+ swap(byte, alpha_status[i],alpha_status[j]);
+ swap(double, alpha[i],alpha[j]);
+ swap(double, p[i],p[j]);
+ swap(int, active_set[i],active_set[j]);
+ swap(double, G_bar[i],G_bar[j]);
+ }
+
+ void reconstruct_gradient()
+ {
+ // reconstruct inactive elements of G from G_bar and free variables
+
+ if(active_size == l) return;
+
+ int i,j;
+ int nr_free = 0;
+
+		for(j=active_size;j<l;j++)
+			G[j] = G_bar[j] + p[j];
+
+		for(j=0;j<active_size;j++)
+			if(is_free(j))
+				nr_free++;
+
+		if(2*nr_free < active_size)
+			svm.info("\nWARNING: using -h 0 may be faster\n");
+
+		if (nr_free*l > 2*active_size*(l-active_size))
+ {
+			for(i=active_size;i<l;i++)
+			{
+				Qfloat[] Q_i = Q.get_Q(i,active_size);
+				for(j=0;j<active_size;j++)
+					if(is_free(j))
+						G[i] += alpha[j] * Q_i[j];
+			}
+		}
+		else
+		{
+			for(i=0;i<active_size;i++)
+				if(is_free(i))
+				{
+					Qfloat[] Q_i = Q.get_Q(i,l);
+					double alpha_i = alpha[i];
+					for(j=active_size;j<l;j++)
+						G[j] += alpha_i * Q_i[j];
+				}
+		}
+	}
+
+	void Solve(int l, QMatrix Q, double[] p_, byte[] y_,
+		   double[] alpha_, double Cp, double Cn, double eps, SolutionInfo si, int shrinking)
+	{
+		this.l = l;
+		this.Q = Q;
+		QD = Q.get_QD();
+		p = (double[])p_.clone();
+		y = (byte[])y_.clone();
+		alpha = (double[])alpha_.clone();
+		this.Cp = Cp;
+		this.Cn = Cn;
+		this.eps = eps;
+		this.unshrink = false;
+
+		// initialize alpha_status
+		{
+			alpha_status = new byte[l];
+			for(int i=0;i<l;i++)
+				update_alpha_status(i);
+		}
+
+		// initialize active set (for shrinking)
+		{
+			active_set = new int[l];
+			for(int i=0;i<l;i++)
+				active_set[i] = i;
+			active_size = l;
+		}
+
+		// initialize gradient
+		{
+			G = new double[l];
+			G_bar = new double[l];
+			int i;
+			for(i=0;i<l;i++)
+			{
+				G[i] = p[i];
+				G_bar[i] = 0;
+			}
+			for(i=0;i<l;i++)
+				if(!is_lower_bound(i))
+				{
+					Qfloat[] Q_i = Q.get_Q(i,l);
+					double alpha_i = alpha[i];
+					int j;
+					for(j=0;j<l;j++)
+						G[j] += alpha_i*Q_i[j];
+					if(is_upper_bound(i))
+						for(j=0;j<l;j++)
+							G_bar[j] += get_C(i) * Q_i[j];
+				}
+		}
+
+		// optimization step
+
+		int iter = 0;
+		int max_iter = Math.max(10000000, l>Integer.MAX_VALUE/100 ? Integer.MAX_VALUE : 100*l);
+ int counter = Math.min(l,1000)+1;
+ int[] working_set = new int[2];
+
+ while(iter < max_iter)
+ {
+ // show progress and do shrinking
+
+ if(--counter == 0)
+ {
+ counter = Math.min(l,1000);
+ if(shrinking!=0) do_shrinking();
+ svm.info(".");
+ }
+
+ if(select_working_set(working_set)!=0)
+ {
+ // reconstruct the whole gradient
+ reconstruct_gradient();
+ // reset active set size and check
+ active_size = l;
+ svm.info("*");
+ if(select_working_set(working_set)!=0)
+ break;
+ else
+ counter = 1; // do shrinking next iteration
+ }
+
+ int i = working_set[0];
+ int j = working_set[1];
+
+ ++iter;
+
+ // update alpha[i] and alpha[j], handle bounds carefully
+
+ Qfloat[] Q_i = Q.get_Q(i,active_size);
+ Qfloat[] Q_j = Q.get_Q(j,active_size);
+
+ double C_i = get_C(i);
+ double C_j = get_C(j);
+
+ double old_alpha_i = alpha[i];
+ double old_alpha_j = alpha[j];
+
+ if(y[i]!=y[j])
+ {
+ double quad_coef = QD[i]+QD[j]+2*Q_i[j];
+ if (quad_coef <= 0)
+ quad_coef = TAU;
+ double delta = (-G[i]-G[j])/quad_coef;
+ double diff = alpha[i] - alpha[j];
+ alpha[i] += delta;
+ alpha[j] += delta;
+
+ if(diff > 0)
+ {
+ if(alpha[j] < 0)
+ {
+ alpha[j] = 0;
+ alpha[i] = diff;
+ }
+ }
+ else
+ {
+ if(alpha[i] < 0)
+ {
+ alpha[i] = 0;
+ alpha[j] = -diff;
+ }
+ }
+ if(diff > C_i - C_j)
+ {
+ if(alpha[i] > C_i)
+ {
+ alpha[i] = C_i;
+ alpha[j] = C_i - diff;
+ }
+ }
+ else
+ {
+ if(alpha[j] > C_j)
+ {
+ alpha[j] = C_j;
+ alpha[i] = C_j + diff;
+ }
+ }
+ }
+ else
+ {
+ double quad_coef = QD[i]+QD[j]-2*Q_i[j];
+ if (quad_coef <= 0)
+ quad_coef = TAU;
+ double delta = (G[i]-G[j])/quad_coef;
+ double sum = alpha[i] + alpha[j];
+ alpha[i] -= delta;
+ alpha[j] += delta;
+
+ if(sum > C_i)
+ {
+ if(alpha[i] > C_i)
+ {
+ alpha[i] = C_i;
+ alpha[j] = sum - C_i;
+ }
+ }
+ else
+ {
+ if(alpha[j] < 0)
+ {
+ alpha[j] = 0;
+ alpha[i] = sum;
+ }
+ }
+ if(sum > C_j)
+ {
+ if(alpha[j] > C_j)
+ {
+ alpha[j] = C_j;
+ alpha[i] = sum - C_j;
+ }
+ }
+ else
+ {
+ if(alpha[i] < 0)
+ {
+ alpha[i] = 0;
+ alpha[j] = sum;
+ }
+ }
+ }
+
+ // update G
+
+ double delta_alpha_i = alpha[i] - old_alpha_i;
+ double delta_alpha_j = alpha[j] - old_alpha_j;
+
+			for(int k=0;k<active_size;k++)
+				G[k] += Q_i[k]*delta_alpha_i + Q_j[k]*delta_alpha_j;
+
+			// update alpha_status and G_bar
+
+			{
+				boolean ui = is_upper_bound(i);
+				boolean uj = is_upper_bound(j);
+				update_alpha_status(i);
+				update_alpha_status(j);
+				int k;
+				if(ui != is_upper_bound(i))
+				{
+					Q_i = Q.get_Q(i,l);
+					if(ui)
+						for(k=0;k<l;k++)
+							G_bar[k] -= C_i * Q_i[k];
+					else
+						for(k=0;k<l;k++)
+							G_bar[k] += C_i * Q_i[k];
+				}
+
+				if(uj != is_upper_bound(j))
+				{
+					Q_j = Q.get_Q(j,l);
+					if(uj)
+						for(k=0;k<l;k++)
+							G_bar[k] -= C_j * Q_j[k];
+					else
+						for(k=0;k<l;k++)
+							G_bar[k] += C_j * Q_j[k];
+				}
+			}
+		}
+
+		if(iter >= max_iter)
+ {
+ if(active_size < l)
+ {
+ // reconstruct the whole gradient to calculate objective value
+ reconstruct_gradient();
+ active_size = l;
+ svm.info("*");
+ }
+ System.err.print("\nWARNING: reaching max number of iterations\n");
+ }
+
+ // calculate rho
+
+ si.rho = calculate_rho();
+
+ // calculate objective value
+ {
+ double v = 0;
+ int i;
+			for(i=0;i<l;i++)
+				v += alpha[i] * (G[i] + p[i]);
+
+			si.obj = v/2;
+		}
+
+		// put back the solution
+		{
+			for(int i=0;i<l;i++)
+				alpha_[active_set[i]] = alpha[i];
+		}
+
+		si.upper_bound_p = Cp;
+		si.upper_bound_n = Cn;
+
+		svm.info("\noptimization finished, #iter = "+iter+"\n");
+	}
+
+	// return 1 if already optimal, return 0 otherwise
+	int select_working_set(int[] working_set)
+	{
+		// return i,j such that
+		// i: maximizes -y_i * grad(f)_i, i in I_up(\alpha)
+		// j: minimizes the decrease of obj value
+		//    (if quadratic coefficient <= 0, replace it with tau)
+		//    -y_j*grad(f)_j < -y_i*grad(f)_i, j in I_low(\alpha)
+
+		double Gmax = -INF;
+		double Gmax2 = -INF;
+		int Gmax_idx = -1;
+		int Gmin_idx = -1;
+		double obj_diff_min = INF;
+
+		for(int t=0;t<active_size;t++)
+			if(y[t]==+1)
+			{
+				if(!is_upper_bound(t))
+					if(-G[t] >= Gmax)
+ {
+ Gmax = -G[t];
+ Gmax_idx = t;
+ }
+ }
+ else
+ {
+ if(!is_lower_bound(t))
+ if(G[t] >= Gmax)
+ {
+ Gmax = G[t];
+ Gmax_idx = t;
+ }
+ }
+
+ int i = Gmax_idx;
+ Qfloat[] Q_i = null;
+ if(i != -1) // null Q_i not accessed: Gmax=-INF if i=-1
+ Q_i = Q.get_Q(i,active_size);
+
+		for(int j=0;j<active_size;j++)
+		{
+			if(y[j]==+1)
+			{
+				if (!is_lower_bound(j))
+				{
+					double grad_diff=Gmax+G[j];
+					if (G[j] >= Gmax2)
+ Gmax2 = G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[i]+QD[j]-2.0*y[i]*Q_i[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ else
+ {
+ if (!is_upper_bound(j))
+ {
+ double grad_diff= Gmax-G[j];
+ if (-G[j] >= Gmax2)
+ Gmax2 = -G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[i]+QD[j]+2.0*y[i]*Q_i[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ }
+
+ if(Gmax+Gmax2 < eps || Gmin_idx == -1)
+ return 1;
+
+ working_set[0] = Gmax_idx;
+ working_set[1] = Gmin_idx;
+ return 0;
+ }
+
+ private boolean be_shrunk(int i, double Gmax1, double Gmax2)
+ {
+ if(is_upper_bound(i))
+ {
+ if(y[i]==+1)
+ return(-G[i] > Gmax1);
+ else
+ return(-G[i] > Gmax2);
+ }
+ else if(is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ return(G[i] > Gmax2);
+ else
+ return(G[i] > Gmax1);
+ }
+ else
+ return(false);
+ }
+
+ void do_shrinking()
+ {
+ int i;
+ double Gmax1 = -INF; // max { -y_i * grad(f)_i | i in I_up(\alpha) }
+ double Gmax2 = -INF; // max { y_i * grad(f)_i | i in I_low(\alpha) }
+
+ // find maximal violating pair first
+		for(i=0;i<active_size;i++)
+		{
+			if(y[i]==+1)
+			{
+				if(!is_upper_bound(i))
+				{
+					if(-G[i] >= Gmax1)
+ Gmax1 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(G[i] >= Gmax2)
+ Gmax2 = G[i];
+ }
+ }
+ else
+ {
+ if(!is_upper_bound(i))
+ {
+ if(-G[i] >= Gmax2)
+ Gmax2 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(G[i] >= Gmax1)
+ Gmax1 = G[i];
+ }
+ }
+ }
+
+ if(unshrink == false && Gmax1 + Gmax2 <= eps*10)
+ {
+ unshrink = true;
+ reconstruct_gradient();
+ active_size = l;
+ }
+
+		for(i=0;i<active_size;i++)
+			if (be_shrunk(i, Gmax1, Gmax2))
+			{
+				active_size--;
+				while (active_size > i)
+				{
+ {
+ if (!be_shrunk(active_size, Gmax1, Gmax2))
+ {
+ swap_index(i,active_size);
+ break;
+ }
+ active_size--;
+ }
+ }
+ }
+
+ double calculate_rho()
+ {
+ double r;
+ int nr_free = 0;
+ double ub = INF, lb = -INF, sum_free = 0;
+		for(int i=0;i<active_size;i++)
+		{
+			double yG = y[i]*G[i];
+
+			if(is_lower_bound(i))
+			{
+				if(y[i] > 0)
+ ub = Math.min(ub,yG);
+ else
+ lb = Math.max(lb,yG);
+ }
+ else if(is_upper_bound(i))
+ {
+ if(y[i] < 0)
+ ub = Math.min(ub,yG);
+ else
+ lb = Math.max(lb,yG);
+ }
+ else
+ {
+ ++nr_free;
+ sum_free += yG;
+ }
+ }
+
+ if(nr_free>0)
+ r = sum_free/nr_free;
+ else
+ r = (ub+lb)/2;
+
+ return r;
+ }
+
+}
+
+//
+// Solver for nu-svm classification and regression
+//
+// additional constraint: e^T \alpha = constant
+//
+final class Solver_NU extends Solver
+{
+ private SolutionInfo si;
+
+ void Solve(int l, QMatrix Q, double[] p, byte[] y,
+ double[] alpha, double Cp, double Cn, double eps,
+ SolutionInfo si, int shrinking)
+ {
+ this.si = si;
+ super.Solve(l,Q,p,y,alpha,Cp,Cn,eps,si,shrinking);
+ }
+
+ // return 1 if already optimal, return 0 otherwise
+ int select_working_set(int[] working_set)
+ {
+ // return i,j such that y_i = y_j and
+ // i: maximizes -y_i * grad(f)_i, i in I_up(\alpha)
+ // j: minimizes the decrease of obj value
+		// (if quadratic coefficient <= 0, replace it with tau)
+ // -y_j*grad(f)_j < -y_i*grad(f)_i, j in I_low(\alpha)
+
+ double Gmaxp = -INF;
+ double Gmaxp2 = -INF;
+ int Gmaxp_idx = -1;
+
+ double Gmaxn = -INF;
+ double Gmaxn2 = -INF;
+ int Gmaxn_idx = -1;
+
+ int Gmin_idx = -1;
+ double obj_diff_min = INF;
+
+		for(int t=0;t<active_size;t++)
+			if(y[t]==+1)
+			{
+				if(!is_upper_bound(t))
+					if(-G[t] >= Gmaxp)
+ {
+ Gmaxp = -G[t];
+ Gmaxp_idx = t;
+ }
+ }
+ else
+ {
+ if(!is_lower_bound(t))
+ if(G[t] >= Gmaxn)
+ {
+ Gmaxn = G[t];
+ Gmaxn_idx = t;
+ }
+ }
+
+ int ip = Gmaxp_idx;
+ int in = Gmaxn_idx;
+ Qfloat[] Q_ip = null;
+ Qfloat[] Q_in = null;
+ if(ip != -1) // null Q_ip not accessed: Gmaxp=-INF if ip=-1
+ Q_ip = Q.get_Q(ip,active_size);
+ if(in != -1)
+ Q_in = Q.get_Q(in,active_size);
+
+		for(int j=0;j<active_size;j++)
+		{
+			if(y[j]==+1)
+			{
+				if (!is_lower_bound(j))
+				{
+					double grad_diff=Gmaxp+G[j];
+					if (G[j] >= Gmaxp2)
+ Gmaxp2 = G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[ip]+QD[j]-2*Q_ip[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ else
+ {
+ if (!is_upper_bound(j))
+ {
+ double grad_diff=Gmaxn-G[j];
+ if (-G[j] >= Gmaxn2)
+ Gmaxn2 = -G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[in]+QD[j]-2*Q_in[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ }
+
+ if(Math.max(Gmaxp+Gmaxp2,Gmaxn+Gmaxn2) < eps || Gmin_idx == -1)
+ return 1;
+
+ if(y[Gmin_idx] == +1)
+ working_set[0] = Gmaxp_idx;
+ else
+ working_set[0] = Gmaxn_idx;
+ working_set[1] = Gmin_idx;
+
+ return 0;
+ }
+
+ private boolean be_shrunk(int i, double Gmax1, double Gmax2, double Gmax3, double Gmax4)
+ {
+ if(is_upper_bound(i))
+ {
+ if(y[i]==+1)
+ return(-G[i] > Gmax1);
+ else
+ return(-G[i] > Gmax4);
+ }
+ else if(is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ return(G[i] > Gmax2);
+ else
+ return(G[i] > Gmax3);
+ }
+ else
+ return(false);
+ }
+
+ void do_shrinking()
+ {
+ double Gmax1 = -INF; // max { -y_i * grad(f)_i | y_i = +1, i in I_up(\alpha) }
+ double Gmax2 = -INF; // max { y_i * grad(f)_i | y_i = +1, i in I_low(\alpha) }
+ double Gmax3 = -INF; // max { -y_i * grad(f)_i | y_i = -1, i in I_up(\alpha) }
+ double Gmax4 = -INF; // max { y_i * grad(f)_i | y_i = -1, i in I_low(\alpha) }
+
+ // find maximal violating pair first
+ int i;
+		for(i=0;i<active_size;i++)
+		{
+			if(!is_upper_bound(i))
+			{
+				if(y[i]==+1)
+				{
+					if(-G[i] > Gmax1) Gmax1 = -G[i];
+ }
+ else if(-G[i] > Gmax4) Gmax4 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ {
+ if(G[i] > Gmax2) Gmax2 = G[i];
+ }
+ else if(G[i] > Gmax3) Gmax3 = G[i];
+ }
+ }
+
+ if(unshrink == false && Math.max(Gmax1+Gmax2,Gmax3+Gmax4) <= eps*10)
+ {
+ unshrink = true;
+ reconstruct_gradient();
+ active_size = l;
+ }
+
+		for(i=0;i<active_size;i++)
+			if (be_shrunk(i, Gmax1, Gmax2, Gmax3, Gmax4))
+			{
+				active_size--;
+				while(active_size > i)
+				{
+ {
+ if (!be_shrunk(active_size, Gmax1, Gmax2, Gmax3, Gmax4))
+ {
+ swap_index(i,active_size);
+ break;
+ }
+ active_size--;
+ }
+ }
+ }
+
+ double calculate_rho()
+ {
+ int nr_free1 = 0,nr_free2 = 0;
+ double ub1 = INF, ub2 = INF;
+ double lb1 = -INF, lb2 = -INF;
+ double sum_free1 = 0, sum_free2 = 0;
+
+		for(int i=0;i<active_size;i++)
+		{
+			if(y[i]==+1)
+			{
+				if(is_upper_bound(i))
+					lb1 = Math.max(lb1,G[i]);
+				else if(is_lower_bound(i))
+					ub1 = Math.min(ub1,G[i]);
+				else
+				{
+					++nr_free1;
+					sum_free1 += G[i];
+				}
+			}
+			else
+			{
+				if(is_upper_bound(i))
+					lb2 = Math.max(lb2,G[i]);
+				else if(is_lower_bound(i))
+					ub2 = Math.min(ub2,G[i]);
+				else
+				{
+					++nr_free2;
+					sum_free2 += G[i];
+				}
+			}
+		}
+
+		double r1,r2;
+		if(nr_free1 > 0)
+ r1 = sum_free1/nr_free1;
+ else
+ r1 = (ub1+lb1)/2;
+
+ if(nr_free2 > 0)
+ r2 = sum_free2/nr_free2;
+ else
+ r2 = (ub2+lb2)/2;
+
+ si.r = (r1+r2)/2;
+ return (r1-r2)/2;
+ }
+}
+
+//
+// Q matrices for various formulations
+//
+class SVC_Q extends Kernel
+{
+ private final byte[] y;
+ private final Cache cache;
+ private final double[] QD;
+
+ SVC_Q(svm_problem prob, svm_parameter param, byte[] y_)
+ {
+ super(prob.l, prob.x, param);
+ y = (byte[])y_.clone();
+ cache = new Cache(prob.l,(long)(param.cache_size*(1<<20)));
+ QD = new double[prob.l];
+ for(int i=0;i 0) y[i] = +1; else y[i] = -1;
+ }
+
+ Solver s = new Solver();
+ s.Solve(l, new SVC_Q(prob,param,y), minus_ones, y,
+ alpha, Cp, Cn, param.eps, si, param.shrinking);
+
+ double sum_alpha=0;
+ for(i=0;i0)
+ y[i] = +1;
+ else
+ y[i] = -1;
+
+ double sum_pos = nu*l/2;
+ double sum_neg = nu*l/2;
+
+ for(i=0;i 0)
+ {
+ ++nSV;
+ if(prob.y[i] > 0)
+ {
+ if(Math.abs(alpha[i]) >= si.upper_bound_p)
+ ++nBSV;
+ }
+ else
+ {
+ if(Math.abs(alpha[i]) >= si.upper_bound_n)
+ ++nBSV;
+ }
+ }
+ }
+
+ svm.info("nSV = "+nSV+", nBSV = "+nBSV+"\n");
+
+ decision_function f = new decision_function();
+ f.alpha = alpha;
+ f.rho = si.rho;
+ return f;
+ }
+
+	// Platt's binary SVM Probabilistic Output: an improvement from Lin et al.
+ private static void sigmoid_train(int l, double[] dec_values, double[] labels,
+ double[] probAB)
+ {
+ double A, B;
+ double prior1=0, prior0 = 0;
+ int i;
+
+		for (i=0;i<l;i++)
+			if (labels[i] > 0) prior1+=1;
+ else prior0+=1;
+
+ int max_iter=100; // Maximal number of iterations
+ double min_step=1e-10; // Minimal step taken in line search
+ double sigma=1e-12; // For numerically strict PD of Hessian
+ double eps=1e-5;
+ double hiTarget=(prior1+1.0)/(prior1+2.0);
+ double loTarget=1/(prior0+2.0);
+ double[] t= new double[l];
+ double fApB,p,q,h11,h22,h21,g1,g2,det,dA,dB,gd,stepsize;
+ double newA,newB,newf,d1,d2;
+ int iter;
+
+ // Initial Point and Initial Fun Value
+ A=0.0; B=Math.log((prior0+1.0)/(prior1+1.0));
+ double fval = 0.0;
+
+		for (i=0;i<l;i++)
+		{
+			if (labels[i]>0) t[i]=hiTarget;
+ else t[i]=loTarget;
+ fApB = dec_values[i]*A+B;
+ if (fApB>=0)
+ fval += t[i]*fApB + Math.log(1+Math.exp(-fApB));
+ else
+ fval += (t[i] - 1)*fApB +Math.log(1+Math.exp(fApB));
+ }
+ for (iter=0;iter<max_iter;iter++)
+ {
+ 	// Update Gradient and Hessian (use H' = H + sigma I)
+ 	h11=sigma; // numerically ensures strict PD
+ 	h22=sigma;
+ 	h21=0.0;g1=0.0;g2=0.0;
+ 	for (i=0;i<l;i++)
+ 	{
+ 		fApB = dec_values[i]*A+B;
+ 		if (fApB >= 0)
+ {
+ p=Math.exp(-fApB)/(1.0+Math.exp(-fApB));
+ q=1.0/(1.0+Math.exp(-fApB));
+ }
+ else
+ {
+ p=1.0/(1.0+Math.exp(fApB));
+ q=Math.exp(fApB)/(1.0+Math.exp(fApB));
+ }
+ d2=p*q;
+ h11+=dec_values[i]*dec_values[i]*d2;
+ h22+=d2;
+ h21+=dec_values[i]*d2;
+ d1=t[i]-p;
+ g1+=dec_values[i]*d1;
+ g2+=d1;
+ }
+
+ // Stopping Criteria
+ if (Math.abs(g1)<eps && Math.abs(g2)<eps)
+ 	break;
+
+ // Finding Newton direction: -inv(H') * g
+ det=h11*h22-h21*h21;
+ dA=-(h22*g1 - h21 * g2) / det;
+ dB=-(-h21*g1+ h11 * g2) / det;
+ gd=g1*dA+g2*dB;
+
+ stepsize = 1; // Line Search
+ while (stepsize >= min_step)
+ {
+ newA = A + stepsize * dA;
+ newB = B + stepsize * dB;
+
+ // New function value
+ newf = 0.0;
+ for (i=0;i<l;i++)
+ {
+ 	fApB = dec_values[i]*newA+newB;
+ 	if (fApB >= 0)
+ newf += t[i]*fApB + Math.log(1+Math.exp(-fApB));
+ else
+ newf += (t[i] - 1)*fApB +Math.log(1+Math.exp(fApB));
+ }
+ // Check sufficient decrease
+ if (newf<fval+0.0001*stepsize*gd)
+ {
+ 	A=newA;B=newB;fval=newf;
+ 	break; // Sufficient Decrease Satisfied
+ }
+ else
+ 	stepsize = stepsize / 2.0;
+ }
+
+ if (stepsize < min_step)
+ {
+ 	svm.info("Line search fails in two-class probability estimates\n");
+ 	break;
+ }
+ }
+
+ if (iter>=max_iter)
+ svm.info("Reaching maximal iterations in two-class probability estimates\n");
+ probAB[0]=A;probAB[1]=B;
+ }
+
+ private static double sigmoid_predict(double decision_value, double A, double B)
+ {
+ double fApB = decision_value*A+B;
+ if (fApB >= 0)
+ return Math.exp(-fApB)/(1.0+Math.exp(-fApB));
+ else
+ return 1.0/(1+Math.exp(fApB)) ;
+ }
+
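The `sigmoid_predict` method above applies Platt scaling: p(y=1|f) = 1/(1+exp(A*f+B)), with two branches so `Math.exp` is only ever called on a non-positive argument and cannot overflow. A standalone sketch of the same rule (`PlattSigmoid` is an illustrative class name, not part of libsvm):

```java
public class PlattSigmoid {
    // p(y=1|f) = 1/(1+exp(A*f+B)); the two branches keep exp()'s
    // argument non-positive, mirroring sigmoid_predict above.
    static double predict(double decisionValue, double A, double B) {
        double fApB = decisionValue * A + B;
        if (fApB >= 0)
            return Math.exp(-fApB) / (1.0 + Math.exp(-fApB));
        else
            return 1.0 / (1.0 + Math.exp(fApB));
    }

    public static void main(String[] args) {
        // A is typically negative after sigmoid_train, so larger
        // decision values map to probabilities closer to 1.
        System.out.println(predict(0.0, -2.0, 0.0)); // 0.5
        System.out.println(predict(5.0, -2.0, 0.0));
    }
}
```

With A = -2 and B = 0 a decision value of 0 sits exactly at probability 0.5, and large positive/negative values saturate toward 1 and 0.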
+ // Method 2 from the multiclass_prob paper by Wu, Lin, and Weng
+ private static void multiclass_probability(int k, double[][] r, double[] p)
+ {
+ int t,j;
+ int iter = 0, max_iter=Math.max(100,k);
+ double[][] Q=new double[k][k];
+ double[] Qp=new double[k];
+ double pQp, eps=0.005/k;
+
+ double max_error=0;
+ for (t=0;t<k;t++)
+ {
+ 	double error=Math.abs(Qp[t]-pQp);
+ 	if (error>max_error)
+ 		max_error=error;
+ }
+ if (max_error<eps) break;
+ }
+ if (iter>=max_iter)
+ 	svm.info("Exceeds max_iter in multiclass_prob\n");
+ }
+
+ // Cross-validation decision values for probability estimates
+ private static void svm_binary_svc_probability(svm_problem prob, svm_parameter param, double Cp, double Cn, double[] probAB)
+ {
+ int i;
+ int nr_fold = 5;
+ int[] perm = new int[prob.l];
+ double[] dec_values = new double[prob.l];
+
+ // random shuffle
+ for(j=0;j<k;j++)
+ 	if(subprob.y[j]>0)
+ p_count++;
+ else
+ n_count++;
+
+ if(p_count==0 && n_count==0)
+ 	for(j=begin;j<end;j++)
+ 		dec_values[perm[j]] = 0;
+ else if(p_count > 0 && n_count == 0)
+ 	for(j=begin;j<end;j++)
+ 		dec_values[perm[j]] = 1;
+ else if(p_count == 0 && n_count > 0)
+ 	for(j=begin;j<end;j++)
+ 		dec_values[perm[j]] = -1;
+
+ for(i=0;i<prob.l;i++)
+ 	if (Math.abs(ymv[i]) > 5*std)
+ 		count=count+1;
+ else
+ mae+=Math.abs(ymv[i]);
+ mae /= (prob.l-count);
+ svm.info("Prob. model for test data: target value = predicted value + z,\nz: Laplace distribution e^(-|z|/sigma)/(2sigma),sigma="+mae+"\n");
+ return mae;
+ }
+
+ // label: label name, start: begin of each class, count: #data of classes, perm: indices to the original data
+ // perm, length l, must be allocated before calling this subroutine
+ private static void svm_group_classes(svm_problem prob, int[] nr_class_ret, int[][] label_ret, int[][] start_ret, int[][] count_ret, int[] perm)
+ {
+ int l = prob.l;
+ int max_nr_class = 16;
+ int nr_class = 0;
+ int[] label = new int[max_nr_class];
+ int[] count = new int[max_nr_class];
+ int[] data_label = new int[l];
+ int i;
+
+ int nSV = 0;
+ for(i=0;i<prob.l;i++)
+ 	if(Math.abs(f.alpha[i]) > 0) ++nSV;
+ model.l = nSV;
+ model.SV = new svm_node[nSV][];
+ model.sv_coef[0] = new double[nSV];
+ model.sv_indices = new int[nSV];
+ int j = 0;
+ for(i=0;i<prob.l;i++)
+ 	if(Math.abs(f.alpha[i]) > 0)
+ {
+ model.SV[j] = prob.x[i];
+ model.sv_coef[0][j] = f.alpha[i];
+ model.sv_indices[j] = i+1;
+ ++j;
+ }
+ }
+ else
+ {
+ // classification
+ int l = prob.l;
+ int[] tmp_nr_class = new int[1];
+ int[][] tmp_label = new int[1][];
+ int[][] tmp_start = new int[1][];
+ int[][] tmp_count = new int[1][];
+ int[] perm = new int[l];
+
+ // group training data of the same class
+ svm_group_classes(prob,tmp_nr_class,tmp_label,tmp_start,tmp_count,perm);
+ int nr_class = tmp_nr_class[0];
+ int[] label = tmp_label[0];
+ int[] start = tmp_start[0];
+ int[] count = tmp_count[0];
+
+ if(nr_class == 1)
+ svm.info("WARNING: training data in only one class. See README for details.\n");
+
+ svm_node[][] x = new svm_node[l][];
+ int i;
+ for(i=0;i<l;i++)
+ 	x[i] = prob.x[perm[i]];
+
+ for(k=0;k<ci;k++)
+ 	if(!nonzero[si+k] && Math.abs(f[p].alpha[k]) > 0)
+ 		nonzero[si+k] = true;
+ for(k=0;k<cj;k++)
+ 	if(!nonzero[sj+k] && Math.abs(f[p].alpha[ci+k]) > 0)
+ 		nonzero[sj+k] = true;
+ ++p;
+ }
+
+ // build output
+
+ model.nr_class = nr_class;
+
+ model.label = new int[nr_class];
+ for(i=0;i<nr_class;i++)
+ 	model.label[i] = label[i];
+
+ // stratified cv may not give leave-one-out rate
+ // Each class to l folds -> some folds may have zero elements
+ if((param.svm_type == svm_parameter.C_SVC ||
+ param.svm_type == svm_parameter.NU_SVC) && nr_fold < l)
+ {
+ int[] tmp_nr_class = new int[1];
+ int[][] tmp_label = new int[1][];
+ int[][] tmp_start = new int[1][];
+ int[][] tmp_count = new int[1][];
+
+ svm_group_classes(prob,tmp_nr_class,tmp_label,tmp_start,tmp_count,perm);
+
+ int nr_class = tmp_nr_class[0];
+ int[] start = tmp_start[0];
+ int[] count = tmp_count[0];
+
+ // random shuffle and then data grouped by fold using the array perm
+ int[] fold_count = new int[nr_fold];
+ int c;
+ int[] index = new int[l];
+ double[] sv_coef = model.sv_coef[0];
+ double sum = 0;
+ for(i=0;i<model.l;i++)
+ 	sum += sv_coef[i] * Kernel.k_function(x,model.SV[i],model.param);
+ sum -= model.rho[0];
+ dec_values[0] = sum;
+
+ if(model.param.svm_type == svm_parameter.ONE_CLASS)
+ 	return (sum>0)?1:-1;
+ else
+ return sum;
+ }
+ else
+ {
+ int nr_class = model.nr_class;
+ int l = model.l;
+
+ double[] kvalue = new double[l];
+ for(i=0;i<l;i++)
+ 	kvalue[i] = Kernel.k_function(x,model.SV[i],model.param);
+
+ if(dec_values[p] > 0)
+ ++vote[i];
+ else
+ ++vote[j];
+ p++;
+ }
+
+ int vote_max_idx = 0;
+ for(i=1;i<nr_class;i++)
+ 	if(vote[i] > vote[vote_max_idx])
+ vote_max_idx = i;
+
+ return model.label[vote_max_idx];
+ }
+ }
+
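The classification branch of `svm_predict_values` implements one-vs-one voting: each of the k*(k-1)/2 pairwise decision values casts one vote, and the class with the most votes wins. A self-contained sketch of just the voting step (`OvoVote` is an illustrative name; the pair ordering (0,1),(0,2),...,(1,2),... matches the loop above):

```java
public class OvoVote {
    // decValues holds the k*(k-1)/2 pairwise decisions in the order
    // (0,1),(0,2),...,(k-2,k-1); returns the winning class index.
    static int predictLabel(double[] decValues, int nrClass) {
        int[] vote = new int[nrClass];
        int p = 0;
        for (int i = 0; i < nrClass; i++)
            for (int j = i + 1; j < nrClass; j++) {
                if (decValues[p++] > 0) ++vote[i]; else ++vote[j];
            }
        int best = 0;
        for (int i = 1; i < nrClass; i++)
            if (vote[i] > vote[best]) best = i;
        return best;
    }

    public static void main(String[] args) {
        // 3 classes -> pairs (0,1),(0,2),(1,2); class 2 wins pairs 2 and 3
        System.out.println(predictLabel(new double[]{1.0, -1.0, -1.0}, 3)); // 2
    }
}
```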
+ public static double svm_predict(svm_model model, svm_node[] x)
+ {
+ int nr_class = model.nr_class;
+ double[] dec_values;
+ if(model.param.svm_type == svm_parameter.ONE_CLASS ||
+ model.param.svm_type == svm_parameter.EPSILON_SVR ||
+ model.param.svm_type == svm_parameter.NU_SVR)
+ dec_values = new double[1];
+ else
+ dec_values = new double[nr_class*(nr_class-1)/2];
+ double pred_result = svm_predict_values(model, x, dec_values);
+ return pred_result;
+ }
+
+ public static double svm_predict_probability(svm_model model, svm_node[] x, double[] prob_estimates)
+ {
+ if ((model.param.svm_type == svm_parameter.C_SVC || model.param.svm_type == svm_parameter.NU_SVC) &&
+ model.probA!=null && model.probB!=null)
+ {
+ int i;
+ int nr_class = model.nr_class;
+ double[] dec_values = new double[nr_class*(nr_class-1)/2];
+ svm_predict_values(model, x, dec_values);
+
+ double min_prob=1e-7;
+ double[][] pairwise_prob=new double[nr_class][nr_class];
+
+ int k=0;
+ for(i=0;i<nr_class;i++)
+ 	for(int j=i+1;j<nr_class;j++)
+ 	{
+ 		pairwise_prob[i][j]=Math.min(Math.max(sigmoid_predict(dec_values[k],model.probA[k],model.probB[k]),min_prob),1-min_prob);
+ 		pairwise_prob[j][i]=1-pairwise_prob[i][j];
+ 		k++;
+ 	}
+ multiclass_probability(nr_class,pairwise_prob,prob_estimates);
+
+ int prob_max_idx = 0;
+ for(i=1;i<nr_class;i++)
+ 	if(prob_estimates[i] > prob_estimates[prob_max_idx])
+ prob_max_idx = i;
+ return model.label[prob_max_idx];
+ }
+ else
+ return svm_predict(model, x);
+ }
+
+ static final String svm_type_table[] =
+ {
+ "c_svc","nu_svc","one_class","epsilon_svr","nu_svr",
+ };
+
+ static final String kernel_type_table[]=
+ {
+ "linear","polynomial","rbf","sigmoid","precomputed"
+ };
+
+ public static void svm_save_model(String model_file_name, svm_model model) throws IOException
+ {
+ DataOutputStream fp = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(model_file_name)));
+
+ svm_parameter param = model.param;
+
+ fp.writeBytes("svm_type "+svm_type_table[param.svm_type]+"\n");
+ fp.writeBytes("kernel_type "+kernel_type_table[param.kernel_type]+"\n");
+
+ if(param.kernel_type == svm_parameter.POLY)
+ fp.writeBytes("degree "+param.degree+"\n");
+
+ if(param.kernel_type == svm_parameter.POLY ||
+ param.kernel_type == svm_parameter.RBF ||
+ param.kernel_type == svm_parameter.SIGMOID)
+ fp.writeBytes("gamma "+param.gamma+"\n");
+
+ if(param.kernel_type == svm_parameter.POLY ||
+ param.kernel_type == svm_parameter.SIGMOID)
+ fp.writeBytes("coef0 "+param.coef0+"\n");
+
+ int nr_class = model.nr_class;
+ int l = model.l;
+ fp.writeBytes("nr_class "+nr_class+"\n");
+ fp.writeBytes("total_sv "+l+"\n");
+
+ {
+ fp.writeBytes("rho");
+ for(int i=0;i<nr_class*(nr_class-1)/2;i++)
+ 	fp.writeBytes(" "+model.rho[i]);
+ fp.writeBytes("\n");
+
+ if(param.nu <= 0 || param.nu > 1)
+ return "nu <= 0 or nu > 1";
+
+ if(svm_type == svm_parameter.EPSILON_SVR)
+ if(param.p < 0)
+ return "p < 0";
+
+ if(param.shrinking != 0 &&
+ param.shrinking != 1)
+ return "shrinking != 0 and shrinking != 1";
+
+ if(param.probability != 0 &&
+ param.probability != 1)
+ return "probability != 0 and probability != 1";
+
+ if(param.probability == 1 &&
+ svm_type == svm_parameter.ONE_CLASS)
+ return "one-class SVM probability output not supported yet";
+
+ // check whether nu-svc is feasible
+
+ if(svm_type == svm_parameter.NU_SVC)
+ {
+ int l = prob.l;
+ int max_nr_class = 16;
+ int nr_class = 0;
+ int[] label = new int[max_nr_class];
+ int[] count = new int[max_nr_class];
+
+ int i;
+ for(i=0;i<nr_class;i++)
+ {
+ 	int n1 = count[i];
+ 	for(int j=i+1;j<nr_class;j++)
+ 	{
+ 		int n2 = count[j];
+ 		if(param.nu*(n1+n2)/2 > Math.min(n1,n2))
+ return "specified nu is infeasible";
+ }
+ }
+ }
+
+ return null;
+ }
+
+ public static int svm_check_probability_model(svm_model model)
+ {
+ if (((model.param.svm_type == svm_parameter.C_SVC || model.param.svm_type == svm_parameter.NU_SVC) &&
+ model.probA!=null && model.probB!=null) ||
+ ((model.param.svm_type == svm_parameter.EPSILON_SVR || model.param.svm_type == svm_parameter.NU_SVR) &&
+ model.probA!=null))
+ return 1;
+ else
+ return 0;
+ }
+
+ public static void svm_set_print_string_function(svm_print_interface print_func)
+ {
+ if (print_func == null)
+ svm_print_string = svm_print_stdout;
+ else
+ svm_print_string = print_func;
+ }
+}
diff --git a/libsvm-3.21/java/libsvm/svm_model.java b/libsvm-3.21/java/libsvm/svm_model.java
new file mode 100644
index 0000000..a38be3f
--- /dev/null
+++ b/libsvm-3.21/java/libsvm/svm_model.java
@@ -0,0 +1,22 @@
+//
+// svm_model
+//
+package libsvm;
+public class svm_model implements java.io.Serializable
+{
+ public svm_parameter param; // parameter
+ public int nr_class; // number of classes, = 2 in regression/one class svm
+ public int l; // total #SV
+ public svm_node[][] SV; // SVs (SV[l])
+ public double[][] sv_coef; // coefficients for SVs in decision functions (sv_coef[k-1][l])
+ public double[] rho; // constants in decision functions (rho[k*(k-1)/2])
+ public double[] probA; // pairwise probability information
+ public double[] probB;
+ public int[] sv_indices; // sv_indices[0,...,nSV-1] are values in [1,...,num_training_data] to indicate SVs in the training set
+
+ // for classification only
+
+ public int[] label; // label of each class (label[k])
+ public int[] nSV; // number of SVs for each class (nSV[k])
+ // nSV[0] + nSV[1] + ... + nSV[k-1] = l
+};
diff --git a/libsvm-3.21/java/libsvm/svm_node.java b/libsvm-3.21/java/libsvm/svm_node.java
new file mode 100644
index 0000000..9ab0a10
--- /dev/null
+++ b/libsvm-3.21/java/libsvm/svm_node.java
@@ -0,0 +1,6 @@
+package libsvm;
+public class svm_node implements java.io.Serializable
+{
+ public int index;
+ public double value;
+}
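An `svm_node[]` array stores a feature vector sparsely as ascending (index, value) pairs; the linear kernel's dot product walks two such arrays in merge fashion, skipping indices present in only one vector. A sketch of that idea with a local stand-in class (`SparseNode`/`SparseDot` are illustrative names, not libsvm API):

```java
public class SparseDot {
    // Stand-in for libsvm's svm_node: one (index, value) pair.
    static class SparseNode {
        int index; double value;
        SparseNode(int i, double v) { index = i; value = v; }
    }

    // Merge-style dot product over ascending indices.
    static double dot(SparseNode[] x, SparseNode[] y) {
        double sum = 0;
        int i = 0, j = 0;
        while (i < x.length && j < y.length) {
            if (x[i].index == y[j].index) sum += x[i++].value * y[j++].value;
            else if (x[i].index > y[j].index) ++j;  // y has an extra index
            else ++i;                               // x has an extra index
        }
        return sum;
    }

    public static void main(String[] args) {
        SparseNode[] a = { new SparseNode(1, 2.0), new SparseNode(4, 1.0) };
        SparseNode[] b = { new SparseNode(1, 3.0), new SparseNode(3, 5.0) };
        System.out.println(dot(a, b)); // only index 1 overlaps: 2*3 = 6.0
    }
}
```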
diff --git a/libsvm-3.21/java/libsvm/svm_parameter.java b/libsvm-3.21/java/libsvm/svm_parameter.java
new file mode 100644
index 0000000..429f041
--- /dev/null
+++ b/libsvm-3.21/java/libsvm/svm_parameter.java
@@ -0,0 +1,47 @@
+package libsvm;
+public class svm_parameter implements Cloneable,java.io.Serializable
+{
+ /* svm_type */
+ public static final int C_SVC = 0;
+ public static final int NU_SVC = 1;
+ public static final int ONE_CLASS = 2;
+ public static final int EPSILON_SVR = 3;
+ public static final int NU_SVR = 4;
+
+ /* kernel_type */
+ public static final int LINEAR = 0;
+ public static final int POLY = 1;
+ public static final int RBF = 2;
+ public static final int SIGMOID = 3;
+ public static final int PRECOMPUTED = 4;
+
+ public int svm_type;
+ public int kernel_type;
+ public int degree; // for poly
+ public double gamma; // for poly/rbf/sigmoid
+ public double coef0; // for poly/sigmoid
+
+ // these are for training only
+ public double cache_size; // in MB
+ public double eps; // stopping criteria
+ public double C; // for C_SVC, EPSILON_SVR and NU_SVR
+ public int nr_weight; // for C_SVC
+ public int[] weight_label; // for C_SVC
+ public double[] weight; // for C_SVC
+ public double nu; // for NU_SVC, ONE_CLASS, and NU_SVR
+ public double p; // for EPSILON_SVR
+ public int shrinking; // use the shrinking heuristics
+ public int probability; // do probability estimates
+
+ public Object clone()
+ {
+ try
+ {
+ return super.clone();
+ } catch (CloneNotSupportedException e)
+ {
+ return null;
+ }
+ }
+
+}
diff --git a/libsvm-3.21/java/libsvm/svm_print_interface.java b/libsvm-3.21/java/libsvm/svm_print_interface.java
new file mode 100644
index 0000000..ff4d0e8
--- /dev/null
+++ b/libsvm-3.21/java/libsvm/svm_print_interface.java
@@ -0,0 +1,5 @@
+package libsvm;
+public interface svm_print_interface
+{
+ public void print(String s);
+}
diff --git a/libsvm-3.21/java/libsvm/svm_problem.java b/libsvm-3.21/java/libsvm/svm_problem.java
new file mode 100644
index 0000000..5d74609
--- /dev/null
+++ b/libsvm-3.21/java/libsvm/svm_problem.java
@@ -0,0 +1,7 @@
+package libsvm;
+public class svm_problem implements java.io.Serializable
+{
+ public int l;
+ public double[] y;
+ public svm_node[][] x;
+}
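Each entry of an `svm_problem` comes from one svmlight-format line, "label index:value index:value ...", which `read_problem` in svm_train.java (below) splits on the delimiter set " \t\n\r\f:". A minimal sketch of that tokenization (`SvmLightLine` is an illustrative name):

```java
import java.util.StringTokenizer;

public class SvmLightLine {
    // Returns {label, index1, value1, index2, value2, ...} flattened,
    // using the same delimiter set as read_problem.
    static double[] parse(String line) {
        StringTokenizer st = new StringTokenizer(line, " \t\n\r\f:");
        double[] out = new double[st.countTokens()];
        int k = 0;
        while (st.hasMoreTokens())
            out[k++] = Double.parseDouble(st.nextToken());
        return out;
    }

    public static void main(String[] args) {
        double[] r = parse("+1 1:0.5 3:-2");
        // label 1.0, then pairs (1, 0.5) and (3, -2.0)
        System.out.println(r[0] + " " + r[1] + " " + r[2]);
    }
}
```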
diff --git a/libsvm-3.21/java/svm_predict.java b/libsvm-3.21/java/svm_predict.java
new file mode 100644
index 0000000..d714c5b
--- /dev/null
+++ b/libsvm-3.21/java/svm_predict.java
@@ -0,0 +1,194 @@
+import libsvm.*;
+import java.io.*;
+import java.util.*;
+
+class svm_predict {
+ private static svm_print_interface svm_print_null = new svm_print_interface()
+ {
+ public void print(String s) {}
+ };
+
+ private static svm_print_interface svm_print_stdout = new svm_print_interface()
+ {
+ public void print(String s)
+ {
+ System.out.print(s);
+ }
+ };
+
+ private static svm_print_interface svm_print_string = svm_print_stdout;
+
+ static void info(String s)
+ {
+ svm_print_string.print(s);
+ }
+
+ private static double atof(String s)
+ {
+ return Double.valueOf(s).doubleValue();
+ }
+
+ private static int atoi(String s)
+ {
+ return Integer.parseInt(s);
+ }
+
+ private static void predict(BufferedReader input, DataOutputStream output, svm_model model, int predict_probability) throws IOException
+ {
+ int correct = 0;
+ int total = 0;
+ double error = 0;
+ double sumv = 0, sumy = 0, sumvv = 0, sumyy = 0, sumvy = 0;
+
+ int svm_type=svm.svm_get_svm_type(model);
+ int nr_class=svm.svm_get_nr_class(model);
+ double[] prob_estimates=null;
+
+ if(predict_probability == 1)
+ {
+ if(svm_type == svm_parameter.EPSILON_SVR ||
+ svm_type == svm_parameter.NU_SVR)
+ {
+ svm_predict.info("Prob. model for test data: target value = predicted value + z,\nz: Laplace distribution e^(-|z|/sigma)/(2sigma),sigma="+svm.svm_get_svr_probability(model)+"\n");
+ }
+ else
+ {
+ int[] labels=new int[nr_class];
+ svm.svm_get_labels(model,labels);
+ prob_estimates = new double[nr_class];
+ output.writeBytes("labels");
+ for(int j=0;j<nr_class;j++)
+ 	output.writeBytes(" "+labels[j]);
+ output.writeBytes("\n");
+
+ if(i>=argv.length-2)
+ exit_with_help();
+ try
+ {
+ BufferedReader input = new BufferedReader(new FileReader(argv[i]));
+ DataOutputStream output = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(argv[i+2])));
+ svm_model model = svm.svm_load_model(argv[i+1]);
+ if (model == null)
+ {
+ System.err.print("can't open model file "+argv[i+1]+"\n");
+ System.exit(1);
+ }
+ if(predict_probability == 1)
+ {
+ if(svm.svm_check_probability_model(model)==0)
+ {
+ System.err.print("Model does not support probability estimates\n");
+ System.exit(1);
+ }
+ }
+ else
+ {
+ if(svm.svm_check_probability_model(model)!=0)
+ {
+ svm_predict.info("Model supports probability estimates, but disabled in prediction.\n");
+ }
+ }
+ predict(input,output,model,predict_probability);
+ input.close();
+ output.close();
+ }
+ catch(FileNotFoundException e)
+ {
+ exit_with_help();
+ }
+ catch(ArrayIndexOutOfBoundsException e)
+ {
+ exit_with_help();
+ }
+ }
+}
diff --git a/libsvm-3.21/java/svm_scale.java b/libsvm-3.21/java/svm_scale.java
new file mode 100644
index 0000000..6e8d458
--- /dev/null
+++ b/libsvm-3.21/java/svm_scale.java
@@ -0,0 +1,350 @@
+import libsvm.*;
+import java.io.*;
+import java.util.*;
+import java.text.DecimalFormat;
+
+class svm_scale
+{
+ private String line = null;
+ private double lower = -1.0;
+ private double upper = 1.0;
+ private double y_lower;
+ private double y_upper;
+ private boolean y_scaling = false;
+ private double[] feature_max;
+ private double[] feature_min;
+ private double y_max = -Double.MAX_VALUE;
+ private double y_min = Double.MAX_VALUE;
+ private int max_index;
+ private long num_nonzeros = 0;
+ private long new_num_nonzeros = 0;
+
+ private static void exit_with_help()
+ {
+ System.out.print(
+ "Usage: svm-scale [options] data_filename\n"
+ +"options:\n"
+ +"-l lower : x scaling lower limit (default -1)\n"
+ +"-u upper : x scaling upper limit (default +1)\n"
+ +"-y y_lower y_upper : y scaling limits (default: no y scaling)\n"
+ +"-s save_filename : save scaling parameters to save_filename\n"
+ +"-r restore_filename : restore scaling parameters from restore_filename\n"
+ );
+ System.exit(1);
+ }
+
+ private BufferedReader rewind(BufferedReader fp, String filename) throws IOException
+ {
+ fp.close();
+ return new BufferedReader(new FileReader(filename));
+ }
+
+ private void output_target(double value)
+ {
+ if(y_scaling)
+ {
+ if(value == y_min)
+ value = y_lower;
+ else if(value == y_max)
+ value = y_upper;
+ else
+ value = y_lower + (y_upper-y_lower) *
+ (value-y_min) / (y_max-y_min);
+ }
+
+ System.out.print(value + " ");
+ }
+
+ private void output(int index, double value)
+ {
+ /* skip single-valued attribute */
+ if(feature_max[index] == feature_min[index])
+ return;
+
+ if(value == feature_min[index])
+ value = lower;
+ else if(value == feature_max[index])
+ value = upper;
+ else
+ value = lower + (upper-lower) *
+ (value-feature_min[index])/
+ (feature_max[index]-feature_min[index]);
+
+ if(value != 0)
+ {
+ System.out.print(index + ":" + value + " ");
+ new_num_nonzeros++;
+ }
+ }
+
+ private String readline(BufferedReader fp) throws IOException
+ {
+ line = fp.readLine();
+ return line;
+ }
+
+ private void run(String []argv) throws IOException
+ {
+ int i,index;
+ BufferedReader fp = null, fp_restore = null;
+ String save_filename = null;
+ String restore_filename = null;
+ String data_filename = null;
+
+
+ for(i=0;i<argv.length;i++)
+ {
+ 	if (argv[i].charAt(0) != '-') break;
+ 	++i;
+ 	switch(argv[i-1].charAt(1))
+ 	{
+ 		case 'l': lower = Double.parseDouble(argv[i]); break;
+ 		case 'u': upper = Double.parseDouble(argv[i]); break;
+ 		case 'y':
+ 			y_lower = Double.parseDouble(argv[i]);
+ 			++i;
+ 			y_upper = Double.parseDouble(argv[i]);
+ 			y_scaling = true;
+ 			break;
+ 		case 's': save_filename = argv[i]; break;
+ 		case 'r': restore_filename = argv[i]; break;
+ 		default:
+ 			System.err.println("unknown option");
+ 			exit_with_help();
+ 	}
+ }
+
+ if(!(upper > lower) || (y_scaling && !(y_upper > y_lower)))
+ {
+ System.err.println("inconsistent lower/upper specification");
+ System.exit(1);
+ }
+ if(restore_filename != null && save_filename != null)
+ {
+ System.err.println("cannot use -r and -s simultaneously");
+ System.exit(1);
+ }
+
+ if(argv.length != i+1)
+ exit_with_help();
+
+ data_filename = argv[i];
+ try {
+ fp = new BufferedReader(new FileReader(data_filename));
+ } catch (Exception e) {
+ System.err.println("can't open file " + data_filename);
+ System.exit(1);
+ }
+
+ /* assumption: min index of attributes is 1 */
+ /* pass 1: find out max index of attributes */
+ max_index = 0;
+
+ if(restore_filename != null)
+ {
+ int idx, c;
+
+ try {
+ fp_restore = new BufferedReader(new FileReader(restore_filename));
+ }
+ catch (Exception e) {
+ System.err.println("can't open file " + restore_filename);
+ System.exit(1);
+ }
+ if((c = fp_restore.read()) == 'y')
+ {
+ fp_restore.readLine();
+ fp_restore.readLine();
+ fp_restore.readLine();
+ }
+ fp_restore.readLine();
+ fp_restore.readLine();
+
+ String restore_line = null;
+ while((restore_line = fp_restore.readLine())!=null)
+ {
+ StringTokenizer st2 = new StringTokenizer(restore_line);
+ idx = Integer.parseInt(st2.nextToken());
+ max_index = Math.max(max_index, idx);
+ }
+ fp_restore = rewind(fp_restore, restore_filename);
+ }
+
+ while (readline(fp) != null)
+ {
+ StringTokenizer st = new StringTokenizer(line," \t\n\r\f:");
+ st.nextToken();
+ while(st.hasMoreTokens())
+ {
+ index = Integer.parseInt(st.nextToken());
+ max_index = Math.max(max_index, index);
+ st.nextToken();
+ num_nonzeros++;
+ }
+ }
+
+ try {
+ feature_max = new double[(max_index+1)];
+ feature_min = new double[(max_index+1)];
+ } catch(OutOfMemoryError e) {
+ System.err.println("can't allocate enough memory");
+ System.exit(1);
+ }
+
+ for(i=0;i<=max_index;i++)
+ {
+ feature_max[i] = -Double.MAX_VALUE;
+ feature_min[i] = Double.MAX_VALUE;
+ }
+
+ fp = rewind(fp, data_filename);
+
+ /* pass 2: find out min/max value */
+ while(readline(fp) != null)
+ {
+ int next_index = 1;
+ double target;
+ double value;
+
+ StringTokenizer st = new StringTokenizer(line," \t\n\r\f:");
+ target = Double.parseDouble(st.nextToken());
+ y_max = Math.max(y_max, target);
+ y_min = Math.min(y_min, target);
+
+ while (st.hasMoreTokens())
+ {
+ index = Integer.parseInt(st.nextToken());
+ value = Double.parseDouble(st.nextToken());
+
+ for (i = next_index; i < index; i++)
+ {
+ 	feature_max[i] = Math.max(feature_max[i], 0);
+ 	feature_min[i] = Math.min(feature_min[i], 0);
+ }
+ feature_max[index] = Math.max(feature_max[index], value);
+ feature_min[index] = Math.min(feature_min[index], value);
+ next_index = index + 1;
+ }
+ }
+
+ if (new_num_nonzeros > num_nonzeros)
+ System.err.print(
+ "WARNING: original #nonzeros " + num_nonzeros+"\n"
+ +" new #nonzeros " + new_num_nonzeros+"\n"
+ +"Use -l 0 if many original feature values are zeros\n");
+
+ fp.close();
+ }
+
+ public static void main(String argv[]) throws IOException
+ {
+ svm_scale s = new svm_scale();
+ s.run(argv);
+ }
+}
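The `output` method above applies a per-feature linear rescaling: v' = lower + (upper-lower) * (v - min) / (max - min), mapping the observed min and max exactly to the target bounds to avoid floating-point drift, and skipping single-valued features. A standalone sketch of that rule (`ScaleValue` is an illustrative name):

```java
public class ScaleValue {
    static double scale(double v, double fmin, double fmax,
                        double lower, double upper) {
        if (fmax == fmin) return v;   // single-valued feature: left as-is
        if (v == fmin) return lower;  // endpoints map exactly
        if (v == fmax) return upper;
        return lower + (upper - lower) * (v - fmin) / (fmax - fmin);
    }

    public static void main(String[] args) {
        // midpoint of [0,10] lands at the midpoint of [-1,1]
        System.out.println(scale(5.0, 0.0, 10.0, -1.0, 1.0)); // 0.0
    }
}
```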
diff --git a/libsvm-3.21/java/svm_toy.java b/libsvm-3.21/java/svm_toy.java
new file mode 100644
index 0000000..c4bd503
--- /dev/null
+++ b/libsvm-3.21/java/svm_toy.java
@@ -0,0 +1,502 @@
+import libsvm.*;
+import java.applet.*;
+import java.awt.*;
+import java.util.*;
+import java.awt.event.*;
+import java.io.*;
+
+public class svm_toy extends Applet {
+
+ static final String DEFAULT_PARAM="-t 2 -c 100";
+ int XLEN;
+ int YLEN;
+
+ // off-screen buffer
+
+ Image buffer;
+ Graphics buffer_gc;
+
+ // pre-allocated colors
+
+ final static Color colors[] =
+ {
+ new Color(0,0,0),
+ new Color(0,120,120),
+ new Color(120,120,0),
+ new Color(120,0,120),
+ new Color(0,200,200),
+ new Color(200,200,0),
+ new Color(200,0,200)
+ };
+
+ class point {
+ point(double x, double y, byte value)
+ {
+ this.x = x;
+ this.y = y;
+ this.value = value;
+ }
+ double x, y;
+ byte value;
+ }
+
+ Vector<point> point_list = new Vector<point>();
+ byte current_value = 1;
+
+ public void init()
+ {
+ setSize(getSize());
+
+ final Button button_change = new Button("Change");
+ Button button_run = new Button("Run");
+ Button button_clear = new Button("Clear");
+ Button button_save = new Button("Save");
+ Button button_load = new Button("Load");
+ final TextField input_line = new TextField(DEFAULT_PARAM);
+
+ BorderLayout layout = new BorderLayout();
+ this.setLayout(layout);
+
+ Panel p = new Panel();
+ GridBagLayout gridbag = new GridBagLayout();
+ p.setLayout(gridbag);
+
+ GridBagConstraints c = new GridBagConstraints();
+ c.fill = GridBagConstraints.HORIZONTAL;
+ c.weightx = 1;
+ c.gridwidth = 1;
+ gridbag.setConstraints(button_change,c);
+ gridbag.setConstraints(button_run,c);
+ gridbag.setConstraints(button_clear,c);
+ gridbag.setConstraints(button_save,c);
+ gridbag.setConstraints(button_load,c);
+ c.weightx = 5;
+ c.gridwidth = 5;
+ gridbag.setConstraints(input_line,c);
+
+ button_change.setBackground(colors[current_value]);
+
+ p.add(button_change);
+ p.add(button_run);
+ p.add(button_clear);
+ p.add(button_save);
+ p.add(button_load);
+ p.add(input_line);
+ this.add(p,BorderLayout.SOUTH);
+
+ button_change.addActionListener(new ActionListener()
+ { public void actionPerformed (ActionEvent e)
+ { button_change_clicked(); button_change.setBackground(colors[current_value]); }});
+
+ button_run.addActionListener(new ActionListener()
+ { public void actionPerformed (ActionEvent e)
+ { button_run_clicked(input_line.getText()); }});
+
+ button_clear.addActionListener(new ActionListener()
+ { public void actionPerformed (ActionEvent e)
+ { button_clear_clicked(); }});
+
+ button_save.addActionListener(new ActionListener()
+ { public void actionPerformed (ActionEvent e)
+ { button_save_clicked(input_line.getText()); }});
+
+ button_load.addActionListener(new ActionListener()
+ { public void actionPerformed (ActionEvent e)
+ { button_load_clicked(); }});
+
+ input_line.addActionListener(new ActionListener()
+ { public void actionPerformed (ActionEvent e)
+ { button_run_clicked(input_line.getText()); }});
+
+ this.enableEvents(AWTEvent.MOUSE_EVENT_MASK);
+ }
+
+ void draw_point(point p)
+ {
+ Color c = colors[p.value+3];
+
+ Graphics window_gc = getGraphics();
+ buffer_gc.setColor(c);
+ buffer_gc.fillRect((int)(p.x*XLEN),(int)(p.y*YLEN),4,4);
+ window_gc.setColor(c);
+ window_gc.fillRect((int)(p.x*XLEN),(int)(p.y*YLEN),4,4);
+ }
+
+ void clear_all()
+ {
+ point_list.removeAllElements();
+ if(buffer != null)
+ {
+ buffer_gc.setColor(colors[0]);
+ buffer_gc.fillRect(0,0,XLEN,YLEN);
+ }
+ repaint();
+ }
+
+ void draw_all_points()
+ {
+ int n = point_list.size();
+ for(int i=0;i<n;i++)
+ 	draw_point((point)point_list.elementAt(i));
+ }
+
+ void button_change_clicked()
+ {
+ 	++current_value;
+ 	if(current_value > 3) current_value = 1;
+ }
+
+ private static double atof(String s)
+ {
+ return Double.valueOf(s).doubleValue();
+ }
+
+ private static int atoi(String s)
+ {
+ return Integer.parseInt(s);
+ }
+
+ void button_run_clicked(String args)
+ {
+ // guard
+ if(point_list.isEmpty()) return;
+
+ svm_parameter param = new svm_parameter();
+
+ // default values
+ param.svm_type = svm_parameter.C_SVC;
+ param.kernel_type = svm_parameter.RBF;
+ param.degree = 3;
+ param.gamma = 0;
+ param.coef0 = 0;
+ param.nu = 0.5;
+ param.cache_size = 40;
+ param.C = 1;
+ param.eps = 1e-3;
+ param.p = 0.1;
+ param.shrinking = 1;
+ param.probability = 0;
+ param.nr_weight = 0;
+ param.weight_label = new int[0];
+ param.weight = new double[0];
+
+ // parse options
+ StringTokenizer st = new StringTokenizer(args);
+ String[] argv = new String[st.countTokens()];
+ for(int i=0;i<argv.length;i++)
+ 	argv[i] = st.nextToken();
+
+ for(int i=0;i<argv.length;i++)
+ {
+ 	if(argv[i].charAt(0) != '-') break;
+ 	if(++i>=argv.length)
+ {
+ System.err.print("unknown option\n");
+ break;
+ }
+ switch(argv[i-1].charAt(1))
+ {
+ case 's':
+ param.svm_type = atoi(argv[i]);
+ break;
+ case 't':
+ param.kernel_type = atoi(argv[i]);
+ break;
+ case 'd':
+ param.degree = atoi(argv[i]);
+ break;
+ case 'g':
+ param.gamma = atof(argv[i]);
+ break;
+ case 'r':
+ param.coef0 = atof(argv[i]);
+ break;
+ case 'n':
+ param.nu = atof(argv[i]);
+ break;
+ case 'm':
+ param.cache_size = atof(argv[i]);
+ break;
+ case 'c':
+ param.C = atof(argv[i]);
+ break;
+ case 'e':
+ param.eps = atof(argv[i]);
+ break;
+ case 'p':
+ param.p = atof(argv[i]);
+ break;
+ case 'h':
+ param.shrinking = atoi(argv[i]);
+ break;
+ case 'b':
+ param.probability = atoi(argv[i]);
+ break;
+ case 'w':
+ ++param.nr_weight;
+ {
+ int[] old = param.weight_label;
+ param.weight_label = new int[param.nr_weight];
+ System.arraycopy(old,0,param.weight_label,0,param.nr_weight-1);
+ }
+
+ {
+ double[] old = param.weight;
+ param.weight = new double[param.nr_weight];
+ System.arraycopy(old,0,param.weight,0,param.nr_weight-1);
+ }
+
+ param.weight_label[param.nr_weight-1] = atoi(argv[i-1].substring(2));
+ param.weight[param.nr_weight-1] = atof(argv[i]);
+ break;
+ default:
+ System.err.print("unknown option\n");
+ }
+ }
+
+ // build problem
+ svm_problem prob = new svm_problem();
+ prob.l = point_list.size();
+ prob.y = new double[prob.l];
+
+ if(param.kernel_type == svm_parameter.PRECOMPUTED)
+ {
+ }
+ else if(param.svm_type == svm_parameter.EPSILON_SVR ||
+ param.svm_type == svm_parameter.NU_SVR)
+ {
+ if(param.gamma == 0) param.gamma = 1;
+ prob.x = new svm_node[prob.l][1];
+ for(int i=0;i<prob.l;i++)
+ {
+ 	point p = (point)point_list.elementAt(i);
+ 	prob.x[i][0] = new svm_node();
+ 	prob.x[i][0].index = 1;
+ 	prob.x[i][0].value = p.x;
+ 	prob.y[i] = p.y;
+ }
+
+ if(e.getX() >= XLEN || e.getY() >= YLEN) return;
+ point p = new point((double)e.getX()/XLEN,
+ (double)e.getY()/YLEN,
+ current_value);
+ point_list.addElement(p);
+ draw_point(p);
+ }
+ }
+
+ public void paint(Graphics g)
+ {
+ // create buffer first time
+ if(buffer == null) {
+ buffer = this.createImage(XLEN,YLEN);
+ buffer_gc = buffer.getGraphics();
+ buffer_gc.setColor(colors[0]);
+ buffer_gc.fillRect(0,0,XLEN,YLEN);
+ }
+ g.drawImage(buffer,0,0,this);
+ }
+
+ public Dimension getPreferredSize() { return new Dimension(XLEN,YLEN+50); }
+
+ public void setSize(Dimension d) { setSize(d.width,d.height); }
+ public void setSize(int w,int h) {
+ super.setSize(w,h);
+ XLEN = w;
+ YLEN = h-50;
+ clear_all();
+ }
+
+ public static void main(String[] argv)
+ {
+ new AppletFrame("svm_toy",new svm_toy(),500,500+50);
+ }
+}
+
+class AppletFrame extends Frame {
+ AppletFrame(String title, Applet applet, int width, int height)
+ {
+ super(title);
+ this.addWindowListener(new WindowAdapter() {
+ public void windowClosing(WindowEvent e) {
+ System.exit(0);
+ }
+ });
+ applet.init();
+ applet.setSize(width,height);
+ applet.start();
+ this.add(applet);
+ this.pack();
+ this.setVisible(true);
+ }
+}
diff --git a/libsvm-3.21/java/svm_train.java b/libsvm-3.21/java/svm_train.java
new file mode 100644
index 0000000..22ee043
--- /dev/null
+++ b/libsvm-3.21/java/svm_train.java
@@ -0,0 +1,318 @@
+import libsvm.*;
+import java.io.*;
+import java.util.*;
+
+class svm_train {
+ private svm_parameter param; // set by parse_command_line
+ private svm_problem prob; // set by read_problem
+ private svm_model model;
+ private String input_file_name; // set by parse_command_line
+ private String model_file_name; // set by parse_command_line
+ private String error_msg;
+ private int cross_validation;
+ private int nr_fold;
+
+ private static svm_print_interface svm_print_null = new svm_print_interface()
+ {
+ public void print(String s) {}
+ };
+
+ private static void exit_with_help()
+ {
+ System.out.print(
+ "Usage: svm_train [options] training_set_file [model_file]\n"
+ +"options:\n"
+ +"-s svm_type : set type of SVM (default 0)\n"
+ +" 0 -- C-SVC (multi-class classification)\n"
+ +" 1 -- nu-SVC (multi-class classification)\n"
+ +" 2 -- one-class SVM\n"
+ +" 3 -- epsilon-SVR (regression)\n"
+ +" 4 -- nu-SVR (regression)\n"
+ +"-t kernel_type : set type of kernel function (default 2)\n"
+ +" 0 -- linear: u'*v\n"
+ +" 1 -- polynomial: (gamma*u'*v + coef0)^degree\n"
+ +" 2 -- radial basis function: exp(-gamma*|u-v|^2)\n"
+ +" 3 -- sigmoid: tanh(gamma*u'*v + coef0)\n"
+ +" 4 -- precomputed kernel (kernel values in training_set_file)\n"
+ +"-d degree : set degree in kernel function (default 3)\n"
+ +"-g gamma : set gamma in kernel function (default 1/num_features)\n"
+ +"-r coef0 : set coef0 in kernel function (default 0)\n"
+ +"-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)\n"
+ +"-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)\n"
+ +"-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)\n"
+ +"-m cachesize : set cache memory size in MB (default 100)\n"
+ +"-e epsilon : set tolerance of termination criterion (default 0.001)\n"
+ +"-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)\n"
+ +"-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)\n"
+ +"-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)\n"
+ +"-v n : n-fold cross validation mode\n"
+ +"-q : quiet mode (no outputs)\n"
+ );
+ System.exit(1);
+ }
+
+ private void do_cross_validation()
+ {
+ int i;
+ int total_correct = 0;
+ double total_error = 0;
+ double sumv = 0, sumy = 0, sumvv = 0, sumyy = 0, sumvy = 0;
+ double[] target = new double[prob.l];
+
+ svm.svm_cross_validation(prob,param,nr_fold,target);
+ if(param.svm_type == svm_parameter.EPSILON_SVR ||
+ param.svm_type == svm_parameter.NU_SVR)
+ {
+ for(i=0;i<argv.length;i++)
+ {
+ 	if(argv[i].charAt(0) != '-') break;
+ 	if(++i>=argv.length)
+ exit_with_help();
+ switch(argv[i-1].charAt(1))
+ {
+ case 's':
+ param.svm_type = atoi(argv[i]);
+ break;
+ case 't':
+ param.kernel_type = atoi(argv[i]);
+ break;
+ case 'd':
+ param.degree = atoi(argv[i]);
+ break;
+ case 'g':
+ param.gamma = atof(argv[i]);
+ break;
+ case 'r':
+ param.coef0 = atof(argv[i]);
+ break;
+ case 'n':
+ param.nu = atof(argv[i]);
+ break;
+ case 'm':
+ param.cache_size = atof(argv[i]);
+ break;
+ case 'c':
+ param.C = atof(argv[i]);
+ break;
+ case 'e':
+ param.eps = atof(argv[i]);
+ break;
+ case 'p':
+ param.p = atof(argv[i]);
+ break;
+ case 'h':
+ param.shrinking = atoi(argv[i]);
+ break;
+ case 'b':
+ param.probability = atoi(argv[i]);
+ break;
+ case 'q':
+ print_func = svm_print_null;
+ i--;
+ break;
+ case 'v':
+ cross_validation = 1;
+ nr_fold = atoi(argv[i]);
+ if(nr_fold < 2)
+ {
+ System.err.print("n-fold cross validation: n must >= 2\n");
+ exit_with_help();
+ }
+ break;
+ case 'w':
+ ++param.nr_weight;
+ {
+ int[] old = param.weight_label;
+ param.weight_label = new int[param.nr_weight];
+ System.arraycopy(old,0,param.weight_label,0,param.nr_weight-1);
+ }
+
+ {
+ double[] old = param.weight;
+ param.weight = new double[param.nr_weight];
+ System.arraycopy(old,0,param.weight,0,param.nr_weight-1);
+ }
+
+ param.weight_label[param.nr_weight-1] = atoi(argv[i-1].substring(2));
+ param.weight[param.nr_weight-1] = atof(argv[i]);
+ break;
+ default:
+ System.err.print("Unknown option: " + argv[i-1] + "\n");
+ exit_with_help();
+ }
+ }
+
+ svm.svm_set_print_string_function(print_func);
+
+ // determine filenames
+
+ if(i>=argv.length)
+ exit_with_help();
+
+ input_file_name = argv[i];
+
+ if(i<argv.length-1)
+ 	model_file_name = argv[i+1];
+ else
+ {
+ 	int p = argv[i].lastIndexOf('/');
+ 	++p;
+ 	model_file_name = argv[i].substring(p)+".model";
+ }
+ }
+
+ // read in a problem (in svmlight format)
+
+ private void read_problem() throws IOException
+ {
+ 	BufferedReader fp = new BufferedReader(new FileReader(input_file_name));
+ 	Vector<Double> vy = new Vector<Double>();
+ Vector<svm_node[]> vx = new Vector<svm_node[]>();
+ int max_index = 0;
+
+ while(true)
+ {
+ String line = fp.readLine();
+ if(line == null) break;
+
+ StringTokenizer st = new StringTokenizer(line," \t\n\r\f:");
+
+ vy.addElement(atof(st.nextToken()));
+ int m = st.countTokens()/2;
+ svm_node[] x = new svm_node[m];
+ for(int j=0;j<m;j++)
+ {
+ x[j] = new svm_node();
+ x[j].index = atoi(st.nextToken());
+ x[j].value = atof(st.nextToken());
+ }
+ if(m>0) max_index = Math.max(max_index, x[m-1].index);
+ vx.addElement(x);
+ }
+
+ prob = new svm_problem();
+ prob.l = vy.size();
+ prob.x = new svm_node[prob.l][];
+ for(int i=0;i<prob.l;i++)
+ prob.x[i] = vx.elementAt(i);
+ prob.y = new double[prob.l];
+ for(int i=0;i<prob.l;i++)
+ prob.y[i] = vy.elementAt(i);
+
+ if(param.gamma == 0 && max_index > 0)
+ param.gamma = 1.0/max_index;
+
+ if(param.kernel_type == svm_parameter.PRECOMPUTED)
+ for(int i=0;i<prob.l;i++)
+ {
+ if (prob.x[i][0].index != 0)
+ {
+ System.err.print("Wrong input format: first column must be 0:sample_serial_number\n");
+ System.exit(1);
+ }
+ if ((int)prob.x[i][0].value <= 0 || (int)prob.x[i][0].value > max_index)
+ {
+ System.err.print("Wrong input format: sample_serial_number out of range\n");
+ System.exit(1);
+ }
+ }
+
+ fp.close();
+ }
+}
diff --git a/libsvm-3.21/java/test_applet.html b/libsvm-3.21/java/test_applet.html
new file mode 100644
index 0000000..7f40424
--- /dev/null
+++ b/libsvm-3.21/java/test_applet.html
@@ -0,0 +1 @@
+<applet code="svm_toy.class" archive="libsvm.jar" width=300 height=350></applet>
diff --git a/libsvm-3.21/matlab/Makefile b/libsvm-3.21/matlab/Makefile
new file mode 100644
index 0000000..37d6345
--- /dev/null
+++ b/libsvm-3.21/matlab/Makefile
@@ -0,0 +1,45 @@
+# This Makefile is used under Linux
+
+MATLABDIR ?= /usr/local/MATLAB/R2015b
+# for Mac
+# MATLABDIR ?= /opt/local/matlab
+
+CXX ?= g++
+#CXX = g++-4.1
+CFLAGS = -Wall -Wconversion -O3 -fPIC -I$(MATLABDIR)/extern/include -I..
+
+MEX = $(MATLABDIR)/bin/mex
+MEX_OPTION = CC="$(CXX)" CXX="$(CXX)" CFLAGS="$(CFLAGS)" CXXFLAGS="$(CFLAGS)"
+# comment the following line if you use MATLAB on a 32-bit computer
+MEX_OPTION += -largeArrayDims
+MEX_EXT = $(shell $(MATLABDIR)/bin/mexext)
+
+all: matlab
+
+matlab: binary
+
+octave:
+ @echo "please type make under Octave"
+
+binary: svmpredict.$(MEX_EXT) svmtrain.$(MEX_EXT) libsvmread.$(MEX_EXT) libsvmwrite.$(MEX_EXT)
+
+svmpredict.$(MEX_EXT): svmpredict.c ../svm.h ../svm.o svm_model_matlab.o
+ $(MEX) $(MEX_OPTION) svmpredict.c ../svm.o svm_model_matlab.o
+
+svmtrain.$(MEX_EXT): svmtrain.c ../svm.h ../svm.o svm_model_matlab.o
+ $(MEX) $(MEX_OPTION) svmtrain.c ../svm.o svm_model_matlab.o
+
+libsvmread.$(MEX_EXT): libsvmread.c
+ $(MEX) $(MEX_OPTION) libsvmread.c
+
+libsvmwrite.$(MEX_EXT): libsvmwrite.c
+ $(MEX) $(MEX_OPTION) libsvmwrite.c
+
+svm_model_matlab.o: svm_model_matlab.c ../svm.h
+ $(CXX) $(CFLAGS) -c svm_model_matlab.c
+
+../svm.o: ../svm.cpp ../svm.h
+ make -C .. svm.o
+
+clean:
+ rm -f *~ *.o *.mex* *.obj ../svm.o
diff --git a/libsvm-3.21/matlab/README b/libsvm-3.21/matlab/README
new file mode 100644
index 0000000..ce1bcf8
--- /dev/null
+++ b/libsvm-3.21/matlab/README
@@ -0,0 +1,245 @@
+-----------------------------------------
+--- MATLAB/OCTAVE interface of LIBSVM ---
+-----------------------------------------
+
+Table of Contents
+=================
+
+- Introduction
+- Installation
+- Usage
+- Returned Model Structure
+- Other Utilities
+- Examples
+- Additional Information
+
+
+Introduction
+============
+
+This tool provides a simple interface to LIBSVM, a library for support vector
+machines (http://www.csie.ntu.edu.tw/~cjlin/libsvm). It is very easy to use as
+the usage and the way of specifying parameters are the same as those of LIBSVM.
+
+Installation
+============
+
+On Windows systems, pre-built binary files are already in the
+directory '..\windows', so no installation is needed. Currently we
+provide binary files only for 64-bit MATLAB on Windows. If you would
+like to re-build the package, please follow the steps below.
+
+We recommend using make.m on both MATLAB and OCTAVE. Just type 'make'
+to build 'libsvmread.mex', 'libsvmwrite.mex', 'svmtrain.mex', and
+'svmpredict.mex'.
+
+On MATLAB or Octave:
+
+ >> make
+
+If make.m does not work on MATLAB (especially for Windows), try 'mex
+-setup' to choose a suitable compiler for mex. Make sure your compiler
+is accessible and workable. Then type 'make' to start the
+installation.
+
+Example:
+
+ matlab>> mex -setup
+ (MATLAB will show the following messages to set up the default compiler.)
+ Please choose your compiler for building external interface (MEX) files:
+ Would you like mex to locate installed compilers [y]/n? y
+ Select a compiler:
+ [1] Microsoft Visual C/C++ version 7.1 in C:\Program Files\Microsoft Visual Studio
+ [0] None
+ Compiler: 1
+ Please verify your choices:
+ Compiler: Microsoft Visual C/C++ 7.1
+ Location: C:\Program Files\Microsoft Visual Studio
+ Are these correct?([y]/n): y
+
+ matlab>> make
+
+On Unix systems, if neither make.m nor 'mex -setup' works, please use
+Makefile and type 'make' in a command window. Note that we assume
+your MATLAB is installed in '/usr/local/matlab'. If not, please change
+MATLABDIR in Makefile.
+
+Example:
+ linux> make
+
+To use octave, type 'make octave':
+
+Example:
+ linux> make octave
+
+For a list of supported/compatible compilers for MATLAB, please check
+the following page:
+
+http://www.mathworks.com/support/compilers/current_release/
+
+Usage
+=====
+
+matlab> model = svmtrain(training_label_vector, training_instance_matrix [, 'libsvm_options']);
+
+ -training_label_vector:
+ An m by 1 vector of training labels (type must be double).
+ -training_instance_matrix:
+ An m by n matrix of m training instances with n features.
+ It can be dense or sparse (type must be double).
+ -libsvm_options:
+ A string of training options in the same format as that of LIBSVM.
+
+matlab> [predicted_label, accuracy, decision_values/prob_estimates] = svmpredict(testing_label_vector, testing_instance_matrix, model [, 'libsvm_options']);
+matlab> [predicted_label] = svmpredict(testing_label_vector, testing_instance_matrix, model [, 'libsvm_options']);
+
+ -testing_label_vector:
+ An m by 1 vector of prediction labels. If labels of test
+ data are unknown, simply use any random values. (type must be double)
+ -testing_instance_matrix:
+ An m by n matrix of m testing instances with n features.
+ It can be dense or sparse. (type must be double)
+ -model:
+ The output of svmtrain.
+ -libsvm_options:
+ A string of testing options in the same format as that of LIBSVM.
+
+Returned Model Structure
+========================
+
+The 'svmtrain' function returns a model which can be used for future
+prediction. It is a structure and is organized as [Parameters, nr_class,
+totalSV, rho, Label, ProbA, ProbB, nSV, sv_coef, SVs]:
+
+ -Parameters: parameters
+ -nr_class: number of classes; = 2 for regression/one-class svm
+ -totalSV: total #SV
+ -rho: -b of the decision function(s) wx+b
+ -Label: label of each class; empty for regression/one-class SVM
+ -sv_indices: values in [1,...,num_training_data] to indicate SVs in the training set
+ -ProbA: pairwise probability information; empty if -b 0 or in one-class SVM
+ -ProbB: pairwise probability information; empty if -b 0 or in one-class SVM
+ -nSV: number of SVs for each class; empty for regression/one-class SVM
+ -sv_coef: coefficients for SVs in decision functions
+ -SVs: support vectors
+
+If you do not use the option '-b 1', ProbA and ProbB are empty
+matrices. If the '-v' option is specified, cross validation is
+conducted and the returned model is just a scalar: cross-validation
+accuracy for classification and mean-squared error for regression.
+
+More details about this model can be found in LIBSVM FAQ
+(http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html) and LIBSVM
+implementation document
+(http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf).
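
The decision-function fields above (SVs, sv_coef, rho) are enough to reproduce a
two-class prediction by hand. The sketch below is not part of LIBSVM: it is plain
Python with made-up support vectors and coefficients, showing how
f(x) = sum_i sv_coef(i)*K(SV_i,x) - rho is evaluated for a linear kernel.

```python
# Hypothetical model values for illustration only (not from a real model).
def decision_value(SVs, sv_coef, rho, x):
    """Two-class linear-kernel decision value: sum_i coef_i * <sv_i, x> - rho."""
    f = -rho
    for sv, coef in zip(SVs, sv_coef):
        f += coef * sum(a * b for a, b in zip(sv, x))
    return f

SVs = [[1.0, 2.0], [-1.0, -1.0]]  # stands in for model.SVs (dense here)
sv_coef = [0.5, -0.5]             # stands in for model.sv_coef (alpha_i * y_i)
rho = 0.25                        # stands in for model.rho

f = decision_value(SVs, sv_coef, rho, [2.0, 0.0])
label = 1 if f > 0 else -1        # sign of f picks between the two classes
```

The sign convention follows the ordering of the model's Label field: a positive
decision value corresponds to the first label.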
+
+Result of Prediction
+====================
+
+The function 'svmpredict' has three outputs. The first one,
+predicted_label, is a vector of predicted labels. The second output,
+accuracy, is a vector including accuracy (for classification), mean
+squared error, and squared correlation coefficient (for regression).
+The third is a matrix containing decision values or probability
+estimates (if '-b 1' is specified). If k is the number of classes
+in training data, for decision values, each row includes results of
+predicting k(k-1)/2 binary-class SVMs. For classification, k = 1 is a
+special case. Decision value +1 is returned for each testing instance,
+instead of an empty vector. For probabilities, each row contains k values
+indicating the probability that the testing instance is in each class.
+Note that the order of classes here is the same as 'Label' field
+in the model structure.
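
To make the layout of those k(k-1)/2 decision values concrete, here is a small
sketch (plain Python, not part of the package) of the one-vs-one majority vote
that classification prediction performs internally; it assumes LIBSVM's pair
ordering (1,2),(1,3),...,(1,k),(2,3),... for the columns.

```python
def vote_from_decision_values(dec, k):
    """Turn one row of k*(k-1)/2 pairwise decision values into a class index."""
    votes = [0] * k
    p = 0
    for i in range(k):
        for j in range(i + 1, k):
            if dec[p] > 0:
                votes[i] += 1  # pairwise machine (i,j) prefers class i
            else:
                votes[j] += 1
            p += 1
    return votes.index(max(votes))  # index into the model's Label field

# k = 3 classes -> three values, for the pairs (1,2), (1,3), (2,3)
winner = vote_from_decision_values([+1.2, -0.3, -0.8], 3)
```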
+
+Other Utilities
+===============
+
+A matlab function libsvmread reads files in LIBSVM format:
+
+[label_vector, instance_matrix] = libsvmread('data.txt');
+
+Two outputs are labels and instances, which can then be used as inputs
+of svmtrain or svmpredict.
+
+A matlab function libsvmwrite writes a matrix to a file in LIBSVM format:
+
+libsvmwrite('data.txt', label_vector, instance_matrix)
+
+The instance_matrix must be a sparse matrix. (type must be double)
+For 32bit and 64bit MATLAB on Windows, pre-built binary files are ready
+in the directory `..\windows', but in future releases, we will only
+include 64bit MATLAB binary files.
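
The text format behind these two utilities is one instance per line,
"label index:value index:value ...", with 1-based, strictly increasing indices
and zero entries omitted. A minimal writer in plain Python (a sketch, not part
of the package) makes the format explicit:

```python
def to_libsvm_line(label, x):
    """Render one dense instance as a LIBSVM-format text line."""
    feats = " ".join(f"{j + 1}:{v:g}" for j, v in enumerate(x) if v != 0)
    return f"{label:g} {feats}".rstrip()

line = to_libsvm_line(1, [0.5, 0.0, -2.0])  # the zero feature 2 is omitted
```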
+
+These codes are prepared by Rong-En Fan and Kai-Wei Chang from National
+Taiwan University.
+
+Examples
+========
+
+Train and test on the provided data heart_scale:
+
+matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
+matlab> model = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07');
+matlab> [predict_label, accuracy, dec_values] = svmpredict(heart_scale_label, heart_scale_inst, model); % test the training data
+
+For probability estimates, you need '-b 1' for training and testing:
+
+matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
+matlab> model = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07 -b 1');
+matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
+matlab> [predict_label, accuracy, prob_estimates] = svmpredict(heart_scale_label, heart_scale_inst, model, '-b 1');
+
+To use precomputed kernel, you must include sample serial number as
+the first column of the training and testing data (assume your kernel
+matrix is K, # of instances is n):
+
+matlab> K1 = [(1:n)', K]; % include sample serial number as first column
+matlab> model = svmtrain(label_vector, K1, '-t 4');
+matlab> [predict_label, accuracy, dec_values] = svmpredict(label_vector, K1, model); % test the training data
+
+We give the following detailed example by splitting heart_scale into
+150 training and 120 testing data. Constructing a linear kernel
+matrix and then using the precomputed kernel gives exactly the same
+testing error as using the LIBSVM built-in linear kernel.
+
+matlab> [heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
+matlab>
+matlab> % Split Data
+matlab> train_data = heart_scale_inst(1:150,:);
+matlab> train_label = heart_scale_label(1:150,:);
+matlab> test_data = heart_scale_inst(151:270,:);
+matlab> test_label = heart_scale_label(151:270,:);
+matlab>
+matlab> % Linear Kernel
+matlab> model_linear = svmtrain(train_label, train_data, '-t 0');
+matlab> [predict_label_L, accuracy_L, dec_values_L] = svmpredict(test_label, test_data, model_linear);
+matlab>
+matlab> % Precomputed Kernel
+matlab> model_precomputed = svmtrain(train_label, [(1:150)', train_data*train_data'], '-t 4');
+matlab> [predict_label_P, accuracy_P, dec_values_P] = svmpredict(test_label, [(1:120)', test_data*train_data'], model_precomputed);
+matlab>
+matlab> accuracy_L % Display the accuracy using linear kernel
+matlab> accuracy_P % Display the accuracy using precomputed kernel
+
+Note that for testing, you can put anything in the
+testing_label_vector. For more details of precomputed kernels, please
+read the section ``Precomputed Kernels'' in the README of the LIBSVM
+package.
+
+Additional Information
+======================
+
+This interface was initially written by Jun-Cheng Chen, Kuan-Jen Peng,
+Chih-Yuan Yang and Chih-Huai Cheng from Department of Computer
+Science, National Taiwan University. The current version was prepared
+by Rong-En Fan and Ting-Fan Wu. If you find this tool useful, please
+cite LIBSVM as follows
+
+Chih-Chung Chang and Chih-Jen Lin, LIBSVM : a library for support
+vector machines. ACM Transactions on Intelligent Systems and
+Technology, 2:27:1--27:27, 2011. Software available at
+http://www.csie.ntu.edu.tw/~cjlin/libsvm
+
+For any question, please contact Chih-Jen Lin <cjlin@csie.ntu.edu.tw>,
+or check the FAQ page:
+
+http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#/Q10:_MATLAB_interface
diff --git a/libsvm-3.21/matlab/libsvmread.c b/libsvm-3.21/matlab/libsvmread.c
new file mode 100644
index 0000000..d2fe0f5
--- /dev/null
+++ b/libsvm-3.21/matlab/libsvmread.c
@@ -0,0 +1,212 @@
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <ctype.h>
+#include <errno.h>
+
+#include "mex.h"
+
+#ifdef MX_API_VER
+#if MX_API_VER < 0x07030000
+typedef int mwIndex;
+#endif
+#endif
+#ifndef max
+#define max(x,y) (((x)>(y))?(x):(y))
+#endif
+#ifndef min
+#define min(x,y) (((x)<(y))?(x):(y))
+#endif
+
+void exit_with_help()
+{
+ mexPrintf(
+ "Usage: [label_vector, instance_matrix] = libsvmread('filename');\n"
+ );
+}
+
+static void fake_answer(int nlhs, mxArray *plhs[])
+{
+ int i;
+ for(i=0;i<nlhs;i++)
+ plhs[i] = mxCreateDoubleMatrix(0, 0, mxREAL);
+}
+
+static char *line;
+static int max_line_len;
+
+static char* readline(FILE *input)
+{
+ int len;
+
+ if(fgets(line,max_line_len,input) == NULL)
+ return NULL;
+
+ while(strrchr(line,'\n') == NULL)
+ {
+ max_line_len *= 2;
+ line = (char *) realloc(line, max_line_len);
+ len = (int) strlen(line);
+ if(fgets(line+len,max_line_len-len,input) == NULL)
+ break;
+ }
+ return line;
+}
+
+// read the problem and return it as matlab matrices
+void read_problem(const char *filename, int nlhs, mxArray *plhs[])
+{
+ int max_index, min_index, inst_max_index;
+ size_t elements, k, i, l=0;
+ FILE *fp = fopen(filename,"r");
+ char *endptr;
+ mwIndex *ir, *jc;
+ double *labels, *samples;
+
+ if(fp == NULL)
+ {
+ mexPrintf("can't open input file %s\n",filename);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ max_line_len = 1024;
+ line = (char *) malloc(max_line_len*sizeof(char));
+
+ max_index = 0;
+ min_index = 1; // our index starts from 1
+ elements = 0;
+ while(readline(fp) != NULL)
+ {
+ char *idx, *val;
+ // features
+ int index = 0;
+
+ inst_max_index = -1; // strtol gives 0 if wrong format, and precomputed kernel has <index> start from 0
+ strtok(line," \t"); // label
+ while (1)
+ {
+ idx = strtok(NULL,":"); // index:value
+ val = strtok(NULL," \t");
+ if(val == NULL)
+ break;
+
+ errno = 0;
+ index = (int) strtol(idx,&endptr,10);
+ if(endptr == idx || errno != 0 || *endptr != '\0' || index <= inst_max_index)
+ {
+ mexPrintf("Wrong input format at line %d\n",l+1);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ else
+ inst_max_index = index;
+
+ min_index = min(min_index, index);
+ elements++;
+ }
+ max_index = max(max_index, inst_max_index);
+ l++;
+ }
+ rewind(fp);
+
+ // y
+ plhs[0] = mxCreateDoubleMatrix(l, 1, mxREAL);
+ // x^T
+ if (min_index <= 0)
+ plhs[1] = mxCreateSparse(max_index-min_index+1, l, elements, mxREAL);
+ else
+ plhs[1] = mxCreateSparse(max_index, l, elements, mxREAL);
+
+ labels = mxGetPr(plhs[0]);
+ samples = mxGetPr(plhs[1]);
+ ir = mxGetIr(plhs[1]);
+ jc = mxGetJc(plhs[1]);
+
+ k=0;
+ for(i=0;i<l;i++)
+ {
+ char *idx, *val, *label;
+ jc[i] = k;
+
+ readline(fp);
+
+ label = strtok(line," \t\n");
+ if(label == NULL)
+ {
+ mexPrintf("Empty line at line %d\n",i+1);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ labels[i] = strtod(label,&endptr);
+ if(endptr == label || *endptr != '\0')
+ {
+ mexPrintf("Wrong input format at line %d\n",i+1);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ // features
+ while(1)
+ {
+ idx = strtok(NULL,":");
+ val = strtok(NULL," \t");
+ if(val == NULL)
+ break;
+
+ ir[k] = (mwIndex) (strtol(idx,&endptr,10) - min_index); // precomputed kernel has <index> start from 0
+
+ errno = 0;
+ samples[k] = strtod(val,&endptr);
+ if (endptr == val || errno != 0 || (*endptr != '\0' && !isspace(*endptr)))
+ {
+ mexPrintf("Wrong input format at line %d\n",i+1);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ ++k;
+ }
+ }
+ jc[l] = k;
+
+ fclose(fp);
+ free(line);
+
+ {
+ mxArray *rhs[1], *lhs[1];
+ rhs[0] = plhs[1];
+ if(mexCallMATLAB(1, lhs, 1, rhs, "transpose"))
+ {
+ mexPrintf("Error: cannot transpose problem\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ plhs[1] = lhs[0];
+ }
+}
+
+void mexFunction( int nlhs, mxArray *plhs[],
+ int nrhs, const mxArray *prhs[] )
+{
+ char filename[256];
+
+ if(nrhs != 1 || nlhs != 2)
+ {
+ exit_with_help();
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ mxGetString(prhs[0], filename, mxGetN(prhs[0]) + 1);
+
+ if(filename == NULL)
+ {
+ mexPrintf("Error: filename is NULL\n");
+ return;
+ }
+
+ read_problem(filename, nlhs, plhs);
+
+ return;
+}
+
diff --git a/libsvm-3.21/matlab/libsvmread.mexa64 b/libsvm-3.21/matlab/libsvmread.mexa64
new file mode 100644
index 0000000..6a9f3e1
Binary files /dev/null and b/libsvm-3.21/matlab/libsvmread.mexa64 differ
diff --git a/libsvm-3.21/matlab/libsvmwrite.c b/libsvm-3.21/matlab/libsvmwrite.c
new file mode 100644
index 0000000..9c93fd3
--- /dev/null
+++ b/libsvm-3.21/matlab/libsvmwrite.c
@@ -0,0 +1,119 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "mex.h"
+
+#ifdef MX_API_VER
+#if MX_API_VER < 0x07030000
+typedef int mwIndex;
+#endif
+#endif
+
+void exit_with_help()
+{
+ mexPrintf(
+ "Usage: libsvmwrite('filename', label_vector, instance_matrix);\n"
+ );
+}
+
+static void fake_answer(int nlhs, mxArray *plhs[])
+{
+ int i;
+ for(i=0;i<nlhs;i++)
+ plhs[i] = mxCreateDoubleMatrix(0, 0, mxREAL);
+}
+
+void libsvmwrite(const char *filename, const mxArray *label_vec, const mxArray *instance_mat)
+{
+ FILE *fp = fopen(filename,"w");
+ mwIndex *ir, *jc, k, low, high;
+ size_t i, l, label_vector_row_num;
+ double *samples, *labels;
+ mxArray *instance_mat_col; // instance sparse matrix in column format
+
+ if(fp == NULL)
+ {
+ mexPrintf("can't open output file %s\n",filename);
+ return;
+ }
+
+ // transpose instance matrix
+ {
+ mxArray *prhs[1], *plhs[1];
+ prhs[0] = mxDuplicateArray(instance_mat);
+ if(mexCallMATLAB(1, plhs, 1, prhs, "transpose"))
+ {
+ mexPrintf("Error: cannot transpose instance matrix\n");
+ return;
+ }
+ instance_mat_col = plhs[0];
+ mxDestroyArray(prhs[0]);
+ }
+
+ // the number of instances
+ l = mxGetN(instance_mat_col);
+ label_vector_row_num = mxGetM(label_vec);
+
+ if(label_vector_row_num!=l)
+ {
+ mexPrintf("Length of label vector does not match # of instances.\n");
+ return;
+ }
+
+ // each column is one instance
+ labels = mxGetPr(label_vec);
+ samples = mxGetPr(instance_mat_col);
+ ir = mxGetIr(instance_mat_col);
+ jc = mxGetJc(instance_mat_col);
+
+ for(i=0;i<l;i++)
+ {
+ fprintf(fp,"%g", labels[i]);
+
+ low = jc[i], high = jc[i+1];
+ for(k=low;k<high;k++)
+ fprintf(fp," %lu:%g", (unsigned long)ir[k]+1, samples[k]);
+
+ fprintf(fp,"\n");
+ }
+
+ fclose(fp);
+}
+
+void mexFunction( int nlhs, mxArray *plhs[],
+ int nrhs, const mxArray *prhs[] )
+{
+ if(nlhs > 0)
+ {
+ exit_with_help();
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ // Transform the input Matrix to libsvm format
+ if(nrhs == 3)
+ {
+ char filename[256];
+ if(!mxIsDouble(prhs[1]) || !mxIsDouble(prhs[2]))
+ {
+ mexPrintf("Error: label vector and instance matrix must be double\n");
+ return;
+ }
+
+ mxGetString(prhs[0], filename, mxGetN(prhs[0])+1);
+
+ if(mxIsSparse(prhs[2]))
+ libsvmwrite(filename, prhs[1], prhs[2]);
+ else
+ {
+ mexPrintf("Instance_matrix must be sparse\n");
+ return;
+ }
+ }
+ else
+ {
+ exit_with_help();
+ return;
+ }
+}
diff --git a/libsvm-3.21/matlab/libsvmwrite.mexa64 b/libsvm-3.21/matlab/libsvmwrite.mexa64
new file mode 100644
index 0000000..8f45610
Binary files /dev/null and b/libsvm-3.21/matlab/libsvmwrite.mexa64 differ
diff --git a/libsvm-3.21/matlab/make.m b/libsvm-3.21/matlab/make.m
new file mode 100644
index 0000000..276bfae
--- /dev/null
+++ b/libsvm-3.21/matlab/make.m
@@ -0,0 +1,22 @@
+% This make.m is for MATLAB and OCTAVE under Windows, Mac, and Unix
+function make()
+try
+ % This part is for OCTAVE
+ if (exist ('OCTAVE_VERSION', 'builtin'))
+ mex libsvmread.c
+ mex libsvmwrite.c
+ mex -I.. svmtrain.c ../svm.cpp svm_model_matlab.c
+ mex -I.. svmpredict.c ../svm.cpp svm_model_matlab.c
+ % This part is for MATLAB
+ % Add -largeArrayDims on 64-bit machines of MATLAB
+ else
+ mex CFLAGS="\$CFLAGS -std=c99" -largeArrayDims libsvmread.c
+ mex CFLAGS="\$CFLAGS -std=c99" -largeArrayDims libsvmwrite.c
+ mex CFLAGS="\$CFLAGS -std=c99" -I.. -largeArrayDims svmtrain.c ../svm.cpp svm_model_matlab.c
+ mex CFLAGS="\$CFLAGS -std=c99" -I.. -largeArrayDims svmpredict.c ../svm.cpp svm_model_matlab.c
+ end
+catch err
+ fprintf('Error: %s failed (line %d)\n', err.stack(1).file, err.stack(1).line);
+ disp(err.message);
+ fprintf('=> Please check README for detailed instructions.\n');
+end
diff --git a/libsvm-3.21/matlab/svm_model_matlab.c b/libsvm-3.21/matlab/svm_model_matlab.c
new file mode 100644
index 0000000..1fea1ba
--- /dev/null
+++ b/libsvm-3.21/matlab/svm_model_matlab.c
@@ -0,0 +1,374 @@
+#include <stdlib.h>
+#include <string.h>
+#include "svm.h"
+
+#include "mex.h"
+
+#ifdef MX_API_VER
+#if MX_API_VER < 0x07030000
+typedef int mwIndex;
+#endif
+#endif
+
+#define NUM_OF_RETURN_FIELD 11
+
+#define Malloc(type,n) (type *)malloc((n)*sizeof(type))
+
+static const char *field_names[] = {
+ "Parameters",
+ "nr_class",
+ "totalSV",
+ "rho",
+ "Label",
+ "sv_indices",
+ "ProbA",
+ "ProbB",
+ "nSV",
+ "sv_coef",
+ "SVs"
+};
+
+const char *model_to_matlab_structure(mxArray *plhs[], int num_of_feature, struct svm_model *model)
+{
+ int i, j, n;
+ double *ptr;
+ mxArray *return_model, **rhs;
+ int out_id = 0;
+
+ rhs = (mxArray **)mxMalloc(sizeof(mxArray *)*NUM_OF_RETURN_FIELD);
+
+ // Parameters
+ rhs[out_id] = mxCreateDoubleMatrix(5, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ ptr[0] = model->param.svm_type;
+ ptr[1] = model->param.kernel_type;
+ ptr[2] = model->param.degree;
+ ptr[3] = model->param.gamma;
+ ptr[4] = model->param.coef0;
+ out_id++;
+
+ // nr_class
+ rhs[out_id] = mxCreateDoubleMatrix(1, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ ptr[0] = model->nr_class;
+ out_id++;
+
+ // total SV
+ rhs[out_id] = mxCreateDoubleMatrix(1, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ ptr[0] = model->l;
+ out_id++;
+
+ // rho
+ n = model->nr_class*(model->nr_class-1)/2;
+ rhs[out_id] = mxCreateDoubleMatrix(n, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ for(i = 0; i < n; i++)
+ ptr[i] = model->rho[i];
+ out_id++;
+
+ // Label
+ if(model->label)
+ {
+ rhs[out_id] = mxCreateDoubleMatrix(model->nr_class, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ for(i = 0; i < model->nr_class; i++)
+ ptr[i] = model->label[i];
+ }
+ else
+ rhs[out_id] = mxCreateDoubleMatrix(0, 0, mxREAL);
+ out_id++;
+
+ // sv_indices
+ if(model->sv_indices)
+ {
+ rhs[out_id] = mxCreateDoubleMatrix(model->l, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ for(i = 0; i < model->l; i++)
+ ptr[i] = model->sv_indices[i];
+ }
+ else
+ rhs[out_id] = mxCreateDoubleMatrix(0, 0, mxREAL);
+ out_id++;
+
+ // probA
+ if(model->probA != NULL)
+ {
+ rhs[out_id] = mxCreateDoubleMatrix(n, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ for(i = 0; i < n; i++)
+ ptr[i] = model->probA[i];
+ }
+ else
+ rhs[out_id] = mxCreateDoubleMatrix(0, 0, mxREAL);
+ out_id ++;
+
+ // probB
+ if(model->probB != NULL)
+ {
+ rhs[out_id] = mxCreateDoubleMatrix(n, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ for(i = 0; i < n; i++)
+ ptr[i] = model->probB[i];
+ }
+ else
+ rhs[out_id] = mxCreateDoubleMatrix(0, 0, mxREAL);
+ out_id++;
+
+ // nSV
+ if(model->nSV)
+ {
+ rhs[out_id] = mxCreateDoubleMatrix(model->nr_class, 1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ for(i = 0; i < model->nr_class; i++)
+ ptr[i] = model->nSV[i];
+ }
+ else
+ rhs[out_id] = mxCreateDoubleMatrix(0, 0, mxREAL);
+ out_id++;
+
+ // sv_coef
+ rhs[out_id] = mxCreateDoubleMatrix(model->l, model->nr_class-1, mxREAL);
+ ptr = mxGetPr(rhs[out_id]);
+ for(i = 0; i < model->nr_class-1; i++)
+ for(j = 0; j < model->l; j++)
+ ptr[(i*(model->l))+j] = model->sv_coef[i][j];
+ out_id++;
+
+ // SVs
+ {
+ int ir_index, nonzero_element;
+ mwIndex *ir, *jc;
+ mxArray *pprhs[1], *pplhs[1];
+
+ if(model->param.kernel_type == PRECOMPUTED)
+ {
+ nonzero_element = model->l;
+ num_of_feature = 1;
+ }
+ else
+ {
+ nonzero_element = 0;
+ for(i = 0; i < model->l; i++) {
+ j = 0;
+ while(model->SV[i][j].index != -1)
+ {
+ nonzero_element++;
+ j++;
+ }
+ }
+ }
+
+ // SV in column, easier accessing
+ rhs[out_id] = mxCreateSparse(num_of_feature, model->l, nonzero_element, mxREAL);
+ ir = mxGetIr(rhs[out_id]);
+ jc = mxGetJc(rhs[out_id]);
+ ptr = mxGetPr(rhs[out_id]);
+ jc[0] = ir_index = 0;
+ for(i = 0;i < model->l; i++)
+ {
+ if(model->param.kernel_type == PRECOMPUTED)
+ {
+ // make a (1 x model->l) matrix
+ ir[ir_index] = 0;
+ ptr[ir_index] = model->SV[i][0].value;
+ ir_index++;
+ jc[i+1] = jc[i] + 1;
+ }
+ else
+ {
+ int x_index = 0;
+ while (model->SV[i][x_index].index != -1)
+ {
+ ir[ir_index] = model->SV[i][x_index].index - 1;
+ ptr[ir_index] = model->SV[i][x_index].value;
+ ir_index++, x_index++;
+ }
+ jc[i+1] = jc[i] + x_index;
+ }
+ }
+ // transpose back to SV in row
+ pprhs[0] = rhs[out_id];
+ if(mexCallMATLAB(1, pplhs, 1, pprhs, "transpose"))
+ return "cannot transpose SV matrix";
+ rhs[out_id] = pplhs[0];
+ out_id++;
+ }
+
+ /* Create a struct matrix contains NUM_OF_RETURN_FIELD fields */
+ return_model = mxCreateStructMatrix(1, 1, NUM_OF_RETURN_FIELD, field_names);
+
+ /* Fill struct matrix with input arguments */
+ for(i = 0; i < NUM_OF_RETURN_FIELD; i++)
+ mxSetField(return_model,0,field_names[i],mxDuplicateArray(rhs[i]));
+ /* return */
+ plhs[0] = return_model;
+ mxFree(rhs);
+
+ return NULL;
+}
+
+struct svm_model *matlab_matrix_to_model(const mxArray *matlab_struct, const char **msg)
+{
+ int i, j, n, num_of_fields;
+ double *ptr;
+ int id = 0;
+ struct svm_node *x_space;
+ struct svm_model *model;
+ mxArray **rhs;
+
+ num_of_fields = mxGetNumberOfFields(matlab_struct);
+ if(num_of_fields != NUM_OF_RETURN_FIELD)
+ {
+ *msg = "number of return field is not correct";
+ return NULL;
+ }
+ rhs = (mxArray **) mxMalloc(sizeof(mxArray *)*num_of_fields);
+
+ for(i=0;i<num_of_fields;i++)
+ rhs[i] = mxGetFieldByNumber(matlab_struct, 0, i);
+
+ model = Malloc(struct svm_model, 1);
+ model->rho = NULL;
+ model->probA = NULL;
+ model->probB = NULL;
+ model->label = NULL;
+ model->sv_indices = NULL;
+ model->nSV = NULL;
+ model->free_sv = 1; // XXX
+
+ ptr = mxGetPr(rhs[id]);
+ model->param.svm_type = (int)ptr[0];
+ model->param.kernel_type = (int)ptr[1];
+ model->param.degree = (int)ptr[2];
+ model->param.gamma = ptr[3];
+ model->param.coef0 = ptr[4];
+ id++;
+
+ ptr = mxGetPr(rhs[id]);
+ model->nr_class = (int)ptr[0];
+ id++;
+
+ ptr = mxGetPr(rhs[id]);
+ model->l = (int)ptr[0];
+ id++;
+
+ // rho
+ n = model->nr_class * (model->nr_class-1)/2;
+ model->rho = (double*) malloc(n*sizeof(double));
+ ptr = mxGetPr(rhs[id]);
+ for(i=0;i<n;i++)
+ model->rho[i] = ptr[i];
+ id++;
+
+ // label
+ if(mxIsEmpty(rhs[id]) == 0)
+ {
+ model->label = (int*) malloc(model->nr_class*sizeof(int));
+ ptr = mxGetPr(rhs[id]);
+ for(i=0;i<model->nr_class;i++)
+ model->label[i] = (int)ptr[i];
+ }
+ id++;
+
+ // sv_indices
+ if(mxIsEmpty(rhs[id]) == 0)
+ {
+ model->sv_indices = (int*) malloc(model->l*sizeof(int));
+ ptr = mxGetPr(rhs[id]);
+ for(i=0;i<model->l;i++)
+ model->sv_indices[i] = (int)ptr[i];
+ }
+ id++;
+
+ // probA
+ if(mxIsEmpty(rhs[id]) == 0)
+ {
+ model->probA = (double*) malloc(n*sizeof(double));
+ ptr = mxGetPr(rhs[id]);
+ for(i=0;i<n;i++)
+ model->probA[i] = ptr[i];
+ }
+ id++;
+
+ // probB
+ if(mxIsEmpty(rhs[id]) == 0)
+ {
+ model->probB = (double*) malloc(n*sizeof(double));
+ ptr = mxGetPr(rhs[id]);
+ for(i=0;i<n;i++)
+ model->probB[i] = ptr[i];
+ }
+ id++;
+
+ // nSV
+ if(mxIsEmpty(rhs[id]) == 0)
+ {
+ model->nSV = (int*) malloc(model->nr_class*sizeof(int));
+ ptr = mxGetPr(rhs[id]);
+ for(i=0;i<model->nr_class;i++)
+ model->nSV[i] = (int)ptr[i];
+ }
+ id++;
+
+ // sv_coef
+ ptr = mxGetPr(rhs[id]);
+ model->sv_coef = (double**) malloc((model->nr_class-1)*sizeof(double *));
+ for( i=0 ; i< model->nr_class -1 ; i++ )
+ model->sv_coef[i] = (double*) malloc((model->l)*sizeof(double));
+ for(i = 0; i < model->nr_class - 1; i++)
+ for(j = 0; j < model->l; j++)
+ model->sv_coef[i][j] = ptr[i*(model->l)+j];
+ id++;
+
+ // SV
+ {
+ int sr, elements;
+ int num_samples;
+ mwIndex *ir, *jc;
+ mxArray *pprhs[1], *pplhs[1];
+
+ // transpose SV
+ pprhs[0] = rhs[id];
+ if(mexCallMATLAB(1, pplhs, 1, pprhs, "transpose"))
+ {
+ svm_free_and_destroy_model(&model);
+ *msg = "cannot transpose SV matrix";
+ return NULL;
+ }
+ rhs[id] = pplhs[0];
+
+ sr = (int)mxGetN(rhs[id]);
+
+ ptr = mxGetPr(rhs[id]);
+ ir = mxGetIr(rhs[id]);
+ jc = mxGetJc(rhs[id]);
+
+ num_samples = (int)mxGetNzmax(rhs[id]);
+
+ elements = num_samples + sr;
+
+ model->SV = (struct svm_node **) malloc(sr * sizeof(struct svm_node *));
+ x_space = (struct svm_node *)malloc(elements * sizeof(struct svm_node));
+
+ // SV is in column
+ for(i=0;i<sr;i++)
+ {
+ int low = (int)jc[i], high = (int)jc[i+1];
+ int x_index = 0;
+ model->SV[i] = &x_space[low+i];
+ for(j=low;j<high;j++)
+ {
+ model->SV[i][x_index].index = (int)ir[j] + 1;
+ model->SV[i][x_index].value = ptr[j];
+ x_index++;
+ }
+ model->SV[i][x_index].index = -1;
+ }
+
+ id++;
+ }
+ mxFree(rhs);
+
+ return model;
+}
diff --git a/libsvm-3.21/matlab/svm_model_matlab.h b/libsvm-3.21/matlab/svm_model_matlab.h
new file mode 100644
index 0000000..3668a84
--- /dev/null
+++ b/libsvm-3.21/matlab/svm_model_matlab.h
@@ -0,0 +1,2 @@
+const char *model_to_matlab_structure(mxArray *plhs[], int num_of_feature, struct svm_model *model);
+struct svm_model *matlab_matrix_to_model(const mxArray *matlab_struct, const char **error_message);
diff --git a/libsvm-3.21/matlab/svm_model_matlab.o b/libsvm-3.21/matlab/svm_model_matlab.o
new file mode 100644
index 0000000..30df5be
Binary files /dev/null and b/libsvm-3.21/matlab/svm_model_matlab.o differ
diff --git a/libsvm-3.21/matlab/svmpredict.c b/libsvm-3.21/matlab/svmpredict.c
new file mode 100644
index 0000000..96fedbc
--- /dev/null
+++ b/libsvm-3.21/matlab/svmpredict.c
@@ -0,0 +1,370 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "svm.h"
+
+#include "mex.h"
+#include "svm_model_matlab.h"
+
+#ifdef MX_API_VER
+#if MX_API_VER < 0x07030000
+typedef int mwIndex;
+#endif
+#endif
+
+#define CMD_LEN 2048
+
+int print_null(const char *s,...) {return 0;}
+int (*info)(const char *fmt,...) = &mexPrintf;
+
+void read_sparse_instance(const mxArray *prhs, int index, struct svm_node *x)
+{
+ int i, j, low, high;
+ mwIndex *ir, *jc;
+ double *samples;
+
+ ir = mxGetIr(prhs);
+ jc = mxGetJc(prhs);
+ samples = mxGetPr(prhs);
+
+ // each column is one instance
+ j = 0;
+ low = (int)jc[index], high = (int)jc[index+1];
+ for(i=low;i<high;i++)
+ {
+ x[j].index = (int)ir[i] + 1;
+ x[j].value = samples[i];
+ j++;
+ }
+ x[j].index = -1;
+}
+
+static void fake_answer(int nlhs, mxArray *plhs[])
+{
+ int i;
+ for(i=0;i<nlhs;i++)
+ plhs[i] = mxCreateDoubleMatrix(0, 0, mxREAL);
+}
+
+void predict(int nlhs, mxArray *plhs[], const mxArray *prhs[], struct svm_model *model, const int predict_probability)
+{
+ int label_vector_row_num, label_vector_col_num;
+ int feature_number, testing_instance_number;
+ int instance_index;
+ double *ptr_instance, *ptr_label, *ptr_predict_label;
+ double *ptr_prob_estimates, *ptr_dec_values, *ptr;
+ struct svm_node *x;
+ mxArray *pplhs[1]; // transposed instance sparse matrix
+ mxArray *tplhs[3]; // temporary storage for plhs[]
+
+ int correct = 0;
+ int total = 0;
+ double error = 0;
+ double sump = 0, sumt = 0, sumpp = 0, sumtt = 0, sumpt = 0;
+
+ int svm_type=svm_get_svm_type(model);
+ int nr_class=svm_get_nr_class(model);
+ double *prob_estimates=NULL;
+
+ // prhs[1] = testing instance matrix
+ feature_number = (int)mxGetN(prhs[1]);
+ testing_instance_number = (int)mxGetM(prhs[1]);
+ label_vector_row_num = (int)mxGetM(prhs[0]);
+ label_vector_col_num = (int)mxGetN(prhs[0]);
+
+ if(label_vector_row_num!=testing_instance_number)
+ {
+ mexPrintf("Length of label vector does not match # of instances.\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ if(label_vector_col_num!=1)
+ {
+ mexPrintf("label (1st argument) should be a vector (# of column is 1).\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ ptr_instance = mxGetPr(prhs[1]);
+ ptr_label = mxGetPr(prhs[0]);
+
+ // transpose instance matrix
+ if(mxIsSparse(prhs[1]))
+ {
+ if(model->param.kernel_type == PRECOMPUTED)
+ {
+ // precomputed kernel requires dense matrix, so we make one
+ mxArray *rhs[1], *lhs[1];
+ rhs[0] = mxDuplicateArray(prhs[1]);
+ if(mexCallMATLAB(1, lhs, 1, rhs, "full"))
+ {
+ mexPrintf("Error: cannot full testing instance matrix\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ ptr_instance = mxGetPr(lhs[0]);
+ mxDestroyArray(rhs[0]);
+ }
+ else
+ {
+ mxArray *pprhs[1];
+ pprhs[0] = mxDuplicateArray(prhs[1]);
+ if(mexCallMATLAB(1, pplhs, 1, pprhs, "transpose"))
+ {
+ mexPrintf("Error: cannot transpose testing instance matrix\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ }
+ }
+
+ if(predict_probability)
+ {
+ if(svm_type==NU_SVR || svm_type==EPSILON_SVR)
+ info("Prob. model for test data: target value = predicted value + z,\nz: Laplace distribution e^(-|z|/sigma)/(2sigma),sigma=%g\n",svm_get_svr_probability(model));
+ else
+ prob_estimates = (double *) malloc(nr_class*sizeof(double));
+ }
+
+ tplhs[0] = mxCreateDoubleMatrix(testing_instance_number, 1, mxREAL);
+ if(predict_probability)
+ {
+ // prob estimates are in plhs[2]
+ if(svm_type==C_SVC || svm_type==NU_SVC)
+ tplhs[2] = mxCreateDoubleMatrix(testing_instance_number, nr_class, mxREAL);
+ else
+ tplhs[2] = mxCreateDoubleMatrix(0, 0, mxREAL);
+ }
+ else
+ {
+ // decision values are in plhs[2]
+ if(svm_type == ONE_CLASS ||
+ svm_type == EPSILON_SVR ||
+ svm_type == NU_SVR ||
+ nr_class == 1) // if only one class in training data, decision values are still returned.
+ tplhs[2] = mxCreateDoubleMatrix(testing_instance_number, 1, mxREAL);
+ else
+ tplhs[2] = mxCreateDoubleMatrix(testing_instance_number, nr_class*(nr_class-1)/2, mxREAL);
+ }
+
+ ptr_predict_label = mxGetPr(tplhs[0]);
+ ptr_prob_estimates = mxGetPr(tplhs[2]);
+ ptr_dec_values = mxGetPr(tplhs[2]);
+ x = (struct svm_node*)malloc((feature_number+1)*sizeof(struct svm_node) );
+ for(instance_index=0;instance_index<testing_instance_number;instance_index++)
+ {
+ int i;
+ double target_label, predict_label;
+
+ target_label = ptr_label[instance_index];
+
+ if(mxIsSparse(prhs[1]) && model->param.kernel_type != PRECOMPUTED) // prhs[1]^T is still sparse
+ read_sparse_instance(pplhs[0], instance_index, x);
+ else
+ {
+ for(i=0;i<feature_number;i++)
+ {
+ x[i].index = i+1;
+ x[i].value = ptr_instance[testing_instance_number*i+instance_index];
+ }
+ x[feature_number].index = -1;
+ }
+
+ if(predict_probability)
+ {
+ if(svm_type==C_SVC || svm_type==NU_SVC)
+ {
+ predict_label = svm_predict_probability(model, x, prob_estimates);
+ ptr_predict_label[instance_index] = predict_label;
+ for(i=0;i<nr_class;i++)
+ ptr_prob_estimates[instance_index + i * testing_instance_number] = prob_estimates[i];
+ } else {
+ predict_label = svm_predict(model,x);
+ ptr_predict_label[instance_index] = predict_label;
+ }
+ }
+ else
+ {
+ if(svm_type == ONE_CLASS ||
+ svm_type == EPSILON_SVR ||
+ svm_type == NU_SVR)
+ {
+ double res;
+ predict_label = svm_predict_values(model, x, &res);
+ ptr_dec_values[instance_index] = res;
+ }
+ else
+ {
+ double *dec_values = (double *) malloc(sizeof(double) * nr_class*(nr_class-1)/2);
+ predict_label = svm_predict_values(model, x, dec_values);
+ if(nr_class == 1)
+ ptr_dec_values[instance_index] = 1;
+ else
+ for(i=0;i<(nr_class*(nr_class-1))/2;i++)
+ ptr_dec_values[instance_index + i * testing_instance_number] = dec_values[i];
+ free(dec_values);
+ }
+ ptr_predict_label[instance_index] = predict_label;
+ }
+
+ if(predict_label == target_label)
+ ++correct;
+ error += (predict_label-target_label)*(predict_label-target_label);
+ sump += predict_label;
+ sumt += target_label;
+ sumpp += predict_label*predict_label;
+ sumtt += target_label*target_label;
+ sumpt += predict_label*target_label;
+ ++total;
+ }
+
+ if(svm_type==NU_SVR || svm_type==EPSILON_SVR)
+ {
+ info("Mean squared error = %g (regression)\n",error/total);
+ info("Squared correlation coefficient = %g (regression)\n",
+ ((total*sumpt-sump*sumt)*(total*sumpt-sump*sumt))/
+ ((total*sumpp-sump*sump)*(total*sumtt-sumt*sumt))
+ );
+ }
+ else
+ info("Accuracy = %g%% (%d/%d) (classification)\n",
+ (double)correct/total*100,correct,total);
+
+ // return accuracy, mse, correlation coefficient
+ tplhs[1] = mxCreateDoubleMatrix(3, 1, mxREAL);
+ ptr = mxGetPr(tplhs[1]);
+ ptr[0] = (double)correct/total*100;
+ ptr[1] = error/total;
+ ptr[2] = ((total*sumpt-sump*sumt)*(total*sumpt-sump*sumt))/
+ ((total*sumpp-sump*sump)*(total*sumtt-sumt*sumt));
+
+ free(x);
+ if(prob_estimates != NULL)
+ free(prob_estimates);
+
+ switch(nlhs)
+ {
+ case 3:
+ plhs[2] = tplhs[2];
+ plhs[1] = tplhs[1];
+ plhs[0] = tplhs[0];
+ break;
+ case 2:
+ plhs[1] = tplhs[1];
+ plhs[0] = tplhs[0];
+ break;
+ case 1:
+ case 0:
+ plhs[0] = tplhs[0];
+ break;
+ }
+}
+
+void exit_with_help()
+{
+ mexPrintf(
+ "Usage: [predicted_label, accuracy, decision_values/prob_estimates] = svmpredict(testing_label_vector, testing_instance_matrix, model, 'libsvm_options')\n"
+ "Parameters:\n"
+ " model: SVM model structure from svmtrain.\n"
+ " libsvm_options:\n"
+ " -b probability_estimates: whether to predict probability estimates, 0 or 1 (default 0); one-class SVM not supported yet\n"
+ " -q : quiet mode (no outputs)\n"
+ "Returns:\n"
+ " predicted_label: SVM prediction output vector.\n"
+ " accuracy: a vector with accuracy, mean squared error, squared correlation coefficient.\n"
+ " prob_estimates: If selected, probability estimate vector.\n"
+ );
+}
+
+void mexFunction( int nlhs, mxArray *plhs[],
+ int nrhs, const mxArray *prhs[] )
+{
+ int prob_estimate_flag = 0;
+ struct svm_model *model;
+
+ if(nlhs > 3 || nrhs > 4 || nrhs < 3)
+ {
+ exit_with_help();
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ if(!mxIsDouble(prhs[0]) || !mxIsDouble(prhs[1])) {
+ mexPrintf("Error: label vector and instance matrix must be double\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ if(mxIsStruct(prhs[2]))
+ {
+ const char *error_msg;
+
+ // parse options
+ if(nrhs==4)
+ {
+ int i, argc = 1;
+ char cmd[CMD_LEN], *argv[CMD_LEN/2];
+
+ // put options in argv[]
+ mxGetString(prhs[3], cmd, mxGetN(prhs[3]) + 1);
+ if((argv[argc] = strtok(cmd, " ")) != NULL)
+ while((argv[++argc] = strtok(NULL, " ")) != NULL)
+ ;
+
+ for(i=1;i<argc;i++)
+ {
+ if(argv[i][0] != '-') break;
+ if((++i>=argc) && argv[i-1][1] != 'q')
+ {
+ exit_with_help();
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ switch(argv[i-1][1])
+ {
+ case 'b':
+ prob_estimate_flag = atoi(argv[i]);
+ break;
+ case 'q':
+ i--;
+ info = &print_null;
+ break;
+ default:
+ mexPrintf("Unknown option: -%c\n", argv[i-1][1]);
+ exit_with_help();
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ }
+ }
+
+ model = matlab_matrix_to_model(prhs[2], &error_msg);
+ if (model == NULL)
+ {
+ mexPrintf("Error: can't read model: %s\n", error_msg);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ if(prob_estimate_flag)
+ {
+ if(svm_check_probability_model(model)==0)
+ {
+ mexPrintf("Model does not support probabiliy estimates\n");
+ fake_answer(nlhs, plhs);
+ svm_free_and_destroy_model(&model);
+ return;
+ }
+ }
+ else
+ {
+ if(svm_check_probability_model(model)!=0)
+ info("Model supports probability estimates, but disabled in prediction.\n");
+ }
+
+ predict(nlhs, plhs, prhs, model, prob_estimate_flag);
+ // destroy model
+ svm_free_and_destroy_model(&model);
+ }
+ else
+ {
+ mexPrintf("model file should be a struct array\n");
+ fake_answer(nlhs, plhs);
+ }
+
+ return;
+}
diff --git a/libsvm-3.21/matlab/svmpredict.mexa64 b/libsvm-3.21/matlab/svmpredict.mexa64
new file mode 100644
index 0000000..b4fa92b
Binary files /dev/null and b/libsvm-3.21/matlab/svmpredict.mexa64 differ
diff --git a/libsvm-3.21/matlab/svmtrain.c b/libsvm-3.21/matlab/svmtrain.c
new file mode 100644
index 0000000..27a52b8
--- /dev/null
+++ b/libsvm-3.21/matlab/svmtrain.c
@@ -0,0 +1,495 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include "svm.h"
+
+#include "mex.h"
+#include "svm_model_matlab.h"
+
+#ifdef MX_API_VER
+#if MX_API_VER < 0x07030000
+typedef int mwIndex;
+#endif
+#endif
+
+#define CMD_LEN 2048
+#define Malloc(type,n) (type *)malloc((n)*sizeof(type))
+
+void print_null(const char *s) {}
+void print_string_matlab(const char *s) {mexPrintf(s);}
+
+void exit_with_help()
+{
+ mexPrintf(
+ "Usage: model = svmtrain(training_label_vector, training_instance_matrix, 'libsvm_options');\n"
+ "libsvm_options:\n"
+ "-s svm_type : set type of SVM (default 0)\n"
+ " 0 -- C-SVC (multi-class classification)\n"
+ " 1 -- nu-SVC (multi-class classification)\n"
+ " 2 -- one-class SVM\n"
+ " 3 -- epsilon-SVR (regression)\n"
+ " 4 -- nu-SVR (regression)\n"
+ "-t kernel_type : set type of kernel function (default 2)\n"
+ " 0 -- linear: u'*v\n"
+ " 1 -- polynomial: (gamma*u'*v + coef0)^degree\n"
+ " 2 -- radial basis function: exp(-gamma*|u-v|^2)\n"
+ " 3 -- sigmoid: tanh(gamma*u'*v + coef0)\n"
+ " 4 -- precomputed kernel (kernel values in training_instance_matrix)\n"
+ "-d degree : set degree in kernel function (default 3)\n"
+ "-g gamma : set gamma in kernel function (default 1/num_features)\n"
+ "-r coef0 : set coef0 in kernel function (default 0)\n"
+ "-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)\n"
+ "-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)\n"
+ "-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)\n"
+ "-m cachesize : set cache memory size in MB (default 100)\n"
+ "-e epsilon : set tolerance of termination criterion (default 0.001)\n"
+ "-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)\n"
+ "-b probability_estimates : whether to train an SVC or SVR model for probability estimates, 0 or 1 (default 0)\n"
+ "-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)\n"
+ "-v n : n-fold cross validation mode\n"
+ "-q : quiet mode (no outputs)\n"
+ );
+}
+
+// svm arguments
+struct svm_parameter param; // set by parse_command_line
+struct svm_problem prob; // set by read_problem
+struct svm_model *model;
+struct svm_node *x_space;
+int cross_validation;
+int nr_fold;
+
+
+double do_cross_validation()
+{
+ int i;
+ int total_correct = 0;
+ double total_error = 0;
+ double sumv = 0, sumy = 0, sumvv = 0, sumyy = 0, sumvy = 0;
+ double *target = Malloc(double,prob.l);
+ double retval = 0.0;
+
+ svm_cross_validation(&prob,&param,nr_fold,target);
+ if(param.svm_type == EPSILON_SVR ||
+ param.svm_type == NU_SVR)
+ {
+ for(i=0;i<prob.l;i++)
+ {
+ double y = prob.y[i];
+ double v = target[i];
+ total_error += (v-y)*(v-y);
+ sumv += v;
+ sumy += y;
+ sumvv += v*v;
+ sumyy += y*y;
+ sumvy += v*y;
+ }
+ mexPrintf("Cross Validation Mean squared error = %g\n",total_error/prob.l);
+ mexPrintf("Cross Validation Squared correlation coefficient = %g\n",
+ ((prob.l*sumvy-sumv*sumy)*(prob.l*sumvy-sumv*sumy))/
+ ((prob.l*sumvv-sumv*sumv)*(prob.l*sumyy-sumy*sumy))
+ );
+ retval = total_error/prob.l;
+ }
+ else
+ {
+ for(i=0;i<prob.l;i++)
+ if(target[i] == prob.y[i])
+ ++total_correct;
+ mexPrintf("Cross Validation Accuracy = %g%%\n",100.0*total_correct/prob.l);
+ retval = 100.0*total_correct/prob.l;
+ }
+ free(target);
+ return retval;
+}
+
+// nrhs should be 3
+int parse_command_line(int nrhs, const mxArray *prhs[], char *model_file_name)
+{
+ int i, argc = 1;
+ char cmd[CMD_LEN];
+ char *argv[CMD_LEN/2];
+ void (*print_func)(const char *) = print_string_matlab; // default printing to matlab display
+
+ // default values
+ param.svm_type = C_SVC;
+ param.kernel_type = RBF;
+ param.degree = 3;
+ param.gamma = 0; // 1/num_features
+ param.coef0 = 0;
+ param.nu = 0.5;
+ param.cache_size = 100;
+ param.C = 1;
+ param.eps = 1e-3;
+ param.p = 0.1;
+ param.shrinking = 1;
+ param.probability = 0;
+ param.nr_weight = 0;
+ param.weight_label = NULL;
+ param.weight = NULL;
+ cross_validation = 0;
+
+ if(nrhs <= 1)
+ return 1;
+
+ if(nrhs > 2)
+ {
+ // put options in argv[]
+ mxGetString(prhs[2], cmd, mxGetN(prhs[2]) + 1);
+ if((argv[argc] = strtok(cmd, " ")) != NULL)
+ while((argv[++argc] = strtok(NULL, " ")) != NULL)
+ ;
+ }
+
+ // parse options
+ for(i=1;i<argc;i++)
+ {
+ // more options
+ if(argv[i][0] != '-') break;
+ ++i;
+ if(i>=argc && argv[i-1][1] != 'q') // since option -q has no parameter
+ return 1;
+ switch(argv[i-1][1])
+ {
+ case 's':
+ param.svm_type = atoi(argv[i]);
+ break;
+ case 't':
+ param.kernel_type = atoi(argv[i]);
+ break;
+ case 'd':
+ param.degree = atoi(argv[i]);
+ break;
+ case 'g':
+ param.gamma = atof(argv[i]);
+ break;
+ case 'r':
+ param.coef0 = atof(argv[i]);
+ break;
+ case 'n':
+ param.nu = atof(argv[i]);
+ break;
+ case 'm':
+ param.cache_size = atof(argv[i]);
+ break;
+ case 'c':
+ param.C = atof(argv[i]);
+ break;
+ case 'e':
+ param.eps = atof(argv[i]);
+ break;
+ case 'p':
+ param.p = atof(argv[i]);
+ break;
+ case 'h':
+ param.shrinking = atoi(argv[i]);
+ break;
+ case 'b':
+ param.probability = atoi(argv[i]);
+ break;
+ case 'q':
+ print_func = &print_null;
+ i--;
+ break;
+ case 'v':
+ cross_validation = 1;
+ nr_fold = atoi(argv[i]);
+ if(nr_fold < 2)
+ {
+ mexPrintf("n-fold cross validation: n must be >= 2\n");
+ return 1;
+ }
+ break;
+ case 'w':
+ ++param.nr_weight;
+ param.weight_label = (int *)realloc(param.weight_label,sizeof(int)*param.nr_weight);
+ param.weight = (double *)realloc(param.weight,sizeof(double)*param.nr_weight);
+ param.weight_label[param.nr_weight-1] = atoi(&argv[i-1][2]);
+ param.weight[param.nr_weight-1] = atof(argv[i]);
+ break;
+ default:
+ mexPrintf("Unknown option -%c\n", argv[i-1][1]);
+ return 1;
+ }
+ }
+
+ svm_set_print_string_function(print_func);
+
+ return 0;
+}
+
+// read in a problem (in svmlight format)
+int read_problem_dense(const mxArray *label_vec, const mxArray *instance_mat)
+{
+ // using size_t due to the output type of matlab functions
+ size_t i, j, k, l;
+ size_t elements, max_index, sc, label_vector_row_num;
+ double *samples, *labels;
+
+ prob.x = NULL;
+ prob.y = NULL;
+ x_space = NULL;
+
+ labels = mxGetPr(label_vec);
+ samples = mxGetPr(instance_mat);
+ sc = mxGetN(instance_mat);
+
+ elements = 0;
+ // number of instances
+ l = mxGetM(instance_mat);
+ label_vector_row_num = mxGetM(label_vec);
+ prob.l = (int)l;
+
+ if(label_vector_row_num!=l)
+ {
+ mexPrintf("Length of label vector does not match # of instances.\n");
+ return -1;
+ }
+
+ if(param.kernel_type == PRECOMPUTED)
+ elements = l * (sc + 1);
+ else
+ {
+ for(i = 0; i < l; i++)
+ {
+ for(k = 0; k < sc; k++)
+ if(samples[k * l + i] != 0)
+ elements++;
+ // count the '-1' element
+ elements++;
+ }
+ }
+
+ prob.y = Malloc(double,l);
+ prob.x = Malloc(struct svm_node *,l);
+ x_space = Malloc(struct svm_node, elements);
+
+ max_index = sc;
+ j = 0;
+ for(i = 0; i < l; i++)
+ {
+ prob.x[i] = &x_space[j];
+ prob.y[i] = labels[i];
+
+ for(k = 0; k < sc; k++)
+ {
+ if(param.kernel_type == PRECOMPUTED || samples[k * l + i] != 0)
+ {
+ x_space[j].index = (int)k + 1;
+ x_space[j].value = samples[k * l + i];
+ j++;
+ }
+ }
+ x_space[j++].index = -1;
+ }
+
+ if(param.gamma == 0 && max_index > 0)
+ param.gamma = (double)(1.0/max_index);
+
+ if(param.kernel_type == PRECOMPUTED)
+ for(i=0;i<l;i++)
+ {
+ if((int)prob.x[i][0].value <= 0 || (int)prob.x[i][0].value > (int)max_index)
+ {
+ mexPrintf("Wrong input format: sample_serial_number out of range\n");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+int read_problem_sparse(const mxArray *label_vec, const mxArray *instance_mat)
+{
+ mwIndex *ir, *jc, low, high, k;
+ // using size_t due to the output type of matlab functions
+ size_t i, j, l, elements, max_index, label_vector_row_num;
+ mwSize num_samples;
+ double *samples, *labels;
+ mxArray *instance_mat_col; // transposed instance sparse matrix
+
+ prob.x = NULL;
+ prob.y = NULL;
+ x_space = NULL;
+
+ // transpose instance matrix
+ {
+ mxArray *prhs[1], *plhs[1];
+ prhs[0] = mxDuplicateArray(instance_mat);
+ if(mexCallMATLAB(1, plhs, 1, prhs, "transpose"))
+ {
+ mexPrintf("Error: cannot transpose training instance matrix\n");
+ return -1;
+ }
+ instance_mat_col = plhs[0];
+ mxDestroyArray(prhs[0]);
+ }
+
+ // each column is one instance
+ labels = mxGetPr(label_vec);
+ samples = mxGetPr(instance_mat_col);
+ ir = mxGetIr(instance_mat_col);
+ jc = mxGetJc(instance_mat_col);
+
+ num_samples = mxGetNzmax(instance_mat_col);
+
+ // number of instances
+ l = mxGetN(instance_mat_col);
+ label_vector_row_num = mxGetM(label_vec);
+ prob.l = (int) l;
+
+ if(label_vector_row_num!=l)
+ {
+ mexPrintf("Length of label vector does not match # of instances.\n");
+ return -1;
+ }
+
+ elements = num_samples + l;
+ max_index = mxGetM(instance_mat_col);
+
+ prob.y = Malloc(double,l);
+ prob.x = Malloc(struct svm_node *,l);
+ x_space = Malloc(struct svm_node, elements);
+
+ j = 0;
+ for(i=0;i<l;i++)
+ {
+ prob.x[i] = &x_space[j];
+ prob.y[i] = labels[i];
+ low = jc[i], high = jc[i+1];
+ for(k=low;k<high;k++)
+ {
+ x_space[j].index = (int)ir[k] + 1;
+ x_space[j].value = samples[k];
+ j++;
+ }
+ x_space[j++].index = -1;
+ }
+
+ if(param.gamma == 0 && max_index > 0)
+ param.gamma = (double)(1.0/max_index);
+
+ return 0;
+}
+
+static void fake_answer(int nlhs, mxArray *plhs[])
+{
+ int i;
+ for(i=0;i<nlhs;i++)
+ plhs[i] = mxCreateDoubleMatrix(0, 0, mxREAL);
+}
+
+// Interface function of matlab
+// now assume prhs[0]: label prhs[1]: features
+void mexFunction( int nlhs, mxArray *plhs[],
+ int nrhs, const mxArray *prhs[] )
+{
+ const char *error_msg;
+
+ // fix random seed to have same results for each run
+ // (for cross validation and probability estimation)
+ srand(1);
+
+ if(nlhs > 1)
+ {
+ exit_with_help();
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ // Transform the input Matrix to libsvm format
+ if(nrhs > 1 && nrhs < 4)
+ {
+ int err;
+
+ if(!mxIsDouble(prhs[0]) || !mxIsDouble(prhs[1]))
+ {
+ mexPrintf("Error: label vector and instance matrix must be double\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ if(mxIsSparse(prhs[0]))
+ {
+ mexPrintf("Error: label vector should not be in sparse format\n");
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ if(parse_command_line(nrhs, prhs, NULL))
+ {
+ exit_with_help();
+ svm_destroy_param(&param);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ if(mxIsSparse(prhs[1]))
+ {
+ if(param.kernel_type == PRECOMPUTED)
+ {
+ // precomputed kernel requires dense matrix, so we make one
+ mxArray *rhs[1], *lhs[1];
+
+ rhs[0] = mxDuplicateArray(prhs[1]);
+ if(mexCallMATLAB(1, lhs, 1, rhs, "full"))
+ {
+ mexPrintf("Error: cannot generate a full training instance matrix\n");
+ svm_destroy_param(&param);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+ err = read_problem_dense(prhs[0], lhs[0]);
+ mxDestroyArray(lhs[0]);
+ mxDestroyArray(rhs[0]);
+ }
+ else
+ err = read_problem_sparse(prhs[0], prhs[1]);
+ }
+ else
+ err = read_problem_dense(prhs[0], prhs[1]);
+
+ // svmtrain's original code
+ error_msg = svm_check_parameter(&prob, &param);
+
+ if(err || error_msg)
+ {
+ if (error_msg != NULL)
+ mexPrintf("Error: %s\n", error_msg);
+ svm_destroy_param(&param);
+ free(prob.y);
+ free(prob.x);
+ free(x_space);
+ fake_answer(nlhs, plhs);
+ return;
+ }
+
+ if(cross_validation)
+ {
+ double *ptr;
+ plhs[0] = mxCreateDoubleMatrix(1, 1, mxREAL);
+ ptr = mxGetPr(plhs[0]);
+ ptr[0] = do_cross_validation();
+ }
+ else
+ {
+ int nr_feat = (int)mxGetN(prhs[1]);
+ const char *error_msg;
+ model = svm_train(&prob, &param);
+ error_msg = model_to_matlab_structure(plhs, nr_feat, model);
+ if(error_msg)
+ mexPrintf("Error: can't convert libsvm model to matrix structure: %s\n", error_msg);
+ svm_free_and_destroy_model(&model);
+ }
+ svm_destroy_param(&param);
+ free(prob.y);
+ free(prob.x);
+ free(x_space);
+ }
+ else
+ {
+ exit_with_help();
+ fake_answer(nlhs, plhs);
+ return;
+ }
+}
diff --git a/libsvm-3.21/matlab/svmtrain.mexa64 b/libsvm-3.21/matlab/svmtrain.mexa64
new file mode 100644
index 0000000..f3529b8
Binary files /dev/null and b/libsvm-3.21/matlab/svmtrain.mexa64 differ
diff --git a/libsvm-3.21/python/Makefile b/libsvm-3.21/python/Makefile
new file mode 100644
index 0000000..9837052
--- /dev/null
+++ b/libsvm-3.21/python/Makefile
@@ -0,0 +1,4 @@
+all = lib
+
+lib:
+ make -C .. lib
diff --git a/libsvm-3.21/python/README b/libsvm-3.21/python/README
new file mode 100644
index 0000000..d705594
--- /dev/null
+++ b/libsvm-3.21/python/README
@@ -0,0 +1,367 @@
+----------------------------------
+--- Python interface of LIBSVM ---
+----------------------------------
+
+Table of Contents
+=================
+
+- Introduction
+- Installation
+- Quick Start
+- Design Description
+- Data Structures
+- Utility Functions
+- Additional Information
+
+Introduction
+============
+
+Python (http://www.python.org/) is a programming language suitable for rapid
+development. This tool provides a simple Python interface to LIBSVM, a library
+for support vector machines (http://www.csie.ntu.edu.tw/~cjlin/libsvm). The
+interface is very easy to use as the usage is the same as that of LIBSVM. The
+interface is developed with the built-in Python library "ctypes."
+
+Installation
+============
+
+On Unix systems, type
+
+> make
+
+The interface needs only the LIBSVM shared library, which is generated
+by the above command. We assume that the shared library is in the
+LIBSVM main directory or in the system path.
+
+For Windows, the shared library libsvm.dll for 32-bit Python is ready
+in the directory `..\windows'. You can also copy it to the system
+directory (e.g., `C:\WINDOWS\system32\' for Windows XP). To regenerate
+the shared library, please follow the instructions for building Windows
+binaries in the LIBSVM README.
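For reference, svm.py resolves the shared library in a fixed order: the copy built alongside LIBSVM first, then the system search path. A minimal pure-Python sketch of that lookup order (`candidate_paths` is an illustrative helper, not part of LIBSVM):

```python
import sys
from ctypes.util import find_library

def candidate_paths(dirname):
    # the wrapper first tries the library shipped/built with LIBSVM itself...
    if sys.platform == 'win32':
        local = dirname + r'\..\windows\libsvm.dll'
    else:
        local = dirname + '/../libsvm.so.2'
    # ...then falls back to the system search path (on Unix,
    # find_library prepends the 'lib' prefix itself)
    return [local, find_library('svm'), find_library('libsvm')]
```

If none of the candidates exist, svm.py raises an exception, so keeping the built libsvm.so.2 in the LIBSVM main directory is the simplest setup.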
+
+Quick Start
+===========
+
+There are two levels of usage. The high-level one uses utility functions
+in svmutil.py and the usage is the same as the LIBSVM MATLAB interface.
+
+>>> from svmutil import *
+# Read data in LIBSVM format
+>>> y, x = svm_read_problem('../heart_scale')
+>>> m = svm_train(y[:200], x[:200], '-c 4')
+>>> p_label, p_acc, p_val = svm_predict(y[200:], x[200:], m)
+
+# Construct problem in python format
+# Dense data
+>>> y, x = [1,-1], [[1,0,1], [-1,0,-1]]
+# Sparse data
+>>> y, x = [1,-1], [{1:1, 3:1}, {1:-1,3:-1}]
+>>> prob = svm_problem(y, x)
+>>> param = svm_parameter('-t 0 -c 4 -b 1')
+>>> m = svm_train(prob, param)
+
+# Precomputed kernel data (-t 4)
+# Dense data
+>>> y, x = [1,-1], [[1, 2, -2], [2, -2, 2]]
+# Sparse data
+>>> y, x = [1,-1], [{0:1, 1:2, 2:-2}, {0:2, 1:-2, 2:2}]
+# isKernel=True must be set for precomputed kernel
+>>> prob = svm_problem(y, x, isKernel=True)
+>>> param = svm_parameter('-t 4 -c 4 -b 1')
+>>> m = svm_train(prob, param)
+# For the format of precomputed kernel, please read LIBSVM README.
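The dense rows in the precomputed-kernel example above follow a fixed layout: element 0 of each row is the 1-based serial number of the instance, and elements 1..l hold the kernel values K(x_i, x_j). A small sketch that reproduces the dense x used above with a linear kernel (`precomputed_rows` and `dot` are illustrative helpers, not LIBSVM functions):

```python
def precomputed_rows(X, kernel):
    # element 0 is the 1-based serial number; elements 1..l are K(x_i, x_j)
    l = len(X)
    return [[i + 1] + [kernel(X[i], X[j]) for j in range(l)]
            for i in range(l)]

def dot(a, b):  # linear kernel
    return sum(p * q for p, q in zip(a, b))

# reproduces the dense x = [[1, 2, -2], [2, -2, 2]] from the example above
x = precomputed_rows([[1, 1], [-1, -1]], dot)
```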
+
+
+# Other utility functions
+>>> svm_save_model('heart_scale.model', m)
+>>> m = svm_load_model('heart_scale.model')
+>>> p_label, p_acc, p_val = svm_predict(y, x, m, '-b 1')
+>>> ACC, MSE, SCC = evaluations(y, p_label)
+
+# Getting online help
+>>> help(svm_train)
+
+The low-level usage directly calls the C interfaces imported by svm.py. Note
+that all arguments and return values are in ctypes format. You need to handle
+them carefully.
+
+>>> from svm import *
+>>> prob = svm_problem([1,-1], [{1:1, 3:1}, {1:-1,3:-1}])
+>>> param = svm_parameter('-c 4')
+>>> m = libsvm.svm_train(prob, param) # m is a ctype pointer to an svm_model
+# Convert a Python-format instance to svm_nodearray, a ctypes structure
+>>> x0, max_idx = gen_svm_nodearray({1:1, 3:1})
+>>> label = libsvm.svm_predict(m, x0)
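At this level it helps to know what gen_svm_nodearray actually builds. A pure-Python mimic of the resulting layout (`node_layout` is illustrative only; the real function returns a ctypes array of svm_node), assuming the default isKernel=False handling:

```python
def node_layout(xi):
    # (index, value) pairs sorted by index, zero-valued features dropped
    pairs = sorted((j, float(v)) for j, v in xi.items() if v != 0)
    max_idx = pairs[-1][0] if pairs else 0
    # the C array is terminated by a sentinel node with index -1
    return pairs + [(-1, 0.0)], max_idx

layout, max_idx = node_layout({1: 1, 3: 1})
```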
+
+Design Description
+==================
+
+There are two files svm.py and svmutil.py, which respectively correspond to
+low-level and high-level use of the interface.
+
+In svm.py, we adopt the Python built-in library "ctypes," so that
+Python can directly access C structures and interface functions defined
+in svm.h.
+
+While advanced users can use structures/functions in svm.py, to
+avoid handling ctypes structures, in svmutil.py we provide some easy-to-use
+functions. The usage is similar to that of the LIBSVM MATLAB interface.
+
+Data Structures
+===============
+
+Four data structures derived from svm.h are svm_node, svm_problem, svm_parameter,
+and svm_model. They all contain fields with the same names as in svm.h. Access
+these fields carefully because you are directly using a C structure instead of a
+Python object. For svm_model, accessing a field directly is not recommended.
+Programmers should use the interface functions or the methods of the svm_model
+class in Python to get the values. The following description introduces additional
+fields and methods.
+
+Before using the data structures, execute the following command to load the
+LIBSVM shared library:
+
+ >>> from svm import *
+
+- class svm_node:
+
+ Construct an svm_node.
+
+ >>> node = svm_node(idx, val)
+
+ idx: an integer indicating the feature index.
+
+ val: a float indicating the feature value.
+
+ Show the index and the value of a node.
+
+ >>> print(node)
+
+- Function: gen_svm_nodearray(xi [,feature_max=None [,isKernel=False]])
+
+ Generate a feature vector from a Python list/tuple or a dictionary:
+
+ >>> xi, max_idx = gen_svm_nodearray({1:1, 3:1, 5:-2})
+
+ xi: the returned svm_nodearray (a ctypes structure)
+
+ max_idx: the maximal feature index of xi
+
+ feature_max: if feature_max is assigned, features with indices larger than
+ feature_max are removed.
+
+ isKernel: if isKernel == True, the list index starts from 0 for precomputed
+ kernel. Otherwise, the list index starts from 1. The default
+ value is False.
+
+- class svm_problem:
+
+ Construct an svm_problem instance
+
+ >>> prob = svm_problem(y, x)
+
+ y: a Python list/tuple of l labels (type must be int/double).
+
+ x: a Python list/tuple of l data instances. Each element of x must be
+ an instance of list/tuple/dictionary type.
+
+ Note that if your x contains sparse data (i.e., dictionary), the internal
+ ctypes data format is still sparse.
+
+ For pre-computed kernel, the isKernel flag should be set to True:
+
+ >>> prob = svm_problem(y, x, isKernel=True)
+
+ Please read LIBSVM README for more details of pre-computed kernel.
+
+- class svm_parameter:
+
+ Construct an svm_parameter instance
+
+ >>> param = svm_parameter('training_options')
+
+ If 'training_options' is empty, LIBSVM default values are applied.
+
+ Set param to LIBSVM default values.
+
+ >>> param.set_to_default_values()
+
+ Parse a string of options.
+
+ >>> param.parse_options('training_options')
+
+ Show values of parameters.
+
+ >>> print(param)
+
+- class svm_model:
+
+ There are two ways to obtain an instance of svm_model:
+
+ >>> model = svm_train(y, x)
+ >>> model = svm_load_model('model_file_name')
+
+ Note that the returned structure of interface functions
+ libsvm.svm_train and libsvm.svm_load_model is a ctypes pointer of
+ svm_model, which is different from the svm_model object returned
+ by svm_train and svm_load_model in svmutil.py. We provide a
+ function toPyModel for the conversion:
+
+ >>> model_ptr = libsvm.svm_train(prob, param)
+ >>> model = toPyModel(model_ptr)
+
+ If you obtain a model in a way other than the above approaches,
+ handle it carefully to avoid memory leak or segmentation fault.
+
+ Some interface functions to access LIBSVM models are wrapped as
+ members of the class svm_model:
+
+ >>> svm_type = model.get_svm_type()
+ >>> nr_class = model.get_nr_class()
+ >>> svr_probability = model.get_svr_probability()
+ >>> class_labels = model.get_labels()
+ >>> sv_indices = model.get_sv_indices()
+ >>> nr_sv = model.get_nr_sv()
+ >>> is_prob_model = model.is_probability_model()
+ >>> support_vector_coefficients = model.get_sv_coef()
+ >>> support_vectors = model.get_SV()
+
+Utility Functions
+=================
+
+To use utility functions, type
+
+ >>> from svmutil import *
+
+The above command loads
+ svm_train() : train an SVM model
+ svm_predict() : predict testing data
+ svm_read_problem() : read the data from a LIBSVM-format file.
+ svm_load_model() : load a LIBSVM model.
+ svm_save_model() : save model to a file.
+ evaluations() : evaluate prediction results.
+
+- Function: svm_train
+
+ There are three ways to call svm_train()
+
+ >>> model = svm_train(y, x [, 'training_options'])
+ >>> model = svm_train(prob [, 'training_options'])
+ >>> model = svm_train(prob, param)
+
+ y: a list/tuple of l training labels (type must be int/double).
+
+ x: a list/tuple of l training instances. The feature vector of
+ each training instance is an instance of list/tuple or dictionary.
+
+ training_options: a string in the same form as that for the LIBSVM
+ command-line mode.
+
+ prob: an svm_problem instance generated by calling
+ svm_problem(y, x).
+ For pre-computed kernel, you should use
+ svm_problem(y, x, isKernel=True)
+
+ param: an svm_parameter instance generated by calling
+ svm_parameter('training_options')
+
+ model: the returned svm_model instance. See svm.h for details of this
+ structure. If '-v' is specified, cross validation is
+ conducted and the returned model is just a scalar: cross-validation
+ accuracy for classification and mean-squared error for regression.
+
+ To train the same data many times with different
+ parameters, the second and the third ways should be faster.
+
+ Examples:
+
+ >>> y, x = svm_read_problem('../heart_scale')
+ >>> prob = svm_problem(y, x)
+ >>> param = svm_parameter('-s 3 -c 5 -h 0')
+ >>> m = svm_train(y, x, '-c 5')
+ >>> m = svm_train(prob, '-t 2 -c 5')
+ >>> m = svm_train(prob, param)
+ >>> CV_ACC = svm_train(y, x, '-v 3')
+
+- Function: svm_predict
+
+ To predict testing data with a model, use
+
+ >>> p_labs, p_acc, p_vals = svm_predict(y, x, model [,'predicting_options'])
+
+ y: a list/tuple of l true labels (type must be int/double). It is used
+ for calculating the accuracy. Use [0]*len(x) if true labels are
+ unavailable.
+
+ x: a list/tuple of l predicting instances. The feature vector of
+ each predicting instance is an instance of list/tuple or dictionary.
+
+ predicting_options: a string of predicting options in the same format as
+ that of LIBSVM.
+
+ model: an svm_model instance.
+
+ p_labels: a list of predicted labels
+
+ p_acc: a tuple including accuracy (for classification), mean
+ squared error, and squared correlation coefficient (for
+ regression).
+
+ p_vals: a list of decision values or probability estimates (if '-b 1'
+ is specified). If k is the number of classes in training data,
+ for decision values, each element includes results of predicting
+ k(k-1)/2 binary-class SVMs. For classification, k = 1 is a
+ special case. Decision value [+1] is returned for each testing
+ instance, instead of an empty list.
+ For probabilities, each element contains k values indicating
+ the probability that the testing instance is in each class.
+ Note that the order of classes is the same as the 'model.label'
+ field in the model structure.
+
+ Example:
+
+ >>> m = svm_train(y, x, '-c 5')
+ >>> p_labels, p_acc, p_vals = svm_predict(y, x, m)
+
+- Functions: svm_read_problem/svm_load_model/svm_save_model
+
+ See the usage by examples:
+
+ >>> y, x = svm_read_problem('data.txt')
+ >>> m = svm_load_model('model_file')
+ >>> svm_save_model('model_file', m)
+
+- Function: evaluations
+
+ Calculate some evaluations using the true values (ty) and predicted
+ values (pv):
+
+ >>> (ACC, MSE, SCC) = evaluations(ty, pv)
+
+ ty: a list of true values.
+
+ pv: a list of predicted values.
+
+ ACC: accuracy.
+
+ MSE: mean squared error.
+
+ SCC: squared correlation coefficient.
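For clarity, the three quantities can be computed directly from their definitions; a self-contained sketch mirroring the formulas used by svmutil.py (`evaluations_sketch` is illustrative; use evaluations in practice):

```python
def evaluations_sketch(ty, pv):
    l = len(ty)
    acc = 100.0 * sum(y == v for y, v in zip(ty, pv)) / l
    mse = sum((v - y) ** 2 for y, v in zip(ty, pv)) / l
    sv, sy = sum(pv), sum(ty)
    svv = sum(v * v for v in pv)
    syy = sum(y * y for y in ty)
    svy = sum(v * y for v, y in zip(pv, ty))
    # squared correlation coefficient; undefined when either variance is zero
    scc = ((l * svy - sv * sy) ** 2) / ((l * svv - sv ** 2) * (l * syy - sy ** 2))
    return acc, mse, scc
```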
+
+
+Additional Information
+======================
+
+This interface was written by Hsiang-Fu Yu from the Department of Computer
+Science, National Taiwan University. If you find this tool useful, please
+cite LIBSVM as follows
+
+Chih-Chung Chang and Chih-Jen Lin, LIBSVM : a library for support
+vector machines. ACM Transactions on Intelligent Systems and
+Technology, 2:27:1--27:27, 2011. Software available at
+http://www.csie.ntu.edu.tw/~cjlin/libsvm
+
+For any question, please contact Chih-Jen Lin <cjlin@csie.ntu.edu.tw>,
+or check the FAQ page:
+
+http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html
diff --git a/libsvm-3.21/python/svm.py b/libsvm-3.21/python/svm.py
new file mode 100644
index 0000000..577160d
--- /dev/null
+++ b/libsvm-3.21/python/svm.py
@@ -0,0 +1,330 @@
+#!/usr/bin/env python
+
+from ctypes import *
+from ctypes.util import find_library
+from os import path
+import sys
+
+if sys.version_info[0] >= 3:
+ xrange = range
+
+__all__ = ['libsvm', 'svm_problem', 'svm_parameter',
+ 'toPyModel', 'gen_svm_nodearray', 'print_null', 'svm_node', 'C_SVC',
+ 'EPSILON_SVR', 'LINEAR', 'NU_SVC', 'NU_SVR', 'ONE_CLASS',
+ 'POLY', 'PRECOMPUTED', 'PRINT_STRING_FUN', 'RBF',
+ 'SIGMOID', 'c_double', 'svm_model']
+
+try:
+ dirname = path.dirname(path.abspath(__file__))
+ if sys.platform == 'win32':
+ libsvm = CDLL(path.join(dirname, r'..\windows\libsvm.dll'))
+ else:
+ libsvm = CDLL(path.join(dirname, '../libsvm.so.2'))
+except:
+# For unix the prefix 'lib' is not considered.
+ if find_library('svm'):
+ libsvm = CDLL(find_library('svm'))
+ elif find_library('libsvm'):
+ libsvm = CDLL(find_library('libsvm'))
+ else:
+ raise Exception('LIBSVM library not found.')
+
+C_SVC = 0
+NU_SVC = 1
+ONE_CLASS = 2
+EPSILON_SVR = 3
+NU_SVR = 4
+
+LINEAR = 0
+POLY = 1
+RBF = 2
+SIGMOID = 3
+PRECOMPUTED = 4
+
+PRINT_STRING_FUN = CFUNCTYPE(None, c_char_p)
+def print_null(s):
+ return
+
+def genFields(names, types):
+ return list(zip(names, types))
+
+def fillprototype(f, restype, argtypes):
+ f.restype = restype
+ f.argtypes = argtypes
+
+class svm_node(Structure):
+ _names = ["index", "value"]
+ _types = [c_int, c_double]
+ _fields_ = genFields(_names, _types)
+
+ def __str__(self):
+ return '%d:%g' % (self.index, self.value)
+
+def gen_svm_nodearray(xi, feature_max=None, isKernel=None):
+ if isinstance(xi, dict):
+ index_range = xi.keys()
+ elif isinstance(xi, (list, tuple)):
+ if not isKernel:
+ xi = [0] + xi # idx should start from 1
+ index_range = range(len(xi))
+ else:
+ raise TypeError('xi should be a dictionary, list or tuple')
+
+ if feature_max:
+ assert(isinstance(feature_max, int))
+ index_range = filter(lambda j: j <= feature_max, index_range)
+ if not isKernel:
+ index_range = filter(lambda j:xi[j] != 0, index_range)
+
+ index_range = sorted(index_range)
+ ret = (svm_node * (len(index_range)+1))()
+ ret[-1].index = -1
+ for idx, j in enumerate(index_range):
+ ret[idx].index = j
+ ret[idx].value = xi[j]
+ max_idx = 0
+ if index_range:
+ max_idx = index_range[-1]
+ return ret, max_idx
+
+class svm_problem(Structure):
+ _names = ["l", "y", "x"]
+ _types = [c_int, POINTER(c_double), POINTER(POINTER(svm_node))]
+ _fields_ = genFields(_names, _types)
+
+ def __init__(self, y, x, isKernel=None):
+ if len(y) != len(x):
+ raise ValueError("len(y) != len(x)")
+ self.l = l = len(y)
+
+ max_idx = 0
+ x_space = self.x_space = []
+ for i, xi in enumerate(x):
+ tmp_xi, tmp_idx = gen_svm_nodearray(xi,isKernel=isKernel)
+ x_space += [tmp_xi]
+ max_idx = max(max_idx, tmp_idx)
+ self.n = max_idx
+
+ self.y = (c_double * l)()
+ for i, yi in enumerate(y): self.y[i] = yi
+
+ self.x = (POINTER(svm_node) * l)()
+ for i, xi in enumerate(self.x_space): self.x[i] = xi
+
+class svm_parameter(Structure):
+ _names = ["svm_type", "kernel_type", "degree", "gamma", "coef0",
+ "cache_size", "eps", "C", "nr_weight", "weight_label", "weight",
+ "nu", "p", "shrinking", "probability"]
+ _types = [c_int, c_int, c_int, c_double, c_double,
+ c_double, c_double, c_double, c_int, POINTER(c_int), POINTER(c_double),
+ c_double, c_double, c_int, c_int]
+ _fields_ = genFields(_names, _types)
+
+ def __init__(self, options = None):
+ if options is None:
+ options = ''
+ self.parse_options(options)
+
+ def __str__(self):
+ s = ''
+ attrs = svm_parameter._names + list(self.__dict__.keys())
+ values = map(lambda attr: getattr(self, attr), attrs)
+ for attr, val in zip(attrs, values):
+ s += (' %s: %s\n' % (attr, val))
+ s = s.strip()
+
+ return s
+
+ def set_to_default_values(self):
+ self.svm_type = C_SVC
+ self.kernel_type = RBF
+ self.degree = 3
+ self.gamma = 0
+ self.coef0 = 0
+ self.nu = 0.5
+ self.cache_size = 100
+ self.C = 1
+ self.eps = 0.001
+ self.p = 0.1
+ self.shrinking = 1
+ self.probability = 0
+ self.nr_weight = 0
+ self.weight_label = None
+ self.weight = None
+ self.cross_validation = False
+ self.nr_fold = 0
+ self.print_func = cast(None, PRINT_STRING_FUN)
+
+ def parse_options(self, options):
+ if isinstance(options, list):
+ argv = options
+ elif isinstance(options, str):
+ argv = options.split()
+ else:
+ raise TypeError("arg 1 should be a list or a str.")
+ self.set_to_default_values()
+ self.print_func = cast(None, PRINT_STRING_FUN)
+ weight_label = []
+ weight = []
+
+ i = 0
+ while i < len(argv):
+ if argv[i] == "-s":
+ i = i + 1
+ self.svm_type = int(argv[i])
+ elif argv[i] == "-t":
+ i = i + 1
+ self.kernel_type = int(argv[i])
+ elif argv[i] == "-d":
+ i = i + 1
+ self.degree = int(argv[i])
+ elif argv[i] == "-g":
+ i = i + 1
+ self.gamma = float(argv[i])
+ elif argv[i] == "-r":
+ i = i + 1
+ self.coef0 = float(argv[i])
+ elif argv[i] == "-n":
+ i = i + 1
+ self.nu = float(argv[i])
+ elif argv[i] == "-m":
+ i = i + 1
+ self.cache_size = float(argv[i])
+ elif argv[i] == "-c":
+ i = i + 1
+ self.C = float(argv[i])
+ elif argv[i] == "-e":
+ i = i + 1
+ self.eps = float(argv[i])
+ elif argv[i] == "-p":
+ i = i + 1
+ self.p = float(argv[i])
+ elif argv[i] == "-h":
+ i = i + 1
+ self.shrinking = int(argv[i])
+ elif argv[i] == "-b":
+ i = i + 1
+ self.probability = int(argv[i])
+ elif argv[i] == "-q":
+ self.print_func = PRINT_STRING_FUN(print_null)
+ elif argv[i] == "-v":
+ i = i + 1
+ self.cross_validation = 1
+ self.nr_fold = int(argv[i])
+ if self.nr_fold < 2:
+ raise ValueError("n-fold cross validation: n must be >= 2")
+ elif argv[i].startswith("-w"):
+ i = i + 1
+ self.nr_weight += 1
+ weight_label += [int(argv[i-1][2:])]
+ weight += [float(argv[i])]
+ else:
+ raise ValueError("Wrong options")
+ i += 1
+
+ libsvm.svm_set_print_string_function(self.print_func)
+ self.weight_label = (c_int*self.nr_weight)()
+ self.weight = (c_double*self.nr_weight)()
+ for i in range(self.nr_weight):
+ self.weight[i] = weight[i]
+ self.weight_label[i] = weight_label[i]
+
+class svm_model(Structure):
+ _names = ['param', 'nr_class', 'l', 'SV', 'sv_coef', 'rho',
+ 'probA', 'probB', 'sv_indices', 'label', 'nSV', 'free_sv']
+ _types = [svm_parameter, c_int, c_int, POINTER(POINTER(svm_node)),
+ POINTER(POINTER(c_double)), POINTER(c_double),
+ POINTER(c_double), POINTER(c_double), POINTER(c_int),
+ POINTER(c_int), POINTER(c_int), c_int]
+ _fields_ = genFields(_names, _types)
+
+ def __init__(self):
+ self.__createfrom__ = 'python'
+
+ def __del__(self):
+ # free memory created by C to avoid memory leak
+ if hasattr(self, '__createfrom__') and self.__createfrom__ == 'C':
+ libsvm.svm_free_and_destroy_model(pointer(self))
+
+ def get_svm_type(self):
+ return libsvm.svm_get_svm_type(self)
+
+ def get_nr_class(self):
+ return libsvm.svm_get_nr_class(self)
+
+ def get_svr_probability(self):
+ return libsvm.svm_get_svr_probability(self)
+
+ def get_labels(self):
+ nr_class = self.get_nr_class()
+ labels = (c_int * nr_class)()
+ libsvm.svm_get_labels(self, labels)
+ return labels[:nr_class]
+
+ def get_sv_indices(self):
+ total_sv = self.get_nr_sv()
+ sv_indices = (c_int * total_sv)()
+ libsvm.svm_get_sv_indices(self, sv_indices)
+ return sv_indices[:total_sv]
+
+ def get_nr_sv(self):
+ return libsvm.svm_get_nr_sv(self)
+
+ def is_probability_model(self):
+ return (libsvm.svm_check_probability_model(self) == 1)
+
+ def get_sv_coef(self):
+ return [tuple(self.sv_coef[j][i] for j in xrange(self.nr_class - 1))
+ for i in xrange(self.l)]
+
+ def get_SV(self):
+ result = []
+ for sparse_sv in self.SV[:self.l]:
+ row = dict()
+
+ i = 0
+ while True:
+ row[sparse_sv[i].index] = sparse_sv[i].value
+ if sparse_sv[i].index == -1:
+ break
+ i += 1
+
+ result.append(row)
+ return result
+
+def toPyModel(model_ptr):
+ """
+ toPyModel(model_ptr) -> svm_model
+
+ Convert a ctypes POINTER(svm_model) to a Python svm_model
+ """
+ if not model_ptr:
+ raise ValueError("Null pointer")
+ m = model_ptr.contents
+ m.__createfrom__ = 'C'
+ return m
+
+fillprototype(libsvm.svm_train, POINTER(svm_model), [POINTER(svm_problem), POINTER(svm_parameter)])
+fillprototype(libsvm.svm_cross_validation, None, [POINTER(svm_problem), POINTER(svm_parameter), c_int, POINTER(c_double)])
+
+fillprototype(libsvm.svm_save_model, c_int, [c_char_p, POINTER(svm_model)])
+fillprototype(libsvm.svm_load_model, POINTER(svm_model), [c_char_p])
+
+fillprototype(libsvm.svm_get_svm_type, c_int, [POINTER(svm_model)])
+fillprototype(libsvm.svm_get_nr_class, c_int, [POINTER(svm_model)])
+fillprototype(libsvm.svm_get_labels, None, [POINTER(svm_model), POINTER(c_int)])
+fillprototype(libsvm.svm_get_sv_indices, None, [POINTER(svm_model), POINTER(c_int)])
+fillprototype(libsvm.svm_get_nr_sv, c_int, [POINTER(svm_model)])
+fillprototype(libsvm.svm_get_svr_probability, c_double, [POINTER(svm_model)])
+
+fillprototype(libsvm.svm_predict_values, c_double, [POINTER(svm_model), POINTER(svm_node), POINTER(c_double)])
+fillprototype(libsvm.svm_predict, c_double, [POINTER(svm_model), POINTER(svm_node)])
+fillprototype(libsvm.svm_predict_probability, c_double, [POINTER(svm_model), POINTER(svm_node), POINTER(c_double)])
+
+fillprototype(libsvm.svm_free_model_content, None, [POINTER(svm_model)])
+fillprototype(libsvm.svm_free_and_destroy_model, None, [POINTER(POINTER(svm_model))])
+fillprototype(libsvm.svm_destroy_param, None, [POINTER(svm_parameter)])
+
+fillprototype(libsvm.svm_check_parameter, c_char_p, [POINTER(svm_problem), POINTER(svm_parameter)])
+fillprototype(libsvm.svm_check_probability_model, c_int, [POINTER(svm_model)])
+fillprototype(libsvm.svm_set_print_string_function, None, [PRINT_STRING_FUN])
diff --git a/libsvm-3.21/python/svmutil.py b/libsvm-3.21/python/svmutil.py
new file mode 100644
index 0000000..d353010
--- /dev/null
+++ b/libsvm-3.21/python/svmutil.py
@@ -0,0 +1,262 @@
+#!/usr/bin/env python
+
+import os
+import sys
+from svm import *
+from svm import __all__ as svm_all
+
+
+__all__ = ['evaluations', 'svm_load_model', 'svm_predict', 'svm_read_problem',
+ 'svm_save_model', 'svm_train'] + svm_all
+
+sys.path = [os.path.dirname(os.path.abspath(__file__))] + sys.path
+
+def svm_read_problem(data_file_name):
+ """
+ svm_read_problem(data_file_name) -> [y, x]
+
+ Read LIBSVM-format data from data_file_name and return labels y
+ and data instances x.
+ """
+ prob_y = []
+ prob_x = []
+ for line in open(data_file_name):
+ line = line.split(None, 1)
+ # In case an instance with all zero features
+ if len(line) == 1: line += ['']
+ label, features = line
+ xi = {}
+ for e in features.split():
+ ind, val = e.split(":")
+ xi[int(ind)] = float(val)
+ prob_y += [float(label)]
+ prob_x += [xi]
+ return (prob_y, prob_x)
+
+def svm_load_model(model_file_name):
+ """
+ svm_load_model(model_file_name) -> model
+
+ Load a LIBSVM model from model_file_name and return.
+ """
+ model = libsvm.svm_load_model(model_file_name.encode())
+ if not model:
+ print("can't open model file %s" % model_file_name)
+ return None
+ model = toPyModel(model)
+ return model
+
+def svm_save_model(model_file_name, model):
+ """
+ svm_save_model(model_file_name, model) -> None
+
+ Save a LIBSVM model to the file model_file_name.
+ """
+ libsvm.svm_save_model(model_file_name.encode(), model)
+
+def evaluations(ty, pv):
+ """
+ evaluations(ty, pv) -> (ACC, MSE, SCC)
+
+ Calculate accuracy, mean squared error and squared correlation coefficient
+ using the true values (ty) and predicted values (pv).
+ """
+ if len(ty) != len(pv):
+		raise ValueError("len(ty) must be equal to len(pv)")
+ total_correct = total_error = 0
+ sumv = sumy = sumvv = sumyy = sumvy = 0
+ for v, y in zip(pv, ty):
+ if y == v:
+ total_correct += 1
+ total_error += (v-y)*(v-y)
+ sumv += v
+ sumy += y
+ sumvv += v*v
+ sumyy += y*y
+ sumvy += v*y
+ l = len(ty)
+ ACC = 100.0*total_correct/l
+ MSE = total_error/l
+ try:
+ SCC = ((l*sumvy-sumv*sumy)*(l*sumvy-sumv*sumy))/((l*sumvv-sumv*sumv)*(l*sumyy-sumy*sumy))
+	except ZeroDivisionError:
+ SCC = float('nan')
+ return (ACC, MSE, SCC)
+
+def svm_train(arg1, arg2=None, arg3=None):
+ """
+ svm_train(y, x [, options]) -> model | ACC | MSE
+ svm_train(prob [, options]) -> model | ACC | MSE
+	svm_train(prob, param) -> model | ACC | MSE
+
+ Train an SVM model from data (y, x) or an svm_problem prob using
+ 'options' or an svm_parameter param.
+ If '-v' is specified in 'options' (i.e., cross validation)
+ either accuracy (ACC) or mean-squared error (MSE) is returned.
+ options:
+ -s svm_type : set type of SVM (default 0)
+ 0 -- C-SVC (multi-class classification)
+ 1 -- nu-SVC (multi-class classification)
+ 2 -- one-class SVM
+ 3 -- epsilon-SVR (regression)
+ 4 -- nu-SVR (regression)
+ -t kernel_type : set type of kernel function (default 2)
+ 0 -- linear: u'*v
+ 1 -- polynomial: (gamma*u'*v + coef0)^degree
+ 2 -- radial basis function: exp(-gamma*|u-v|^2)
+ 3 -- sigmoid: tanh(gamma*u'*v + coef0)
+ 4 -- precomputed kernel (kernel values in training_set_file)
+ -d degree : set degree in kernel function (default 3)
+ -g gamma : set gamma in kernel function (default 1/num_features)
+ -r coef0 : set coef0 in kernel function (default 0)
+ -c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
+ -n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
+ -p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
+ -m cachesize : set cache memory size in MB (default 100)
+ -e epsilon : set tolerance of termination criterion (default 0.001)
+ -h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
+ -b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
+ -wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
+ -v n: n-fold cross validation mode
+ -q : quiet mode (no outputs)
+ """
+ prob, param = None, None
+ if isinstance(arg1, (list, tuple)):
+ assert isinstance(arg2, (list, tuple))
+ y, x, options = arg1, arg2, arg3
+ param = svm_parameter(options)
+ prob = svm_problem(y, x, isKernel=(param.kernel_type == PRECOMPUTED))
+ elif isinstance(arg1, svm_problem):
+ prob = arg1
+ if isinstance(arg2, svm_parameter):
+ param = arg2
+ else:
+ param = svm_parameter(arg2)
+	if prob is None or param is None:
+ raise TypeError("Wrong types for the arguments")
+
+ if param.kernel_type == PRECOMPUTED:
+ for xi in prob.x_space:
+ idx, val = xi[0].index, xi[0].value
+ if xi[0].index != 0:
+ raise ValueError('Wrong input format: first column must be 0:sample_serial_number')
+ if val <= 0 or val > prob.n:
+ raise ValueError('Wrong input format: sample_serial_number out of range')
+
+ if param.gamma == 0 and prob.n > 0:
+ param.gamma = 1.0 / prob.n
+ libsvm.svm_set_print_string_function(param.print_func)
+ err_msg = libsvm.svm_check_parameter(prob, param)
+ if err_msg:
+ raise ValueError('Error: %s' % err_msg)
+
+ if param.cross_validation:
+ l, nr_fold = prob.l, param.nr_fold
+ target = (c_double * l)()
+ libsvm.svm_cross_validation(prob, param, nr_fold, target)
+ ACC, MSE, SCC = evaluations(prob.y[:l], target[:l])
+ if param.svm_type in [EPSILON_SVR, NU_SVR]:
+ print("Cross Validation Mean squared error = %g" % MSE)
+ print("Cross Validation Squared correlation coefficient = %g" % SCC)
+ return MSE
+ else:
+ print("Cross Validation Accuracy = %g%%" % ACC)
+ return ACC
+ else:
+ m = libsvm.svm_train(prob, param)
+ m = toPyModel(m)
+
+		# Keep a reference to x_space so the data (including SVs) pointed to by m
+		# stays valid even if prob is destroyed.
+ m.x_space = prob.x_space
+ return m
+
+def svm_predict(y, x, m, options=""):
+ """
+ svm_predict(y, x, m [, options]) -> (p_labels, p_acc, p_vals)
+
+ Predict data (y, x) with the SVM model m.
+ options:
+ -b probability_estimates: whether to predict probability estimates,
+ 0 or 1 (default 0); for one-class SVM only 0 is supported.
+ -q : quiet mode (no outputs).
+
+ The return tuple contains
+ p_labels: a list of predicted labels
+ p_acc: a tuple including accuracy (for classification), mean-squared
+ error, and squared correlation coefficient (for regression).
+ p_vals: a list of decision values or probability estimates (if '-b 1'
+ is specified). If k is the number of classes, for decision values,
+ each element includes results of predicting k(k-1)/2 binary-class
+ SVMs. For probabilities, each element contains k values indicating
+ the probability that the testing instance is in each class.
+ Note that the order of classes here is the same as 'model.label'
+ field in the model structure.
+ """
+
+ def info(s):
+ print(s)
+
+ predict_probability = 0
+ argv = options.split()
+ i = 0
+ while i < len(argv):
+ if argv[i] == '-b':
+ i += 1
+ predict_probability = int(argv[i])
+ elif argv[i] == '-q':
+ info = print_null
+ else:
+ raise ValueError("Wrong options")
+ i+=1
+
+ svm_type = m.get_svm_type()
+ is_prob_model = m.is_probability_model()
+ nr_class = m.get_nr_class()
+ pred_labels = []
+ pred_values = []
+
+ if predict_probability:
+ if not is_prob_model:
+			raise ValueError("Model does not support probability estimates")
+
+ if svm_type in [NU_SVR, EPSILON_SVR]:
+			info("Prob. model for test data: target value = predicted value + z,\n"
+			"z: Laplace distribution e^(-|z|/sigma)/(2sigma), sigma=%g" % m.get_svr_probability())
+ nr_class = 0
+
+ prob_estimates = (c_double * nr_class)()
+ for xi in x:
+ xi, idx = gen_svm_nodearray(xi, isKernel=(m.param.kernel_type == PRECOMPUTED))
+ label = libsvm.svm_predict_probability(m, xi, prob_estimates)
+ values = prob_estimates[:nr_class]
+ pred_labels += [label]
+ pred_values += [values]
+ else:
+ if is_prob_model:
+			info("Model supports probability estimates, but disabled in prediction.")
+		if svm_type in (ONE_CLASS, EPSILON_SVR, NU_SVR):
+ nr_classifier = 1
+ else:
+ nr_classifier = nr_class*(nr_class-1)//2
+ dec_values = (c_double * nr_classifier)()
+ for xi in x:
+ xi, idx = gen_svm_nodearray(xi, isKernel=(m.param.kernel_type == PRECOMPUTED))
+ label = libsvm.svm_predict_values(m, xi, dec_values)
+ if(nr_class == 1):
+ values = [1]
+ else:
+ values = dec_values[:nr_classifier]
+ pred_labels += [label]
+ pred_values += [values]
+
+ ACC, MSE, SCC = evaluations(y, pred_labels)
+ l = len(y)
+ if svm_type in [EPSILON_SVR, NU_SVR]:
+ info("Mean squared error = %g (regression)" % MSE)
+ info("Squared correlation coefficient = %g (regression)" % SCC)
+ else:
+ info("Accuracy = %g%% (%d/%d) (classification)" % (ACC, int(l*ACC/100), l))
+
+ return pred_labels, (ACC, MSE, SCC), pred_values
+
+
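The three metrics returned by `evaluations` above (and recomputed by the squared-correlation formula in `svm-predict.c`) can be sketched in pure Python. This is a minimal illustration of the same formulas and does not require the compiled libsvm library; `evaluations_sketch` is a hypothetical stand-alone helper, not part of svmutil.py.

```python
def evaluations_sketch(ty, pv):
    """Accuracy, mean squared error, and squared correlation coefficient."""
    if len(ty) != len(pv):
        raise ValueError("len(ty) must be equal to len(pv)")
    l = len(ty)
    total_correct = sum(1 for v, y in zip(pv, ty) if v == y)
    mse = sum((v - y) ** 2 for v, y in zip(pv, ty)) / float(l)
    sumv, sumy = sum(pv), sum(ty)
    sumvv = sum(v * v for v in pv)
    sumyy = sum(y * y for y in ty)
    sumvy = sum(v * y for v, y in zip(pv, ty))
    try:
        # Squared Pearson correlation between predictions and targets
        scc = ((l * sumvy - sumv * sumy) ** 2) / \
              ((l * sumvv - sumv ** 2) * (l * sumyy - sumy ** 2))
    except ZeroDivisionError:
        scc = float('nan')
    return 100.0 * total_correct / l, mse, scc

acc, mse, scc = evaluations_sketch([1.0, -1.0, 1.0, 1.0], [1.0, -1.0, -1.0, 1.0])
```

With one of four labels wrong, this yields ACC = 75.0 and MSE = 1.0.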
diff --git a/libsvm-3.21/svm-predict.c b/libsvm-3.21/svm-predict.c
new file mode 100644
index 0000000..859c9ff
--- /dev/null
+++ b/libsvm-3.21/svm-predict.c
@@ -0,0 +1,239 @@
+#include <stdio.h>
+#include <ctype.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include "svm.h"
+
+int print_null(const char *s,...) {return 0;}
+
+static int (*info)(const char *fmt,...) = &printf;
+
+struct svm_node *x;
+int max_nr_attr = 64;
+
+struct svm_model* model;
+int predict_probability=0;
+
+static char *line = NULL;
+static int max_line_len;
+
+static char* readline(FILE *input)
+{
+ int len;
+
+ if(fgets(line,max_line_len,input) == NULL)
+ return NULL;
+
+ while(strrchr(line,'\n') == NULL)
+ {
+ max_line_len *= 2;
+ line = (char *) realloc(line,max_line_len);
+ len = (int) strlen(line);
+ if(fgets(line+len,max_line_len-len,input) == NULL)
+ break;
+ }
+ return line;
+}
+
+void exit_input_error(int line_num)
+{
+ fprintf(stderr,"Wrong input format at line %d\n", line_num);
+ exit(1);
+}
+
+void predict(FILE *input, FILE *output)
+{
+ int correct = 0;
+ int total = 0;
+ double error = 0;
+ double sump = 0, sumt = 0, sumpp = 0, sumtt = 0, sumpt = 0;
+
+ int svm_type=svm_get_svm_type(model);
+ int nr_class=svm_get_nr_class(model);
+ double *prob_estimates=NULL;
+ int j;
+
+ if(predict_probability)
+ {
+ if (svm_type==NU_SVR || svm_type==EPSILON_SVR)
+ info("Prob. model for test data: target value = predicted value + z,\nz: Laplace distribution e^(-|z|/sigma)/(2sigma),sigma=%g\n",svm_get_svr_probability(model));
+ else
+ {
+ int *labels=(int *) malloc(nr_class*sizeof(int));
+ svm_get_labels(model,labels);
+ prob_estimates = (double *) malloc(nr_class*sizeof(double));
+ fprintf(output,"labels");
+			for(j=0;j<nr_class;j++)
+				fprintf(output," %d",labels[j]);
+			fprintf(output,"\n");
+			free(labels);
+		}
+	}
+
+	max_line_len = 1024;
+	line = (char *)malloc(max_line_len*sizeof(char));
+	while(readline(input) != NULL)
+	{
+		int i = 0;
+		double target_label, predict_label;
+		char *idx, *val, *label, *endptr;
+		int inst_max_index = -1; // strtol gives 0 if wrong format, and precomputed kernel has <index> start from 0
+
+ label = strtok(line," \t\n");
+ if(label == NULL) // empty line
+ exit_input_error(total+1);
+
+ target_label = strtod(label,&endptr);
+ if(endptr == label || *endptr != '\0')
+ exit_input_error(total+1);
+
+ while(1)
+ {
+ if(i>=max_nr_attr-1) // need one more for index = -1
+ {
+ max_nr_attr *= 2;
+ x = (struct svm_node *) realloc(x,max_nr_attr*sizeof(struct svm_node));
+ }
+
+ idx = strtok(NULL,":");
+ val = strtok(NULL," \t");
+
+ if(val == NULL)
+ break;
+ errno = 0;
+ x[i].index = (int) strtol(idx,&endptr,10);
+ if(endptr == idx || errno != 0 || *endptr != '\0' || x[i].index <= inst_max_index)
+ exit_input_error(total+1);
+ else
+ inst_max_index = x[i].index;
+
+ errno = 0;
+ x[i].value = strtod(val,&endptr);
+ if(endptr == val || errno != 0 || (*endptr != '\0' && !isspace(*endptr)))
+ exit_input_error(total+1);
+
+ ++i;
+ }
+ x[i].index = -1;
+
+ if (predict_probability && (svm_type==C_SVC || svm_type==NU_SVC))
+ {
+ predict_label = svm_predict_probability(model,x,prob_estimates);
+ fprintf(output,"%g",predict_label);
+			for(j=0;j<nr_class;j++)
+				fprintf(output," %g",prob_estimates[j]);
+			fprintf(output,"\n");
+		}
+		else
+		{
+			predict_label = svm_predict(model,x);
+			fprintf(output,"%.17g\n",predict_label);
+		}
+
+		if(predict_label == target_label)
+			++correct;
+		error += (predict_label-target_label)*(predict_label-target_label);
+		sump += predict_label;
+		sumt += target_label;
+		sumpp += predict_label*predict_label;
+		sumtt += target_label*target_label;
+		sumpt += predict_label*target_label;
+		++total;
+	}
+	if (svm_type==NU_SVR || svm_type==EPSILON_SVR)
+	{
+		info("Mean squared error = %g (regression)\n",error/total);
+		info("Squared correlation coefficient = %g (regression)\n",
+			((total*sumpt-sump*sumt)*(total*sumpt-sump*sumt))/
+			((total*sumpp-sump*sump)*(total*sumtt-sumt*sumt))
+			);
+	}
+	else
+		info("Accuracy = %g%% (%d/%d) (classification)\n",
+			correct/(double)total*100,correct,total);
+	if(predict_probability)
+		free(prob_estimates);
+}
+
+void exit_with_help()
+{
+	printf(
+	"Usage: svm-predict [options] test_file model_file output_file\n"
+	"options:\n"
+	"-b probability_estimates: whether to predict probability estimates, 0 or 1 (default 0); for one-class SVM only 0 is supported\n"
+	"-q : quiet mode (no outputs)\n"
+	);
+	exit(1);
+}
+
+int main(int argc, char **argv)
+{
+	FILE *input, *output;
+	int i;
+	// parse options
+	for(i=1;i<argc;i++)
+	{
+		if(argv[i][0] != '-') break;
+		++i;
+		switch(argv[i-1][1])
+		{
+			case 'b':
+				predict_probability = atoi(argv[i]);
+				break;
+			case 'q':
+				info = &print_null;
+				i--;
+				break;
+			default:
+				fprintf(stderr,"Unknown option: -%c\n", argv[i-1][1]);
+				exit_with_help();
+		}
+	}
+
+	if(i>=argc-2)
+ exit_with_help();
+
+ input = fopen(argv[i],"r");
+ if(input == NULL)
+ {
+ fprintf(stderr,"can't open input file %s\n",argv[i]);
+ exit(1);
+ }
+
+ output = fopen(argv[i+2],"w");
+ if(output == NULL)
+ {
+ fprintf(stderr,"can't open output file %s\n",argv[i+2]);
+ exit(1);
+ }
+
+ if((model=svm_load_model(argv[i+1]))==0)
+ {
+ fprintf(stderr,"can't open model file %s\n",argv[i+1]);
+ exit(1);
+ }
+
+ x = (struct svm_node *) malloc(max_nr_attr*sizeof(struct svm_node));
+ if(predict_probability)
+ {
+ if(svm_check_probability_model(model)==0)
+ {
+			fprintf(stderr,"Model does not support probability estimates\n");
+ exit(1);
+ }
+ }
+ else
+ {
+ if(svm_check_probability_model(model)!=0)
+ info("Model supports probability estimates, but disabled in prediction.\n");
+ }
+
+ predict(input,output);
+ svm_free_and_destroy_model(&model);
+ free(x);
+ free(line);
+ fclose(input);
+ fclose(output);
+ return 0;
+}
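The `strtok` loop in `predict()` above walks a test line of the form `label idx:val idx:val ...`, growing the node array as needed and rejecting non-increasing indices. A hedged pure-Python sketch of the same per-line parse (a hypothetical helper for illustration, not part of the C tool) looks like this:

```python
def parse_libsvm_line(line):
    """Parse 'label idx:val idx:val ...'; indices must strictly increase."""
    tokens = line.split()
    if not tokens:
        raise ValueError("empty line")
    label = float(tokens[0])
    features = {}
    prev_index = -1  # mirrors inst_max_index in svm-predict.c
    for tok in tokens[1:]:
        idx_str, val_str = tok.split(":")
        idx = int(idx_str)
        if idx <= prev_index:
            raise ValueError("indices must be in strictly increasing order")
        prev_index = idx
        features[idx] = float(val_str)
    return label, features
```

For example, `parse_libsvm_line("+1 1:0.5 3:-2")` returns the label `1.0` and the sparse feature map `{1: 0.5, 3: -2.0}`, while out-of-order indices raise an error, just as `exit_input_error` aborts in the C code.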
diff --git a/libsvm-3.21/svm-scale.c b/libsvm-3.21/svm-scale.c
new file mode 100644
index 0000000..197537b
--- /dev/null
+++ b/libsvm-3.21/svm-scale.c
@@ -0,0 +1,397 @@
+#include <float.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+
+void exit_with_help()
+{
+ printf(
+ "Usage: svm-scale [options] data_filename\n"
+ "options:\n"
+ "-l lower : x scaling lower limit (default -1)\n"
+ "-u upper : x scaling upper limit (default +1)\n"
+ "-y y_lower y_upper : y scaling limits (default: no y scaling)\n"
+ "-s save_filename : save scaling parameters to save_filename\n"
+ "-r restore_filename : restore scaling parameters from restore_filename\n"
+ );
+ exit(1);
+}
+
+char *line = NULL;
+int max_line_len = 1024;
+double lower=-1.0,upper=1.0,y_lower,y_upper;
+int y_scaling = 0;
+double *feature_max;
+double *feature_min;
+double y_max = -DBL_MAX;
+double y_min = DBL_MAX;
+int max_index;
+int min_index;
+long int num_nonzeros = 0;
+long int new_num_nonzeros = 0;
+
+#define max(x,y) (((x)>(y))?(x):(y))
+#define min(x,y) (((x)<(y))?(x):(y))
+
+void output_target(double value);
+void output(int index, double value);
+char* readline(FILE *input);
+int clean_up(FILE *fp_restore, FILE *fp, const char *msg);
+
+int main(int argc,char **argv)
+{
+ int i,index;
+ FILE *fp, *fp_restore = NULL;
+ char *save_filename = NULL;
+ char *restore_filename = NULL;
+
+	for(i=1;i<argc;i++)
+	{
+		if(argv[i][0] != '-') break;
+		++i;
+		switch(argv[i-1][1])
+		{
+			case 'l': lower = atof(argv[i]); break;
+			case 'u': upper = atof(argv[i]); break;
+			case 'y':
+				y_lower = atof(argv[i]);
+				++i;
+				y_upper = atof(argv[i]);
+				y_scaling = 1;
+				break;
+			case 's': save_filename = argv[i]; break;
+			case 'r': restore_filename = argv[i]; break;
+			default:
+				fprintf(stderr,"unknown option\n");
+				exit_with_help();
+		}
+	}
+
+	if(!(upper > lower) || (y_scaling && !(y_upper > y_lower)))
+ {
+ fprintf(stderr,"inconsistent lower/upper specification\n");
+ exit(1);
+ }
+
+ if(restore_filename && save_filename)
+ {
+ fprintf(stderr,"cannot use -r and -s simultaneously\n");
+ exit(1);
+ }
+
+ if(argc != i+1)
+ exit_with_help();
+
+ fp=fopen(argv[i],"r");
+
+ if(fp==NULL)
+ {
+ fprintf(stderr,"can't open file %s\n", argv[i]);
+ exit(1);
+ }
+
+ line = (char *) malloc(max_line_len*sizeof(char));
+
+#define SKIP_TARGET\
+ while(isspace(*p)) ++p;\
+ while(!isspace(*p)) ++p;
+
+#define SKIP_ELEMENT\
+ while(*p!=':') ++p;\
+ ++p;\
+ while(isspace(*p)) ++p;\
+ while(*p && !isspace(*p)) ++p;
+
+ /* assumption: min index of attributes is 1 */
+ /* pass 1: find out max index of attributes */
+ max_index = 0;
+ min_index = 1;
+
+ if(restore_filename)
+ {
+ int idx, c;
+
+ fp_restore = fopen(restore_filename,"r");
+ if(fp_restore==NULL)
+ {
+ fprintf(stderr,"can't open file %s\n", restore_filename);
+ exit(1);
+ }
+
+ c = fgetc(fp_restore);
+ if(c == 'y')
+ {
+ readline(fp_restore);
+ readline(fp_restore);
+ readline(fp_restore);
+ }
+ readline(fp_restore);
+ readline(fp_restore);
+
+ while(fscanf(fp_restore,"%d %*f %*f\n",&idx) == 1)
+ max_index = max(idx,max_index);
+ rewind(fp_restore);
+ }
+
+ while(readline(fp)!=NULL)
+ {
+ char *p=line;
+
+ SKIP_TARGET
+
+ while(sscanf(p,"%d:%*f",&index)==1)
+ {
+ max_index = max(max_index, index);
+ min_index = min(min_index, index);
+ SKIP_ELEMENT
+ num_nonzeros++;
+ }
+ }
+
+ if(min_index < 1)
+ fprintf(stderr,
+ "WARNING: minimal feature index is %d, but indices should start from 1\n", min_index);
+
+ rewind(fp);
+
+ feature_max = (double *)malloc((max_index+1)* sizeof(double));
+ feature_min = (double *)malloc((max_index+1)* sizeof(double));
+
+ if(feature_max == NULL || feature_min == NULL)
+ {
+ fprintf(stderr,"can't allocate enough memory\n");
+ exit(1);
+ }
+
+ for(i=0;i<=max_index;i++)
+ {
+ feature_max[i]=-DBL_MAX;
+ feature_min[i]=DBL_MAX;
+ }
+
+ /* pass 2: find out min/max value */
+ while(readline(fp)!=NULL)
+ {
+ char *p=line;
+ int next_index=1;
+ double target;
+ double value;
+
+ if (sscanf(p,"%lf",&target) != 1)
+ return clean_up(fp_restore, fp, "ERROR: failed to read labels\n");
+ y_max = max(y_max,target);
+ y_min = min(y_min,target);
+
+ SKIP_TARGET
+
+ while(sscanf(p,"%d:%lf",&index,&value)==2)
+ {
+			for(i=next_index;i<index;i++)
+			{
+				feature_max[i]=max(feature_max[i],0);
+				feature_min[i]=min(feature_min[i],0);
+			}
+
+			feature_max[index]=max(feature_max[index],value);
+			feature_min[index]=min(feature_min[index],value);
+
+			SKIP_ELEMENT
+			next_index=index+1;
+		}
+
+		for(i=next_index;i<=max_index;i++)
+		{
+			feature_max[i]=max(feature_max[i],0);
+			feature_min[i]=min(feature_min[i],0);
+		}
+	}
+
+	rewind(fp);
+
+	/* pass 2.5: save/restore feature_min/feature_max */
+	if(restore_filename)
+	{
+		/* fp_restore was rewound after finding max_index */
+		int idx, c;
+		double fmin, fmax;
+
+		if((c = fgetc(fp_restore)) == 'y')
+		{
+			if(fscanf(fp_restore, "%lf %lf\n", &y_lower, &y_upper) != 2 ||
+			   fscanf(fp_restore, "%lf %lf\n", &y_min, &y_max) != 2)
+				return clean_up(fp_restore, fp, "ERROR: failed to read scaling parameters\n");
+			y_scaling = 1;
+		}
+		else
+			ungetc(c, fp_restore);
+
+		if (fgetc(fp_restore) == 'x')
+		{
+			if(fscanf(fp_restore, "%lf %lf\n", &lower, &upper) != 2)
+				return clean_up(fp_restore, fp, "ERROR: failed to read scaling parameters\n");
+			while(fscanf(fp_restore,"%d %lf %lf\n",&idx,&fmin,&fmax)==3)
+			{
+				feature_min[idx] = fmin;
+				feature_max[idx] = fmax;
+			}
+		}
+		rewind(fp_restore);
+	}
+
+	if(save_filename)
+	{
+		FILE *fp_save = fopen(save_filename,"w");
+		if(fp_save==NULL)
+		{
+			fprintf(stderr,"can't open file %s\n", save_filename);
+			exit(1);
+		}
+		if(y_scaling)
+		{
+			fprintf(fp_save, "y\n");
+			fprintf(fp_save, "%.17g %.17g\n", y_lower, y_upper);
+			fprintf(fp_save, "%.17g %.17g\n", y_min, y_max);
+		}
+		fprintf(fp_save, "x\n");
+		fprintf(fp_save, "%.17g %.17g\n", lower, upper);
+		for(i=1;i<=max_index;i++)
+		{
+			if(feature_min[i]!=feature_max[i])
+				fprintf(fp_save,"%d %.17g %.17g\n",i,feature_min[i],feature_max[i]);
+		}
+		fclose(fp_save);
+	}
+
+	/* pass 3: scale */
+	while(readline(fp)!=NULL)
+	{
+		char *p=line;
+		int next_index=1;
+		double target;
+		double value;
+
+		if (sscanf(p,"%lf",&target) != 1)
+			return clean_up(NULL, fp, "ERROR: failed to read labels\n");
+		output_target(target);
+
+		SKIP_TARGET
+
+		while(sscanf(p,"%d:%lf",&index,&value)==2)
+		{
+			for(i=next_index;i<index;i++)
+				output(i,0);
+
+			output(index,value);
+
+			SKIP_ELEMENT
+			next_index=index+1;
+		}
+
+		for(i=next_index;i<=max_index;i++)
+			output(i,0);
+
+		printf("\n");
+	}
+
+	if(new_num_nonzeros > num_nonzeros)
+ fprintf(stderr,
+ "WARNING: original #nonzeros %ld\n"
+ " > new #nonzeros %ld\n"
+ "If feature values are non-negative and sparse, use -l 0 rather than the default -l -1\n",
+ num_nonzeros, new_num_nonzeros);
+
+ free(line);
+ free(feature_max);
+ free(feature_min);
+ fclose(fp);
+ return 0;
+}
+
+char* readline(FILE *input)
+{
+ int len;
+
+ if(fgets(line,max_line_len,input) == NULL)
+ return NULL;
+
+ while(strrchr(line,'\n') == NULL)
+ {
+ max_line_len *= 2;
+ line = (char *) realloc(line, max_line_len);
+ len = (int) strlen(line);
+ if(fgets(line+len,max_line_len-len,input) == NULL)
+ break;
+ }
+ return line;
+}
+
+void output_target(double value)
+{
+ if(y_scaling)
+ {
+ if(value == y_min)
+ value = y_lower;
+ else if(value == y_max)
+ value = y_upper;
+ else value = y_lower + (y_upper-y_lower) *
+ (value - y_min)/(y_max-y_min);
+ }
+ printf("%g ",value);
+}
+
+void output(int index, double value)
+{
+ /* skip single-valued attribute */
+ if(feature_max[index] == feature_min[index])
+ return;
+
+ if(value == feature_min[index])
+ value = lower;
+ else if(value == feature_max[index])
+ value = upper;
+ else
+ value = lower + (upper-lower) *
+ (value-feature_min[index])/
+ (feature_max[index]-feature_min[index]);
+
+ if(value != 0)
+ {
+ printf("%d:%g ",index, value);
+ new_num_nonzeros++;
+ }
+}
+
+int clean_up(FILE *fp_restore, FILE *fp, const char* msg)
+{
+ fprintf(stderr, "%s", msg);
+ free(line);
+ free(feature_max);
+ free(feature_min);
+ fclose(fp);
+ if (fp_restore)
+ fclose(fp_restore);
+ return -1;
+}
+
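The transformation applied by `output()` and `output_target()` in svm-scale.c is plain min-max scaling into `[lower, upper]`, with the feature's extremes pinned exactly to the limits so floating-point round-off cannot push them outside the range. A minimal sketch of the same mapping (`scale_value` is a hypothetical helper, not part of svm-scale.c):

```python
def scale_value(value, fmin, fmax, lower=-1.0, upper=1.0):
    """Min-max scale value from [fmin, fmax] into [lower, upper]."""
    if fmin == fmax:      # single-valued attribute: svm-scale skips it
        return None
    if value == fmin:     # pin the endpoints exactly, as output() does
        return lower
    if value == fmax:
        return upper
    return lower + (upper - lower) * (value - fmin) / (fmax - fmin)
```

With the defaults, a feature observed in [0, 10] maps 0 to -1, 10 to +1, and 5 to 0. This is also why the final warning above suggests `-l 0` for sparse non-negative data: with `-l -1`, zeros map to -1 and the output loses its sparsity.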
diff --git a/libsvm-3.21/svm-toy/gtk/Makefile b/libsvm-3.21/svm-toy/gtk/Makefile
new file mode 100644
index 0000000..673174f
--- /dev/null
+++ b/libsvm-3.21/svm-toy/gtk/Makefile
@@ -0,0 +1,22 @@
+CC ?= gcc
+CXX ?= g++
+CFLAGS = -Wall -O3 -g `pkg-config --cflags gtk+-2.0`
+LIBS = `pkg-config --libs gtk+-2.0`
+
+svm-toy: main.o interface.o callbacks.o ../../svm.o
+ $(CXX) $(CFLAGS) main.o interface.o callbacks.o ../../svm.o -o svm-toy $(LIBS)
+
+main.o: main.c
+ $(CC) $(CFLAGS) -c main.c
+
+interface.o: interface.c interface.h
+ $(CC) $(CFLAGS) -c interface.c
+
+callbacks.o: callbacks.cpp callbacks.h
+ $(CXX) $(CFLAGS) -c callbacks.cpp
+
+../../svm.o: ../../svm.cpp ../../svm.h
+ make -C ../.. svm.o
+
+clean:
+	rm -f *~ svm-toy main.o interface.o callbacks.o ../../svm.o
diff --git a/libsvm-3.21/svm-toy/gtk/callbacks.cpp b/libsvm-3.21/svm-toy/gtk/callbacks.cpp
new file mode 100644
index 0000000..7828611
--- /dev/null
+++ b/libsvm-3.21/svm-toy/gtk/callbacks.cpp
@@ -0,0 +1,447 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include <list>
+#include <gtk/gtk.h>
+#include "callbacks.h"
+#include "interface.h"
+#include "../../svm.h"
+using namespace std;
+
+#define DEFAULT_PARAM "-t 2 -c 100"
+#define XLEN 500
+#define YLEN 500
+
+GdkColor colors[] =
+{
+ {0,0,0,0},
+ {0,0,120<<8,120<<8},
+ {0,120<<8,120<<8,0},
+ {0,120<<8,0,120<<8},
+ {0,0,200<<8,200<<8},
+ {0,200<<8,200<<8,0},
+ {0,200<<8,0,200<<8},
+};
+
+GdkGC *gc;
+GdkPixmap *pixmap;
+extern "C" GtkWidget *draw_main;
+GtkWidget *draw_main;
+extern "C" GtkWidget *entry_option;
+GtkWidget *entry_option;
+
+typedef struct {
+ double x, y;
+ signed char value;
+} point;
+
+list<point> point_list;
+int current_value = 1;
+
+extern "C" void svm_toy_initialize()
+{
+ gboolean success[7];
+
+ gdk_colormap_alloc_colors(
+ gdk_colormap_get_system(),
+ colors,
+ 7,
+ FALSE,
+ TRUE,
+ success);
+
+ gc = gdk_gc_new(draw_main->window);
+ pixmap = gdk_pixmap_new(draw_main->window,XLEN,YLEN,-1);
+ gdk_gc_set_foreground(gc,&colors[0]);
+ gdk_draw_rectangle(pixmap,gc,TRUE,0,0,XLEN,YLEN);
+ gtk_entry_set_text(GTK_ENTRY(entry_option),DEFAULT_PARAM);
+}
+
+void redraw_area(GtkWidget* widget, int x, int y, int w, int h)
+{
+ gdk_draw_pixmap(widget->window,
+ gc,
+ pixmap,
+ x,y,x,y,w,h);
+}
+
+void draw_point(const point& p)
+{
+ gdk_gc_set_foreground(gc,&colors[p.value+3]);
+ gdk_draw_rectangle(pixmap, gc, TRUE,int(p.x*XLEN),int(p.y*YLEN),4,4);
+ gdk_draw_rectangle(draw_main->window, gc, TRUE,int(p.x*XLEN),int(p.y*YLEN),4,4);
+}
+
+void draw_all_points()
+{
+	for(list<point>::iterator p = point_list.begin(); p != point_list.end(); p++)
+ draw_point(*p);
+}
+
+void clear_all()
+{
+ point_list.clear();
+ gdk_gc_set_foreground(gc,&colors[0]);
+ gdk_draw_rectangle(pixmap,gc,TRUE,0,0,XLEN,YLEN);
+ redraw_area(draw_main,0,0,XLEN,YLEN);
+}
+
+void
+on_button_change_clicked (GtkButton *button,
+ gpointer user_data)
+{
+ ++current_value;
+ if(current_value > 3) current_value = 1;
+}
+
+void
+on_button_run_clicked (GtkButton *button,
+ gpointer user_data)
+{
+ // guard
+ if(point_list.empty()) return;
+
+ svm_parameter param;
+ int i,j;
+
+ // default values
+ param.svm_type = C_SVC;
+ param.kernel_type = RBF;
+ param.degree = 3;
+ param.gamma = 0;
+ param.coef0 = 0;
+ param.nu = 0.5;
+ param.cache_size = 100;
+ param.C = 1;
+ param.eps = 1e-3;
+ param.p = 0.1;
+ param.shrinking = 1;
+ param.probability = 0;
+ param.nr_weight = 0;
+ param.weight_label = NULL;
+ param.weight = NULL;
+
+ // parse options
+ const char *p = gtk_entry_get_text(GTK_ENTRY(entry_option));
+
+ while (1) {
+ while (*p && *p != '-')
+ p++;
+
+ if (*p == '\0')
+ break;
+
+ p++;
+ switch (*p++) {
+ case 's':
+ param.svm_type = atoi(p);
+ break;
+ case 't':
+ param.kernel_type = atoi(p);
+ break;
+ case 'd':
+ param.degree = atoi(p);
+ break;
+ case 'g':
+ param.gamma = atof(p);
+ break;
+ case 'r':
+ param.coef0 = atof(p);
+ break;
+ case 'n':
+ param.nu = atof(p);
+ break;
+ case 'm':
+ param.cache_size = atof(p);
+ break;
+ case 'c':
+ param.C = atof(p);
+ break;
+ case 'e':
+ param.eps = atof(p);
+ break;
+ case 'p':
+ param.p = atof(p);
+ break;
+ case 'h':
+ param.shrinking = atoi(p);
+ break;
+ case 'b':
+ param.probability = atoi(p);
+ break;
+ case 'w':
+ ++param.nr_weight;
+ param.weight_label = (int *)realloc(param.weight_label,sizeof(int)*param.nr_weight);
+ param.weight = (double *)realloc(param.weight,sizeof(double)*param.nr_weight);
+ param.weight_label[param.nr_weight-1] = atoi(p);
+ while(*p && !isspace(*p)) ++p;
+ param.weight[param.nr_weight-1] = atof(p);
+ break;
+ }
+ }
+
+ // build problem
+ svm_problem prob;
+
+ prob.l = point_list.size();
+ prob.y = new double[prob.l];
+
+ if(param.kernel_type == PRECOMPUTED)
+ {
+ }
+ else if(param.svm_type == EPSILON_SVR ||
+ param.svm_type == NU_SVR)
+ {
+ if(param.gamma == 0) param.gamma = 1;
+ svm_node *x_space = new svm_node[2 * prob.l];
+ prob.x = new svm_node *[prob.l];
+
+ i = 0;
+		for (list<point>::iterator q = point_list.begin(); q != point_list.end(); q++, i++)
+ {
+ x_space[2 * i].index = 1;
+ x_space[2 * i].value = q->x;
+ x_space[2 * i + 1].index = -1;
+ prob.x[i] = &x_space[2 * i];
+ prob.y[i] = q->y;
+ }
+
+ // build model & classify
+ svm_model *model = svm_train(&prob, ¶m);
+ svm_node x[2];
+ x[0].index = 1;
+ x[1].index = -1;
+ int *j = new int[XLEN];
+
+ for (i = 0; i < XLEN; i++)
+ {
+ x[0].value = (double) i / XLEN;
+ j[i] = (int)(YLEN*svm_predict(model, x));
+ }
+
+ gdk_gc_set_foreground(gc,&colors[0]);
+ gdk_draw_line(pixmap,gc,0,0,0,YLEN-1);
+ gdk_draw_line(draw_main->window,gc,0,0,0,YLEN-1);
+
+ int p = (int)(param.p * YLEN);
+ for(i = 1; i < XLEN; i++)
+ {
+ gdk_gc_set_foreground(gc,&colors[0]);
+ gdk_draw_line(pixmap,gc,i,0,i,YLEN-1);
+ gdk_draw_line(draw_main->window,gc,i,0,i,YLEN-1);
+
+ gdk_gc_set_foreground(gc,&colors[5]);
+ gdk_draw_line(pixmap,gc,i-1,j[i-1],i,j[i]);
+ gdk_draw_line(draw_main->window,gc,i-1,j[i-1],i,j[i]);
+
+ if(param.svm_type == EPSILON_SVR)
+ {
+ gdk_gc_set_foreground(gc,&colors[2]);
+ gdk_draw_line(pixmap,gc,i-1,j[i-1]+p,i,j[i]+p);
+ gdk_draw_line(draw_main->window,gc,i-1,j[i-1]+p,i,j[i]+p);
+
+ gdk_gc_set_foreground(gc,&colors[2]);
+ gdk_draw_line(pixmap,gc,i-1,j[i-1]-p,i,j[i]-p);
+ gdk_draw_line(draw_main->window,gc,i-1,j[i-1]-p,i,j[i]-p);
+ }
+ }
+
+ svm_free_and_destroy_model(&model);
+ delete[] j;
+ delete[] x_space;
+ delete[] prob.x;
+ delete[] prob.y;
+ }
+ else
+ {
+ if(param.gamma == 0) param.gamma = 0.5;
+ svm_node *x_space = new svm_node[3 * prob.l];
+ prob.x = new svm_node *[prob.l];
+
+ i = 0;
+		for (list<point>::iterator q = point_list.begin(); q != point_list.end(); q++, i++)
+ {
+ x_space[3 * i].index = 1;
+ x_space[3 * i].value = q->x;
+ x_space[3 * i + 1].index = 2;
+ x_space[3 * i + 1].value = q->y;
+ x_space[3 * i + 2].index = -1;
+ prob.x[i] = &x_space[3 * i];
+ prob.y[i] = q->value;
+ }
+
+ // build model & classify
+ svm_model *model = svm_train(&prob, ¶m);
+ svm_node x[3];
+ x[0].index = 1;
+ x[1].index = 2;
+ x[2].index = -1;
+
+ for (i = 0; i < XLEN; i++)
+ for (j = 0; j < YLEN; j++) {
+ x[0].value = (double) i / XLEN;
+ x[1].value = (double) j / YLEN;
+ double d = svm_predict(model, x);
+ if (param.svm_type == ONE_CLASS && d<0) d=2;
+ gdk_gc_set_foreground(gc,&colors[(int)d]);
+ gdk_draw_point(pixmap,gc,i,j);
+ gdk_draw_point(draw_main->window,gc,i,j);
+ }
+
+ svm_free_and_destroy_model(&model);
+ delete[] x_space;
+ delete[] prob.x;
+ delete[] prob.y;
+ }
+ free(param.weight_label);
+ free(param.weight);
+ draw_all_points();
+}
+
+void
+on_button_clear_clicked (GtkButton *button,
+ gpointer user_data)
+{
+ clear_all();
+}
+
+void
+on_window1_destroy (GtkObject *object,
+ gpointer user_data)
+{
+ gtk_exit(0);
+}
+
+gboolean
+on_draw_main_button_press_event (GtkWidget *widget,
+ GdkEventButton *event,
+ gpointer user_data)
+{
+ point p = {(double)event->x/XLEN, (double)event->y/YLEN, current_value};
+ point_list.push_back(p);
+ draw_point(p);
+ return FALSE;
+}
+
+gboolean
+on_draw_main_expose_event (GtkWidget *widget,
+ GdkEventExpose *event,
+ gpointer user_data)
+{
+ redraw_area(widget,
+ event->area.x, event->area.y,
+ event->area.width, event->area.height);
+ return FALSE;
+}
+
+GtkWidget *fileselection;
+static enum { SAVE, LOAD } fileselection_flag;
+
+void show_fileselection()
+{
+ fileselection = create_fileselection();
+ gtk_signal_connect_object(
+ GTK_OBJECT(GTK_FILE_SELECTION(fileselection)->ok_button),
+ "clicked", GTK_SIGNAL_FUNC(gtk_widget_destroy),
+ (GtkObject *) fileselection);
+
+ gtk_signal_connect_object (GTK_OBJECT
+ (GTK_FILE_SELECTION(fileselection)->cancel_button),
+ "clicked", GTK_SIGNAL_FUNC(gtk_widget_destroy),
+ (GtkObject *) fileselection);
+
+ gtk_widget_show(fileselection);
+}
+
+void
+on_button_save_clicked (GtkButton *button,
+ gpointer user_data)
+{
+ fileselection_flag = SAVE;
+ show_fileselection();
+}
+
+
+void
+on_button_load_clicked (GtkButton *button,
+ gpointer user_data)
+{
+ fileselection_flag = LOAD;
+ show_fileselection();
+}
+
+void
+on_filesel_ok_clicked (GtkButton *button,
+ gpointer user_data)
+{
+ gtk_widget_hide(fileselection);
+ const char *filename = gtk_file_selection_get_filename(GTK_FILE_SELECTION(fileselection));
+
+ if(fileselection_flag == SAVE)
+ {
+ FILE *fp = fopen(filename,"w");
+
+ const char *p = gtk_entry_get_text(GTK_ENTRY(entry_option));
+ const char* svm_type_str = strstr(p, "-s ");
+ int svm_type = C_SVC;
+ if(svm_type_str != NULL)
+ sscanf(svm_type_str, "-s %d", &svm_type);
+
+ if(fp)
+ {
+ if(svm_type == EPSILON_SVR || svm_type == NU_SVR)
+ {
+				for(list<point>::iterator p = point_list.begin(); p != point_list.end(); p++)
+ fprintf(fp,"%f 1:%f\n", p->y, p->x);
+ }
+ else
+ {
+				for(list<point>::iterator p = point_list.begin(); p != point_list.end(); p++)
+ fprintf(fp,"%d 1:%f 2:%f\n", p->value, p->x, p->y);
+ }
+ fclose(fp);
+ }
+
+ }
+ else if(fileselection_flag == LOAD)
+ {
+ FILE *fp = fopen(filename,"r");
+ if(fp)
+ {
+ clear_all();
+ char buf[4096];
+ while(fgets(buf,sizeof(buf),fp))
+ {
+ int v;
+ double x,y;
+ if(sscanf(buf,"%d%*d:%lf%*d:%lf",&v,&x,&y)==3)
+ {
+ point p = {x,y,v};
+ point_list.push_back(p);
+ }
+ else if(sscanf(buf,"%lf%*d:%lf",&y,&x)==2)
+ {
+ point p = {x,y,current_value};
+ point_list.push_back(p);
+ }
+ else
+ break;
+ }
+ fclose(fp);
+ draw_all_points();
+ }
+ }
+}
+
+void
+on_fileselection_destroy (GtkObject *object,
+ gpointer user_data)
+{
+}
+
+void
+on_filesel_cancel_clicked (GtkButton *button,
+ gpointer user_data)
+{
+}
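In the classification branch of `on_button_run_clicked` above, the decision regions are rasterized by predicting the class of every pixel at the normalized coordinates `(i/XLEN, j/YLEN)`. The loop structure can be sketched with a stand-in predictor (hypothetical; the real code calls `svm_predict` on the freshly trained model):

```python
XLEN, YLEN = 8, 8  # the toy uses a 500x500 canvas

def predict_stub(x, y):
    """Stand-in for svm_predict: splits the unit square along the diagonal."""
    return 1 if x + y < 1.0 else 2

# One class label per pixel, exactly as the nested i/j loops color the pixmap
grid = [[predict_stub(i / XLEN, j / YLEN) for j in range(YLEN)]
        for i in range(XLEN)]
```

Each grid cell's label indexes the `colors` array; the one-class branch remaps a negative decision (`d < 0`) to color index 2 before drawing.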
diff --git a/libsvm-3.21/svm-toy/gtk/callbacks.h b/libsvm-3.21/svm-toy/gtk/callbacks.h
new file mode 100644
index 0000000..7cb8727
--- /dev/null
+++ b/libsvm-3.21/svm-toy/gtk/callbacks.h
@@ -0,0 +1,54 @@
+#include <gtk/gtk.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void
+on_window1_destroy (GtkObject *object,
+ gpointer user_data);
+
+gboolean
+on_draw_main_button_press_event (GtkWidget *widget,
+ GdkEventButton *event,
+ gpointer user_data);
+
+gboolean
+on_draw_main_expose_event (GtkWidget *widget,
+ GdkEventExpose *event,
+ gpointer user_data);
+
+void
+on_button_change_clicked (GtkButton *button,
+ gpointer user_data);
+
+void
+on_button_run_clicked (GtkButton *button,
+ gpointer user_data);
+
+void
+on_button_clear_clicked (GtkButton *button,
+ gpointer user_data);
+
+void
+on_button_save_clicked (GtkButton *button,
+ gpointer user_data);
+
+void
+on_button_load_clicked (GtkButton *button,
+ gpointer user_data);
+
+void
+on_fileselection_destroy (GtkObject *object,
+ gpointer user_data);
+
+void
+on_filesel_ok_clicked (GtkButton *button,
+ gpointer user_data);
+
+void
+on_filesel_cancel_clicked (GtkButton *button,
+ gpointer user_data);
+#ifdef __cplusplus
+}
+#endif
diff --git a/libsvm-3.21/svm-toy/gtk/interface.c b/libsvm-3.21/svm-toy/gtk/interface.c
new file mode 100644
index 0000000..b3815eb
--- /dev/null
+++ b/libsvm-3.21/svm-toy/gtk/interface.c
@@ -0,0 +1,164 @@
+/*
+ * DO NOT EDIT THIS FILE - it is generated by Glade.
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <unistd.h>
+#include <string.h>
+
+#include <gdk/gdkkeysyms.h>
+#include <gtk/gtk.h>
+
+#include "callbacks.h"
+#include "interface.h"
+
+GtkWidget*
+create_window (void)
+{
+ GtkWidget *window;
+ GtkWidget *vbox1;
+ extern GtkWidget *draw_main;
+ GtkWidget *hbox1;
+ GtkWidget *button_change;
+ GtkWidget *button_run;
+ GtkWidget *button_clear;
+ GtkWidget *button_save;
+ GtkWidget *button_load;
+ extern GtkWidget *entry_option;
+
+ window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
+ gtk_object_set_data (GTK_OBJECT (window), "window", window);
+ gtk_window_set_title (GTK_WINDOW (window), "SVM Toy");
+
+ vbox1 = gtk_vbox_new (FALSE, 0);
+ gtk_widget_ref (vbox1);
+ gtk_object_set_data_full (GTK_OBJECT (window), "vbox1", vbox1,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (vbox1);
+ gtk_container_add (GTK_CONTAINER (window), vbox1);
+
+ draw_main = gtk_drawing_area_new ();
+ gtk_widget_ref (draw_main);
+ gtk_object_set_data_full (GTK_OBJECT (window), "draw_main", draw_main,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (draw_main);
+ gtk_box_pack_start (GTK_BOX (vbox1), draw_main, TRUE, TRUE, 0);
+ gtk_widget_set_usize (draw_main, 500, 500);
+ gtk_widget_set_events (draw_main, GDK_EXPOSURE_MASK | GDK_BUTTON_PRESS_MASK);
+
+ hbox1 = gtk_hbox_new (FALSE, 0);
+ gtk_widget_ref (hbox1);
+ gtk_object_set_data_full (GTK_OBJECT (window), "hbox1", hbox1,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (hbox1);
+ gtk_box_pack_start (GTK_BOX (vbox1), hbox1, FALSE, FALSE, 0);
+
+ button_change = gtk_button_new_with_label ("Change");
+ gtk_widget_ref (button_change);
+ gtk_object_set_data_full (GTK_OBJECT (window), "button_change", button_change,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (button_change);
+ gtk_box_pack_start (GTK_BOX (hbox1), button_change, FALSE, FALSE, 0);
+
+ button_run = gtk_button_new_with_label ("Run");
+ gtk_widget_ref (button_run);
+ gtk_object_set_data_full (GTK_OBJECT (window), "button_run", button_run,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (button_run);
+ gtk_box_pack_start (GTK_BOX (hbox1), button_run, FALSE, FALSE, 0);
+
+ button_clear = gtk_button_new_with_label ("Clear");
+ gtk_widget_ref (button_clear);
+ gtk_object_set_data_full (GTK_OBJECT (window), "button_clear", button_clear,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (button_clear);
+ gtk_box_pack_start (GTK_BOX (hbox1), button_clear, FALSE, FALSE, 0);
+
+ button_save = gtk_button_new_with_label ("Save");
+ gtk_widget_ref (button_save);
+ gtk_object_set_data_full (GTK_OBJECT (window), "button_save", button_save,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (button_save);
+ gtk_box_pack_start (GTK_BOX (hbox1), button_save, FALSE, FALSE, 0);
+
+ button_load = gtk_button_new_with_label ("Load");
+ gtk_widget_ref (button_load);
+ gtk_object_set_data_full (GTK_OBJECT (window), "button_load", button_load,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (button_load);
+ gtk_box_pack_start (GTK_BOX (hbox1), button_load, FALSE, FALSE, 0);
+
+ entry_option = gtk_entry_new ();
+ gtk_widget_ref (entry_option);
+ gtk_object_set_data_full (GTK_OBJECT (window), "entry_option", entry_option,
+ (GtkDestroyNotify) gtk_widget_unref);
+ gtk_widget_show (entry_option);
+ gtk_box_pack_start (GTK_BOX (hbox1), entry_option, TRUE, TRUE, 0);
+
+ gtk_signal_connect (GTK_OBJECT (window), "destroy",
+ GTK_SIGNAL_FUNC (on_window1_destroy),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (draw_main), "button_press_event",
+ GTK_SIGNAL_FUNC (on_draw_main_button_press_event),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (draw_main), "expose_event",
+ GTK_SIGNAL_FUNC (on_draw_main_expose_event),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (button_change), "clicked",
+ GTK_SIGNAL_FUNC (on_button_change_clicked),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (button_run), "clicked",
+ GTK_SIGNAL_FUNC (on_button_run_clicked),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (button_clear), "clicked",
+ GTK_SIGNAL_FUNC (on_button_clear_clicked),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (button_save), "clicked",
+ GTK_SIGNAL_FUNC (on_button_save_clicked),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (button_load), "clicked",
+ GTK_SIGNAL_FUNC (on_button_load_clicked),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (entry_option), "activate",
+ GTK_SIGNAL_FUNC (on_button_run_clicked),
+ NULL);
+
+ return window;
+}
+
+GtkWidget*
+create_fileselection (void)
+{
+ GtkWidget *fileselection;
+ GtkWidget *filesel_ok;
+ GtkWidget *filesel_cancel;
+
+ fileselection = gtk_file_selection_new ("Select File");
+ gtk_object_set_data (GTK_OBJECT (fileselection), "fileselection", fileselection);
+ gtk_container_set_border_width (GTK_CONTAINER (fileselection), 10);
+ gtk_window_set_modal (GTK_WINDOW (fileselection), TRUE);
+
+ filesel_ok = GTK_FILE_SELECTION (fileselection)->ok_button;
+ gtk_object_set_data (GTK_OBJECT (fileselection), "filesel_ok", filesel_ok);
+ gtk_widget_show (filesel_ok);
+ GTK_WIDGET_SET_FLAGS (filesel_ok, GTK_CAN_DEFAULT);
+
+ filesel_cancel = GTK_FILE_SELECTION (fileselection)->cancel_button;
+ gtk_object_set_data (GTK_OBJECT (fileselection), "filesel_cancel", filesel_cancel);
+ gtk_widget_show (filesel_cancel);
+ GTK_WIDGET_SET_FLAGS (filesel_cancel, GTK_CAN_DEFAULT);
+
+ gtk_signal_connect (GTK_OBJECT (fileselection), "destroy",
+ GTK_SIGNAL_FUNC (on_fileselection_destroy),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (filesel_ok), "clicked",
+ GTK_SIGNAL_FUNC (on_filesel_ok_clicked),
+ NULL);
+ gtk_signal_connect (GTK_OBJECT (filesel_cancel), "clicked",
+ GTK_SIGNAL_FUNC (on_filesel_cancel_clicked),
+ NULL);
+
+ return fileselection;
+}
+
diff --git a/libsvm-3.21/svm-toy/gtk/interface.h b/libsvm-3.21/svm-toy/gtk/interface.h
new file mode 100644
index 0000000..7ca0cbb
--- /dev/null
+++ b/libsvm-3.21/svm-toy/gtk/interface.h
@@ -0,0 +1,14 @@
+/*
+ * DO NOT EDIT THIS FILE - it is generated by Glade.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+GtkWidget* create_window (void);
+GtkWidget* create_fileselection (void);
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/libsvm-3.21/svm-toy/gtk/main.c b/libsvm-3.21/svm-toy/gtk/main.c
new file mode 100644
index 0000000..d9f037d
--- /dev/null
+++ b/libsvm-3.21/svm-toy/gtk/main.c
@@ -0,0 +1,23 @@
+/*
+ * Initial main.c file generated by Glade. Edit as required.
+ * Glade will not overwrite this file.
+ */
+
+#include <gtk/gtk.h>
+#include "interface.h"
+void svm_toy_initialize();
+
+int main (int argc, char *argv[])
+{
+ GtkWidget *window;
+
+ gtk_set_locale ();
+ gtk_init (&argc, &argv);
+
+ window = create_window ();
+ gtk_widget_show (window);
+
+ svm_toy_initialize();
+ gtk_main ();
+ return 0;
+}
diff --git a/libsvm-3.21/svm-toy/gtk/svm-toy.glade b/libsvm-3.21/svm-toy/gtk/svm-toy.glade
new file mode 100644
index 0000000..71d9f41
--- /dev/null
+++ b/libsvm-3.21/svm-toy/gtk/svm-toy.glade
@@ -0,0 +1,238 @@
+
+
+
+
+ svm-toy
+ svm-toy
+
+ src
+ pixmaps
+ C
+ False
+ False
+ False
+ True
+ True
+ True
+ False
+ interface.c
+ interface.h
+ callbacks.c
+ callbacks.h
+ support.c
+ support.h
+
+
+
+
+ GtkWindow
+ window
+
+ destroy
+ on_window1_destroy
+ Sun, 16 Apr 2000 09:47:10 GMT
+
+ SVM Toy
+ GTK_WINDOW_TOPLEVEL
+ GTK_WIN_POS_NONE
+ False
+ False
+ True
+ False
+
+
+ GtkVBox
+ vbox1
+ False
+ 0
+
+
+ GtkDrawingArea
+ draw_main
+ 500
+ 500
+ GDK_EXPOSURE_MASK | GDK_BUTTON_PRESS_MASK
+
+ button_press_event
+ on_draw_main_button_press_event
+ Sun, 16 Apr 2000 13:02:05 GMT
+
+
+ expose_event
+ on_draw_main_expose_event
+ Sun, 16 Apr 2000 14:27:05 GMT
+
+
+ 0
+ True
+ True
+
+
+
+
+ GtkHBox
+ hbox1
+ False
+ 0
+
+ 0
+ False
+ False
+
+
+
+ GtkButton
+ button_change
+ True
+
+ clicked
+ on_button_change_clicked
+ Sun, 16 Apr 2000 09:40:18 GMT
+
+ Change
+
+ 0
+ False
+ False
+
+
+
+
+ GtkButton
+ button_run
+ True
+
+ clicked
+ on_button_run_clicked
+ Sun, 16 Apr 2000 09:40:37 GMT
+
+ Run
+
+ 0
+ False
+ False
+
+
+
+
+ GtkButton
+ button_clear
+ True
+
+ clicked
+ on_button_clear_clicked
+ Sun, 16 Apr 2000 09:40:44 GMT
+
+ Clear
+
+ 0
+ False
+ False
+
+
+
+
+ GtkButton
+ button_save
+ True
+
+ clicked
+ on_button_save_clicked
+ Fri, 16 Jun 2000 18:23:46 GMT
+
+ Save
+
+ 0
+ False
+ False
+
+
+
+
+ GtkButton
+ button_load
+ True
+
+ clicked
+ on_button_load_clicked
+ Fri, 16 Jun 2000 18:23:56 GMT
+
+ Load
+
+ 0
+ False
+ False
+
+
+
+
+ GtkEntry
+ entry_option
+ True
+
+ activate
+ on_button_run_clicked
+ Sun, 16 Apr 2000 09:42:46 GMT
+
+ True
+ True
+ 0
+
+
+ 0
+ True
+ True
+
+
+
+
+
+
+
+ GtkFileSelection
+ fileselection
+ 10
+
+ destroy
+ on_fileselection_destroy
+ Fri, 16 Jun 2000 18:11:28 GMT
+
+ Select File
+ GTK_WINDOW_TOPLEVEL
+ GTK_WIN_POS_NONE
+ True
+ False
+ True
+ False
+ True
+
+
+ GtkButton
+ FileSel:ok_button
+ filesel_ok
+ True
+ True
+
+ clicked
+ on_filesel_ok_clicked
+ Fri, 16 Jun 2000 18:09:56 GMT
+
+ OK
+
+
+
+ GtkButton
+ FileSel:cancel_button
+ filesel_cancel
+ True
+ True
+
+ clicked
+ on_filesel_cancel_clicked
+ Fri, 16 Jun 2000 18:09:46 GMT
+
+ Cancel
+
+
+
+
diff --git a/libsvm-3.21/svm-toy/qt/Makefile b/libsvm-3.21/svm-toy/qt/Makefile
new file mode 100644
index 0000000..986d3a0
--- /dev/null
+++ b/libsvm-3.21/svm-toy/qt/Makefile
@@ -0,0 +1,18 @@
+CXX ?= g++
+INCLUDE = /usr/include/qt4
+CFLAGS = -Wall -O3 -I$(INCLUDE) -I$(INCLUDE)/QtGui -I$(INCLUDE)/QtCore
+LIB = -lQtGui -lQtCore
+MOC = /usr/bin/moc-qt4
+
+svm-toy: svm-toy.cpp svm-toy.moc ../../svm.o
+ $(CXX) $(CFLAGS) svm-toy.cpp ../../svm.o -o svm-toy $(LIB)
+
+svm-toy.moc: svm-toy.cpp
+ $(MOC) svm-toy.cpp -o svm-toy.moc
+
+../../svm.o: ../../svm.cpp ../../svm.h
+ make -C ../.. svm.o
+
+clean:
+ rm -f *~ svm-toy svm-toy.moc ../../svm.o
+
diff --git a/libsvm-3.21/svm-toy/qt/svm-toy.cpp b/libsvm-3.21/svm-toy/qt/svm-toy.cpp
new file mode 100644
index 0000000..0bbb934
--- /dev/null
+++ b/libsvm-3.21/svm-toy/qt/svm-toy.cpp
@@ -0,0 +1,437 @@
+#include <QtGui>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <list>
+#include "../../svm.h"
+using namespace std;
+
+#define DEFAULT_PARAM "-t 2 -c 100"
+#define XLEN 500
+#define YLEN 500
+
+QRgb colors[] =
+{
+ qRgb(0,0,0),
+ qRgb(0,120,120),
+ qRgb(120,120,0),
+ qRgb(120,0,120),
+ qRgb(0,200,200),
+ qRgb(200,200,0),
+ qRgb(200,0,200)
+};
+
+class SvmToyWindow : public QWidget
+{
+
+Q_OBJECT
+
+public:
+ SvmToyWindow();
+ ~SvmToyWindow();
+protected:
+ virtual void mousePressEvent( QMouseEvent* );
+ virtual void paintEvent( QPaintEvent* );
+
+private:
+ QPixmap buffer;
+ QPixmap icon1;
+ QPixmap icon2;
+ QPixmap icon3;
+ QPushButton button_change_icon;
+ QPushButton button_run;
+ QPushButton button_clear;
+ QPushButton button_save;
+ QPushButton button_load;
+ QLineEdit input_line;
+ QPainter buffer_painter;
+ struct point {
+ double x, y;
+ signed char value;
+ };
+ list<point> point_list;
+ int current_value;
+ const QPixmap& choose_icon(int v)
+ {
+ if(v==1) return icon1;
+ else if(v==2) return icon2;
+ else return icon3;
+ }
+ void clear_all()
+ {
+ point_list.clear();
+ buffer.fill(Qt::black);
+ repaint();
+ }
+ void draw_point(const point& p)
+ {
+ const QPixmap& icon = choose_icon(p.value);
+ buffer_painter.drawPixmap((int)(p.x*XLEN),(int)(p.y*YLEN),icon);
+ repaint();
+ }
+ void draw_all_points()
+ {
+ for(list<point>::iterator p = point_list.begin(); p != point_list.end();p++)
+ draw_point(*p);
+ }
+private slots:
+ void button_change_icon_clicked()
+ {
+ ++current_value;
+ if(current_value > 3) current_value = 1;
+ button_change_icon.setIcon(choose_icon(current_value));
+ }
+ void button_run_clicked()
+ {
+ // guard
+ if(point_list.empty()) return;
+
+ svm_parameter param;
+ int i,j;
+
+ // default values
+ param.svm_type = C_SVC;
+ param.kernel_type = RBF;
+ param.degree = 3;
+ param.gamma = 0;
+ param.coef0 = 0;
+ param.nu = 0.5;
+ param.cache_size = 100;
+ param.C = 1;
+ param.eps = 1e-3;
+ param.p = 0.1;
+ param.shrinking = 1;
+ param.probability = 0;
+ param.nr_weight = 0;
+ param.weight_label = NULL;
+ param.weight = NULL;
+
+ // parse options
+ QByteArray text = input_line.text().toAscii(); // keep the byte array alive while p is in use
+ const char *p = text.constData();
+
+ while (1) {
+ while (*p && *p != '-')
+ p++;
+
+ if (*p == '\0')
+ break;
+
+ p++;
+ switch (*p++) {
+ case 's':
+ param.svm_type = atoi(p);
+ break;
+ case 't':
+ param.kernel_type = atoi(p);
+ break;
+ case 'd':
+ param.degree = atoi(p);
+ break;
+ case 'g':
+ param.gamma = atof(p);
+ break;
+ case 'r':
+ param.coef0 = atof(p);
+ break;
+ case 'n':
+ param.nu = atof(p);
+ break;
+ case 'm':
+ param.cache_size = atof(p);
+ break;
+ case 'c':
+ param.C = atof(p);
+ break;
+ case 'e':
+ param.eps = atof(p);
+ break;
+ case 'p':
+ param.p = atof(p);
+ break;
+ case 'h':
+ param.shrinking = atoi(p);
+ break;
+ case 'b':
+ param.probability = atoi(p);
+ break;
+ case 'w':
+ ++param.nr_weight;
+ param.weight_label = (int *)realloc(param.weight_label,sizeof(int)*param.nr_weight);
+ param.weight = (double *)realloc(param.weight,sizeof(double)*param.nr_weight);
+ param.weight_label[param.nr_weight-1] = atoi(p);
+ while(*p && !isspace(*p)) ++p;
+ param.weight[param.nr_weight-1] = atof(p);
+ break;
+ }
+ }
+
+ // build problem
+ svm_problem prob;
+
+ prob.l = point_list.size();
+ prob.y = new double[prob.l];
+
+ if(param.kernel_type == PRECOMPUTED)
+ {
+ }
+ else if(param.svm_type == EPSILON_SVR ||
+ param.svm_type == NU_SVR)
+ {
+ if(param.gamma == 0) param.gamma = 1;
+ svm_node *x_space = new svm_node[2 * prob.l];
+ prob.x = new svm_node *[prob.l];
+
+ i = 0;
+ for (list<point>::iterator q = point_list.begin(); q != point_list.end(); q++, i++)
+ {
+ x_space[2 * i].index = 1;
+ x_space[2 * i].value = q->x;
+ x_space[2 * i + 1].index = -1;
+ prob.x[i] = &x_space[2 * i];
+ prob.y[i] = q->y;
+ }
+
+ // build model & classify
+ svm_model *model = svm_train(&prob, &param);
+ svm_node x[2];
+ x[0].index = 1;
+ x[1].index = -1;
+ int *j = new int[XLEN];
+
+ for (i = 0; i < XLEN; i++)
+ {
+ x[0].value = (double) i / XLEN;
+ j[i] = (int)(YLEN*svm_predict(model, x));
+ }
+
+ buffer_painter.setPen(colors[0]);
+ buffer_painter.drawLine(0,0,0,YLEN-1);
+
+ int p = (int)(param.p * YLEN);
+ for(i = 1; i < XLEN; i++)
+ {
+ buffer_painter.setPen(colors[0]);
+ buffer_painter.drawLine(i,0,i,YLEN-1);
+
+ buffer_painter.setPen(colors[5]);
+ buffer_painter.drawLine(i-1,j[i-1],i,j[i]);
+
+ if(param.svm_type == EPSILON_SVR)
+ {
+ buffer_painter.setPen(colors[2]);
+ buffer_painter.drawLine(i-1,j[i-1]+p,i,j[i]+p);
+
+ buffer_painter.setPen(colors[2]);
+ buffer_painter.drawLine(i-1,j[i-1]-p,i,j[i]-p);
+ }
+ }
+
+ svm_free_and_destroy_model(&model);
+ delete[] j;
+ delete[] x_space;
+ delete[] prob.x;
+ delete[] prob.y;
+ }
+ else
+ {
+ if(param.gamma == 0) param.gamma = 0.5;
+ svm_node *x_space = new svm_node[3 * prob.l];
+ prob.x = new svm_node *[prob.l];
+
+ i = 0;
+ for (list<point>::iterator q = point_list.begin(); q != point_list.end(); q++, i++)
+ {
+ x_space[3 * i].index = 1;
+ x_space[3 * i].value = q->x;
+ x_space[3 * i + 1].index = 2;
+ x_space[3 * i + 1].value = q->y;
+ x_space[3 * i + 2].index = -1;
+ prob.x[i] = &x_space[3 * i];
+ prob.y[i] = q->value;
+ }
+
+ // build model & classify
+ svm_model *model = svm_train(&prob, &param);
+ svm_node x[3];
+ x[0].index = 1;
+ x[1].index = 2;
+ x[2].index = -1;
+
+ for (i = 0; i < XLEN; i++)
+ for (j = 0; j < YLEN ; j++) {
+ x[0].value = (double) i / XLEN;
+ x[1].value = (double) j / YLEN;
+ double d = svm_predict(model, x);
+ if (param.svm_type == ONE_CLASS && d<0) d=2;
+ buffer_painter.setPen(colors[(int)d]);
+ buffer_painter.drawPoint(i,j);
+ }
+
+ svm_free_and_destroy_model(&model);
+ delete[] x_space;
+ delete[] prob.x;
+ delete[] prob.y;
+ }
+ free(param.weight_label);
+ free(param.weight);
+ draw_all_points();
+ }
+ void button_clear_clicked()
+ {
+ clear_all();
+ }
+ void button_save_clicked()
+ {
+ QString filename = QFileDialog::getSaveFileName();
+ if(!filename.isNull())
+ {
+ FILE *fp = fopen(filename.toAscii().constData(),"w");
+
+ QByteArray text = input_line.text().toAscii(); // keep the byte array alive while p is in use
+ const char *p = text.constData();
+ const char* svm_type_str = strstr(p, "-s ");
+ int svm_type = C_SVC;
+ if(svm_type_str != NULL)
+ sscanf(svm_type_str, "-s %d", &svm_type);
+
+ if(fp)
+ {
+ if(svm_type == EPSILON_SVR || svm_type == NU_SVR)
+ {
+ for(list<point>::iterator p = point_list.begin(); p != point_list.end();p++)
+ fprintf(fp,"%f 1:%f\n", p->y, p->x);
+ }
+ else
+ {
+ for(list<point>::iterator p = point_list.begin(); p != point_list.end();p++)
+ fprintf(fp,"%d 1:%f 2:%f\n", p->value, p->x, p->y);
+ }
+ fclose(fp);
+ }
+ }
+ }
+ void button_load_clicked()
+ {
+ QString filename = QFileDialog::getOpenFileName();
+ if(!filename.isNull())
+ {
+ FILE *fp = fopen(filename.toAscii().constData(),"r");
+ if(fp)
+ {
+ clear_all();
+ char buf[4096];
+ while(fgets(buf,sizeof(buf),fp))
+ {
+ int v;
+ double x,y;
+ if(sscanf(buf,"%d%*d:%lf%*d:%lf",&v,&x,&y)==3)
+ {
+ point p = {x,y,v};
+ point_list.push_back(p);
+ }
+ else if(sscanf(buf,"%lf%*d:%lf",&y,&x)==2)
+ {
+ point p = {x,y,current_value};
+ point_list.push_back(p);
+ }
+ else
+ break;
+ }
+ fclose(fp);
+ draw_all_points();
+ }
+ }
+
+ }
+};
+
+#include "svm-toy.moc"
+
+SvmToyWindow::SvmToyWindow()
+:button_change_icon(this)
+,button_run("Run",this)
+,button_clear("Clear",this)
+,button_save("Save",this)
+,button_load("Load",this)
+,input_line(this)
+,current_value(1)
+{
+ buffer = QPixmap(XLEN,YLEN);
+ buffer.fill(Qt::black);
+
+ buffer_painter.begin(&buffer);
+
+ QObject::connect(&button_change_icon, SIGNAL(clicked()), this,
+ SLOT(button_change_icon_clicked()));
+ QObject::connect(&button_run, SIGNAL(clicked()), this,
+ SLOT(button_run_clicked()));
+ QObject::connect(&button_clear, SIGNAL(clicked()), this,
+ SLOT(button_clear_clicked()));
+ QObject::connect(&button_save, SIGNAL(clicked()), this,
+ SLOT(button_save_clicked()));
+ QObject::connect(&button_load, SIGNAL(clicked()), this,
+ SLOT(button_load_clicked()));
+ QObject::connect(&input_line, SIGNAL(returnPressed()), this,
+ SLOT(button_run_clicked()));
+
+ // don't blank the window before repainting
+ setAttribute(Qt::WA_NoBackground);
+
+ icon1 = QPixmap(4,4);
+ icon2 = QPixmap(4,4);
+ icon3 = QPixmap(4,4);
+
+
+ QPainter painter;
+ painter.begin(&icon1);
+ painter.fillRect(0,0,4,4,QBrush(colors[4]));
+ painter.end();
+
+ painter.begin(&icon2);
+ painter.fillRect(0,0,4,4,QBrush(colors[5]));
+ painter.end();
+
+ painter.begin(&icon3);
+ painter.fillRect(0,0,4,4,QBrush(colors[6]));
+ painter.end();
+
+ button_change_icon.setGeometry( 0, YLEN, 50, 25 );
+ button_run.setGeometry( 50, YLEN, 50, 25 );
+ button_clear.setGeometry( 100, YLEN, 50, 25 );
+ button_save.setGeometry( 150, YLEN, 50, 25);
+ button_load.setGeometry( 200, YLEN, 50, 25);
+ input_line.setGeometry( 250, YLEN, 250, 25);
+
+ input_line.setText(DEFAULT_PARAM);
+ button_change_icon.setIcon(icon1);
+}
+
+SvmToyWindow::~SvmToyWindow()
+{
+ buffer_painter.end();
+}
+
+void SvmToyWindow::mousePressEvent( QMouseEvent* event )
+{
+ point p = {(double)event->x()/XLEN, (double)event->y()/YLEN, current_value};
+ point_list.push_back(p);
+ draw_point(p);
+}
+
+void SvmToyWindow::paintEvent( QPaintEvent* )
+{
+ // copy the image from the buffer pixmap to the window
+ QPainter p(this);
+ p.drawPixmap(0, 0, buffer);
+}
+
+int main( int argc, char* argv[] )
+{
+ QApplication myapp( argc, argv );
+
+ SvmToyWindow* mywidget = new SvmToyWindow();
+ mywidget->setGeometry( 100, 100, XLEN, YLEN+25 );
+
+ mywidget->show();
+ return myapp.exec();
+}
diff --git a/libsvm-3.21/svm-toy/windows/svm-toy.cpp b/libsvm-3.21/svm-toy/windows/svm-toy.cpp
new file mode 100644
index 0000000..b1faafb
--- /dev/null
+++ b/libsvm-3.21/svm-toy/windows/svm-toy.cpp
@@ -0,0 +1,482 @@
+#include <windows.h>
+#include <windowsx.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <ctype.h>
+#include <string.h>
+#include <list>
+#include "../../svm.h"
+using namespace std;
+
+#define DEFAULT_PARAM "-t 2 -c 100"
+#define XLEN 500
+#define YLEN 500
+#define DrawLine(dc,x1,y1,x2,y2,c) \
+ do { \
+ HPEN hpen = CreatePen(PS_SOLID,0,c); \
+ HPEN horig = SelectPen(dc,hpen); \
+ MoveToEx(dc,x1,y1,NULL); \
+ LineTo(dc,x2,y2); \
+ SelectPen(dc,horig); \
+ DeletePen(hpen); \
+ } while(0)
+
+COLORREF colors[] =
+{
+ RGB(0,0,0),
+ RGB(0,120,120),
+ RGB(120,120,0),
+ RGB(120,0,120),
+ RGB(0,200,200),
+ RGB(200,200,0),
+ RGB(200,0,200)
+};
+
+HWND main_window;
+HBITMAP buffer;
+HDC window_dc;
+HDC buffer_dc;
+HBRUSH brush1, brush2, brush3;
+HWND edit;
+
+enum {
+ ID_BUTTON_CHANGE, ID_BUTTON_RUN, ID_BUTTON_CLEAR,
+ ID_BUTTON_LOAD, ID_BUTTON_SAVE, ID_EDIT
+};
+
+struct point {
+ double x, y;
+ signed char value;
+};
+
+list<point> point_list;
+int current_value = 1;
+
+LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
+
+int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
+ PSTR szCmdLine, int iCmdShow)
+{
+ static char szAppName[] = "SvmToy";
+ MSG msg;
+ WNDCLASSEX wndclass;
+
+ wndclass.cbSize = sizeof(wndclass);
+ wndclass.style = CS_HREDRAW | CS_VREDRAW;
+ wndclass.lpfnWndProc = WndProc;
+ wndclass.cbClsExtra = 0;
+ wndclass.cbWndExtra = 0;
+ wndclass.hInstance = hInstance;
+ wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
+ wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
+ wndclass.hbrBackground = (HBRUSH) GetStockObject(BLACK_BRUSH);
+ wndclass.lpszMenuName = NULL;
+ wndclass.lpszClassName = szAppName;
+ wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
+
+ RegisterClassEx(&wndclass);
+
+ main_window = CreateWindow(szAppName, // window class name
+ "SVM Toy", // window caption
+ WS_OVERLAPPEDWINDOW,// window style
+ CW_USEDEFAULT, // initial x position
+ CW_USEDEFAULT, // initial y position
+ XLEN, // initial x size
+ YLEN+52, // initial y size
+ NULL, // parent window handle
+ NULL, // window menu handle
+ hInstance, // program instance handle
+ NULL); // creation parameters
+
+ ShowWindow(main_window, iCmdShow);
+ UpdateWindow(main_window);
+
+ CreateWindow("button", "Change", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
+ 0, YLEN, 50, 25, main_window, (HMENU) ID_BUTTON_CHANGE, hInstance, NULL);
+ CreateWindow("button", "Run", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
+ 50, YLEN, 50, 25, main_window, (HMENU) ID_BUTTON_RUN, hInstance, NULL);
+ CreateWindow("button", "Clear", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
+ 100, YLEN, 50, 25, main_window, (HMENU) ID_BUTTON_CLEAR, hInstance, NULL);
+ CreateWindow("button", "Save", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
+ 150, YLEN, 50, 25, main_window, (HMENU) ID_BUTTON_SAVE, hInstance, NULL);
+ CreateWindow("button", "Load", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
+ 200, YLEN, 50, 25, main_window, (HMENU) ID_BUTTON_LOAD, hInstance, NULL);
+
+ edit = CreateWindow("edit", NULL, WS_CHILD | WS_VISIBLE,
+ 250, YLEN, 250, 25, main_window, (HMENU) ID_EDIT, hInstance, NULL);
+
+ Edit_SetText(edit,DEFAULT_PARAM);
+
+ brush1 = CreateSolidBrush(colors[4]);
+ brush2 = CreateSolidBrush(colors[5]);
+ brush3 = CreateSolidBrush(colors[6]);
+
+ window_dc = GetDC(main_window);
+ buffer = CreateCompatibleBitmap(window_dc, XLEN, YLEN);
+ buffer_dc = CreateCompatibleDC(window_dc);
+ SelectObject(buffer_dc, buffer);
+ PatBlt(buffer_dc, 0, 0, XLEN, YLEN, BLACKNESS);
+
+ while (GetMessage(&msg, NULL, 0, 0)) {
+ TranslateMessage(&msg);
+ DispatchMessage(&msg);
+ }
+ return msg.wParam;
+}
+
+int getfilename( HWND hWnd , char *filename, int len, int save)
+{
+ OPENFILENAME OpenFileName;
+ memset(&OpenFileName,0,sizeof(OpenFileName));
+ filename[0]='\0';
+
+ OpenFileName.lStructSize = sizeof(OPENFILENAME);
+ OpenFileName.hwndOwner = hWnd;
+ OpenFileName.lpstrFile = filename;
+ OpenFileName.nMaxFile = len;
+ OpenFileName.Flags = 0;
+
+ return save?GetSaveFileName(&OpenFileName):GetOpenFileName(&OpenFileName);
+}
+
+void clear_all()
+{
+ point_list.clear();
+ PatBlt(buffer_dc, 0, 0, XLEN, YLEN, BLACKNESS);
+ InvalidateRect(main_window, 0, 0);
+}
+
+HBRUSH choose_brush(int v)
+{
+ if(v==1) return brush1;
+ else if(v==2) return brush2;
+ else return brush3;
+}
+
+void draw_point(const point & p)
+{
+ RECT rect;
+ rect.left = int(p.x*XLEN);
+ rect.top = int(p.y*YLEN);
+ rect.right = int(p.x*XLEN) + 3;
+ rect.bottom = int(p.y*YLEN) + 3;
+ FillRect(window_dc, &rect, choose_brush(p.value));
+ FillRect(buffer_dc, &rect, choose_brush(p.value));
+}
+
+void draw_all_points()
+{
+ for(list<point>::iterator p = point_list.begin(); p != point_list.end(); p++)
+ draw_point(*p);
+}
+
+void button_run_clicked()
+{
+ // guard
+ if(point_list.empty()) return;
+
+ svm_parameter param;
+ int i,j;
+
+ // default values
+ param.svm_type = C_SVC;
+ param.kernel_type = RBF;
+ param.degree = 3;
+ param.gamma = 0;
+ param.coef0 = 0;
+ param.nu = 0.5;
+ param.cache_size = 100;
+ param.C = 1;
+ param.eps = 1e-3;
+ param.p = 0.1;
+ param.shrinking = 1;
+ param.probability = 0;
+ param.nr_weight = 0;
+ param.weight_label = NULL;
+ param.weight = NULL;
+
+ // parse options
+ char str[1024];
+ Edit_GetLine(edit, 0, str, sizeof(str));
+ const char *p = str;
+
+ while (1) {
+ while (*p && *p != '-')
+ p++;
+
+ if (*p == '\0')
+ break;
+
+ p++;
+ switch (*p++) {
+ case 's':
+ param.svm_type = atoi(p);
+ break;
+ case 't':
+ param.kernel_type = atoi(p);
+ break;
+ case 'd':
+ param.degree = atoi(p);
+ break;
+ case 'g':
+ param.gamma = atof(p);
+ break;
+ case 'r':
+ param.coef0 = atof(p);
+ break;
+ case 'n':
+ param.nu = atof(p);
+ break;
+ case 'm':
+ param.cache_size = atof(p);
+ break;
+ case 'c':
+ param.C = atof(p);
+ break;
+ case 'e':
+ param.eps = atof(p);
+ break;
+ case 'p':
+ param.p = atof(p);
+ break;
+ case 'h':
+ param.shrinking = atoi(p);
+ break;
+ case 'b':
+ param.probability = atoi(p);
+ break;
+ case 'w':
+ ++param.nr_weight;
+ param.weight_label = (int *)realloc(param.weight_label,sizeof(int)*param.nr_weight);
+ param.weight = (double *)realloc(param.weight,sizeof(double)*param.nr_weight);
+ param.weight_label[param.nr_weight-1] = atoi(p);
+ while(*p && !isspace(*p)) ++p;
+ param.weight[param.nr_weight-1] = atof(p);
+ break;
+ }
+ }
+
+ // build problem
+ svm_problem prob;
+
+ prob.l = point_list.size();
+ prob.y = new double[prob.l];
+
+ if(param.kernel_type == PRECOMPUTED)
+ {
+ }
+ else if(param.svm_type == EPSILON_SVR ||
+ param.svm_type == NU_SVR)
+ {
+ if(param.gamma == 0) param.gamma = 1;
+ svm_node *x_space = new svm_node[2 * prob.l];
+ prob.x = new svm_node *[prob.l];
+
+ i = 0;
+ for (list<point>::iterator q = point_list.begin(); q != point_list.end(); q++, i++)
+ {
+ x_space[2 * i].index = 1;
+ x_space[2 * i].value = q->x;
+ x_space[2 * i + 1].index = -1;
+ prob.x[i] = &x_space[2 * i];
+ prob.y[i] = q->y;
+ }
+
+ // build model & classify
+ svm_model *model = svm_train(&prob, &param);
+ svm_node x[2];
+ x[0].index = 1;
+ x[1].index = -1;
+ int *j = new int[XLEN];
+
+ for (i = 0; i < XLEN; i++)
+ {
+ x[0].value = (double) i / XLEN;
+ j[i] = (int)(YLEN*svm_predict(model, x));
+ }
+
+ DrawLine(buffer_dc,0,0,0,YLEN,colors[0]);
+ DrawLine(window_dc,0,0,0,YLEN,colors[0]);
+
+ int p = (int)(param.p * YLEN);
+ for(int i=1; i < XLEN; i++)
+ {
+ DrawLine(buffer_dc,i,0,i,YLEN,colors[0]);
+ DrawLine(window_dc,i,0,i,YLEN,colors[0]);
+
+ DrawLine(buffer_dc,i-1,j[i-1],i,j[i],colors[5]);
+ DrawLine(window_dc,i-1,j[i-1],i,j[i],colors[5]);
+
+ if(param.svm_type == EPSILON_SVR)
+ {
+ DrawLine(buffer_dc,i-1,j[i-1]+p,i,j[i]+p,colors[2]);
+ DrawLine(window_dc,i-1,j[i-1]+p,i,j[i]+p,colors[2]);
+
+ DrawLine(buffer_dc,i-1,j[i-1]-p,i,j[i]-p,colors[2]);
+ DrawLine(window_dc,i-1,j[i-1]-p,i,j[i]-p,colors[2]);
+ }
+ }
+
+ svm_free_and_destroy_model(&model);
+ delete[] j;
+ delete[] x_space;
+ delete[] prob.x;
+ delete[] prob.y;
+ }
+ else
+ {
+ if(param.gamma == 0) param.gamma = 0.5;
+ svm_node *x_space = new svm_node[3 * prob.l];
+ prob.x = new svm_node *[prob.l];
+
+ i = 0;
+ for (list<point>::iterator q = point_list.begin(); q != point_list.end(); q++, i++)
+ {
+ x_space[3 * i].index = 1;
+ x_space[3 * i].value = q->x;
+ x_space[3 * i + 1].index = 2;
+ x_space[3 * i + 1].value = q->y;
+ x_space[3 * i + 2].index = -1;
+ prob.x[i] = &x_space[3 * i];
+ prob.y[i] = q->value;
+ }
+
+ // build model & classify
+ svm_model *model = svm_train(&prob, &param);
+ svm_node x[3];
+ x[0].index = 1;
+ x[1].index = 2;
+ x[2].index = -1;
+
+ for (i = 0; i < XLEN; i++)
+ for (j = 0; j < YLEN; j++) {
+ x[0].value = (double) i / XLEN;
+ x[1].value = (double) j / YLEN;
+ double d = svm_predict(model, x);
+ if (param.svm_type == ONE_CLASS && d<0) d=2;
+ SetPixel(window_dc, i, j, colors[(int)d]);
+ SetPixel(buffer_dc, i, j, colors[(int)d]);
+ }
+
+ svm_free_and_destroy_model(&model);
+ delete[] x_space;
+ delete[] prob.x;
+ delete[] prob.y;
+ }
+ free(param.weight_label);
+ free(param.weight);
+ draw_all_points();
+}
+
+LRESULT CALLBACK WndProc(HWND hwnd, UINT iMsg, WPARAM wParam, LPARAM lParam)
+{
+ HDC hdc;
+ PAINTSTRUCT ps;
+
+ switch (iMsg) {
+ case WM_LBUTTONDOWN:
+ {
+ int x = LOWORD(lParam);
+ int y = HIWORD(lParam);
+ point p = {(double)x/XLEN, (double)y/YLEN, current_value};
+ point_list.push_back(p);
+ draw_point(p);
+ }
+ return 0;
+ case WM_PAINT:
+ {
+ hdc = BeginPaint(hwnd, &ps);
+ BitBlt(hdc, 0, 0, XLEN, YLEN, buffer_dc, 0, 0, SRCCOPY);
+ EndPaint(hwnd, &ps);
+ }
+ return 0;
+ case WM_COMMAND:
+ {
+ int id = LOWORD(wParam);
+ switch (id) {
+ case ID_BUTTON_CHANGE:
+ ++current_value;
+ if(current_value > 3) current_value = 1;
+ break;
+ case ID_BUTTON_RUN:
+ button_run_clicked();
+ break;
+ case ID_BUTTON_CLEAR:
+ clear_all();
+ break;
+ case ID_BUTTON_SAVE:
+ {
+ char filename[1024];
+ if(getfilename(hwnd,filename,1024,1))
+ {
+ FILE *fp = fopen(filename,"w");
+
+ char str[1024];
+ Edit_GetLine(edit, 0, str, sizeof(str));
+ const char *p = str;
+ const char* svm_type_str = strstr(p, "-s ");
+ int svm_type = C_SVC;
+ if(svm_type_str != NULL)
+ sscanf(svm_type_str, "-s %d", &svm_type);
+
+ if(fp)
+ {
+ if(svm_type == EPSILON_SVR || svm_type == NU_SVR)
+ {
+ for(list<point>::iterator p = point_list.begin(); p != point_list.end();p++)
+ fprintf(fp,"%f 1:%f\n", p->y, p->x);
+ }
+ else
+ {
+ for(list<point>::iterator p = point_list.begin(); p != point_list.end();p++)
+ fprintf(fp,"%d 1:%f 2:%f\n", p->value, p->x, p->y);
+ }
+ fclose(fp);
+ }
+ }
+ }
+ break;
+ case ID_BUTTON_LOAD:
+ {
+ char filename[1024];
+ if(getfilename(hwnd,filename,1024,0))
+ {
+ FILE *fp = fopen(filename,"r");
+ if(fp)
+ {
+ clear_all();
+ char buf[4096];
+ while(fgets(buf,sizeof(buf),fp))
+ {
+ int v;
+ double x,y;
+ if(sscanf(buf,"%d%*d:%lf%*d:%lf",&v,&x,&y)==3)
+ {
+ point p = {x,y,v};
+ point_list.push_back(p);
+ }
+ else if(sscanf(buf,"%lf%*d:%lf",&y,&x)==2)
+ {
+ point p = {x,y,current_value};
+ point_list.push_back(p);
+ }
+ else
+ break;
+ }
+ fclose(fp);
+ draw_all_points();
+ }
+ }
+ }
+ break;
+ }
+ }
+ return 0;
+ case WM_DESTROY:
+ PostQuitMessage(0);
+ return 0;
+ }
+
+ return DefWindowProc(hwnd, iMsg, wParam, lParam);
+}
diff --git a/libsvm-3.21/svm-train.c b/libsvm-3.21/svm-train.c
new file mode 100644
index 0000000..716815b
--- /dev/null
+++ b/libsvm-3.21/svm-train.c
@@ -0,0 +1,380 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include <errno.h>
+#include "svm.h"
+#define Malloc(type,n) (type *)malloc((n)*sizeof(type))
+
+void print_null(const char *s) {}
+
+void exit_with_help()
+{
+ printf(
+ "Usage: svm-train [options] training_set_file [model_file]\n"
+ "options:\n"
+ "-s svm_type : set type of SVM (default 0)\n"
+ " 0 -- C-SVC (multi-class classification)\n"
+ " 1 -- nu-SVC (multi-class classification)\n"
+ " 2 -- one-class SVM\n"
+ " 3 -- epsilon-SVR (regression)\n"
+ " 4 -- nu-SVR (regression)\n"
+ "-t kernel_type : set type of kernel function (default 2)\n"
+ " 0 -- linear: u'*v\n"
+ " 1 -- polynomial: (gamma*u'*v + coef0)^degree\n"
+ " 2 -- radial basis function: exp(-gamma*|u-v|^2)\n"
+ " 3 -- sigmoid: tanh(gamma*u'*v + coef0)\n"
+ " 4 -- precomputed kernel (kernel values in training_set_file)\n"
+ "-d degree : set degree in kernel function (default 3)\n"
+ "-g gamma : set gamma in kernel function (default 1/num_features)\n"
+ "-r coef0 : set coef0 in kernel function (default 0)\n"
+ "-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)\n"
+ "-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)\n"
+ "-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)\n"
+ "-m cachesize : set cache memory size in MB (default 100)\n"
+ "-e epsilon : set tolerance of termination criterion (default 0.001)\n"
+ "-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)\n"
+ "-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)\n"
+ "-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)\n"
+ "-v n: n-fold cross validation mode\n"
+ "-q : quiet mode (no outputs)\n"
+ );
+ exit(1);
+}
+
+void exit_input_error(int line_num)
+{
+ fprintf(stderr,"Wrong input format at line %d\n", line_num);
+ exit(1);
+}
+
+void parse_command_line(int argc, char **argv, char *input_file_name, char *model_file_name);
+void read_problem(const char *filename);
+void do_cross_validation();
+
+struct svm_parameter param; // set by parse_command_line
+struct svm_problem prob; // set by read_problem
+struct svm_model *model;
+struct svm_node *x_space;
+int cross_validation;
+int nr_fold;
+
+static char *line = NULL;
+static int max_line_len;
+
+static char* readline(FILE *input)
+{
+ int len;
+
+ if(fgets(line,max_line_len,input) == NULL)
+ return NULL;
+
+ while(strrchr(line,'\n') == NULL)
+ {
+ max_line_len *= 2;
+ line = (char *) realloc(line,max_line_len);
+ len = (int) strlen(line);
+ if(fgets(line+len,max_line_len-len,input) == NULL)
+ break;
+ }
+ return line;
+}
+
+int main(int argc, char **argv)
+{
+ char input_file_name[1024];
+ char model_file_name[1024];
+ const char *error_msg;
+
+ parse_command_line(argc, argv, input_file_name, model_file_name);
+ read_problem(input_file_name);
+ error_msg = svm_check_parameter(&prob,&param);
+
+ if(error_msg)
+ {
+ fprintf(stderr,"ERROR: %s\n",error_msg);
+ exit(1);
+ }
+
+ if(cross_validation)
+ {
+ do_cross_validation();
+ }
+ else
+ {
+ model = svm_train(&prob,&param);
+ if(svm_save_model(model_file_name,model))
+ {
+ fprintf(stderr, "can't save model to file %s\n", model_file_name);
+ exit(1);
+ }
+ svm_free_and_destroy_model(&model);
+ }
+ svm_destroy_param(&param);
+ free(prob.y);
+ free(prob.x);
+ free(x_space);
+ free(line);
+
+ return 0;
+}
+
+void do_cross_validation()
+{
+ int i;
+ int total_correct = 0;
+ double total_error = 0;
+ double sumv = 0, sumy = 0, sumvv = 0, sumyy = 0, sumvy = 0;
+ double *target = Malloc(double,prob.l);
+
+ svm_cross_validation(&prob,&param,nr_fold,target);
+ if(param.svm_type == EPSILON_SVR ||
+ param.svm_type == NU_SVR)
+ {
+ for(i=0;i<prob.l;i++)
+ {
+ double y = prob.y[i];
+ double v = target[i];
+ total_error += (v-y)*(v-y);
+ sumv += v;
+ sumy += y;
+ sumvv += v*v;
+ sumyy += y*y;
+ sumvy += v*y;
+ }
+ printf("Cross Validation Mean squared error = %g\n",total_error/prob.l);
+ printf("Cross Validation Squared correlation coefficient = %g\n",
+ ((prob.l*sumvy-sumv*sumy)*(prob.l*sumvy-sumv*sumy))/
+ ((prob.l*sumvv-sumv*sumv)*(prob.l*sumyy-sumy*sumy))
+ );
+ }
+ else
+ {
+ for(i=0;i<prob.l;i++)
+ if(target[i] == prob.y[i])
+ ++total_correct;
+ printf("Cross Validation Accuracy = %g%%\n",100.0*total_correct/prob.l);
+ }
+ free(target);
+}
+
+void parse_command_line(int argc, char **argv, char *input_file_name, char *model_file_name)
+{
+ int i;
+ void (*print_func)(const char*) = NULL; // default printing to stdout
+
+ // default values
+ param.svm_type = C_SVC;
+ param.kernel_type = RBF;
+ param.degree = 3;
+ param.gamma = 0; // 1/num_features
+ param.coef0 = 0;
+ param.nu = 0.5;
+ param.cache_size = 100;
+ param.C = 1;
+ param.eps = 1e-3;
+ param.p = 0.1;
+ param.shrinking = 1;
+ param.probability = 0;
+ param.nr_weight = 0;
+ param.weight_label = NULL;
+ param.weight = NULL;
+ cross_validation = 0;
+
+ // parse options
+ for(i=1;i<argc;i++)
+ {
+ if(argv[i][0] != '-') break;
+ if(++i>=argc)
+ exit_with_help();
+ switch(argv[i-1][1])
+ {
+ case 's':
+ param.svm_type = atoi(argv[i]);
+ break;
+ case 't':
+ param.kernel_type = atoi(argv[i]);
+ break;
+ case 'd':
+ param.degree = atoi(argv[i]);
+ break;
+ case 'g':
+ param.gamma = atof(argv[i]);
+ break;
+ case 'r':
+ param.coef0 = atof(argv[i]);
+ break;
+ case 'n':
+ param.nu = atof(argv[i]);
+ break;
+ case 'm':
+ param.cache_size = atof(argv[i]);
+ break;
+ case 'c':
+ param.C = atof(argv[i]);
+ break;
+ case 'e':
+ param.eps = atof(argv[i]);
+ break;
+ case 'p':
+ param.p = atof(argv[i]);
+ break;
+ case 'h':
+ param.shrinking = atoi(argv[i]);
+ break;
+ case 'b':
+ param.probability = atoi(argv[i]);
+ break;
+ case 'q':
+ print_func = &print_null;
+ i--;
+ break;
+ case 'v':
+ cross_validation = 1;
+ nr_fold = atoi(argv[i]);
+ if(nr_fold < 2)
+ {
+ fprintf(stderr,"n-fold cross validation: n must >= 2\n");
+ exit_with_help();
+ }
+ break;
+ case 'w':
+ ++param.nr_weight;
+ param.weight_label = (int *)realloc(param.weight_label,sizeof(int)*param.nr_weight);
+ param.weight = (double *)realloc(param.weight,sizeof(double)*param.nr_weight);
+ param.weight_label[param.nr_weight-1] = atoi(&argv[i-1][2]);
+ param.weight[param.nr_weight-1] = atof(argv[i]);
+ break;
+ default:
+ fprintf(stderr,"Unknown option: -%c\n", argv[i-1][1]);
+ exit_with_help();
+ }
+ }
+
+ svm_set_print_string_function(print_func);
+
+ // determine filenames
+
+ if(i>=argc)
+ exit_with_help();
+
+ strcpy(input_file_name, argv[i]);
+
+	if(i<argc-1)
+		strcpy(model_file_name,argv[i+1]);
+	else
+	{
+		char *p = strrchr(argv[i],'/');
+		if(p==NULL)
+			p = argv[i];
+		else
+			++p;
+		sprintf(model_file_name,"%s.model",p);
+	}
+}
+
+// read in a problem (in svmlight format)
+
+void read_problem(const char *filename)
+{
+	int max_index, inst_max_index, i;
+	size_t elements, j;
+	FILE *fp = fopen(filename,"r");
+	char *endptr;
+	char *idx, *val, *label;
+
+	if(fp == NULL)
+	{
+		fprintf(stderr,"can't open input file %s\n",filename);
+		exit(1);
+	}
+
+	prob.l = 0;
+	elements = 0;
+
+	max_line_len = 1024;
+	line = Malloc(char,max_line_len);
+	while(readline(fp)!=NULL)
+	{
+		char *p = strtok(line," \t"); // label
+
+		// features
+		while(1)
+		{
+			p = strtok(NULL," \t");
+			if(p == NULL || *p == '\n') // check '\n' as ' ' may be after the last feature
+				break;
+			++elements;
+		}
+		++elements;
+		++prob.l;
+	}
+	rewind(fp);
+
+	prob.y = Malloc(double,prob.l);
+	prob.x = Malloc(struct svm_node *,prob.l);
+	x_space = Malloc(struct svm_node,elements);
+
+	max_index = 0;
+	j=0;
+	for(i=0;i<prob.l;i++)
+	{
+		inst_max_index = -1; // strtol gives 0 if wrong format, and precomputed kernel has <index> start from 0
+ readline(fp);
+ prob.x[i] = &x_space[j];
+ label = strtok(line," \t\n");
+ if(label == NULL) // empty line
+ exit_input_error(i+1);
+
+ prob.y[i] = strtod(label,&endptr);
+ if(endptr == label || *endptr != '\0')
+ exit_input_error(i+1);
+
+ while(1)
+ {
+ idx = strtok(NULL,":");
+ val = strtok(NULL," \t");
+
+ if(val == NULL)
+ break;
+
+ errno = 0;
+ x_space[j].index = (int) strtol(idx,&endptr,10);
+ if(endptr == idx || errno != 0 || *endptr != '\0' || x_space[j].index <= inst_max_index)
+ exit_input_error(i+1);
+ else
+ inst_max_index = x_space[j].index;
+
+ errno = 0;
+ x_space[j].value = strtod(val,&endptr);
+ if(endptr == val || errno != 0 || (*endptr != '\0' && !isspace(*endptr)))
+ exit_input_error(i+1);
+
+ ++j;
+ }
+
+ if(inst_max_index > max_index)
+ max_index = inst_max_index;
+ x_space[j++].index = -1;
+ }
+
+ if(param.gamma == 0 && max_index > 0)
+ param.gamma = 1.0/max_index;
+
+ if(param.kernel_type == PRECOMPUTED)
+		for(i=0;i<prob.l;i++)
+		{
+			if (prob.x[i][0].index != 0)
+			{
+				fprintf(stderr,"Wrong input format: first column must be 0:sample_serial_number\n");
+				exit(1);
+			}
+			if ((int)prob.x[i][0].value <= 0 || (int)prob.x[i][0].value > max_index)
+ {
+ fprintf(stderr,"Wrong input format: sample_serial_number out of range\n");
+ exit(1);
+ }
+ }
+
+ fclose(fp);
+}
diff --git a/libsvm-3.21/svm.cpp b/libsvm-3.21/svm.cpp
new file mode 100644
index 0000000..f31a5a0
--- /dev/null
+++ b/libsvm-3.21/svm.cpp
@@ -0,0 +1,3170 @@
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <ctype.h>
+#include <float.h>
+#include <string.h>
+#include <stdarg.h>
+#include <limits.h>
+#include <locale.h>
+#include "svm.h"
+int libsvm_version = LIBSVM_VERSION;
+typedef float Qfloat;
+typedef signed char schar;
+#ifndef min
+template <class T> static inline T min(T x,T y) { return (x<y)?x:y; }
+#endif
+#ifndef max
+template <class T> static inline T max(T x,T y) { return (x>y)?x:y; }
+#endif
+template <class T> static inline void swap(T& x, T& y) { T t=x; x=y; y=t; }
+template <class T, class S> static inline void clone(T*& dst, S* src, int n)
+{
+ dst = new T[n];
+ memcpy((void *)dst,(void *)src,sizeof(T)*n);
+}
+static inline double powi(double base, int times)
+{
+ double tmp = base, ret = 1.0;
+
+ for(int t=times; t>0; t/=2)
+ {
+ if(t%2==1) ret*=tmp;
+ tmp = tmp * tmp;
+ }
+ return ret;
+}
+#define INF HUGE_VAL
+#define TAU 1e-12
+#define Malloc(type,n) (type *)malloc((n)*sizeof(type))
+
+static void print_string_stdout(const char *s)
+{
+ fputs(s,stdout);
+ fflush(stdout);
+}
+static void (*svm_print_string) (const char *) = &print_string_stdout;
+#if 1
+static void info(const char *fmt,...)
+{
+ char buf[BUFSIZ];
+ va_list ap;
+ va_start(ap,fmt);
+ vsprintf(buf,fmt,ap);
+ va_end(ap);
+ (*svm_print_string)(buf);
+}
+#else
+static void info(const char *fmt,...) {}
+#endif
+
+//
+// Kernel Cache
+//
+// l is the number of total data items
+// size is the cache size limit in bytes
+//
+class Cache
+{
+public:
+ Cache(int l,long int size);
+ ~Cache();
+
+ // request data [0,len)
+ // return some position p where [p,len) need to be filled
+ // (p >= len if nothing needs to be filled)
+ int get_data(const int index, Qfloat **data, int len);
+ void swap_index(int i, int j);
+private:
+ int l;
+ long int size;
+ struct head_t
+ {
+ head_t *prev, *next; // a circular list
+ Qfloat *data;
+ int len; // data[0,len) is cached in this entry
+ };
+
+ head_t *head;
+ head_t lru_head;
+ void lru_delete(head_t *h);
+ void lru_insert(head_t *h);
+};
+
+Cache::Cache(int l_,long int size_):l(l_),size(size_)
+{
+ head = (head_t *)calloc(l,sizeof(head_t)); // initialized to 0
+ size /= sizeof(Qfloat);
+ size -= l * sizeof(head_t) / sizeof(Qfloat);
+ size = max(size, 2 * (long int) l); // cache must be large enough for two columns
+ lru_head.next = lru_head.prev = &lru_head;
+}
+
+Cache::~Cache()
+{
+ for(head_t *h = lru_head.next; h != &lru_head; h=h->next)
+ free(h->data);
+ free(head);
+}
+
+void Cache::lru_delete(head_t *h)
+{
+ // delete from current location
+ h->prev->next = h->next;
+ h->next->prev = h->prev;
+}
+
+void Cache::lru_insert(head_t *h)
+{
+ // insert to last position
+ h->next = &lru_head;
+ h->prev = lru_head.prev;
+ h->prev->next = h;
+ h->next->prev = h;
+}
+
+int Cache::get_data(const int index, Qfloat **data, int len)
+{
+ head_t *h = &head[index];
+ if(h->len) lru_delete(h);
+ int more = len - h->len;
+
+ if(more > 0)
+ {
+ // free old space
+ while(size < more)
+ {
+ head_t *old = lru_head.next;
+ lru_delete(old);
+ free(old->data);
+ size += old->len;
+ old->data = 0;
+ old->len = 0;
+ }
+
+ // allocate new space
+ h->data = (Qfloat *)realloc(h->data,sizeof(Qfloat)*len);
+ size -= more;
+ swap(h->len,len);
+ }
+
+ lru_insert(h);
+ *data = h->data;
+ return len;
+}
+
+void Cache::swap_index(int i, int j)
+{
+ if(i==j) return;
+
+ if(head[i].len) lru_delete(&head[i]);
+ if(head[j].len) lru_delete(&head[j]);
+ swap(head[i].data,head[j].data);
+ swap(head[i].len,head[j].len);
+ if(head[i].len) lru_insert(&head[i]);
+ if(head[j].len) lru_insert(&head[j]);
+
+ if(i>j) swap(i,j);
+ for(head_t *h = lru_head.next; h!=&lru_head; h=h->next)
+ {
+ if(h->len > i)
+ {
+ if(h->len > j)
+ swap(h->data[i],h->data[j]);
+ else
+ {
+ // give up
+ lru_delete(h);
+ free(h->data);
+ size += h->len;
+ h->data = 0;
+ h->len = 0;
+ }
+ }
+ }
+}
+
+//
+// Kernel evaluation
+//
+// the static method k_function is for doing single kernel evaluation
+// the constructor of Kernel prepares to calculate the l*l kernel matrix
+// the member function get_Q is for getting one column from the Q Matrix
+//
+class QMatrix {
+public:
+ virtual Qfloat *get_Q(int column, int len) const = 0;
+ virtual double *get_QD() const = 0;
+ virtual void swap_index(int i, int j) const = 0;
+ virtual ~QMatrix() {}
+};
+
+class Kernel: public QMatrix {
+public:
+ Kernel(int l, svm_node * const * x, const svm_parameter& param);
+ virtual ~Kernel();
+
+ static double k_function(const svm_node *x, const svm_node *y,
+ const svm_parameter& param);
+ virtual Qfloat *get_Q(int column, int len) const = 0;
+ virtual double *get_QD() const = 0;
+	virtual void swap_index(int i, int j) const	// not so const...
+ {
+ swap(x[i],x[j]);
+ if(x_square) swap(x_square[i],x_square[j]);
+ }
+protected:
+
+ double (Kernel::*kernel_function)(int i, int j) const;
+
+private:
+ const svm_node **x;
+ double *x_square;
+
+ // svm_parameter
+ const int kernel_type;
+ const int degree;
+ const double gamma;
+ const double coef0;
+
+ static double dot(const svm_node *px, const svm_node *py);
+ double kernel_linear(int i, int j) const
+ {
+ return dot(x[i],x[j]);
+ }
+ double kernel_poly(int i, int j) const
+ {
+ return powi(gamma*dot(x[i],x[j])+coef0,degree);
+ }
+ double kernel_rbf(int i, int j) const
+ {
+ return exp(-gamma*(x_square[i]+x_square[j]-2*dot(x[i],x[j])));
+ }
+ double kernel_sigmoid(int i, int j) const
+ {
+ return tanh(gamma*dot(x[i],x[j])+coef0);
+ }
+ double kernel_precomputed(int i, int j) const
+ {
+ return x[i][(int)(x[j][0].value)].value;
+ }
+};
+
+Kernel::Kernel(int l, svm_node * const * x_, const svm_parameter& param)
+:kernel_type(param.kernel_type), degree(param.degree),
+ gamma(param.gamma), coef0(param.coef0)
+{
+ switch(kernel_type)
+ {
+ case LINEAR:
+ kernel_function = &Kernel::kernel_linear;
+ break;
+ case POLY:
+ kernel_function = &Kernel::kernel_poly;
+ break;
+ case RBF:
+ kernel_function = &Kernel::kernel_rbf;
+ break;
+ case SIGMOID:
+ kernel_function = &Kernel::kernel_sigmoid;
+ break;
+ case PRECOMPUTED:
+ kernel_function = &Kernel::kernel_precomputed;
+ break;
+ }
+
+ clone(x,x_,l);
+
+ if(kernel_type == RBF)
+ {
+ x_square = new double[l];
+		for(int i=0;i<l;i++)
+			x_square[i] = dot(x[i],x[i]);
+	}
+	else
+		x_square = 0;
+}
+
+Kernel::~Kernel()
+{
+	delete[] x;
+	delete[] x_square;
+}
+
+double Kernel::dot(const svm_node *px, const svm_node *py)
+{
+	double sum = 0;
+	while(px->index != -1 && py->index != -1)
+ {
+ if(px->index == py->index)
+ {
+ sum += px->value * py->value;
+ ++px;
+ ++py;
+ }
+ else
+ {
+ if(px->index > py->index)
+ ++py;
+ else
+ ++px;
+ }
+ }
+ return sum;
+}
+
+double Kernel::k_function(const svm_node *x, const svm_node *y,
+ const svm_parameter& param)
+{
+ switch(param.kernel_type)
+ {
+ case LINEAR:
+ return dot(x,y);
+ case POLY:
+ return powi(param.gamma*dot(x,y)+param.coef0,param.degree);
+ case RBF:
+ {
+ double sum = 0;
+ while(x->index != -1 && y->index !=-1)
+ {
+ if(x->index == y->index)
+ {
+ double d = x->value - y->value;
+ sum += d*d;
+ ++x;
+ ++y;
+ }
+ else
+ {
+ if(x->index > y->index)
+ {
+ sum += y->value * y->value;
+ ++y;
+ }
+ else
+ {
+ sum += x->value * x->value;
+ ++x;
+ }
+ }
+ }
+
+ while(x->index != -1)
+ {
+ sum += x->value * x->value;
+ ++x;
+ }
+
+ while(y->index != -1)
+ {
+ sum += y->value * y->value;
+ ++y;
+ }
+
+ return exp(-param.gamma*sum);
+ }
+ case SIGMOID:
+ return tanh(param.gamma*dot(x,y)+param.coef0);
+ case PRECOMPUTED: //x: test (validation), y: SV
+ return x[(int)(y->value)].value;
+ default:
+ return 0; // Unreachable
+ }
+}
+
+// An SMO algorithm in Fan et al., JMLR 6(2005), p. 1889--1918
+// Solves:
+//
+// min 0.5(\alpha^T Q \alpha) + p^T \alpha
+//
+// y^T \alpha = \delta
+// y_i = +1 or -1
+// 0 <= alpha_i <= Cp for y_i = 1
+// 0 <= alpha_i <= Cn for y_i = -1
+//
+// Given:
+//
+// Q, p, y, Cp, Cn, and an initial feasible point \alpha
+// l is the size of vectors and matrices
+// eps is the stopping tolerance
+//
+// solution will be put in \alpha, objective value will be put in obj
+//
+class Solver {
+public:
+ Solver() {};
+ virtual ~Solver() {};
+
+ struct SolutionInfo {
+ double obj;
+ double rho;
+ double upper_bound_p;
+ double upper_bound_n;
+ double r; // for Solver_NU
+ };
+
+ void Solve(int l, const QMatrix& Q, const double *p_, const schar *y_,
+ double *alpha_, double Cp, double Cn, double eps,
+ SolutionInfo* si, int shrinking);
+protected:
+ int active_size;
+ schar *y;
+ double *G; // gradient of objective function
+ enum { LOWER_BOUND, UPPER_BOUND, FREE };
+ char *alpha_status; // LOWER_BOUND, UPPER_BOUND, FREE
+ double *alpha;
+ const QMatrix *Q;
+ const double *QD;
+ double eps;
+ double Cp,Cn;
+ double *p;
+ int *active_set;
+ double *G_bar; // gradient, if we treat free variables as 0
+ int l;
+ bool unshrink; // XXX
+
+ double get_C(int i)
+ {
+ return (y[i] > 0)? Cp : Cn;
+ }
+ void update_alpha_status(int i)
+ {
+ if(alpha[i] >= get_C(i))
+ alpha_status[i] = UPPER_BOUND;
+ else if(alpha[i] <= 0)
+ alpha_status[i] = LOWER_BOUND;
+ else alpha_status[i] = FREE;
+ }
+ bool is_upper_bound(int i) { return alpha_status[i] == UPPER_BOUND; }
+ bool is_lower_bound(int i) { return alpha_status[i] == LOWER_BOUND; }
+ bool is_free(int i) { return alpha_status[i] == FREE; }
+ void swap_index(int i, int j);
+ void reconstruct_gradient();
+ virtual int select_working_set(int &i, int &j);
+ virtual double calculate_rho();
+ virtual void do_shrinking();
+private:
+ bool be_shrunk(int i, double Gmax1, double Gmax2);
+};
+
+void Solver::swap_index(int i, int j)
+{
+ Q->swap_index(i,j);
+ swap(y[i],y[j]);
+ swap(G[i],G[j]);
+ swap(alpha_status[i],alpha_status[j]);
+ swap(alpha[i],alpha[j]);
+ swap(p[i],p[j]);
+ swap(active_set[i],active_set[j]);
+ swap(G_bar[i],G_bar[j]);
+}
+
+void Solver::reconstruct_gradient()
+{
+ // reconstruct inactive elements of G from G_bar and free variables
+
+ if(active_size == l) return;
+
+ int i,j;
+ int nr_free = 0;
+
+	for(j=active_size;j<l;j++)
+		if(is_free(active_set[j]))
+			nr_free++;
+
+	if(2*nr_free < active_size)
+		info("\nWARNING: using -h 0 may be faster\n");
+
+	if (nr_free*l > 2*active_size*(l-active_size))
+ {
+		for(i=active_size;i<l;i++)
+		{
+			const Qfloat *Q_i = Q->get_Q(i,active_size);
+			for(j=0;j<active_size;j++)
+				if(is_free(j))
+					G[i] += alpha[j] * Q_i[j];
+		}
+	}
+	else
+	{
+		for(i=0;i<active_size;i++)
+			if(is_free(i))
+			{
+				const Qfloat *Q_i = Q->get_Q(i,l);
+ double alpha_i = alpha[i];
+				for(j=active_size;j<l;j++)
+					G[j] += alpha_i * Q_i[j];
+			}
+	}
+}
+
+void Solver::Solve(int l, const QMatrix& Q, const double *p_, const schar *y_,
+		   double *alpha_, double Cp, double Cn, double eps,
+		   SolutionInfo* si, int shrinking)
+{
+	this->l = l;
+ this->Q = &Q;
+ QD=Q.get_QD();
+ clone(p, p_,l);
+ clone(y, y_,l);
+ clone(alpha,alpha_,l);
+ this->Cp = Cp;
+ this->Cn = Cn;
+ this->eps = eps;
+ unshrink = false;
+
+ // initialize alpha_status
+ {
+ alpha_status = new char[l];
+		for(int i=0;i<l;i++)
+			update_alpha_status(i);
+	}
+
+	// initialize active set (for shrinking)
+	{
+		active_set = new int[l];
+		for(int i=0;i<l;i++)
+			active_set[i] = i;
+		active_size = l;
+	}
+
+	// initialize gradient
+	{
+		G = new double[l];
+		G_bar = new double[l];
+		int i;
+		for(i=0;i<l;i++)
+		{
+			G[i] = p[i];
+			G_bar[i] = 0;
+		}
+		for(i=0;i<l;i++)
+			if(!is_lower_bound(i))
+			{
+				const Qfloat *Q_i = Q.get_Q(i,l);
+				double alpha_i = alpha[i];
+				int j;
+				for(j=0;j<l;j++)
+					G[j] += alpha_i*Q_i[j];
+				if(is_upper_bound(i))
+					for(j=0;j<l;j++)
+						G_bar[j] += get_C(i) * Q_i[j];
+			}
+	}
+
+	// optimization step
+
+	int iter = 0;
+	int max_iter = max(10000000, l>INT_MAX/100 ? INT_MAX : 100*l);
+ int counter = min(l,1000)+1;
+
+ while(iter < max_iter)
+ {
+ // show progress and do shrinking
+
+ if(--counter == 0)
+ {
+ counter = min(l,1000);
+ if(shrinking) do_shrinking();
+ info(".");
+ }
+
+ int i,j;
+ if(select_working_set(i,j)!=0)
+ {
+ // reconstruct the whole gradient
+ reconstruct_gradient();
+ // reset active set size and check
+ active_size = l;
+ info("*");
+ if(select_working_set(i,j)!=0)
+ break;
+ else
+ counter = 1; // do shrinking next iteration
+ }
+
+ ++iter;
+
+ // update alpha[i] and alpha[j], handle bounds carefully
+
+ const Qfloat *Q_i = Q.get_Q(i,active_size);
+ const Qfloat *Q_j = Q.get_Q(j,active_size);
+
+ double C_i = get_C(i);
+ double C_j = get_C(j);
+
+ double old_alpha_i = alpha[i];
+ double old_alpha_j = alpha[j];
+
+ if(y[i]!=y[j])
+ {
+ double quad_coef = QD[i]+QD[j]+2*Q_i[j];
+ if (quad_coef <= 0)
+ quad_coef = TAU;
+ double delta = (-G[i]-G[j])/quad_coef;
+ double diff = alpha[i] - alpha[j];
+ alpha[i] += delta;
+ alpha[j] += delta;
+
+ if(diff > 0)
+ {
+ if(alpha[j] < 0)
+ {
+ alpha[j] = 0;
+ alpha[i] = diff;
+ }
+ }
+ else
+ {
+ if(alpha[i] < 0)
+ {
+ alpha[i] = 0;
+ alpha[j] = -diff;
+ }
+ }
+ if(diff > C_i - C_j)
+ {
+ if(alpha[i] > C_i)
+ {
+ alpha[i] = C_i;
+ alpha[j] = C_i - diff;
+ }
+ }
+ else
+ {
+ if(alpha[j] > C_j)
+ {
+ alpha[j] = C_j;
+ alpha[i] = C_j + diff;
+ }
+ }
+ }
+ else
+ {
+ double quad_coef = QD[i]+QD[j]-2*Q_i[j];
+ if (quad_coef <= 0)
+ quad_coef = TAU;
+ double delta = (G[i]-G[j])/quad_coef;
+ double sum = alpha[i] + alpha[j];
+ alpha[i] -= delta;
+ alpha[j] += delta;
+
+ if(sum > C_i)
+ {
+ if(alpha[i] > C_i)
+ {
+ alpha[i] = C_i;
+ alpha[j] = sum - C_i;
+ }
+ }
+ else
+ {
+ if(alpha[j] < 0)
+ {
+ alpha[j] = 0;
+ alpha[i] = sum;
+ }
+ }
+ if(sum > C_j)
+ {
+ if(alpha[j] > C_j)
+ {
+ alpha[j] = C_j;
+ alpha[i] = sum - C_j;
+ }
+ }
+ else
+ {
+ if(alpha[i] < 0)
+ {
+ alpha[i] = 0;
+ alpha[j] = sum;
+ }
+ }
+ }
+
+ // update G
+
+ double delta_alpha_i = alpha[i] - old_alpha_i;
+ double delta_alpha_j = alpha[j] - old_alpha_j;
+
+		for(int k=0;k<active_size;k++)
+		{
+			G[k] += Q_i[k]*delta_alpha_i + Q_j[k]*delta_alpha_j;
+		}
+
+		// update alpha_status and G_bar
+		{
+			bool ui = is_upper_bound(i);
+			bool uj = is_upper_bound(j);
+			update_alpha_status(i);
+			update_alpha_status(j);
+			int k;
+			if(ui != is_upper_bound(i))
+			{
+				Q_i = Q.get_Q(i,l);
+				if(ui)
+					for(k=0;k<l;k++)
+						G_bar[k] -= C_i * Q_i[k];
+				else
+					for(k=0;k<l;k++)
+						G_bar[k] += C_i * Q_i[k];
+			}
+
+			if(uj != is_upper_bound(j))
+			{
+				Q_j = Q.get_Q(j,l);
+				if(uj)
+					for(k=0;k<l;k++)
+						G_bar[k] -= C_j * Q_j[k];
+				else
+					for(k=0;k<l;k++)
+						G_bar[k] += C_j * Q_j[k];
+			}
+		}
+	}
+
+	if(iter >= max_iter)
+ {
+ if(active_size < l)
+ {
+ // reconstruct the whole gradient to calculate objective value
+ reconstruct_gradient();
+ active_size = l;
+ info("*");
+ }
+ fprintf(stderr,"\nWARNING: reaching max number of iterations\n");
+ }
+
+ // calculate rho
+
+ si->rho = calculate_rho();
+
+ // calculate objective value
+ {
+ double v = 0;
+ int i;
+		for(i=0;i<l;i++)
+			v += alpha[i] * (G[i] + p[i]);
+
+		si->obj = v/2;
+ }
+
+ // put back the solution
+ {
+		for(int i=0;i<l;i++)
+			alpha_[active_set[i]] = alpha[i];
+	}
+
+	si->upper_bound_p = Cp;
+ si->upper_bound_n = Cn;
+
+ info("\noptimization finished, #iter = %d\n",iter);
+
+ delete[] p;
+ delete[] y;
+ delete[] alpha;
+ delete[] alpha_status;
+ delete[] active_set;
+ delete[] G;
+ delete[] G_bar;
+}
+
+// return 1 if already optimal, return 0 otherwise
+int Solver::select_working_set(int &out_i, int &out_j)
+{
+ // return i,j such that
+ // i: maximizes -y_i * grad(f)_i, i in I_up(\alpha)
+ // j: minimizes the decrease of obj value
+	// j: minimizes the decrease of obj value
+	//    (if quadratic coefficient <= 0, replace it with tau)
+ // -y_j*grad(f)_j < -y_i*grad(f)_i, j in I_low(\alpha)
+
+ double Gmax = -INF;
+ double Gmax2 = -INF;
+ int Gmax_idx = -1;
+ int Gmin_idx = -1;
+ double obj_diff_min = INF;
+
+	for(int t=0;t<active_size;t++)
+		if(y[t]==+1)
+		{
+			if(!is_upper_bound(t))
+				if(-G[t] >= Gmax)
+ {
+ Gmax = -G[t];
+ Gmax_idx = t;
+ }
+ }
+ else
+ {
+ if(!is_lower_bound(t))
+ if(G[t] >= Gmax)
+ {
+ Gmax = G[t];
+ Gmax_idx = t;
+ }
+ }
+
+ int i = Gmax_idx;
+ const Qfloat *Q_i = NULL;
+ if(i != -1) // NULL Q_i not accessed: Gmax=-INF if i=-1
+ Q_i = Q->get_Q(i,active_size);
+
+	for(int j=0;j<active_size;j++)
+	{
+		if(y[j]==+1)
+		{
+			if (!is_lower_bound(j))
+			{
+				double grad_diff=Gmax+G[j];
+				if (G[j] >= Gmax2)
+ Gmax2 = G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[i]+QD[j]-2.0*y[i]*Q_i[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ else
+ {
+ if (!is_upper_bound(j))
+ {
+ double grad_diff= Gmax-G[j];
+ if (-G[j] >= Gmax2)
+ Gmax2 = -G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[i]+QD[j]+2.0*y[i]*Q_i[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ }
+
+ if(Gmax+Gmax2 < eps || Gmin_idx == -1)
+ return 1;
+
+ out_i = Gmax_idx;
+ out_j = Gmin_idx;
+ return 0;
+}
+
+bool Solver::be_shrunk(int i, double Gmax1, double Gmax2)
+{
+ if(is_upper_bound(i))
+ {
+ if(y[i]==+1)
+ return(-G[i] > Gmax1);
+ else
+ return(-G[i] > Gmax2);
+ }
+ else if(is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ return(G[i] > Gmax2);
+ else
+ return(G[i] > Gmax1);
+ }
+ else
+ return(false);
+}
+
+void Solver::do_shrinking()
+{
+ int i;
+ double Gmax1 = -INF; // max { -y_i * grad(f)_i | i in I_up(\alpha) }
+ double Gmax2 = -INF; // max { y_i * grad(f)_i | i in I_low(\alpha) }
+
+ // find maximal violating pair first
+	for(i=0;i<active_size;i++)
+	{
+		if(y[i]==+1)
+		{
+			if(!is_upper_bound(i))
+			{
+				if(-G[i] >= Gmax1)
+ Gmax1 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(G[i] >= Gmax2)
+ Gmax2 = G[i];
+ }
+ }
+ else
+ {
+ if(!is_upper_bound(i))
+ {
+ if(-G[i] >= Gmax2)
+ Gmax2 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(G[i] >= Gmax1)
+ Gmax1 = G[i];
+ }
+ }
+ }
+
+ if(unshrink == false && Gmax1 + Gmax2 <= eps*10)
+ {
+ unshrink = true;
+ reconstruct_gradient();
+ active_size = l;
+ info("*");
+ }
+
+	for(i=0;i<active_size;i++)
+		if (be_shrunk(i, Gmax1, Gmax2))
+		{
+			active_size--;
+			while (active_size > i)
+ {
+ if (!be_shrunk(active_size, Gmax1, Gmax2))
+ {
+ swap_index(i,active_size);
+ break;
+ }
+ active_size--;
+ }
+ }
+}
+
+double Solver::calculate_rho()
+{
+ double r;
+ int nr_free = 0;
+ double ub = INF, lb = -INF, sum_free = 0;
+	for(int i=0;i<active_size;i++)
+	{
+		double yG = y[i]*G[i];
+
+		if(is_upper_bound(i))
+		{
+			if(y[i]==-1)
+				ub = min(ub,yG);
+			else
+				lb = max(lb,yG);
+		}
+		else if(is_lower_bound(i))
+		{
+			if(y[i]==+1)
+				ub = min(ub,yG);
+			else
+				lb = max(lb,yG);
+		}
+		else
+		{
+			++nr_free;
+			sum_free += yG;
+		}
+	}
+
+	if(nr_free>0)
+ r = sum_free/nr_free;
+ else
+ r = (ub+lb)/2;
+
+ return r;
+}
+
+//
+// Solver for nu-svm classification and regression
+//
+// additional constraint: e^T \alpha = constant
+//
+class Solver_NU: public Solver
+{
+public:
+ Solver_NU() {}
+ void Solve(int l, const QMatrix& Q, const double *p, const schar *y,
+ double *alpha, double Cp, double Cn, double eps,
+ SolutionInfo* si, int shrinking)
+ {
+ this->si = si;
+ Solver::Solve(l,Q,p,y,alpha,Cp,Cn,eps,si,shrinking);
+ }
+private:
+ SolutionInfo *si;
+ int select_working_set(int &i, int &j);
+ double calculate_rho();
+ bool be_shrunk(int i, double Gmax1, double Gmax2, double Gmax3, double Gmax4);
+ void do_shrinking();
+};
+
+// return 1 if already optimal, return 0 otherwise
+int Solver_NU::select_working_set(int &out_i, int &out_j)
+{
+ // return i,j such that y_i = y_j and
+ // i: maximizes -y_i * grad(f)_i, i in I_up(\alpha)
+ // j: minimizes the decrease of obj value
+	// j: minimizes the decrease of obj value
+	//    (if quadratic coefficient <= 0, replace it with tau)
+ // -y_j*grad(f)_j < -y_i*grad(f)_i, j in I_low(\alpha)
+
+ double Gmaxp = -INF;
+ double Gmaxp2 = -INF;
+ int Gmaxp_idx = -1;
+
+ double Gmaxn = -INF;
+ double Gmaxn2 = -INF;
+ int Gmaxn_idx = -1;
+
+ int Gmin_idx = -1;
+ double obj_diff_min = INF;
+
+	for(int t=0;t<active_size;t++)
+		if(y[t]==+1)
+		{
+			if(!is_upper_bound(t))
+				if(-G[t] >= Gmaxp)
+ {
+ Gmaxp = -G[t];
+ Gmaxp_idx = t;
+ }
+ }
+ else
+ {
+ if(!is_lower_bound(t))
+ if(G[t] >= Gmaxn)
+ {
+ Gmaxn = G[t];
+ Gmaxn_idx = t;
+ }
+ }
+
+ int ip = Gmaxp_idx;
+ int in = Gmaxn_idx;
+ const Qfloat *Q_ip = NULL;
+ const Qfloat *Q_in = NULL;
+ if(ip != -1) // NULL Q_ip not accessed: Gmaxp=-INF if ip=-1
+ Q_ip = Q->get_Q(ip,active_size);
+ if(in != -1)
+ Q_in = Q->get_Q(in,active_size);
+
+	for(int j=0;j<active_size;j++)
+	{
+		if(y[j]==+1)
+		{
+			if (!is_lower_bound(j))
+			{
+				double grad_diff=Gmaxp+G[j];
+				if (G[j] >= Gmaxp2)
+ Gmaxp2 = G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[ip]+QD[j]-2*Q_ip[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ else
+ {
+ if (!is_upper_bound(j))
+ {
+ double grad_diff=Gmaxn-G[j];
+ if (-G[j] >= Gmaxn2)
+ Gmaxn2 = -G[j];
+ if (grad_diff > 0)
+ {
+ double obj_diff;
+ double quad_coef = QD[in]+QD[j]-2*Q_in[j];
+ if (quad_coef > 0)
+ obj_diff = -(grad_diff*grad_diff)/quad_coef;
+ else
+ obj_diff = -(grad_diff*grad_diff)/TAU;
+
+ if (obj_diff <= obj_diff_min)
+ {
+ Gmin_idx=j;
+ obj_diff_min = obj_diff;
+ }
+ }
+ }
+ }
+ }
+
+ if(max(Gmaxp+Gmaxp2,Gmaxn+Gmaxn2) < eps || Gmin_idx == -1)
+ return 1;
+
+ if (y[Gmin_idx] == +1)
+ out_i = Gmaxp_idx;
+ else
+ out_i = Gmaxn_idx;
+ out_j = Gmin_idx;
+
+ return 0;
+}
+
+bool Solver_NU::be_shrunk(int i, double Gmax1, double Gmax2, double Gmax3, double Gmax4)
+{
+ if(is_upper_bound(i))
+ {
+ if(y[i]==+1)
+ return(-G[i] > Gmax1);
+ else
+ return(-G[i] > Gmax4);
+ }
+ else if(is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ return(G[i] > Gmax2);
+ else
+ return(G[i] > Gmax3);
+ }
+ else
+ return(false);
+}
+
+void Solver_NU::do_shrinking()
+{
+ double Gmax1 = -INF; // max { -y_i * grad(f)_i | y_i = +1, i in I_up(\alpha) }
+ double Gmax2 = -INF; // max { y_i * grad(f)_i | y_i = +1, i in I_low(\alpha) }
+ double Gmax3 = -INF; // max { -y_i * grad(f)_i | y_i = -1, i in I_up(\alpha) }
+ double Gmax4 = -INF; // max { y_i * grad(f)_i | y_i = -1, i in I_low(\alpha) }
+
+ // find maximal violating pair first
+ int i;
+	for(i=0;i<active_size;i++)
+	{
+		if(!is_upper_bound(i))
+		{
+			if(y[i]==+1)
+			{
+				if(-G[i] > Gmax1) Gmax1 = -G[i];
+ }
+ else if(-G[i] > Gmax4) Gmax4 = -G[i];
+ }
+ if(!is_lower_bound(i))
+ {
+ if(y[i]==+1)
+ {
+ if(G[i] > Gmax2) Gmax2 = G[i];
+ }
+ else if(G[i] > Gmax3) Gmax3 = G[i];
+ }
+ }
+
+ if(unshrink == false && max(Gmax1+Gmax2,Gmax3+Gmax4) <= eps*10)
+ {
+ unshrink = true;
+ reconstruct_gradient();
+ active_size = l;
+ }
+
+	for(i=0;i<active_size;i++)
+		if (be_shrunk(i, Gmax1, Gmax2, Gmax3, Gmax4))
+		{
+			active_size--;
+			while (active_size > i)
+ {
+ if (!be_shrunk(active_size, Gmax1, Gmax2, Gmax3, Gmax4))
+ {
+ swap_index(i,active_size);
+ break;
+ }
+ active_size--;
+ }
+ }
+}
+
+double Solver_NU::calculate_rho()
+{
+ int nr_free1 = 0,nr_free2 = 0;
+ double ub1 = INF, ub2 = INF;
+ double lb1 = -INF, lb2 = -INF;
+ double sum_free1 = 0, sum_free2 = 0;
+
+	for(int i=0;i<active_size;i++)
+	{
+		if(y[i]==+1)
+		{
+			if(is_upper_bound(i))
+				lb1 = max(lb1,G[i]);
+			else if(is_lower_bound(i))
+				ub1 = min(ub1,G[i]);
+			else
+			{
+				++nr_free1;
+				sum_free1 += G[i];
+			}
+		}
+		else
+		{
+			if(is_upper_bound(i))
+				lb2 = max(lb2,G[i]);
+			else if(is_lower_bound(i))
+				ub2 = min(ub2,G[i]);
+			else
+			{
+				++nr_free2;
+				sum_free2 += G[i];
+			}
+		}
+	}
+
+	double r1,r2;
+	if(nr_free1 > 0)
+ r1 = sum_free1/nr_free1;
+ else
+ r1 = (ub1+lb1)/2;
+
+ if(nr_free2 > 0)
+ r2 = sum_free2/nr_free2;
+ else
+ r2 = (ub2+lb2)/2;
+
+ si->r = (r1+r2)/2;
+ return (r1-r2)/2;
+}
+
+//
+// Q matrices for various formulations
+//
+class SVC_Q: public Kernel
+{
+public:
+ SVC_Q(const svm_problem& prob, const svm_parameter& param, const schar *y_)
+ :Kernel(prob.l, prob.x, param)
+ {
+ clone(y,y_,prob.l);
+ cache = new Cache(prob.l,(long int)(param.cache_size*(1<<20)));
+ QD = new double[prob.l];
+ for(int i=0;i