
Commit

remove unnecessary whitespace characters in files except java and m4 files
Kevin committed Dec 23, 2016
1 parent 53f203d commit 6699ea4
Showing 21 changed files with 502 additions and 502 deletions.
566 changes: 283 additions & 283 deletions FAQ.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion Makefile.win
@@ -26,7 +26,7 @@ svm.obj: svm.cpp svm.h
$(CXX) $(CFLAGS) -c svm.cpp

lib: svm.cpp svm.h svm.def
$(CXX) $(CFLAGS) -LD svm.cpp -Fe$(TARGET)\libsvm -link -DEF:svm.def

clean:
-erase /Q *.obj $(TARGET)\*.exe $(TARGET)\*.dll $(TARGET)\*.exp $(TARGET)\*.lib
80 changes: 40 additions & 40 deletions README
@@ -4,7 +4,7 @@ classification, one-class-SVM, epsilon-SVM regression, and nu-SVM
regression. It also provides an automatic model selection tool for
C-SVM classification. This document explains the use of libsvm.

Libsvm is available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm
Please read the COPYRIGHT file before using libsvm.

@@ -18,7 +18,7 @@ Table of Contents
- `svm-scale' Usage
- Tips on Practical Use
- Examples
- Precomputed Kernels
- Library Usage
- Java Version
- Building Windows Binaries
@@ -30,8 +30,8 @@ Table of Contents
Quick Start
===========

If you are new to SVM and if the data is not large, please go to
the `tools' directory and use easy.py after installation. It does
everything automatically -- from data scaling to parameter selection.

Usage: easy.py training_file [testing_file]
@@ -93,8 +93,8 @@ svm-scale:
svm-toy:

This is a simple graphical interface which shows how an SVM
separates data in a plane. You can click in the window to
draw data points. Use the "change" button to choose class
1, 2 or 3 (i.e., up to three classes are supported), "load"
button to load data from a file, "save" button to save data to
a file, "run" button to obtain an SVM model, and "clear"
@@ -117,7 +117,7 @@ svm-toy:

You need GTK+ library to build the GTK version.
(available from http://www.gtk.org)

The pre-built Windows binaries are in the `windows'
directory. We use Visual C++ on a 64-bit machine.

@@ -129,7 +129,7 @@ options:
-s svm_type : set type of SVM (default 0)
0 -- C-SVC (multi-class classification)
1 -- nu-SVC (multi-class classification)
2 -- one-class SVM
3 -- epsilon-SVR (regression)
4 -- nu-SVR (regression)
-t kernel_type : set type of kernel function (default 2)
@@ -206,7 +206,7 @@ Scale each feature of the training data to be in [-1,1]. Scaling
factors are stored in the file range and then used for scaling the
test data.

> svm-train -s 0 -c 5 -t 2 -g 0.5 -e 0.1 data_file

Train a classifier with RBF kernel exp(-0.5|u-v|^2), C=5, and
stopping tolerance 0.1.
@@ -232,25 +232,25 @@ the parameters C = 100 and gamma = 0.1
Obtain a model with probability information and predict test data with
probability estimates

Precomputed Kernels
===================

Users may precompute kernel values and input them as training and
testing files. Then libsvm does not need the original
training/testing sets.

Assume there are L training instances x1, ..., xL.
Let K(x, y) be the kernel
value of two instances x and y. The input formats
are:

New training instance for xi:

<label> 0:i 1:K(xi,x1) ... L:K(xi,xL)

New testing instance for any x:

<label> 0:? 1:K(x,x1) ... L:K(x,xL)

That is, in the training file the first column must be the "ID" of
xi. In testing, ? can be any value.
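
As a minimal sketch of producing this format (assuming a plain linear
kernel K(x,y) = x'y, made-up data, and a hypothetical output file name),
a precomputed-kernel training file could be written like this in C:

    /* Sketch: write <label> 0:i 1:K(xi,x1) ... L:K(xi,xL) lines for a
       linear kernel on small dense vectors.  Data are hypothetical. */
    #include <stdio.h>

    #define L 3   /* number of training instances */
    #define D 2   /* input dimension */

    static double dot(const double a[D], const double b[D])
    {
        double s = 0;
        int d;
        for (d = 0; d < D; d++)
            s += a[d] * b[d];
        return s;
    }

    int main(void)
    {
        double x[L][D] = { {1, 0}, {0, 1}, {1, 1} };  /* made-up inputs */
        double y[L]    = { 1, -1, 1 };                /* made-up labels */
        FILE *fp = fopen("precomputed.train", "w");
        int i, j;

        if (fp == NULL) return 1;
        for (i = 0; i < L; i++) {
            /* column 0 must be the 1-based "ID" of xi */
            fprintf(fp, "%g 0:%d", y[i], i + 1);
            for (j = 0; j < L; j++)
                fprintf(fp, " %d:%g", j + 1, dot(x[i], x[j]));
            fprintf(fp, "\n");
        }
        fclose(fp);
        return 0;
    }
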
@@ -277,17 +277,17 @@ Examples:
training/testing sets:

15 0:1 1:4 2:6 3:1
45 0:2 1:6 2:18 3:0
25 0:3 1:1 2:0 3:1

15 0:? 1:2 2:0 3:1

? can be any value.

Any subset of the above training file is also valid. For example,

25 0:3 1:1 2:0 3:1
45 0:2 1:6 2:18 3:0

implies that the kernel matrix is

@@ -316,18 +316,18 @@ to classify new data.
the given training data and parameters.

struct svm_problem describes the problem:

struct svm_problem
{
int l;
double *y;
struct svm_node **x;
};

where `l' is the number of training data, and `y' is an array containing
their target values. (integers in classification, real numbers in
regression) `x' is an array of pointers, each of which points to a sparse
representation (array of svm_node) of one training vector.

For example, if we have the following training data:

@@ -361,7 +361,7 @@ to classify new data.

index = -1 indicates the end of one vector. Note that indices must
be in ASCENDING order.

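For instance, a minimal sketch (values made up, assuming the svm_node
declaration from svm.h) of the sparse vector "1:0.5 3:1.2":

    #include "svm.h"   /* struct svm_node { int index; double value; }; */

    /* Sketch: indices ascend; index = -1 terminates the vector. */
    static struct svm_node v[] = {
        { .index = 1,  .value = 0.5 },
        { .index = 3,  .value = 1.2 },
        { .index = -1, .value = 0.0 }   /* end-of-vector marker */
    };
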
struct svm_parameter describes the parameters of an SVM model:

struct svm_parameter
@@ -402,7 +402,7 @@ to classify new data.
PRECOMPUTED: kernel values in training_set_file

cache_size is the size of the kernel cache, specified in megabytes.
C is the cost of constraint violation.
eps is the stopping criterion. (we usually use 0.00001 in nu-SVC,
0.001 in others). nu is the parameter in nu-SVM, nu-SVR, and
one-class-SVM. p is the epsilon in epsilon-insensitive loss function
@@ -418,13 +418,13 @@ to classify new data.
nr_weight is the number of elements in the array weight_label and
weight. Each weight[i] corresponds to weight_label[i], meaning that
the penalty of class weight_label[i] is scaled by a factor of weight[i].

If you do not want to change penalty for any of the classes,
just set nr_weight to 0.

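For instance, a small sketch (made-up classes and factors, assuming
param is the struct svm_parameter being filled in) that penalizes
errors on class +1 ten times more than on class -1:

    /* Sketch: the penalty of class +1 becomes 10*C; class -1 keeps C. */
    int    weight_label[] = { +1, -1 };
    double weight[]       = { 10.0, 1.0 };

    param.nr_weight    = 2;
    param.weight_label = weight_label;
    param.weight       = weight;
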
*NOTE* Because svm_model contains pointers to svm_problem, you can
not free the memory used by svm_problem if you are still using the
svm_model produced by svm_train().

*NOTE* To avoid wrong parameters, svm_check_parameter() should be
called before svm_train().
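
Putting these pieces together, a minimal hedged sketch (a made-up
two-point data set; the parameter values simply mirror common defaults
rather than a recommendation) of building a problem, validating the
parameters, training, and freeing in a safe order:

    #include <stdio.h>
    #include <stdlib.h>
    #include "svm.h"

    int main(void)
    {
        /* two training vectors: "+1 1:1" and "-1 1:-1" */
        struct svm_node x0[] = { {1,  1.0}, {-1, 0.0} };
        struct svm_node x1[] = { {1, -1.0}, {-1, 0.0} };
        struct svm_node *x[] = { x0, x1 };
        double y[] = { +1, -1 };

        struct svm_problem prob;
        prob.l = 2;
        prob.y = y;
        prob.x = x;

        struct svm_parameter param;
        param.svm_type     = C_SVC;
        param.kernel_type  = RBF;
        param.degree       = 3;
        param.gamma        = 1.0;    /* 1/num_features here */
        param.coef0        = 0;
        param.cache_size   = 100;    /* kernel cache, in MB */
        param.eps          = 1e-3;   /* stopping tolerance  */
        param.C            = 1;
        param.nr_weight    = 0;      /* keep all class penalties at C */
        param.weight_label = NULL;
        param.weight       = NULL;
        param.nu           = 0.5;
        param.p            = 0.1;
        param.shrinking    = 1;
        param.probability  = 0;

        const char *err = svm_check_parameter(&prob, &param);
        if (err) { fprintf(stderr, "parameter error: %s\n", err); return 1; }

        struct svm_model *model = svm_train(&prob, &param);

        /* ... use the model ... */

        /* free the model first; prob's arrays are still referenced by it */
        svm_free_and_destroy_model(&model);
        svm_destroy_param(&param);
        return 0;
    }
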
@@ -464,7 +464,7 @@ to classify new data.
k classes. For data in class j, the corresponding sv_coef includes (k-1) y*alpha vectors,
where alpha's are solutions of the following two class problems:
1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k
and y=1 for the first j-1 vectors, while y=-1 for the remaining k-j
vectors. For example, if there are 4 classes, sv_coef and SV are like:

+-+-+-+--------------------+
@@ -500,8 +500,8 @@ to classify new data.

nSV is the number of support vectors in each class.

free_sv is a flag used to determine whether the space of SV should
be released in free_model_content(struct svm_model*) and
free_and_destroy_model(struct svm_model**). If the model is
generated by svm_train(), then SV points to data in svm_problem
and should not be removed. For example, free_sv is 0 if svm_model
@@ -527,7 +527,7 @@ to classify new data.
labels (of all prob's instances) in the validation process are
stored in the array called target.

The format of svm_prob is the same as that for svm_train().

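Continuing the earlier sketch (prob and param as set up above), a
hedged sketch of 5-fold cross validation with a simple accuracy count:

    /* Sketch: predicted labels land in target[], one per instance. */
    double *target = malloc(prob.l * sizeof(double));
    int correct = 0, i;

    svm_cross_validation(&prob, &param, 5, target);
    for (i = 0; i < prob.l; i++)
        if (target[i] == prob.y[i])
            correct++;
    printf("CV accuracy = %g%%\n", 100.0 * correct / prob.l);
    free(target);
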
- Function: int svm_get_svm_type(const struct svm_model *model);

@@ -540,18 +540,18 @@ to classify new data.
classes. For a regression or a one-class model, 2 is returned.

- Function: void svm_get_labels(const svm_model *model, int* label)

For a classification model, this function outputs the name of
labels into an array called label. For regression and one-class
models, label is unchanged.

- Function: void svm_get_sv_indices(const struct svm_model *model, int *sv_indices)

This function outputs indices of support vectors into an array called sv_indices.
The size of sv_indices is the number of support vectors and can be obtained by calling svm_get_nr_sv.
Each sv_indices[i] is in the range [1, ..., num_training_data].

- Function: int svm_get_nr_sv(const struct svm_model *model)

This function gives the total number of support vectors.

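Continuing the sketch, these query functions might be used like this
(array sizes are taken from the model itself):

    /* Sketch: query a trained model for its classes and support vectors. */
    int nr_class = svm_get_nr_class(model);
    int nr_sv    = svm_get_nr_sv(model);

    int *label      = malloc(nr_class * sizeof(int));
    int *sv_indices = malloc(nr_sv * sizeof(int));

    svm_get_labels(model, label);           /* class labels, e.g. +1/-1  */
    svm_get_sv_indices(model, sv_indices);  /* 1-based indices into prob */

    printf("%d classes, %d support vectors\n", nr_class, nr_sv);
    if (nr_sv > 0)
        printf("first SV is training instance %d\n", sv_indices[0]);

    free(label);
    free(sv_indices);
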
@@ -565,7 +565,7 @@ to classify new data.
If the model is not for svr or does not contain required
information, 0 is returned.

- Function: double svm_predict_values(const svm_model *model,
const svm_node *x, double* dec_values)

This function gives decision values on a test vector x given a
@@ -579,17 +579,17 @@ to classify new data.
label[0] vs. label[nr_class-1], label[1] vs. label[2], ...,
label[nr_class-2] vs. label[nr_class-1], where label can be
obtained from the function svm_get_labels. The returned value is
the predicted class for x. Note that when nr_class = 1, this
function does not give any decision value.

For a regression model, dec_values[0] and the returned value are
both the function value of x calculated using the model. For a
one-class model, dec_values[0] is the decision value of x, while
the returned value is +1/-1.

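Continuing the sketch (reusing x0 as the test vector), decision values
for the two-class model above might be collected like this; for
classification the pairwise ordering implies nr_class*(nr_class-1)/2
values, while regression and one-class use dec_values[0] only:

    /* Sketch: decision values on one test vector. */
    int k = svm_get_nr_class(model);
    int n_dec = (k > 1) ? k * (k - 1) / 2 : 1;
    double *dec = malloc(n_dec * sizeof(double));

    double pred = svm_predict_values(model, x0, dec);
    printf("predicted label %g, first decision value %g\n", pred, dec[0]);
    free(dec);
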
- Function: double svm_predict_probability(const struct svm_model *model,
const struct svm_node *x, double* prob_estimates);

This function does classification or regression on a test vector x
given a model with probability information.

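A hedged sketch of the probability path, continuing the example above;
this assumes the model was trained with param.probability = 1, that
prob_estimates holds one value per class in svm_get_labels() order, and
that svm_check_probability_model() is available to verify the model:

    /* Sketch: probability estimates for one test vector. */
    if (svm_check_probability_model(model)) {
        int k = svm_get_nr_class(model);
        double *pe = malloc(k * sizeof(double));
        double pred = svm_predict_probability(model, x0, pe);
        printf("predicted label %g, P(first label) = %g\n", pred, pe[0]);
        free(pe);
    }
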
@@ -645,7 +645,7 @@ to classify new data.
- Function: void svm_set_print_string_function(void (*print_func)(const char *));

Users can specify their output format by a function. Use
svm_set_print_string_function(NULL);
for default printing to stdout.

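For instance, a small sketch (helper names are made up) that silences
libsvm's training output, a common use of this hook:

    #include "svm.h"

    /* Sketch: a print function that discards libsvm's messages. */
    static void quiet_print(const char *s)
    {
        (void)s;   /* ignore the message */
    }

    static void set_quiet(void)
    {
        svm_set_print_string_function(quiet_print);
        /* svm_set_print_string_function(NULL) restores printing to stdout */
    }
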
Java Version
@@ -667,7 +667,7 @@ You may need to increase maximum Java heap size.
Library usages are similar to the C version. These functions are available:

public class svm {
public static final int LIBSVM_VERSION=322;
public static svm_model svm_train(svm_problem prob, svm_parameter param);
public static void svm_cross_validation(svm_problem prob, svm_parameter param, int nr_fold, double[] target);
public static int svm_get_svm_type(svm_model model);
@@ -692,7 +692,7 @@ Note that in Java version, svm_node[] is not ended with a node whose index = -1.
Users can specify their output format by

your_print_func = new svm_print_interface()
{
public void print(String s)
{
// your own format
@@ -726,7 +726,7 @@ nmake -f Makefile.win lib
4. (optional) To build 32-bit windows binaries, you must
(1) Setup "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\vcvars32.bat" instead of vcvars64.bat
(2) Change CFLAGS in Makefile.win: /D _WIN64 to /D _WIN32

Another way is to build them from Visual C++ environment. See details
in libsvm FAQ.

@@ -761,7 +761,7 @@ http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf
For any questions and comments, please email [email protected]

Acknowledgments:
This work was supported in part by the National Science
Council of Taiwan via the grant NSC 89-2213-E-002-013.
The authors thank their group members and users
for many helpful discussions and comments. They are listed in
22 changes: 11 additions & 11 deletions matlab/README
@@ -24,9 +24,9 @@ the usage and the way of specifying parameters are the same as that of LIBSVM.
Installation
============

On Windows systems, pre-built binary files are already in the
directory '..\windows', so no installation is needed. Currently we
provide binary files only for 64bit MATLAB on Windows. If you would
like to re-build the package, please rely on the following steps.

We recommend using make.m on both MATLAB and OCTAVE. Just type 'make'
@@ -60,8 +60,8 @@ Example:
matlab>> make

On Unix systems, if neither make.m nor 'mex -setup' works, please use
Makefile and type 'make' in a command window. Note that we assume
your MATLAB is installed in '/usr/local/matlab'. If not, please change
MATLABDIR in Makefile.

Example:
@@ -142,7 +142,7 @@ accuracy, is a vector including accuracy (for classification), mean
squared error, and squared correlation coefficient (for regression).
The third is a matrix containing decision values or probability
estimates (if '-b 1' is specified). If k is the number of classes
in training data, for decision values, each row includes results of
predicting k(k-1)/2 binary-class SVMs. For classification, k = 1 is a
special case. Decision value +1 is returned for each testing instance,
instead of an empty vector. For probabilities, each row contains k values
@@ -153,20 +153,20 @@ in the model structure.
Other Utilities
===============

A MATLAB function libsvmread reads files in LIBSVM format:

[label_vector, instance_matrix] = libsvmread('data.txt');

Two outputs are labels and instances, which can then be used as inputs
of svmtrain or svmpredict.

A MATLAB function libsvmwrite writes a MATLAB matrix to a file in LIBSVM format:

libsvmwrite('data.txt', label_vector, instance_matrix)

The instance_matrix must be a sparse matrix. (type must be double)
For 32bit and 64bit MATLAB on Windows, pre-built binary files are ready
in the directory `..\windows', but in future releases, we will only
include 64bit MATLAB binary files.

These codes are prepared by Rong-En Fan and Kai-Wei Chang from National
6 changes: 3 additions & 3 deletions matlab/libsvmread.c
@@ -9,8 +9,8 @@
#ifdef MX_API_VER
#if MX_API_VER < 0x07030000
typedef int mwIndex;
#endif
#endif
#ifndef max
#define max(x,y) (((x)>(y))?(x):(y))
#endif
@@ -38,7 +38,7 @@ static int max_line_len;
static char* readline(FILE *input)
{
int len;

if(fgets(line,max_line_len,input) == NULL)
return NULL;
