Wednesday, May 4, 2022

How does one interpret SVM feature weights?

Consider the following dataset, which is linearly separable:

import numpy as np
X = np.array([[3, 4], [1, 4], [2, 3], [6, -1], [7, -1], [5, -3]])
y = np.array([-1, -1, -1, 1, 1, 1])
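
A quick way to check that the two classes really can be separated by a straight line is to scatter-plot them. This is only a sketch and assumes matplotlib is available; the original script below does not use it.

import numpy as np
import matplotlib.pyplot as plt

X = np.array([[3, 4], [1, 4], [2, 3], [6, -1], [7, -1], [5, -3]])
y = np.array([-1, -1, -1, 1, 1, 1])

# Plot each class in its own color; the two clusters sit on opposite sides of a line
plt.scatter(X[y == -1, 0], X[y == -1, 1], c='blue', label='class -1')
plt.scatter(X[y == 1, 0], X[y == 1, 1], c='red', label='class +1')
plt.legend()
plt.show()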





(.env) [boris@Server35fedora SVM]$ cat classSVM.py
import numpy as np
from sklearn.svm import SVC

X = np.array([[3, 4], [1, 4], [2, 3], [6, -1], [7, -1], [5, -3]])
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large C approximates a hard-margin SVM on this separable data
clf = SVC(C=1e5, kernel='linear')
clf.fit(X, y)

print('w = ', clf.coef_)
print('b = ', clf.intercept_)
print('Indices of support vectors = ', clf.support_)
print('Support vectors = ', clf.support_vectors_)
print('Number of support vectors for each class = ', clf.n_support_)
print('Coefficients of the support vector in the decision function = ', np.abs(clf.dual_coef_))

(.env) [boris@Server35fedora SVM]$ python classSVM.py
w =  [[ 0.25 -0.25]]
b =  [-0.75]
Indices of support vectors =  [2 3]
Support vectors =  [[ 2.  3.]
 [ 6. -1.]]
Number of support vectors for each class =  [1 1]
Coefficients of the support vector in the decision function =  [[0.0625 0.0625]]
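
These numbers fit together in the standard way for an (almost) hard-margin linear SVM, and recomputing them is a useful way to read the weights: w can be rebuilt from the dual coefficients and the support vectors, each support vector sits exactly on its margin (decision value ±1), and the distance between the two margins is 2/||w||. The sketch below assumes the same X, y and fitted clf from classSVM.py above.

import numpy as np

# Primal weights from the dual form: w = sum_i (alpha_i * y_i) * x_i;
# clf.dual_coef_ already holds alpha_i * y_i for each support vector
w_rebuilt = clf.dual_coef_ @ clf.support_vectors_
print('w rebuilt from dual coefficients = ', w_rebuilt)   # expected [[ 0.25 -0.25]]

# Each support vector lies exactly on its margin: w.x + b = +/-1
print('decision values at support vectors = ',
      clf.decision_function(clf.support_vectors_))        # approx. [-1.  1.]

# Width of the margin (distance between the two margin hyperplanes)
print('margin width = ', 2 / np.linalg.norm(clf.coef_))   # approx. 5.66 here

Since both features in this toy set are on the same scale, the weights read off directly: the two features are equally influential, with feature 0 pushing the decision function toward class +1 and feature 1 pushing it toward class -1. In general, comparing weight magnitudes as feature importances is only meaningful when the features have been standardized to comparable scales.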

