6.13 High-dimensional mapping

Question:

6.13 High-dimensional mapping. Let $\Phi \colon \mathcal{X} \to \mathbb{H}$ be a feature mapping such that the dimension $N$ of $\mathbb{H}$ is very large, and let $K \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a PDS kernel defined by

$$K(x, x') = \mathop{\mathbb{E}}_{i \sim \mathcal{D}}\big[\, [\Phi(x)]_i \, [\Phi(x')]_i \,\big], \qquad (6.27)$$

where $[\Phi(x)]_i$ is the $i$th component of $\Phi(x)$ (and similarly for $\Phi(x')$) and where $\mathcal{D}$ is a distribution over the indices $i$. We shall assume that $|[\Phi(x)]_i| \le R$ for all $x \in \mathcal{X}$ and $i \in [N]$. Suppose that the only method available to compute $K(x, x')$ involved direct computation of the inner product (6.27), which would require $O(N)$ time. Alternatively, an approximation can be computed based on a random selection of a subset $I$ of the $N$ components of $\Phi(x)$ and $\Phi(x')$ according to $\mathcal{D}$, that is:

$$K'(x, x') = \frac{1}{n} \sum_{i \in I} [\Phi(x)]_i \, [\Phi(x')]_i, \qquad (6.28)$$

where $|I| = n$ and the indices in $I$ are drawn i.i.d. according to $\mathcal{D}$.
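As an illustration only (not part of the original exercise), here is a minimal Python sketch of this approximation on a synthetic setup. The names `phi`, `D`, `exact_kernel`, and `approx_kernel`, as well as the synthetic feature map itself, are hypothetical and only meant to mirror (6.27) and (6.28):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional setup: N components and a distribution D over indices.
N = 100_000
D = rng.dirichlet(np.ones(N))          # D(i): probability of picking component i

def phi(x):
    """Hypothetical feature map Phi(x) in R^N, clipped so |[Phi(x)]_i| <= R = 1."""
    local = np.random.default_rng(hash(x) % (2**32))
    return np.clip(local.standard_normal(N), -1.0, 1.0)

def exact_kernel(x, xp):
    """K(x, x') = E_{i~D}[ [Phi(x)]_i [Phi(x')]_i ]  -- O(N) time, as in (6.27)."""
    return np.sum(D * phi(x) * phi(xp))

def approx_kernel(x, xp, n):
    """K'(x, x') = (1/n) sum_{i in I} [Phi(x)]_i [Phi(x')]_i, I ~ D^n, as in (6.28)."""
    I = rng.choice(N, size=n, p=D)      # n indices drawn i.i.d. according to D
    return np.mean(phi(x)[I] * phi(xp)[I])

print(exact_kernel("a", "b"), approx_kernel("a", "b", n=2000))
```

With larger `n` the subsampled value concentrates around the exact one, which is exactly what parts (a) and (b) quantify.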

(a) Fix $x$ and $x'$ in $\mathcal{X}$. Prove that

$$\Pr_{I \sim \mathcal{D}^n}\big[\, |K(x, x') - K'(x, x')| > \epsilon \,\big] \le 2 e^{-\frac{n\epsilon^2}{2R^4}}. \qquad (6.29)$$

(Hint: use McDiarmid's inequality.)
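As a hedged outline of how the hint can be applied (a sketch under the stated assumption $|[\Phi(x)]_i| \le R$, not the textbook's official solution):

```latex
% Sketch: view K'(x,x') as a function f of the n i.i.d. indices i_1,...,i_n ~ D.
\begin{align*}
  K'(x,x') &= f(i_1,\dots,i_n)
   = \frac{1}{n}\sum_{k=1}^{n} [\Phi(x)]_{i_k}\,[\Phi(x')]_{i_k},
   \qquad \mathbb{E}_{I\sim\mathcal{D}^n}\!\big[K'(x,x')\big] = K(x,x') .
\\
% Each summand lies in [-R^2, R^2], so changing a single index i_k moves f by at most:
  \sup_{i_k,\, i_k'}\big|f(\dots,i_k,\dots) - f(\dots,i_k',\dots)\big|
  &\le \frac{2R^2}{n} =: c_k .
\\
% McDiarmid's inequality with bounded differences c_1 = ... = c_n = 2R^2/n:
  \Pr_{I\sim\mathcal{D}^n}\big[\,|K'(x,x') - K(x,x')| > \epsilon\,\big]
  &\le 2\exp\!\Big(\frac{-2\epsilon^2}{\sum_{k=1}^{n} c_k^2}\Big)
   = 2\exp\!\Big(\frac{-2\epsilon^2}{4R^4/n}\Big)
   = 2 e^{-\frac{n\epsilon^2}{2R^4}} .
\end{align*}
```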

(b) Let $\mathbf{K}$ and $\mathbf{K}'$ be the kernel matrices associated to $K$ and $K'$ for an arbitrary sample of $m$ points. Show that for any $\epsilon, \delta > 0$, for $n > \frac{2R^4}{\epsilon^2} \log \frac{m(m+1)}{\delta}$, with probability at least $1 - \delta$, $|\mathbf{K}'_{ij} - \mathbf{K}_{ij}| \le \epsilon$ for all $i, j \in [m]$.
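Likewise, an illustrative sketch only (taking the bound of part (a) as given) of why this choice of $n$ suffices, via a union bound over the $m(m+1)/2$ distinct entries of the symmetric $m \times m$ matrices:

```latex
% Sketch: union bound over the pairs (i, j) with i <= j, then solve for n.
\begin{align*}
  \Pr\big[\exists\, i \le j \colon |\mathbf{K}'_{ij} - \mathbf{K}_{ij}| > \epsilon\big]
  &\le \frac{m(m+1)}{2} \cdot 2 e^{-\frac{n\epsilon^2}{2R^4}}
   = m(m+1)\, e^{-\frac{n\epsilon^2}{2R^4}} .
\\
% Requiring the right-hand side to be at most delta:
  m(m+1)\, e^{-\frac{n\epsilon^2}{2R^4}} \le \delta
  &\iff n \ge \frac{2R^4}{\epsilon^2} \log \frac{m(m+1)}{\delta} .
\end{align*}
```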


Related Book:

Foundations of Machine Learning, 2nd Edition

Authors: Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar

ISBN: 9780262351362
