### PROBLEM LINKS

### DIFFICULTY

CHALLENGE

### EXPLANATION

The problem is more commonly known as the “Shortest Vector Problem” in a lattice, and it is not well understood from the perspective of polynomial-time approximation. One hardness result is that it cannot be approximated within a factor of 2^(log^c d) for any constant c < 1/2 unless every problem in NP can be solved by a randomized algorithm that almost always answers correctly and whose running time is much faster than exponential (see [1] for details). On the other hand, essentially the best known polynomial-time approximation algorithm finds a solution whose cost is within a factor of 2^(d/2) of the optimum [2]. There have been minor improvements since [2], but the best known algorithm still has an approximation guarantee exponential in d. A nice description of this algorithm can be found in the book [3].

In the special case of d = 1, elementary number theory tells us that the shortest vector has length equal to the greatest common divisor of the lengths of the input vectors. If d = 2, then there is a polynomial-time exact algorithm that proceeds much like a generalization of Euclid’s algorithm. To find a shortest integer combination of two vectors a, b in dimension d = 2, the algorithm proceeds as follows:

- if |a| > |b| then swap(a, b)
- let x := &lt;a,b&gt;/&lt;a,a&gt; (where &lt;a,b&gt; is the dot product of a and b, so x is the coefficient of the projection of b onto a)
- if the absolute value of x is > 1/2, then set b := b - m*a, where m is the closest integer to x, and go to step 1
- output a

To recover the coefficients, one simply has to do a little additional bookkeeping. The procedure is reminiscent of Euclid’s gcd algorithm in that it repeatedly subtracts copies of the shorter vector from the longer one until a terminating condition is met. A proof of correctness of this approach can be found in [3]. For the curious, the basic idea is to show two facts: if the angle between a and b is between 60 and 120 degrees, then the shorter of the two is a shortest integer combination; and if |a| <= |b| with |x| <= 1/2 (where x is as in the algorithm above), then the angle between a and b is indeed between 60 and 120 degrees. Perhaps the most delicate part of the analysis is proving that the algorithm runs in time polynomial in the number of bits used to represent the vectors. Unfortunately, translating these ideas to higher dimensions does not yield an exact algorithm in general, but it is the starting point for the ideas in [2].
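Putting the steps together, here is a minimal Python sketch of the d = 2 reduction, including the bookkeeping that expresses the output as an integer combination of the original inputs (the function and variable names are my own, not from the references):

```python
def gauss_reduce(a, b):
    """Gauss-Lagrange reduction for d = 2: return a shortest nonzero
    integer combination of a and b, plus coefficients (s, t) such that
    the result equals s*a + t*b.  Assumes a and b are linearly
    independent integer vectors given as 2-tuples."""
    # ca and cb express the current a and b in terms of the originals.
    ca, cb = (1, 0), (0, 1)
    while True:
        # Step 1: keep a as the shorter of the two vectors.
        if a[0]**2 + a[1]**2 > b[0]**2 + b[1]**2:
            a, b = b, a
            ca, cb = cb, ca
        # Step 2: x is the projection coefficient of b onto a.
        x = (a[0]*b[0] + a[1]*b[1]) / (a[0]**2 + a[1]**2)
        m = round(x)
        # Step 3: if |x| <= 1/2 we are done; otherwise subtract
        # m copies of a from b and repeat.
        if m == 0:
            return a, ca
        b = (b[0] - m*a[0], b[1] - m*a[1])
        cb = (cb[0] - m*ca[0], cb[1] - m*ca[1])
```

For instance, `gauss_reduce((1, 1), (1, 2))` produces a vector of length 1 together with the coefficients expressing it in terms of the two inputs.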

For interest’s sake, while the best approximation guarantee known for polynomial-time algorithms is exponential, it is still sufficient for the following application. Given a polynomial p(x) over the integers Z or the rationals Q, there is an algorithm, running in time polynomial in the degree of p(x) and the number of bits used to represent its coefficients, that completely factors p(x) into factors irreducible over Z[x] or Q[x], respectively. An essential subroutine involves finding short vectors in a lattice, and the algorithm in [2] finds sufficiently short vectors for this application.
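As an illustration of the factoring application (not of the lattice machinery itself), computer algebra systems expose this kind of factorization directly; the sketch below uses SymPy’s `factor` and `factor_list` on a polynomial of my own choosing, p = (x - 1)^2 (x + 1)^3 in expanded form:

```python
from sympy import symbols, factor, factor_list

x = symbols('x')
# Expanded form of (x - 1)**2 * (x + 1)**3.
p = x**5 + x**4 - 2*x**3 - 2*x**2 + x + 1

# Irreducible factorization over the rationals.
print(factor(p))
content, factors = factor_list(p)  # (constant, [(factor, multiplicity), ...])
```

The `factor_list` form is convenient when the multiplicities of the irreducible factors are needed programmatically.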

[1] S. Khot. Hardness of approximating the shortest vector problem in lattices. Journal of the ACM, 52(5), 2005, pp. 789-808.

[2] A.K. Lenstra, H.W. Lenstra, L. Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261, 1982, pp. 515-534.

[3] V.V. Vazirani. Approximation Algorithms. Springer-Verlag, 2003.