$\begingroup$

I have heard a few times that if we allow computations with infinite precision, we could obtain unrealistic computational power, up to the point of solving NP problems efficiently.

Is it true? If yes:

  • What is the model of computation used to define infinitely precise arithmetic? Do we need to assume that accessing the $n$-th digit of a number takes constant time, and that arithmetic operations between two numbers take constant time? (Otherwise I guess it would be possible to simulate this efficiently on a classical computer using lazy arithmetic.)
  • Given this model of computation, how would you use it to solve NP-complete problems?
  • Is it possible to approximate such a model physically, for instance in an analog way, under the (unrealistic) assumption that there is no noise?
$\endgroup$
  • $\begingroup$ yuvalfilmus.cs.technion.ac.il/Courses/AlgebraicMethods/2017/… $\endgroup$ Commented Jun 2, 2022 at 18:31
  • $\begingroup$ Analog computers work with physical quantities and are much less accurate than what we achieve even with single-precision floating-point. And the discrete nature of matter kills any hope of representing real numbers. $\endgroup$ Commented Jun 2, 2022 at 18:56

1 Answer

$\begingroup$

One such model of computation is explained in this note. In this model of computation, we are allowed to perform arithmetic operations, logical operations, and comparisons on integers of arbitrary precision. We encode Boolean functions $f\colon \{0,1\}^n \to \{0,1\}$ as bitstrings of length $2^n$, thinking of the index of a bit as encoding a truth assignment. Explicitly, if $i = i_{n-1} 2^{n-1} + \cdots + i_0$, where $i_0,\ldots,i_{n-1} \in \{0,1\}$, then the $i$'th bit of the bitstring (where the $0$th bit is the LSB) is $f(i_0,\ldots,i_{n-1})$.
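As an illustration of the encoding (my own sketch, not code from the note), here it is in Python — Python integers are already arbitrary-precision, so they can stand in for the model's integers, though on a real machine the operations are of course not constant-time:

```python
# Hypothetical helper: build the 2^n-bit truth-table integer of a
# Boolean function f of n variables. Bit i (LSB = bit 0) holds
# f(i_0, ..., i_{n-1}), where i_0 is the least significant bit of i.
def encode(f, n):
    word = 0
    for i in range(2 ** n):
        bits = [(i >> k) & 1 for k in range(n)]  # (i_0, ..., i_{n-1})
        word |= f(*bits) << i
    return word

and2 = encode(lambda x1, x2: x1 & x2, 2)  # truth table 1000 -> integer 8
```

Note that this brute-force enumeration takes $2^n$ steps; it is only for checking the encoding. The point of the model is that the same integers can be built with polynomially many word operations, as shown next.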

Given functions $f,g$, we can compute $f \land g$, $f \lor g$, and $\lnot f$ using bitwise operations. The projection function $(x_1,\ldots,x_n) \mapsto x_i$ corresponds to the bitstring $$ 2^{2^{i-1}} \prod_{\substack{0 \le I \le n-1 \\ I \neq i-1}} \left(1 + 2^{2^I}\right), $$ which can be computed efficiently (in this model) using repeated squaring.
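Concretely, a Python sketch of that repeated-squaring computation (names are mine, not the note's): each factor $2^{2^I}$ is obtained from the previous one by a single squaring, so the loop runs $n$ times rather than $2^n$.

```python
def projection(i, n):
    # Truth-table integer for (x_1, ..., x_n) -> x_i: the bits whose index
    # has bit (i-1) set, i.e. 2^(2^(i-1)) * prod_{I != i-1} (1 + 2^(2^I)).
    word = 1 << (1 << (i - 1))   # 2^(2^(i-1))
    power = 2                    # invariant: power = 2^(2^I) at loop entry
    for I in range(n):
        if I != i - 1:
            word *= 1 + power
        power *= power           # squaring: 2^(2^I) -> 2^(2^(I+1))
    return word

projection(1, 2)  # -> 0b1010: assignments with x_1 = 1 (indices 1 and 3)
projection(2, 2)  # -> 0b1100: assignments with x_2 = 1 (indices 2 and 3)
```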

Given a Boolean formula (or even a Boolean circuit) on $n$ inputs, we can efficiently compute the bitstring corresponding to the function computed by the formula. Comparing the result to zero, we can solve SAT.
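As a toy example (my own, assuming the encoding above with $n = 2$, so each truth table is a 4-bit integer): deciding SAT for $(x_1 \lor x_2) \land \lnot x_1$ costs one word operation per gate plus a single comparison to zero.

```python
n = 2
FULL = (1 << (1 << n)) - 1    # all-ones word 0b1111, for complementation
x1 = 0b1010                   # projection x_1: indices 1, 3 have bit i_0 = 1
x2 = 0b1100                   # projection x_2: indices 2, 3 have bit i_1 = 1
# One bitwise operation per gate of (x1 OR x2) AND NOT x1:
table = (x1 | x2) & (FULL & ~x1)
satisfiable = table != 0      # True: bit 2 is set, i.e. x_1 = 0, x_2 = 1
```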

In fact, we can even solve QBF. Roughly speaking, to implement a quantifier, we decompose the bitstring into two halves, and then either AND them (in the case of $\forall$) or OR them (in the case of $\exists$). This is possible if we also allow the shift operation.
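A sketch of that quantifier-elimination step (again my own code, not the note's): splitting off the half where the last variable is 1 is exactly a shift by $2^{n-1}$.

```python
def eliminate(table, n, forall):
    # Remove the last variable x_n from a 2^n-bit truth table.
    # Bits 0 .. 2^(n-1)-1 form the x_n = 0 half, the rest the x_n = 1 half.
    half = 1 << (n - 1)
    low = table & ((1 << half) - 1)   # f restricted to x_n = 0
    high = table >> half              # f restricted to x_n = 1
    return (low & high) if forall else (low | high)

# f = x_1 AND x_2, truth table 0b1000 (n = 2):
eliminate(0b1000, 2, forall=False)  # exists x_2: table 0b10, true iff x_1 = 1
eliminate(0b1000, 2, forall=True)   # forall x_2: table 0b00, never true
```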

$\endgroup$
  • $\begingroup$ Thanks a lot! Basically, all the operations could be implemented efficiently in a lazy way, except the comparison to zero, which inspects an exponential number of bits in constant time… so it's cheating a bit, but I guess all solutions will cheat similarly ^^ $\endgroup$ Commented Jun 4, 2022 at 18:47
