Theory of Computing is the science that attempts to answer: “what is the most efficient way to solve a given computational task?”
“efficient”
…with respect to what resources?
- time
- space/memory
- randomness
- communication / interaction
- quantum-ness (https://theory.stanford.edu/~abouland/)
“most”
study of impossibilities: lower bounds
For instance, we want a result like “SAT cannot be solved in polynomial time” (which, if true, would mean P != NP)
history of the theory of computing
computability theory
computability theorists deal with the question: “which computational tasks can be solved at all?” Not everything is solvable: the halting problem is undecidable (Turing 1936), as sketched below.
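A minimal Python sketch of Turing's diagonalization argument. The function `halts` here is a hypothetical oracle, not real code: the theorem is precisely that no such total, correct function can exist.

```python
def halts(program, arg) -> bool:
    """Hypothetical oracle: returns True iff program(arg) halts.
    Assumed for contradiction; no such function can exist."""
    ...

def diagonal(program):
    """Do the opposite of what `halts` predicts for program run on itself."""
    if halts(program, program):
        while True:   # loop forever if the oracle says it would halt
            pass
    # otherwise halt immediately

# The contradiction: does diagonal(diagonal) halt?
#  - If halts(diagonal, diagonal) returns True, then diagonal(diagonal) loops forever.
#  - If it returns False, then diagonal(diagonal) halts.
# Either way the oracle is wrong, so `halts` cannot exist.
```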
complexity theory
It’s a subfield of computability theory, but restricted to problems that can be solved: among solvable problems, how efficiently can they be solved?
grand challenges of complexity theory
- P vs. NP (belief P != NP): if a solution to a computational task can be verified quickly, can it also be found quickly? (see the SAT sketch after this list)
- NP vs. coNP (belief NP != coNP): are there theorems that are simple to state but require lengthy proofs?
- P vs. NC (belief P != NC): is every efficient algorithm parallelizable, or are there inherently sequential tasks?
- P vs. L (belief P != L): can every polynomial-time algorithm be recompiled to use “essentially no memory” (logarithmic space)?
- P vs. PSPACE (belief P != PSPACE): does being solvable without much memory imply being solvable without much time?
- P vs. BPP (belief P == BPP): can every efficient randomized algorithm be derandomized, or are there problems with fast randomized algorithms but no fast deterministic ones?
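A small Python sketch of the verify-quickly vs. find-quickly gap behind P vs. NP, using SAT. The encoding (clauses as lists of signed variable indices) and the function names are illustrative choices, not standard: checking a candidate assignment takes time linear in the formula, while the only obvious way to find one tries all 2^n assignments.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of signed
# variable indices: 3 means x3, -3 means NOT x3.
Formula = list[list[int]]

def verify(formula: Formula, assignment: dict[int, bool]) -> bool:
    """Check a candidate assignment in time linear in the formula size."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula: Formula, n_vars: int):
    """Find a satisfying assignment by trying all 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(formula, assignment):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
phi = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(phi, 3))  # {1: False, 2: False, 3: True}
```

P = NP would mean the exponential search in `brute_force_sat` can always be replaced by something as fast (up to polynomial factors) as `verify` itself.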