In 2009, the Center for Computational Intractability organized a workshop on the status of “Impagliazzo’s Worlds,” which are possible resolutions of the P=NP question that have different implications for cryptography. Russell Impagliazzo gave the first talk, with the following abstract:
“A personal view on average-case complexity” introduced five possible worlds as metaphors for the outcomes of fundamental questions in complexity concerning whether and to what extent intractable problems exist. In this talk, I will try to put forward some concrete open problems addressing these issues. I will emphasize those problems I believe are within reach, in that there is no fundamental reason they should be difficult, but whose solutions would cast light on which of the five worlds is the case and what these worlds look like.
A video of this talk is here, and I watched it recently. Most surprising to me, perhaps, was how many open questions there are even if we live in Algorithmica, the world where P=NP. I will summarize this section of Impagliazzo’s talk in the rest of this post.
In a recent comment on this blog, Jim Blair said, “I think there is one school of thought in Theoretical Physics where they attempt to use mathematical symmetries to predict the existence of unknown particles.” I wanted to address this for a moment, because 2011 might be a year in which decades of work in theoretical physics is rendered irrelevant by empirical observation.
Supersymmetry (often abbreviated SUSY) is a heavily-studied physical theory that postulates the existence of “superpartners” of known elementary particles — complementary particles that are heavier and differ by half a unit of spin. However, as posted in Nature News, recent data from the Large Hadron Collider are casting increasing doubt on the correctness of SUSY. From that article:
“Privately, a lot of people think that the situation is not good for SUSY,” says Alessandro Strumia, a theorist at the University of Pisa in Italy, who recently produced a paper about the impact of the LHC’s latest results on the fine-tuning problem. “This is a big political issue in our field,” he adds. “For some great physicists, it is the difference between getting a Nobel prize and admitting they spent their lives on the wrong track.” [John] Ellis [of CERN] agrees: “I’ve been working on it for almost 30 years now, and I can imagine that some people might get a little bit nervous.”
Honestly, I think there’s an important lesson here for theoretical computer science and computational complexity theory: don’t base your life’s work on unproven assumptions, divorced from empirical fact. Otherwise, you risk someone coming along and showing that, hey, we live in Pessiland (or wherever), and all your hard work is confined to a footnote of history. (Pessiland is a possible cryptographic world that we may live in; Russell Impagliazzo proposed five such possible worlds in 1995. For more details, including a comment by Boaz Barak about which world experts seem to think we live in, see here.)
In a previous post, I discussed one of my favorite proofs from the Theoretical Computer Science question-and-answer site. Today, I’d like to present another short-but-awesome piece from the same site.
Adam Crume asked:
If a problem is NP-hard (using polynomial time reductions), does that imply that it is P-hard (using log space or NC reductions)? It seems intuitive that if it is as hard as any problem in NP that it should be as hard as any problem in P, but I don’t see how to chain the reductions and get a log space (or NC) reduction.
I didn’t really think much about this question — or, to be brutally honest, much of it — and a couple of commenters didn’t either. Crume basically got criticized for mixing apples and oranges: for asking an ill-formed question because he combined two different notions of reduction in the same sentence. Fortunately, Noam Nisan answered with the following three-sentence gem:
No such implication is known. In particular it may be that P = NP (but L ≠ P), in which case all problems (including trivial ones) are NP-hard under poly-time reductions (as the reduction can just solve the problem), but trivial ones (in particular ones that lie in L) are surely not P-hard under logspace reductions (as otherwise L = P). The same goes for NC instead of L.
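To make Nisan's first observation concrete, here is a minimal sketch (in Python, with illustrative names of my own choosing, not anything from the original answer): if the source problem can be solved outright in polynomial time, then a poly-time many-one "reduction" to any target problem that has at least one yes-instance and one no-instance is trivial — it just solves the source problem and outputs a fixed target instance with the matching answer. Under P = NP, every problem in NP is poly-time solvable, so every such target problem would be NP-hard under poly-time reductions.

```python
def trivial_reduction(solve_source, yes_instance, no_instance):
    """Build a many-one reduction from a source problem to a target problem.

    Assumes solve_source decides the source problem (in polynomial time,
    for the complexity-theoretic point to hold). yes_instance and
    no_instance are fixed instances of the target problem -- hypothetical
    placeholders here -- with answers YES and NO respectively.
    """
    def reduce(x):
        # Solve the source instance directly, then emit a canned target
        # instance carrying the same answer. No structure of x survives.
        return yes_instance if solve_source(x) else no_instance
    return reduce


# Toy example: "reduce" PARITY (trivially poly-time solvable) to an
# arbitrary target problem represented by two fixed instances.
is_odd = lambda n: n % 2 == 1
r = trivial_reduction(is_odd, yes_instance="YES", no_instance="NO")
```

The second half of the answer is exactly why this trick fails for logspace reductions: a logspace reduction cannot in general afford to solve the source problem itself, so the argument does not transfer from poly-time hardness to P-hardness.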
I learned a lot from that answer, and I bet others did too, given how dismissive we were of the question. This reminded me of perhaps my favorite research advice: don’t be afraid to ask dumb questions (and answer them).
Adam Crume’s original question is here.
I’ve gotten interested in the future of science blogging in general, and computer science blogging in particular. I’ll write a “real” post about this at some point, but I wanted to mention something brief today.
There’s a relatively new blog, Retraction Watch, focusing on medicine and the life sciences. Their statement of purpose is here. Their blog entry for today is about a retraction of a paper from Applied Mathematics Letters. The post speaks for itself, and I won’t quote it here, but I thought this new process of science blogs taking journals to task might be of interest to people who stop by my page.
Slide from Jeannette Wing's 2008 talk "Computational Thinking and Thinking About Computing"
I’ve been a fan of Jeannette Wing’s crusade to teach computational thinking since I read her 2006 CACM article on the subject a couple years ago. And, a few weeks back, she spoke at Northwestern about computational thinking, so I got to see her live. (Slides from a 2008 version of her talk are here.)
In brief, the notion of incorporating computational thinking into our curriculum means teaching the fundamentals of thinking algorithmically, much as we now (are supposed to) teach the fundamentals of thinking mathematically. Wing makes a good argument that this is different from “just” teaching mathematics in a different way: according to her, computational thinking borrows from both mathematics and engineering, in that its goal is to solve problems as well as possible, given material constraints.