In 2009, the Center for Computational Intractability organized a workshop on the status of “Impagliazzo’s Worlds,” which are possible resolutions of the P=NP question that have different implications for cryptography. Russell Impagliazzo gave the first talk, with the following abstract:
“A personal view on average-case complexity” introduced five possible worlds as metaphors for the outcomes of fundamental questions in complexity concerning whether and to what extent intractable problems exist. In this talk, I will try to put forward some concrete open problems addressing these issues. I will emphasize those problems I believe are within reach, in that there is no fundamental reason they should be difficult, but whose solutions would cast light on which of the five worlds is the case and what these worlds look like.
A video of this talk is here, and I watched it recently. Most surprising to me, perhaps, was how many open questions there are even if we live in Algorithmica, the world where P=NP. I will summarize this section of Impagliazzo’s talk in the rest of this post.
In a recent comment on this blog, Jim Blair said, “I think there is one school of thought in Theoretical Physics where they attempt to use mathematical symmetries to predict the existence of unknown particles.” I wanted to address this for a moment, because 2011 might be a year in which decades of work in theoretical physics is rendered irrelevant by empirical observation.
Supersymmetry (often abbreviated SUSY) is a heavily studied physical theory that postulates the existence of “superpartners” of the known elementary particles — partner particles that are heavier and whose spin differs by half a unit. However, as reported in Nature News, recent data from the Large Hadron Collider are casting increasing doubt on the correctness of SUSY. From that article:
“Privately, a lot of people think that the situation is not good for SUSY,” says Alessandro Strumia, a theorist at the University of Pisa in Italy, who recently produced a paper about the impact of the LHC’s latest results on the fine-tuning problem4. “This is a big political issue in our field,” he adds. “For some great physicists, it is the difference between getting a Nobel prize and admitting they spent their lives on the wrong track.” [John] Ellis [of CERN] agrees: “I’ve been working on it for almost 30 years now, and I can imagine that some people might get a little bit nervous.”
Honestly, I think there’s an important lesson here for theoretical computer science and computational complexity theory: don’t base your life’s work on unproven assumptions, divorced from empirical fact. Otherwise, you risk someone coming along and showing that, hey, we live in Pessiland (or wherever), and all your hard work is confined to a footnote of history. (Pessiland is a possible cryptographic world that we may live in; Russell Impagliazzo proposed five such possible worlds in 1995. For more details, including a comment by Boaz Barak about which world experts seem to think we live in, see here.)
In a previous post, I discussed one of my favorite proofs from the Theoretical Computer Science question-and-answer site. Today, I’d like to present another short-but-awesome piece from the same site.
Adam Crume asked:
If a problem is NP-hard (using polynomial time reductions), does that imply that it is P-hard (using log space or NC reductions)? It seems intuitive that if it is as hard as any problem in NP that it should be as hard as any problem in P, but I don’t see how to chain the reductions and get a log space (or NC) reduction.
I didn’t really think much about this question — or much of it, to be brutally honest — and a couple commenters didn’t either. Crume basically got criticized for mixing apples and oranges, asking an ill-formed question because he combined two different notions of reduction in the same sentence. Fortunately, Noam Nisan answered with the following three-sentence gem:
No such implication is known. In particular it may be that P = NP, in which case all problems (including trivial ones) are NP-hard under poly-time reductions (as the reduction can just solve the problem), but trivial ones (in particular ones that lie in L) are surely not P-hard under logspace reductions (as otherwise L=P). The same goes for NC instead of L.
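Nisan’s three sentences compress two separate observations. Written out in my own notation (not his — this is just my paraphrase of the argument, with the implicit hypothesis made explicit):

```latex
\textbf{Observation 1.} If $\mathrm{P}=\mathrm{NP}$, then every language $B$
with at least one yes-instance and one no-instance is NP-hard under
polynomial-time reductions.

\textit{Sketch.} Given any $A \in \mathrm{NP} = \mathrm{P}$, the reduction
simply decides $A$ in polynomial time and outputs a fixed yes- or no-instance
of $B$ accordingly.

\textbf{Observation 2.} A trivial language $B \in \mathrm{L}$ cannot be P-hard
under logspace reductions unless $\mathrm{L} = \mathrm{P}$.

\textit{Sketch.} If every $C \in \mathrm{P}$ reduced to $B$ in logspace, then
composing the reduction with a logspace decider for $B$ would put all of
$\mathrm{P}$ inside $\mathrm{L}$.
```

So under P = NP the two notions of hardness provably come apart, which is why no chaining of reductions can work in general.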
I learned a lot from that answer, and I bet others did too, given how dismissive we were of the question. This reminded me of perhaps my favorite research advice: don’t be afraid to ask dumb questions (and answer them).
Adam Crume’s original question is here.
I’ve gotten interested in the future of science blogging in general, and computer science blogging in particular. I’ll write a “real” post about this at some point, but I wanted to mention something brief today.
There’s a relatively new blog, Retraction Watch, focusing on medicine and the life sciences. Their statement of purpose is here. Their blog entry for today is about a retraction of a paper from Applied Mathematics Letters. The post speaks for itself, and I won’t quote it here, but I thought this new process of science blogs taking journals to task might be of interest to people who stop by my page.
Slide from Jeannette Wing's 2008 talk "Computational Thinking and Thinking About Computing"
I’ve been a fan of Jeannette Wing’s crusade to teach computational thinking since I read her 2006 CACM article on the subject a couple years ago. And, a few weeks back, she spoke at Northwestern about computational thinking, so I got to see her live. (Slides from a 2008 version of her talk are here.)
In brief, the notion of incorporating computational thinking into our curriculum means teaching the fundamentals of thinking algorithmically, much as we now (are supposed to) teach the fundamentals of thinking mathematically. Wing makes a good argument that this is different from “just” teaching mathematics in a different way: according to her, computational thinking borrows from both mathematics and engineering, in that its goal is to solve problems as well as possible, given material constraints.
Northwestern University awarded Terence Tao the 2010 Nemmers Prize in Mathematics, and, as a result, he gave a public lecture here last week. The material was similar to the content of a presentation he gave a couple years ago that is on YouTube. I attended the lecture at Northwestern, not so much for the material per se, but because I wanted to see how he explained mathematical concepts to a general audience. You can’t see the effect of his words in the YouTube video, because the camera is focused on him. I wanted to understand his interaction with the other people in the room.
Figure 10 from Minimum Edge Length Partitioning of Rectilinear Polygons by Lingas et al., 1982.
The title of this blog post is the same as that of a seminal paper in computational geometry and VLSI design by Lingas, Pinter, Rivest and Shamir from 1982. The authors present an algorithm to produce a minimum-length rectangular partitioning of a rectilinear polygon without holes. (A rectilinear polygon, also called an orthogonal polygon, is one whose sides are all axis-parallel, i.e., all either “horizontal” or “vertical.”) They also prove the NP-completeness of the same optimization problem in the case where the rectilinear polygon has rectilinear holes.
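To make the definition concrete, here is a minimal sketch (my own code, not from the paper) that checks whether a polygon, given as an ordered list of vertices, is rectilinear — that is, whether every side is axis-parallel:

```python
def is_rectilinear(vertices):
    """Return True if every edge of the closed polygon, given as an
    ordered list of (x, y) vertices, is horizontal or vertical."""
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        if x1 != x2 and y1 != y2:       # edge is neither vertical nor horizontal
            return False
    return True

# An L-shaped hexagon is rectilinear; a triangle is not.
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
triangle = [(0, 0), (2, 0), (1, 1)]
```

The partitioning problem the paper studies takes such a polygon and asks for a set of axis-parallel cuts, of minimum total length, that divides it into rectangles.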
Variations of this problem come up all the time in VLSI design. There have been many subsequent papers on approximation algorithms, faster methods for tractable cases, and so on. The Handbook of Discrete and Computational Geometry has a survey of known results in Chapter 26. However, I recently needed to review the original NP-completeness proof, and apparently it exists nowhere online. Dozens of papers (and books) cite the result, but no one seems to reprove it, and it did not appear in a journal with an online archive. I took the liberty of scanning it so it would be available for download somewhere (pdf file here: MinimumEdgeLengthPartitioningOfRectilinearPolygons), and, with this blog post’s title identical to the article’s, I hope that anyone searching for it in the future might find this link without much effort.
A wasp simultaneously laying an egg in a fig ovary and pollinating the stigmas with her forelegs. Source: figweb.org
In March 1960, J.C.R. Licklider began his article Man-Computer Symbiosis:
The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree…. This co-operative “living together in intimate association, or even close union, of two dissimilar organisms” is called symbiosis.
Nature’s degree of specialization is even more extreme than the quotation implies: different fig trees are each pollinated by their own species of fig wasp, and, in fact, the wasp that appears in the image above is not the same species Licklider referred to. (More information is available at this figweb page, or the Wikipedia page on fig wasps.) Licklider envisioned a similar degree of specialization in the human-computer interface: each operator might have a distinct way of interacting with the computer, so that each member of the group could perform their tasks in parallel, each talking to the central machine. In this vision, however, there is one computer per person, or “symbiotic cooperation between a computer and a team of men.” What happens when there is more than one computer per person in the world? What if, for each person in the world, there are thousands, or even millions, of computational devices? David Tennenhouse termed the outnumbering of humans by computers a “breakpoint,” and argued in the year 2000 that we were rapidly approaching that breakpoint, so it was essential to reinvent computer science.