Monthly Archives: March 2011

Life in Algorithmica

In 2009, the Center for Computational Intractability organized a workshop on the status of “Impagliazzo’s Worlds,” which are possible resolutions of the P vs. NP question that have different implications for cryptography. Russell Impagliazzo gave the first talk, with the following abstract:

“A personal view on average-case complexity” introduced five possible worlds as metaphors for the outcomes of fundamental questions in complexity concerning whether and to what extent intractable problems exist.  In this talk, I will try to put forward some concrete open problems addressing these issues.  I will emphasize those problems I believe are within reach, in that there is no fundamental reason they should be difficult, but whose solutions would cast light on which of the five worlds is the case and what these worlds look like.

A video of this talk is here, and I watched it recently. Perhaps most surprising to me was how many open questions remain even if we live in Algorithmica, the world where P = NP. I will summarize that part of Impagliazzo’s talk in the rest of this post.


Large Hadron Collider data may falsify supersymmetry

In a recent comment on this blog, Jim Blair said, “I think there is one school of thought in Theoretical Physics where they attempt to use mathematical symmetries to predict the existence of unknown particles.” I wanted to address this for a moment, because 2011 might be a year in which decades of work in theoretical physics are rendered irrelevant by empirical observation.

Supersymmetry (often abbreviated SUSY) is a heavily studied physical theory that postulates the existence of “superpartners” of the known elementary particles — partner particles that are heavier and whose spin differs by one half. However, as reported in Nature News, recent data from the Large Hadron Collider are casting increasing doubt on the correctness of SUSY. From that article:

“Privately, a lot of people think that the situation is not good for SUSY,” says Alessandro Strumia, a theorist at the University of Pisa in Italy, who recently produced a paper about the impact of the LHC’s latest results on the fine-tuning problem. “This is a big political issue in our field,” he adds. “For some great physicists, it is the difference between getting a Nobel prize and admitting they spent their lives on the wrong track.” [John] Ellis [of CERN] agrees: “I’ve been working on it for almost 30 years now, and I can imagine that some people might get a little bit nervous.”

Honestly, I think there’s an important lesson here for theoretical computer science and computational complexity theory: don’t base your life’s work on unproven assumptions divorced from empirical fact. Otherwise, you risk someone coming along and showing that, hey, we live in Pessiland (or wherever), and all your hard work is confined to a footnote of history. (Pessiland is one of the five possible cryptographic worlds Russell Impagliazzo proposed in 1995, alongside Algorithmica, Heuristica, Minicrypt, and Cryptomania. For more details, including a comment by Boaz Barak about which world experts seem to think we live in, see here.)

Update on HBGary Federal and Anonymous

In a previous post, I discussed how Anonymous hacked into HBGary Federal and exposed plans to use false documents and sock puppetry to discredit Wikileaks and US labor unions. The US Congress has begun a formal investigation into the relationship between the Department of Defense and the companies HBGary Federal, Palantir Technologies, and Berico Technologies. (Articles by Wired and by Forbes.)

Of perhaps more significance to the social history of computing, Anonymous has started a recruitment campaign, Operation New Blood (#opnewblood), based on their success in taking down professional security firms and exposing the plans against Wikileaks and unions. There is quite a bit of momentum around this, including, for example, a well-produced recruitment video that is labeled as a class project. The video is almost seven minutes long; I will quote a couple of excerpts.

With a company in shambles, a CEO’s life derailed, and a dark secret uncovered, Anonymous is beginning to look less like a hacker group.  It begins to look like your best interest, as well as mine….  Since the conception of Anonymous, they have been responsible for various operations around the world, from bringing Internet service to the Egyptian people during their recent revolution, to opposing massive government agencies and corporations.

To be clear, I’m not a member of Anonymous, nor do I intend to become one, if for no other reason than my belief that structure and government are actually necessary, and I don’t see a future in anarchic movements. However, I think this situation is a big deal, because I expect the recruitment push to find significant traction among people with computer skills who feel disaffected by society — and that group of disaffected computer folk is growing as computer science becomes deprofessionalized. I also believe — though I have no hard evidence for this — that the age and economic standing of the “average active Anon” are already on the rise, because over the last several years, their activities seem to have moved from juvenile baiting to occasional “freedom fighting” to the current posture of Emma Goldman-esque anarchic class warfare.

I predict a marked increase in politically and economically motivated hacktivism over the next five years, and a concomitant governmental backlash of aggressive new laws and enforcement on the use of computers and the posting and transfer of data.

Trivial reduction question is deep

In a previous post, I discussed one of my favorite proofs from the Theoretical Computer Science question-and-answer site.  Today, I’d like to present another short-but-awesome piece from the same site.

Adam Crume asked:

If a problem is NP-hard (using polynomial time reductions), does that imply that it is P-hard (using log space or NC reductions)? It seems intuitive that if it is as hard as any problem in NP that it should be as hard as any problem in P, but I don’t see how to chain the reductions and get a log space (or NC) reduction.

I didn’t really think much about this question — or much of it, to be brutally honest — and a couple of commenters didn’t either. Crume basically got criticized for mixing apples and oranges, asking an ill-formed question because he combined two different notions of reduction in the same sentence. Fortunately, Noam Nisan answered with the following three-sentence gem:

No such implication is known. In particular it may be that L ≠ P = NP in which case all problems (including trivial ones) are NP-hard under poly-time reductions (as the reduction can just solve the problem), but trivial ones (in particular ones that lie in L) are surely not P-hard under logspace reductions (as otherwise L = P). The same goes for NC instead of L.
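To make the first half of that argument concrete, here is a minimal sketch in Python. The names are mine, not Nisan’s, and solve_B is hypothetical: a polynomial-time decider for an NP language B, which exists under the assumption P = NP. The target language A only needs one yes-instance and one no-instance.

```python
# Sketch of the "reduction that can just solve the problem."
# solve_B: hypothetical polynomial-time decider for the NP language B
#          (it exists if P = NP).
# yes_instance_of_A / no_instance_of_A: fixed instances of the target
#          language A with known answers.

def reduce_B_to_A(x, solve_B, yes_instance_of_A, no_instance_of_A):
    """Map an instance x of B to an instance of A with the same yes/no answer."""
    return yes_instance_of_A if solve_B(x) else no_instance_of_A

# Toy demonstration with B = "binary strings with an even number of 1s"
# (trivially in P) and A = "strings of length 3" as a stand-in target:
print(reduce_B_to_A("1001", lambda s: s.count("1") % 2 == 0, "abc", "zzzz"))
# -> "abc", a yes-instance of A, since "1001" is a yes-instance of B
```

The reduction runs in polynomial time because solve_B does, so even a trivial target is NP-hard under poly-time reductions. The second half of the answer is the usual composition argument: a logspace reduction to a problem that itself lies in L composes with that problem’s logspace algorithm, so a P-hard problem in L would force L = P.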

I learned a lot from that answer, and I bet others did too, given how dismissive we were of the question.  This reminded me of perhaps my favorite research advice: don’t be afraid to ask dumb questions (and answer them).

Adam Crume’s original question is here.

Retraction Watch’s post about the retraction of a mathematical paper

I’ve gotten interested in the future of science blogging in general, and computer science blogging in particular.  I’ll write a “real” post about this at some point, but I wanted to mention something brief today.

There’s a relatively new blog, Retraction Watch, focusing on medicine and the life sciences. Their statement of purpose is here. Their blog entry for today is about the retraction of a paper from Applied Mathematics Letters. The post speaks for itself, and I won’t quote it here, but I thought this new practice of science blogs taking journals to task might be of interest to people who stop by my page.

Computational action: the question I didn’t ask Jeannette Wing

Slide from Jeannette Wing's 2008 talk "Computational Thinking and Thinking About Computing"

I’ve been a fan of Jeannette Wing’s crusade to teach computational thinking since I read her 2006 CACM article on the subject a couple of years ago. And, a few weeks back, she spoke at Northwestern about computational thinking, so I got to see her live. (Slides from a 2008 version of her talk are here.)

In brief, the notion of incorporating computational thinking into our curriculum means teaching the fundamentals of thinking algorithmically, much as we now (are supposed to) teach the fundamentals of thinking mathematically. Wing makes a good argument that this is different from “just” teaching mathematics in a different way: according to her, computational thinking borrows from both mathematics and engineering, in that its goal is to solve problems as well as possible, given material constraints.

The deprofessionalization of computer science

Source: The Economist

I don’t mean by the title that computer scientists are behaving less professionally. Rather, I mean that the jobs available to people with advanced degrees in computer science have much lower professional standing than they did even five years ago, to say nothing of 25. This happened to social workers in the 1970s, and to physicists in the 1980s. A convenient slogan to explain this situation is that there are “too many PhDs” in those fields. The connotation of such a phrase is, “Well, aren’t you stupid for going into a career that has no future; it’s your fault you’re facing problems now; stop whining.” However, if we step back from the slogan and question its context, we can see a larger picture. There are too many PhDs for a society that does not value research enough to provide jobs for those qualified to perform it.

Terence Tao explains public key cryptography

Northwestern University awarded Terence Tao the 2010 Nemmers Prize in Mathematics, and, as a result, he gave a public lecture here last week. The material was similar to the content of a presentation he gave a couple of years ago that is on YouTube. I attended the lecture at Northwestern not so much for the material per se, but because I wanted to see how he explained mathematical concepts to a general audience. You can’t see the effect of his words in the YouTube video, because the camera is focused on him. I wanted to understand his interaction with the other people in the room.


Minimum edge length partitioning of rectilinear polygons

Figure 10 from Minimum Edge Length Partitioning of Rectilinear Polygons by Lingas et al., 1982.

The title of this blog post is the same as that of a seminal paper in computational geometry and VLSI design by Lingas, Pinter, Rivest, and Shamir from 1982. The authors present an O(n^4) algorithm that produces a minimum-edge-length rectangular partitioning of a rectilinear polygon without holes. (A rectilinear polygon, also called an orthogonal polygon, is one whose sides are all axis-parallel, i.e., all either “horizontal” or “vertical.”) They also prove that the same problem is NP-complete (in its decision version) when the rectilinear polygon may contain rectilinear holes.

Variations of this problem come up all the time in VLSI design. There have been many subsequent papers on approximation algorithms, faster methods for tractable cases, and so on. The Handbook of Discrete and Computational Geometry surveys the known results in Chapter 26. However, I recently needed to review the original NP-completeness proof, and apparently it exists nowhere online. Dozens of papers (and books) cite the result, but no one seems to reprove it, and it did not appear in a journal with an online archive. I took the liberty of scanning it so that it would be available for download somewhere (pdf file here: MinimumEdgeLengthPartitioningOfRectilinearPolygons), and, with this blog post’s title identical to the article’s title, I hope that anyone searching for it in the future will find this link without much effort.
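For readers unfamiliar with the problem: “edge length” refers to the total length of the new axis-parallel segments introduced to cut the polygon into rectangles, and the goal is to make that total as small as possible. The toy sketch below (my own notation, not from the paper) only evaluates that objective for a given candidate partition; finding the minimizing partition is the hard part that the O(n^4) algorithm, and the NP-completeness proof, address.

```python
# Toy illustration of the objective in the Lingas-Pinter-Rivest-Shamir problem:
# partition a rectilinear polygon into rectangles by adding axis-parallel
# segments, minimizing the total length of the segments added. This only
# scores a candidate partition; it does not search for the optimum (that is
# what the paper's O(n^4) dynamic program does for hole-free polygons).

def partition_cost(added_segments):
    """Total length of the axis-parallel segments introduced by a partition.

    added_segments: list of ((x1, y1), (x2, y2)) pairs, each assumed to be
    horizontal or vertical.
    """
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in added_segments)

# Example: the L-shaped polygon covering [0,4]x[0,1] plus [0,2]x[0,3] has one
# reflex vertex, at (2, 1). Cutting straight down from it splits the polygon
# into two rectangles at cost 1; cutting left to the boundary also works but
# costs 2. The cheaper cut is the optimum for this polygon.
print(partition_cost([((2, 0), (2, 1))]))  # 1  (vertical cut)
print(partition_cost([((0, 1), (2, 1))]))  # 2  (horizontal cut)
```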

Continue reading

The man-computer symbiotic break point

A wasp simultaneously laying an egg in a fig ovary and pollinating the stigmas with her forelegs. Source: figweb.org

In March 1960, J.C.R. Licklider began his article Man-Computer Symbiosis:

The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree…. This co-operative “living together in intimate association, or even close union, of two dissimilar organisms” is called symbiosis.

Nature’s degree of specialization is even more extreme than the quotation implies: different fig trees are uniquely pollinated by different fig wasps, and, in fact, the wasp that appears in the image above is not the same species Licklider referred to. (More information is available at this figweb page or on the Wikipedia page on fig wasps.) Licklider envisioned a similar degree of specialization in the human-computer interface: each operator might have a distinct way to interact with the computer, so each member of the group could perform their tasks in parallel, each talking to the central machine. However, in this vision, there is at most one computer per person: a “symbiotic cooperation between a computer and a team of men.” What happens when there is more than one computer per person in the world? What if, for each person in the world, there are thousands, or even millions, of computational devices? David Tennenhouse termed the outnumbering of humans by computers a “breakpoint,” and argued in the year 2000 that we were rapidly approaching it, so it was essential to reinvent computer science.