Polygon rectangulation, part 2: Minimum number of fat rectangles

This post is the second in a series on polygon rectangulation. In my previous post, I discussed methods to decompose an orthogonal polygon into a minimum number of rectangles.  (See that post for definitions and motivation.)  In my next post, I will consider finding a rectangulation of minimum total edge length, a topic of great importance in VLSI.  In this post, I will consider a modification of the minimum-number-of-rectangles problem.  The modification was motivated by concerns in VLSI but, as yet, only a theoretical algorithm exists, with a running time of O(n^{42}).  (That is not a typo; the bound is obtained through a “natural” dynamic programming solution to the problem I am about to state.)

Printed circuits are created through a process called photolithography, in which a focused beam etches a design onto a substrate.  While such beams are, in one sense, extremely narrow, as the Wikipedia article on VLSI states, current photolithography techniques “tend closer to the fundamental laws of optics.”  Among other things, this means that the fixed minimum width of the beam suddenly becomes important: in practice, it creates a “fat fingers” problem.  Suppose our substrate is in the shape of an orthogonal polygon P, and we use a rectangulation technique from the previous post to rectangulate P.  We may not be able to apply the rectangulation in real life, because we have no guarantee that all of the rectangles are wider than our beam.  Therefore, we would like to constrain the space of rectangulations we consider to ones that are feasible to etch: informally, ones that contain only “fat” rectangles.  We formalize this optimization problem as follows.

Fat Rectangle Optimization Problem: Given an orthogonal polygon P, maximize, over all rectangulations of P, the length \delta of the shortest rectangle side.  Among the partitions achieving the same \delta, choose a partition with the fewest rectangles.
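To make the objective concrete, here is a minimal Python sketch of my own (an illustration, not code from the papers discussed below): given a candidate rectangulation, represented as a list of axis-aligned rectangles (x1, y1, x2, y2), it computes the two quantities the problem optimizes, first maximizing \delta and then minimizing the number of rectangles.

# Illustrative sketch (mine, not from the papers discussed).
def fat_objective(rects):
    """Return (delta, count) for rectangles given as (x1, y1, x2, y2),
    where delta is the shortest side over all rectangles."""
    delta = min(min(x2 - x1, y2 - y1) for (x1, y1, x2, y2) in rects)
    return delta, len(rects)

# An L-shaped polygon ([0,4] x [0,2] plus [0,2] x [2,4]) split two ways.
partition_a = [(0, 0, 4, 2), (0, 2, 2, 4)]                # 2 rectangles
partition_b = [(0, 0, 1, 4), (1, 0, 2, 4), (2, 0, 4, 2)]  # 3 rectangles
print(fat_objective(partition_a))  # (2, 2)
print(fat_objective(partition_b))  # (1, 3)

Under the problem’s lexicographic criterion (larger \delta wins, with ties broken by fewer rectangles), partition_a is the better of the two.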

This optimization problem has been studied by O’Rourke and co-authors in at least three papers.  In this blog post, I will focus on The Structure of Optimal Partitions of Orthogonal Polygons into Fat Rectangles, by O’Rourke and Tewari (2004).

Polygon rectangulation, part 1: Minimum number of rectangles

Over the next few posts, I will consider problems of polygon rectangulation: given as input an orthogonal polygon P (one in which every interior angle is 90 or 270 degrees), decompose P into adjacent, nonoverlapping rectangles that fully cover P.  Different problems impose different conditions on what constitutes a “good” rectangulation.  Today we will discuss how to find a rectangulation with the fewest rectangles.
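Before going further, it may help to pin down what counts as a valid rectangulation.  Here is a minimal Python sketch of my own (an illustration, not code from the literature): for a polygon with integer vertices, represent both the polygon and the candidate rectangles as sets of unit grid cells, and check that the rectangles cover every cell of P exactly once.

# Illustrative sketch (mine): validity check for a rectangulation.
from itertools import product

def cells(x1, y1, x2, y2):
    """Unit grid cells of the axis-aligned rectangle [x1,x2] x [y1,y2]."""
    return set(product(range(x1, x2), range(y1, y2)))

def is_rectangulation(polygon_cells, rects):
    """True iff the rectangles are nonoverlapping and exactly cover P."""
    covered = [cells(*r) for r in rects]
    union = set().union(*covered)
    no_overlap = sum(len(c) for c in covered) == len(union)
    return no_overlap and union == polygon_cells

# L-shaped polygon: [0,4] x [0,2] plus [0,2] x [2,4]
L_shape = cells(0, 0, 4, 2) | cells(0, 2, 2, 4)
print(is_rectangulation(L_shape, [(0, 0, 4, 2), (0, 2, 2, 4)]))  # True
print(is_rectangulation(L_shape, [(0, 0, 4, 4)]))                # False

(The rasterization trick only works for integer coordinates, but that suffices for small examples.)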

Polygon decomposition is a method often used in computer graphics and other fields, in order to break a (perhaps very complex) shape into lots of small manageable shapes.  Polygon triangulation may be the best-studied decomposition problem.  (When triangulating, we don’t require that the input polygon be orthogonal, and our objective is to cut the polygon into triangles according to some notion of “best” decomposition.)  There is an extensive literature on polygon rectangulation as well, because of its connection to VLSI design.  Suppose, for example, that our input P represents a circuit board, and we want to subdivide the board by placing as little additional “ink” on the board as possible, in order to save money on each unit.  However, because of mechanical limitations, we can only place ink horizontally or vertically — i.e., only create rectangulations of P.  Many questions in VLSI design are closely related to finding a rectangulation of minimum total length, which I will discuss in a future post.  The algorithm for minimum-length rectangulation is more complicated than the one I will present today for minimum-number-of-rectangles rectangulation, so today’s post can be considered a warm-up.

The attendees of the recent Midwest Theory Day know that Ming-Yang Kao and I found an application of rectangulation to DNA self-assembly.  I will blog about that in the new year.  The only other application of rectangulation to self-assembly that I know about is A Rectangular Partition Algorithm for Self-Assembly, by Li and Zhang, which appeared in a robotics conference.  (Readers interested in the latest Midwest Theory Day are invited to check out a “workshop report” I wrote on the CSTheory Community Blog.)

These slides (pdf format) by Derrick Stolee contain many lovely pictures about polygon rectangulation.  I think they may be a bit hard to follow all the way through as there is no “attached narrative,” but I recommend them anyway. :-)

Can we derandomize BP.PP?

Two days ago, I added a “bounty” to a year-old unanswered question that Noam Nisan asked on CSTheory about the relationship between the Polynomial Hierarchy and the complexity class PP (probabilistic polynomial time).  I remember thinking when the question was first asked, “Wow, there is something important going on here,” but no one has posted an answer, even though the question has received a thousand views.  I hope this blog entry can drive a bit more traffic in that direction, because I bet someone, somewhere does know the answer, and I am quite confident that person isn’t me!

The issue, as I understand it, is whether the polynomial hierarchy (PH) is contained within PP in “the real world.”  There exist oracles (possible worlds) relative to which \textsf{PH} \not\subseteq \textsf{PP}.  (See the answers to this question of Huck Bennett for references and explanation.)  On the other hand, Scott Aaronson in this answer makes the case for \textsf{PH} \subseteq \textsf{PP} in the real world, because Toda’s Theorem implies that \textsf{PH} \subseteq \textsf{BP.PP}, where \textsf{BP.PP}, as Aaronson puts it, “is to \textsf{PP} as \textsf{BPP} is to \textsf{P}.”  Since we expect, because of plausible derandomization assumptions, that \textsf{BPP}=\textsf{P} in real life (and, more generally, that randomization does not help much), “probably” \textsf{BP.PP}=\textsf{PP}, and so, by Toda’s Theorem, \textsf{PH} \subseteq \textsf{PP}.
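(For concreteness, here is the standard definition of the \textsf{BP.} operator as I understand it; \textsf{BP.PP} is the case \mathcal{C} = \textsf{PP}.  A language L belongs to \textsf{BP.}\mathcal{C} if there exist a language L' \in \mathcal{C} and a polynomial p such that, for every input x,

\Pr_{r \in \{0,1\}^{p(|x|)}} \left[\, x \in L \iff \langle x, r \rangle \in L' \,\right] \ge 2/3.

In words, the \textsf{BP.} operator adds to a class the same kind of bounded-error randomness that takes \textsf{P} to \textsf{BPP}.)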

This sounded great to me, but I now realize I have no idea what \textsf{BP.PP} is, not really.  Fortunately for my ego, this seems to be the crux of Nisan’s question.  His point, as I understand it, is that just because we expect to be able to derandomize \textsf{BPP}, that does not mean we can derandomize \textsf{BP.PP}, precisely because the oracles I mentioned at the beginning exist.  Known derandomization techniques seem to relativize (that is, they give the same results with respect to all oracles), and Aaronson’s claim implies that \textsf{PH} \subseteq \textsf{PP} does not evaluate to the same truth value under all oracles.  So if \textsf{PH} \not\subseteq \textsf{PP} with respect to some oracle A, that “must” mean that pseudorandom generators strong enough to derandomize \textsf{BP.PP} do not exist relative to A.  Nisan’s feeling/conjecture is that somewhere in such oracle constructions, there must be an implied upper bound for \textsf{PP}, keeping it too weak to contain \textsf{PH}.

In a nutshell, what is the relationship of derandomization to \textsf{BP.PP}?

An independent discovery of Costas arrays

In today’s post, I will discuss a little-known combinatorics paper by E.N. Gilbert from 1965, in which he independently discovered the “logarithmic Welch” construction of Costas arrays.  Costas arrays are named after the late IEEE fellow John Costas, whose seminal paper on what are now called Costas arrays appeared, coincidentally, in 1965, the same year as Gilbert’s paper.  I had never heard of Costas arrays until just a few weeks ago, when I needed a combinatorial object with their properties for a problem I am trying to solve about error-prevention in molecular self-assembly.  So I approached the literature in a nonstandard way, found Gilbert’s paper, and eventually wrote experts on Costas arrays to confirm that Theorem 6 of Gilbert’s Latin Squares Which Contain No Repeated Digrams did indeed produce a construction that independently appeared in the literature “for the first time” in 1982.  Therefore, my objective with this blog post is twofold: (1) to make better known an area of combinatorics that may be familiar to information theorists but not to computer scientists; and (2) to change in a small way the intellectual history of this area, so that Gilbert is recognized as a co-inventor of an important class of combinatorial designs.  (ADDED LATER: It appears I may have accomplished (2).  The Wikipedia page on Gilbert now states that he was a co-inventor of Costas arrays, and cites this blog entry as justification.  This is my first Wikipedia citation, to my knowledge.  Somewhere along the way, I seem to have morphed from off-the-wall commentator to reliable source.)
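For readers who want to play with these objects, here is a minimal Python sketch of my own of the (exponential) Welch construction, together with a brute-force check of the Costas property (all difference vectors distinct).  The “logarithmic Welch” array Gilbert found is the inverse permutation, obtained by taking discrete logarithms, and the inverse of a Costas permutation is again Costas, since inverting a permutation just swaps the coordinates of each difference vector.

# Illustrative sketch (mine): exponential Welch construction and a
# brute-force check of the Costas property.
def welch_costas(p, g):
    """For a prime p and a primitive root g mod p, the permutation
    f(i) = g^i mod p (i = 0, ..., p-2) is a Costas permutation."""
    return [pow(g, i, p) for i in range(p - 1)]

def is_costas(perm):
    """Check that all difference vectors (j - i, perm[j] - perm[i]),
    for i < j, are distinct."""
    n = len(perm)
    vectors = {(j - i, perm[j] - perm[i])
               for i in range(n) for j in range(i + 1, n)}
    return len(vectors) == n * (n - 1) // 2

perm = welch_costas(11, 2)  # 2 is a primitive root mod 11
print(perm)                 # [1, 2, 4, 8, 5, 10, 9, 7, 3, 6]
print(is_costas(perm))      # True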

First, though, a quick pointer for anyone interested in DNA computation: Dave Doty just contributed an entry to the CSTheory Community Blog on the DNA Computing conference that took place last month at CalTech.  Dave was on the local arrangements committee, and, in his post, he talks about both technical results presented at the conference, and suggestions for organization of future CS conferences.  Good stuff.

Conference reporters wanted, sponsorship available

There is a new post category on the CSTheory StackExchange Community Blog — Conference/Workshop Reports — and Dave Pritchard (who often goes by the nick daveagp online) has just posted the first such report, covering the sixth annual meeting of EuroComb in Budapest, a conference in honor of Gyula Katona’s 70th birthday, and the European Symposium on Algorithms (ESA), which is part of ALGO this year (so the link points there).  Dave’s post is excellent, thank you very much!

I mentioned recently on this blog that we are looking for more people who can blog from conferences and workshops they attend, but there is a new wrinkle.  StackExchange has just announced a formal policy of sponsoring (some) site bloggers who attend conferences.  (See this post, especially the section “Sponsoring Community Leaders to Attend.”)  Further, Jeff Atwood, co-founder of StackExchange, just posted on the CSTheory meta site, encouraging theoretical computer scientists to take him up on this.  Suresh Venkatasubramanian has started a thread looking for reporters for the upcoming FOCS.

Here are five other conferences that are starting soon; all have a significant TCS component (with TCS broadly defined).

  1. Conference on Computer Science Logic (Sept 12-15)
  2. International Conference on Functional Programming (Sept 19-21)
  3. International Conference on DNA Computing and Molecular Programming (Sept 19-23)
  4. International Symposium on Distributed Computing (Sept 20-22)
  5. International Symposium on Graph Drawing (Sept 21-23)

Anyway, you get the idea, I’m sure.  I just wrote someone who will be at DNA, asking about blogging.  If you will be attending an upcoming TCS event, please contact Joe Fitzsimons or me, or leave an answer on Suresh’s thread; and if you know someone who will be attending, please share this information with them.  Thanks.

Theoretical Physics Q and A Site

UPDATE: The site has reached 100% commitment, and is about to launch a beta Q&A period.

There is an active proposal at StackExchange to build a research-level Q and A site for theoretical physics. There is a Physics StackExchange site already, for questions at the high school or undergraduate level.  The new site would be expert-level only, like MathOverflow for math, or CSTheory for theoretical computer science.

The new site is currently in the “commitment” phase, meaning more people need to add their names to a list of individuals who want to participate in the site when it goes live.  If enough people commit (the proposal is at 79% of the requirement as I write this), the site will enter a “private beta” phase, where only those who committed will be able to ask and answer questions for a while.

More information, and the commitment page, can be found here.

CSTheory blog feed was broken, now fixed

The RSS feed of the CSTheory Community Blog was broken for a couple of weeks.  Thanks to Jukka Suomela, we recently discovered this, and the feed is now valid again.  Here are the posts that appeared while the feed was down.

  1. Lower Bounds by Negative Adversary Method, by Artem Kaznatcheev.  Artem provides a technical introduction to proving lower bounds in quantum query complexity, by using the new technique of the negative adversary method.  He contrasts it with the older technique of the polynomial method.
  2. If a family of forbidden subgraphs is hard, does that imply that the graph class is hard?  Written by me.  I consider a question asked on CSTheory: is there a relationship between the computational complexity of recognizing a forbidden family of subgraphs, and the computational complexity of the graph class defined by the forbidden family? Hugo Nobrega gave an inspired answer to this question, but ultimately, things are not fully resolved.  An intriguing “open problem” for people interested in graph minors and forbidden subgraphs.
  3. Happy birthday, cstheory!  Written by Suresh Venkatasubramanian.  Suresh reviews some of the open problems, original proofs, career advice and technical discussions that have appeared on cstheory.stackexchange.com since it was founded Aug 16, 2010.
  4. On Learning Regular Languages, by Lev Reyzin.  Lev gives a technical introduction to learning theory applied to regular languages, for example, whether it is NP-complete to find the minimum regular expression that captures a regular language.  He considers multiple learning models and mentions open problems.

I hope you enjoy the posts!

As always, we are looking for one-time or regular contributors for the community blog.  The following is new: we would like to start a regular “column” of Conference Reports, where TCS’ers report, either formally and technically, or informally and more personally (or both), about conferences they are currently attending.  We already have one correspondent for ESA (European Symposium on Algorithms), which starts September 5th (thank you, Dave!), but it is a large meeting, and additional perspectives would be welcome.  If you would like to blog about a TCS conference or workshop you will be attending, please get in touch with me or Joe Fitzsimons, or add your name to the Community Blog Contributor Signup Sheet.  Thank you.


DNA self-assembly of multicolored rectangles

(UPDATE: My statements in this blog entry about the Göös–Orponen paper were based on an earlier draft, which was on an author’s web page but was not the final, published version.  I have updated the link to their paper, and struck out my statements that are not consistent with their final draft.  In particular, the final version of their paper reads, “Ma & Lombardi show a certain derivative of the above optimization problem NP-hard in [7].  However, to our knowledge, a proof of the NP-hardness of the PATS problem as stated above is lacking.”  Please see the comment of Mika Göös below for more information.)

One of my research interests is the DNA self-assembly of multicolored shapes.  Almost all DNA self-assembly literature — both theory and practice — has focused on the assembly of either 1-color or 2-color shapes.  Nevertheless, if we think of the DNA assembly as a substrate, or scaffolding, upon which transistors or nanomachines are placed later, different colors can correspond to OR gates or NAND gates, or to the “landing pads” of much more sophisticated objects.  A while back, I did a literature search on the self-assembly of multicolored shapes, and I thought I “knew everything” that was currently in the literature.  However, I was wrong.  I recently discovered papers in the literature track of computer-aided design (CAD) of integrated circuits (a track I had never followed before, and clearly will have to start following), whose objective is to tackle DNA self-assembly of multicolored rectangles, an obviously related special case of “my” problem.  I will discuss two of these papers in this blog entry.

The two papers, both by Xiaojun Ma and Fabrizio Lombardi of Northeastern University, are On the Computational Complexity of Tile Set Synthesis for DNA Self-Assembly, and Combinatorial Optimization Problem in Designing DNA Self-Assembly Tilesets.  The second is much shorter, reads like an announcement of results, and appeared in a workshop.  The first appeared in a journal and contains a full proof of an NP-completeness result.  I had initially intended this blog entry to be a technical discussion of the NP-completeness result, which I find clever, and I think it defines a graph model of tilesets that might be useful in other settings.  However, when I looked into the literature, I found things were a bit of a mess.   A subsequent paper by other authors incorrectly states that Ma and Lombardi proved something stronger than they actually did, and this error is understandable, because Ma and Lombardi changed the informal description of their theorem in between the workshop paper and the journal paper.  (The workshop paper makes a stronger claim, which the journal proof does not support.) So the rest of this entry reads more like a detective story — or a he-said she-said dialogue (!) — than an exposition.  It took me a couple days to separate claims from proofs, and I hope that, by writing this up, I will save other researchers some time.


Accepted papers for DNA Computing and Molecular Programming

The list of accepted papers for the 17th International Conference on DNA Computing and Molecular Programming is now up.  This isn’t breaking news, but I haven’t seen it mentioned on other theory blogs, so I figured it was worth posting.

Some of these papers, and ones they cite, will be my blogging focus over the next few weeks.

The Dershowitz/Falkovich proof of the Extended Church-Turing Thesis

In a previous post, I considered a proof of the Church-Turing Thesis that Dershowitz and Gurevich published in the Bulletin of Symbolic Logic in 2008.  It is safe to say that the proof is controversial — not because it is technically inaccurate, but because it relies on an axiomatization of computation that excludes randomness, parallelism and quantum computing.  (For more discussion of that paper, please see the previous blog entry, or the question about disproving the Church-Turing Thesis on CSTheory.)  Nevertheless, one of the authors, Dershowitz, together with his Ph.D. student Falkovich, has just published a proof of an even more controversial statement: the Extended Church-Turing Thesis.  It is this new proof that I will now discuss.

The Extended Church-Turing Thesis is the statement that every physically realizable computational device can be simulated efficiently by Turing machines.  (By contrast, the “mere” Church-Turing Thesis says that Turing machines can simulate any computational device, but perhaps at the cost of an exponential blowup in time and/or space.)  Unlike the Church-Turing Thesis, which conventional wisdom in computer science holds to be true, many CS people believe the Extended Church-Turing Thesis to be false, because quantum computers are expected to be strictly stronger than classical computers.  (The most famous example of this difference in strength is a quantum algorithm, due to Peter Shor, that can factor large integers in polynomial time.  Most CS people believe that classical computers cannot factor integers in polynomial time, and many cryptographic schemes depend on this infeasibility.)  Dershowitz and Falkovich sidestep this issue by attempting to axiomatize only “classical” algorithms — no quantum, no randomization, no massive parallelism.  They define an algorithm’s implementation, a program, and an efficient program, and then prove that any efficient program (in their general axiomatic sense) can be simulated in linear time by a Random Access Machine (RAM).  Since a RAM can be simulated efficiently by a Turing machine, the Extended Church-Turing Thesis follows as a consequence.