# Tag Archives: general audience

## Ian Stewart’s Mathematics of Life

This post is based on a book review I recently wrote on The Mathematics of Life, by Ian Stewart. A final version of the review will appear in a future issue of SIGACT News.  Please feel free to download a PDF version of the full preprint, or just read an abbreviated version of it here, in blog format.

### Introduction

Ian Stewart is one of the premier popularizers of mathematics.  He has written over twenty books about math for lay audiences.  He has also co-authored science fiction, and books on the science of science fiction (three books on “The Science of Discworld”).  In his newest effort, The Mathematics of Life, Stewart focuses his talents on the mathematics of biology, and the result is superb.  In an easy, flowing read, with dozens of diagrams and scholarly footnotes — but without a single formula — he introduces the reader to a wide range of interactions between mathematicians and biologists.  I heartily recommend this book.

## Nanoexplanations registered at ResearchBlogging.org

I’ve registered this blog with ResearchBlogging.org, an aggregator and indexer of blog posts about peer-reviewed research.  Their statement of purpose on their About page is:

Do you like to read about new developments in science and other fields? Are you tired of “science by press release”? ResearchBlogging.org is your place. ResearchBlogging.org allows readers to easily find blog posts about serious peer-reviewed research, instead of just news reports and press releases.

ResearchBlogging.org has been active in its current incarnation since 2008, and has over 1,000 blogs registered.  Most of these blogs appear to be in the biological and physical sciences, but anyone who blogs about peer-reviewed research according to their guidelines is eligible to register a blog.  Once registered, a blogger includes a specially-generated citation in the content of a blog post; the ResearchBlogging.org feedreader recognizes the citation, indexes the blog entry, and includes it in the overall site feed.

My first post indexed by ResearchBlogging.org was Connected Guards in Orthogonal Art Galleries, which just appeared.  The special citation appears at the very end of the post.  You can see how the post appears in my blog’s ResearchBlogging.org profile.  I’ve gotten decent traffic from being included in this way, even though my post was not general-audience-friendly, and I think the general idea of the site is fantastic, so I plan to continue participating in the future.

From the ResearchBlogging.org feeds for Computer Science/Engineering, and for Mathematics, it appears that there are no active blogs devoted to the “hard” theorem-proof mathematical sciences.  The posts related to theorem-proof culture are from physics or computational biology.  There’s definitely some interesting reading, though.  My current favorite: Sexting as a form of attachment anxiety, by psychiatrist Walter van den Broek.

ResearchBlogging.org also performs meta-analysis of the participatory blogs.  See, for example, this article on gender disparity among science bloggers (men disproportionately outnumber women, both on the RB.org site, and in general).

So here’s a list of questions I don’t know the answers to, and I would like to request comments about them.  There are quite a few excellent blogs in computer science (not just theory) and mathematics, but there doesn’t appear to be the same level of online scientific participation in those fields as there is in chemistry, biology or medicine.  The low participation in RB.org is one example of this phenomenon.  Are there practical and cultural reasons for this?  Is the TCS community too small to “need” such social-networking tools?  Does blogging about research-level mathematics inherently require one to talk to a very small group, or to “waste” several paragraphs just defining notation and concepts?  Or are computer scientists and mathematicians behind the times, and in need of learning from the interaction tools used in other scientific disciplines?

## Large Hadron Collider data may falsify supersymmetry

In a recent comment on this blog, Jim Blair said, “I think there is one school of thought in Theoretical Physics where they attempt to use mathematical symmetries to predict the existence of unknown particles.”  I wanted to address this for a moment, because 2011 might be a year in which decades of work in theoretical physics are rendered irrelevant by empirical observation.

Supersymmetry (often abbreviated SUSY) is a heavily-studied physical theory that postulates the existence of “superpartners” of known elementary particles — complementary particles that are heavier and differ by a half-spin.  However, as posted in Nature News, recent data from the Large Hadron Collider are casting increasing doubt on the correctness of SUSY.  From that article:

“Privately, a lot of people think that the situation is not good for SUSY,” says Alessandro Strumia, a theorist at the University of Pisa in Italy, who recently produced a paper about the impact of the LHC’s latest results on the fine-tuning problem. “This is a big political issue in our field,” he adds. “For some great physicists, it is the difference between getting a Nobel prize and admitting they spent their lives on the wrong track.” [John] Ellis [of CERN] agrees: “I’ve been working on it for almost 30 years now, and I can imagine that some people might get a little bit nervous.”

Honestly, I think there’s an important lesson here for theoretical computer science and computational complexity theory: don’t base your life’s work on unproven assumptions, divorced from empirical fact.  Otherwise, you risk someone coming along and showing that, hey, we live in Pessiland (or wherever), and all your hard work is confined to a footnote of history.  (Pessiland is a possible cryptographic world that we may live in; Russell Impagliazzo proposed five such possible worlds in 1995.  For more details, including a comment by Boaz Barak about which world experts seem to think we live in, see here.)

## Update on HBGary Federal and Anonymous

In a previous post, I discussed how Anonymous hacked into HBGary Federal and exposed plans to use false documents and sock puppetry to discredit Wikileaks and US labor unions.  The US Congress has begun a formal investigation into the relationship between the Department of Defense and the companies HBGary Federal, Palantir Technologies, and Berico Technologies.  (Articles by Wired and by Forbes.)

Of perhaps more significance to the social history of computing, Anonymous has started a recruitment campaign, Operation New Blood (#opnewblood), based on their success in taking down professional security firms and exposing the plans against Wikileaks and unions.  There is quite a bit of activity around this, including, for example, a well-produced recruitment video that is labeled as a class project.  The video is almost seven minutes long; I will quote a couple of excerpts.

With a company in shambles, a CEO’s life derailed, and a dark secret uncovered, Anonymous is beginning to look less like a hacker group.  It begins to look like your best interest, as well as mine….  Since the conception of Anonymous, they have been responsible for various operations around the world, from bringing Internet service to the Egyptian people during their recent revolution, to opposing massive government agencies and corporations.

To be clear, I’m not a member of Anonymous, nor do I intend to become one, if for no other reason than my belief that structure and government are actually necessary, and I don’t see a future in anarchic movements.  However, I think this situation is a big deal, because I expect the recruitment push to find significant traction among people with computer skills who feel disaffected by society — and that group of disaffected computer folk is growing, as computer science becomes deprofessionalized.  I also believe — though I have no hard evidence for this — that the age and economic standing of the “average active Anon” is already on the rise, because over the last several years, their activities seem to have moved from juvenile baiting to occasional “freedom fighting” to this current position of an Emma Goldmanesque anarchic class warfare.

I predict a marked increase in politically and economically motivated hacktivism over the next five years, and a concomitant governmental backlash of aggressive new laws and enforcement on the use of computers and the posting and transfer of data.

## The deprofessionalization of computer science

Source: The Economist

I don’t mean by the title that computer scientists are behaving less professionally.  Rather, I mean that the jobs available for people with advanced degrees in computer science have much lower professional standing than they did even five years ago, to say nothing of 25.  This happened to social workers in the 1970s, and to physicists in the 1980s.  A convenient slogan to explain this situation is that there are “too many PhDs” in those fields.  The connotation of such a phrase is, “Well, aren’t you stupid for going into a career that has no future; it’s your fault you’re facing problems now; stop whining.”  However, if we step back from the slogan, and question its context, we can see a larger picture.  There are too many PhDs for a society that does not value research enough to provide jobs for those qualified to perform it.

## The man-computer symbiotic break point

A wasp simultaneously laying an egg in a fig ovary and pollinating the stigmas with her forelegs. Source: figweb.org

In March 1960, J.C.R. Licklider began his article Man-Computer Symbiosis:

The fig tree is pollinated only by the insect Blastophaga grossorum.  The larva of the insect lives in the ovary of the fig tree, and there it gets its food.  The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree…. This co-operative “living together in intimate association, or even close union, of two dissimilar organisms” is called symbiosis.

Nature’s degree of specialization is even more extreme than the quotation implies: different fig trees are uniquely pollinated by different fig wasps, and, in fact, the wasp that appears in the image above is not the same species Licklider referred to.  (More information available at this figweb page, or the Wikipedia page on fig wasps.)  Licklider envisioned a similar degree of specialization in the human-computer interface: each operator might have a distinct way to interact with the computer, so each member of the group could perform their tasks in parallel, each talking to the central machine.  However, in this vision, there is one computer per person, or “symbiotic cooperation between a computer and a team of men.”  What happens when there is more than one computer per person in the world?  What if, for each person in the world, there are thousands, or millions of computational devices?  David Tennenhouse termed the outnumbering of humans by computers a “breakpoint,” and argued in the year 2000 that we were rapidly approaching that breakpoint, so it was essential to reinvent computer science.

Presentation slide proposing methods to discredit and destroy Wikileaks. Obtained from leaked HBGary emails. Source: ArsTechnica

Looks pretty bad, doesn’t it?  Well, it’s worse.  Not only did the security firm HBGary prepare a package of dirty tricks against Wikileaks, hoping to get paid by Bank of America’s law firm to put them into action, but they also constructed a similar package to use against labor unions, hoping to drum up business from the US Chamber of Commerce.  As I write this, there is no confirmation that either B of A or the US C of C actually paid for these services to be rendered, but the authenticity of the leaked emails does not appear to be in doubt.  The CEO of Palantir, whose company logo appears on the slide I linked, has apologized at least twice, severed all ties with HBGary, and placed on leave the engineer who developed this slide.

The best coverage I have seen of this sordid affair is at Ars Technica.  Many commenters at Ars have stated that Nate Anderson should win a Pulitzer Prize for his coverage of the hack that led to the leaked emails, and the ongoing aftermath.  That’s not hyperbole: this article by Anderson is the most riveting tech news story I have read in years, maybe ever.

A couple months ago, Richard Lipton proposed a method to stop Wikileaks.  Essentially, the method boiled down to this: for every potentially compromising document generated, automatically generate a set of documents that look like it, but are different somehow — statements are contradicted but otherwise identical, numerical values are inflated or deflated, etc.  Then, if the “real” documents are leaked, ensure all the shadow documents are leaked as well, so nobody knows what to believe.  From the Palantir slide’s first bullet point, it appears that practice is keeping abreast of theory, or perhaps leaping ahead: “Create messages around actions to sabotage or discredit the opposing organization.  Submit fake documents and then call out the error.”
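The scheme as described is easy to prototype.  Here is a minimal Python sketch of the idea (my own toy illustration, not anything from Lipton's post): each decoy copy of a document has its numbers randomly inflated or deflated, and the real copy is shuffled in among the shadows.

```python
import random
import re

def make_shadow(doc: str, rng: random.Random) -> str:
    """Produce one decoy: copy the document, but perturb every number
    by a random factor so leaked copies disagree on specifics."""
    def perturb(match: re.Match) -> str:
        value = float(match.group())
        scaled = value * rng.uniform(0.5, 1.5)   # inflate or deflate
        return str(int(scaled)) if match.group().isdigit() else f"{scaled:.2f}"
    return re.sub(r"\d+(?:\.\d+)?", perturb, doc)

def shadow_set(doc: str, copies: int, seed: int = 0) -> list[str]:
    """Return the real document hidden among `copies` decoys, shuffled."""
    rng = random.Random(seed)
    docs = [doc] + [make_shadow(doc, rng) for _ in range(copies)]
    rng.shuffle(docs)
    return docs
```

A real deployment would of course also perturb prose (contradicting statements, swapping names), which is much harder to automate convincingly than numeric noise.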

More than 70,000 leaked emails from HBGary, HBGary Federal and rootkit.com are available for download and search on at least five mirror sites worldwide.  They got there because Aaron Barr, CEO of HBGary Federal, went to the media with the (incorrect) claim that he had uncovered the identities of key members of the hacker group Anonymous.  In response, Anonymous entered his computers, erased gigabytes of research data, downloaded and decrypted his hashed password database, remotely wiped his iPad, seized control of his LinkedIn profile and Twitter account — and, oh yes, posted 70,000+ emails that tell a story of three companies that specialized in dirty cybertricks, which is why the fallout from this story will be studied for months, or longer.

I took a graduate class in cryptography.  I even did well.  I had heard of salting passwords and dictionary attacks before this, but I didn’t really understand them.  I had an intellectual grasp of them, yes, but I’m talking now about the type of understanding that grabs your solar plexus, squeezes, and won’t let go until you’ve really, really got it.  I believe this Ars Technica article by Peter Bright should be required reading in every cryptography class, and in every CS class where computer security is discussed.  Normally I would also link to Wikipedia articles on “salting passwords,” for example, but this time I won’t.  Bright does a superb job of making you feeeeeel how important it is to defend yourself against a dictionary attack, and I don’t want anything to get in the way of that.  Bottom line: security professionals protected themselves like amateurs, and found their defenses easily compromised once their CEO went out of his way to provoke a hacker collective known for its willingness to attack.

It would not surprise me if Lipton’s idea gained traction, very soon.  If nothing else, it would make searching through the email database far more difficult, because naive search algorithms would generate lots of false positives.  It might be worth turning the question around, to ask something I don’t know how to answer: Is there an algorithmic method to separate real documents from shadow documents, assuming they are uploaded together in the same torrent?

## Computing with billions of cores

Supercomputer performance and projection. Source: top500.org

On November 11, 2010, the Chinese supercomputer Tianhe-1A earned the title of “fastest computer in the world,” when it was clocked performing operations at the rate of 2.57 petaflops.  Two of the top five computers are in China; two are in the USA; one is in Japan.  Two weeks ago, IBM announced plans to build a machine that will surpass them all: ten petaflops, with 750,000 cores.  IBM projects that by 2020, they will be able to deliver machines with millions of cores.

FLOPS stands for “floating point operations per second,” so one gigaflop is one billion floating point operations per second.  A teraflop is one trillion flops, and a petaflop is one quadrillion ($10^{15}$) flops.  A floating point operation is an operation like addition or division of two numbers with fractional components.  The processors of PCs are usually measured differently: MIPS (millions of instructions per second), or cycles per second, e.g. gigahertz.  The relation between a processor’s cycle speed and its flops can be complex, because it depends on pipelining of data and other factors, so some people consider flops to be a better measure of how fast a computer really is when it is trying to solve problems.
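A quick back-of-the-envelope calculation makes these units concrete, using the machine ratings mentioned in this post:

```python
# How long would 10^18 floating point operations take at various rates?
ops = 1e18
rates = {
    "1 gigaflop/s  (10^9)":        1e9,
    "1 teraflop/s  (10^12)":       1e12,
    "2.57 petaflop/s (Tianhe-1A)": 2.57e15,
}
for name, rate in rates.items():
    seconds = ops / rate
    print(f"{name:30s} {seconds:14.0f} s  (~{seconds / 86400:.1f} days)")
```

At one gigaflop the job takes over thirty years; Tianhe-1A finishes it in under seven minutes.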

The graphical processing units (GPUs) of desktop PCs are pretty frikkin fast, too.  The AMD Firestream 9270 claims to perform over one teraflop “in raw single precision performance.”  So a high-end home gaming system in 2011 can run about as fast as the world’s fastest computer in 1998.  Perhaps more significantly, my Droid Incredible smartphone houses a Qualcomm Snapdragon processor that runs at 1 GHz, and Qualcomm plans to release a 2.5 GHz quad-core chip for use in phones by 2012.  This is critical to our current discussion, because there are now more than five billion cell phones in the world — to say nothing of other mobile devices, like RFID chips — and projects distributed across many computers already run faster than Tianhe-1A.

As a researcher in biomolecular computation, I think about things like, “Suppose we had five hundred million cores, but they could only talk to one another slowly, via diffusion.  What problems might we solve efficiently that we cannot solve now?”  (Erik Winfree in this talk at ASPLOS 2008 made the point that a submarine has $10^6$ parts and $10^{10}$ transistors, while a whale has $10^{17}$ cells and $10^{27}$ proteins.)  Only recently, though, I have come to realize that my question doesn’t just apply at a cellular scale: it applies to the very near future of macro-scale computing as well.

Experts in pervasive computing have understood this for years, I am sure, and I imagine I am unfashionably late to the party.  Still, I wonder how many computer scientists — students, teachers, or practitioners — understand this on a gut level.  According to Wikipedia, the distributed computing project Folding@Home sustained 6.2 petaflops in April 2010, at a time when the highest-performance supercomputers in the world had yet to break three petaflops. ;-)  (A participant in Folding@Home installs software that runs in the background, e.g. when the computer is idle, and computes how a protein folds up.  There is no known fast algorithm for this — indeed, protein folding is an NP-complete problem — but the problem parallelizes very well: hand each user a different molecule, and everyone is good to go.)
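The “hand each user a different molecule” strategy is just an embarrassingly parallel map.  Here is a toy Python sketch of the pattern (the scoring function is a placeholder of my own; a real client would run a physics simulation, and the workers would be remote volunteer machines rather than local threads):

```python
from concurrent.futures import ThreadPoolExecutor

def fold(molecule: str) -> tuple[str, int]:
    """Placeholder for an expensive, fully independent work unit;
    a real folding client runs a long physics simulation here."""
    score = sum(ord(c) for c in molecule)   # toy "result"
    return molecule, score

molecules = [f"protein-{i}" for i in range(8)]

# Each worker gets its own molecule and never needs to talk to another
# worker; that independence is what lets a real volunteer-computing
# project scale to hundreds of thousands of machines.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fold, molecules))
```

The contrast with tightly coupled supercomputing is the point: no interconnect, no synchronization, just a queue of independent work units and a way to collect answers.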

In stark contrast to, say, the Turing machine, which was conceptualized many years before the first modern computer was actually built, current computer science theory appears to be struggling to articulate an understanding of this new reality, even as the high-performance engineers and managers of distributed projects are charging ahead.  In 2009, the Workshop on Theory and Many-Cores went so far as to say:

This low level of activity should be a concern to the theory community, for it is not clear, for example, what validity the theory of algorithms will have if the main model of computation supported by the vendors is allowed to evolve away from any studied by the theory.

This problem seems more like an institutional one than a mathematical one to me.  Current university faculty worldwide were educated with a one-processor world view, and it is not as though all interesting problems there have been solved.  So why not work on interesting problems that are already accessible, instead of trying to think a brand new way and risk that the new problems might be less interesting, or still inaccessible?  Still, the sheer computing power available within a few years by, for example, running an app in the background on every cell phone in China, is a prospect I find both intoxicating and otherworldly.  Whoever answers the question I cannot will literally predict the future.