
Antibody Docking on the Amazon Cloud

19 May, 2009 - 5 min read

Today an article I wrote for Bio-IT World was published describing antibody docking experiments that are running on Amazon EC2. Since my final edits didn’t make the deadline, I wanted to post the entire article here with some inline links.

It was 18 months ago in this column that Mike Cariaso proclaimed, “Buying CPUs by the hour is back” in reference to our work with Amazon’s Elastic Compute Cloud (EC2). Back then, we were perhaps a bit far ahead of the hype vs. performance curve of cloud computing. A handful of forward-thinking companies were finding ways to scale out web services, but few research groups were putting EC2 instances to work for real number crunching in the life sciences. In the last two years, utility computing has begun to make an impact on real-world problems (and budgets) in many industries. For researchers starved for computing power, the flexibility of the pay-as-you-go access model is compelling: Amazon EC2 makes the grant process used by national supercomputing centers look arcane and downright stifling. Innovative and ‘bursty’ research requires dynamic access to a large pool of CPU and storage. Computational drug design is a great place to begin to clear the air about the reality of this emerging technology.

Accelerating the creation of novel therapeutics is priority one for the research side of the pharmaceutical industry. Much time is spent optimizing the later phases of clinical trials in many pipelines. However, IT and infrastructure decisions made much earlier in the process can have a profound impact on the momentum and direction of the entire endeavor. For protein engineers at Pfizer’s Bioinnovation and Biotherapeutics Center, the challenging task of antibody docking presents computational roadblocks. All-atom refinement is the major high-performance computing challenge in this area.

Respectable models of a protein’s three-dimensional structure can usually be generated on a single workstation in a matter of hours. After multiple candidate models are built, a refinement step typically produces the most accurate structures. Atomic detail is necessary to validate whether newly modeled antibodies will bind their target epitopes and to get a clear picture of the protein-protein interactions and binding interfaces of these immunogenic molecules.

One of the most successful frameworks for studying protein structures at this scale is Rosetta++, developed by David Baker at the University of Washington. Baker describes Rosetta as “a unified kinematic and energetic framework… (that) allows a wide-range of molecular modeling problems … to be readily investigated.” Refinement of antibody docking involves small local perturbations around the binding site followed by evaluation with Rosetta’s energy function. It’s an iterative process that requires a massive amount of computing based on a small amount of input data. The mix of computational complexity with a pleasantly parallel nature makes the task suitable for both high-end supercomputers and Internet-scale grids.
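To make the “massive computing from a small input” point concrete, here is a minimal, hypothetical sketch of that perturb-and-score loop in Python. It does not use Rosetta’s real code or API; the perturbation and energy functions are invented stand-ins. The point is the shape of the workload: each refinement trajectory starts from the same small input and runs independently, which is why it parallelizes so well.

```python
# Toy sketch of a perturb-and-score refinement loop. NOT Rosetta's actual
# API; perturb() and energy() are hypothetical stand-ins for illustration.
import math
import random


def perturb(pose, magnitude=0.5):
    """Apply a small random perturbation (stand-in for a local move)."""
    return [x + random.uniform(-magnitude, magnitude) for x in pose]


def energy(pose):
    """Placeholder scoring function; Rosetta uses a full all-atom energy."""
    return sum(x * x for x in pose)


def refine(start_pose, iterations=1000, temperature=1.0):
    """One independent refinement trajectory: perturb, score, accept/reject."""
    pose = list(start_pose)
    current = energy(pose)
    for _ in range(iterations):
        candidate = perturb(pose)
        cand_energy = energy(candidate)
        delta = cand_energy - current
        # Metropolis-style acceptance: always take improvements, sometimes
        # accept worse moves to escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            pose, current = candidate, cand_energy
    return current, pose


if __name__ == "__main__":
    # Many such trajectories can run at once, one per CPU or EC2 worker;
    # only the scores and final poses need to be collected afterward.
    scores = sorted(refine([1.0, -2.0, 0.5])[0] for _ in range(5))
    print(scores)
```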

BBC Two

When Giles Day and the informatics team at Pfizer BBC designed their antibody-modeling pipeline using Rosetta, they soon realized they had a serious momentum killer. Each antibody model took 2–3 months using the 200-node cluster. With dozens of new antibodies to model, the project was at a standstill until they could get enough compute capacity to do the appropriate sampling. Furthermore, the pipeline was invoked with unpredictable frequency since it was dependent upon discovery in other departments. What they needed was a scale-out architecture to support “surge capacity” in docking calculations. This surge could happen frequently or not at all, making capacity planning extremely difficult.

Traditionally, options were limited to expanding in-house resources by adding more nodes to the cluster or reducing the sampling. The only true option was to throw more CPUs at the problem: doubling capacity could potentially halve a two-month calculation, but it would also mean acquisition, deployment, and operational costs. After evaluating those costs, Pfizer contracted BioTeam to provide a cloud-based solution. The result was a scalable architecture custom fit to their workloads and built entirely on Amazon Web Services (AWS). As was clearly evidenced at this year’s Bio-IT World Expo, the cloud is mainstream today. Moreover, the AWS team is years ahead of the competition, unveiling new features and API improvements almost every month. The AWS stack is fast becoming a first choice at BioTeam for cost-effective virtual infrastructure and high-performance computing on demand.

The architecture employed for docking at Pfizer makes use of nearly the entire suite of services offered by Amazon. A huge array of Rosetta workers can be spun up on EC2 by a single protein engineer and managed through a web browser. As Chris Dagdigian pointed out in his recent keynote at Bio-IT World, while the cloud is quite hyped, this isn’t rocket science. The Simple Storage Service (S3) stores inputs and outputs, SimpleDB tracks job metadata, and the Simple Queue Service (SQS) glues it all together with message passing. What Amazon did right in 2007 was elastic compute and storage. What they do better in 2009 is provide developers everywhere with a complete stack for building highly efficient and scalable systems without a single visit to a machine room. The workloads at Pfizer that previously took months are now done overnight, and the research staff can focus on results without shelving their projects.
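As a rough illustration of how those pieces fit together, here is a hypothetical worker loop written against the classic boto library. None of this is Pfizer’s or BioTeam’s actual code: the queue, bucket, and domain names, the message format, and the `rosetta_refine` command are all invented for the sketch. The shape is what matters: each EC2 instance pulls a job id from SQS, fetches a small input from S3, runs refinement, writes results back to S3, and records its status in SimpleDB.

```python
# Hypothetical EC2 worker loop in the spirit of the architecture described
# above, using the classic boto library. All names below are made up.
import subprocess

import boto

QUEUE_NAME = "rosetta-jobs"        # assumed SQS queue of pending docking jobs
BUCKET_NAME = "antibody-docking"   # assumed S3 bucket for inputs and outputs
SDB_DOMAIN = "rosetta-job-status"  # assumed SimpleDB domain for job metadata


def run_worker():
    queue = boto.connect_sqs().get_queue(QUEUE_NAME)
    bucket = boto.connect_s3().get_bucket(BUCKET_NAME)
    domain = boto.connect_sdb().get_domain(SDB_DOMAIN)

    while True:
        msg = queue.read(visibility_timeout=3600)
        if msg is None:
            break  # queue drained; the instance can shut itself down

        job_id = msg.get_body()  # assume the message body is just a job id

        # Pull the small input (starting model) from S3.
        bucket.get_key("inputs/%s.pdb" % job_id) \
              .get_contents_to_filename("input.pdb")

        # Run the refinement executable (placeholder command line).
        subprocess.check_call(["rosetta_refine", "input.pdb", "output.pdb"])

        # Push results back to S3 and record job status in SimpleDB.
        bucket.new_key("outputs/%s.pdb" % job_id) \
              .set_contents_from_filename("output.pdb")
        status = domain.new_item(job_id)
        status["state"] = "complete"
        status.save()

        queue.delete_message(msg)


if __name__ == "__main__":
    run_worker()
```

Because the queue does the coordination, “spinning up a huge array of workers” is just launching more instances running this same script; there is no scheduler to configure and no shared filesystem to mount.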