April 16, 2009

Macromolecular Modeling for Molecular Systems Engineering

Macromolecular systems engineering can help us meet some of the most important technological challenges in the world today, ranging from medicine to renewable energy, and the development of better-integrated computational design tools will accelerate progress. Macromolecular modeling capabilities are advancing rapidly, but much of their potential for supporting systems engineering has yet to be exploited. In this post, I’d like to describe the fundamental nature of the problems and outline some of what needs to be done to make their potential a reality. By Dr. Eric Drexler.

Why are design tools important?

The state of the art in molecular fabrication would enable us to build amazing nanosystems, if only we could design them. Rich opportunities are emerging from the convergence of protein engineering, structural DNA nanotechnology, and the advances that have in recent years produced a vast range of nanoscale, non-biomolecular components. DNA origami has shown us how to construct precisely ordered, aperiodic, addressable frameworks on a scale of hundreds of nanometers; protein engineering and models from nature show us that engineered proteins can organize diverse functional components into smaller aperiodic structures, and zinc-finger technologies (for example) suggest how these could be plugged into larger DNA frameworks. The challenge is to put it all together, and for self-assembling systems, this task reduces to the problem of designing the parts.

Protein modeling for science and design

Protein engineering is central to this concept of composite nanosystems, and the history and practice of protein engineering illustrate basic issues in building a design technology on a foundation of science-oriented modeling.

The field has come a long way. I greatly enjoyed my first RosettaCon last year, and I’m impressed by the people, the software, and the reported research. Once upon a time (1981) I published a PNAS paper that’s now down at the bottom of the citation tree for de novo protein engineering. Progress since then has validated the central insight, which is that engineering design is fundamentally different from scientific inquiry, and that this difference has consequences for assessing when a modeling technology is adequate for design and how to apply it.

In the case of protein engineering, no distinction had yet been drawn between the two protein folding problems, prediction and design, and the prevailing opinion held that successful design must await successful prediction. I argued that the problems are radically different, and that design is arguably easier. (Carl Pabo cited this and termed design an “inverse folding problem”, a concept that was subsequently turned around and used for fold prediction.)

The fundamental difference between scientific modeling and engineering design is that in a design process, the physical system isn’t a given entity, but is instead found through exploration guided by a functional objective. This has far-reaching consequences.

Supporting design exploration

A designer works by generating and testing concepts, making choices, and filling in increasing detail. The initial stages of the design process typically require rapid screening of low-resolution candidate designs; later stages focus on fewer candidates with more physical detail and accuracy. (A toy sketch of this staged workflow follows the list below.)

The requirements of the design workflow make a range of software capabilities very valuable:

  • Interactive display and editing of structures, constraints, spatially structured penalty functions, etc.
  • Quick application of approximate physics and partial optimization (e.g., structural relaxation, side-chain repacking)
  • Smooth application of more accurate but computationally expensive modeling methods
  • Task-relevant summaries of modeling results and comparison of results from variant designs
  • Integration with a database for storing and organizing sets of related designs, models, modeling results, and so forth
  • An open, extensible framework for all of the above
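To make the staged workflow concrete, here is a toy sketch in Python. The function names and random stub scores are hypothetical placeholders, not any existing package’s API; a real tool would substitute genuine low- and high-resolution energy functions and a meaningful variant generator.

```python
import random

# Hypothetical stand-ins for a real modeling stack: a cheap low-resolution
# score for rapid screening and an expensive high-resolution score for the
# final candidates. Both are random stubs so the sketch runs as-is.
def cheap_score(design):
    return random.random()

def expensive_score(design):
    return random.random()

def generate_variants(seed, n):
    # In practice: mutate sequences, perturb backbones, swap components, ...
    return [f"{seed}-variant{i}" for i in range(n)]

def design_loop(seed, n_candidates=1000, n_refine=10):
    # Early stage: rapid, low-resolution screening of many candidates.
    candidates = generate_variants(seed, n_candidates)
    screened = sorted(candidates, key=cheap_score)[:n_refine]
    # Late stage: fewer candidates, scored with more physical detail.
    return min(screened, key=expensive_score)

print(design_loop("helix-bundle"))
```

The point is the shape of the loop, not the stubs: broad, cheap screening funnels into narrow, expensive refinement, and the software capabilities listed above are what make each stage fast and the transition between them smooth.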

Widening design scope

The drive to develop more advanced self-assembled nanosystems will demand software able to model physical systems that combine diverse materials and components. This will increase the importance of:

  • Extensible force fields with good facilities for editing and evaluation
  • Protein design code that performs combinatorial search in heterogeneous environments (a toy sketch of such a search follows this list)
  • Adaptation of combinatorial search techniques to an increasing range of non-peptide foldamers (e.g., peptoids)
  • Use of physics-based energy functions for materials that lack huge databases of empirical structures
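As a minimal illustration of what “combinatorial search” means here, the sketch below enumerates discrete per-position states against invented one- and two-body energies, in the spirit of side-chain repacking generalized to arbitrary building blocks. All states and energy values are made up for illustration.

```python
import itertools

# Each position has a discrete set of candidate states (rotamers, monomer
# types, ...), and the total energy is a sum of one- and two-body terms.
one_body = [
    {"A": 0.0, "B": 1.2},           # position 0: two candidate states
    {"A": 0.5, "B": 0.1, "C": 2.0}  # position 1: three candidate states
]
two_body = {  # pairwise interaction energies between chosen states
    (0, 1): {("A", "A"): -0.8, ("A", "B"): 0.3, ("A", "C"): 0.0,
             ("B", "A"): 0.2, ("B", "B"): -1.5, ("B", "C"): 0.4},
}

def total_energy(assignment):
    e = sum(one_body[i][s] for i, s in enumerate(assignment))
    for (i, j), table in two_body.items():
        e += table[(assignment[i], assignment[j])]
    return e

# Exhaustive enumeration is fine at toy sizes; real repacking engines use
# dead-end elimination, Monte Carlo, or similar to tame the combinatorics.
states = [sorted(d) for d in one_body]
best = min(itertools.product(*states), key=total_energy)
print(best, round(total_energy(best), 3))
```

Nothing in this energy decomposition is specific to proteins, which is why the same machinery can extend to peptoids and other foldamers once suitable state libraries and energy terms exist.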

Increasing design scale

Advancing methodologies for making large, heterogeneous molecular nanosystems will increase the need for multiscale modeling:

  • Developing coarse-grain models from atomistic simulations (a minimal mapping sketch follows this list)
  • Integrating coarse-grain and atomistic models
  • Integrating solid-body models with atomistic and particle-based models
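One elementary ingredient of the first item is a mapping from atoms to beads. The sketch below, with invented coordinates, masses, and groupings, places each bead at the mass-weighted centroid of its atom group; deriving the bead-bead interactions is the harder part and is not shown.

```python
import numpy as np

# Invented atomistic inputs purely for illustration.
positions = np.array([[0.0, 0.0, 0.0],   # atom coordinates (nm)
                      [0.1, 0.0, 0.0],
                      [0.0, 0.1, 0.0],
                      [1.0, 1.0, 1.0],
                      [1.1, 1.0, 1.0]])
masses = np.array([12.0, 1.0, 1.0, 16.0, 1.0])  # e.g., C, H, H, O, H
groups = [[0, 1, 2], [3, 4]]  # which atoms form each bead

def coarse_grain(positions, masses, groups):
    beads = []
    for idx in groups:
        m = masses[idx]
        # Mass-weighted centroid of the group defines the bead position.
        beads.append(m @ positions[idx] / m.sum())
    return np.array(beads)

print(coarse_grain(positions, masses, groups))
```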

A crucial question: When is a model good enough?

Judging when a model is accurate enough can be a crucial part of deciding whether a design problem is ready for serious work. As with protein fold prediction vs. fold design, however, a design problem may become tractable before a seemingly similar scientific problem has been solved.

Consider the criteria for acceptable failure rates: In a science problem where the objective is prediction, a 90% probability of failure in a given instance is just what it seems — a 90% probability of failure to achieve the objective. In an engineering problem, by contrast, the objective is to provide a suitable design, and the same 90% probability of failure in prediction in each given instance may result in a 100% success rate in achieving the objective (with a mean of 10 trial designs per success). What fails for science may suffice for design.
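Assuming independent trials, a few lines of Python make that arithmetic concrete:

```python
# With a 90% failure probability per independent trial, the expected
# number of trials per success is 1/0.1 = 10, and a modest batch of
# designs makes overall success nearly certain.
p_fail = 0.9
expected_trials = 1 / (1 - p_fail)   # 10 trials per success, on average
for n in (1, 10, 30):
    p_success = 1 - p_fail ** n      # chance at least one design works
    print(f"{n:2d} designs -> {p_success:.1%} chance of success")
print(f"expected trials per success: {expected_trials:.0f}")
```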

Another difference has more complex ramifications: Designers can apply predictability as a criterion for the design itself. Nature has no reason to cater to our modeling limitations, but designers can restrict themselves to using relatively well understood building blocks (such as alpha-helix bundles) and to relatively predictable interactions. Further, where the objective is to maximize something (such as stability), a designer can aim to provide a margin of safety to compensate for a model’s margin of error (artificial proteins provide examples; aircraft provide others). Again, what fails for science may suffice for design (and some modeling techniques that have been shelved may deserve a second look).
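As a schematic illustration of the margin-of-safety idea (all numbers invented), the design criterion can be as simple as:

```python
# Target the specification plus a multiple of the model's error band,
# so that even a pessimistic prediction still clears the requirement.
required_stability = 5.0   # kcal/mol the design must actually achieve
model_error = 1.5          # estimated error of the stability model
safety_factor = 2.0        # how many error-widths of headroom to demand

design_target = required_stability + safety_factor * model_error
print(f"design to a predicted stability of at least {design_target} kcal/mol")
```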

For all these reasons, the state of the art in macromolecular modeling is ready to support a wider range of design applications than is commonly realized, but harnessing the science to support design will take work. We’re ready for a broader push into the world of macromolecular systems engineering, the potential rewards are enormous, and there are many opportunities to contribute to progress through software development.


Dr. Eric Drexler is a researcher and author whose work focuses on advanced nanotechnologies and directions for current research.
