Between Code and Cells: When Biological Modeling Hits Rough Waters

Navigating the instabilities and frictions of computational biology

Introduction: When Biology Meets Computer Science

Imagine trying to predict the exact moment a normal cell turns cancerous, or understanding why some infections persist despite treatment. These are the complex puzzles that computational biology aims to solve. By creating sophisticated software models of biological systems, scientists can simulate processes that would be too expensive, time-consuming, or simply impossible to observe directly in the laboratory. Yet beneath the promise of this high-tech science lies a less discussed truth: building biological models is filled with instabilities, frictions, and constant troubleshooting [1, 4].

Far from being a clean replacement for test tubes and lab benches, computational modeling exists in a tight entanglement with traditional biology [1, 3]. This article explores how these very frictions, the mismatches between digital models and messy biological reality, are not setbacks but rather essential drivers of scientific discovery, pushing researchers toward deeper understanding through what scholars call "knowing-in-progress" [4].

Biological Complexity

Living systems exhibit emergent properties that are difficult to capture in computational models, creating fundamental frictions between digital representations and biological reality.

Computational Limitations

Even with advanced computing power, models must make simplifications that can lead to instabilities when compared with experimental results.

The Unseen Challenges of Digital Biology

What Are These "Frictions" and "Instabilities"?

In computational biology, "frictions" refer to the resistances that arise when translating wet-lab biology into computer code [1, 4]. These occur when biological data must be understood and used from both biological and computational perspectives, creating challenges in representation, translation, and collaborative practice [4]. Similarly, "instabilities" appear when models produce unpredictable or contradictory results that don't align with laboratory observations, revealing gaps in our understanding [1].

These challenges are particularly pronounced in multi-scale models that attempt to span vastly different biological levels, from molecular interactions measured in nanoseconds to organism-level changes that unfold over years [6]. As biological complexity increases across these scales, so does the potential for instability in the models [5].

Why Biologists Embrace the Mess

Contrary to what one might expect, biologists demonstrate a remarkable pragmatism when working with incomplete or messy models [4]. As researcher Evelyn Fox Keller observes, omissions and inferences are simply expected parts of building biological models, a necessary compromise when grappling with complex living systems [4]. This comfort with uncertainty stems from an understanding that models are tools for exploration rather than perfect replicas of reality.

The productive frictions between computational models and experimental biology drive scientific discovery forward, revealing new questions and insights that neither approach could achieve alone.

Common Friction Points in Biological Modeling

A Closer Look: The Bio-Model Analyzer (BMA)

The Experiment: Modeling Cell Fate

To understand how computational biologists work through these challenges, let's examine a specific tool mentioned in the research: the Bio-Model Analyzer (BMA) [4]. This web-based program applies sophisticated algorithms to determine cell fate in highly complex gene regulatory networks, essentially helping predict why cells develop into specific types (like muscle or nerve cells) or when they might become cancerous [4].

The BMA tool specifically tests for what biologists call "stability", a concept borrowed from systems theory that refers to a biological system's tendency to maintain its state despite disturbances [4]. In the context of cell biology, this might mean understanding why some cells resist changing into different types or what makes cancer cells suddenly proliferate uncontrollably.
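To give a flavor of what a stability test means in practice, here is a minimal Python sketch (an illustrative toy, not BMA's actual algorithm or any real gene set): a three-gene Boolean network is defined by hand-written update rules, every possible state is enumerated, and the states the network never leaves, its stable states, are reported.

    from itertools import product

    # Toy three-gene Boolean network (illustrative only, not a real pathway).
    # Each rule maps the current state (a, b, c) to the next value of one gene.
    def update(state):
        a, b, c = state
        next_a = b and not c   # gene A is switched on by B and repressed by C
        next_b = a             # gene B simply follows A
        next_c = a or c        # gene C stays on once A has been active
        return (int(next_a), int(next_b), int(next_c))

    # A state is "stable" (a fixed point) if updating it returns the same state.
    fixed_points = [s for s in product((0, 1), repeat=3) if update(s) == s]
    print("Stable states:", fixed_points)

Even this tiny invented network settles into two distinct stable states, a digital echo of how real cells can lock into different fates.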

Methodology: Step-by-Step

The process of using tools like BMA typically involves several key stages that highlight where frictions commonly emerge:

1. Biological Question Formulation

Researchers identify a specific biological problem, such as understanding the triggers for a particular cell differentiation process.

2. Data Collection and Translation

Experimental biological data must be translated into a format the computational tool can understand. This is a prime friction point, as qualitative biological observations must be quantified and structured [4].

3. Model Building

Researchers construct a computational model representing the biological system, using formal frameworks like Boolean Networks or Petri Nets. These frameworks allow biologists to represent biological entities and their interactions in mathematically precise ways.

4. Simulation and Analysis

The model is run through multiple simulations to observe how the system behaves under different conditions. Instabilities often surface here, as the model may produce unexpected results that don't align with laboratory observations [1, 4].

5. Iterative Refinement

Researchers continually adjust the model based on these discrepancies, working back and forth between computational predictions and laboratory experiments in a process of "knowing-in-progress" [4]; a toy version of this loop is sketched below.
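The sketch below captures the shape of that loop with made-up numbers: a one-parameter toy model of cell growth is simulated, scored against hypothetical bench measurements, and a sweep over candidate growth rates keeps the value that best matches the data. Real refinement cycles involve new experiments and judgment calls rather than a canned error score, but the back-and-forth structure is the same.

    # Hypothetical bench measurements of a growing cell population (arbitrary units).
    observed = [100, 150, 225, 338, 507]

    def simulate(growth_rate, n_steps=5, start=100.0):
        """Toy model: the population multiplies by a fixed growth rate each step."""
        counts, current = [], start
        for _ in range(n_steps):
            counts.append(current)
            current *= growth_rate
        return counts

    def mismatch(growth_rate):
        """Sum of squared differences between simulated and observed counts."""
        return sum((sim - obs) ** 2 for sim, obs in zip(simulate(growth_rate), observed))

    # Crude refinement: sweep a range of growth rates and keep the best-fitting one.
    candidates = [1.0 + 0.05 * i for i in range(21)]   # 1.00, 1.05, ..., 2.00
    best = min(candidates, key=mismatch)
    print(f"Best-fitting growth rate: {best:.2f} (error {mismatch(best):.1f})")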

Common Frameworks for Biological Modeling

Boolean Networks
How it works: Represents biological entities as simple "on/off" switches.
Ideal for: Modeling gene regulatory networks where proteins are either present or absent.

Petri Nets
How it works: Uses tokens moving through networks to represent dynamic processes.
Ideal for: Modeling metabolic pathways or signaling cascades.

Multi-Scale Models
How it works: Combines multiple modeling approaches across different biological scales.
Ideal for: Understanding how molecular changes affect cellular or tissue-level behavior [6].
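To make the Petri net idea concrete, here is a deliberately tiny Python sketch (the molecule names and token counts are invented for illustration): places hold tokens that stand for molecules, and a transition fires only when every input place holds a token, consuming and producing tokens the way a reaction consumes substrates and yields products.

    # Minimal Petri net step: one "reaction" transition with two inputs and two outputs.
    places = {"enzyme": 1, "substrate": 3, "product": 0}   # token counts (made up)
    transition = {"inputs": ["enzyme", "substrate"], "outputs": ["enzyme", "product"]}

    def fire(places, transition):
        """Fire the transition once if every input place holds at least one token."""
        if all(places[p] >= 1 for p in transition["inputs"]):
            for p in transition["inputs"]:
                places[p] -= 1
            for p in transition["outputs"]:
                places[p] += 1
            return True
        return False

    while fire(places, transition):   # keep firing until the substrate runs out
        print(places)

Each firing converts one substrate token into a product token and hands the enzyme token back, so the simulation stops on its own once the substrate is exhausted.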

Working Through the Frictions: A Case Study

The development and use of BMA exemplify how computational biologists productively work through frictions rather than avoiding them [4]. For instance, when the tool's predictions didn't match laboratory observations, researchers didn't discard the model; instead, they investigated the discrepancies as potential clues to deeper biological truths.

This process mirrors what happens in other computational biology contexts, such as when researchers perform sensitivity analysis on multi-scale models [5]. Sensitivity analysis involves systematically testing how uncertainties in a model's parameters affect its outputs, essentially probing the model's instabilities to understand which factors most significantly impact its behavior.

Sensitivity Analysis
Purpose: Assesses how uncertainty in model inputs affects outputs.
When used: Identifying which parameters most significantly impact model behavior [5].

Latin Hypercube Sampling
Purpose: Efficiently samples parameter spaces while ensuring full coverage.
When used: Exploring how models behave across wide parameter ranges [5].

Surrogate Modeling
Purpose: Creates simplified versions of complex models to reduce computational cost.
When used: Making computationally intensive models more manageable for analysis [5].
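The snippet below sketches the flavor of that workflow in Python with SciPy's Latin Hypercube sampler; the two-parameter model and its ranges are invented for illustration, and a plain correlation stands in for a formal sensitivity index such as eFAST.

    import numpy as np
    from scipy.stats import qmc

    # Invented two-parameter toy model: the output depends strongly on k1, weakly on k2.
    def model(k1, k2):
        return k1 ** 2 + 0.1 * k2

    # Latin Hypercube Sampling: 200 samples that cover both parameter ranges evenly.
    sampler = qmc.LatinHypercube(d=2, seed=0)
    unit_samples = sampler.random(n=200)                    # values in [0, 1)
    samples = qmc.scale(unit_samples, l_bounds=[0.0, 0.0], u_bounds=[5.0, 5.0])

    outputs = np.array([model(k1, k2) for k1, k2 in samples])

    # Crude sensitivity measure: correlation between each parameter and the output.
    for name, column in zip(["k1", "k2"], samples.T):
        print(f"{name}: correlation with output = {np.corrcoef(column, outputs)[0, 1]:+.2f}")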

This embrace of friction represents a significant shift from traditional views of scientific models as simplified representations of reality. Instead, computational biologists increasingly view their models as dynamic tools for exploration that are co-constitutive of biological knowledge itself [1, 4].

The Scientist's Toolkit: Essential Research Reagents

Just as traditional biologists rely on specific laboratory tools, computational biologists depend on a different set of "research reagents" to build and analyze their models:

Modeling Software

Provides frameworks for constructing and visualizing biological models

Example: Bio-Model Analyzer (BMA) for gene regulatory networks [4]

Sensitivity Analysis Tools

Quantifies how uncertainty in parameters affects model outcomes

Example: Extended Fourier Amplitude Sensitivity Testing (eFAST) [5]

Sampling Algorithms

Efficiently explores parameter spaces to understand model behavior

Example: Latin Hypercube Sampling (LHS) for stratifying parameter ranges [5]

Formal Modeling Frameworks

Provides mathematical structure for representing biological systems

Example: Boolean Networks and Petri Nets for precise modeling

Surrogate Models/Emulators

Creates efficient approximations of computationally expensive models

Example: Neural networks or random forests for predicting simulation responses [5]; a sketch of this idea appears after this list

Data Integration Platforms

Combines diverse biological data sources for comprehensive modeling

Example: Multi-omics data integration platforms
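Picking up the surrogate-model entry above, here is a hedged sketch using scikit-learn's RandomForestRegressor; the one-line "expensive simulation" is a stand-in invented for illustration. The real simulator is run a limited number of times, a cheap learner is fitted to those runs, and the surrogate then answers new parameter queries in a fraction of the time.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Stand-in for an expensive multi-scale simulation (formula invented for illustration).
    def expensive_simulation(params):
        k1, k2, k3 = params
        return np.sin(k1) + k2 ** 2 + 0.5 * k3

    # Run the "real" simulator a limited number of times to build training data.
    train_params = rng.uniform(0, 3, size=(300, 3))
    train_outputs = np.array([expensive_simulation(p) for p in train_params])

    # Fit a cheap surrogate that approximates the simulator's response surface.
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(train_params, train_outputs)

    # The surrogate can now be queried cheaply, for example inside a sensitivity
    # analysis or a wide parameter sweep that the full simulator could not afford.
    new_params = rng.uniform(0, 3, size=(5, 3))
    print(surrogate.predict(new_params))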

Conclusion: The Generative Power of Friction

The story of computational biology reveals a profound truth about how science advances: frictions and instabilities are not obstacles to be eliminated but rather productive forces that drive discovery [1, 4]. By working through the tensions between digital models and biological reality, researchers engage in what anthropologists of science call "knowing-in-progress": the practical back-and-forth that characterizes genuine scientific exploration [4].

This perspective helps counter what the researchers call "extreme claims" that computational biology will replace conventional experimental biology [1, 3]. Instead, computation is becoming tightly entangled with forms of scientific knowing and doing, creating a richer, more collaborative future for biological research [4].

As we continue to model increasingly complex biological systems—from cellular processes to whole organisms—the ability to work creatively with instability and friction may prove to be one of computational biology's greatest assets. Rather than seeking perfect digital replicas of biological reality, the field's true power lies in its capacity to generate new questions, reveal unexpected connections, and deepen our wonder at the complexity of life.

References