The Dyad (formerly JuliaSim) approach to building machine learning surrogates that meet real-world engineering requirements and integrate into industrial modeling processes.
What You'll Learn in This Solution Brief
The Industrial Surrogate Challenge
Traditional approaches to surrogate modeling focus primarily on neural network architectures such as Physics-Informed Neural Networks (PINNs), DeepONets, Continuous-Time Echo State Networks (CTESNs), and Fourier Neural Operators (FNOs). While these architectures can reproduce simulation behavior when given sufficient data, they fail to address the fundamental challenge: integrating next-generation machine learning into industrial modeling workflows and meeting the requirements of real-world engineering infrastructure.
Critical Industrial Requirements for Deployable Surrogates
Predictable Accuracy Standards
Industrial applications require surrogates that reliably meet specified accuracy requirements. Unlike generative AI, where a trained model stays relevant because natural language changes slowly, surrogate models must be retrained every time the specifications or the questions being asked change. Engineers need a process that guarantees the desired accuracy (such as "1% accuracy on landing forces") rather than the hope that the best neural architecture will happen to work.
Key insight: What matters is not the neural architecture but having a process that can reliably generate accurate neural approximations.
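To make the process point concrete, here is a minimal sketch of an accuracy-driven training loop: keep adding data and retraining until a held-out validation set meets the accuracy target. The names (simulate, sample_params) and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the Digital Echo implementation.

```python
# Sketch of an accuracy-driven surrogate training loop (hypothetical, not the
# Digital Echo API): add training data until a held-out validation set meets a
# relative-error target on the quantity of interest.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(params):
    # Stand-in for an expensive simulator: maps parameters to a scalar
    # quantity of interest (e.g., peak landing force).
    return np.sin(params[:, 0]) * params[:, 1] ** 2

def sample_params(n):
    # Uniform sampling over the parameter ranges covered by the spec.
    return rng.uniform([-3.0, 0.5], [3.0, 2.0], size=(n, 2))

target_rel_error = 0.01          # e.g. "1% accuracy on landing forces"
X_val = sample_params(500)
y_val = simulate(X_val)

X_train = sample_params(200)
y_train = simulate(X_train)

for round_ in range(10):
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)
    rel_error = np.max(np.abs(model.predict(X_val) - y_val) / (np.abs(y_val) + 1e-8))
    print(f"round {round_}: worst-case relative error = {rel_error:.3%}")
    if rel_error <= target_rel_error:
        break
    # Not accurate enough: generate more data (embarrassingly parallel in practice).
    X_new = sample_params(200)
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, simulate(X_new)])
```

The loop treats the architecture as fixed and spends effort on data and verification instead, which is the process-over-architecture point above.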
Clear Verification and Validation Standards
Industrial deployment requires answering critical questions:
What visualizations help understand where surrogates are most accurate or inaccurate? (A minimal sketch follows this list.)
How can retraining improve accuracy in specific parameter spaces without sacrificing accuracy elsewhere?
What metrics indicate when surrogates may not be reliable for given parameter ranges?
How can domain experts with no machine learning experience safely deploy accurate models?
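One way to approach the first two questions is to visualize surrogate error over a slice of parameter space. The sketch below is a hypothetical illustration using NumPy and Matplotlib, not Digital Echo's visualization suite; surrogate and simulate are assumed callables that evaluate the model and the reference simulator on a batch of parameter points.

```python
# Sketch of a validation visualization (illustrative): plot relative error over
# a 2D slice of parameter space so domain experts can see where the surrogate
# is trustworthy and where retraining data should be added.
import numpy as np
import matplotlib.pyplot as plt

def relative_error_map(surrogate, simulate, p1_range, p2_range, n=50):
    # Evaluate surrogate vs. simulator on a grid and return per-point relative error.
    p1, p2 = np.meshgrid(np.linspace(*p1_range, n), np.linspace(*p2_range, n))
    params = np.column_stack([p1.ravel(), p2.ravel()])
    truth = simulate(params)
    pred = surrogate(params)
    rel_err = np.abs(pred - truth) / (np.abs(truth) + 1e-8)
    return p1, p2, rel_err.reshape(n, n)

def plot_error_map(p1, p2, rel_err, threshold=0.01):
    fig, ax = plt.subplots()
    im = ax.pcolormesh(p1, p2, rel_err, shading="auto")
    # Outline regions that violate the accuracy spec (e.g., >1% error) so they
    # can be targeted for retraining without touching regions that already pass.
    ax.contour(p1, p2, rel_err, levels=[threshold], colors="red")
    fig.colorbar(im, ax=ax, label="relative error")
    ax.set_xlabel("parameter 1")
    ax.set_ylabel("parameter 2")
    return fig
```

A plot like this also gives a concrete deployment rule for non-ML experts: trust the surrogate inside the region that meets the threshold, and fall back to the simulator outside it.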
Reliable Prediction Time Performance
The common question of whether training time "pays for itself" misses the point for industrial applications. In many real-world use cases, the surrogate's speed enables analyses that would otherwise be impossible. For example:
Power grid control requiring models to run at specific frequencies (X Hz) for real-time deployment (see the timing sketch after this list)
Applications where billions of dollars can be gained by having more accurate models within strict time constraints
Control scenarios where compute cost bounds are fundamental limiting factors
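As a concrete illustration of what "reliable prediction time" means, the sketch below checks a surrogate's inference latency against a hard real-time budget, for instance a 100 Hz control loop that leaves at most 10 ms per prediction. The predict callable and the 100 Hz figure are assumptions for illustration, not part of any product API.

```python
# Minimal sketch (assumed, not Digital Echo tooling): verify that a trained
# surrogate's inference latency fits a hard real-time budget.
import time
import numpy as np

def meets_realtime_budget(predict, example_input, target_hz=100.0, n_trials=1000):
    budget_s = 1.0 / target_hz
    # Warm up once so JIT compilation or lazy allocation doesn't distort timing.
    predict(example_input)
    latencies = []
    for _ in range(n_trials):
        start = time.perf_counter()
        predict(example_input)
        latencies.append(time.perf_counter() - start)
    worst = np.max(latencies)
    print(f"p50={np.median(latencies)*1e3:.2f} ms, worst={worst*1e3:.2f} ms, "
          f"budget={budget_s*1e3:.2f} ms")
    # Judge against the worst case, not the mean: real-time deployment fails on the tail.
    return worst <= budget_s
```

The design choice to report the worst-case latency reflects the control scenarios above, where the compute bound is a hard constraint rather than an average-case target.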
The Value vs. Training Cost Reality
In practice, training time is rarely the limiting factor in industrial applications. With cloud computing making parallelism essentially cost-free, data generation becomes "embarrassingly parallel": the wall-clock time to run 5,000 different parameter configurations is roughly the time of the single most expensive configuration.
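The sketch below illustrates that embarrassingly parallel structure under stated assumptions: run_simulation is a stand-in for one expensive simulator run, and a local process pool stands in for a cloud cluster.

```python
# Sketch of embarrassingly parallel data generation (illustrative): with enough
# workers, wall-clock time for 5,000 configurations approaches the time of the
# single most expensive run.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def run_simulation(params):
    # Stand-in for one expensive simulator run at a given parameter set.
    x, y = params
    return np.sin(x) * y ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    configs = [tuple(p) for p in rng.uniform([-3.0, 0.5], [3.0, 2.0], size=(5000, 2))]
    # Each configuration is independent, so all runs proceed concurrently; on a
    # cloud cluster the pool would span many machines instead of local cores.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, configs, chunksize=64))
    training_data = np.column_stack([np.array(configs), results])
```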
The real question becomes: Is the value of the surrogate greater than the cost of its training? This depends on how many engineers benefit from improved workflows and productivity, given that we're discussing highly skilled professionals in high-tech industries.
Digital Echo: Process-Focused Surrogate Development
The Dyad (formerly JuliaSim) approach centers on Digital Echo, a neural surrogate system built around industrial processes rather than mathematical architectures. Digital Echo focuses on:
Process-Driven Design
Reliable neural architectures that improve with more data without hyperparameter tuning complications
Visualization suites enabling scientists to understand surrogate accuracy profiles
Automated cloud infrastructure masking training costs to maximize productivity
Accessibility for domain scientists with no machine learning background
Industrial Integration Focus
Digital Echo represents a completely different approach to surrogates - it's about ensuring someone can "click a button and get a surrogate that meets industrial requirements." The system prioritizes productivity over performance, focusing on meeting specification sheets rather than optimizing mathematical structures.
Perfect for: Engineering teams deploying machine learning in industrial applications, technical leaders evaluating surrogate modeling solutions, organizations requiring reliable AI integration with existing processes, and teams seeking to bridge the gap between cutting-edge ML research and production engineering requirements.
Essential reading for understanding why successful industrial AI deployment requires focusing on processes and requirements rather than just neural network architectures.