AI is hard. You know it, and we wouldn’t be writing about it if it weren’t a mission-critical challenge facing businesses today. Unlike the software development lifecycle, where good code is built to last for some time, AI models are subject to the dynamism and vicissitudes of the world they’re designed to represent.
AI models are like six-year-olds during quarantine: They need constant attention . . . otherwise, something will break.
Tending to models requires scrutiny in terms of the data they receive, the predictions they make, and, ultimately, the accuracy of the model. This takes time, effort, and coordination. It requires constant oversight and rework, straining already-limited data science resources and causing friction across teams. Enterprise AI is a pipe dream when machine-learning capabilities are hamstrung in this way.
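The monitoring loop described above — watching the data a model receives, the predictions it makes, and its accuracy, then flagging it for rework — can be sketched in a few lines. This is a minimal, illustrative sketch only; the mean-shift drift check and the threshold values are our own simplifying assumptions, not any particular ModelOps tool's method.

```python
# Illustrative sketch of model monitoring: check incoming data for drift,
# score recent predictions against actuals, and flag the model for
# retraining when either signal degrades. Thresholds are assumed values.
from dataclasses import dataclass
from statistics import mean


@dataclass
class MonitorReport:
    input_drift: float       # relative shift of the incoming feature mean
    accuracy: float          # fraction of recent predictions that were correct
    needs_retraining: bool


def monitor(baseline_feature_mean, recent_features, predictions, actuals,
            drift_threshold=0.25, accuracy_floor=0.80):
    """Compare live traffic against a training-time baseline."""
    recent_mean = mean(recent_features)
    drift = abs(recent_mean - baseline_feature_mean) / abs(baseline_feature_mean)
    accuracy = sum(p == a for p, a in zip(predictions, actuals)) / len(actuals)
    return MonitorReport(
        input_drift=drift,
        accuracy=accuracy,
        needs_retraining=drift > drift_threshold or accuracy < accuracy_floor,
    )


# Example: incoming data has shifted and accuracy has slipped, so the
# monitor flags the model for retraining.
report = monitor(baseline_feature_mean=10.0,
                 recent_features=[14.0, 13.5, 14.5],
                 predictions=[1, 0, 1, 1],
                 actuals=[1, 1, 0, 1])
```

In production this check would run on a schedule against logged traffic, and the retraining flag would kick off a governed pipeline rather than a manual scramble — which is precisely the repeatability ModelOps promises.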
For these reasons, a more repeatable, governable, and efficient process for monitoring and retraining models has been on the wish list of data scientists for years. ModelOps (also referred to as MLOps) aims to be the solution. Though it sounds like a forgotten Bond plotline combining supermodels and special operations forces, ModelOps offers something far more exciting (at least as far as data scientists are concerned). Not only does it help in the later stages of production; the emerging discipline and associated tools that comprise ModelOps aim, more broadly, to make the entire end-to-end lifecycle more repeatable, governable, safer, and faster.
To learn more about the emerging role of ModelOps, please listen to my interview with my colleague Mike Gualtieri, and look for our full report coming soon.
Note: A Forrester subscription is required to access the audio interview featured in this post.
(Jeremy Vale, senior research associate, coauthored this post.)