As firms are finding out, the unit economics of AI/ML are not like those of traditional software. AI/ML requires more hands-on work than one might expect, including ingesting data, cleaning data, and tuning models, and deployment does not scale the way pure software does: every customer has its own unique datasets.
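To make that point concrete, here is a minimal back-of-the-envelope sketch in Python. The function, hours, and labor rate are hypothetical assumptions chosen for illustration only, not figures from the panel; the sketch simply shows how per-customer data work keeps the marginal cost of an AI/ML deployment well above that of pure software.

```python
# Illustrative sketch only: hypothetical numbers, not from the panel or any real program.
# Compares the marginal cost of onboarding one more customer for pure software versus
# an AI/ML deployment where each customer's unique dataset needs ingestion, cleaning,
# and model tuning that cannot be amortized across customers.

def marginal_cost_per_customer(hosting, ingestion_hours=0, cleaning_hours=0,
                               tuning_hours=0, labor_rate=150.0):
    """Rough marginal cost (USD) of adding one customer."""
    labor_hours = ingestion_hours + cleaning_hours + tuning_hours
    return hosting + labor_hours * labor_rate

# Pure software: near-zero incremental labor, so the cost is mostly hosting.
software = marginal_cost_per_customer(hosting=500)

# AI/ML: every customer's unique dataset adds per-deployment labor.
ai_ml = marginal_cost_per_customer(hosting=500, ingestion_hours=40,
                                   cleaning_hours=80, tuning_hours=60)

print(f"Pure software marginal cost: ${software:,.0f}")  # $500
print(f"AI/ML marginal cost:         ${ai_ml:,.0f}")     # $27,500
```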
The Department of Defense has had enough trouble adapting its hardware-oriented acquisition system to buying software. Will AI/ML present an even greater challenge, or does it lend itself to the traditional labor-services model?
Perhaps just as great a challenge will be initiating defense AI/ML programs at scale, which requires predicting and controlling costs through a lifecycle cost estimate. As former Under Secretary of Defense (Comptroller) David Norquist said, "Artificial intelligence is different because the potential benefits are less clear; you know what you're going to get with a hypersonic missile."
The Center for Government Contracting at the George Mason University School of Business and the Wharton Aerospace Community co-hosted an important discussion on the scalability, unit economics, and cost-estimating methodologies of AI/ML projects with a tremendous panel including Sheldon Fernandez, CEO of Darwin AI; Ryan Connell of DCMA Commercial Pricing; and, from Algorithmia, CEO Diego Oppenheimer and Craig Perrin.
Viewers of this video will walk away with a fuller understanding of the techniques and frameworks that can build a path for the future of AI and ML, one that leads to results and a pricing structure that works for government customers and business providers.
Watch on-demand: