It will streamline all your processes, fill every gap in your workforce, and reveal new areas for profitability and cost savings. What CTO wouldn’t want to invest in it?
We’re referring to the promise, though perhaps not the reality, of Generative AI (Gen AI). Right now, we are experiencing an “AI Bubble,” in some ways similar to the dot-com hype of the 1990s, where software company sales departments are making promises their technology can’t quite fulfill. For example, some of the demos we’ve seen of AI functionality – from Tesla, Amazon, and others – have turned out to be remotely operated by humans.
When software companies exaggerate the AI functionality of their products, routinely adding “AI” to product descriptions, it can be difficult to separate genuine claims from falsehoods. A chatbot solution, for example, may offer AI functionality only on its highest-tier product, while requiring clients on lower tiers to hand-code chat scenarios.
Enterprise-Sized Problems
The genuine excitement around generative AI has led enterprises to dedicate significant budgets to adding the newest and greatest AI to their products. Yet despite its immense popularity (Gen AI is now used by 40% of small businesses), Gen AI projects remain at risk of cancellation. At a Gartner event last summer, Distinguished VP Analyst Rita Sallam summed up the dilemma: “After last year’s hype, executives are impatient to see returns on GenAI investments, yet organizations are struggling to prove and realize value. As the scope of initiatives widen, the financial burden of developing and deploying GenAI models is increasingly felt.”
How Much Higher Will Software Licenses Go?
AI functionality comes with a significant price tag: a large AI model is estimated to cost approximately $100 million to train, with newer generations projected to cost ten times that amount. At that price point, the enterprise tools your organization uses may not be able to offer AI at a price that keeps them profitable. To cover the cost of AI functionality, vendors are exploring moving customers to subscription models that could cost them more; overall, AI is driving a rise in SaaS subscription prices.
If enterprises are still struggling to make Generative AI relevant in their offerings, will you be able to make it work for your agency? In an interview, Salesforce’s Marc Benioff famously quipped, “We all know now that Microsoft Copilot is basically the new Microsoft Clippy, that customers have not gotten value from it.” If, as reported, Microsoft users are abandoning Copilot for ChatGPT, how likely is it that your users will simply ignore the new AI tools embedded in their software? There’s still great optimism around AI solutions right now, but if tools like Copilot underperform, investor enthusiasm for AI may eventually cool.
The Painful Truth: Your Data May Not Be Ready for AI
Leaving aside what might happen this year or next, the harder truth for government agencies is that your data needs to be in order, and achieving that requires a significant investment in data governance. Ben Baldi of software testing company Tricentis explained the challenge for government agencies in an interview: “…without high-quality, well-structured data, even the most sophisticated AI tools will fall short. That’s why improving data readiness is becoming a top priority for federal CIOs—and rightly so. Data readiness means having data that’s clean, structured, and ready to be used for analysis and decision-making.” Local governments have the added responsibility of ensuring that the troves of sensitive PII they hold are safeguarded and not scraped by data-hungry AI tools.
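What does “clean, structured, and ready” mean in practice? As a minimal sketch (the dataset, column names, and SSN-shaped pattern below are all hypothetical), a data team might start with automated checks for completeness, duplicate records, and unprotected PII before any dataset goes near an AI tool:

```python
import pandas as pd

# Hypothetical extract of agency records; column names are invented
# for illustration. The second and third rows are deliberately
# identical so the duplicate check fires.
records = pd.DataFrame({
    "case_id": [101, 102, 102, 104],
    "resident_email": ["a@example.gov", None, None, "c@example.gov"],
    "ssn": ["123-45-6789", "987-65-4321", "987-65-4321", None],
})

# SSN-shaped values; a real scan would cover many more PII types.
PII_PATTERN = r"\d{3}-\d{2}-\d{4}"

def readiness_report(df: pd.DataFrame) -> dict:
    """Summarize basic readiness signals: completeness, duplicate rows,
    and columns that appear to contain PII."""
    likely_pii = [
        col for col in df.columns
        if df[col].dropna().astype(str).str.fullmatch(PII_PATTERN).any()
    ]
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
        "likely_pii_columns": likely_pii,
    }

print(readiness_report(records))
# e.g. {'rows': 4, 'duplicate_rows': 1, ..., 'likely_pii_columns': ['ssn']}
```

Checks like these are a starting point rather than a governance program, but they make “data readiness” something you can measure before it is promised.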
It’s not surprising, then, that an AI readiness report from Presidio found that 38% of government organizations have yet to adopt any Generative AI at all. And among the local government agencies that are early adopters, many don’t have AI policies in place to prevent misuse of their data.
One area where AI may introduce risk is public safety. Last year Axon, a maker of police body cams, introduced a ChatGPT-powered AI solution, Draft One, to help officers with one of their most tedious chores – drafting police reports from video captured during encounters with the public. Unfortunately, AI’s ability to accurately transcribe spoken words and interpret the context of conversations is still a work in progress. Another issue is AI’s tendency to occasionally hallucinate instead of providing facts. The software is designed with some guardrails – like injecting nonsensical phrases into its generated content to ensure it is proofread by the officer.
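To make that guardrail concrete, here is a minimal, hypothetical sketch (not Axon’s actual implementation; the phrases and function names are invented): plant a known nonsense sentence in each draft and refuse submission until a human has removed it.

```python
import secrets

# Hypothetical canary sentences; a real system would rotate and log them.
CANARIES = [
    "The suspect was accompanied by a tap-dancing walrus.",
    "Officers noted that the streetlights were singing softly.",
]

def plant_canary(draft: str) -> tuple[str, str]:
    """Append a random nonsense sentence to the AI-generated draft so an
    unreviewed report is easy to detect."""
    canary = secrets.choice(CANARIES)
    return f"{draft} {canary}", canary

def ready_to_submit(edited_draft: str, canary: str) -> bool:
    """Allow submission only after the canary has been found and removed."""
    return canary not in edited_draft

draft, canary = plant_canary("Subject was stopped at 21:40. No contraband was found.")
assert not ready_to_submit(draft, canary)                          # untouched draft fails
assert ready_to_submit(draft.replace(canary, "").strip(), canary)  # edited draft passes
```

The check is deliberately crude: removing the canary proves only that someone edited the draft, not that every AI-generated claim was verified against the footage.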
Generative AI solutions like Draft One may drastically improve efficiency, but enthusiasm for their use may need to be tempered. The burden of managing expectations around AI capabilities will fall to government IT leaders.
If your agency needs help with IT infrastructure, public safety, cybersecurity, or other technologies, talk to us! We are local government specialists experienced in supporting government agencies like yours.