Large language models and generative AI have captured the public imagination in the last year. But what does the brave new world of AI tools herald for companies and government institutions?
Igor Jablokov, CEO and founder of Pryon, recently sat down with analyst Christina Stathopoulos to discuss his thoughts on the state of AI for the enterprise. For those who don’t have time to listen to the full podcast episode, here are our top three takeaways.
1. AI will augment people, not replace them
Companies that have tried to replace employees with AI tools have seen middling to disastrous results. This isn’t terribly surprising: AI isn’t going to get rid of the need for people en masse. However, AI will help employees become more effective and powerful in their roles.
All companies want to get 10x bigger over the next decade, but there’s not going to be a 10x increase in the world’s population within our lifetimes. What does that mean? People will have to get more efficient at their jobs. Which means employees will need AI tools to help them do things more efficiently, like get the answers they need faster than ever before.
As Pedro Domingos, Professor of Computer Science at the University of Washington and author of The Master Algorithm, says about technologies like AI: “It's not man versus machine; it's man with machine versus man without. Data and intuition are like horse and rider, and you don't try to outrun a horse; you ride it.” In this vein, companies will use AI to supercharge their employees, not supplant them.
2. AI isn’t actually all that new, but taking AI from the consumer world to government and enterprise is a new challenge
When most people think of AI today, they think of tools that have emerged in the last year or two, like ChatGPT. There’s a massive amount of hype around both AI startups and larger tech companies implementing AI for the first time. It isn’t dissimilar to what happened around the turn of the millennium, when companies added “.com” to their names and saw their valuations skyrocket.
However, just as the basic technologies powering the Internet didn’t emerge from thin air in 1997, AI isn’t brand new — it’s actually been around in numerous forms for decades. Translation tools, semi-autonomous vehicles, text autocorrection, computational photography, voice assistants, and speech recognition software are just a few examples of AI-based technologies that have been in use for a long time.
In the last few years, we’ve seen an explosion in computing power, allowing for truly large language models (LLMs) and massive information ingestion. What hasn’t changed, though, is the consumer-grade approach to AI tools. Existing AI solutions, from voice assistants like Alexa to tools like DeepMind, have all been built with consumers in mind.
This is not necessarily a bad thing; consumer tech companies are great at de-risking new interaction models (like Apple did with the computer mouse and Multi-Touch) and distributing them at scale. But it takes a while to take consumer-grade tools and make them ready for critical environments, like government agencies and large enterprises. The first iPhone didn’t come with support for enterprise mobile management (EMM) or VPNs, for example; it took industry pressure and years of engineering for these crucial security measures to get built.
Getting AI ready for government and enterprise is why we built Pryon. These kinds of entities have high bars for accuracy, security, scalability, and speed — and that’s where Pryon shines.
3. The risks of consumer-grade AI tools in the enterprise are too high to ignore
The companies focused on building LLMs, like OpenAI and Meta, weren’t focused on a purpose-built enterprise experience that’s reliable, always available, and service-able. Even more than uptime and configurability, though, was the fact that these firms didn’t consider data governance, data provenance, and security, which are the top priorities for any enterprise or government installation.
First, most generative AI tools haven’t been shy about ingesting loads of data from wherever they can find it — not just the open web, but also books and any information users provide them, even if it’s proprietary company data. Generative AI tools then use this data to train their models. While newer, supposedly enterprise-grade solutions like ChatGPT Enterprise say they won’t use company data for training purposes, that doesn’t mean employees won’t have access to potentially sensitive information from other companies whose information has already been used to train the LLM.
Second, these tools are fraught with risk because the copyright status of generated output is unclear. For example, if you use ChatGPT to generate snippets of code that you then embed into your company’s product, you may be taking code that was newly created by an actual person and uploaded to Github. The attribution and ownership risks are massive.
Finally, there’s the threat of hallucination. Consumer-first generative AI tools often make up incorrect information, stating it as if it’s fact. In high-stakes environments where information accuracy and trustworthiness are paramount, the propensity for generative AI tools to hallucinate can present serious risks, as they can mislead users with incorrect or incomplete information.
More risks abound: prompt injection attacks, the ability for savvy users to reverse-engineer training data, and a lack of per-user access controls giving any employee access to all company information.
Listen to the full podcast recording for more of Igor’s insights, including why he believes the web as we know it died last year, the three types of AI companies, and the 4 P’s of enterprise content.