Three key takeaways: “The State of AI for the Enterprise”

How to responsibly bring AI to government and enterprise

Large language models and generative AI have captured the public imagination in the last year. But what does the brave new world of AI tools herald for companies and government institutions?

Igor Jablokov, CEO and founder of Pryon, recently sat down with analyst Christina Stathopoulos to discuss his thoughts on the state of AI for the enterprise. For those who don’t have time to listen to the full podcast episode, here are our top three takeaways.

1. AI will augment people, not replace them

Companies that have tried to replace employees with AI tools have seen middling to disastrous results. This isn’t terribly surprising: AI isn’t going to get rid of the need for people en masse. However, AI will help employees become more effective and powerful in their roles.

All companies want to get 10x bigger over the next decade, but there’s not going to be a 10x increase in the world’s population within our lifetimes. What does that mean? People will have to get more efficient at their jobs, which means employees will need AI tools that help them do things more efficiently, like getting the answers they need faster than ever before.

As Pedro Domingos, Professor of Computer Science at the University of Washington and author of The Master Algorithm, says about technologies like AI: “It's not man versus machine; it's man with machine versus man without. Data and intuition are like horse and rider, and you don't try to outrun a horse; you ride it.” In this vein, companies will use AI to supercharge their employees, not supplant them.

2. AI isn’t actually all that new, but taking AI from the consumer world to government and enterprise is a new challenge

When most people think of AI today, they think of tools that have emerged in the last year or two, like ChatGPT. There’s a massive amount of hype around both AI startups and larger tech companies implementing AI for the first time. It isn’t dissimilar to what happened around the turn of the millennium, when companies added “.com” to their names and saw their valuations skyrocket.

However, just as the basic technologies powering the Internet didn’t emerge from thin air in 1997, AI isn’t brand new — it’s actually been around in numerous forms for decades. Translation tools, semi-autonomous vehicles, text autocorrection, computational photography, voice assistants, and speech recognition software are just a few examples of AI-based technologies that have been in use for a long time.

In the last few years, we’ve seen an explosion in computing power, allowing for truly large language models (LLMs) and massive information ingestion. What hasn’t changed, though, is the consumer-grade approach to AI tools. Existing AI solutions, from voice assistants like Alexa to research systems from labs like DeepMind, have all been built with consumers in mind.

This is not necessarily a bad thing; consumer tech companies are great at de-risking new interaction models (like Apple did with the computer mouse and Multi-Touch) and distributing them at scale. But it takes a while to take consumer-grade tools and make them ready for critical environments, like government agencies and large enterprises. The first iPhone didn’t come with support for enterprise mobility management (EMM) or VPNs, for example; it took industry pressure and years of engineering for these crucial security measures to get built.

Getting AI ready for government and enterprise is why we built Pryon. These kinds of entities have high bars for accuracy, security, scalability, and speed — and that’s where Pryon shines.

3. The risks of consumer-grade AI tools in the enterprise are too high to ignore

The companies building LLMs, like OpenAI and Meta, weren’t focused on delivering a purpose-built enterprise experience that’s reliable, always available, and serviceable. More critically, these firms didn’t account for data governance, data provenance, and security, which are the top priorities for any enterprise or government installation.

First, most generative AI tools haven’t been shy about ingesting loads of data from wherever they can find it — not just the open web, but also books and any information users provide them, even if it’s proprietary company data. Generative AI tools then use this data to train their models. While newer, supposedly enterprise-grade solutions like ChatGPT Enterprise say they won’t use company data for training purposes, that doesn’t mean employees won’t have access to potentially sensitive information from other companies whose information has already been used to train the LLM.

Second, these tools are fraught with risk because the copyright status of generated output is unclear. For example, if you use ChatGPT to generate snippets of code that you then embed into your company’s product, you may be reproducing code that a real person wrote and uploaded to GitHub. The attribution and ownership risks are massive.

Finally, there’s the threat of hallucination. Consumer-first generative AI tools often make up incorrect information, stating it as if it’s fact. In high-stakes environments where information accuracy and trustworthiness are paramount, the propensity for generative AI tools to hallucinate can present serious risks, as they can mislead users with incorrect or incomplete information.

More risks abound: prompt injection attacks, savvy users reverse-engineering training data, and a lack of per-user access controls that can give any employee access to all company information.
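The last of those risks, missing per-user access controls, is concrete enough to sketch. Here is a minimal, illustrative example of filtering documents by an access-control list before an AI system ever sees them; the document names, group names, and data model are hypothetical, not drawn from any real product:

```python
# Sketch of per-user access control for an AI retrieval layer.
# Hypothetical data model: each document carries an ACL of allowed groups.

documents = [
    {"id": "q3-financials.xlsx", "acl": {"finance", "executives"}},
    {"id": "onboarding-faq.md", "acl": {"all-employees"}},
    {"id": "merger-memo.docx", "acl": {"executives"}},
]

def visible_documents(user_groups: set[str], docs: list[dict]) -> list[str]:
    """Return only the documents the user's groups are allowed to see.

    Without a filter like this, any employee querying the AI system
    could surface all company information."""
    return [d["id"] for d in docs if d["acl"] & user_groups]

# An employee in only the 'all-employees' group sees only the FAQ:
print(visible_documents({"all-employees"}, documents))  # ['onboarding-faq.md']
```

Enterprise-grade systems apply this kind of check at retrieval time, so answers are generated only from content the asking user is cleared to read.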

Listen to the full podcast recording for more of Igor’s insights, including why he believes the web as we know it died last year, the three types of AI companies, and the 4 P’s of enterprise content.

WHAT YOU WILL LEARN

- Why knowledge friction costs enterprises thousands of hours of productivity
- Four common approaches to solving the enterprise productivity problem, and why they fall short
- How to unlock the value of AI and solve your productivity problem with a Knowledge AI platform

Knowledge friction: The productivity problem that has plagued organizations for decades.

70% of employees report spending an hour or more searching for a single piece of information.*

Companies are creating more content and data than ever – but most of it isn’t discoverable. This frustrating disconnect between content creators and content consumers is what we call knowledge friction. And it costs enterprises thousands of hours of productivity.  

*2024 Survey on Enterprise Information Discovery, Unisphere Research.
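The scale of that cost is easy to see with back-of-envelope arithmetic. The headcount, time lost, and work weeks below are illustrative assumptions, not figures from the survey:

```python
# Back-of-envelope estimate of annual hours lost to knowledge friction.
# All inputs are illustrative assumptions, not survey figures.

def annual_hours_lost(employees: int, hours_lost_per_week: float,
                      work_weeks: int = 48) -> float:
    """Total hours per year an organization spends hunting for information."""
    return employees * hours_lost_per_week * work_weeks

# A 1,000-person company where each employee loses just 1 hour per week:
hours = annual_hours_lost(employees=1_000, hours_lost_per_week=1.0)
print(hours)  # 48000.0 hours per year
```

Even at a conservative one hour per week, the lost time adds up to the equivalent of dozens of full-time employees.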

Four ways enterprises have tried to solve the problem — and how Knowledge AI can help.  

Companies have tried various solutions to generate actionable answers from their existing content, yet the enterprise productivity problem remains. Where web search, legacy enterprise search, homegrown solutions, and AI startups all fall short, Knowledge AI is eliminating knowledge friction once and for all.

Unlock the value of AI with a Knowledge AI platform

Using cutting-edge technology like natural language processing (NLP) and retrieval-augmented generation (RAG), an enterprise Knowledge AI platform swiftly extracts valuable knowledge from existing content, reading it like a human would, and transforms that knowledge into answers for your teams.
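The retrieval-augmented generation pattern mentioned above can be sketched in a few lines. This is a toy illustration, not Pryon’s implementation: the retriever here scores documents by simple keyword overlap, and the answer step merely echoes the retrieved passages with citations where a real system would call an LLM:

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# 1) retrieve the most relevant passages, 2) answer from them, citing sources.
# Toy illustration only; a real system would use vector embeddings and an LLM.

def retrieve(query: str, documents: dict[str, str], k: int = 2) -> list[str]:
    """Rank document IDs by keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = {doc_id: len(terms & set(text.lower().split()))
              for doc_id, text in documents.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

def answer(query: str, documents: dict[str, str]) -> str:
    """Answer only from retrieved passages, and cite the sources used."""
    sources = retrieve(query, documents)
    context = " ".join(documents[s] for s in sources)
    # An LLM would generate a reply from `context` here; we echo it instead.
    return f"{context} [sources: {', '.join(sources)}]"

docs = {
    "hr-policy.pdf": "Employees accrue 20 vacation days per year.",
    "it-guide.pdf": "Reset your password at the self-service portal.",
}
print(answer("how many vacation days do employees get", docs))
```

The key property the sketch shares with production RAG systems is that answers are grounded in retrieved content and point back to their sources, rather than being generated from the model’s opaque training data.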

Find out how to unlock the value of AI and solve the enterprise productivity problem for good.

Get the Guide

Power enterprise answers with Pryon

Pryon comprehensively transforms enterprise content into accurate, instant, and verifiable answers. Get started simply with Pryon AI Labs — a low-risk, no-code lab environment with guidance from our expert solutions team.

For media or investment inquiries, please email info@pryoninc.com.

Insights from public research at the ready

Stop endlessly searching through PubMed to find the right citation, research guidance, or trial data. The Research Answer Engine (RAE) lets you ask a question, queries millions of information sources, and delivers the right answer in less than a second, pointing you to the source document(s) in case you’d like to gather more context.

Uncover insights from your own research

Your own research data may contain insights that are even more valuable than the information hidden in MEDLINE and other biomedical literature. That’s why RAE allows researchers to upload their own research, such as clinical trial findings, patient data, and internal unpublished research papers.

Highly accurate and trustworthy

RAE delivers trustworthy, verifiable, always up-to-date answers, which are critical in a research setting. RAE’s best-in-class retrieval model uses advanced machine learning, computer vision, and optical character recognition to read complex information — even handwritten documents and diagrams — like a human would. RAE never hallucinates, since it only pulls from trusted research content, and delivers over 90% accuracy out of the box (with further improvements over time).

Scales to fit the varied needs of any research enterprise

For many organizations, life sciences research can comprise tens of thousands of voluminous research articles. RAE’s massive storage and compute resources enable it to ingest terabytes of research data — including PDFs, text files, images, video, and more — and transform that data into accurate answers.

Safe and secure

To ensure private research data and queries remain private, RAE runs entirely on-premises, not in a public cloud environment. RAE comes preloaded with Pryon, a Knowledge AI platform. Pryon’s AI models do not train on your data, so your data remains yours and yours alone. The system additionally guards against external parties with a self-contained, SOC 2 Type II-compliant architecture, all running securely on-prem on Dell PowerEdge servers.

Research answer engines can be transformational

“We can’t wait to see how pharmaceutical companies, research institutes, and development organizations use the Pryon | Dell Research Answer Engine. With this solution, life science experts can spend less time searching PubMed and internal resources for answers and more time conducting game-changing research.”

— Alex Long, Head of Strategy, Life Sciences at Dell Technologies

Why use Dell and Pryon’s Research Answer Engine?

RAE helps accelerate the research process by enabling researchers to quickly get answers to their questions directly from trusted sources, such as MEDLINE and private research. Life science researchers no longer need to waste time and energy hunting for valuable information when they could instead be helping develop new treatments.

Ready to get started?

Request a demo or email lifesciences@pryoninc.com.

Learn more about Dell in Healthcare at Dell.com/Healthcare

WHAT YOU WILL LEARN:

- Where stalled information access is holding enterprises back
- The leading barriers to delivering business-critical information to end users
- The top AI use cases enterprise leaders are exploring for 2024
- Strategies for implementing AI solutions without exposing your organization to risk

Rapid information access is a must-have in today’s digital economy

92% of enterprise leaders agree that access to fast, accurate information from unstructured content is vital to their business.

Information remains out of reach for end users

70% of leaders report that employees in their organization spend more than an hour looking for a piece of information, with nearly a quarter (23%) spending more than 5 hours.

Enterprises are looking to AI for help

The two leading use cases for AI involve helping users better understand and get answers from information spread across the enterprise.

Explore all the insights in the 2024 Survey on Enterprise Information Discovery from Unisphere Research



Ready to kickstart your AI Strategy?

Reach out to us today!
