Dive Brief:
- CIOs are worried about the accuracy of AI-generated information as pressure mounts to deploy the technology, according to a Juniper Networks survey in partnership with Wakefield Research.
- Nearly 9 in 10 tech leaders believe it may not be possible to know whether their company’s AI output is accurate, according to the survey of 1,000 global executives involved with AI at their organizations.
- The vast majority of respondents, 91%, say employees trust AI more than they should. More than three-quarters of executives expect that increased AI deployment will lead employees to take on more responsibility, according to the data.
Dive Insight:
AI is making waves across enterprises as deployments accelerate, and the disruption will impact employee workflows, for better or worse.
While businesses are lured by prospective productivity boosts and seamless user experiences, tech leaders will need to underline the importance of understanding, and mitigating, AI risks.
When it comes to accuracy, many generative AI tools are trained on data and information found on the internet, which has drawbacks. That wealth of material gives these tools knowledge across a vast range of topics, but its quality isn’t always up to par.
Business leaders have to identify these risks for employees and train them to use generative AI responsibly. These tools have a tendency to hallucinate, for example, so employees should bring a healthy level of skepticism to assessing generated outputs.
But mitigation techniques don’t have to stop at fact-checking. Businesses can work to ground generated responses in accurate information by training solutions on in-house data, which is what Juniper Networks does, the report said.
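The grounding approach described above can be sketched in miniature. Everything here is illustrative, not Juniper’s actual implementation: the documents, the keyword-overlap retrieval heuristic, and the prompt template are all hypothetical stand-ins. A real deployment would typically use an embedding-based vector store and pass the prompt to an actual model.

```python
import re

# Hypothetical sketch: ground a generative AI prompt in in-house documents
# so the model answers from internal facts rather than open-web training data.

def _terms(text: str) -> set[str]:
    """Normalize text into a set of lowercase alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank in-house documents by naive keyword overlap with the query."""
    query_terms = _terms(query)
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & _terms(doc)),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved internal context so the model answers from it."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using ONLY the internal context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    docs = [
        "Refund policy: customers may return hardware within 30 days.",
        "Security policy: rotate API keys every 90 days.",
        "Holiday schedule: offices close the last week of December.",
    ]
    print(build_grounded_prompt("What is the refund policy for hardware?", docs))
```

The design choice worth noting is the instruction to answer only from the supplied context: it is the restriction, not the retrieval, that pushes the model away from hallucinated answers.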
Enterprises should also recognize that the performance of generative AI solutions can change over time. Monitoring tools can give CIOs a sense of whether a tool’s accuracy is degrading and alert employees when necessary.
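One way such monitoring can work is sketched below, assuming the team has a set of known-answer evaluation prompts it runs against the tool periodically. The window size, alert threshold, and class names are illustrative assumptions, not a reference to any specific monitoring product.

```python
from collections import deque

# Hypothetical sketch: track pass/fail results from a known-answer eval set
# over a sliding window and flag when observed accuracy drifts below a floor.

class AccuracyMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        # Only the most recent `window` results count, so old behavior
        # does not mask a recent drop in quality.
        self.results: deque[bool] = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        """Log one evaluation outcome (True = answer matched the known answer)."""
        self.results.append(correct)

    def accuracy(self) -> float:
        """Fraction of recent evaluations the tool got right."""
        if not self.results:
            return 1.0  # no evidence of degradation yet
        return sum(self.results) / len(self.results)

    def is_degraded(self) -> bool:
        """True when recent accuracy falls below the alerting threshold."""
        return self.accuracy() < self.threshold

if __name__ == "__main__":
    monitor = AccuracyMonitor(window=10, threshold=0.9)
    for outcome in [True] * 8 + [False] * 2:
        monitor.record(outcome)
    print(monitor.accuracy(), monitor.is_degraded())  # 0.8 True
```

The sliding window is the key choice: a lifetime average would dilute a sudden regression, while a recent window surfaces it quickly enough to warn employees.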