31% of Organizations Using Generative AI Ask It To Write Code

Code development, content creation and analytics are the top generative AI use cases. However, many enterprise users don’t trust gen AI to keep their data private.

Artificial intelligence writing code.
Image: Vectors/Adobe Stock

Forty percent of data analysis leaders currently use generative artificial intelligence in their work, including to write code, analytics platform company Alteryx found in a report released August 15. Alteryx surveyed 300 data leaders across four countries — Australia, the U.K., the U.S. and Canada — about their use of generative artificial intelligence, their qualms about it and more.

A little over half of businesses have experimented with AI

Surveyed companies using generative AI employed it for content generation (46%), analytics insight summaries (43%), analytics insight generation (32%), code development (31%) and process documentation (27%).

Most companies surveyed are curious about AI but don’t use it as part of their everyday process. The majority, 53%, said they are “exploring” or “experimenting” with the technology. Only 13% have AI models in place already and are working on optimizing them. In the middle sits the 34% who are “formalizing,” moving from pilot programs to production on a generative AI solution.

SEE: Gartner found that generative AI will have a transformational benefit (TechRepublic)

Of those who do use generative AI in any capacity, most found a positive impact: 55% reported modest benefits, and 34% reported substantial benefits. The benefits they cited included increased market competitiveness (52%), improved security (49%) and enhanced performance or functionality of their products (45%). Another 10% said they didn’t benefit at all, and 1% said it was too early to tell.

CEOs often drive AI adoption

Often, it takes only one business leader to adopt generative AI as their pet project and encourage the rest of the company to adopt it. In 98% of cases, organizations report that a single person in a leadership position drove their generative AI strategy. In most cases, that leader was the CEO (30%), with slightly fewer organizations following the directives of a head of IT (25%) or chief data or analytics officer (22%). Conversely, among companies not using generative AI, 35% said they had “no one to take the lead with implementation.”

Interestingly, there is an element of hobbyist enthusiasm to the business adoption of generative AI. According to the survey, 81% of people who use generative AI at work also use it for personal or recreational purposes outside of work.

Tech leaders have qualms about generative AI

Many companies still have concerns about the security, copyright rules or efficacy of generative AI. Organizations that haven’t implemented generative AI said they didn’t do so because of concerns about data privacy (47%), lack of trust in the results produced by the system (43%), lack of sufficient expertise (39%) and not having anyone on staff to take the lead on implementing generative AI (34%).

Of the organizations already using generative AI in their work, the most pressing concerns were data ownership (29%), data privacy (28%) and IP ownership (28%).

One way to address some of these concerns is human oversight: 64% said they believe generative AI can be used now as long as a human has veto power over the output. And there is a high degree of trust among workers who already use generative AI; 70% think it can “deliver initial, rapid results that I can review and modify to completion.”

SEE: Everything you need to know about Google’s generative AI, Bard (TechRepublic)

A larger share, 71%, agreed that risks around generative AI can be managed by using the technology within frameworks set up by trusted software vendors.

The question of whether generative AI will replace human workers is more complicated. Among surveyed people who already use generative AI, 77% believe it could replace entire roles.

Other risks include privacy concerns, novel security vulnerabilities and copyright infringement when AI models are trained on original work. Working within fair use principles is one possible mitigation, Asa Whillock, vice president and general manager of machine learning at Alteryx, noted in an email to TechRepublic. “Leaders must understand, however, that the trust of AI and LLMs is reliant on the quality of data inputs. Insights that are generated by AI models are only as good as the data they have access to,” Whillock said.

Organizations are still discovering how generative AI may benefit them

“Though the pulse survey indicates that many companies are still in the nascent stages of adoption, there’s a growing awareness of the benefits, and early adopters are already reaping the rewards,” wrote Heather Ferguson, Alteryx editorial manager, in a blog post.

“If implemented strategically, generative AI provides a massive opportunity for data democratization that will positively impact business operations, decisions and outcomes due to the cases for integrating LLMs (large language models) responsibly with low-code/no-code,” said Whillock.

TechRepublic