Breaking Boundaries: Integrating Generative AI for Developers

September 25, 2023

Many organizations are experiencing deficits in technical skills and gaps in productivity that ultimately harm their bottom line. To overcome those obstacles and drive growth, businesses can use generative AI tools like ChatGPT and Bard to handle large volumes of data and increase the speed and output of basic tasks without draining valuable human capital. Already, businesses are finding that saving time is one of the most valuable benefits generative AI can deliver. However, there must be oversight of deployments to avoid recurring pitfalls, as the technology contains just as much risk as it does opportunity.

Generative AI provides developers with helpful task automation while simultaneously creating new challenges. Questions around security, bias and intellectual property have DevOps teams puzzling over how best to implement generative AI ethically and operationally. Developers must embrace these challenges to remain competitive in an increasingly automated, technology-driven world.

Maintaining Confidentiality of Company Code

Generative AI’s ability to understand and solve code-related issues makes it an ideal tool to help software developers optimize code for faster feature development and quicker releases. By identifying where the technology can contribute to tasks such as legacy code conversion or process automation, companies can free up valuable developer time for building the tools that grow the business or drive new initiatives. An added benefit is that development teams can automate parts of coding new applications or systems by updating old code or reusing existing examples.

ChatGPT, for example, can also turbocharge development time by scouring swathes of information on GitHub and other forums to find examples of whatever a coder is trying to build. A 2022 study published in Proceedings of the ACM on Programming Languages (PACMPL) found that developers could code faster and work more efficiently thanks to generative AI. Tedious searches in tech forums and manual copy-and-paste procedures are replaced by simply asking the right questions (via prompts).
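As a rough illustration of that workflow (not taken from the original article), a forum search can be replaced with a single API call. The sketch below assumes the OpenAI Python client (version 1.x) and an OPENAI_API_KEY environment variable; the model name and prompt are placeholders.

# Minimal sketch: asking a code question through an API instead of searching
# forums by hand. Assumes the openai Python package (v1.x) and an
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Show an idiomatic Python function that converts a legacy CSV export "
    "to JSON, with basic error handling."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; substitute whichever model is available
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)

In practice, the returned snippet still needs the same review a copied forum answer would.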

However, generative AI tools are not closed systems. Data that users enter into ChatGPT can be retained and used to further train the language model, opening the door to leaks of sensitive company information. The risk is not hypothetical: sensitive data already makes up 11% of what employees paste into ChatGPT.

As third-party systems, AI tools absorb every query, comment and question without privacy guardrails for users. If an AI bot’s security is compromised, businesses can unintentionally hand over data to bad actors. Even without a hacker’s involvement, sharing confidential customer or partner information may violate contractual agreements and create long legal battles. Developer teams must remain conscious of AI’s security risks.

Avoiding Bias Creeping Into Results

It is of utmost importance that businesses make reliable, fair and traceable decisions based on the latest facts. Currently, GPT-4 hasn’t reached the maturity level necessary to be trusted by most companies without significant verification measures. Already, testers have persuaded ChatGPT to make statements that are inaccurate, biased or both.

One key concern in generative AI is the ingrained and accidental bias in answers due to the human-generated content used to train large language models. ChatGPT works by drawing on the textual content in its training data that relates to a question and recombining it into a distilled answer. In an almost democratic way, the majority opinion will dominate. For instance, if most people believe that Swedes are hard drinkers, ChatGPT will say so. While the technology delivers responses in an alarmingly convincing way, it’s important to remember that it’s not always the most reliable or balanced source that wins.

Furthermore, OpenAI, the creator of ChatGPT, has explicitly stated that its goal is to create a system where queries can be pre-processed before they are fed to the neural net. Answers can then be post-processed before being shown to users. But this safeguard lacks clarity around its objectives and opens the door for ideological or political manipulation.
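The pre- and post-processing the author refers to can be pictured as a thin wrapper around the model call. The sketch below is purely hypothetical: the blocklist, the ask_model callable and the appended note are illustrative placeholders, not OpenAI’s actual safeguards.

# Hypothetical sketch of the pre-/post-processing pattern: screen a query
# before it reaches the model, then adjust the answer before it reaches the
# user. The rules here are placeholders, not OpenAI's real safeguards.
BLOCKED_TERMS = {"internal-project-x", "customer-ssn"}  # illustrative policy

def preprocess(query: str) -> str:
    lowered = query.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise ValueError("Query rejected by pre-processing policy")
    return query.strip()

def postprocess(answer: str) -> str:
    # For example, append a provenance note or strip disallowed content.
    return answer + "\n\n[Generated response - verify before use]"

def answer_with_guardrails(query: str, ask_model) -> str:
    # ask_model is any callable that sends a prompt to an LLM and returns text.
    safe_query = preprocess(query)
    raw_answer = ask_model(safe_query)
    return postprocess(raw_answer)

Whoever controls those two functions controls what the model is allowed to see and say, which is exactly why the article flags the potential for manipulation.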

Companies considering AI tools must carefully examine their potential applications to avoid scenarios where bias produces insights and output that end up harming the company and its reputation.

Minimizing Developer Function

Enterprises are looking for tools to integrate generative AI into existing business processes to enhance numerous functions and deliver greater value to their customers. While the technology still isn’t capable of replacing the human developers responsible for updating and maintaining large-scale software, generative AI tools and integrations are evolving at a rapid pace, leaving businesses to determine which tools are most viable for their development teams.

As the economy continues to tumble closer to a recession, companies seek ways to minimize costs. And those looking to use generative AI on the source code and technical documentation that runs the company’s own operations, rather than the publicly available code that tools like ChatGPT draw on, face budgetary decisions. Implementing generative AI into the business model can be extremely costly if done wrong, leaving executives to weigh the benefits of adopting the technology against the cost of hiring a full team of developers.

Connecting the Enterprise to Avoid AI Pitfalls

Generative AI is a great resource for businesses looking to build the connected experiences their employees, partners and customers desire. However, it’s just one piece of the puzzle that is the modern enterprise. These tools can only deliver smart answers if they get smart questions, and only if businesses connect their internal processes and applications so machine learning runs on accurate, holistic datasets.

Although AI offers developers many benefits, such as increased efficiency, greater accuracy and the ability to handle large amounts of data, they must be wary of the security, ethical and budgetary concerns that affect employees, customers and the enterprise as a whole.

However, once these concerns are overcome or managed, the power of large language models will be leveraged across all kinds of technology and business use cases. That will be a truly transformational shift and a step change toward a world where technology and software play an unprecedented role in business and society.

By: Stefan Sigg on July 27, 2023

https://devops.com/breaking-boundaries-integrating-generative-ai-for-developers/
