Generative AI has dominated discussions in the technology sector over the past six months because of its potential to revolutionize many aspects of daily work, including the work of IT professionals. Since OpenAI released ChatGPT in November 2022, debate about the implications of generative AI for technology and society has intensified. More recently, OpenAI launched an API that could reshape corporate and consumer applications, alongside GPT-4, an updated large language model (LLM) capable of passing exams such as the SAT and the bar exam. Generative AI can produce original content in response to queries, generate application code from natural-language prompts, and act as a virtual assistant.
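For readers who want a concrete picture, here is a minimal sketch of what code generation through OpenAI's API looked like with the pre-1.0 openai Python SDK; the model name, prompt, and key handling are illustrative assumptions rather than a recommended integration.

```python
# Minimal sketch of code generation via OpenAI's API, using the pre-1.0
# "openai" Python SDK that was current when GPT-4 launched.
# Model name and prompt are illustrative, not a recommendation.
import openai

openai.api_key = "YOUR_API_KEY"  # normally read from an environment variable

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a Python function that parses an "
                                    "Apache access log line into a dict."},
    ],
)

# The generated source code comes back as plain text in the first choice.
print(response["choices"][0]["message"]["content"])
```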
A Short History Lesson
Even before ChatGPT's release, generative AI had already made its way into familiar tools used by IT professionals, such as Red Hat's Ansible infrastructure-as-code software. In October 2022, IBM and Red Hat launched Project Wisdom to train a generative AI model that creates Ansible playbooks from a plain-English sentence. Project Wisdom aims to make automation content easier to create, find, and improve, and to explain what a playbook does without running it.
Generative AI's ability to take on coding tasks that were once the exclusive province of human developers has raised concerns among software engineers about being replaced by such programs. While complete replacement may be unlikely, generative AI could drastically change the nature of programming work, shifting developers' expertise from directly instructing machines in coding languages to prompt engineering. Some software engineering tasks, such as generating tests, including functional tests, are also expected to be handed off to AI.
Modern infrastructure managed by IT operations professionals in roles such as site reliability engineer (SRE) is largely code-driven. In the rapidly growing field of platform engineering, IT pros act as liaisons between application developers and complex back-end infrastructure, creating infrastructure-as-code templates to ensure applications are deployed smoothly and in accordance with enterprise policies in test environments and production.
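To illustrate the kind of artifact involved, here is a hypothetical, simplified sketch of an infrastructure-as-code template a platform team might expose: application developers supply only a service name and an image, while enterprise policy defaults are baked in. The manifest fields, labels, and limits are invented for illustration and trimmed well below what a real deployment spec requires.

```python
# Hypothetical platform-engineering template: developers pass a service name
# and image; enterprise policy defaults (labels, replicas, resource limits)
# are applied automatically. Field names and values are illustrative only.
import json


def deployment_template(service_name: str, image: str, environment: str = "test") -> dict:
    """Render a (trimmed) deployment manifest with policy defaults applied."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {
            "name": service_name,
            "labels": {"env": environment, "cost-center": "platform"},  # mandated labels
        },
        "spec": {
            "replicas": 2 if environment == "test" else 3,  # policy: no single-replica prod
            "template": {
                "spec": {
                    "containers": [{
                        "name": service_name,
                        "image": image,
                        "resources": {  # policy: every container declares limits
                            "limits": {"cpu": "500m", "memory": "512Mi"},
                        },
                    }],
                },
            },
        },
    }


if __name__ == "__main__":
    manifest = deployment_template("orders-api", "registry.example.com/orders:1.4.2")
    print(json.dumps(manifest, indent=2))
```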
Replacing Busywork
As generative AI improves, certain IT ops skills and workflows could become its domain. Besides infrastructure-as-code, observability is another area where LLMs could play a bigger role. Generative AI could power conversational interfaces for getting reports on business metrics, server performance, or any other data, with the ability to access that data and understand how it is connected. Test generation and automation for resilience workflows, such as chaos engineering and security penetration testing, could also be well suited to generative AI.
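As a rough sketch of what such a conversational interface could look like, the snippet below combines a plain-English question with a description of the available metrics and hands it to an LLM. The call_llm parameter and the metric schema are hypothetical stand-ins, not any real product's API.

```python
# Hypothetical conversational-observability flow: a natural-language question
# plus a description of available metrics is sent to an LLM, and the model's
# answer is returned as a report. `call_llm` is a stand-in for whatever
# generative AI API is actually in use; the metric schema is made up.
from typing import Callable

METRIC_SCHEMA = """
Available metrics:
- http_requests_total{service, status}  (counter, per minute)
- cpu_utilization{host}                 (gauge, percent)
- checkout_revenue_usd                  (counter, per hour)
"""


def answer_metrics_question(question: str, call_llm: Callable[[str], str]) -> str:
    """Turn a plain-English question about metrics into a report via an LLM."""
    prompt = (
        "You are an observability assistant. Using only the metrics described "
        "below, explain how you would answer the question and summarize the result.\n"
        f"{METRIC_SCHEMA}\nQuestion: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    # Stubbed LLM call so the sketch runs without any external service.
    fake_llm = lambda prompt: "Checkout revenue is up 4% week over week; CPU is flat."
    print(answer_metrics_question("How did checkout revenue trend this week?", fake_llm))
```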
Chaos engineering is a technique that tests how resilient systems are to unexpected conditions, such as server outages or cyberattacks. Chris Riley, senior manager of developer relations at marketing tech firm HubSpot, believes generative AI could take on repetitive testing work that humans don't have time for. Virtual penetration analysts or virtual bug bounty bots could continually poke around, testing what works and what doesn't, and even exercising documentation against real-world scenarios. Generative AI could also identify gaps in systems rather than waiting for someone to report them, opening up many interesting use cases (a simple sketch of such an automated resilience check appears below).

The list of potential uses and advantages of AI keeps growing, and even with the long-standing worldwide shortage of programmers, the sector will be affected to some degree. That said, it is up to policymakers to decide how far governments should go to protect jobs, because AI will eliminate jobs across many sectors, not just programming.
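Below is a minimal, self-contained sketch of the chaos-engineering loop described above: inject a fault into a dependency, then check whether the system still meets its steady-state expectation. The toy service and fault are illustrative assumptions; in practice the experiment would target real infrastructure, and generative AI would be the piece proposing and varying such experiments.

```python
# Toy chaos experiment: break a dependency, then verify the system still
# satisfies its steady-state hypothesis. The in-memory "service" stands in
# for real infrastructure.


class InventoryService:
    """Toy service with a cache dependency that a fault injection can break."""

    def __init__(self) -> None:
        self.cache_available = True

    def lookup(self, sku: str) -> str:
        if self.cache_available:
            return f"{sku}: in stock (cache)"
        # Degraded path: fall back to a slower source instead of failing outright.
        return f"{sku}: in stock (fallback database)"


def chaos_experiment(service: InventoryService) -> bool:
    """Kill the cache dependency and verify the service still answers."""
    service.cache_available = False  # fault injection
    try:
        result = service.lookup("SKU-123")
        return "in stock" in result  # steady-state hypothesis still holds
    except Exception:
        return False


if __name__ == "__main__":
    svc = InventoryService()
    print("resilient" if chaos_experiment(svc) else "outage detected")
```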