Earlier this month, I gave a talk on Large Language Models (LLMs) and their impact on application management from an IT pro perspective at the AppManagEvent in Utrecht. Unfortunately, that talk wasn't recorded, so I made a video recreating it, because I wanted to share it with you. Below is a summary of what I discussed in the video, including a high-level view of how LLMs work, the implications for the overall job market, and specifically how all of this pertains to application management, mainly application packaging.
How Large Language Models (LLMs) work
First, a quick introduction to how Large Language Models (LLMs) work is necessary. LLMs employ transformers, a neural network architecture originally created for machine translation. The input, i.e., the word or words (tokens), goes into the encoder, which identifies which words are relevant to one another, and to what extent, by giving them a weight (i.e., a score). It is important to note that the inputs are not handled strictly in order, but rather according to their context and weight.
The LLM is trained in a semi-supervised way: first it analyzes large unlabeled (unsupervised) data sets, and then it is fine-tuned through supervised training to improve its output.
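For those who want the underlying math (you do not need it to use these tools), the weighting step described above is the scaled dot-product attention from the original transformer paper, where Q, K, and V are the query, key, and value matrices derived from the input and d_k is the key dimension:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```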
LLM effects on the job market
There is a lot of data and many predictions about how AI will replace, eliminate, or create jobs, and how it will increase productivity, in both the short and long term. Given that AI increases employee productivity, with some reports claiming gains of up to 10x, in the short term we can expect reduced hiring but minimal layoffs. A Goldman Sachs report states that generative AI could impact US employment by replacing 7% of jobs, complementing 63%, and leaving 30% unaffected.
The long-term effects are not as positive for the job market, although there is no consensus and the timelines are highly debated. According to the World Economic Forum (WEF), 50% of all employees will need reskilling by 2025 as more companies adopt AI, and the same report projects that 83 million jobs globally will be eliminated by 2027. The long-term prospects look daunting, especially since we are talking about the next two to five years.
Comparing LLMs in IT application management
In the video, I compared Google Bard and Microsoft Bing to see how they handle IT application management questions. I asked them both about Google Chrome, specifically:
- What is the latest version?
  - Bing gave the second-latest version; the newest version had been released that same day
  - Bard gave an even older version
- What is the severity of the vulnerabilities fixed?
  - Bard identified the high- and medium-severity vulnerabilities and listed some of them
  - Bing listed a few of the fixes but could not answer the question about severity level
- What are the silent installation options?
  - Bard was not able to assist with this question
  - Bing showed where Chrome can be downloaded and gave the silent command-line details (see the example below)
- Where can I obtain it?
  - Bard was not able to assist with this question
  - Bing gave several options for how to download it
Bing also suggested other relevant prompts, such as the difference between an MSI and an EXE install, showing how in tune it was with what I was trying to accomplish with the original prompt.
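Bing's exact answer is not reproduced here, but as a point of reference, a typical silent install of Chrome's enterprise MSI looks something like the line below (the MSI file name varies by release, so treat it as a placeholder):

```powershell
# Typical silent, no-reboot install of the Chrome enterprise MSI
# (file name is a placeholder; it varies by release)
msiexec /i GoogleChromeStandaloneEnterprise64.msi /qn /norestart
```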
As a further comparison, I tested the same questions against several different LLMs, presented anonymously, on the Chatbot Arena website. In this test, Llama 2 by Meta gave the best overall answers.
Prompt engineering: How you talk to AI influences your results
When AI first really took off earlier this year, the idea of a job as a prompt engineer was mainly a joke. However, the way you engage with an LLM can alter the responses.
First, assign the AI a role, and be descriptive, e.g., “You are an IT professional with 10 years of experience and a background in PowerShell automation.” This dictates the style of the AI’s responses.
Second, tell the AI what you want to accomplish, and what you don’t want. In the video, I gave the example “Write a PowerShell script to collect what software is installed on my computer. Do not include 32-bit applications.” Now it has the parameters of what I do and do not want (a sketch of the kind of script this might produce follows below).
Third, if you have one, provide an example of the answer you are looking for so the AI can generate something similar.
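As an illustration of what the example prompt above might produce (this is a minimal sketch of my own, not Bard's output), a script that honors the "no 32-bit applications" constraint can simply read the native 64-bit uninstall hive and skip the Wow6432Node key:

```powershell
# Minimal sketch: list installed 64-bit software only.
# Reading the native uninstall hive (and not Wow6432Node) excludes 32-bit applications.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion, Publisher |
    Sort-Object DisplayName
```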
Putting AI to the test
Using PowerShell
In my experience, the difference between an entry-level and a senior-level desktop engineer is knowing how to use PowerShell. However, thanks to these chatbots, anyone can produce valuable PowerShell content. Without knowing how PowerShell works, you can ask AI (in the video, I ask Bard) to write a script for you.
I asked Bard to write a PowerShell script that lists all installed software and version numbers in an Excel spreadsheet. Bard quickly created a script, which I ran, but it produced an error. I copied the entire error and pasted it back into Bard without any other context.
Bard explained what the error was, gave steps on how to fix it, and provided a modified example script that takes the error into account. I then ran the script again, and it worked, except that I received a prompt to save the file, which I didn’t want. When I gave Bard the additional parameter of not wanting to save the file, it modified the script again to fix that issue.
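I'm not reproducing Bard's scripts here, but a minimal sketch of that kind of inventory script, assuming a CSV file that Excel can open is acceptable output (writing a CSV directly to disk avoids any interactive save prompt), looks something like this; the output path is just an example:

```powershell
# Minimal sketch: inventory installed software (64-bit and 32-bit hives)
# and write it to a CSV file that Excel can open directly.
$keys = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
    'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
)

Get-ItemProperty -Path $keys -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion, Publisher |
    Sort-Object DisplayName |
    Export-Csv -Path "$env:USERPROFILE\Desktop\InstalledSoftware.csv" -NoTypeInformation
```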
Identifying setup.exe command-line options
I asked Bard to give me the command lines to silently install Zoom, ArcGIS, SAP for Windows, and so on. Bard gave me the command-line options, as well as additional arguments I might want to add and relevant installation tips.
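The exact commands differ per vendor and version, so I won't repeat Bard's answers here, but the general pattern for an MSI-based installer, wrapped in PowerShell with logging and an exit-code check, looks roughly like the sketch below. Paths and file names are placeholders, and EXE installers use vendor-specific switches (such as /S or /silent) instead of msiexec arguments:

```powershell
# Illustrative sketch only; always confirm switches against vendor documentation and test.
$msi = 'C:\Installers\ZoomInstallerFull.msi'   # placeholder path and file name
$log = "C:\Logs\$([IO.Path]::GetFileNameWithoutExtension($msi)).log"

$proc = Start-Process msiexec.exe `
    -ArgumentList "/i `"$msi`" /qn /norestart /l*v `"$log`"" `
    -Wait -PassThru

# 0 = success, 3010 = success but a reboot is required
if ($proc.ExitCode -notin 0, 3010) {
    Write-Warning "Install failed with exit code $($proc.ExitCode)"
}
```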
Be prepared to challenge answers
While these chatbots are valuable tools, you cannot assume the answers you get are correct. Some research may still be necessary, and testing always is.
I asked Bard, "Which applications require repackaging?" Bard responded quickly with wrong answers. To see the full details of what it said and how I pushed back on each answer, please watch the video. In summary, I countered every answer Bard gave, and it had to correct itself and apologize, saying it is still learning.
Concluding thoughts
It is clear that LLMs can be a valuable tool for doing better work faster, and it would be a mistake not to take advantage of them. Do not be discouraged by the potential negative impacts the future may hold; instead, embrace LLMs as the greatly beneficial tool they represent today. At the same time, be sure to question the answers you get and test them before any real-world implementation.