I was initially very reluctant to do anything with AI other than show people what it could do to get a reaction, and worry about whether some AI product would soon replace all knowledge workers, leaving me an unwitting participant in a dystopian future I did not choose.
After watching the news from the sidelines this year and mostly forgetting about the whole thing, I came back around to it recently, after OpenAI released GPT-4. I wanted to see what it could do, and decided that paying for it would give me a closer look at the newer model and its features.
I was initially surprised at how much better it seemed than version 3, and started using it at work to supplement Google searches for vague technical details and questions about obscure or deep technical topics. After a week or two of this, I have a few observations:
- It can save an enormous amount of time searching for answers to detailed technical questions about how something complex works.
- You can use it to summarize an otherwise overly large topic in a few paragraphs. This can save you a ton of time when you want to learn about a topic but don't want to wade through three or four different websites and their verbose, SEO-friendly coverage of it.
- It can LIE to you, and you won't easily be able to tell when this is happening. One trick I use is to ask it whether something is possible before asking it how to do it. If you ask how-to first, it may decide to fill any gaps in its knowledge with stuff that 'just fits in there'. Be careful.
- OpenAI is in a similar position to the Microsoft of yesteryear, in that they can watch the entire industry from a high point and just keep adding more and more features to their product, waiting for other, smaller companies to build successful ideas first. I wouldn’t suggest betting against them in the future. Tech is a race to market, and the first company there tends to dominate, even years later, unless mistakes are made.
Getting back to devops: one of my pet peeves is spending hours trying to figure out some small issue or configuration change in an open source product. Now I know open source makes the world go around, but really, some of it is so complex and poorly documented that the money you save by not paying a vendor for a similar product gets spent instead on hours of banging your head against the keyboard. On top of that, you often hit weird bugs while rolling something out, and only after much digging do you find the still-open GitHub issue in the project's repo. So how might we bring our AI monster into this equation to save us time and headaches?
Let's ask it:
You are an expert on everything related to the open source opentelemetry project in github. You carefully search through anything you know about the project based on the repos or related documentation sites, and only answer questions if you can point to a specific piece of code or readme in the opentelemetry repos or official documentation to support your answers. If you aren't sure of something you will say so instead of filling in missing information. You can get your code and readme information as well as links to other documentation sources from any repo here: https://github.com/open-telemetry
I will now ask you a question about opentelemetry:
I am running the collector in my EKS clusters right now on version 0.81.0. I noticed that some k8s metrics are being duplicated by the collectors. Can you suggest possible reasons for this problem?
And its reply, after showing that it was actually browsing the web with a browser (!!) while 'thinking':
The duplication of Kubernetes metrics by the OpenTelemetry Collector in your EKS clusters might be related to a known issue in the receiver/k8sobjects module. In version 0.81.0, a bug was identified where duplicate data could be ingested in watch mode if the client connection was reset: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases
This issue could potentially lead to the duplication of Kubernetes metrics in scenarios where the collector is running in a Kubernetes environment and experiencing client connection resets. To address this, you might want to check the release notes for versions subsequent to 0.81.0 for any fixes or workarounds related to this issue. Additionally, verifying the stability of your client connections and the watch mode configuration in the collector might also help in troubleshooting this problem.
Now in reality this issue was found and solved by another member of our team by upgrading the collector (which introduces another bug we have to deal with – yay open source), but you can see how in the heat of battle during a busy workday this could potentially save you a huge amount of time!
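If you find yourself reusing this kind of grounding prompt, the exchange above can also be scripted. Here's a minimal sketch using the official `openai` Python SDK; the model name is a placeholder, the prompt text is paraphrased from my session, and you'd need an `OPENAI_API_KEY` in your environment for the actual call:

```python
# Sketch: pairing a reusable "grounding" system prompt with a question,
# then sending both to the OpenAI chat completions API. The model name
# below is a placeholder -- substitute whichever model you have access to.

SYSTEM_PROMPT = (
    "You are an expert on the open source OpenTelemetry project on GitHub. "
    "Only answer if you can point to specific code or a readme in the "
    "open-telemetry repos or official documentation; if you aren't sure "
    "of something, say so instead of filling in missing information."
)

def build_messages(question: str) -> list[dict]:
    """Attach the grounding system prompt to a user question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

def ask(question: str) -> str:
    """Send the question to the API; requires network and an API key."""
    # Imported here so build_messages() stays usable/testable offline.
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=build_messages(question),
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(ask(
        "I am running the collector in my EKS clusters on version 0.81.0 "
        "and some k8s metrics are being duplicated. What are possible "
        "reasons for this?"
    ))
```

Keeping the grounding prompt as a constant means the "don't make things up" guardrail travels with every question, instead of you retyping it each session.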
My remaining objection, of course, is that I still feel being able to find answers without AI is a critical part of any technical worker's skill set. I worry that future generations of tech workers will have a lopsided set of skills that could cause other problems, and that we will become so dependent on AI that without it we are lost. This, however, is precisely what people worried about with the internet back in the day. So maybe I should worry less and just 'get more done'?