
The Unplanned Show, Episode 3: LLMs and Incident Response

“Because customer trust is basically PagerDuty’s number one priority, and that comes in the form of reliability, quality, security, all those things. So, what becomes really critical in these features is that there’s a human in the loop.”

A software engineer, a data scientist, and a product manager walk into a generative AI project… Using technology that didn’t exist a year ago, they identify a customer pain point they might be able to solve, draw on teammates’ experience building AI features, and test how to feed inputs and constrain outputs into something useful. Hear the full conversation here.

Read the article referenced in the episode here.

“I think it’s a shift in the technology world. We had big shifts before like the browser, mobile, and things like this. This is a big shift in the industry, and that will be also disruptive, and that’s fun to be part of this.”

Summary generated with help from ChatGPT:

In this episode of The Unplanned Show, the host welcomes three guests: Leeor Engel, a senior engineering manager at PagerDuty; Everaldo Aguiar, a senior manager on PagerDuty’s data science team; and Ben Wiegelmann, a product manager based in Lisbon. The discussion centers on the challenges of managing unplanned work in highly digitized environments, particularly the recent integration of large language models and generative AI capabilities into PagerDuty’s products. The conversation delves into the underlying problem they aimed to address: streamlining communication during incidents, such as system outages, and reducing the burden of writing status updates. Ben Wiegelmann, in his role as product manager, emphasizes the importance of staying focused on the problem rather than falling in love with the solution, highlighting how the generative AI feature simplifies summarizing incident-related conversations and updating stakeholders, ultimately optimizing response efforts.

“We know from customers they have up to three people on call 24×7 to just update stakeholders, and we think with this solution we can actually also bring this down to just one person reviewing it.”

Next, the discussion focuses on the challenges of utilizing large language models (LLMs) and generative AI in the context of incident status updates at PagerDuty. The host and guests explore the importance of embedding context into the technology and avoiding a wide-open conversational window to enhance user experience. Leeor Engel emphasizes the need for a human in the loop to ensure customer trust and reliability, allowing users to refine and correct generated content before sending. The conversation delves into the complexities of status updates during incidents, highlighting different phases and the stress associated with the initial update. Everaldo Aguiar discusses how LLMs, with their ability to recall and retain context, serve as a game-changer in managing incidents, providing a unique interface for interaction. The interview underscores the significance of LLMs in overcoming language barriers and writer’s block, enabling the generation of summaries that users can review before communicating with stakeholders.

“Having this technology that can help you gather data from different sources and generate a summary while making sure that I am in the loop and having a chance to review what’s being sent to my stakeholders is definitely something that we didn’t have available at our disposal even just a year ago.”

The conversation turns to the implementation of the new technology and the challenges of ensuring a positive user experience with large language models (LLMs) and generative AI. Ben Wiegelmann discusses the three phases users typically go through when interacting with products like ChatGPT, emphasizing the importance of simplifying the user experience to avoid frustration and prompt engineering. Leeor Engel highlights the complexities behind the scenes, noting the need to adapt to different ways clients interact with PagerDuty while maintaining a consistent positive experience. Everaldo Aguiar adds nuance to the discussion, explaining that even with the abstraction of prompts, the output from LLMs may vary, requiring additional constraints to ensure relevant and appropriate information is generated, considering factors like timeline details and avoiding unnecessary information.
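The idea of hiding prompt engineering behind a single click can be pictured as a template the product fills in on the user’s behalf. The sketch below is purely illustrative — the episode does not disclose PagerDuty’s actual prompts, and every name and constraint here is an assumption:

```python
from string import Template

# Hypothetical prompt template: wording, constraints, and names are
# illustrative assumptions, not PagerDuty's real implementation.
STATUS_UPDATE_PROMPT = Template(
    "You are drafting an incident status update for stakeholders.\n"
    "Summarize only facts from the timeline below.\n"
    "Do not speculate about root cause. Keep it under 100 words.\n\n"
    "Timeline:\n$timeline"
)

def build_status_update_prompt(timeline_entries: list[str]) -> str:
    """Collapse the 'one click' into a fully formed prompt, so the
    user never has to see or learn prompt engineering."""
    timeline = "\n".join(f"- {entry}" for entry in timeline_entries)
    return STATUS_UPDATE_PROMPT.substitute(timeline=timeline)
```

The expert-written constraints (stick to the timeline, no speculation, length limit) are baked into the template, which mirrors the point about needing additional guardrails so the model’s output stays relevant and appropriate.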

“What we tried with our product was: we have good AI people in our team, right? They are experts in writing prompts, and we bring it down to one click for the specific use case, and the person can see this magic with one click and doesn’t have to go through the frustration [and] doesn’t have to learn prompt engineering.”

The discussion then focuses on the combination of art and science in implementing large language models (LLMs) and generative AI, emphasizing the importance of embedding best practices into the technology. The conversation highlights the challenges of merging prompt engineering and refining processes, drawing on the implicit best practices accumulated over years of experience. The interviewees discuss the significance of incorporating knowledge about data and refining processes into the prompts to guide users toward effective use of the technology. The conversation also touches on PagerDuty’s OpsGuides, which are publicly shared and now being integrated into the product, providing users with best practices for incident response and communication. Shifting to the topic of data sources, the interview explores the utilization of conversations in Slack, notes in the UI, and automated timeline updates to enhance the richness of information fed into the LLM models.
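One way to picture feeding Slack conversations, UI notes, and automated timeline updates into a single model input is to merge them into one chronological feed. This is a minimal sketch under assumed data shapes — the episode does not describe PagerDuty’s internal structures:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative event record; field names are assumptions for this sketch.
@dataclass
class Event:
    at: datetime
    source: str  # e.g. "slack", "note", "timeline"
    text: str

def merge_sources(slack: list[Event],
                  notes: list[Event],
                  timeline: list[Event]) -> list[Event]:
    """Interleave all sources by timestamp so the model sees one
    coherent incident narrative instead of three separate silos."""
    return sorted([*slack, *notes, *timeline], key=lambda e: e.at)
```

Ordering by timestamp keeps the merged feed readable as a single story, which is what makes the input rich enough for a useful summary.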

“It’s not just that we’re saving the user time because we know, ‘Hey, this data is valuable in this context,’ but also here’s the right way to craft this, and those two things are coming together, the knowledge of the data and the process refinement and knowledge.”

The interview concludes with a discussion about flexibility in the communication channels that supply incident data, emphasizing that good data is essential for effective AI output. The team expresses excitement about the evolving technology landscape, the rapid development of AI features, and the unique approach PagerDuty takes in embedding the technology into the product. They highlight the significance of user feedback and of iterating on the technology to exceed customer expectations. The conversation ends with a focus on the potential of AI to impress and the anticipation of refining and improving the system over time.

“We believe that we should be also flexible to switch out sources. It goes all back to this big data quote […] ‘garbage in, garbage out.’ And that’s also the problem we have here with AI. It’s not just big data analytics, but today also with AI we need good data to have good output.”

Watch the interview:

