RunLLM – AI Support Engineer for Complex Issues

thusitha.jayalath@gmail.com July 30, 2025


Background
This podcast describes RunLLM, an AI Support Engineer designed to resolve complex customer support issues by analyzing logs, code, and documentation, rather than simply providing basic responses. Built on UC Berkeley research, this tool aims to reduce engineering time, cut down mean time to resolution (MTTR), and significantly deflect support tickets. Testimonials highlight its high-quality answers and its ability to act as a valuable team member, going beyond typical chatbot capabilities. The updated RunLLM v2 introduces features like agentic reasoning, multi-agent support, and customizable workflows via a Python SDK, allowing for tailored and efficient support operations across various teams.

The only way to discover the limits of the possible is to go beyond them

Arthur C. Clarke

Frequently Asked Questions

What is RunLLM, and what problem does it solve?

RunLLM is an AI Support Engineer designed to resolve complex customer support issues, rather than just respond to them. It addresses the challenge that traditional AI chatbots often fall short in handling intricate product queries, requiring more than just a simple knowledge base lookup. RunLLM aims to make customer support dramatically more scalable, allowing human teams to focus on high-value customer relationships.

How does RunLLM differ from traditional AI chatbots or vector databases?

Unlike traditional chatbots that often “parrot docs” or rely solely on vector databases and general AI models like GPT-4, RunLLM is built with an “agentic reasoning” engine. This allows it to deeply understand user questions and take actions such as asking for clarification, searching knowledge bases and refining those searches, and analyzing logs and telemetry data. It’s designed to debug, write validated custom code, and integrate across platforms like documentation sites, Slack, and Zendesk, making it a more comprehensive and proactive problem-solver.

What are the key features of RunLLM v2?

RunLLM v2 introduces several significant enhancements:

  • Agentic Reasoning: Enables the AI to understand user questions deeply and perform actions like asking for clarification, searching knowledge bases, refining searches, and analyzing logs and telemetry.
  • Multi-agent Support: Allows for the creation of multiple AI agents tailored to specific teams (support, success, sales), each with its own data and instructions for customized behavior.
  • Custom Workflows: Provides a Python SDK that empowers teams to control how the agent handles different situations, the types of responses it gives, and when it escalates conversations.
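
To make the custom-workflow idea concrete, here is a minimal sketch of the kind of routing logic a team might express through a Python SDK. All names here (`handle_question`, the thresholds, the action labels) are illustrative assumptions, not the actual RunLLM SDK API:

```python
# Hypothetical sketch: workflow logic deciding how an AI support agent
# responds, of the sort RunLLM v2's Python SDK lets teams control.
# Function and action names are invented for illustration.

def handle_question(confidence: float, is_billing_issue: bool) -> str:
    """Pick an action for an incoming question based on answer confidence."""
    if is_billing_issue:
        return "escalate"  # always route billing issues to a human
    if confidence < 0.4:
        return "clarify"  # ask the user a follow-up question first
    if confidence < 0.8:
        return "answer_with_sources"  # answer, but link sources prominently
    return "answer"

# A low-confidence question triggers a clarification request:
print(handle_question(0.3, False))  # clarify
```

The point is not the specific thresholds but that escalation rules, response types, and per-team behavior become ordinary code the team owns, rather than fixed chatbot settings.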

What tangible benefits have early customers experienced using RunLLM?

Early customers have reported impressive results:

  • DataHub: Achieved $1 million in cost savings in engineering time.
  • vLLM: RunLLM handles 99% of all community questions.
  • Arize AI: Experienced a 50% reduction in support workload.

Additionally, users like Unsloth have noted that RunLLM’s answers are of “very high quality” and “seemed to have been written by one of our team members,” often getting questions right 95% of the time.

How does RunLLM ensure the quality and accuracy of its responses?

RunLLM’s focus is on delivering answer quality comparable to a team’s top support engineer. It achieves this through its agentic reasoning, which goes beyond simply retrieving information to truly understanding the context and taking necessary actions like analyzing logs or code. When it does get something wrong, it directly links to the sources of information, allowing users to investigate further. The team also emphasizes continuous learning and encourages feedback to rapidly improve the AI’s performance.

Can RunLLM be customized to fit specific team needs?

Yes, customization is a core aspect of RunLLM v2. With multi-agent support, teams can create agents with specific data and instructions for different departments (support, success, sales). Furthermore, the new Python SDK allows teams to define custom workflows, controlling the agent’s behavior, response types, and when conversations should be escalated, ensuring the AI aligns with unique operational requirements.

What kind of support issues is RunLLM designed to resolve?

RunLLM is built to resolve complex support issues, particularly those requiring an understanding of logs, code, and documentation. It can debug problems, potentially write validated custom code solutions, and integrate with various developer environments. This goes beyond simple FAQ-style questions to truly assist with technical and intricate product challenges.

How can new users try RunLLM?

Prospective users can try RunLLM for free by creating an account and simply pasting the URL to their documentation site. RunLLM will then process the data, allowing users to start asking questions about their product within minutes. The company encourages users to “ask your hardest question, and see how far it gets” and provide feedback for continuous improvement.

Check The Product

Download now: RunLLM – AI Support Engineer for Complex Issues
