
Making LLMs (Really) Useful in the Real World


Tobias Zwingmann explores a crucial topic in AI adoption: how to move beyond the hype and make Large Language Models (LLMs) genuinely useful in business environments. His main point? LLMs are powerful, but without tools and integration, they often fall short in solving real-world problems.

The Limitation of LLMs

LLMs are great at generating text, but they have clear limitations, especially when it comes to:

  • Precise calculations: Numbers get treated as just another type of token, with no inherent mathematical meaning attached to them

  • Real-time information: LLMs can only work with the data they were trained on

  • External verification: They have no built-in way to check if what they're saying is actually true

  • Executing code: While LLMs are excellent at writing code, they can't run it themselves; they need a runtime environment to execute it

Why This Matters For Your Business

Relying on raw LLM capabilities leads to disappointment in business environments. You can't expect a model to replace spreadsheets, access internal systems, or automate tasks unless it's paired with tools. The solution is to let LLMs do what they do best (like handling a conversation) while delegating specialized tasks to purpose-built services. This is where "tool use" comes in.
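A minimal sketch of that delegation loop, with a stand-in `fake_llm` and a made-up calculator tool (no real model or API is involved; the names are invented for illustration):

```python
# Illustrative tool-use loop: the model's reply either answers directly
# or requests a tool call, which we execute and feed back as the result.
import json

# Registry of purpose-built tools the LLM may delegate to.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}

def fake_llm(prompt: str) -> str:
    # A real model would decide this dynamically; here we hard-code
    # a tool request to show the shape of the exchange.
    return json.dumps({"tool": "calculator", "input": "240 * 12"})

def answer(prompt: str) -> str:
    reply = json.loads(fake_llm(prompt))
    if "tool" in reply:
        # The LLM recognized it needs help: run the tool, return its output.
        tool_result = TOOLS[reply["tool"]](reply["input"])
        return f"Result (via {reply['tool']}): {tool_result}"
    return reply["text"]

print(answer("What is 240 * 12?"))  # → Result (via calculator): 2880
```

The key design point: the model never does the arithmetic itself; it only decides *which* tool to call and *with what input*, and the precise work happens outside the model.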

In essence, tool use enables an LLM to recognize when it needs help and trigger an API call to an external service.

Practical Deep-Dive: Spreadsheet Calculations

Zwingmann uses a relatable example: calculations in spreadsheets. He shares a 5-step approach using ChatGPT's “Custom GPT” setup:

  • Step 1: Upload the spreadsheet to Grid (or connect it to Google Sheets)

  • Step 2: Select which cells should be exposed to the LLM

  • Step 3: Copy the auto-generated instructions to your LLM

  • Step 4: Create a new custom action

  • Step 5: Copy and paste your private API key as well as the provided schema
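Behind the scenes, a custom action boils down to an authenticated HTTP call to the spreadsheet service. A hypothetical sketch of what that request looks like (the endpoint URL and payload shape are invented for illustration; Grid's real API will differ):

```python
# Sketch: build the kind of authenticated request a custom action sends
# to a spreadsheet-calculation API. Placeholder endpoint and payload.
import json
import urllib.request

def build_calc_request(api_key: str, cell_values: dict) -> urllib.request.Request:
    """POST the exposed cell values; the service recalculates the sheet."""
    payload = json.dumps({"cells": cell_values}).encode()
    return urllib.request.Request(
        "https://api.example.com/v1/calc",  # placeholder URL, not Grid's real endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # the private API key from Step 5
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_calc_request("YOUR_PRIVATE_KEY", {"A1": 1200, "A2": 0.05})
print(req.get_header("Authorization"))
```

The schema you paste in Step 5 is what tells the LLM this endpoint exists and which parameters it accepts; the key authenticates the call.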

Other Common Tool Use Cases


  • Knowledge retrieval: LLMs can query Wikipedia, company knowledge bases, or documentation before responding (typically RAG)

  • Real-time data access: Tools can pull current stock prices, weather forecasts, or other time-sensitive information from real-time APIs

  • Code execution: LLMs can write code, then actually run it and see the results before giving you the final solution

  • Search: Let the LLM do a (web) search to gather the necessary information.
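The code-execution case above can be sketched as follows. Here `model_generated_code` stands in for real LLM output, and a production setup would sandbox the execution rather than call `exec` directly:

```python
# Sketch: a "code execution" tool that runs model-written Python and
# returns its printed output, so the final answer is grounded in real results.
import contextlib
import io

def run_python(code: str) -> str:
    """Execute a snippet and capture anything it prints."""
    buffer = io.StringIO()
    namespace: dict = {}
    with contextlib.redirect_stdout(buffer):
        exec(code, namespace)  # NOTE: sandbox this in production
    return buffer.getvalue().strip()

# Stand-in for code the LLM wrote in response to a user question.
model_generated_code = "print(sum(i * i for i in range(1, 11)))"
result = run_python(model_generated_code)
print(result)  # → 385
```

This is the pattern behind features like ChatGPT's code interpreter: the model writes the program, a separate runtime executes it, and the model reports the verified output.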

The Growing Tool Ecosystem

Tool use is becoming a standardized approach in the AI industry. Anthropic has recently launched their Model Context Protocol (MCP), which provides a unified interface for exposing tools to LLMs so you don't have to wire up every service manually (as we did in the example above). MCP offers three main advantages:

  • Reliability: Tools that handle specialized tasks dramatically improve the accuracy and reliability of AI solutions

  • Flexibility: As new tools emerge, they can be integrated without needing to retrain the entire AI model

  • Extensibility: Your custom business systems can be connected to AI through the same tool-use patterns.
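Conceptually, exposing a tool through a unified interface means publishing its name, a description, and an input schema the model can read. A rough sketch of that shape (illustrative only; this mirrors the idea, not the exact MCP wire format):

```python
# Sketch: the general shape of a tool definition an LLM can discover.
# Field names here are illustrative, not the literal MCP protocol.
calculator_tool = {
    "name": "calculate",
    "description": "Evaluate a spreadsheet-style arithmetic expression.",
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {
                "type": "string",
                "description": "The expression to evaluate, e.g. '1200 * 0.05'",
            },
        },
        "required": ["expression"],
    },
}
```

Because the model only sees this declarative description, a new tool can be added, or an internal business system connected, without retraining anything: the interface is the contract.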


Conclusion

Tool use represents a significant shift in how we interact with and benefit from LLMs. Rather than expecting LLMs to excel at everything, we're building an ecosystem where they can delegate specialized tasks to purpose-built tools – resulting in more reliable, capable AI systems.
