LLM Learning Series 2. Function Calling

Introduction

For typical LLM interactions, a single prompt or a few rounds of chat are sufficient to achieve the desired result. However, some tasks require the LLM to access information beyond its internal knowledge base. For example, retrieving today's weather for a specific city or searching for a particular anime necessitates calling external functions.

What is Function Calling?

Function calling in LLMs empowers the models to generate JSON objects that trigger external functions within your code. This capability enables LLMs to connect with external tools and APIs, expanding their ability to perform diverse tasks.

Function Calling Execution Steps

User calls the LLM API with tools and a user prompt: the user provides a prompt and specifies the available tools.

> What is the weather like in San Francisco?

Define Tool Schema

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]
```

Define Dummy Function

```python
# Example function hard-coded to return the same weather
def get_current_weather(location, unit="fahrenheit"):
    """Get...
```
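The dispatch step that follows the excerpt can be sketched locally without calling any API: the model would return a tool call naming the function and a JSON string of arguments, and your code looks the function up and invokes it. The `tool_call` payload below is a hypothetical example of what a model might emit, and the dummy weather data is invented for illustration.

```python
import json

# Dummy tool, mirroring the excerpt: always returns the same weather
def get_current_weather(location, unit="fahrenheit"):
    """Return fake weather data for the given location."""
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

# A tool call as an LLM might emit it (hypothetical example payload)
tool_call = {
    "name": "get_current_weather",
    "arguments": '{"location": "San Francisco, CA", "unit": "celsius"}',
}

# Dispatch table: map tool names the model may request to local functions
available_tools = {"get_current_weather": get_current_weather}

func = available_tools[tool_call["name"]]
result = func(**json.loads(tool_call["arguments"]))
print(result)
```

In a full loop, `result` would be appended to the conversation as a tool message and sent back to the LLM, which then composes the final natural-language answer.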

May 5, 2024

LLM Learning Series 1. Prompt Engineering

Mastering the Art of LLM Prompts

Large Language Models (LLMs) like GPT-4 and Claude possess remarkable capabilities. However, unlocking their full potential requires effective communication through well-crafted prompts. This guide delves into the art of prompt engineering, offering a step-by-step approach, from fundamental principles to advanced techniques, to harness the true power of LLMs.

Step 1: Choosing the Optimal Model

- Latest and Greatest: Newer models like GPT-4 Turbo offer significant advantages over predecessors like GPT-3.5 Turbo, including smoother natural language understanding. For simpler tasks, extensive prompt engineering may be less crucial.
- Benchmarking: Utilize resources like LLM leaderboards and benchmark results to compare models and identify the best fit for your specific needs.
- Examples: For nuanced language translation, GPT-4 Turbo's contextual understanding is likely superior to older models. For tasks that require both capability and speed, the open-source Llama-3-70b model is an excellent option.

Step 2: Establishing Clear Communication

Clarity and Specificity

- Explicit Instructions: Treat the LLM as a collaborator requiring clear direction. Define the task, desired outcome, format, style, and output length explicitly, avoiding ambiguity.
- Contextual Grounding: Provide relevant background information and context to guide the LLM towards the desired response, considering the intended audience and purpose.
- Separation of Concerns: Clearly separate instructions from context using ### or """....
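The "Separation of Concerns" tip above can be sketched as a small prompt-building snippet. The instruction and context strings here are invented for illustration; the point is only the ### delimiters that keep the two sections visibly distinct for the model.

```python
# Hypothetical sketch: separating instructions from context with ### delimiters
instructions = "Summarize the text below in one sentence for a general audience."
context = (
    "Large Language Models generate text by repeatedly predicting the next "
    "token, conditioned on everything that came before it in the prompt."
)

# Build the prompt with clearly delimited sections
prompt = f"""### Instructions
{instructions}

### Context
{context}"""

print(prompt)
```

The same pattern works with `"""` as the delimiter; what matters is that the boundary between "what to do" and "what to do it on" is unambiguous.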

April 27, 2024

Making Vocabulary Study Less Tedious with an LLM and a Telegram Bot

Always stuck at "abandon" when memorizing English vocabulary? Take a page from Duolingo 🦉 and have the words proactively remind you instead. Memorizing vocabulary is unavoidable in English learning, from elementary school through graduate school, and even some jobs require it. But grinding through a word book, or drilling words mechanically on your phone, is far too inefficient. With LLMs so popular now, why not put them to use? After all, the middle "L" in LLM stands for Language: LLMs may fall short on other rigorous tasks, but languages like English are their strong suit. If technology can solve a problem, don't make things hard for yourself! This post shares how to use the Eudic (欧路词典) API, an LLM...

April 19, 2024