My Podcast Journey

In this era of information overload, how to acquire knowledge efficiently and keep learning has become an important question. This post shares my experience of using podcasts to learn English and broaden my technical horizons, in the hope that it offers some reference for anyone who wants to give podcasts a try.

Background: A Lot of Free Time, Courtesy of the Pandemic

The 2020 pandemic changed how many people work, and remote and hybrid work gradually became mainstream. This change brought more free time at home and during commutes. Facing the uncertainty of China's social and economic outlook, I decided to use this time to improve my own competitiveness, especially in En...
LLM Learning Series 2. Function Calling
Introduction

For typical LLM interactions, a single prompt or a few rounds of chat are sufficient to achieve the desired result. However, some tasks require the LLM to access information beyond its internal knowledge base. For example, retrieving today’s weather information for a specific city or searching for a particular anime necessitates calling external functions.

What is Function Calling?

Function calling in LLMs empowers the models to generate JSON objects that trigger external functions within your code. This capability enables LLMs to connect with external tools and APIs, expanding their ability to perform diverse tasks.

Function Calling Execution Steps

User calls LLM API with tools and a user prompt: The user provides a prompt and specifies the available tools.

    What is the weather like in San Francisco?

Define Tool Schema

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ]

Define Dummy Function

    # Example function hard coded to return the same weather
    def get_current_weather(location, unit="fahrenheit"):
        """Get...
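To make the first step concrete, here is a minimal sketch (not from the original post) of sending the prompt together with the tool schema above to a chat completions endpoint. It assumes the OpenAI Python SDK and uses an example model name; adapt the client setup and model to your environment.

    # Minimal sketch: ask the model, offering it the `tools` schema defined above.
    # Assumes the OpenAI Python SDK; the model name is only an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-turbo",  # example model name
        messages=[
            {"role": "user", "content": "What is the weather like in San Francisco?"}
        ],
        tools=tools,  # the list built in "Define Tool Schema" above
    )

    # If the model decides to call a tool, the call arrives as a function name
    # plus a JSON string of arguments that your code is expected to execute.
    tool_call = response.choices[0].message.tool_calls[0]
    print(tool_call.function.name, tool_call.function.arguments)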
LLM Learning Series 1. Prompt Engineering
Mastering the Art of LLM Prompts

Large Language Models (LLMs) like GPT-4 and Claude possess remarkable capabilities. However, unlocking their full potential requires effective communication through well-crafted prompts. This guide delves into the art of prompt engineering, offering a step-by-step approach – from fundamental principles to advanced techniques – to harness the true power of LLMs.

Step 1: Choosing the Optimal Model

Latest and Greatest: Newer models like GPT-4 Turbo offer significant advantages over predecessors like GPT-3.5 Turbo, including smoother natural language understanding. For simpler tasks, extensive prompt engineering may be less crucial.

Benchmarking: Utilize resources like LLM leaderboards and benchmark results to compare models and identify the best fit for your specific needs.

Examples: For nuanced language translation, GPT-4 Turbo’s contextual understanding is likely superior to older models. For tasks that require both capability and speed, the Llama-3-70b open-source model is an excellent option.

Step 2: Establishing Clear Communication

Clarity and Specificity

Explicit Instructions: Treat the LLM as a collaborator requiring clear direction. Define the task, desired outcome, format, style, and output length explicitly, avoiding ambiguity.

Contextual Grounding: Provide relevant background information and context to guide the LLM towards the desired response, considering the intended audience and purpose.

Separation of Concerns: Clearly separate instructions from context using ### or """....
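As a small illustration of the separation-of-concerns tip (my own sketch, not from the post), the template below keeps the instruction, the context, and the expected output format in clearly delimited sections; the variable name and the summarization task are hypothetical.

    # Hypothetical prompt template showing instruction/context separation with ### markers.
    article = "..."  # the text you actually want summarized

    prompt = f"""### Instruction
    Summarize the article below in exactly three bullet points, in plain English.

    ### Context
    {article}

    ### Output format
    - point 1
    - point 2
    - point 3
    """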
Making Vocabulary Memorization Less Boring with an LLM and a Telegram Bot
Always stuck on "abandon" when memorizing English vocabulary? Take a cue from Duolingo 🦉 and have the words come to remind you instead.

Memorizing vocabulary is simply unavoidable when learning English, from primary school through graduate school, and even in some jobs. But chewing through a word book, or dutifully drilling words in a phone app, is painfully inefficient. With LLMs as popular as they are now, why not put them to work? After all, the L in the middle of LLM stands for Language: LLMs may be a bit underwhelming at other rigorous tasks, but languages such as English are exactly their strong suit. If technology can solve the problem, don't make life harder for yourself!

This post shares how to use the Eudic (欧路词典) API, an LLM...
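The post walks through the whole pipeline; as a taste of just the delivery side, here is a minimal sketch (my own, not from the post) that pushes a word reminder through the Telegram Bot API over plain HTTP. The token, chat id, and the hard-coded word stand in for what the real bot would fetch from the Eudic API and an LLM.

    # Minimal sketch: push a vocabulary reminder via the Telegram Bot API.
    # BOT_TOKEN and CHAT_ID are placeholders; the word below is hard coded where
    # a real bot would pull today's word from the Eudic API and an LLM-written note.
    import requests

    BOT_TOKEN = "123456:ABC-placeholder"
    CHAT_ID = "1000000"

    def send_word_reminder(word: str, note: str) -> None:
        url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
        payload = {"chat_id": CHAT_ID, "text": f"📖 {word}\n{note}"}
        requests.post(url, json=payload, timeout=10).raise_for_status()

    send_word_reminder("abandon", "to leave somebody or something, especially for good")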
Traefik Architecture and Source Code Analysis: A Deep Dive
Traefik is a widely adopted open-source HTTP reverse proxy and load balancer that simplifies the routing and load balancing of requests for modern web applications. It boasts dynamic configuration capabilities and supports a multitude of providers, positioning itself as a versatile solution for orchestrating complex deployment scenarios. In this blog post, we will delve into the architecture of Traefik and dissect the key components of its source code to furnish a more nuanced understanding of its operational mechanics.

Traefik Architecture: A High-Level Overview

At its core, Traefik’s architecture is composed of several integral components that collaborate to facilitate dynamic routing and load balancing:

Static Configuration: These are foundational settings for Traefik, encompassing entry points, providers, and API access configurations. They can be specified via file, command-line arguments, or environment variables.

Dynamic Configuration: This pertains to the routing rules, services, and middlewares that are adaptable based on the state of the infrastructure. Traefik’s compatibility with a myriad of providers, such as Docker, Kubernetes, Consul Catalog, among others, underscores its dynamism.

Providers: Acting as the bridge between Traefik and service discovery mechanisms, providers are tasked with sourcing and conveying dynamic configuration to Traefik. Each provider is tailored to integrate with different technologies like Docker, Kubernetes, and Consul....
Streamlining Real-Time Data: Master HTML5 SSE like ChatGPT
Introduction

In the age of real-time interactivity where services like ChatGPT excel, it’s crucial for developers to leverage technologies that allow for seamless data streaming in their applications. This article will delve into the world of HTML5 Server-Sent Events (SSE), a powerful tool akin to the technology behind conversational AI interfaces. Similar to how ChatGPT streams data to provide instant responses, SSE enables web browsers to receive updates from a server without the need for repetitive client-side requests. Whether you’re building a chat application, a live notification system, or any service requiring real-time data flow, this guide will equip you with the knowledge to implement SSE efficiently in your applications, ensuring a responsive and engaging user experience.

Understanding Server-Sent Events (SSE)

Server-Sent Events (SSE) is a web technology that facilitates the server’s ability to send real-time updates to clients over an established HTTP connection. Clients can receive a continuous data stream or messages via the EventSource JavaScript API, which is incorporated in the HTML5 specification by WHATWG. The official media type for SSE is text/event-stream.

Here is an illustrative example of a typical SSE response:

    event:message
    data:The Current Time Is 2023-12-30 23:00:21

    event:message
    data:The Current Time Is 2023-12-30 23:00:31

    event:message
    data:The Current Time Is 2023-12-30 23:00:41

    event:message
    data:The Current Time Is 2023-12-30 23:00:51

Fields in SSE Messages

Messages transmitted via SSE may contain the following fields:...
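To show what produces a stream like the one above, here is a minimal server-side sketch, assuming Flask (my own example, not from the article): it emits a "message" event with the current time every ten seconds under the text/event-stream media type. The route path and the interval are arbitrary choices.

    # Minimal SSE endpoint sketch, assuming Flask.
    # Streams one "message" event with the current time every 10 seconds.
    import time
    from datetime import datetime

    from flask import Flask, Response

    app = Flask(__name__)

    def time_stream():
        while True:
            now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            # Each SSE event is terminated by a blank line.
            yield f"event:message\ndata:The Current Time Is {now}\n\n"
            time.sleep(10)

    @app.route("/sse")
    def sse():
        return Response(time_stream(), mimetype="text/event-stream")

    if __name__ == "__main__":
        app.run()  # consume in the browser with: new EventSource("/sse")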
How to Become a Contributor to Open Source Projects
Introduction

The basic workflow:
- Pick a project: one you touch at work, one you use day to day, one whose tech stack you know well, ……
- Find something to fix: code, spelling, documentation, tests, ……
- Fork
- Make the change: code, tests, comments, documentation
- Sign the contribution agreement: CLA, DCO
- Open a pull request: CI, review, merge
- Afterwards: close the issue, wait for the release, keep contributing and grow into a maintainer

General principles: how to be a good open source contributor
- Identify your skills and tech stack, and pick an open source project that matches them
- Get to know the project's code structure, features, and conventions, and read its contribution guide
- Every kind of contribution counts, whether fixing bugs, adding features, writing documentation, or testing
- Start with small changes, for example fixing...
Structured concurrency
Introduction

Definition

According to Wikipedia:

Structured concurrency is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by using a structured approach to concurrent programming. The core concept is the encapsulation of concurrent threads of execution (here encompassing kernel and userland threads and processes) by way of control flow constructs that have clear entry and exit points and that ensure all spawned threads have completed before exit. Such encapsulation allows errors in concurrent threads to be propagated to the control structure’s parent scope and managed by the native error handling mechanisms of each particular computer language. It allows control flow to remain readily evident by the structure of the source code despite the presence of concurrency. To be effective, this model must be applied consistently throughout all levels of the program – otherwise concurrent threads may leak out, become orphaned, or fail to have runtime errors correctly propagated. Structured concurrency is analogous to structured programming, which introduced control flow constructs that encapsulated sequential statements and subroutines.

In short, structured concurrency (Structu...
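As a concrete illustration of the definition (my own example, not necessarily the one the post uses), Python 3.11's asyncio.TaskGroup implements structured concurrency: every task spawned inside the block must finish before the block is exited, and an error in any task propagates to the enclosing scope.

    # Structured concurrency sketch using Python 3.11+'s asyncio.TaskGroup.
    # Both tasks are guaranteed to have completed (or been cancelled) by the time
    # the `async with` block exits, and exceptions propagate to this scope.
    import asyncio

    async def work(name: str, delay: float) -> str:
        await asyncio.sleep(delay)
        return f"{name} finished after {delay}s"

    async def main() -> None:
        async with asyncio.TaskGroup() as tg:
            t1 = tg.create_task(work("a", 0.1))
            t2 = tg.create_task(work("b", 0.2))
        # The scope has closed: both results are available here.
        print(t1.result(), t2.result())

    asyncio.run(main())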
An Introduction to Generics in Go 1.18
What Are Generics

Generic programming is a style, or paradigm, of programming languages. Generics let programmers write code in terms of types that are specified later and are supplied as parameters when the code is instantiated.

Basic Usage of Generics in Golang

Example: a map operation

    package main

    import (
        "fmt"
    )

    func mapFunc[T any, M any](a []T, f func(T) M) []M {
        n := make([]M, len(a), cap(a))
        for i, e := range a {
            n[i] = f(e)
        }
        return n
    }

    func main() {
        vi := []int{1, 2, 3, 4, 5, 6}
        vs := mapFunc(vi, func(v int) string {
            return "<" + fmt.Sprint(v*v) + ">"
        })
        fmt.Println(vs)
    }

min and max functions

    package main

    import (
        "fmt"
    )

    type ordered interface {
        ~int | ~int8 | ~int16 | ~int32 | ~int64 |
            ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 |...
Developing RESTful APIs with gRPC-Gateway
Introduction to gRPC-Gateway

gRPC-Gateway is a protoc plugin. It reads a gRPC service definition and generates a reverse proxy server that translates a RESTful JSON API into gRPC. The server is generated from custom options in the gRPC definition you write.

Installation and Usage

Required tools:
- protobuf: the protoc command line needed to compile protocol buffers. Installation: http://google.github.io/proto-lens/installing-protoc.html
- protoc-gen-go: generates .go files from proto files. Installation: https://grpc.io/docs/languages/go/quickstart/
- protoc-gen-go-grpc: generates gRPC-related .go files from proto files. Installation: https://grpc.io/docs/languages/go/quickstart/
- protoc-gen-grpc-gateway: generates gRPC-Gateway-related .go files from proto files. Installation: https://github.com/grpc-ecosystem/grpc-gateway#installation
- protoc-gen-openapiv2: generates the parameter files needed for swagger documentation from proto files. Installation: https://github.com/grpc-ecosystem/grpc-gateway#installation
- buf: a protobuf management tool; optional, simplifies the command line and protobuf file management. Installation: https://docs.buf.build/installation

Steps ...
Differences in Concurrency Patterns Between Python and Go
Concurrency in Python

In Python, the early approaches to concurrency were the traditional multi-process and multi-threaded models, much as in Java, along with a number of third-party asynchronous frameworks (gevent/tornado/twisted, and so on). In the Python 3 era, the official asyncio library and the async/await syntax arrived as Python's official coroutine implementation and have gradually become widespread.

Processes

A multi-process programming example:

    from multiprocessing import Process

    def f(name):
        print('hello', name)

    if __name__ == '__main__':
        p = Process(target=f, args=('bob',))
        p.start()
        p.join()

The multiprocessing API is close to threading's, which makes it fairly easy to build multi-process programs, and it is officially recommended as a way to work around the GIL limitation on multithreading. Note, however, that the arguments used when creating a process...
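For contrast with the process-based example (my own sketch, not part of the original excerpt), here is the coroutine style the post goes on to discuss, using asyncio and async/await; the coroutine mirrors the f function above.

    # Minimal asyncio counterpart to the multiprocessing example above.
    import asyncio

    async def f(name):
        print('hello', name)

    async def main():
        # Run two coroutines concurrently in a single thread and wait for both.
        await asyncio.gather(f('bob'), f('alice'))

    if __name__ == '__main__':
        asyncio.run(main())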
Notes on Distributed Systems for Fun and Profit (Part 2)
2. Up and Down the Levels of Abstraction

System model

Programs in a distributed system:
- run concurrently on independent nodes
- are connected by a network that may introduce nondeterminism and message loss
- and have no shared memory or shared clock

A system model enumerates the many assumptions relevant to a particular system design, assumptions about the environment and facilities on which the distributed system is implemented:
- what capabilities the nodes have and how they can fail
- how the communication links operate and how they can fail
- properties of the system as a whole, such as assumptions about time and order

A robust system model is one that makes the weakest assumptions; strong assumptions produce a system model that is easy to reason about.

Nodes in this model serve as...
Notes on Distributed Systems for Fun and Profit (Part 1)
0. Preface

“Distributed Systems for Fun and Profit” is a small book introducing distributed systems that mixu released for free on the web in 2013.

Two consequences of being distributed:
- information travels at the speed of light
- independent nodes fail independently

Distributed systems deal with distance and with having more than one node.

1. Distributed Systems at a High Level

The basic tasks of a computer:
- storage
- computation

Distributed programming is about solving on multiple machines the same problem you would solve on a single machine, usually because the problem no longer fits on a single machine. At small scale, upgrading the hardware of a single node solves the problem; but as the problem grows, there comes a point where upgrading a single node no longer copes, or becomes too expensive, and a distributed system is needed. Currently...
Lessons Learned from Compiling CPython
When You Need to Compile CPython Yourself

Most operating systems ship a prebuilt CPython, and installing it directly from the package manager is usually good enough. In some situations, though, you have to compile CPython yourself to meet specific requirements:
- the Python version provided by the OS is too old, and neither python.org nor the system package repositories offer a newer prebuilt version
- the prebuilt Python does not meet requirements around performance, extensions, and so on, for example it was built without compiler optimizations, or its OpenSSL/SQLite versions are too old
- you want to take part in CPython development, or try out Alpha/Beta/RC releases early

Notes on compiling CPython on an older Linux distribution...
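Before deciding to build your own, it can help to check what the interpreter you already have was built with. The sketch below (my own, not from the post) prints the linked OpenSSL and SQLite versions and, on Unix-like builds, the ./configure flags recorded at build time.

    # Quick checks on an existing CPython build before compiling your own.
    import ssl
    import sqlite3
    import sys
    import sysconfig

    print(sys.version)                              # interpreter version and build info
    print(ssl.OPENSSL_VERSION)                      # OpenSSL the interpreter is linked against
    print(sqlite3.sqlite_version)                   # SQLite library version in use
    print(sysconfig.get_config_var("CONFIG_ARGS"))  # ./configure flags, e.g. --enable-optimizations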