|
17 | 17 | "本教程展示一个非常简单的 LLM 应用(分类器)的评估流程,该应用会将输入数据标记为“有毒(Toxic)”或“无毒(Not Toxic)”。"
|
18 | 18 | ]
|
19 | 19 | },
|
| 20 | + { |
| 21 | + "cell_type": "code", |
| 22 | + "execution_count": null, |
| 23 | + "id": "4913e104-82e6-4932-8e80-2b8bd57553c3", |
| 24 | + "metadata": {}, |
| 25 | + "outputs": [], |
| 26 | + "source": [ |
| 27 | + "!pip show langsmith" |
| 28 | + ] |
| 29 | + }, |
20 | 30 | {
|
21 | 31 | "cell_type": "markdown",
|
22 | 32 | "id": "9c8c8225-42bd-4c9b-adeb-62c83f80c9d3",
|
|
157 | 167 | },
|
158 | 168 | {
|
159 | 169 | "cell_type": "code",
|
160 |
| - "execution_count": 20, |
| 170 | + "execution_count": 4, |
161 | 171 | "id": "eeec0c29-5e85-46e1-915b-619b68627d63",
|
162 | 172 | "metadata": {},
|
163 | 173 | "outputs": [
|
164 | 174 | {
|
165 |
| - "ename": "ImportError", |
166 |
| - "evalue": "cannot import name 'evaluate' from 'langsmith.evaluation' (/home/ubuntu/miniconda3/envs/langchain/lib/python3.10/site-packages/langsmith/evaluation/__init__.py)", |
167 |
| - "output_type": "error", |
168 |
| - "traceback": [ |
169 |
| - "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", |
170 |
| - "\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)", |
171 |
| - "Cell \u001b[0;32mIn[20], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangsmith\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mevaluation\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m evaluate\n\u001b[1;32m 3\u001b[0m \u001b[38;5;66;03m# 数据集名称\u001b[39;00m\n\u001b[1;32m 4\u001b[0m dataset_name \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mToxic Queries\u001b[39m\u001b[38;5;124m\"\u001b[39m\n", |
172 |
| - "\u001b[0;31mImportError\u001b[0m: cannot import name 'evaluate' from 'langsmith.evaluation' (/home/ubuntu/miniconda3/envs/langchain/lib/python3.10/site-packages/langsmith/evaluation/__init__.py)" |
| 175 | + "name": "stderr", |
| 176 | + "output_type": "stream", |
| 177 | + "text": [ |
| 178 | + "/home/ubuntu/miniconda3/envs/langchain/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", |
| 179 | + " from .autonotebook import tqdm as notebook_tqdm\n" |
| 180 | + ] |
| 181 | + }, |
| 182 | + { |
| 183 | + "name": "stdout", |
| 184 | + "output_type": "stream", |
| 185 | + "text": [ |
| 186 | + "View the evaluation results for experiment: 'Toxic Queries-465b0ea2' at:\n", |
| 187 | + "https://smith.langchain.com/o/3d35c1a5-b729-4d18-b06d-db0f06a30bc1/datasets/e1df55ff-b66c-4bcf-b5fd-7c63a847136e/compare?selectedSessions=2900c5b7-9dd5-482a-ab79-32888be3d5b9\n", |
| 188 | + "\n", |
| 189 | + "\n" |
| 190 | + ] |
| 191 | + }, |
| 192 | + { |
| 193 | + "name": "stderr", |
| 194 | + "output_type": "stream", |
| 195 | + "text": [ |
| 196 | + "6it [00:01, 4.71it/s]\n" |
173 | 197 | ]
|
174 | 198 | }
|
175 | 199 | ],
|
|
218 | 242 | },
|
219 | 243 | {
|
220 | 244 | "cell_type": "code",
|
221 |
| - "execution_count": 23, |
| 245 | + "execution_count": 5, |
222 | 246 | "id": "46817304-1e17-4ca1-a5ba-faebd80c3728",
|
223 | 247 | "metadata": {},
|
224 | 248 | "outputs": [],
|
|
251 | 275 | },
|
252 | 276 | {
|
253 | 277 | "cell_type": "code",
|
254 |
| - "execution_count": null, |
| 278 | + "execution_count": 6, |
255 | 279 | "id": "096e3129-8e5e-42b9-8c42-d59f072f20c5",
|
256 | 280 | "metadata": {},
|
257 | 281 | "outputs": [],
|
|
308 | 332 | },
|
309 | 333 | {
|
310 | 334 | "cell_type": "code",
|
311 |
| - "execution_count": 32, |
| 335 | + "execution_count": 7, |
312 | 336 | "id": "431bbdb3-d4a3-445a-9cfc-2e62adff3ad0",
|
313 | 337 | "metadata": {},
|
314 | 338 | "outputs": [
|
315 | 339 | {
|
316 | 340 | "data": {
|
317 | 341 | "text/plain": [
|
318 |
| - "\"To build a RAG (Retrieval-Augmented Generation) chain in LangChain Expression Language (LCEL), you integrate components that handle retrieval (searching for relevant information from a database or document collection) and generation (creating responses based on the retrieved information). The LCEL document provided doesn't go into specifics about a RAG chain configuration, but based on the principles of LCEL, I can guide you through constructing a simplified RAG chain using hypothetical LCEL com\"" |
| 342 | + "\"To build a Retrieval-Augmented Generation (RAG) chain in LCEL, you would need to compose a chain that includes a retriever component to fetch relevant documents or data based on a query, and then pass that retrieved data to a generator model to produce a final output. In LCEL, this would typically involve using `Retriever` and `Generator` components, which you can easily piece together thanks to LCEL's composable nature.\\n\\nThe following example is a simplified step-by-step guide to building a bas\"" |
319 | 343 | ]
|
320 | 344 | },
|
321 |
| - "execution_count": 32, |
| 345 | + "execution_count": 7, |
322 | 346 | "metadata": {},
|
323 | 347 | "output_type": "execute_result"
|
324 | 348 | }
|
|
331 | 355 | {
|
332 | 356 | "cell_type": "code",
|
333 | 357 | "execution_count": null,
|
334 |
| - "id": "1b4ca951-0ed8-41c5-adb9-694776a7a2e7", |
| 358 | + "id": "3a974aa5-7f2e-42f0-bcc4-05ad35cc10d7", |
335 | 359 | "metadata": {},
|
336 | 360 | "outputs": [],
|
337 | 361 | "source": []
|
|
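For context on what the fixed cell is doing: the removed `ImportError` traceback and the new experiment output suggest the notebook calls `evaluate` from `langsmith.evaluation` against the "Toxic Queries" dataset. The sketch below is a hypothetical reconstruction, not the notebook's actual code — the `toxicity_classifier` target, the `correct_label` evaluator, and the `"text"`/`"label"` dataset keys are all assumptions — and only shows the general shape of such a call.

```python
from langsmith.evaluation import evaluate

# Hypothetical target function: stands in for the notebook's toxicity classifier.
# The "text" input key is an assumption about the dataset schema.
def toxicity_classifier(inputs: dict) -> dict:
    text = inputs["text"]
    # An LLM call would normally go here; a trivial keyword check keeps the sketch runnable.
    label = "Toxic" if "hate" in text.lower() else "Not Toxic"
    return {"label": label}

# Hypothetical evaluator: exact match between predicted and reference labels.
def correct_label(run, example) -> dict:
    score = int(run.outputs["label"] == example.outputs["label"])
    return {"key": "correct_label", "score": score}

# Dataset name taken from the traceback shown in the diff above.
dataset_name = "Toxic Queries"

results = evaluate(
    toxicity_classifier,
    data=dataset_name,
    evaluators=[correct_label],
)
```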