{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "6RLNvyXlDhG2" }, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mistralai/cookbook/blob/main/pinecone_rag.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/mistralai/cookbook/blob/main/pinecone_rag.ipynb)" ] }, { "cell_type": "markdown", "metadata": { "id": "FZ6sj8gPDhG4" }, "source": [ "# RAG with Mistral" ] }, { "cell_type": "markdown", "metadata": { "id": "h8o5TRVfDhG4" }, "source": [ "To begin, we setup our prerequisite libraries." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "thtg9njP4bOh", "outputId": "3e89aded-138a-4a04-d3a3-f5229dc5906e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/215.5 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[91m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[91m╸\u001b[0m \u001b[32m215.0/215.5 kB\u001b[0m \u001b[31m9.7 MB/s\u001b[0m eta \u001b[36m0:00:01\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m215.5/215.5 kB\u001b[0m \u001b[31m5.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25h" ] } ], "source": [ "!pip install -qU \\\n", " datasets==2.14.5 \\\n", " mistralai==0.1.8 \\\n", " pinecone-client==4.1.0" ] }, { "cell_type": "markdown", "metadata": { "id": "VReBq2IeDhG5" }, "source": [ "## Data Preparation" ] }, { "cell_type": "markdown", "metadata": { "id": "eY3OglQm4bOj" }, "source": [ "We start by downloading a dataset that we will encode and store. The dataset [`jamescalam/ai-arxiv2-semantic-chunks`](https://huggingface.co/datasets/jamescalam/ai-arxiv2-semantic-chunks) contains scraped data from many popular ArXiv papers centred around LLMs and GenAI." ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "pQAVgquj4bOk", "outputId": "d39e5a36-4073-463f-8482-33e25e9cf131" }, "outputs": [ { "data": { "text/plain": [ "Dataset({\n", " features: ['id', 'title', 'content', 'prechunk_id', 'postchunk_id', 'arxiv_id', 'references'],\n", " num_rows: 10000\n", "})" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from datasets import load_dataset\n", "\n", "data = load_dataset(\n", " \"jamescalam/ai-arxiv2-semantic-chunks\",\n", " split=\"train[:10000]\"\n", ")\n", "data" ] }, { "cell_type": "markdown", "metadata": { "id": "wrC-XHrTDhG6" }, "source": [ "We have 200K chunks, where each chunk is roughly the length of 1-2 paragraphs in length. Here is an example of a single record:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "RQg8wiUQ4bOk", "outputId": "7170e9df-6762-4f48-c2af-f5f5326561e9" }, "outputs": [ { "data": { "text/plain": [ "{'id': '2401.04088#0',\n", " 'title': 'Mixtral of Experts',\n", " 'content': '4 2 0 2 n a J 8 ] G L . s c [ 1 v 8 8 0 4 0 . 1 0 4 2 : v i X r a # Mixtral of Experts Albert Q. 
Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed Abstract We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and combine their outputs. Even though each token only sees two experts, the selected experts can be different at each timestep. As a result, each token has access to 47B parameters, but only uses 13B active parameters during inference. Mixtral was trained with a context size of 32k tokens and it outperforms or matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks. We also provide a model fine- tuned to follow instructions, Mixtral 8x7B â Instruct, that surpasses GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and Llama 2 70B â chat model on human bench- marks. Both the base and instruct models are released under the Apache 2.0 license.',\n", " 'prechunk_id': '',\n", " 'postchunk_id': '2401.04088#1',\n", " 'arxiv_id': '2401.04088',\n", " 'references': ['1905.07830']}" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data[0]" ] }, { "cell_type": "markdown", "metadata": { "id": "euFtJiIz4bOk" }, "source": [ "Next, we reformat the data into the structure we need; each record will contain `id`, `text` (which we will embed), and `metadata`." ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 118, "referenced_widgets": [ "3907ce8afd4a426fb28b8a2c2d840bfb", "9915079dab2a4594bba86c63cd9e9484", "e65667bb33d34b799107f3df0e7bbe1e", "f65063e1674445fbb965aadfdcc11cb4", "888181bb6c774b5eaa22f778f85ca561", "774ff2be3ba447da8e8d2f1544a99ec7", "6ac67cc425e8443f8d25d4ac82d6baf0", "c4d7fd1fa3ba48e28d6130d3660a3dfd", "62523be70fdf47ac9727bf97cb1687c7", "4d968db5db8348ad9f8eee308030a146", "6427bf9b15ac4dc688c315ce4c981b72" ] }, "id": "u-svyAMw4bOl", "outputId": "a87e23ad-5de3-4de8-f44d-b2c544269379" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3907ce8afd4a426fb28b8a2c2d840bfb", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Map: 0%| | 0/10000 [00:00= 2:\n", " try:\n", " embeds = []\n", " for j in range(0, len(metadata), batch_size):\n", " j_end = min(len(metadata), j+batch_size)\n", " embed_response = mistral.embeddings(\n", " input=[\n", " x[\"title\"]+\"\\n\"+x[\"content\"] for x in metadata[j:j_end]\n", " ],\n", " model=embed_model\n", " )\n", " embeds.extend([x.embedding for x in embed_response.data])\n", " return embeds\n", " except MistralAPIException:\n", " # halve the batch size and let the while loop retry the request\n", " batch_size = int(batch_size / 2)\n", " print(f\"Hit MistralAPIException, retrying with {batch_size=}\")\n", " # only reached if every reduced batch size also failed\n", " raise RuntimeError(\"Embedding failed even after reducing the batch size\")\n" ] }, { "cell_type": "markdown", "metadata": { "id": "ZI76rcTi4bOm" }, "source": [ "We can see the index is currently empty with a `total_vector_count` of `0`. 
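As a quick sanity check, here is a minimal sketch of that lookup, assuming the Pinecone `index` handle created above:\n\n```python\n# index statistics; total_vector_count is 0 before any upserts\nstats = index.describe_index_stats()\nprint(stats.total_vector_count)\n```\n\n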
We can begin populating it with embeddings built using `mistral-embed` like so:\n", "\n", "**⚠️ WARNING: The cost of embedding the full dataset, as of 3 Jan 2024, is ~$5.70.**" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 84, "referenced_widgets": [ "1f0e9f86a46043aa8fa14e3145873e10", "b998972de18848728feb76c56d52884c", "03aa14d47a064e8c9c53154e160087a1", "4752778000834d2bb10b61e4b5211b49", "a4523e2ef6d842a8a56a93a0e73ab701", "0ab23c473ee4433fbbcd8f2e7241a11a", "133c0ab7265a4e35b00093fa32880b9d", "5be01a4e787f41188e2d57ea37444ed6", "8c4b6a8e8bb34187955a3e571787126a", "cd6298feb96142abb0f2c5b14d2be1a5", "4192629da2094c0bb254479811cf27fb" ] }, "id": "a2xvoFt04bOn", "outputId": "7e84d254-5deb-424e-e156-1aa7011c97de" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "1f0e9f86a46043aa8fa14e3145873e10", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/313 [00:00 list[str]:\n", " # encode query\n", " xq = mistral.embeddings(\n", " input=[query],\n", " model=embed_model\n", " ).data[0].embedding\n", " # search pinecone index\n", " res = index.query(vector=xq, top_k=top_k, include_metadata=True)\n", " # get doc text\n", " docs = [x[\"metadata\"][\"content\"] for x in res[\"matches\"]]\n", " return docs" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "C3FASr-04bOn", "outputId": "7bc0e3b7-76f3-43b1-ef11-276998a160d3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[25] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timo- thée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [26] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. [27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin.\n", "---\n", "Mistral 7B outperforms the previous best 13B model (Llama 2, [26]) across all tested benchmarks, and surpasses the best 34B model (LLaMa 34B, [25]) in mathematics and code generation. Furthermore, Mistral 7B approaches the coding performance of Code-Llama 7B [20], without sacrificing performance on non-code related benchmarks. Mistral 7B leverages grouped-query attention (GQA) [1], and sliding window attention (SWA) [6, 3]. GQA significantly accelerates the inference speed, and also reduces the memory requirement during decoding, allowing for higher batch sizes hence higher throughput, a crucial factor for real-time applications. In addition, SWA is designed to handle longer sequences more effectively at a reduced computational cost, thereby alleviating a common limitation in LLMs. These attention mechanisms collectively contribute to the enhanced performance and efficiency of Mistral 7B. Mistral 7B is released under the Apache 2.0 license. This release is accompanied by a reference implementation1 facilitating easy deployment either locally or on cloud platforms such as AWS, GCP, or Azure using the vLLM [17] inference server and SkyPilot 2. Integration with Hugging Face 3 is also streamlined for easier integration. 
Moreover, Mistral 7B is crafted for ease of fine-tuning across a myriad of tasks. As a demonstration of its adaptability and superior performance, we present a chat model fine-tuned from Mistral 7B that significantly outperforms the Llama 2 13B â Chat model. Mistral 7B takes a significant step in balancing the goals of getting high performance while keeping large language models efficient. Through our work, our aim is to help the community create more affordable, efficient, and high-performing language models that can be used in a wide range of real-world applications.\n", "---\n", "3 2 0 2 t c O 0 1 ] L C . s c [ 1 v 5 2 8 6 0 . 0 1 3 2 : v i X r a # Mistral 7B Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed Abstract We introduce Mistral 7B, a 7â billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms the best open 13B model (Llama 2) across all evaluated benchmarks, and the best released 34B model (Llama 1) in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B â Instruct, that surpasses Llama 2 13B â chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license. Code: https://github.com/mistralai/mistral-src Webpage: https://mistral.ai/news/announcing-mistral-7b/ # Introduction In the rapidly evolving domain of Natural Language Processing (NLP), the race towards higher model performance often necessitates an escalation in model size. However, this scaling tends to increase computational costs and inference latency, thereby raising barriers to deployment in practical, real-world scenarios. In this context, the search for balanced models delivering both high-level performance and efficiency becomes critically essential. Our model, Mistral 7B, demonstrates that a carefully designed language model can deliver high performance while maintaining an efficient inference.\n", "---\n", "When evaluated on reasoning, comprehension, and STEM reasoning (specifically MMLU), Mistral 7B mirrored performance that one might expect from a Llama 2 model with more than 3x its size. On the Knowledge benchmarks, Mistral 7Bâ s performance achieves a lower compression rate of 1.9x, which is likely due to its limited parameter count that restricts the amount of knowledge it can store. Evaluation Differences. On some benchmarks, there are some differences between our evaluation protocol and the one reported in the Llama 2 paper: 1) on MBPP, we use the hand-verified subset 2) on TriviaQA, we do not provide Wikipedia contexts. # Instruction Finetuning To evaluate the generalization capabilities of Mistral 7B, we fine-tuned it on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized: Mistral 7B â Instruct model is a simple and preliminary demonstration that the base model can easily be fine-tuned to achieve good performance. 
In Table 3, we observe that the resulting model, Mistral 7B â Instruct, exhibits superior perfor- mance compared to all 7B models on MT-Bench, and is comparable to 13B â Chat models. An independent human evaluation was conducted on https://llmboxing.com/leaderboard. Model Chatbot Arena ELO Rating MT Bench WizardLM 13B v1.2 Mistral 7B Instruct Llama 2 13B Chat Vicuna 13B Llama 2 7B Chat Vicuna 7B Alpaca 13B 1047 1031 1012 1041 985 997 914 7.2 6.84 +/- 0.07 6.65 6.57 6.27 6.17 4.53 Table 3: Comparison of Chat models. Mistral 7B â Instruct outperforms all 7B models on MT-Bench, and is comparable to 13B â\n", "---\n", "[12] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. [13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. [14] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Thomas Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karén Simonyan, Erich Elsen, Oriol Vinyals, Jack Rae, and Laurent Sifre. An empirical analysis of compute-optimal large language model training. In Advances in Neural Information Processing Systems, volume 35, 2022. [15] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer.\n" ] } ], "source": [ "query = \"can you tell me about mistral LLM?\"\n", "docs = get_docs(query, top_k=5)\n", "print(\"\\n---\\n\".join(docs))" ] }, { "cell_type": "markdown", "metadata": { "id": "OSOMoBcFf0ma" }, "source": [ "Our retrieval component works; now let's try feeding the retrieved documents into the Mistral Large LLM to produce an answer." ] }, { "cell_type": "code", "execution_count": 61, "metadata": { "id": "JERWhNXNf8cR" }, "outputs": [], "source": [ "from mistralai.models.chat_completion import ChatMessage\n", "\n", "\n", "def generate(query: str, docs: list[str]):\n", " # build the system prompt: instructions followed by the retrieved docs\n", " system_message = (\n", " \"You are a helpful assistant that answers questions about AI using the \"\n", " \"context provided below.\\n\\n\"\n", " \"CONTEXT:\\n\" +\n", " \"\\n---\\n\".join(docs)\n", " )\n", " messages = [\n", " ChatMessage(role=\"system\", content=system_message),\n", " ChatMessage(role=\"user\", content=query)\n", " ]\n", " # generate response\n", " chat_response = mistral.chat(\n", " model=\"mistral-large-latest\",\n", " messages=messages\n", " )\n", " return chat_response.choices[0].message.content" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "krsfvRdiiMnX", "outputId": "071db234-b6c5-4496-fb3b-cf3cf74f6250" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Mistral 7B is a 7-billion-parameter language model engineered for superior performance and efficiency. It outperforms the best open 13B model (Llama 2) across all evaluated benchmarks, and the best released 34B model (Llama 1) in reasoning, mathematics, and code generation. 
Mistral 7B leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost.\n", "\n", "The model also demonstrates strong performance in reasoning, comprehension, and STEM reasoning benchmarks, often mirroring the performance of larger models. However, its performance on knowledge benchmarks is somewhat lower, likely due to its limited parameter count.\n", "\n", "Mistral 7B has also been fine-tuned on instruction datasets to create Mistral 7B â Instruct, which exhibits superior performance compared to all 7B models on MT-Bench, and is comparable to 13B â Chat models.\n", "\n", "The Mistral 7B model and its fine-tuned variant are released under the Apache 2.0 license, and the team behind Mistral aims to help the community create more affordable, efficient, and high-performing language models that can be used in a wide range of real-world applications.\n" ] } ], "source": [ "out = generate(query=query, docs=docs)\n", "print(out)" ] }, { "cell_type": "markdown", "metadata": { "id": "u8OfJFwq4bOo" }, "source": [ "Don't forget to delete your index when you're done to save resources!" ] }, { "cell_type": "code", "execution_count": 64, "metadata": { "id": "LQiU0IDl4bOo" }, "outputs": [], "source": [ "pc.delete_index(index_name)" ] }, { "cell_type": "markdown", "metadata": { "id": "0gThAy0k4bOo" }, "source": [ "---" ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "ml", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" }, "orig_nbformat": 4, "widgets": { "application/vnd.jupyter.widget-state+json": { "03aa14d47a064e8c9c53154e160087a1": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_5be01a4e787f41188e2d57ea37444ed6", "max": 313, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_8c4b6a8e8bb34187955a3e571787126a", "value": 313 } }, "0ab23c473ee4433fbbcd8f2e7241a11a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, 
"object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "133c0ab7265a4e35b00093fa32880b9d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "1f0e9f86a46043aa8fa14e3145873e10": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_b998972de18848728feb76c56d52884c", "IPY_MODEL_03aa14d47a064e8c9c53154e160087a1", "IPY_MODEL_4752778000834d2bb10b61e4b5211b49" ], "layout": "IPY_MODEL_a4523e2ef6d842a8a56a93a0e73ab701" } }, "3907ce8afd4a426fb28b8a2c2d840bfb": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_9915079dab2a4594bba86c63cd9e9484", "IPY_MODEL_e65667bb33d34b799107f3df0e7bbe1e", "IPY_MODEL_f65063e1674445fbb965aadfdcc11cb4" ], "layout": "IPY_MODEL_888181bb6c774b5eaa22f778f85ca561" } }, "4192629da2094c0bb254479811cf27fb": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "4752778000834d2bb10b61e4b5211b49": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_cd6298feb96142abb0f2c5b14d2be1a5", "placeholder": "​", "style": "IPY_MODEL_4192629da2094c0bb254479811cf27fb", "value": " 313/313 [06:00<00:00,  1.12it/s]" } }, "4d968db5db8348ad9f8eee308030a146": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, 
"grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "5be01a4e787f41188e2d57ea37444ed6": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "62523be70fdf47ac9727bf97cb1687c7": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "6427bf9b15ac4dc688c315ce4c981b72": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "6ac67cc425e8443f8d25d4ac82d6baf0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "774ff2be3ba447da8e8d2f1544a99ec7": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": 
null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "888181bb6c774b5eaa22f778f85ca561": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "8c4b6a8e8bb34187955a3e571787126a": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "9915079dab2a4594bba86c63cd9e9484": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_774ff2be3ba447da8e8d2f1544a99ec7", "placeholder": "​", "style": "IPY_MODEL_6ac67cc425e8443f8d25d4ac82d6baf0", "value": "Map: 100%" } }, "a4523e2ef6d842a8a56a93a0e73ab701": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": 
null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "b998972de18848728feb76c56d52884c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_0ab23c473ee4433fbbcd8f2e7241a11a", "placeholder": "​", "style": "IPY_MODEL_133c0ab7265a4e35b00093fa32880b9d", "value": "100%" } }, "c4d7fd1fa3ba48e28d6130d3660a3dfd": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "cd6298feb96142abb0f2c5b14d2be1a5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "e65667bb33d34b799107f3df0e7bbe1e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": 
"1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_c4d7fd1fa3ba48e28d6130d3660a3dfd", "max": 10000, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_62523be70fdf47ac9727bf97cb1687c7", "value": 10000 } }, "f65063e1674445fbb965aadfdcc11cb4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_4d968db5db8348ad9f8eee308030a146", "placeholder": "​", "style": "IPY_MODEL_6427bf9b15ac4dc688c315ce4c981b72", "value": " 10000/10000 [00:02<00:00, 4327.17 examples/s]" } } } } }, "nbformat": 4, "nbformat_minor": 0 }