Universitätsbibliothek Heidelberg
Location: ---
Copies: ---
Online resource
Title: Harnessing Ollama
Subtitle: create secure local LLM solutions with Python
Contributors: Dichone, Paulo [contributor]
Institutions: Packt Publishing [publisher]
Edition: [First edition].
Place of publication: [Place of publication not identified]
Publisher: Packt Publishing
E-year: 2024
Year: [2024]
Extent: 1 online resource (1 video file (3 hr., 15 min.))
Illustrations: sound, color
Notes: Online resource; title from title details screen (O'Reilly, viewed December 30, 2024)
ISBN: 978-1-83702-005-8
 1-83702-005-1
Abstract: Discover how to deploy and harness the power of local LLMs with Ollama in this hands-on course. You'll begin by setting up and configuring Ollama on your system, gaining full control of large language models without relying on the cloud. Learn how to optimize resources, explore model parameters, and test various LLMs, including multimodal models like Llava, for text, vision, and code-generation tasks. Delve deeper into customizing models with the Modelfile and command-line tools to meet specific needs. The course covers executing terminal commands for monitoring, troubleshooting, and deploying models. You'll also integrate Ollama models with Python, leveraging its library and OpenAI API compatibility to build interactive applications. Advanced modules guide you through creating Retrieval-Augmented Generation (RAG) systems using LangChain, embedding databases, and querying capabilities for enhanced performance. As you progress, you'll set up ChatGPT-like interfaces for seamless model interaction and explore advanced workflows like function calling and voice-enabled RAG systems. By the end, you'll master Ollama's ecosystem, equipping you with the skills to build secure, private, and highly efficient LLM-based applications that can operate independently of cloud services. To access the supplementary materials, scroll down to the 'Resources' section above the 'Course Outline' and click 'Supplemental Content.' This will either initiate a download or redirect you to GitHub.
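The model customization the abstract mentions is driven by Ollama's Modelfile. A minimal sketch of what such a file can look like (the base model, parameter value, and system prompt below are illustrative assumptions, not taken from the course):

```
FROM llama3
PARAMETER temperature 0.7
SYSTEM You are a concise assistant for Python developers.
```

A custom model built from this file is created and run with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant`.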
What you will learn:
- Design and deploy secure local LLM solutions with Ollama
- Customize models to meet specific needs and applications
- Execute commands to control and monitor LLMs effectively
- Develop Python apps that integrate with Ollama models
- Create RAG systems using LangChain and vector databases
- Configure user-friendly interfaces for seamless interaction

Audience: This course is ideal for developers, AI enthusiasts, and data scientists eager to harness local LLMs for secure, private applications. Whether you're new to Ollama or experienced with AI systems, you'll benefit from its practical focus on deploying and customizing models. A basic understanding of Python and command-line tools is recommended; prior experience with AI concepts is helpful but not required.

About the author: Paulo Dichone, a dedicated developer and educator in Android, Java, and Flutter, has empowered over 80,000 students globally with both soft and technical skills through his platform, Build Apps with Paulo. Holding a computer science degree and with extensive experience in mobile and web development, Paulo's passion lies in guiding learners to become proficient developers. Beyond his five years of online teaching, he cherishes family time, music, and travel, aiming to shape impactful developers regardless of their background.
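The Python integration described above builds on Ollama's local REST API (and its `ollama`/`openai` client packages). A stdlib-only sketch, assuming Ollama's default endpoint on port 11434; the model name and prompt are placeholders, not taken from the course:

```python
import json
import urllib.request

# Ollama serves a local REST API (default port 11434). This sketch targets
# the /api/generate endpoint for one-shot text completion.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON object instead of a
    # stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("llama3", "Explain RAG in one sentence.")` requires a running `ollama serve` with the model pulled locally. Ollama also exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1`, which is what lets existing OpenAI-client code be pointed at local models.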
URL (aggregator): https://learning.oreilly.com/library/view/-/9781837020058/?ar
Carrier: Online resource
Language: eng
Form/genre: Instructional films
 Nonfiction films
 Internet videos
K10plus PPN: 1916329195
Local URL (UB): Full text
 
 Bibliothek der Medizinischen Fakultät Mannheim der Universität Heidelberg
 Klinikum MA: order/reserve for users of the Klinikum Mannheim
Personal login required
Library/Idn: UW / m4660775011
Local URL (institution): Full text

Permanent link to this title (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/69300345