Universitätsbibliothek Heidelberg
Location: ---
Copies: ---
Online resource
Author: Patel, Jay [author]
Title: Getting Structured Data from the Internet
Subtitle: Running Web Crawlers/Scrapers on a Big Data Production Scale
Institutions: Safari, an O’Reilly Media Company. [contributor]
Statement of responsibility: Patel, Jay
Edition: 1st edition
Place of publication: [Place of publication not identifiable]
Publisher: Apress
Year: 2020
Extent: 1 online resource (408 pages)
Notes: Online resource; title from title page (viewed November 12, 2020)
ISBN: 978-1-4842-6576-5
Abstract: Utilize web scraping at scale to quickly get unlimited amounts of free data available on the web into a structured format. This book teaches you to use Python scripts to crawl through websites at scale, scrape data from HTML and JavaScript-enabled pages, and convert it into structured formats such as CSV, Excel, or JSON, or load it into a SQL database of your choice. The book goes beyond the basics of web scraping and covers advanced topics such as natural language processing (NLP) and text analytics to extract names of people, places, email addresses, contact details, etc., from a page at production scale, using distributed big data techniques on an Amazon Web Services (AWS)-based cloud infrastructure. It covers developing a robust data processing and ingestion pipeline on the Common Crawl corpus, a web crawl data set containing petabytes of publicly available data and hosted on AWS's Registry of Open Data. Getting Structured Data from the Internet also includes a step-by-step tutorial on deploying your own crawlers using a production web scraping framework (such as Scrapy) and on dealing with real-world issues (such as breaking CAPTCHAs, proxy IP rotation, and more). Code used in the book is provided to help you understand the concepts in practice and write your own web crawler to power your business ideas.

What You Will Learn:
- Understand web scraping, its applications and uses, and how to avoid scraping altogether by hitting publicly available REST API endpoints to get data directly
- Develop a web scraper and crawler from scratch using the lxml and BeautifulSoup libraries, and learn about scraping JavaScript-enabled pages using Selenium (a minimal fetch-parse-CSV sketch follows this abstract)
- Use AWS-based cloud computing with EC2, S3, Athena, SQS, and SNS to analyze, extract, and store useful insights from crawled pages
- Use SQL on PostgreSQL running on Amazon Relational Database Service (RDS) and on SQLite via SQLAlchemy (see the SQLAlchemy sketch below)
- Review scikit-learn, Gensim, and spaCy to perform NLP tasks on scraped web pages, such as named entity recognition (see the spaCy sketch below), topic clustering (K-means, agglomerative clustering), topic modeling (LDA, NMF, LSI), topic classification (naive Bayes, gradient boosting classifier), and text similarity (cosine-distance-based nearest neighbors)
- Handle web archival file formats and explore Common Crawl open data on AWS (see the WARC sketch below)
- Illustrate practical applications for web crawl data by building a similar-website tool and a technology profiler similar to builtwith.com
- Write scripts to create a backlinks databa...
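The core pipeline the abstract describes (fetch HTML, parse it, emit CSV) can be sketched in a few lines of Python. The target site and CSS selectors below are illustrative assumptions, not taken from the book; quotes.toscrape.com is a public scraping practice site standing in for any real target.

import csv

import requests
from bs4 import BeautifulSoup

# Illustrative target, not from the book.
url = "https://quotes.toscrape.com/"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

# Parse with the lxml backend, matching the lxml + BeautifulSoup pairing the abstract names.
soup = BeautifulSoup(resp.text, "lxml")

# Extract one row per quote block; selectors are specific to this practice site.
rows = [
    {
        "text": q.select_one("span.text").get_text(strip=True),
        "author": q.select_one("small.author").get_text(strip=True),
    }
    for q in soup.select("div.quote")
]

# Write the scraped rows out as structured CSV.
with open("quotes.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "author"])
    writer.writeheader()
    writer.writerows(rows)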
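For the SQL loading step, a minimal SQLAlchemy sketch might look as follows. The table layout is an assumption for illustration, and swapping the sqlite:/// URL for a postgresql:// connection string would target PostgreSQL on RDS, as the abstract mentions.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Page(Base):
    # Hypothetical table for scraped records, not taken from the book.
    __tablename__ = "scraped_pages"
    id = Column(Integer, primary_key=True)
    url = Column(String)
    title = Column(String)

# SQLite file for local work; e.g. "postgresql://user:pass@rds-host/db" for RDS.
engine = create_engine("sqlite:///scraped.db")
Base.metadata.create_all(engine)

# Insert one scraped row and commit.
with Session(engine) as session:
    session.add(Page(url="https://example.com", title="Example Domain"))
    session.commit()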
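The named entity recognition task listed above reduces to a few lines with spaCy, one of the NLP libraries the abstract names; the sample sentence is a placeholder, and the small English model must be fetched once with: python -m spacy download en_core_web_sm

import spacy

# Load the small English pipeline (assumed model choice; any en_core_web_* works).
nlp = spacy.load("en_core_web_sm")

doc = nlp("Jay Patel published Getting Structured Data from the Internet with Apress in 2020.")

# Print each recognized entity and its label, e.g. PERSON, ORG, DATE.
for ent in doc.ents:
    print(ent.text, ent.label_)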
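Common Crawl distributes its corpus as WARC archive files. This record does not name the book's exact tooling, so treat the following as one plausible approach using the warcio library; the file path is a placeholder for a segment downloaded from the AWS open-data bucket.

from warcio.archiveiterator import ArchiveIterator

# Placeholder path for a downloaded Common Crawl WARC segment.
with open("segment.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        # Only "response" records carry the fetched page bodies.
        if record.rec_type == "response":
            url = record.rec_headers.get_header("WARC-Target-URI")
            body = record.content_stream().read()
            print(url, len(body))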
System details: Mode of access: World Wide Web.
URL: Aggregator: https://learning.oreilly.com/library/view/-/9781484265765/?ar
Cover: http://swbplus.bsz-bw.de/bsz1739149157cov.htm
Carrier: Online resource
Language: eng
Subject headings: Electronic books ; local
Electronic books
K10plus-PPN: 1739149157
 
 
Local URL (UB): Full text
 
Library of the Medical Faculty Mannheim of Heidelberg University
Klinikum MA: Order/place a hold for users of Klinikum Mannheim
Personal login credentials required
Library/Idn: UW / m3806995710
Local URL (institution): Full text

Permanent link to this title (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/68663111