Getting Structured Data from the Internet
Running Web Crawlers/Scrapers on a Big Data Production Scale
Paperback by Jay M. Patel
Language: English

47,10 €*

incl. VAT

Free shipping via Post / DHL

Delivery time: 4-7 working days

Description
Utilize web scraping at scale to quickly get unlimited amounts of free data available on the web into a structured format. This book teaches you to use Python scripts to crawl through websites at scale and scrape data from HTML and JavaScript-enabled pages, and to convert it into structured data formats such as CSV, Excel, or JSON, or load it into a SQL database of your choice.
This book goes beyond the basics of web scraping and covers advanced topics such as natural language processing (NLP) and text analytics to extract names of people, places, email addresses, contact details, and more from a page at production scale, using distributed big data techniques on an Amazon Web Services (AWS)-based cloud infrastructure. It also covers developing a robust data processing and ingestion pipeline for the Common Crawl corpus, a publicly available, petabyte-scale web crawl dataset hosted on AWS's Registry of Open Data.
Getting Structured Data from the Internet also includes a step-by-step tutorial on deploying your own crawlers using a production web scraping framework (such as Scrapy) and dealing with real-world issues (such as breaking CAPTCHAs, proxy IP rotation, and more). Code used in the book is provided to help you understand the concepts in practice and write your own web crawler to power your business ideas.
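To give a flavor of the workflow described above, here is a minimal, illustrative sketch (not code from the book) that fetches a page, parses it with Beautiful Soup backed by lxml, and writes the extracted rows to both CSV and SQLite; the URL and CSS selectors are hypothetical placeholders.

import csv
import sqlite3

import requests
from bs4 import BeautifulSoup

# Fetch a (hypothetical) article listing page.
resp = requests.get("https://example.com/articles", timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "lxml")

# Extract (title, url) pairs; the selector is a placeholder.
rows = [(a.get_text(strip=True), a["href"])
        for a in soup.select("h2.article-title a")]

# Write the rows to CSV...
with open("articles.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "url"])
    writer.writerows(rows)

# ...and load the same rows into a SQLite database.
con = sqlite3.connect("articles.db")
con.execute("CREATE TABLE IF NOT EXISTS articles (title TEXT, url TEXT)")
con.executemany("INSERT INTO articles VALUES (?, ?)", rows)
con.commit()
con.close()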


What You Will Learn
Understand web scraping, its applications and uses, and how to avoid scraping altogether by hitting publicly available REST API endpoints to get data directly
Develop a web scraper and crawler from scratch using the lxml and Beautiful Soup libraries, and learn about scraping from JavaScript-enabled pages using Selenium
Use AWS-based cloud computing with EC2, S3, Athena, SQS, and SNS to analyze, extract, and store useful insights from crawled pages
Use SQL with PostgreSQL running on Amazon Relational Database Service (RDS) and with SQLite via SQLAlchemy
Use scikit-learn, Gensim, and spaCy to perform NLP tasks on scraped web pages, such as named entity recognition, topic clustering (k-means, agglomerative clustering), topic modeling (LDA, NMF, LSI), topic classification (naive Bayes, gradient boosting classifier), and text similarity (cosine distance-based nearest neighbors); a short named entity recognition sketch follows this list
Handle web archival file formats and explore Common Crawl open data on AWS
Illustrate practical applications for web crawl data by building a similar-websites tool and a technology profiler similar to [...]
Write scripts to create a web-scale backlinks database, similar to [...], [...], [...], etc., for search engine optimization (SEO), competitor research, and determining website domain authority and ranking
Use web crawl data to build a news sentiment analysis system or an alternative financial analysis system covering stock market trading signals
Write a production-ready crawler in Python using the Scrapy framework and deal with practical workarounds for CAPTCHAs, IP rotation, and more (see the minimal spider sketch after this list)
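As a taste of the NLP tasks listed above, here is a minimal named entity recognition sketch with spaCy; it assumes the small English model has been installed (python -m spacy download en_core_web_sm), and the sample sentence is made up for illustration.

import spacy

# Load spaCy's small English model (must be downloaded beforehand).
nlp = spacy.load("en_core_web_sm")

doc = nlp("Jay Patel founded Specrom Analytics after working at the EPA in the US.")

# Print each detected entity with its label (PERSON, ORG, GPE, ...).
for ent in doc.ents:
    print(ent.text, ent.label_)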

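For the final bullet in the list above, a minimal Scrapy spider sketch; the site, selectors, and spider name are hypothetical placeholders, not the book's code.

import scrapy

class ArticleSpider(scrapy.Spider):
    name = "articles"
    start_urls = ["https://example.com/articles"]

    def parse(self, response):
        # Yield one structured item per article on the listing page.
        for article in response.css("div.article"):
            yield {
                "title": article.css("h2::text").get(),
                "url": article.css("a::attr(href)").get(),
            }
        # Follow the pagination link, if present.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

# Run with: scrapy runspider articles_spider.py -o articles.json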
Who This Book Is For
Primary audience: data analysts and scientists with little to no exposure to real-world data processing challenges. Secondary: experienced software developers doing web-heavy data processing who need a primer. Tertiary: business owners and startup founders who need to know more about implementation to better direct their technical team.
About the Author
Jay M. Patel is a software developer with over 10 years of experience in data mining, web crawling/scraping, machine learning, and natural language processing (NLP) projects. He is a co-founder and principal data scientist of Specrom Analytics, providing content, email, social marketing, and social listening products and services using web crawling/scraping and advanced text mining.
Jay worked at the US Environmental Protection Agency (EPA) for five years, where he designed workflows to crawl and extract useful insights from hundreds of thousands of documents that were part of regulatory filings from companies. He also led one of the first research teams within the agency to use Apache Spark-based workflows for cheminformatics and bioinformatics applications such as chemical similarity and quantitative structure-activity relationships. He developed recurrent neural networks and more advanced LSTM models in TensorFlow for chemical SMILES generation.

Jay graduated with a bachelor's degree in engineering from the Institute of Chemical Technology, University of Mumbai, India, and a master of science degree from the University of Georgia, USA. Jay serves as an editor of a publication titled Web Data Extraction and also blogs about personal projects, open source packages, and experiences as a startup founder on his personal site, [...].

Summary

Shows you how to process web crawls from Common Crawl, one of the largest publicly available web crawl datasets (petabyte scale), indexing over 25 billion web pages every month

Takes you from developing a simple Python-based web scraper on your personal computer to a distributed crawler with multiple nodes running on the cloud

Teaches you to process raw data using NLP techniques and boilerplate removal to extract useful insights that can power businesses with vertical/meta search engines, lead generation and Internet marketing, monitoring of competitors, brands, and prices, and more
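To make the Common Crawl workflow concrete, here is a minimal sketch that queries the public Common Crawl index (CDX) API for captures of a domain. The crawl ID CC-MAIN-2020-45 is just an example (current IDs are listed at index.commoncrawl.org), and the field handling assumes the standard CDX JSON-lines output.

import json
import requests

# Ask the Common Crawl CDX index for all captures of example.com.
resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2020-45-index",
    params={"url": "example.com/*", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

# Each response line is a JSON record pointing into a WARC archive file.
for line in resp.text.splitlines():
    record = json.loads(line)
    print(record["url"], record["filename"], record["offset"], record["length"])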

Table of Contents

Chapter 1: Introduction to Web Scraping
Chapter 2: Web Scraping in Python Using Beautiful Soup Library
Chapter 3: Introduction to Cloud Computing and Amazon Web Services (AWS)
Chapter 4: Natural Language Processing (NLP) and Text Analytics
Chapter 5: Relational Databases and SQL Language
Chapter 6: Introduction to Common Crawl Datasets
Chapter 7: Web Crawl Processing on Big Data Scale
Chapter 8: Advanced Web Crawlers

Details
Year of publication: 2020
Genre: Computer science
Category: Science & Technology
Medium: Paperback
Pages: 420
Contents: xix, 397 pages, 88 b/w illustrations
ISBN-13: 9781484265758
ISBN-10: 1484265750
Language: English
Format / extras: Paperback
Binding: Softcover
Author: Patel, Jay M.
Edition: 1st ed.
Publisher: APRESS
Dimensions: 254 x 178 x 23 mm
By: Jay M. Patel
Publication date: 13 November 2020
Weight: 0.786 kg
preigu-id: 118956383