58,84 €*
Free shipping via Post / DHL
Currently unavailable
This book goes beyond the basics of web scraping and covers advanced topics such as natural language processing (NLP) and text analytics to extract names of people, places, email addresses, contact details, etc., from a page at production scale using distributed big data techniques on an Amazon Web Services (AWS)-based cloud infrastructure. The book covers developing a robust data processing and ingestion pipeline on the Common Crawl corpus, a publicly available web crawl dataset containing petabytes of data, hosted on AWS's Registry of Open Data.
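As a taste of working with Common Crawl, the sketch below builds a query URL for its public CDX index, which lets you look up which crawl captures exist for a domain before downloading any archive files. The crawl collection name used here is just an example; actual crawl names are listed on the Common Crawl website.

```python
from urllib.parse import urlencode

def cdx_query_url(domain, crawl="CC-MAIN-2020-45"):
    """Build a Common Crawl CDX index query URL for all captures of a domain.

    The crawl name is an example placeholder; pick a real crawl collection
    from the Common Crawl site.
    """
    base = f"https://index.commoncrawl.org/{crawl}-index"
    # Query all pages under the domain and ask for JSON output
    params = urlencode({"url": f"{domain}/*", "output": "json"})
    return f"{base}?{params}"

print(cdx_query_url("example.com"))
```

The resulting URL can be fetched with any HTTP client; each line of the JSON response points to the archive file and byte offset where a captured page is stored.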
What You Will Learn
Understand web scraping, its applications and uses, and how to avoid web scraping altogether by hitting publicly available REST API endpoints to get data directly
Develop a web scraper and crawler from scratch using the lxml and BeautifulSoup libraries, and learn about scraping JavaScript-enabled pages using Selenium
Use AWS-based cloud computing with EC2, S3, Athena, SQS, and SNS to analyze, extract, and store useful insights from crawled pages
Use SQL on PostgreSQL running on Amazon Relational Database Service (RDS) and on SQLite using SQLAlchemy
Review scikit-learn, Gensim, and spaCy to perform NLP tasks on scraped web pages such as named entity recognition, topic clustering (K-means, agglomerative clustering), topic modeling (LDA, NMF, LSI), topic classification (naive Bayes, gradient boosting classifier), and text similarity (cosine distance-based nearest neighbors)
Handle web archival file formats and explore Common Crawl open data on AWS
Illustrate practical applications of web crawl data by building a similar-sites tool and a technology profiler similar to [...]
Write scripts to create a backlinks database on a web scale similar to [...], [...], [...], etc., for search engine optimization (SEO), competitor research, and determining website domain authority and ranking
Use web crawl data to build a news sentiment analysis system or an alternative-data financial analysis system covering stock market trading signals
Write a production-ready crawler in Python using the Scrapy framework, and deal with practical workarounds for CAPTCHAs, IP rotation, and more
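To illustrate the kind of scraper the list above begins with, here is a minimal link extractor built only on the Python standard library. It is a sketch, not the book's own code: the book develops scrapers with lxml and BeautifulSoup, which offer far richer selectors than the bare `html.parser` used here.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from every <a> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/docs">Docs</a> <a href="https://example.com">Home</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)
```

A real crawler would fetch pages over HTTP, normalize relative links against the page URL, and respect robots.txt; those concerns are exactly what libraries like Scrapy handle for you.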
Who This Book Is For
Primary audience: data analysts and scientists with little to no exposure to real-world data processing challenges. Secondary: experienced software developers doing web-heavy data processing who need a primer. Tertiary: business owners and startup founders who want to know more about implementation in order to better direct their technical team.
Jay graduated with a bachelor's degree in engineering from the Institute of Chemical Technology, University of Mumbai, India, and a master of science degree from the University of Georgia, USA. He serves as an editor of a publication titled Web Data Extraction and also blogs about personal projects, open source packages, and his experiences as a startup founder on his personal site, [...].
Shows you how to process web crawls from Common Crawl, one of the largest publicly available web crawl datasets (petabyte scale), indexing over 25 billion web pages every month
Takes you from developing a simple Python-based web scraper on your personal computer to a distributed crawler with multiple nodes running on the cloud
Teaches you to process raw data using NLP techniques and boilerplate removal to extract useful insights that can power businesses with vertical/meta search engines, lead generation and Internet marketing, monitoring of competitors, brands, and prices, and more
Chapter 1: Introduction to Web Scraping
Chapter 2: Web Scraping in Python Using Beautiful Soup Library
Chapter 3: Introduction to Cloud Computing and Amazon Web Services (AWS)
Chapter 4: Natural Language Processing (NLP) and Text Analytics
Chapter 5: Relational Databases and SQL Language
Chapter 6: Introduction to Common Crawl Datasets
Chapter 7: Web Crawl Processing on Big Data Scale
Chapter 8: Advanced Web Crawlers
Year of publication: | 2020 |
---|---|
Genre: | Computer science |
Category: | Science & technology |
Medium: | Paperback |
Contents: | xix, 397 pp., 88 b/w illustrations |
ISBN-13: | 9781484265758 |
ISBN-10: | 1484265750 |
Language: | English |
Presentation: | Paperback |
Binding: | Softcover |
Author: | Patel, Jay M. |
Edition: | 1st ed. |
Publisher: | Apress (Apress L.P.) |
Dimensions: | 254 x 178 x 23 mm |
By: | Jay M. Patel |
Publication date: | 13.11.2020 |
Weight: | 0.786 kg |