Let us help you find the training program you are looking for.

If you can't find what you are looking for, contact us, we'll help you find it. We have over 800 training programs to choose from.


  • Course Skill Level:

  • Course Duration:

    2 days

  • Course Delivery Format:

    Live, instructor-led.

  • Course Category:

    Big Data & Data Science

  • Course Code:
Who should attend & recommended skills

Python developers

  • This course is geared for experienced Python developers, analysts, and others with Python skills who want to learn how Go-specific language features help simplify building web scrapers, along with common pitfalls and best practices for web scraping.
  • Skill level: foundation-level Go web scraping skills for intermediate team members.
  • This is not a basic class.
  • Python: Basic (1-2 years’ experience).

About this course

Web scraping is the process of extracting information from the web using tools that scrape and crawl. Go is emerging as a language of choice for scraping, thanks to a variety of libraries. This course shows you how to scrape data from websites using Go libraries such as Colly and Goquery. It starts with the use cases for building a web scraper and the main features of the Go programming language, along with setting up a Go environment. It then moves on to HTTP requests and responses and how Go handles them. You will also learn basic web scraping etiquette. You will be taught how to navigate a website using breadth-first and depth-first search, and how to find and follow links. You will learn ways to track history in order to avoid loops, and how to protect your web scraper using proxies. Finally, the course covers the Go concurrency model, running scrapers in parallel, and large-scale distributed web scraping.

Skills acquired & topics covered

Working in a hands-on learning environment, led by our Go Web Scraping expert instructor, students will learn about and explore:

  • Using Go libraries like Goquery and Colly to scrape the web
  • Common pitfalls and best practices to effectively scrape and crawl
  • How to scrape using the Go concurrency model
  • Implementing Cache-Control to avoid unnecessary network calls
  • Coordinating concurrent scrapers
  • Designing a custom, larger-scale scraping system
  • Scraping basic HTML pages with Colly and JavaScript pages with chromedp
  • How to search using the “strings” and “regexp” packages
  • Setting up a Go development environment
  • Retrieving information from an HTML document
  • Protecting your web scraper from being blocked by using proxies
  • Controlling web browsers to scrape JavaScript sites

Course breakdown / modules

  • What is web scraping?
  • Why do you need a web scraper?
  • What is Go?
  • Why is Go a good fit for web scraping?
  • How to set up a Go development environment

  • What do HTTP requests look like?
  • What do HTTP responses look like?
  • What are HTTP status codes?
  • What do HTTP requests/responses look like in Go?

  • What is a robots.txt file?
  • What is a User-Agent string?
  • How to throttle your scraper
  • How to use caching

  • What is the HTML format?
  • Searching using the strings package
  • Searching using the regexp package
  • Searching using XPath queries
  • Searching using Cascading Style Sheets selectors

  • Following links
  • Submitting forms
  • Avoiding loops
  • Breadth-first versus depth-first crawling
  • Navigating with JavaScript

  • Virtual private servers
  • Proxies
  • Virtual private networks
  • Boundaries

  • What is concurrency?
  • Concurrency pitfalls
  • The Go concurrency model
  • sync package helpers

  • Components of a web scraping system
  • Scraping HTML pages with colly
  • Scraping JavaScript pages with chrome-protocol
  • Distributed scraping with dataflowkit
  • Summary