Google Maps crawler using Selenium

Overview


Built as part of the Antifragile Dev Project

A Selenium crawler that browses Google Maps as a regular user would and stores the extracted data in a Python object.
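
For illustration only (not the project's actual code), here is a minimal sketch of that idea with Selenium: open a Google Maps search in a real browser session, wait for the JavaScript-rendered results, and read the first result's name. The CSS selector and the aria-label lookup are assumptions and may need adjusting as Google Maps markup changes.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    # Browse like a regular user typing a search into maps.google.com.
    driver.get("https://www.google.com/maps/search/pizza+in+santos")
    # Result cards are rendered client-side, so wait for the first one.
    wait = WebDriverWait(driver, 15)
    first = wait.until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "div[role='article']"))
    )
    print(first.get_attribute("aria-label"))  # typically the place name
finally:
    driver.quit()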


Sample

Extracted data example:

Place(
    name='Pizza Me Santos',
    address='Av. Washington Luis, 565 - loja 05 - Boqueirão, Santos - SP, 11055-001',
    business_hours={
        'Wednesday': '6–10:30PM',
        'Thursday': '6–10:30PM',
        'Friday': '6–11PM',
        'Saturday': '6–11PM',
        'Sunday': '6–10:30PM',
        'Monday': '6–10:30PM',
        'Tuesday': '6–10:30PM'
    },
    photo_link='https://lh5.googleusercontent.com/p/AF1QipMyVkKioODaU0A_ogHPXosm_QcMndZN6I6YHIDo=w408-h272-k-no',
    rate='5.0',
    reviews='16 reviews',
    extra_attrs={
        'Menu': 'Menu\npizzame-santos.goomer.app',
        'Website: pizzame-santos.goomer.app ': 'pizzame-santos.goomer.app',
        'Phone: (13) 3385-0059 ': '(13) 3385-0059',
        'Plus code: 2MHC+WF Boqueirão, Santos - State of São Paulo': '2MHC+WF Boqueirão, Santos - State of São Paulo'
    },
    traits={
        'Service options': ['No-contact delivery', 'Delivery', 'Takeaway', 'Dine-in'],
        'Accessibility': ['Wheelchair-accessible entrance'],
        'Offerings': ['Organic dishes', 'Vegetarian options'],
        'Dining options': ['Dessert'],
        'Amenities': ['Good for kids'],
        'Atmosphere': ['Casual'],
        'Crowd': ['Groups'],
        'Planning': ['Accepts reservations'],
        'Payments': ['Credit cards']
    }
)
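
A hypothetical sketch of the Place container implied by the output above, assuming a plain dataclass; the actual project may define it differently:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Place:
    # Fields mirror the example dump above; the defaults are assumptions.
    name: str = ""
    address: str = ""
    business_hours: Dict[str, str] = field(default_factory=dict)
    photo_link: str = ""
    rate: str = ""
    reviews: str = ""
    extra_attrs: Dict[str, str] = field(default_factory=dict)
    traits: Dict[str, List[str]] = field(default_factory=dict)
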
Owner
Guilherme Latrova
Sportist, Creator, Software writer, Coffee appreciator, Lucky husband and God servant :)