Scrapes labels from the Etherscan, BscScan, PolygonScan, Optimistic Etherscan, Arbiscan, FtmScan, and Snowtrace websites and stores them as JSON/CSV.
🔴 Currently broken because undetected-chromedriver is not working.
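Since the breakage is attributed to undetected-chromedriver, a quick way to narrow it down is to check whether the driver can launch a browser at all on your machine. The snippet below is a minimal troubleshooting sketch, not part of this repo; it only assumes the `undetected-chromedriver` package is installed.

```python
# Minimal smoke test: can undetected-chromedriver start Chrome and load a page?
# Troubleshooting aid only; not part of the scraper itself.
import undetected_chromedriver as uc

driver = uc.Chrome()  # may need version_main=<your Chrome major version> on version mismatches
try:
    driver.get("https://etherscan.io")
    print("Loaded:", driver.title)
finally:
    driver.quit()
```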
| Chain | Site | Label Count | Status | Last scraped | 
|---|---|---|---|---|
| ETH | https://etherscan.io | 29945 | ✅ ok | 18/6/2023 | 
| BSC | https://bscscan.com | 6726 | ✅ ok | 26/3/2023 | 
| POLY | https://polygonscan.com | 4997 | ✅ ok | 26/3/2023 | 
| OPT | https://optimistic.etherscan.io | 546 | ✅ ok | 29/3/2023 | 
| ARB | https://arbiscan.io | 837 | ✅ ok | 26/3/2023 | 
| FTM | https://ftmscan.com | 1085 | ✅ ok | 26/3/2023 | 
| AVAX | https://snowtrace.io | 1062 | ✅ ok | 26/3/2023 | 
Total Chains: 7
Total Labels: 45198
- On the command line, run `pip install -r requirements.txt` from the folder containing the code.
- (Optional) Add `ETHERSCAN_USER` and `ETHERSCAN_PASS` to `sample.config.json` and rename it to `config.json` (a config sketch follows this list).
- Run the script with `python main.py`.
- Enter `eth`, `bsc`, or `poly` to specify the chain of interest.
- Log in to your ___scan account (prevents popups/missing data).
- Press Enter in the CLI once you are logged in.
- Enter either `single` (retrieve a specific label) or `all` (retrieve ALL labels).
- If `single`: follow up with the specific label, e.g. `exchange`, `bridge`, ...
- If `all`: simply let it run (takes about 1h+ to retrieve everything; note that it occasionally crashes as well).
- Individual JSON and CSV data is dumped into the `data` subfolder.
- Consolidated JSON label info is dumped into the `combined` subfolder (a loading sketch follows below).
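For reference, the optional credentials step above boils down to a two-key JSON file. The sketch below shows the assumed shape of `config.json` and one way a script might read it; the key names come from the list above, but the loading code is illustrative and not a copy of `main.py`.

```python
# Illustrative only: config.json (renamed from sample.config.json) is assumed
# to contain {"ETHERSCAN_USER": "...", "ETHERSCAN_PASS": "..."}.
import json

with open("config.json") as f:
    config = json.load(f)

user = config["ETHERSCAN_USER"]
password = config["ETHERSCAN_PASS"]
print("Loaded credentials for:", user)
```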
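Once a run finishes, the dumped files can be inspected with a few lines of Python. This is a generic sketch that only assumes the `data/` and `combined/` subfolders mentioned above; the file names and internal JSON structure depend on the chain and labels you scraped.

```python
# Generic sketch for inspecting scraper output; file names inside the folders
# are whatever the run produced, so we simply glob for them.
import glob
import json

for path in glob.glob("combined/*.json") + glob.glob("data/*.json"):
    with open(path) as f:
        payload = json.load(f)
    # Print a rough size so you can see which dumps are populated.
    size = len(payload) if hasattr(payload, "__len__") else "?"
    print(f"{path}: {size} entries")
```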