The IDE
This is where you write and test your JavaScript code for web scraping. It provides a complete coding environment with built-in tools for efficient data extraction. Learn more about The basics of Web Scraping.
- Pre-built template code created by our scraper engineers to help you get started quickly with common websites and scraping patterns.
- Use stages when you need to collect data across multiple pages. For example, to collect all products from an Amazon search results page and gather details for each product, you can:
  - Stage 1 (Discovery): Collect product URLs from the search results and pass them to Stage 2
  - Stage 2 (Product Page): Visit each URL to extract product details
- The next_stage and run_stage commands are available to interact between stages.
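To make the two-stage flow above concrete, here is a minimal sketch of the control flow. next_stage and a collect helper are provided by the IDE at runtime; in this standalone sketch they are stubbed with plain arrays so the logic can run anywhere, and the URLs and field names are illustrative only.

```javascript
// Stubs standing in for the IDE-provided helpers (hypothetical, for illustration).
const children = [];                                // stand-in for the child-input queue
const results = [];                                 // stand-in for the collected output
const next_stage = (input) => children.push(input); // queue an input set for the next stage
const collect = (record) => results.push(record);   // emit one output record

// Stage 1 (Discovery): in real code these URLs would be parsed from the search page.
function stage1() {
  const productUrls = ['https://example.com/p/1', 'https://example.com/p/2'];
  for (const url of productUrls) next_stage({ url }); // each child becomes a Stage 2 input
}

// Stage 2 (Product Page): visit each child URL and extract details.
function stage2(input) {
  // Real code would navigate to input.url and parse the page here.
  collect({ url: input.url, title: 'placeholder title' });
}

stage1();
for (const child of children) stage2(child);
console.log(results.length); // one record per discovered product
```

The key idea is that Stage 1 never extracts product details itself; it only emits child input sets, and each child is run independently through Stage 2.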
- Complete list of available functions with explanations and usage examples. Learn more about Interaction functions and Parser functions.
- Input: Define your input parameters and run a test (preview) with an input set
- Output: The extracted and structured data returned by the collector, containing all configured fields and their corresponding values from the scraped website
- Children: List of child records that will become the input sets of the next stage
- Run log: Code execution log
- Browser console: The scraper browser's console logs (equivalent to the Console tab in the browser's developer tools)
- Browser network: The scraper browser's network logs (equivalent to the Network tab in the browser's developer tools)
- Last errors: List of the most recent errors and their details
- Crawl inspector: A debugging tool that displays all pages crawled during a batch job, including both successful and failed pages. For multi-stage scrapers, use the ‘Search for children’ button to view child pages generated from each parent page. Downloaded files are also accessible here.
- Output schema: Displays the structure of data your collector extracts, including field names and data types. Click the edit link to modify the schema.
- Add input parameter: Define an input parameter, including its name and type
- New input: Add the values of an input set to test with
- Preview: Run a test with a selected input set
- Error mode: Define how the scraper behaves when an error occurs in the code
- Take screenshot: Take screenshots during a preview run so you can inspect the pages loaded during the test
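As a sketch of how input parameters and input sets fit together: an input set is a plain object whose keys match the parameters you defined, and scraper code typically reads those values from an input object the IDE provides at runtime. The parameter names, values, and URL below are illustrative assumptions, not part of the platform's API.

```javascript
// A test input set: keys match the input parameters defined in the Input tab
// (names and values here are illustrative).
const inputSet = { keyword: 'laptop', pages: 2 };

// Hypothetical sketch of scraper code consuming an input set. In the IDE the
// values would arrive via the runtime-provided input object; here we pass the
// test set in directly.
function buildSearchUrl(input) {
  // encodeURIComponent keeps the keyword safe inside the query string
  return `https://example.com/search?q=${encodeURIComponent(input.keyword)}&page=1`;
}

console.log(buildSearchUrl(inputSet));
// → https://example.com/search?q=laptop&page=1
```

Running a preview with this input set would exercise the same code path the scraper uses in production, just against a single hand-picked input.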
Dashboard - scraper action menu
The scraper action menu allows you to perform different actions with the scraper.
- Initiate by API - start a data collection without having to enter the control panel
- Initiate manually - Bright Data’s control panel makes it easy to get started collecting data
- Run on schedule - select precisely when to collect the data you need
- Versions - review the modified versions of the scraper
- Report an issue - You can use this form to communicate any problems you have with the platform, the scraper, or the dataset results
- Copy link - copy the link of the scraper to share it with your colleagues
- Tickets - view the status of your tickets
- Advanced options:
- Edit the code - edit the scraper’s code within the IDE.
- Disable scraper - temporarily disable the scraper, but you can reactivate it if needed.
- Delete scraper - permanently delete the scraper.
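To illustrate what "Initiate by API" means in practice, here is a hedged sketch of triggering a run over HTTP. The endpoint URL, path, and authorization scheme below are placeholders, not the platform's real API; copy the exact request shown in the Initiate by API dialog of the control panel. The fetch implementation is injected so the sketch can be exercised without network access.

```javascript
// Hypothetical sketch: trigger a scraper run with one input set via HTTP.
// Endpoint and auth header are placeholders -- use the request shown in the
// control panel's "Initiate by API" dialog instead.
async function triggerScraper(fetchImpl, apiToken, scraperId, inputSet) {
  const res = await fetchImpl(`https://api.example.com/trigger/${scraperId}`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiToken}`, // placeholder auth scheme
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([inputSet]), // one input set per collection run
  });
  return res.json();
}
```

Injecting fetchImpl also makes the trigger logic easy to unit-test with a stub before pointing it at the real endpoint.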
