
Go Sitemap Crawler (Laghu)

4 devlogs
20h 39m 52s


Laghu is a Go sitemap crawler that lets a user crawl the sitemap of any website and scrape all its links and info in a fraction of a second. It is a CLI-based application that makes scraping any kind of sitemap and URL much easier and faster.
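
For anyone curious how a crawler like this works at its core, here is a minimal Go sketch that fetches a standard `<urlset>` sitemap and collects every `<loc>` URL. The names (`fetchSitemap`, `urlset`) and the example URL are illustrative assumptions, not the actual Laghu source.

```go
// Minimal sketch of the core idea behind a sitemap crawler:
// fetch sitemap.xml and collect every <loc> URL.
// Names here are illustrative, not taken from the Laghu source.
package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
)

type urlset struct {
	URLs []struct {
		Loc string `xml:"loc"`
	} `xml:"url"`
}

func fetchSitemap(sitemapURL string) ([]string, error) {
	resp, err := http.Get(sitemapURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var set urlset
	if err := xml.NewDecoder(resp.Body).Decode(&set); err != nil {
		return nil, err
	}

	links := make([]string, 0, len(set.URLs))
	for _, u := range set.URLs {
		links = append(links, u.Loc)
	}
	return links, nil
}

func main() {
	links, err := fetchSitemap("https://example.com/sitemap.xml")
	if err != nil {
		fmt.Println("could not fetch sitemap:", err)
		return
	}
	for _, l := range links {
		fmt.Println(l)
	}
}
```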

This project uses AI for basic debugging and for some parts of the code only.

Demo Repository


En Passant

In this devlog I added several very useful features. The user can now see a summary list along with the scraped URLs, and has the choice to download them, which makes the project much smoother to work with. The summary also shows how many URLs were scraped and how long it took, saving time and making the tool more user-friendly.
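
A rough sketch of what that summary could look like in Go; the `crawl` helper is a stand-in stub, and all names here are assumptions rather than Laghu's real code.

```go
package main

import (
	"fmt"
	"time"
)

// crawl stands in for the real scraping step; illustrative only.
func crawl() []string {
	return []string{"https://example.com/", "https://example.com/about"}
}

func main() {
	start := time.Now()
	links := crawl()
	elapsed := time.Since(start).Round(time.Millisecond)

	// The summary the user sees before choosing whether to download the list.
	fmt.Printf("Scraped %d URLs in %s\n", len(links), elapsed)
	for i, l := range links {
		fmt.Printf("%3d. %s\n", i+1, l)
	}
}
```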

En Passant

Shipped this project!

Hours: 3.82
Cookies: 🍪 88
Multiplier: 23.03 cookies/hr

Updated the project so it now works on Linux too, and added some useful features as well.

En Passant

Added the Linux executable, updated the README, and the application is pretty much ready.

En Passant

In this devlog I added a feature that stores all the generated links in two different JSON files, seo_report.json and urls_only.json, keeping the links more organized and letting a user view the history and work with them more efficiently and cleanly. After two hours of work and a lot of errors, I finally got the code working, and I will add a frontend to it as well.
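
Here is a hedged Go sketch of that two-file output. Only the filenames seo_report.json and urls_only.json come from the devlog; the `Report` fields and the `writeJSON` helper are illustrative assumptions.

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

// Report is an assumed shape for one scraped URL's details.
type Report struct {
	URL   string `json:"url"`
	Title string `json:"title"`
}

// writeJSON pretty-prints any value to the given file.
func writeJSON(path string, v any) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	enc := json.NewEncoder(f)
	enc.SetIndent("", "  ")
	return enc.Encode(v)
}

func main() {
	reports := []Report{{URL: "https://example.com/", Title: "Home"}}
	urls := []string{"https://example.com/"}

	// seo_report.json holds the organized per-URL details;
	// urls_only.json holds just the bare list of links.
	if err := writeJSON("seo_report.json", reports); err != nil {
		log.Fatal(err)
	}
	if err := writeJSON("urls_only.json", urls); err != nil {
		log.Fatal(err)
	}
}
```

Splitting the output this way means a quick consumer can grab the flat URL list while the detailed report stays available for history.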

En Passant

Shipped this project!

Hours: 14.48
Cookies: 🍪 187
Multiplier: 12.94 cookies/hr


En Passant

Completed the entire backend of the site. It is working smoothly now; I made a lot of commits on GitHub but only just added the first devlog for it. I tested it with many different URLs and it worked smoothly with all of them. If a user enters an invalid URL, it displays a message that the URL cannot be scraped, which makes the user's task easier and faster. I even forgot to post the devlog because I was busy debugging the code and committing to GitHub. I am also thinking of building a frontend for it, so I will be working on that too.
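
A small illustrative sketch of how that wrong-URL check might look in Go, using the standard net/url parser; the `validate` function and its messages are assumptions, not Laghu's actual code.

```go
package main

import (
	"fmt"
	"net/url"
	"os"
)

// validate rejects anything that is not an absolute http(s) URL,
// so the user gets a clear message instead of a failed crawl.
func validate(raw string) error {
	u, err := url.ParseRequestURI(raw)
	if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
		return fmt.Errorf("%q cannot be scraped: not a valid http(s) URL", raw)
	}
	return nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: laghu <sitemap-url>")
		return
	}
	if err := validate(os.Args[1]); err != nil {
		fmt.Println(err) // clear message instead of a crash
		return
	}
	fmt.Println("URL looks valid, starting crawl...")
}
```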
