13 years ago my god. I wonder what Jon Skeet is doing these days.
I remember when he passed me in the reputation ranking back in the early days and thinking that I needed to be a little bit more active on the site to catch him lol.
That was a great read. Thanks!
This is the way
What's wrong with parsing HTML with regex?
Go and look it up on stack overflow
In short, it's the wrong tool for the job.
In practice, if your target is very limited and consistent, it's probably fine. But as a general statement about someone's behavior, it really sounds like someone is wasting a lot of time and regularly getting sub-par results.
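For anyone wondering what the "right tool" version looks like, here's a minimal Python sketch with requests + BeautifulSoup (the URL and selector are just placeholders), next to the regex it replaces:

```python
# Minimal sketch of parsing HTML with an actual parser instead of regex.
# Requires: pip install requests beautifulsoup4. URL/selector are placeholders.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com", timeout=10).text

# A parser copes with nesting, attributes in any order, odd whitespace, comments, etc.
soup = BeautifulSoup(html, "html.parser")
for link in soup.select("a[href]"):
    print(link["href"], link.get_text(strip=True))

# The regex "equivalent" looks shorter but silently breaks on single quotes,
# unquoted attributes, line breaks inside tags, and so on:
#   re.findall(r'<a href="(.*?)">(.*?)</a>', html)
```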
Just a heads up for companies thinking it's wrong to scrape: if you don't want info to be scraped, don't put it on the internet.
But, but, robots.txt!
Chad doesn't care!
uses curl manually instead
The sad part is that scraping is often easier than using the API.
Much less beholden to arbitrary rules, too. Way too many times companies will just up and revoke their API access or push through new restrictions. No thanks, I'll just access it myself then.
Cough Reddit cough
API starter kit
Outdated and unsupported and hasn't been replaced yet but is the standard way to use the service.
Lots of authorization tokens.
The example in the docs doesn't work (if there is one).
You have no idea where the online tutorial got the information because it doesn't link to any resources, and the docs have barely anything even though they're giant.
Uses asynchronous programming to make it faster, but it's still much, much slower than scraping without asynchronous programming.
So true that it hurts
I scrape with bash lord help me.
there's literally dozens of us!
or maybe just 2 idk
you scrape WITH BASH?
Awk all the things!
pipe sed pipe grep pipe tr pipe grep... I would say I am a bit of a plumber
as a windows user i say kindly on our behalf thank you for pushing the envelope ✉
Hold on, I thought it was supposed to be realism on the virgin's behalf and ridiculous nonsense on the chad's behalf.
All I see is realism on both sides lol
someone's never used a good api. like mastodon
I’ve just discovered selenium and my life has changed.
I created a shitty script (with ChatGPT's help) that uses Selenium and can dump a Confluence page from work, all its subpages and all linked Google Drive documents.
How so?
When a customer needs a part replaced, they send in shipping data. This data has to be entered into 3-4 different web forms and an email. This allows me to automate it all from a single form that has built-in error checking, so human mistakes are limited.
Company could probably automate this all in the backend but they won’t :shrug:
Using Selenium for this is probably overkill. You might be better off sending direct HTTP requests with your form data. This way you don't actually have to spin up an entire browser to perform that simple operation for you.
That said, if it works - it works!
I'm guessing forms like this have CSRF protection, so you'd probably have to obtain that token and hope they don't make a new one on every request.
Good point. This is also possible to overcome with one additional HTTP request and some HTML parsing. Still less overhead than running Selenium! In any event, I was replying in a general sense: Selenium is easy to understand and seems like an intuitive solution to a simple problem. In 99% of cases some additional effort will result in a more efficient solution.
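Roughly what that "one extra request plus some HTML parsing" looks like in Python with requests + BeautifulSoup. The URL, form fields and the name of the hidden CSRF input are all made up for illustration, so check the actual form's HTML:

```python
# Rough sketch: grab the CSRF token with one extra GET, then POST the form data.
# Everything here (URL, field names) is hypothetical.
import requests
from bs4 import BeautifulSoup

session = requests.Session()  # keeps cookies between the two requests

# 1. GET the page that contains the form and pull the hidden CSRF token out.
page = session.get("https://intranet.example.com/replacement-form", timeout=10)
soup = BeautifulSoup(page.text, "html.parser")
token = soup.find("input", {"name": "csrf_token"})["value"]

# 2. POST the form data along with that token, using the same session/cookies.
resp = session.post(
    "https://intranet.example.com/replacement-form",
    data={
        "csrf_token": token,
        "customer": "ACME Corp",
        "tracking_number": "1Z999AA10123456784",
    },
    timeout=10,
)
resp.raise_for_status()
```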
I used Twitter Scraper to get twitter data for my thesis. Shortly after, it became obsolete.
https://github.com/taspinar/twitterscraper/issues/368 rip twitter scraper
Let me introduce you to WooB (formerly WEBooB).
Why on earth would they have changed that? WEBooB is a way better name.
But it's got boob in it.
Ok then make a spotify scraper
Fuck, I think I've been doing it wrong and this meme gave me more things to learn than any YouTube video has
Memes have always been superior to YouTube videos
Let's see what WEI (if implemented) will do to scrapers. The future doesn't look promising.
What's that?
A Google/Chrome proposal for browser verification, i.e. killing add-ons and custom browsers.
Nice name, beat me to it
My undergrad project was a scraper - there just wasn't a name for it yet.
Scrapers have been a thing for as long as the web has existed.
One of the first search engines is even called WebCrawler
I honestly have no idea what these are about...
Websites and services create APIs for programmers to use them. So Spotify has code that lets you build a program that can use its features. But you need a token they give you after you sign up. The token can be revoked, and it's used to monitor how much of their service you're using. That way they can restrict you if it's too much.
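For the curious, here's a sketch of that token dance in Python, using Spotify's client-credentials flow as the example. The client ID/secret are placeholders for the values you get when registering an app; double-check the endpoints against Spotify's docs:

```python
# Sketch of "get a token, then call the API with it" using Spotify as the example.
import requests

CLIENT_ID = "your-client-id"          # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder

# 1. Trade your app credentials for a short-lived access token.
token_resp = requests.post(
    "https://accounts.spotify.com/api/token",
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
    timeout=10,
)
access_token = token_resp.json()["access_token"]

# 2. Every API call carries the token, which is how usage gets metered
#    (and how access gets revoked or rate-limited if you overdo it).
search = requests.get(
    "https://api.spotify.com/v1/search",
    params={"q": "daft punk", "type": "artist", "limit": 1},
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(search.json()["artists"]["items"][0]["name"])
```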
Scraping is raw dogging the web slut you met at the cougar ranch who went home with you because you reminded her of her dog
This is the greatest definition for scraping I've ever read. You should have it bronzed.
Put like that I want to learn everything about it.
'Scraping' is the process of anonymously and programmatically collecting data from a webpage (or pages), often without the website's permission and limited to the content made publicly available. This is in contrast to using an API provided by the database owner, which is limited by tokens, access volume, available endpoints, etc.
Every time I think I'm good with tech, something like this shows up in my feed and makes me realize I know jack shit.
ROFL, Chad only thinks that shit works
How exactly do you make money scraping?
By getting someone to hire you to do it.
Mind blowing stuff
No, I mean more: what's the use case where it would be worth scraping on a massive scale?
When the data is on multiple sites or sources.
API licenses can be expensive, and some sources might not even have an API.
I get the concept, but give me a concrete example. What company could possibly want to pay for scraping a site?
Some dude doing it as a hobby I get, but what, like Amazon will pay some guy to scrape competitors' prices or something?
Maybe you've got a small company involved in toy buying and reselling, and they want to scrape toy postings from ebay etc. so that they can scroll through a database of different postings and sort it by price or estimated profit or whatever.
I can't imagine data scraping is something companies will quickly admit to, considering the legal issues involved. It was also the norm for a long time -- APIs for accessing user-generated data are a relatively new thing.
As for a concrete example: companies using ChatGPT. A lot of useful data comes from scraping sites that don't offer an API.
There's a ton of money to be made from scraping, consolidating, and organizing publicly accessible data. A company I worked for did it with health insurance policy data because every insurance company has a different website with a different data format and data that updates every day. People will pay da big bux for someone to wrap all that messiness into a neat, consistent package. Many sites even gave us explicit permission to scrape because they didn't want to set up an api or find some way to send us files.
Right now, gathering machine learning data is hot, cause you need a lot of it to train a model. Companies may specialize in getting, say, social media posts from all kinds of sites and putting them together in a consistent format.
That's why I use geddit
I really hope Libreddit switches to scraping, the "Error: Too many request" thing is so annoying, I have to click the redirect button in Libredirect like 20 times until I can actually see a post.
Still a better experience than Reddit's official site tho.
Sorry, I'm ignorant in this matter. Why exactly would you want to scrape websites aside from collecting data for ML? What kind of irreplaceable API are you using? Someone please educate me here.
An API might cost a lot of money for the number of requests you want to send. An API may not include some fields in the data you want. An API is rate limited; scraping might not be. An API requires agreement to usage terms; scraping does not (though the recent LinkedIn scraping case might weaken that argument).
This kinda reminds me of pirating vs paying.
Using an API = you know it will always be the same structure and you will get the data you asked for; otherwise you will be notified, unless they version their API. There is usually good documentation. You can always ask for help.
Scraping = you need to scout the whole website yourself. You need to keep up to date with the website's structure and make sure they haven't added ways to block bots (scrapers). Error handling is a lot more intense on your end: missing content, hidden content, querying for data. The website may not follow the same standards/structure throughout, so you need checks for when to use x to get y. The data may need multiple requests because, for example, they don't show all the user settings on one page even though an API call would, or it's an AJAX page and you need to run JavaScript and click buttons that may change their id, class or text and that only load data when you do x, so you need to emulate the webpage.
So my guess is that scraping is used most often when you only need to fetch simple data structures and you are fine with cleaning up the data afterwards. Like grabbing all the text/images on a page, checking if a page has been updated, or just saving the whole page like the Wayback Machine does.
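The "error handling is a lot more intense on your end" part tends to look something like this in practice. A hedged Python sketch where every selector is hypothetical and every field is allowed to be missing:

```python
# Defensive scraping sketch: the site may rename or move things at any time,
# so nothing is assumed. URL and selectors are hypothetical.
import requests
from bs4 import BeautifulSoup

def scrape_listing(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # Try a couple of known layouts in case the markup changed.
    title_node = soup.select_one("h1.product-title") or soup.select_one("h1")
    price_node = soup.select_one("span.price") or soup.select_one("[data-price]")

    return {
        "url": url,
        "title": title_node.get_text(strip=True) if title_node else None,
        "price": price_node.get_text(strip=True) if price_node else None,
        # Missing fields come back as None instead of crashing the whole run.
    }
```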
As someone who used to scrape government websites for a living (with permission from them, because they'd rather have us break their single 100-year-old server than give us a CSV), I can confirm that maintaining scraping scripts is a huge pain in the ass.
Ooof, I am glad you don't have to do it anymore. I have a customer who is in the same situation. The company with the site was also OK with it (it was a running joke that "this [bot] is our fastest user"), but it was very sketchy because you had to log in as someone to run the bot. Thankfully they always told us when they made changes, so we were never surprised.
My understanding is that the result of the LinkedIn case is that you can scrape data that you have permission to view, but not access data that you were not intended to see. The end result is that ClickWrap agreements are unenforceable.
So uh...as someone who's currently trying to scrape the web for email addresses to add to my potential client list ... where do I start researching this?
Start looking into Selenium, probably in Python. It's one of the easier to understand forms of scraping. It's mainly used for web testing, though you can definitely use it for less... nice purposes.
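A minimal Selenium starting point in Python, if it helps. The URL and selector are placeholders; assumes Selenium 4 and Chrome installed:

```python
# Minimal Selenium sketch (pip install selenium; Selenium 4 can fetch the
# Chrome driver itself). URL and selector are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    # Selenium drives a real browser, so JavaScript-rendered content is visible too.
    for link in driver.find_elements(By.CSS_SELECTOR, "a[href]"):
        print(link.get_attribute("href"), link.text)
finally:
    driver.quit()
```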
Step one will be learning to code in any language.
Step two is using a library to help with it. HtmlAgilityPack has always been there for me. Don't use regex.
Virgin library user vs. Chad regex dev
Everyone loves the idea of scraping, no one likes maintaining scrapers that break once a week because the CSS or HTML changed.
I loved scraping until my IP was blocked for botting lol. I know there are ways around it, it's just work though.
I successfully scraped millions of Amazon product listings simply by routing through TOR and cycling the exit node every 10 seconds.
That's a good idea right there, I like that
This guy scrapes
lmao, yeah, get all the exit nodes banned from amazon.
That’s the neat thing, it wouldn’t because traffic only spikes for 10s on any particular node. It perfectly blends into the background noise.
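For anyone curious, a rough sketch of that rotate-the-exit-node trick in Python, assuming a local Tor daemon with the SOCKS port on 9050 and the control port on 9051, plus the stem library. URLs are placeholders:

```python
# Rough sketch: route requests through a local Tor SOCKS proxy and ask Tor for
# a new circuit (new exit node) roughly every 10 seconds.
# Assumes: tor running locally with ControlPort 9051 and cookie/no-password auth,
# plus: pip install stem "requests[socks]"
import time
import requests
from stem import Signal
from stem.control import Controller

PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def new_exit_node():
    # Signal Tor to build a fresh circuit, i.e. a different exit node.
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        controller.signal(Signal.NEWNYM)

last_rotation = 0.0
for url in ["https://example.com/product/1", "https://example.com/product/2"]:
    if time.time() - last_rotation > 10:
        new_exit_node()
        last_rotation = time.time()
    print(url, requests.get(url, proxies=PROXIES, timeout=30).status_code)
```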
Cue Office Space style error and scrape for 10 hours on each node.
You guys use IPs?
I'm coding baby's first bot over here lol, I could probably do better
Token ring for me baybeee
Or in the case of wikipedia, every table on successive pages for sequential data is formatted differently.
Just use AI to make changes ¯_(ツ)_/¯
Here take these: \\
¯_(ツ)_/¯\\ Thanks
I'm down with scraping, but "parses HTML with regex" has got me fucked up.
Relevant SO post. https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags#1732454
I wanted to build a Discord bot that would check NIST for new CVEs every 24 hours. But their API leaves quiiiiiiite a bit to be desired.
Their pages, however…
Just use this https://github.com/CVEProject/cvelistV5/tree/main/cves
Oh yeah, that’s much more robust
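If anyone wants to try the 24-hour CVE check anyway, here's a hedged Python sketch against NVD's JSON API. The endpoint, parameters and response fields are quoted from memory, so verify them against the current NVD docs before relying on this:

```python
# Hedged sketch: pull CVEs modified in the last 24 hours from NVD's JSON API.
# Endpoint/params/fields are from memory and may differ -- check the NVD docs.
from datetime import datetime, timedelta, timezone
import requests

now = datetime.now(timezone.utc)
params = {
    "lastModStartDate": (now - timedelta(days=1)).isoformat(),
    "lastModEndDate": now.isoformat(),
}
resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params=params,
    timeout=30,
)
for vuln in resp.json().get("vulnerabilities", []):
    cve = vuln["cve"]
    summary = next(
        (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"), ""
    )
    print(cve["id"], summary[:120])
```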
It’s all fun and games until you have to support all this shit and it breaks weekly!
That being said, I do miss the simplicity of maintaining selenium projects for work
I use scrapy. It has a steeper learning curve than other libraries, but it's totally worth it.
Splash ftw
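For reference, a minimal Scrapy spider sketch showing what the steeper learning curve buys you (scheduling, retries, throttling and export come from the framework). The site and selectors are the ones from Scrapy's own tutorial target, quotes.toscrape.com:

```python
# Minimal Scrapy spider (pip install scrapy). Save as quotes_spider.py and run:
#   scrapy runspider quotes_spider.py -o quotes.json
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination; Scrapy handles the request queue and dedupe.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```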