Are you a Twitter refugee? Are you still feeling kind of addicted to walled gardens? Let me introduce you to the manifold wonders of everyone with their own magical, shitty, awesome corner of the internet: http://web.archive.org/web/20000301234253/http://geocities.yahoo.com/home/
@skanman you're taking 2022 thinking and applying it to a 1999 world, my good man. Gathered tons of data how? Half the world was still on dialup. Internet Explorer and Netscape Navigator were running two incompatible flavors of JavaScript. Ajax as a programming approach was only just starting to be conceived of.
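For anyone who never suffered through that era: requests without a page reload only existed as IE5's ActiveX hack from 1999, and other browsers had nothing comparable until Mozilla shipped a native XMLHttpRequest a few years later. A hypothetical TypeScript sketch of the shim pattern everyone eventually wrote, not any real site's code:

```typescript
// Historical sketch (hypothetical, for illustration only).
// IE 5/6 exposed background requests only through ActiveX; other browsers
// had no equivalent until Mozilla's native XMLHttpRequest circa 2000-2002.
declare const ActiveXObject: new (progId: string) => any; // IE-only global

function createRequest(): XMLHttpRequest {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();                 // Mozilla and later browsers
  }
  return new ActiveXObject("Microsoft.XMLHTTP"); // IE 5/6 ActiveX fallback
}

// Classic callback-style usage, long before fetch() or promises.
const req = createRequest();
req.open("GET", "/guestbook.cgi", true);
req.onreadystatechange = () => {
  if (req.readyState === 4 && req.status === 200) {
    console.log(req.responseText);
  }
};
req.send();
```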
But also, it wasn't data that made Google different. I remember early on, other search engines had crawled more of the web than Google had. Google had PageRank; it knew what to do with the data it had.
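For reference, the core PageRank idea fits in a few lines. This is just a toy power-iteration sketch of the published algorithm, nothing like Google's actual implementation:

```typescript
// Toy PageRank via power iteration. graph[i] lists the pages page i links to.
function pageRank(graph: number[][], damping = 0.85, iters = 50): number[] {
  const n = graph.length;
  let rank: number[] = new Array(n).fill(1 / n);
  for (let k = 0; k < iters; k++) {
    // Every page gets a baseline share, plus shares from pages linking to it.
    const next: number[] = new Array(n).fill((1 - damping) / n);
    graph.forEach((outLinks, i) => {
      if (outLinks.length === 0) {
        // Dangling page with no outlinks: spread its rank evenly.
        for (let j = 0; j < n; j++) next[j] += (damping * rank[i]) / n;
      } else {
        for (const j of outLinks) {
          next[j] += (damping * rank[i]) / outLinks.length;
        }
      }
    });
    rank = next;
  }
  return rank;
}

// Tiny 3-page web: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
console.log(pageRank([[1], [2], [0, 1]]));
```

The punchline: link structure alone, no content analysis, already ranks pages usefully, which is why a smaller crawl could still win.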
@nomi I know, I'm playing the "what if" game. 😂 Ajax just allows for requests and responses without reloading the page, and even it's outdated now. But data still could have been gathered at the request level; that's been happening in request and response headers for an eternity. I think the real blocker was SQL: conglomerating all of that in two-dimensional tables would have been a serious nightmare, versus storing it in multidimensional NoSQL documents, which weren't around yet. Tables are fine for large volumes of the same data, but when you have large volumes of *types* of data, it's enough to make anyone cry.
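To make the table pain concrete, here's a hypothetical sketch (all names made up) of the same Geocities-style user page modeled relationally versus as one nested document. The relational version needs a new table and new joins for every content type; the document version just grows:

```typescript
// Hypothetical sketch: the same user's page, modeled two ways.

// Relational thinking: one fixed-shape table per content type, so every
// new widget (guestbook, MIDI player, webring badge...) means another
// table plus another join to reassemble one user's page.
interface PageRow      { userId: number; title: string }
interface ImageRow     { userId: number; url: string }
interface GuestbookRow { userId: number; author: string; message: string }

// Document thinking: one record per user, nested and heterogeneous.
interface UserDoc {
  userId: number;
  title: string;
  widgets: Array<
    | { kind: "image"; url: string }
    | { kind: "guestbook"; entries: { author: string; message: string }[] }
    | { kind: "midi"; track: string }
  >;
}

const examplePage: UserDoc = {
  userId: 42,
  title: "Welcome to my corner of the web",
  widgets: [
    { kind: "image", url: "/under_construction.gif" },
    { kind: "midi", track: "canyon.mid" },
    { kind: "guestbook", entries: [{ author: "nomi", message: "cool site!" }] },
  ],
};
```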
I'd like to think that, as dynamic as Geocities was and with the amount of stupid images and content it hosted, they had the server capacity to pull it off. Developer-wise, they did manage to ship a WYSIWYG in-browser page editor. If they could build that, they could have built this.
Sure, one can argue that Google could just crawl Geocities and scrape that data too, but Geocities knew who its users were and could've targeted them directly. Google still can't scrape a backend.
But the ability for users to easily dump their own content onto the internet has always been #1. WordPress powers something like 40% of all websites, and Google indexes maybe 5% of the internet (the rest is the deep web). Ergo, if every WordPress backend sent user/usage data to one central system, that system would most likely crash instantly 😂 But if it didn't, it would have more data on society as a whole than Google / Meta / Amazon / Microsoft combined. Geocities was like the original WordPress. It was garbage, but had they done it right, they wouldn't be owned by Yahoo; I think they'd own Yahoo. Google would still be bigger, because they went horizontal and vertical at the same time while everyone else was focusing on vertical growth.
Hey, sorry dude, I didn't mean to type this much, but I love playing out scenarios from our past. Next time let's just trash MySpace together as a team beatdown. 😂
@nomi Geocities was a brilliant idea executed poorly. I still find the idea of personal domains fascinating from a big-data perspective. They could have gathered more data on their users than any other company, simply because users built their sites as a personal reflection of themselves, or more valuably, of who they wished they were. Had they capitalized on this, they could have grown into a data monster that even Google would envy.