
While writing the post "RSS for new event notifications", I mentioned how the flow of getting new event notifications through RSS might work:

When a new event is added to the calendar, the RSS feed can be updated. The channel property has the link to the shared calendar you can subscribe to, and the individual events have .ics files embedded in the enclosure.

I actually didn't know whether that was possible until I looked at the RSS enclosure syntax. Based on that page, as long as you provide the URL to the .ics file, the length of the file in bytes, and the MIME type (text/calendar), you can link to your events inside enclosures. So it is possible.
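
To make that concrete, here's a minimal sketch of what an item with an .ics enclosure could look like, built with Python's standard library. The event name, URL, and byte length below are placeholders I made up for illustration, not values from my actual feed.

```python
import xml.etree.ElementTree as ET

# A single RSS <item> for a calendar event. The URL and length are made-up
# placeholders; a real feed would point at the published .ics file and use
# its actual size in bytes.
item = ET.Element("item")
ET.SubElement(item, "title").text = "Community Standup"
ET.SubElement(item, "link").text = "https://example.com/events/community-standup"
ET.SubElement(item, "enclosure", {
    "url": "https://example.com/events/community-standup.ics",
    "length": "1024",          # size of the .ics file in bytes
    "type": "text/calendar",   # MIME type for iCalendar files
})

print(ET.tostring(item, encoding="unicode"))
```

An RSS client that understands enclosures could then download the .ics file and hand it off to a calendar app.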

That's not the point of this post though. When I was looking at the references, I came across this post talking about the use case for RSS enclosures. I don't know when that post was published, but given the problems it mentions with accessing high-quality video, I assume it was in the days when the internet ran at dial-up or DSL speeds. At that time, downloading webpages or e-mail was slow. Forget about downloading multimedia. As that post mentions, with enclosures you could schedule your RSS client to download content like videos during off-peak hours, so anything you wanted to access would download overnight and be available to you in the morning. About 30 years later, in many places, even with a connection that has 1 MB/s download speeds, downloading a podcast that's 100MB or a video that's 2GB is relatively fast. It might take a while, but compared to 30 years ago, it's "instant". So at this point, it's not really a technical challenge as much as it is an access challenge.
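
As a rough back-of-the-envelope check (the sustained 1 MB/s speed and the file sizes here are my own illustrative figures, not measurements):

```python
# Rough download-time estimates at a sustained 1 MB/s connection
# (speed and file sizes are illustrative assumptions).
speed_mb_per_s = 1            # MB per second
podcast_mb = 100              # ~100 MB podcast episode
video_mb = 2 * 1024           # ~2 GB video

print(f"Podcast: ~{podcast_mb / speed_mb_per_s / 60:.0f} minutes")   # ~2 minutes
print(f"Video:   ~{video_mb / speed_mb_per_s / 60:.0f} minutes")     # ~34 minutes
```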

Low bandwidth speeds forced people to get creative with how they used their resources. The internet wasn't the only place this happened. About 30 years ago, when cell phones were just becoming mainstream, they too suffered from scarcity. Phones and plans were very expensive. You had limited minutes, and when SMS became a thing, those were limited too. If you wanted to have hour-long conversations with friends and loved ones, you had to wait until off-peak hours so you could use your "free" minutes. Fast-forward 30 years, and cell phones and cell phone plans are relatively affordable. When you purchase these plans, you get unlimited minutes, messages, and data. So again, the technical limitations aren't as prevalent. It's mainly a matter of access and whether your cell phone provider offers good coverage in your area.

This brings me to my point. We're seeing this cycle again with AI. Large Language Models today are expensive to run, and I don't mean just in financial terms. You pay per call, access to these models can be throttled at times, there are only so many GPUs in a data center to run them on, you pay a latency penalty going over the network on top of the cost of inference, and you also have token limits. About a year after the release of ChatGPT, access to these models has expanded, the models have gotten more capable (GPT-4V and GPT-4 with 128k tokens), and it's become cheaper to run them. At the same time, computing is becoming more efficient and smaller "open-source" models are becoming more capable. Today, we have to be creative with how we use our resources, but we're only a year in. If we extrapolate how these cycles play out, in 30 years we'll forget how challenging it was to use and access early AI technologies, and AI will become more abundant, like the internet, cell phones, and other technologies before it.

