
Hi, I'm Luis 👋

Latest updates from across the site

Reshare

Agents.md

A simple, open format for guiding coding agents

Think of AGENTS.md as a README for agents: a dedicated, predictable place to provide the context and instructions to help AI coding agents work on your project.

Bookmark

Reclaim Control

RECLAIM CONTROL over your digital life, devices, services and data.

You have the right to control your data, devices, and services. We coordinate, develop and foster communities of Reclaimers who want digital independence. Together, let's reclaim the right to a technical future of data sovereignty, digital autonomy, responsible governance, and collective care online for all.

Bookmark

The Opt Out Project

I have built up a treasure trove of knowledge, devices, and services aimed at keeping me and my family away from the evils of data-collecting algorithmic systems.

That's why I started this site. I've decided to put information about these tools and alternatives in one spot for others to find and use. Many of these tools are reviewed in the tech press, others are more bespoke but there is information out there if you know where to look. My goal is to bring it all together in one place, to guide you through the process, and help you opt out as well.

...there are alternatives out there, better visions of what our lives could be like. There are systems that don't require capturing and storing all our data, mining us exploitatively for profit, and serving us the kinds of misinformation and advertising that has us hating our neighbors.

Blog Post

Mobile-First Static Site Publishing: Discord Bot Pipeline via Azure and GitHub

Ever since I published my first note (microblog post) on my website, I've wanted a way to quickly publish while on the go. Unfortunately, I never found a good solution.

Because my website is statically generated and the source is hosted on GitHub (check out the colophon for more details), there is no backend for me to talk to. At the same time, I didn't want to build an entire backend to support my website, because I want to keep things as lean and cost-efficient as possible.

Since my posts are just frontmatter and Markdown, I use VS Code as my editor. For some time, back when I had a Surface Duo, I authored posts from mobile using the github.dev experience. On two screens, while not ideal, it was manageable. After switching devices (sadly, the Surface Duo stopped receiving security updates) and moving first to a dumbphone and later to a single-screen smartphone, that workflow was no longer feasible.

At that point, I resorted to sending messages to myself via Element. Each message contained a link I wanted to check out later. Once I was on my laptop, I'd review the link and, if I wanted to post about it on my website, do so then.

That process, while it worked, didn't scale. In part that's a feature, because it meant I spent more time digesting the content and writing a thoughtful article. However, it kept me from sharing in the moment, and there were posts that never got authored and bookmarks that never got captured because the link eventually got lost in the river of other links.

Basically, I wanted to replicate the instant posting that social media gives you, but on my own site.

That led me to do some thinking and requirements gathering around the type of experience I wanted to have.

When it came to requirements, I focused more on the workflow and experience than on technical details.

Here is a list of those solution requirements:

  • Mobile is the primary publishing interface. Desktop publishing is a nice-to-have.
  • Be as low-friction as sharing a link via Element or posting on social media
  • Doesn't require implementing my own client or frontend
  • Doesn't require me to use existing micropub clients
  • Handles short-form and media posts supported by my website
    • Notes
    • Responses
      • Repost
      • Reply
      • Like
    • Bookmark
    • Media
      • Image
      • Audio (not used as often but technically supported)
      • Video (not used as often but technically supported)
  • Low-cost

For years, I struggled to actually implement this system. The main part that gave me pause was the constraint of neither implementing my own client nor relying on existing micropub clients.

Eventually, I just accepted that it might never happen.

One day, it hit me: if the notes-to-self Element workflow worked so well, why not use a chat client as the frontend for publishing? At minimum, it could serve as the capture system that formats the content into a post and queues it for publishing on GitHub. I'd seen Benji do something similar with his micropub endpoint.

While I could've used Element since that's my preferred platform, I've been contemplating no longer hosting my own Matrix server. So if I went through with this, I wanted a platform I wouldn't feel bad about having invested time in if that chat client went away.

That left Discord as the next best option, primarily because of its bot support and because it works across mobile and desktop.

In the end, the solution ended up being fairly straightforward.

More importantly, with the help of AI, I wrote none of the code.

Using Copilot and Claude Sonnet 4, I was able to go from idea to deployment in one to two days. At that point, the solution supported all of the post types except media, since I hadn't yet figured out the best way to upload media through Discord. Figuring that out, implementing it, and deploying it took another day or two.

Since I wanted my solution to be as low-cost as possible, serverless seemed like a good option: I only pay for compute when it's actually being used, which in my case can be infrequent. I don't need the server running 24/7, or even to be powerful. However, I didn't want to write my system as an Azure Function; I wanted the flexibility of deploying to a shared VM or a container. A VM, though, wasn't an option since it would be running 24/7. Keeping all of that in mind, my choice narrowed down to Azure Container Apps, which gave me exactly the characteristics I was looking for: serverless containers.

Once that decision was made, I used Copilot to figure out how to optimize my container image to be space- and resource-efficient and, while I was at it, to find the right incantations to get the container deployed to Azure Container Apps.

All in all, the solution had been staring me in the face: I already had a workflow that mostly worked, and it just needed some optimization. With the help of AI, I was able to quickly build and deploy something I'd been ruminating over for years.

The workflow for publishing is as follows (a rough code sketch of the bot follows the list):

  1. Invoke the bot in Discord to capture my input, using the /post slash command and the respective post type.

    Using slash commands to invoke the Discord publishing bot

  2. Provide the post details. For media posts, I can provide an attachment, which gets uploaded to Azure Blob Storage.

    A modal in Discord with note post fields filled in

  3. The bot creates a branch and PR in my repo with the post content.

  4. Logged into GitHub from my phone, I review the PR and, if everything looks good, merge it, which kicks off my GitHub Actions workflow to build and publish the site, including the new post.

  5. Post displays on my website.
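My actual implementation is still private (and, as mentioned, AI-generated), so here's just a minimal sketch of the shape of that flow, in Python with discord.py and PyGithub, which isn't necessarily the stack I used. The repo name, file path, and frontmatter fields are placeholders. Media posts would additionally upload the Discord attachment to Azure Blob Storage and reference the blob URL from the post; that part is omitted for brevity.

```python
# Minimal sketch of the flow: /post -> modal -> branch + PR on GitHub.
# Assumes discord.py 2.x and PyGithub; the repo name, file path, and
# frontmatter fields below are placeholders, not my actual setup.
import asyncio
import datetime
import os

import discord
from discord import app_commands
from github import Github

REPO_NAME = "user/website"  # placeholder

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)


def create_post_pr(title: str, body: str) -> str:
    """Commit a frontmatter + Markdown file to a new branch and open a PR."""
    repo = Github(os.environ["GITHUB_TOKEN"]).get_repo(REPO_NAME)
    slug = title.lower().replace(" ", "-")
    branch = f"post/{slug}"
    main = repo.get_branch("main")
    repo.create_git_ref(ref=f"refs/heads/{branch}", sha=main.commit.sha)

    # Posts are just frontmatter + Markdown.
    date = datetime.datetime.now(datetime.timezone.utc).isoformat()
    content = f'---\ntitle: "{title}"\ndate: {date}\n---\n\n{body}\n'
    repo.create_file(
        path=f"posts/notes/{slug}.md",  # placeholder path
        message=f"Add note: {title}",
        content=content,
        branch=branch,
    )
    pr = repo.create_pull(
        title=f"Add note: {title}",
        body="Posted via the Discord bot",
        head=branch,
        base="main",
    )
    return pr.html_url


class NoteModal(discord.ui.Modal, title="New note"):
    note_title = discord.ui.TextInput(label="Title")
    note_body = discord.ui.TextInput(label="Content", style=discord.TextStyle.paragraph)

    async def on_submit(self, interaction: discord.Interaction):
        # Acknowledge right away; PR creation can take a few seconds.
        await interaction.response.defer(ephemeral=True)
        url = await asyncio.to_thread(
            create_post_pr, str(self.note_title), str(self.note_body)
        )
        await interaction.followup.send(f"PR ready for review: {url}")


@tree.command(name="post", description="Publish a post to my site")
@app_commands.describe(post_type="note, response, bookmark, or media")
async def post(interaction: discord.Interaction, post_type: str):
    if post_type == "note":
        await interaction.response.send_modal(NoteModal())
    else:
        # Responses, bookmarks, and media would use their own modals (omitted).
        await interaction.response.send_message(
            f"'{post_type}' isn't wired up in this sketch.", ephemeral=True
        )


@client.event
async def on_ready():
    await tree.sync()  # register the slash commands with Discord


client.run(os.environ["DISCORD_TOKEN"])
```

The key design point is that the bot never publishes directly: it only opens a PR, so the merge step stays as the human review gate before anything hits the live site.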

The solution is not perfect.

One of the problems I've run into is cold start. Since I scale the solution down to zero when it's not in use to save on costs, the first invocation of the bot fails. I have to give it a few seconds and retry. It's usually only about five seconds, so it's not a huge issue, but it does add some friction.

Overall I'm happy with my solution but there are a few improvements I'd like to make.

  • Open-source the repo - Currently the repo is private since it was all AI-generated. Because the system is already in production and its processes are documented, I need to do a more thorough pass to make sure no secrets or credentials are checked in or documented anywhere.
  • Improve UX - Discord limits modal fields to 5. Therefore, I'm playing around with the right balance between how much of the input should come from slash commands and how much should come from the modal.
  • Expand supported post types - I'd like to expand the number of post types supported by my publishing client. Reviews and RSVPs are good examples: reviews I already support on my website, but RSVPs I don't yet. I'd also have to fix my Webmentions, which have been broken since I upgraded my website.
  • Make it generator-agnostic - Currently this only works for my website. With a few tweaks and some refactoring, I think I can get the project to a place where it works with other popular static site generators.
  • One-click deployment - Currently the solution is packaged up as a container so it can be deployed from anywhere. I want to make it even simpler to deploy. One click if possible.

Note

Hello world from the new site

Posting from my brand new redesigned website.

I worked on it for about a month, so I plan on doing a longer writeup on what has changed.

There are still a few things that are broken, but for the most part, I'm happy with the progress, and the changes that remain are incremental.

There's a ton of cleanup left as well, but again, that's not a blocker to publishing the site.

Blog Post

IndieWeb Create Day - July 2025

Since it was a holiday weekend in the U.S. that kind of snuck up on me, I found myself with nothing planned on a Saturday. So I chose to spend it creating stuff for my website with the IndieWeb community during IndieWeb Create Day.

Over the last few months I've been overthinking my website redesign, and while I've made several attempts at it, I've never been satisfied with the outcome. I end up throwing away all the progress I've made and going back to the drawing board.

Yesterday, I decided not to let perfect be the enemy of good. The approach I took was to build a simple piece of functionality outside of my website; how I integrate it into the site is a future-me problem. I wanted to work from a place of creativity and complete freedom, thinking about what could be rather than what is.

With that in mind, I set out to sketch how I want to create and render media (image, audio, video) posts. The approach I took combines front-matter YAML with custom Markdown media extensions. Front-matter YAML is something I already use on my website and want to keep using; however, in contrast to my current site, I kept the front-matter simple, with only a basic amount of information. The actual post content is handled by my custom Markdown extension, which uses YAML-like syntax to define media content (a hypothetical example follows).

What's great about this is that it's composable: once I got one media type working, the rest for the most part "just worked". I could even mix different media types within the same post with no additional work or code changes required. Once I had the skeleton, it was all about refactoring, documentation, finishing touches, and vibe-coding some CSS, which Claude did a relatively good job with given the aesthetic I was going for.
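To make that concrete, here's a hypothetical example of what such a post might look like. The field names and the block delimiter are illustrative only; the exact syntax lives in the repo linked below.

```markdown
---
title: "Morning hike"
date: 2025-07-05
---

Some regular Markdown commentary about the hike.

:::media
- type: image
  url: /media/trailhead.jpg
  alt: A view of the trailhead at sunrise
- type: audio
  url: /media/birdsong.mp3
:::
```

Because each media item is just another entry in the list, mixing types within one post composes naturally.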

Overall, I'm happy with the end result.

A screenshot of a website post containing an image and audio player

For more details, you can check out the repo.

At some point, I want to integrate these media posts into my static site generator, but for the time being there are other kinds of posts, such as reviews and RSVPs, that I want to design and eventually support on my website as well. I liked the approach I took this time around because it gave me the freedom to explore possibilities rather than constrain my creativity to what I've already built, so I think I'll keep doing the same for subsequent post types.

At the end of the day, it was nice seeing everyone else's projects. My favorite one was Cy's recipe website. I want to be like them when I grow up 🙂.

Blog Post

FediForum Day One Recap

Just wrapped up a successful first day of FediForum.

The vibes and energy were high. Tons of great conversations and projects around the social web.

A few emerging themes I noticed:

  • Identity
  • Portability / Interoperability
  • Feeds
  • Commerce

Ian Forrester kicked us off with his Public Service & The Fediverse keynote (Slides).

One of the ideas that struck a chord was public service integrated into the fediverse. More specifically, what sparked my interest was the idea that publishing and social shouldn't be two separate things, following the POSSE principle from the IndieWeb: you publish on your own site and it's syndicated elsewhere.

This was interesting enough to me that I even hosted a session on the topic; I think it was called Tightening the Loop between CMS and the Fediverse. It was my first unconference, so I appreciated the way the agenda was built: announce your topic, see whether there's interest, put it on the agenda, chat with fellow participants. Super easy.

These are each huge topics, but for the purposes of this post, I'm lumping them together.

Bounce (https://bounce-migrate.appspot.com/) is one of the projects aiming to make portability easy. What's so interesting is that they're making it easy to migrate across protocols: if you're on one network like ATProto (Bluesky), migrating to the Fediverse should be relatively seamless with https://bounce.so.

Some great discussions that emerged on the topic as well include:

  • Reputation - How do you build a web of trust?
  • Compartmentalization and Deduplication - A single identity or multiple identities? When "following" someone, which of their feeds takes priority?

Talk of feeds was everywhere. At one point during the conference, I made a note to myself:

It's amazing how big the feeds theme is. Feed ownership, customization, and sharing. All powered by open protocols.

  • Bonfire releases 1.0 - Congrats to the Bonfire team on this milestone. I haven't tried Bonfire myself, but the Circles feature caught my attention; it reminded me of Google+.
  • Surf.Social is now in beta - As an avid user and curator of RSS feeds, I'd heard about Surf before but hadn't really looked into it. The beta release was announced at the conference and I was quickly able to sign up and download it. Kudos to the team on this milestone, and thanks for being so responsive to my request to join the beta; I spent almost no time on the waitlist. Once I've had a chance to try it out and get familiar with it, I'll share some thoughts.
  • Channels from the folks at Newsmast Foundation looks like an interesting way to curate and customize feeds. Bring Your Own Timeline Algorithm uses semantic search to give you the power of algorithmic feeds while keeping them under your control. Cool use of AI.

There were a few unconference sessions on the topic as well.

It was great to see folks talking about enabling creators to earn a living on open platforms and the social web.

I believe Bandwagon.fm showed off an implementation of a payments and subscription system built on top of Emissary, a social web toolkit.

Here's a list of other links and projects I was exposed to during the conference.

As always, Cory Doctorow was a great way to close out the first day. I even learned a new term, tron-pilled, which means that, as the creator of a platform, you're on the side of the users.

Looking forward to tomorrow's sessions!

Blog Post

How do I keep up with AI?

This question comes up a lot in conversations. The short answer? I don’t. There’s just too much happening, too fast, for anyone to stay on top of everything.

While I enjoy sharing links and recommendations, I realized that a blog post might be more helpful. It gives folks a single place they can bookmark, share, and come back to on their own time, rather than having to dig through message threads where things inevitably get lost.

That said, here are some sources I use to try and stay informed:

  • Newsletters are great for curated content. They highlight the top stories and help filter through the noise.
  • Blogs are often the primary sources behind those newsletters. They go deeper and often cover a broader set of topics that might not make it into curated roundups.
  • Podcasts serve a similar role. In some cases they provide curation like newsletters; in others, deep dives like blogs. Best of all, you can tune in while on the go, making them a hands-free option.

For your convenience, if any of the sources (including podcasts) I list below have RSS feeds, I’ve included them in my AI Starter Pack, which you can download and import into your favorite RSS reader (as long as it supports OPML file imports).
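If you're curious what's inside one of those files: OPML is just XML, a nested list of outline elements whose xmlUrl attributes point at feeds. As a rough sketch (the filename here is a placeholder), a few lines of Python are enough to list the feeds in a starter pack:

```python
# List the feed URLs contained in an OPML file (filename is a placeholder).
import xml.etree.ElementTree as ET

tree = ET.parse("ai-starter-pack.opml")
for outline in tree.iter("outline"):
    feed_url = outline.get("xmlUrl")  # present on feed entries, absent on folders
    if feed_url:
        print(outline.get("title") or outline.get("text"), "->", feed_url)
```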

If you have some sources to share, send me an e-mail. I'd love to keep adding to this list! If they have a feed I can subscribe to, even better.

I pride myself on being able to track down an RSS feed on just about any website, even if it's buried or not immediately visible. Unfortunately, I haven't found a feed URL for either OpenAI or Anthropic, which is annoying.

OpenAI and Anthropic, if you could do everyone a favor and drop a link, that would be great.

UPDATE: Thanks to @m2vh@mastodontech.de for sharing the OpenAI news feed.

I know I could use one of those web-page-to-RSS converters, but I'd much rather have an official link directly from the source.

Now that I’ve got you here...

Let’s talk about the best way to access all these feeds. My preferred and recommended approach is using a feed reader.

When subscribing to content on the open web, feed readers are your secret weapon.

RSS might seem like it's dead (it's not, yet). In fact, it's the reason you often hear the phrase, “Wherever you get your podcasts.” But RSS goes beyond podcasts. It's widely supported by blogs, newsletters, and even social platforms like the Fediverse (Mastodon, PeerTube, etc.) and Bluesky. It's also how I'm able to compile my starter packs.

I've written more about RSS in Rediscovering the RSS Protocol, but the short version is this: when you build on open standards like RSS and OPML, you’re building on freedom. Freedom to use the tools that work best for you. Freedom to own your experience. And freedom to support a healthier, more independent web.

Blog Post

Vibe-Specing - From concepts to specification

Code generation is a common use case for AI. What about the design process that comes before implementation? Personally, I've found that AI excels not just at coding, but also helping formalize abstract ideas into concrete specifications. This post explores how I used AI-assisted design to transform a collection of loosely related concepts into a technical specification for a new system made up of those concepts.

Generally, I've had mixed success with vibe-coding (the practice of describing what you want in natural language and having AI generate the corresponding code). However, it's something that I'm constantly working on getting better at. Also, with tooling integrations like MCP, I can ground responses and supplement my prompts using external data.

What I find myself being more successful with is using AI to explore ideas and then formalizing those ideas into a specification. Even in the case of vibe-coding, what you're doing with your prompts is building a specification in real-time.

I'd like to think that eventually I'll get to the vibe-coding part, but before diving straight into the code, I'd rather spend time in the design phase. Personally, it's the part I find most fun, because you can throw wild ideas at the wall; it's not until you implement them that you actually validate whether they're practical.

The result of my latest vibe-specing adventure is what I'm calling the InterPlanetary Knowledge System (IPKS).

Lately, I've been thinking a lot about knowledge. Some concepts that have been in my head are those of non-linear publishing (creating content that can be accessed in any order with multiple entry points, like wikis or hypertext) and distributed cognition (the idea that human knowledge and cognitive processes extend beyond the individual mind to include interactions with other people, tools, and environments). Related to those concepts, I've also been thinking about how digital gardens (personal knowledge bases that blend note-taking, blogging, and knowledge management in a non-linear format) and Zettelkasten (a method of note-taking where ideas are captured as atomic notes with unique identifiers and explicit connections) are ways to capture and organize knowledge.

One other thing that I'm amazed by is the powerful concept of a hyperlink and how it makes the web open, decentralized, and interoperable. When paired with the semantic web (an extension of the web that provides a common framework for data to be shared across applications and enterprises), you have yourself a decentralized knowledge base containing much of the world's knowledge.

At some point, IPFS (InterPlanetary File System, a protocol designed to create a permanent and decentralized method of storing and sharing files) joined this pool of concepts I had in my head.

These were all interesting concepts individually, and I knew there were connections between them, but I couldn't bring them together cohesively. That's where AI-assisted specification design came in.

Below is a summary of the collaborative design interaction with Claude Sonnet 3.7 (with web search) that eventually led to the generation of the IPKS specifications. I haven't combed through them in great detail, but what they're proposing seems plausible.

Overall, I'm fascinated by this interaction. Whether or not IPKS ever becomes a reality, the process of using AI to transform abstract concepts into concrete specifications seems like a valuable and fun design approach that I'll continue to refine and include as part of my vibe-coding sessions.


Our conversation began with exploring IPFS (InterPlanetary File System) and its fundamental capabilities as a content-addressed, distributed file system. We recognized that while IPFS excels at storing and retrieving files in a decentralized manner, it needed extensions to support knowledge representation, trust, and semantics.

Key insights from this stage (a toy sketch illustrating content addressing follows the list):

  • IPFS provides an excellent foundation with content addressing through CIDs
  • Content addressing enables verification but doesn't inherently provide meaning
  • Moving from document-centric to idea-centric systems requires additional layers
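As a toy illustration of the second point: in a content-addressed store, the address is derived from the bytes themselves, so verification falls out for free, but nothing about the address conveys meaning. (A bare SHA-256 digest stands in here for a real multihash-encoded IPFS CID.)

```python
# Toy content-addressed store: the key is derived from the content itself.
# Real IPFS CIDs are multihash/multibase encoded; a bare SHA-256 hex digest
# is used here just to illustrate the idea.
import hashlib

store: dict[str, bytes] = {}

def put(content: bytes) -> str:
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    content = store[address]
    # Verification falls out of the addressing scheme for free...
    assert hashlib.sha256(content).hexdigest() == address
    return content

cid = put(b"Knowledge wants to be linked.")
print(cid, get(cid))
# ...but nothing about the address tells you what the content means.
```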

We explored established knowledge management approaches, particularly:

Zettelkasten

The Zettelkasten method contributed these important principles:

  • Atomic units of knowledge (one idea per note)
  • Explicit connections between ideas
  • Unique identifiers for each knowledge unit
  • Emergent structure through relationship networks

Digital Gardens

The Digital Garden concept provided these insights:

  • Knowledge in various stages of development
  • Non-linear organization prioritizing connections
  • Evolution of ideas over time
  • Public visibility of work-in-progress thinking

These personal knowledge management approaches helped us envision how similar principles could work at scale in a distributed system.

When we proposed replacing "IPFS" with "IPKS" (changing File → Knowledge), we recognized the need to define what makes knowledge different from files. This led to identifying several key requirements:

  1. Semantic meaning - Knowledge needs explicit relationships and context
  2. Provenance and trust - Knowledge requires verifiable sources and expertise
  3. Evolution - Knowledge changes over time while maintaining continuity
  4. Governance - Knowledge exists in various trust and privacy contexts

These requirements shaped the layered architecture of the specifications.

Our discussions about distributed cognition highlighted how thinking processes extend beyond individual minds to include:

  • Interactions with other people
  • Cultural artifacts and tools
  • Physical and digital environments
  • Social and technological systems

This concept directly influenced the IPKS design by emphasizing:

  • Knowledge as a collective, distributed resource
  • The need for attribution and expertise verification
  • The value of connecting knowledge across boundaries
  • The role of tools in extending human cognition

Similarly, non-linear publishing concepts shaped how we approached knowledge relationships and navigation in IPKS, moving away from sequential formats toward interconnected networks of information.

Our exploration of complementary technologies led to incorporating the following (a sketch of how they might come together in a single knowledge node follows these three subsections):

Decentralized Identifiers (DIDs)

DIDs provided the framework for:

  • Self-sovereign identity for knowledge contributors
  • Cryptographic verification of authorship
  • Persistent identification across systems
  • Privacy-preserving selective disclosure

Verifiable Credentials (VCs)

Verifiable Credentials offered mechanisms for:

  • Expertise validation without central authorities
  • Domain-specific qualification verification
  • Credential-based access control
  • Trust frameworks for knowledge contributors

Semantic Web (RDF/OWL)

Semantic Web standards influenced:

  • Relationship types between knowledge nodes
  • Ontologies for domain knowledge representation
  • Query patterns for knowledge discovery
  • Interoperability with existing knowledge systems
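To give a flavor of how these pieces could fit together, here's a purely illustrative sketch of what a single IPKS knowledge node might contain. None of this is quoted from the generated specs; the field names and placeholder values are hypothetical.

```python
# Purely illustrative: a hypothetical IPKS knowledge node combining
# content addressing (CIDs), identity (DIDs), credentials (VCs), and
# typed semantic relationships. Field names are not from the actual specs.
knowledge_node = {
    "cid": "bafy...",             # content address of this node (placeholder)
    "author": "did:key:z6Mk...",  # DID of the contributor (placeholder)
    "credentials": ["..."],       # references to verifiable credentials
    "claim": "Atomic notes with explicit links scale beyond one mind.",
    "links": [
        # Typed, semantic relationships rather than bare hyperlinks
        {"rel": "supports", "target": "bafy...evidence"},
        {"rel": "supersedes", "target": "bafy...earlier-version"},
    ],
}
```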

Our conversation about supply chain management provided a concrete use case that helped ground the specifications in practical application. This example demonstrated how IPKS could address real-world challenges:

  • Material Provenance: Using DIDs and verifiable credentials to establish trusted material sources
  • Cross-Organization Collaboration: Enabling knowledge sharing while respecting organizational boundaries
  • Regulatory Compliance: Creating verifiable documentation of compliance requirements
  • Expertise Validation: Ensuring contributors have appropriate qualifications for their roles
  • Selective Disclosure: Balancing transparency with competitive confidentiality

This business context helped shape the Access Control & Privacy specification in particular, highlighting the need for nuanced governance models.

As we moved from abstract concepts to specifications, several technical considerations emerged:

  1. Building on IPLD: Recognizing that InterPlanetary Linked Data (IPLD) already provided foundational components for structured, linked data in content-addressed systems

  2. Modular Specification Design: Choosing to create multiple specifications rather than a monolithic standard to enable incremental implementation and adoption

  3. Backward Compatibility: Ensuring IPKS could work with existing IPFS/IPLD infrastructure

  4. Extensibility: Designing for future enhancements like AI integration, advanced semantic capabilities, and cross-domain knowledge mapping

The IPKS specifications represent a synthesis of our conceptual exploration, grounded in:

  • Established knowledge management practices
  • Decentralized web technologies
  • Real-world business requirements
  • Technical feasibility considerations

Moving from concept to implementation will require:

  1. Reference implementations of the core specifications
  2. Developer tools and libraries to simplify adoption
  3. Domain-specific extensions for particular use cases
  4. Community building around open standards

By building on the combined strengths of IPFS, DIDs, VCs, and semantic web technologies, IPKS creates a framework for distributed knowledge that balances openness with trust, flexibility with verification, and collaboration with governance.

Review

High Priest

High Priest cover
by Timothy Leary
Read
Rating: 4.0/5