Taming the Algorithm: Moving from Algorithmic Feeds to Curating my own AI News

AI
RSS
Productivity
Coding Agents
Published

April 11, 2026

Annoyed by the “X is dead” or “have you seen this game-changing technology?” posts? Algorithmic feeds bubble this kind of content to the top on X and LinkedIn, and I found it deeply frustrating. I’m looking for real news and deeper, substantive information, but the platforms optimize for clickbait-driven engagement, and for me the signal-to-noise ratio became unbearable.

A year ago, I decided I was fed up and started looking for alternatives. I want to share the different approaches I tried and how I finally ended up building my own news pipeline. I’m happy with where I landed, and I believe that with today’s coding agents, almost anyone can build something similar to their own tastes.

Three phases from newsletters to custom pipeline

Phase 1: Newsletters

The move to newsletters felt like the obvious first step. You subscribe to what you trust, the content shows up in your inbox, and you’re no longer at the mercy of an algorithm deciding what’s relevant. For a while this worked well.

The catch is that newsletters are diverse. Some are fast-moving digests — TLDR and The Rundown are good examples — where each issue is a handful of short bullets you can get through in five minutes. Others, like AI News, are enormous link dumps, sometimes with dozens of items per issue. And then there are the deeper, more analytical ones from writers like Sebastian Raschka or Christoph Molnar, which require real reading time and benefit from actually sitting down with them. And let’s thank Substack for creating a platform that supports deep content.

The problem for me was context switching. Email isn’t great at signaling effort: a quick five-item digest and a 6,000-word deep dive arrive looking the same. I’d be trying to send out a meeting invite and suddenly have to triage these emails. This became another source of stress, and I never ended up reading the newsletters in the depth they deserved. So while I liked the content, I needed a better way to consume it.

Phase 2: RSS

I loved RSS and Google Reader back in the day. RSS is simple and it works: there’s no algorithm, just everything that was published, in reverse chronological order, so you never miss anything. Luckily, many sites (including Substack) support it, so I was able to find 80% of the content I cared about over RSS.

I found a reader that worked across macOS and iOS (Reeder) and went back to the old-school way of gathering news. It was a relief. I read when I want to read. I don’t get notification pressure, and the feed isn’t full of clickbait. The reading experience is more calming and enjoyable.

The downside is that not every site or newsletter has an RSS feed. What I needed was to convert some existing newsletters, as well as popular sites like Reddit, into feeds of their own. I had known this for a long time, but I was reluctant to spend the time building a parse-to-RSS pipeline.

Phase 3: Building the Pipeline

Enter coding agents. They changed how I approach coding projects by cutting the time from a few weeks down to a few days. Their ease of use motivated me to finally build my own pipeline. I started by focusing on newsletters and then added websites like Reddit and Hacker News. Let me walk you through the design.

When newsletters arrive in my Gmail, I use the filtering capabilities in Gmail to tag them (DailyMe). I then set up a pipeline that runs every 30 minutes to fetch the newsletters and parse them, and the output flows into my RSS feed. The live result is at rajivshah.com/news.
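Under the hood, the feed itself is just XML. As a rough illustration, here is how parsed items could be rendered as a minimal RSS 2.0 document using only Python’s standard library; the item fields and feed metadata below are placeholders, not the pipeline’s actual schema:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from email.utils import format_datetime

def build_rss(items, title="DailyMe", link="https://example.com/news"):
    """Render a list of {'title', 'link', 'published'} dicts as RSS 2.0 XML."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for it in items:
        entry = ET.SubElement(channel, "item")
        ET.SubElement(entry, "title").text = it["title"]
        ET.SubElement(entry, "link").text = it["link"]
        # RSS requires RFC 2822 dates, e.g. "Sat, 11 Apr 2026 00:00:00 +0000"
        ET.SubElement(entry, "pubDate").text = format_datetime(it["published"])
    return ET.tostring(rss, encoding="unicode")

xml = build_rss([{"title": "Example story",
                  "link": "https://example.com/1",
                  "published": datetime(2026, 4, 11, tzinfo=timezone.utc)}])
```

Any standards-compliant reader (Reeder included) can then subscribe to the resulting document.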

The second part of the pipeline handles breaking news. While sites like Reddit and Hacker News have RSS feeds, those feeds quickly become overwhelming because there can be hundreds of posts a day. So I built a piece of the pipeline that surfaces only the most popular content from the subreddits I care about. At some point I could even personalize that ranking further to my specific tastes.

This is something anyone with a coding agent can put together in a few hours. Seriously! You can build this. Here’s the high-level architecture if you want to borrow it.

DailyMe end-to-end pipeline

Ingestion. I use Gmail’s filtering tools to tag any newsletters with a DailyMe label. I also unsubscribed from a lot of newsletters and switched them over to RSS feeds directly, so only about 10% of my newsletter volume actually needs the parsing process. The pipeline ingests those tagged emails every 30 minutes.
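The polling step boils down to selecting messages that carry the DailyMe label and arrived after the last run. A minimal, Gmail-agnostic sketch of that selection logic; the message dictionaries and the checkpoint handling are illustrative assumptions, not the actual Gmail API calls:

```python
from datetime import datetime, timezone

def select_new_messages(messages, label, since):
    """Return messages carrying `label` that arrived after the `since`
    checkpoint, oldest first, so parsing preserves publication order."""
    picked = [m for m in messages
              if label in m["labels"] and m["received"] > since]
    return sorted(picked, key=lambda m: m["received"])

msgs = [
    {"id": "a", "labels": ["DailyMe"],
     "received": datetime(2026, 4, 11, 8, 0, tzinfo=timezone.utc)},
    {"id": "b", "labels": ["Promotions"],
     "received": datetime(2026, 4, 11, 9, 0, tzinfo=timezone.utc)},
    {"id": "c", "labels": ["DailyMe"],
     "received": datetime(2026, 4, 11, 7, 0, tzinfo=timezone.utc)},
]
new = select_new_messages(
    msgs, "DailyMe", datetime(2026, 4, 11, 7, 30, tzinfo=timezone.utc))
# only "a" qualifies: "b" lacks the label, "c" predates the checkpoint
```

The 30-minute scheduler simply calls this with the timestamp of the previous run as the checkpoint.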

Parsing and segmentation. This is the interesting part. A newsletter with one story gets parsed as one entry in the RSS feed. But some newsletters, AI News is the clearest example, are enormous. For these, the pipeline breaks them down into smaller separate posts. The parser uses OpenHands, which calls Claude Sonnet for the LLM parsing, to break these long emails apart into individual stories, each of which gets its own feed entry. When I’m scrolling through my RSS reader, I see discrete items rather than one giant blob I either read in full or skip entirely. Much easier to digest.
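The actual splitting is done by an LLM, but the shape of the task is easy to see with a deterministic stand-in that splits a digest on its headings; the “## ” heading convention here is an assumption for illustration, not what AI News actually uses:

```python
import re

def split_digest(text):
    """Split a long digest into (title, body) stories, assuming each
    story starts on a markdown-style '## ' heading line."""
    stories = []
    # Split at heading lines; the slice drops the preamble before the
    # first story (intro blurbs, sponsor messages, etc.).
    for chunk in re.split(r"(?m)^## ", text)[1:]:
        title, _, body = chunk.partition("\n")
        stories.append({"title": title.strip(), "body": body.strip()})
    return stories

digest = """Intro blurb.

## Model X released
Details about the release.

## Benchmark Y results
Details about the benchmark.
"""
stories = split_digest(digest)
# two discrete feed entries instead of one giant blob
```

The LLM version does the same job on messy real-world HTML, where story boundaries aren’t marked this cleanly.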

Deduplication. Any given piece of AI news tends to get covered by multiple newsletters at once. Without deduplication, the same story about a model release or a benchmark result would show up four or five times in my feed within a few hours. The pipeline canonicalizes URLs and uses title-similarity heuristics to collapse obvious duplicates into a single entry. It’s a simple approach that removes the repetitive coverage that comes from multiple newsletters on breaking news stories.
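A minimal sketch of that dedup logic using only the standard library; the 0.85 similarity threshold is illustrative, not the pipeline’s actual value:

```python
from difflib import SequenceMatcher
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Strip query strings, fragments, and trailing slashes so tracking
    parameters don't make the same link look unique."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path.rstrip("/"), "", ""))

def dedupe(items, threshold=0.85):
    """Collapse items that share a canonical URL or a near-identical title."""
    kept = []
    for item in items:
        url = canonicalize(item["link"])
        is_dup = any(
            canonicalize(k["link"]) == url
            or SequenceMatcher(None, item["title"].lower(),
                               k["title"].lower()).ratio() >= threshold
            for k in kept)
        if not is_dup:
            kept.append(item)
    return kept

items = [
    {"title": "GPT-5 released", "link": "https://example.com/gpt5?utm_source=tldr"},
    {"title": "GPT-5 Released!", "link": "https://example.com/gpt5/"},
    {"title": "New eval benchmark", "link": "https://example.com/bench"},
]
unique = dedupe(items)  # the two GPT-5 items collapse into one
```

The first-seen copy wins, which works well when items arrive in chronological order.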

Ranking. Stories get ranked by a simple mix of recency, how many sources covered them, and a few lightweight heuristics. Topic tags help me filter quickly when I’m looking for a specific kind of item.
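A sketch of what such a scoring function might look like; the exponential-decay half-life and the weighting are illustrative values, not the ones the pipeline actually uses:

```python
from datetime import datetime, timezone

def score(item, now, half_life_hours=12.0):
    """Score = source coverage, decayed by age: a story covered by many
    newsletters outranks a fresher single-source one, until it gets stale."""
    age_hours = (now - item["published"]).total_seconds() / 3600
    recency = 0.5 ** (age_hours / half_life_hours)  # halves every 12h
    return item["source_count"] * recency

now = datetime(2026, 4, 11, 12, 0, tzinfo=timezone.utc)
items = [
    {"title": "Old but widely covered", "source_count": 4,
     "published": datetime(2026, 4, 10, 12, 0, tzinfo=timezone.utc)},
    {"title": "Fresh single-source", "source_count": 1,
     "published": datetime(2026, 4, 11, 11, 0, tzinfo=timezone.utc)},
]
ranked = sorted(items, key=lambda it: score(it, now), reverse=True)
```

Topic tags are orthogonal to the score: they narrow the list, while the score orders it.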

Social sources. On top of the newsletter content, I pull in top posts from Reddit (r/LocalLLaMA and similar) and Hacker News, filtered by score threshold so I’m only seeing things that community engagement has already validated. I’m still adjusting the threshold to get the balance right, so my feed gets enough new top posts to be useful, but not so many that it becomes noise.
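The filtering itself can be as simple as a per-community score cutoff; a single global threshold over-filters small communities and under-filters large ones. A sketch with made-up threshold values:

```python
# Illustrative cutoffs; each community's baseline upvote volume differs.
THRESHOLDS = {"LocalLLaMA": 200, "hackernews": 150}

def filter_posts(posts, thresholds, default=100):
    """Keep only posts whose score clears their community's threshold."""
    return [p for p in posts
            if p["score"] >= thresholds.get(p["community"], default)]

posts = [
    {"title": "Great local model", "community": "LocalLLaMA", "score": 450},
    {"title": "Minor tweak", "community": "LocalLLaMA", "score": 80},
    {"title": "Show HN: DailyMe", "community": "hackernews", "score": 160},
]
top = filter_posts(posts, THRESHOLDS)  # the low-scoring tweak is dropped
```

Tuning the balance I mentioned above is just adjusting these numbers per community.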

Infrastructure. I’m running this using OpenHands Cloud’s built-in Automations, which trigger the pipeline every 30 minutes. OpenHands agents run the Gmail polling, LLM-based parsing, deduplication and then write results to a Neon Postgres database. Then a web frontend reads from the Postgres database and renders the feed. Most of this runs on free tiers, and the overall approach is easy to adapt to a different infrastructure setup.

The whole thing is open source at github.com/rajshah4/dailyme if you want to dig into the details.

What Actually Changed

With algorithmic feeds, the platform decides when and what you should be paying attention to. With this setup, I decide. I open my RSS reader when I have time to read, and I get a feed that reflects sources I’ve deliberately chosen rather than a mix of those sources plus whatever the algorithm decided to surface that day.

The deduplication piece turned out to be more valuable than I expected. I subscribe to eight or ten AI newsletters, and when there’s a major model release or a notable paper, all of them cover it. Before the pipeline, I’d read essentially the same information multiple times with slightly different framing. Now I see it once, and I can tell at a glance how many sources covered it, which itself says something about relative importance.

The parsing of long newsletters into individual items also mattered more than I anticipated. It lets me be selective within a newsletter issue. When AI News drops a 50-item digest, I don’t have to either read all of it or skip the whole thing. Individual items surface in the feed and I can make per-item decisions.

The Larger Point

With the rise of coding agents, all of us can better structure our own information environments. While media companies build for views, you don’t have to consume that way. Feel free to borrow my code, but more importantly, take my ideas. Wishing you all a bit more calm in your lives.