runs 100% on your machine

Your timeline, without the noise.

xrai runs a local LLM against every tweet and hides the garbage. Nothing leaves your machine. No cloud, no API keys, no accounts.

X is 90% noise now.

Engagement bait, clickbait videos, recycled takes, crypto pumps, rage posts. Scrolling it all costs you hours a week and trashes your attention.

Feed without xrai

"this changed my life" (thread)
RT if you agree
New MCP server released
wait for it 😂 [video]
Your morning routine sucks
GPT-5 speculation #4,812
Postgres 16 replication tips
$PEPE to the moon

Feed with xrai

"this changed my life" (thread) noise
RT if you agree prefilter
New MCP server released signal
wait for it 😂 [video] prefilter
Your morning routine sucks noise
GPT-5 speculation #4,812 noise
Postgres 16 replication tips signal
$PEPE to the moon prefilter

Four dimensions. One local model.

Every tweet goes through a fast pipeline: regex kills obvious junk, and the LLM scores the rest on four axes. A score of 3 or 4 out of 4 means signal; 0 to 2 means hide.

1. Tweet rendered in DOM
2. Reply or retweet? Has tech keyword? Already cached? ↓ if no
3. Regex prefilter catches obvious spam, NSFW, clickbait-video, crypto pump ↓ passes
4. Ollama scores tweet on Novelty / Specificity / Density / Authenticity
5. Score 3-4 → SHOW · Score 0-2 → HIDE
Novelty (N · 0 or 1)
New information or the 412th recycled take?

Specificity (S · 0 or 1)
Concrete numbers and details, or vague claims?

Density (D · 0 or 1)
High insight-to-word ratio, or padded word count?

Authenticity (A · 0 or 1)
Genuine sharing, or engagement farming?
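The verdict is just the sum of four binary axis scores against a threshold. A minimal sketch of that decision (function and field names are illustrative, not xrai's actual code):

```javascript
// Sum four 0-or-1 axis scores into a show/hide verdict.
// The object shape and threshold mirror the description above;
// names are hypothetical, not taken from the xrai source.
function verdict(axes) {
  const { novelty, specificity, density, authenticity } = axes;
  const score = novelty + specificity + density + authenticity;
  return score >= 3 ? "SHOW" : "HIDE"; // 3-4 → signal, 0-2 → hide
}
```

So `verdict({ novelty: 1, specificity: 1, density: 1, authenticity: 0 })` shows the tweet, while a tweet scoring 1 on only two axes gets hidden.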

What it actually does.

No accounts, no telemetry, no cloud. The whole thing is ~2,000 lines of vanilla JS running in a Chrome extension and a local Ollama instance.

[01]

100% local

All classification runs through Ollama on your machine. Zero API calls, zero accounts, zero billing.
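A classification call is a single HTTP request to Ollama on localhost. A sketch against Ollama's `/api/generate` endpoint; the prompt wording and the assumption that the model replies in JSON are illustrative, not xrai's actual implementation:

```javascript
// Build the request body for Ollama's /api/generate endpoint.
// The prompt wording here is a hypothetical example.
function buildRequest(tweetText) {
  return {
    model: "phi4-mini",
    prompt:
      "Score this tweet 0 or 1 on novelty, specificity, density, " +
      "authenticity. Reply as JSON.\n\nTweet: " + tweetText,
    stream: false, // one complete response instead of a token stream
  };
}

// Classify a tweet against the local Ollama server. Nothing leaves the machine.
async function classify(tweetText) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequest(tweetText)),
  });
  const data = await res.json();
  return JSON.parse(data.response); // assumes the model replied with JSON scores
}
```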

[02]

Tech-tuned

Safelist for AI, startup, and engineering content. The model has been benchmarked on 89 real tweets from founders and devs.

[03]

Result cache

Scroll back up? Cached verdict applied instantly. No re-classification on the same tweet ever.
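The cache can be as simple as a map from tweet ID to verdict, consulted before any model call. A hypothetical in-memory sketch (xrai's real cache may also persist across sessions):

```javascript
// Hypothetical verdict cache keyed by tweet ID.
const verdictCache = new Map();

function cachedClassify(tweetId, classifyFn) {
  if (verdictCache.has(tweetId)) {
    return verdictCache.get(tweetId); // scroll back up: instant, no model call
  }
  const verdict = classifyFn(); // e.g. the Ollama scoring call
  verdictCache.set(tweetId, verdict);
  return verdict;
}
```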

[04]

Regex prefilter

Eleven categories catch obvious junk instantly. No model call wasted on "RT if you agree".
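A prefilter like this is just a list of patterns checked before the model ever runs. A sketch with three illustrative patterns; the actual eleven categories are not listed in this document:

```javascript
// Illustrative prefilter patterns; not xrai's actual eleven categories.
const PREFILTERS = [
  /\bRT\s+(if|and)\b/i,                 // engagement bait
  /\$[A-Z]{2,6}\b.*\b(moon|pump)\b/i,   // crypto pump
  /wait for it/i,                       // clickbait video
];

// Returns "prefilter" for obvious junk, null if the tweet should go to the model.
function prefilter(text) {
  return PREFILTERS.some((re) => re.test(text)) ? "prefilter" : null;
}
```

A hit costs microseconds instead of a few hundred milliseconds of model inference.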

[05]

Self-improving

Classification logs stream to a local collector. Run the improve script to generate prompt fixes from misclassifications.
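A log entry for one classification might look like this; the field names are assumptions, since the source does not specify the log schema:

```javascript
// Build a record of one classification for the local collector.
// Field names are hypothetical; xrai's actual log format may differ.
function logEntry(tweetId, tweetText, scores, verdict) {
  return {
    tweetId,
    text: tweetText,
    scores,         // e.g. { novelty: 1, specificity: 0, ... }
    verdict,        // "SHOW" or "HIDE"
    ts: Date.now(), // when the classification happened
  };
}
```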

[06]

Copy-paste replies

Generate replies in your voice. Copy them. Paste them yourself. xrai never touches X's compose box.

Real numbers. 89 tweets. Apple Silicon.

Models scored against hand-labeled tweets pulled from actual timelines and bookmarks. Pick speed or pick accuracy.

| Model     | Accuracy | Avg speed | Size   | Use when                            |
|-----------|----------|-----------|--------|-------------------------------------|
| phi4-mini | 92%      | 518ms     | 2.5 GB | you want the best signal-noise call |
| gemma2:2b | 88%      | 231ms     | 1.6 GB | you want the fastest feed response  |

$ node benchmarks/benchmark.js

Three steps. About four minutes.

You need a Mac or a machine that can run Ollama. No other setup. No CLI knowledge beyond copying one command.

STEP 01

Install Ollama

Download from ollama.ai. On Mac it auto-starts on login and lives in your menubar.

ollama.ai
STEP 02

Pull a model

phi4-mini for best accuracy, gemma2:2b for best speed. Either works.

$ ollama pull phi4-mini
STEP 03

Load the extension

Open chrome://extensions, turn on Developer mode, click "Load unpacked", pick the extension folder. Visit x.com.

github.com/phuaky/xrai

Not a bot. Not a scraper.

xrai reads the same DOM your browser is already rendering. It hides things with CSS, same as an ad blocker. Here's what it does not do.

  • No API access. Never calls X's API. Only reads your own rendered feed.
  • No automation. Never clicks, likes, follows, or submits forms.
  • No auto-posting. Reply text is generated. You copy and paste it yourself.
  • No scraping. Processes only what you are currently viewing.
  • CSS hide only. Same technique ad blockers use. Widely accepted.
  • User-initiated. Activates when you open x.com. Inactive otherwise.