- 11:39 — Commentary on the performance degradation of Claude’s models and the impact of recent updates. @3266miles
- 13:36 — Introduction of a new plugin for Claude Code that enhances its capabilities. @nvk
- 17:37 — Praise for the Qwen 3.6 Plus model’s performance compared to GPT-5.4-Codex. @jpidala
- 17:09 — Reminder to cap your 5090 to 400W for inference to maintain performance. @populartourist
- 18:24 — Personal anecdote about calorie counting and its effects on health. @bryan_johnson
- 19:05 — Reflection on the advancements in AI and the public discourse surrounding it. @PositivFuturist
- 19:07 — Discussion on the inefficiencies in using Claude models. @adxtyahq
- 19:23 — Observations on the nature of relationships and dating. @fvckfemi_
- 20:10 — Introduction of a new knowledge base feature in Claude Code. @alliekmiller
- 20:46 — Comparison of AI coding plans and their usage efficiency. @weswinder
- 22:39 — Declaration that AGI is already here but not evenly distributed. @pmarca
- 23:31 — Speculation on the potential of LLMs if they activated all their weights. @Altimor
AI & Tech Developments - Apr 06
📱 Source Tweets
I've been extremely hesitant to call out @AnthropicAI for anecdotal degradation in Claude's performance, but whatever they've shipped to the harness in the last few releases has absolutely lobotomised it.
— @3266miles
Had to be done, surprisingly viable. Made it as a claude-code plugin... /install-plugin github:nvk/llm-wiki /wiki init /wiki:research "your topic" --sources 10
— @nvk
Qwen 3.6 Plus is an incredible model. Well done to the @Alibaba_Qwen team. It blows GPT-5.4-Codex out of the water for agentic tasks / @openclaw , is 3X faster, and is currently offered free through @OpenRouter.
— @jpidala
Reminder to cap your 5090 to 400W for inference. Qwen3.5 27B runs at virtually the same speed. I suspect Gemma 4 31B does too. Running sub-60°C/140°F.
— @populartourist
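A power cap like the one described can be set with NVIDIA's `nvidia-smi` utility. A minimal sketch, assuming a Linux host with the NVIDIA driver installed; the GPU index 0 and the 400 W figure follow the tweet, not any official recommendation:

```shell
# Enable persistence mode so the setting survives driver idle unloads
sudo nvidia-smi -pm 1

# Cap GPU 0 to 400 W (must fall within the card's supported min/max range)
sudo nvidia-smi -i 0 -pl 400

# Verify the enforced limit
nvidia-smi -i 0 --query-gpu=power.limit --format=csv
```

The limit resets on reboot unless reapplied (e.g. via a systemd unit), and inference workloads are often memory-bandwidth-bound, which is why a lower power ceiling can cost little throughput.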
I've instructed my chef to make meals that are exactly 700 calories. But they're more than 700 calories. You know how I know? My resting heart rate is elevated by 3 beats per minute.
— @bryan_johnson
Still can't believe computers can talk now. Most unbelievable sci-fi development ever, and most of the discourse is people poking holes in it.
— @PositivFuturist
anthropic isn't the only reason you're hitting claude code limits. one guy audited ~900 sessions (18,903 turns) and found most of the waste was on his side
— @adxtyahq
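An audit like the one described can be approximated from exported session logs. A minimal sketch, assuming a JSONL file where each turn records its token count and whether it was wasted (a retry, correction, or dead end); the `tokens` and `wasted` field names are hypothetical, not Claude Code's actual log schema:

```python
import json

def audit(path: str) -> dict:
    """Tally tokens spent on wasted turns vs. total across a session log.
    Assumes one JSON object per line with 'tokens' (int) and 'wasted'
    (bool) fields -- a hypothetical schema for illustration."""
    total = wasted = 0
    with open(path) as f:
        for line in f:
            turn = json.loads(line)
            total += turn["tokens"]
            if turn["wasted"]:
                wasted += turn["tokens"]
    return {
        "total": total,
        "wasted": wasted,
        "waste_pct": wasted / total if total else 0.0,
    }
```

Running something like this over your own history is how you find out whether the limit pressure comes from the provider or from your side of the conversation.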
realizing real lovers don't even do talking stages. all it takes is one good date and suddenly y'all are together everyday after that
— @fvckfemi_
I'm a knowledge base MONSTER in Claude Code right now. Introducing: Claudeopedia. 1) I took @karpathy's 'llm-wiki' idea doc and 2) Combined it with the /last30days skill and 3) Added a /wiki skill
— @alliekmiller
this is literally the best deal on the market rn for an ai coding plan:
- $10/month
- kimi k2.5
- minimax m2.7
- glm 5

wayyyy more usage than $20 plans with big labs
— @weswinder
I'm calling it. AGI is already here – it's just not evenly distributed yet.
— @pmarca
Apparently an LLM only ever activates a small % of its weights at any given time. Imagine the power if they activated all the weights. We'd have AGI already
— @Altimor
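The "small % of its weights" observation describes sparse mixture-of-experts models, where a router sends each token to only a few experts, so per-token active parameters are far below the total parameter count. A minimal sketch of that arithmetic, using illustrative sizes rather than any specific model's:

```python
def active_fraction(n_experts: int, top_k: int,
                    expert_params: int, shared_params: int) -> float:
    """Fraction of total parameters used per token when a router
    activates top_k of n_experts experts plus the always-on shared
    parameters (attention, embeddings). Illustrative, not a real model."""
    total = shared_params + n_experts * expert_params
    active = shared_params + top_k * expert_params
    return active / total

# Hypothetical model: 64 experts of 1B params each, 2 routed per token,
# plus 10B shared params -> 12B of 74B active per token (~16%).
frac = active_fraction(64, 2, 1_000_000_000, 10_000_000_000)
print(f"{frac:.1%} of weights active per token")
```

Worth noting as a design matter: sparsity is a compute-efficiency tradeoff, not withheld capability; dense models already activate all their weights per token, at proportionally higher FLOP cost.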