AI & Tech Developments - Apr 07

  • 18:04 — Open-sourcing an MLX port of the ESM-2 protein model family to enhance AI and biology integration. @josephjojoe
  • 18:13 — A new method allows fine-tuning of Gemma 4 and other models in Google Colab. @itsPaulAi
  • 18:20 — Discussion on the benefits of quantizing models to 1-bit/1.58-bit for efficiency. @leloykun
  • 19:07 — llm-wiki version 0.0.10 is out. @nvk
  • 19:21 — A significant development in Hyperliquid with an independent client achieving block hash parity. @Yaugourt
  • 19:22 — TurboQuant+ update introduces support for a new architecture, enhancing performance. @Bent302
  • 21:00 — Hermes Agent: This open-source beast actually grows with you — it learns from experience, builds its own skills, improves over time, and keeps persistent memory. Feels like having a truly self-improving AI teammate. Huge thanks to the team! @HaseebMir91
  • 21:48 — OpenAI seems to be working on an upgraded agent builder, internally named “Hermes,” which will likely be integrated directly into ChatGPT, alongside a “Pluto Model”. @testingcatalog
  • 21:52 — A new distillation of the Hermes Agent shows improved performance with Hermes models. @JoelDeTeves
  • 21:56 — Insights on executing long autonomous sessions with Claude Code and Codex models. @antirez

📱 Source Tweets

I'm open-sourcing an MLX port of the ESM-2 protein model family so more people can tinker with AI & biology on Apple silicon! (1/5)

@josephjojoe

You can now fine-tune Gemma 4 (and 500 other open source models) in a free Google Colab

@itsPaulAi
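The tweet doesn't say which library or method powers the Colab fine-tuning, but fitting large models like Gemma into free-tier GPUs typically relies on parameter-efficient techniques such as LoRA, which freezes the pretrained weights and trains only a small low-rank update. A minimal NumPy sketch of the LoRA idea, with hypothetical layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4     # hypothetical sizes; rank r << d

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # B starts at zero, so the update is initially zero

def lora_forward(x):
    # Base path plus scaled low-rank update; only A and B are trained,
    # so trainable parameters drop from d_out*d_in to r*(d_in + d_out).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
```

Because `B` is initialized to zero, the adapted layer starts out exactly equal to the frozen base layer, which is the standard LoRA initialization.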

I think people may be overlooking why we would want to quantize to 1-bit/1.58-bit in the first place.

@leloykun
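The tweet doesn't spell out the scheme, but "1.58-bit" usually refers to ternary weights in {-1, 0, +1} (log2(3) ≈ 1.58 bits), as popularized by BitNet b1.58. A minimal NumPy sketch of the absmean variant, where each tensor keeps one floating-point scale and the matmul itself reduces to additions and subtractions:

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Quantize weights to {-1, 0, +1} plus one per-tensor scale
    (absmean scheme in the style of BitNet b1.58)."""
    scale = np.mean(np.abs(w)) + 1e-8        # per-tensor absmean scale
    q = np.clip(np.round(w / scale), -1, 1)  # snap to the nearest ternary value
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([[0.9, -0.05, -1.2],
              [0.3,  0.0,  -0.4]], dtype=np.float32)
q, s = ternary_quantize(w)
w_hat = dequantize(q, s)
```

The efficiency argument follows directly: weights shrink from 16 or 32 bits to under 2 bits each, and multiplications by -1/0/+1 become sign flips and skips.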

llm-wiki version 0.0.10 is out!

@nvk

Stop scrolling. This might be one of the most important things happening on Hyperliquid right now, and almost nobody is talking about it.

@Yaugourt
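For context on "block hash parity": an independently written client replays the same chain and must produce byte-identical block hashes to the reference node, which is strong evidence that the second implementation agrees on every consensus rule. A toy sketch (illustrative only; this is not Hyperliquid's actual block format):

```python
import hashlib
import json

def block_hash(prev_hash: str, txs: list) -> str:
    # Deterministic serialization (sorted keys) so that two independent
    # implementations hashing the same block contents get the same digest.
    payload = json.dumps({"prev": prev_hash, "txs": txs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Two "clients" replay the same transaction log and must agree on every hash.
chain = [["a->b:5"], ["b->c:2", "c->a:1"]]
h1 = h2 = "0" * 64
for txs in chain:
    h1 = block_hash(h1, txs)
    h2 = block_hash(h2, txs)
```

Any divergence in transaction ordering, encoding, or state rules would surface immediately as a hash mismatch, which is why parity is the milestone to watch.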

TurboQuant+ update. New build dropped from @spiritbuun with a lot of changes.

@Bent302

Okay, I have no clue how he did it but this is the best 27B distill I've used to date.

@JoelDeTeves

During the last week I executed very long autonomous sessions of Claude Code Opus 4.6 and Codex GPT 5.4.

@antirez

OpenAI seems to be working on an upgraded agent builder, internally named "Hermes," which will likely be integrated directly into ChatGPT. And "Pluto Model".

@testingcatalog

Hermes Agent: This open-source beast actually grows with you — it learns from experience, builds its own skills, improves over time, and keeps persistent memory. Feels like having a truly self-improving AI teammate. Huge thanks to the team!

@HaseebMir91