AI & Tech Developments - Mar 29

  • 10:51 — Insights on pairing Qwen 3.5 with the Hermes agent (vs. OpenClaw). @sudoingX
  • 16:45 — Announcement of the Claude Mythos 5.0 Beta rollout by Anthropic. @slash1sol
  • 17:02 — Explanation of using Claude Code for efficient coding practices. @kr0der
  • 22:25 — Discussion of model setups for the Hermes Agent. @JoelDeTeves
  • 22:33 — Recommendations for models to run on specific hardware configurations. @0xSero
  • 02:42 — Want to talk to the past? Here is an LLM trained entirely from scratch on a corpus of over 28,000 Victorian-era British texts published between 1837 and 1899. @emollick
  • 02:59 — It's wild that every time you run a Codex code review from Claude Code, it finds critical issues. Not 95% of the time, 100%. @davidmarcus
  • 23:47 — If you're using Ollama, switch to llama.cpp; if you're using OpenClaw, switch to Hermes. These are basics at this point. @TheAhmadOsman

📱 Source Tweets

Hear this from our x/localllama community admin, not just any gamer on the street: Qwen 3.5 27B dense paired with Hermes agent is something else. I've tested the same model on OpenClaw bloat and it becomes useless: tool calls fail, model chains break, sessions crash.

— @sudoingX

Claude Mythos 5.0 Beta is already rolling out. Anthropic quietly started giving users access to their next-gen flagship model - the same one from the leaked internal blog post that had everyone talking. It's live right now: the main Claude interface shows Mythos 5.0 Beta.

— @slash1sol

I got Claude Code to tell me what its /simplify prompt is so I could copy-paste it into Codex, since Codex loves writing unnecessary and duplicate code. Bookmark this and add it to your Codex prompts: --- description: "Review changed code for reuse, quality, and efficiency,

— @kr0der

After testing several, here is my current model setup for Hermes Agent (fits in 24 GB VRAM): Qwen3.5-35B-A3B-UD-IQ4_XS.gguf --n-gpu-layers 99 --ctx-size 131072 --batch-size 4096 --ubatch-size 2048 --temp 0.7 --top-p 0.95 --top-k 20 --min-p 0.0 --presence-penalty 0.5

— @JoelDeTeves
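The flags in the setup above are llama.cpp runtime and sampling options; a minimal launch sketch with llama.cpp's `llama-server` (the model path and port are assumptions, not from the tweet):

```shell
# Sketch only: assumes the GGUF file sits in ./models and llama-server
# (from llama.cpp) is on PATH. --n-gpu-layers 99 offloads all layers to
# the GPU; --ctx-size 131072 requests the full 128K context window.
llama-server \
  --model ./models/Qwen3.5-35B-A3B-UD-IQ4_XS.gguf \
  --n-gpu-layers 99 \
  --ctx-size 131072 \
  --batch-size 4096 \
  --ubatch-size 2048 \
  --temp 0.7 --top-p 0.95 --top-k 20 --min-p 0.0 \
  --presence-penalty 0.5 \
  --port 8080
```

Once running, llama-server exposes an OpenAI-compatible chat endpoint, so an agent frontend can point at it like any other API backend.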

Best models to run on your hardware:
— 64 GB —
- Qwen3-coder-next-80B-4bit (coding, Claude code, general agent)
- Qwen3.5-122B-reap (browser use, multimodal, tool calling, general agent)
— 96 GB —
- GLM-4.6V (multimodal and tool calls)
- Hermes-70B (Jailbroken)

— @0xSero
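A quick back-of-envelope check on why those pairings fit: at b bits per weight, an N-billion-parameter model needs roughly N × b / 8 GB for weights alone (real quantized files run somewhat larger, since some tensors stay at higher precision, and the KV cache needs headroom on top). A minimal sketch:

```python
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory footprint in GB: params * bits / 8.

    Ignores higher-precision tensors, KV cache, and activations, so
    treat the result as a lower bound.
    """
    return params_billion * bits_per_weight / 8


# Qwen3-coder-next-80B at 4-bit: ~40 GB of weights, leaving the rest
# of a 64 GB machine for KV cache, activations, and the OS.
print(approx_weight_gb(80, 4))  # 40.0
```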

Want to talk to the past? Here is an LLM trained entirely from scratch on a corpus of over 28,000 Victorian-era British texts published between 1837 and 1899.

— @emollick

It's wild that every time you run a Codex code review from Claude Code, it finds critical issues. Not 95% of the time, 100%.

— @davidmarcus

If you're using Ollama, switch to llama.cpp; if you're using OpenClaw, switch to Hermes. These are basics at this point.

— @TheAhmadOsman