Thoughts on AI, engineering, and building products
Google went full Apache 2.0 with Gemma 4, Alibaba closed its flagship model behind an API, a startup got GPU futures listed on the Bloomberg Terminal, and OpenAI bought a media company. All on the same Wednesday.
Every frontier AI model scored below 1% on ARC-AGI-3. Humans scored 100%. The new benchmark abandons pattern-matching grids for interactive video game environments, exposing a fundamental gap between memorization and genuine intelligence.
Large language models waste enormous computational depth reconstructing facts they already know. DeepSeek's Engram module fixes this by adding a fast-lookup memory system alongside neural computation, yielding surprising gains not just in knowledge retrieval but in reasoning, coding, and math.
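The core idea — a lookup table consulted alongside the layer stack, so later layers start from a retrieved fact instead of reconstructing it — can be sketched roughly like this. All names, shapes, and the top-k softmax retrieval are illustrative assumptions, not DeepSeek's actual Engram design:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64            # hidden dimension (assumed)
n_entries = 1000  # number of stored fact entries (assumed)

keys = rng.standard_normal((n_entries, d))    # memory keys
values = rng.standard_normal((n_entries, d))  # memory values (fact embeddings)

def memory_lookup(hidden, top_k=4):
    """Retrieve a softmax-weighted mix of stored values for a hidden-state query."""
    scores = keys @ hidden                          # similarity to every key
    top = np.argpartition(scores, -top_k)[-top_k:]  # indices of best matches
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                        # softmax over the top-k
    return weights @ values[top]                    # retrieved value vector

def layer_with_memory(hidden):
    """Residual add of the looked-up value, so depth isn't spent rebuilding the fact."""
    return hidden + memory_lookup(hidden)

h = rng.standard_normal(d)
out = layer_with_memory(h)
print(out.shape)  # (64,)
```

The lookup is O(n) here for clarity; a production system would use an approximate nearest-neighbor index.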
Most copy-trading tools are glorified Telegram bots with no risk controls and zero transparency. So I built CopyAlpha, a full-stack platform that processes signals, enforces risk rules, and executes across both centralized (CEX) and decentralized (DEX) exchanges.
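"Enforces risk rules" boils down to gating every copied signal before execution. A minimal sketch of that gate — the rule set, field names, and limits here are assumptions for illustration, not CopyAlpha's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    symbol: str      # market, e.g. "BTC/USDT"
    side: str        # "buy" or "sell"
    notional: float  # trade size in quote currency

@dataclass
class RiskLimits:
    max_notional: float         # cap per single trade
    max_exposure: float         # cap on total open exposure
    allowed_symbols: frozenset  # whitelist of tradable markets

def check_signal(sig: Signal, limits: RiskLimits, open_exposure: float) -> bool:
    """Return True only if the copied signal passes every risk rule."""
    if sig.symbol not in limits.allowed_symbols:
        return False
    if sig.notional > limits.max_notional:
        return False
    if open_exposure + sig.notional > limits.max_exposure:
        return False
    return True

limits = RiskLimits(max_notional=500.0, max_exposure=2000.0,
                    allowed_symbols=frozenset({"BTC/USDT", "ETH/USDT"}))
print(check_signal(Signal("BTC/USDT", "buy", 400.0), limits, open_exposure=1500.0))  # True
print(check_signal(Signal("DOGE/USDT", "buy", 100.0), limits, open_exposure=0.0))    # False
```

Rejecting before routing, rather than after, is what separates a risk-controlled executor from a signal forwarder.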
How SK Hynix went from near-bankruptcy after a devastating factory fire to controlling the AI supply chain through a decade-long bet on High Bandwidth Memory (HBM) that everyone else thought was insane.
An analysis of Meta's VL-JEPA paper and Yann LeCun's vision for non-generative AI models that predict semantic embeddings instead of tokens.
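The contrast with token prediction can be shown in a few lines: a JEPA-style model is trained to regress the target view's embedding, scored by a distance in embedding space rather than a softmax over a vocabulary. This is a toy sketch under assumed shapes, with a random matrix standing in for the predictor network:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32  # embedding dimension (assumed)

def l2_embedding_loss(predicted, target):
    """JEPA-style objective: distance between predicted and target embeddings."""
    return float(np.mean((predicted - target) ** 2))

context_embedding = rng.standard_normal(d)   # encoder output for the visible context
W = rng.standard_normal((d, d)) * 0.1        # stand-in for the learned predictor
predicted = W @ context_embedding            # predicted embedding of the target view
target = rng.standard_normal(d)              # encoder output for the target view

loss = l2_embedding_loss(predicted, target)
print(loss >= 0.0)  # True
```

Because the loss lives in embedding space, the model never has to commit to one exact pixel or token — which is the crux of LeCun's non-generative argument.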
Why relying too much on LLMs weakens your critical thinking - and how to use AI as a partner rather than a replacement.
My takeaways from Tesla AI Day - from neural networks predicting in vector space to the Dojo supercomputer and the Tesla Bot.