Super HN

New Show
421. The evolution of rationality: How chimps process conflicting evidence
Chimps can take in new evidence, evaluate its strength, and change their minds.
422. Steam Controller
Your games at your fingertips
423. I didn't reverse-engineer the protocol for my blood pressure monitor in 24 hours
The personal website of James Belchamber.
424. VibeThinker-1.5B
Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B - WeiboAI/VibeThinker
425. An Italian Company Builds the First Known Propellantless Space-Propulsion System
Genergo, an Italian deep-tech company based in Como, comes out of stealth and unveils an innovative electromagnetic space-propulsion system that uses no propellant. The system has been successfully flight-tested and validated across three space missions and is protected by a portfolio of granted international patents.
426. Perkeep lets you permanently keep your stuff, for life
427. The Challenge of Large File Checksums
428. Worries about Open Source in the age of LLMs
Some concerns I have with moving towards "don't use Open Source, get an LLM to generate the code you want" and where it'll take us.
429. TiDAR: Think in Diffusion, Talk in Autoregression
Abstract page for arXiv paper 2511.08923: TiDAR: Think in Diffusion, Talk in Autoregression
430. Why Software Systems Must Default to the Metric System
And Why You Should Migrate Now
431. I implemented an ISO 42001-certified AI Governance program in 6 months
432. Minicoro: Single header stackful cross-platform coroutine library in pure C
Single header stackful cross-platform coroutine library in pure C. - edubart/minicoro
433. LaLiga: ISPs Must Join Anti-Piracy War to Secure Broadcasting Rights
Companies acting as both TV broadcasters and ISPs must comply with extreme anti-piracy obligations if they air LaLiga matches from 2027.
434. Terminal Latency on Windows
UPDATE 2024-04-15: Windows Terminal 1.19 contains a fix that reduces latency by half! It’s now competitive with WSLtty on my machine. Details in the GitHub Issue.
435. Dave's Garage PDP-11 BBS Menu System
PDP-11 Related Source Code. Contribute to davepl/pdpsrc development by creating an account on GitHub.
436. Julia 1.12 brings progress on standalone binaries and more
437. Google open-sources Android 16 QPR1, two months late
Android 16 QPR1 is finally being pushed to the Android Open Source Project. This should have happened on 2025-09-03. We migrated to full Android 16 QPR1 kernel code (GPLv2 tarball) and firmware in September. We couldn't migrate userspace to QPR1 without it being pushed to AOSP.
438. The effects of pets and human-pet interactions on humans' romantic relationships
439. Why movies just don't feel "real" anymore
440. Towards Greater Leverage: Scaling Laws for Efficient MoE Language Models
Abstract page for arXiv paper 2507.17702: Towards Greater Leverage: Scaling Laws for Efficient Mixture-of-Experts Language Models
441. The Geometry Behind Normal Maps
Learn how tangent space works, how to compute tangents and the TBN matrix, and how normal mapping transforms textures into detailed 3D shading.
442. How to Tolerate Annoying Things
Hassles are part of life, but the way we react often makes them worse. ACT skills can help you handle them with greater ease
443. Ask HN: Why does Y Combinator seem to be consistently funding AI slop?
444. Build a DeepSeek Model from Scratch
Learn how to build the features that set DeepSeek apart from other top LLMs! When DeepSeek started making waves in January 2025, it sounded too good to be true. How could a generative AI model get such incredible performance with such low training and operation costs? By creatively blending a variety of strategies and innovations like Mixture of Experts, Latent Attention, Multi-token Prediction, model distillation, and efficient parallelization, DeepSeek set a new standard for what’s possible in an open LLM. Now, in Build a DeepSeek Model (From Scratch), you can recreate a laptop-scale version of this cutting-edge model yourself! You will learn how to: implement DeepSeek’s core architectural innovations, including Multi-Head Latent Attention and Mixture-of-Experts layers; build a production-ready training pipeline with Multi-Token Prediction and FP8 quantization for efficiency and speed; maximize hardware utilization with parallelism strategies like DualPipe; apply post-training methods such as supervised fine-tuning and reinforcement learning to unlock reasoning capabilities; and compress and distill large models into smaller, deployable versions for real-world use. You’ll build your own DeepSeek clone from the ground up. First, you’ll quickly review LLM fundamentals, with an eye to where DeepSeek’s innovations address the common problems and limitations of standard models. Then, you’ll learn everything you need to create your own DeepSeek-inspired model, including the innovations that put DeepSeek on the map: Multi-Head Latent Attention (MLA), Multi-Token Prediction (MTP), Mixture of Experts (MoE), model distillation, and reasoning.
445. A Commentary on the Sixth Edition Unix Operating System
446. How to Pick Your Battles
Not Every Disagreement is Worth the Fight—Learn How to Choose Wisely
447. Show HN: Whirligig.live
go to a gig tonight
448. Jeffrey Epstein Pursued Swiss Rothschild Bank to Finance Israeli Cyberweapons
Epstein to Ehud Barak: “[Ariane de Rothschild] said to me, if Ehud wants to make serious money, he will have to build a relationship with me. take time so that we can truly understand one another.”
449. Why Nietzsche Matters in the Age of Artificial Intelligence
450. Show HN: Minivac 601 Simulator - a 1961 Relay Computer