Super HN

New Show
121. Windows: Prefer the Native API over Win32
zig - General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
122. Polis: Open-source platform to find common ground at scale
123. Do Metaprojects
I'm learning to let go, but maybe letting go takes a lifetime.
124. French railway operator tests solar on train tracks
Swiss startup Sun-Ways is testing removable solar panels installed on an operational railway line in a pilot project with French railway operator SNCF.
125. Run Pebble OS in Browser via WASM
126. We urgently need a federal law forbidding AI from impersonating humans
Daniel Dennett was right
127. A brief history of barbed wire fence telephone networks
If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you'll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You'll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another…
128. ByteDance Seed2.0 LLM: breakthrough in complex real-world tasks
Seed2.0 officially released.
129. The Future for Tyr, a Rust GPU Driver for Arm Mali Hardware
130. GLM5 Released on Z.ai Platform
Meet Z.ai, your free AI-powered assistant. Build websites, create slides, analyze data, and get instant answers. Fast, smart, and reliable, powered by GLM-5.
131. Beginning autonomous operations with the 6th-generation Waymo Driver
Waymo will begin fully autonomous operations with its 6th-generation Driver, an important step in bringing our technology to more riders in more cities. This latest system serves as the primary engine for our next era of expansion, with a streamlined configuration that drives down costs while maintaining our uncompromising safety standards. Designed for long-term growth across multiple vehicle platforms, this system's expanded capabilities allow us to safely broaden our footprint into more diverse environments, including those with extreme winter weather, at an even greater scale.
132. Zvec is a lightweight, fast, in-process vector database
A lightweight, lightning-fast, in-process vector database - alibaba/zvec
133. Recoverable and Irrecoverable Decisions
134. Thank You, AI
Ok, it is over. End of an era for me. No more self-hosted git. I had a public git server running since 2011, and a public cvs server before that. AI scrapers have hammered the poor little server to death by flooding the cgit frontend with tons of pointless² requests. Actually a few months ago already. Now I finally decided not to try to rebuild the server, be it with or without the cgit web frontend. I don't feel like taking up the fight with the scrapers in my spare time; I leave that to people who are in a better position to do so. Most repositories already had mirrors on one or two of the large git forges. Those are the primary repositories now. Go look at GitLab and GitHub. Last week I fixed all (I hope) dangling links to the cgit repositories to point to the forges instead. Now I'm down to one self-hosted service, which is the webserver hosting mainly this blog and a few more little things. In 2018 I migrated the blog from WordPress to Jekyll, so it is all static pages. Taking this out by AI scrapers overloading the machine should be next to impossible, and so far this has held up. Nevertheless, AI scrapers already managed to trigger one outage. Apparently millions of 404 answers were not enough to convince the bots that there is no cgit service (any more). Apache had no problem delivering those, but the logs filled up the disk so fast that logrotate didn't manage to keep things under control with the default configuration. Fixed config. Knock wood.
¹ Title inspired by the 2025 edition of Security Nightmares. Fun watching if you speak German.
² Most inefficient way to get the complete repo. Just clone it, ok?
135. Rendering the Visible Spectrum
136. How to Make a Living as an Artist
An essay by fnnch on making a living as an artist.
137. AWS Adds support for nested virtualization
AWS SDK for the Go programming language (aws/aws-sdk-go-v2 on GitHub).
138. Ring owners are returning their cameras
139. Pentagon Used Anthropic's Claude in Maduro Venezuela Raid
140. Fixing retail with land value capture
141. Postgres Locks Explained: From Theory to Advanced Troubleshooting
142. Email is tough: Major European Payment Processor's Emails aren't RFC-Compliant
Viva.com, one of Europe's largest payment processors, sends verification emails without a Message-ID header — a requirement of RFC 5322 since 2008. Google Workspace rejects them outright. Their support team's response to my detailed bug report: your account has a verified email, so there's no problem.
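The missing header is easy to check for, and to fix, with Python's standard library. A minimal sketch (the addresses are hypothetical) showing that a hand-built message has no Message-ID until the sender adds one:

```python
from email.message import EmailMessage
from email.utils import make_msgid

# Build a message the way a careless sender might; note that
# EmailMessage does not add a Message-ID header automatically.
msg = EmailMessage()
msg["From"] = "noreply@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Verify your account"
msg.set_content("Click the link to verify.")

# Strict receivers may reject messages without this header, so
# generate a compliant one ("<unique@domain>") before sending.
if "Message-ID" not in msg:
    msg["Message-ID"] = make_msgid(domain="example.com")
```

`make_msgid` produces a globally unique identifier in angle brackets, which is the form RFC 5322 expects.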
143. Cache Monet
144. Reports of Telnet's Death Have Been Greatly Exaggerated
We see no evidence that specific core network autonomous systems have blocked Telnet, contrary to previous reports. We specifically see continued non-spoofable Telnet traffic from networks on which GreyNoise saw 100% drop-off. We suspect initial results may have been measurement artifacts or specific threat actors explicitly avoiding GreyNoise infrastructure, though determining this root cause is impossible without internal data.
145. Partial 8-Piece Tablebase
63 TiB of chess knowledge sent across the Atlantic and now available on the Lichess analysis board
146. The Sharp PC-2000 Computer Boombox from 1979
Just cruising the interwebs and found this oddity, the Sharp PC-2001 Boombox Computer from 1979. Not much information can be found; does anybody own...
147. HeyWhatsThat
148. The missing digit of Stela C
One bad thing about archaeologists is that some of the successful ones get a big head. People used to think the Olmecs, who made these colossal stone heads, were contemporary with the Mayans. But in 1939, an archaeologist couple, Marion and Matthew Stirling, found the bottom half of an Olmec stone that had part of…
149. Ireland rolls out pioneering basic income scheme for artists
150. Apache Arrow is 10 years old
The Apache Arrow project was officially established and had its first git commit on February 5th 2016, and we are therefore enthusiastic to announce its 10-year anniversary! Looking back over these 10 years, the project has developed in many unforeseen ways, and we believe we have delivered on our objective of providing agnostic, efficient, durable standards for the exchange of columnar data.

How it started

From the start, Arrow has been a joint effort between practitioners of various horizons looking to build common ground to efficiently exchange columnar data between different libraries and systems. In this blog post, Julien Le Dem recalls how some of the founders of the Apache Parquet project participated in the early days of the Arrow design phase. The idea of Arrow as an in-memory format was meant to address the other half of the interoperability problem, the natural complement to Parquet as a persistent storage format.

Apache Arrow 0.1.0

The first Arrow release, numbered 0.1.0, was tagged on October 7th 2016. It already featured the main data types that are still the bread-and-butter of most Arrow datasets, as evidenced in this Flatbuffers declaration:

/// ----------------------------------------------------------------------
/// Top-level Type value, enabling extensible type-specific metadata. We can
/// add new logical types to Type without breaking backwards compatibility
union Type {
  Null,
  Int,
  FloatingPoint,
  Binary,
  Utf8,
  Bool,
  Decimal,
  Date,
  Time,
  Timestamp,
  Interval,
  List,
  Struct_,
  Union
}

The release announcement made the bold claim that "the metadata and physical data representation should be fairly stable as we have spent time finalizing the details". Does that promise hold? The short answer is: yes, almost! But let us analyse that in a bit more detail:

- The Columnar format, for the most part, has only seen additions of new datatypes since 2016. A single breaking change occurred: Union types cannot have a top-level validity bitmap anymore.
- The IPC format has seen several minor evolutions of its framing and metadata format; these evolutions are encoded in the MetadataVersion field, which ensures that new readers can read data produced by old writers. Its single breaking change is related to the same Union validity change mentioned above.

First cross-language integration tests

Arrow 0.1.0 had two implementations: C++ and Java, with bindings of the former to Python. There were also no integration tests to speak of, that is, no automated assessment that the two implementations were in sync (what could go wrong?). Integration tests had to wait until November 2016 to be designed, and the first automated CI run probably occurred in December of the same year. Its results cannot be fetched anymore, so we can only assume the tests passed successfully. 🙂

From that moment, integration tests have grown to follow additions to the Arrow format, while ensuring that older data can still be read successfully. For example, the integration tests that are routinely checked against multiple implementations of Arrow include data files generated in 2019 by Arrow 0.14.1.

No breaking changes... almost

As mentioned above, at some point the Union type lost its top-level validity bitmap, breaking compatibility for workloads that made use of this feature. This change was proposed back in June 2020 and enacted shortly thereafter. It elicited no controversy and doesn't seem to have caused any significant discontent among users, signaling that the feature was probably not widely used (if at all). Since then, there have been precisely zero breaking changes in the Arrow Columnar and IPC formats.

Apache Arrow 1.0.0

We have been extremely cautious with version numbering and waited until July 2020 before finally switching away from 0.x version numbers.
This signalled to the world that Arrow had reached its "adult phase" of making formal compatibility promises, and that the Arrow formats were ready for wide consumption across the data ecosystem.

Apache Arrow, today

Describing the breadth of the Arrow ecosystem today would take a full-fledged article of its own, or perhaps even multiple Wikipedia pages. Our "powered by" page can give a small taste. As for the Arrow project itself, we will merely refer you to our official documentation:

- The various specifications that cater to multiple aspects of sharing Arrow data, such as in-process zero-copy sharing between producers and consumers that know nothing about each other, or executing database queries that efficiently return their results in the Arrow format.
- The implementation status page that lists the implementations developed officially under the Apache Arrow umbrella (native software libraries for C, C++, C#, Go, Java, JavaScript, Julia, MATLAB, Python, R, Ruby, and Rust). Keep in mind that multiple third-party implementations exist in non-Apache projects, either open source or proprietary.

However, that is only a small part of the landscape. The Arrow project hosts several official subprojects, such as ADBC and nanoarrow. A notable success story is Apache DataFusion, which began as an Arrow subproject and later graduated to become an independent top-level project in the Apache Software Foundation, reflecting the maturity and impact of the technology. Beyond these subprojects, many third-party efforts have adopted the Arrow formats for efficient interoperability. GeoArrow is an impressive example of how building on top of existing Arrow formats and implementations can enable groundbreaking efficiency improvements in a very non-trivial problem space.
It should also be noted that Arrow, as an in-memory columnar format, is often used hand in hand with Parquet for persistent storage; as a matter of fact, most official Parquet implementations are nowadays developed within Arrow repositories (C++, Rust, Go).

Tomorrow

The Apache Arrow community is primarily driven by consensus, and the project does not have a formal roadmap. We will continue to welcome everyone who wishes to participate constructively. While the specifications are stable, they still welcome additions to cater for new use cases, as they have done in the past. The Arrow implementations are actively maintained, gaining new features, bug fixes, and performance improvements. We encourage people to contribute to their implementation of choice, and to engage with us and the community. Now and going forward, a large amount of Arrow-related progress is happening in the broader ecosystem of third-party tools and libraries. It is no longer possible for us to keep track of all the work being done in those areas, but we are proud to see that it builds on the same stable foundations that were laid 10 years ago.