FediMeteo: A €4 FreeBSD VPS Became a Global Weather Service
374 by birdculture | 87 comments on Hacker News.
Wednesday, December 31, 2025
Tuesday, December 30, 2025
New best story on Hacker News: Show HN: 22 GB of Hacker News in SQLite
Show HN: 22 GB of Hacker News in SQLite
391 by keepamovin | 136 comments on Hacker News.
Community, All the HN belong to you. This is an archive of Hacker News that fits in your browser. When I made HN Made of Primes I realized I could probably do this offline sqlite/wasm thing with the whole GBs of archive. The whole dataset. So I tried it, and this is it. Have Hacker News on your device. Go to this repo ( https://ift.tt/Eq7DNXJ ): you can download it. BigQuery -> ETL -> npx serve docs - that's it. 20 years of HN arguments and beauty, can be yours forever. So they'll never die. Ever. It's the unkillable static archive of HN and it's in your hands. That's my Year End gift to you all. Thank you for a wonderful year, have a happy and wonderful 2026. Make something of it.
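Because the archive is a plain SQLite file, it can be queried with any SQLite client once downloaded. A minimal sketch: the post does not document the database's schema, so the "items" table and its columns below are an assumption modeled on the Hacker News API fields, and an in-memory database with sample rows stands in for the downloaded file.

```python
import sqlite3

# Assumed schema mirroring the HN API (id, type, by, title, score); verify
# against the repo's actual schema before using these column names.
conn = sqlite3.connect(":memory:")  # substitute the path to the downloaded .db
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, type TEXT, by TEXT, title TEXT, score INTEGER)"
)
conn.executemany(
    "INSERT INTO items (type, by, title, score) VALUES (?, ?, ?, ?)",
    [
        ("story", "keepamovin", "Show HN: 22 GB of Hacker News in SQLite", 391),
        ("story", "quesomaster9000", "Show HN: Z80-uLM", 415),
        ("comment", "someone", None, 0),
    ],
)
# Top stories by score, exactly the kind of query the offline archive enables.
top = conn.execute(
    "SELECT title, score FROM items WHERE type = 'story' ORDER BY score DESC LIMIT 5"
).fetchall()
for title, score in top:
    print(f"{score:>4}  {title}")
conn.close()
```

The same query works unchanged against the real file by swapping ":memory:" for its path, assuming the column names match.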
Monday, December 29, 2025
New best story on Hacker News: Show HN: Z80-μLM, a 'Conversational AI' That Fits in 40KB
Show HN: Z80-μLM, a 'Conversational AI' That Fits in 40KB
415 by quesomaster9000 | 95 comments on Hacker News.
How small can a language model be while still doing something useful? I wanted to find out, and had some spare time over the holidays. Z80-μLM is a character-level language model with 2-bit quantized weights ({-2,-1,0,+1}) that runs on a Z80 with 64KB RAM. The entire thing: inference, weights, chat UI, it all fits in a 40KB .COM file that you can run in a CP/M emulator and hopefully even real hardware! It won't write your emails, but it can be trained to play a stripped down version of 20 Questions, and is sometimes able to maintain the illusion of having simple but terse conversations with a distinct personality. -- The extreme constraints nerd-sniped me and forced interesting trade-offs: trigram hashing (typo-tolerant, loses word order), 16-bit integer math, and some careful massaging of the training data meant I could keep the examples 'interesting'. The key was quantization-aware training that accurately models the inference code limitations. The training loop runs both float and integer-quantized forward passes in parallel, scoring the model on how well its knowledge survives quantization. The weights are progressively pushed toward the 2-bit grid using straight-through estimators, with overflow penalties matching the Z80's 16-bit accumulator limits. By the end of training, the model has already adapted to its constraints, so no post-hoc quantization collapse. Eventually I ended up spending a few dollars on Claude API to generate 20 questions data (see examples/guess/GUESS.COM), I hope Anthropic won't send me a C&D for distilling their model against the ToS ;P But anyway, happy code-golf season everybody :)
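The straight-through-estimator trick described above can be sketched in a few lines. This is a minimal NumPy illustration of the core idea (forward pass on 2-bit-grid weights, gradient applied unchanged to underlying float "shadow" weights), not the author's actual training code; all function names here are mine.

```python
import numpy as np

GRID = np.array([-2.0, -1.0, 0.0, 1.0])  # the 2-bit weight levels {-2, -1, 0, +1}

def quantize(w: np.ndarray) -> np.ndarray:
    """Snap each float weight to the nearest level on the 2-bit grid."""
    idx = np.abs(w[..., None] - GRID).argmin(axis=-1)
    return GRID[idx]

def ste_update(w: np.ndarray, grad_wrt_wq: np.ndarray, lr: float = 0.05) -> np.ndarray:
    """One straight-through-estimator step: the forward pass used
    quantize(w), but the gradient is passed through unchanged and
    applied to the float shadow weights."""
    w = w - lr * grad_wrt_wq
    # Clamp the shadow weights to the representable range so they
    # cannot drift arbitrarily far from the grid.
    return np.clip(w, GRID.min(), GRID.max())
```

During training the loss is computed on quantize(w), so by export time the quantized weights are exactly what the Z80 inference code sees, which is why there is no post-hoc quantization collapse.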
Sunday, December 28, 2025
Saturday, December 27, 2025
New best story on Hacker News: Exe.dev
Exe.dev
426 by achairapart | 280 comments on Hacker News.
https://ift.tt/GhpBA1i https://ift.tt/lLsMCDP https://ift.tt/54tncom
Friday, December 26, 2025
Thursday, December 25, 2025
New best story on Hacker News: Show HN: Vibium – Browser automation for AI and humans, by Selenium's creator
Show HN: Vibium – Browser automation for AI and humans, by Selenium's creator
388 by hugs | 110 comments on Hacker News.
I started the Selenium project 21 years ago. Vibium is what I'd build if I started over today with AI agents in mind. Go binary under the hood (handles browser, BiDi, MCP), but devs never see it. Just npm install vibium. Python/Java coming. For Claude Code: claude mcp add vibium -- npx -y vibium. v1 ships today. AMA.
Wednesday, December 24, 2025
Tuesday, December 23, 2025
Monday, December 22, 2025
Sunday, December 21, 2025
Saturday, December 20, 2025
Friday, December 19, 2025
Thursday, December 18, 2025
New best story on Hacker News: I got hacked: My Hetzner server started mining Monero
I got hacked: My Hetzner server started mining Monero
558 by jakelsaunders94 | 340 comments on Hacker News.
Wednesday, December 17, 2025
New best story on Hacker News: Gemini 3 Flash: Frontier intelligence built for speed
Gemini 3 Flash: Frontier intelligence built for speed
644 by meetpateltech | 296 comments on Hacker News.
Docs: https://ift.tt/JInT1vL Developer Blog: https://ift.tt/yL3vz86... Model Card [pdf]: https://ift.tt/QAjeOSK Gemini 3 Flash in Search AI mode: https://ift.tt/EPWlZQm... Deepmind Page: https://ift.tt/tsgDWMG
Tuesday, December 16, 2025
Monday, December 15, 2025
Sunday, December 14, 2025
Saturday, December 13, 2025
Friday, December 12, 2025
Thursday, December 11, 2025
New best story on Hacker News: Rubio stages font coup: Times New Roman ousts Calibri
Rubio stages font coup: Times New Roman ousts Calibri
374 by italophil | 645 comments on Hacker News.
https://ift.tt/0MGAID1
Wednesday, December 10, 2025
New best story on Hacker News: Australia begins enforcing world-first teen social media ban
Australia begins enforcing world-first teen social media ban
581 by chirau | 943 comments on Hacker News.
https://ift.tt/Cy4XRuJ https://ift.tt/ivuA2R6 https://ift.tt/A0xdOY3... ( https://ift.tt/riOxCbu )
Tuesday, December 9, 2025
New best story on Hacker News: Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
495 by embedding-shape | 290 comments on Hacker News.
As various LLMs become more and more popular, so do comments like "I asked Gemini, and Gemini said ....". While the guidelines were written (and iterated on) during a different time, it seems like it might be time to discuss whether that sort of comment should be welcome on HN. Some examples: - https://ift.tt/z3c4gEy - https://ift.tt/yqevF49 - https://ift.tt/vAjS41W Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least). What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments, or something else completely?
Monday, December 8, 2025
New best story on Hacker News: Microsoft has a problem: lack of demand for its AI products
Microsoft has a problem: lack of demand for its AI products
371 by mohi-kalantari | 311 comments on Hacker News.
Sunday, December 7, 2025
Saturday, December 6, 2025
Friday, December 5, 2025
Thursday, December 4, 2025
Wednesday, December 3, 2025
Tuesday, December 2, 2025
Monday, December 1, 2025
New best story on Hacker News: DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]
DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]
468 by pretext | 205 comments on Hacker News.
https://ift.tt/nCY1riz https://ift.tt/CwKj5p8
New best story on Hacker News: Apple picks Google's Gemini to power Siri
Apple picks Google's Gemini to power Siri
406 by stygiansonic | 235 comments on Hacker News.