<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://sampatt.com/</id>
    <title>Sam Patterson's Blog</title>
    <updated>2026-02-21T04:07:43.317Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <author>
        <name>Sam Patterson</name>
        <uri>https://sampatt.com</uri>
    </author>
    <link rel="alternate" href="https://sampatt.com/"/>
    <subtitle>Thoughts on software development, AI, and technology</subtitle>
    <icon>https://sampatt.com/favicon.ico</icon>
    <rights>All rights reserved 2026, Sam Patterson</rights>
    <entry>
        <title type="html"><![CDATA[Breakdown of all Satoshi's Writings Proves Bitcoin not Built Primarily as Store of Value]]></title>
        <id>https://sampatt.com/blog/2019-06-06-satoshi-analysis</id>
        <link href="https://sampatt.com/blog/2019-06-06-satoshi-analysis"/>
        <updated>2025-03-04T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[An evidence-based analysis of Satoshi Nakamoto's writings showing Bitcoin was not built primarily as a store of value.]]></summary>
        <content type="html"><![CDATA[<p><em>TL;DR</em></p>
<p><em>This post was originally written in June of 2019.</em></p>
<p><em>The <a href="https://twitter.com/danheld/status/1084848064559337473">claim</a> “Bitcoin was purpose-built to first be a Store of Value” is false. Many of Satoshi’s statements shown as evidence for this claim are taken out of context. When those statements are placed in context and considered alongside all his writings, it’s undeniable that Bitcoin was not built to first be a store of value, but was built for payments.</em></p>
<h2>Video Summary</h2>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/_j3ZrS5xirw" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<h2>Updates</h2>
<h3>June 06, 2019</h3>
<p>Added a section discussing Satoshi’s genesis block comment to the <a href="#evidence-for-store-of-value">Evidence for Store of Value</a> section.</p>
<div class="toc">
  <div class="toc-title">Table of Contents</div>
  <ul>
    <li><a href="#the-question">The question</a>
      <ul>
        <li><a href="#caveats">Caveats</a></li>
      </ul>
    </li>
    <li><a href="#evidence-for-store-of-value">Evidence for Store of Value</a>
      <ul>
        <li><a href="#store-of-value-summary">Store of Value Summary</a></li>
      </ul>
    </li>
    <li><a href="#evidence-for-payments">Evidence for Payments</a></li>
    <li><a href="#final-tally">Final Tally</a>
      <ul>
        <li><a href="#timeline">Timeline</a></li>
      </ul>
    </li>
    <li><a href="#the-whitepaper">The Whitepaper</a>
      <ul>
        <li><a href="#whitepaper-summary">Whitepaper summary</a></li>
      </ul>
    </li>
    <li><a href="#objections">Objections</a>
      <ul>
        <li><a href="#satoshi-was-just-doing-marketing">Satoshi was just doing marketing</a></li>
        <li><a href="#other-claims">Other claims</a></li>
      </ul>
    </li>
    <li><a href="#questions">Questions</a></li>
  </ul>
</div>
<h2>The question</h2>
<p>Did Satoshi build Bitcoin to serve primarily as a store of value, or did he build it for making payments?</p>
<p>To answer this question, I went straight to the source. Thanks to the Satoshi Nakamoto Institute’s <a href="https://satoshi.nakamotoinstitute.org/">archives</a> I was able to read every single thing Satoshi ever posted publicly. Comments from 260 forum threads, 63 emails, and his original source code.</p>
<p>After reviewing all of Satoshi’s writings, I can confidently state that Bitcoin was <em>not</em> purpose-built to first be a store of value. It was built for payments.</p>
<p>This isn’t based on word frequency analysis or other crude techniques. This is based on having read all his statements <em>and</em> the context surrounding them.</p>
<p>You don’t need to take my word for that. This post is so long because I’m going to walk you through <em>all</em> of Satoshi’s statements that relate to store of value or payments - in their original context - and let you see the evidence yourself. I’ll keep count of how many statements Satoshi made supporting store of value or payments (or both). If you’d rather not read the entire piece, use the table of contents above to skip around.</p>
<p>Where text in a quote is bold, the emphasis is mine. Feel free to reach out to add new sources or challenge any of the ones I’ve posted, and I’ll update the post if you make a good argument.</p>
<h3>Caveats</h3>
<p>Before I get into the evidence, I have two caveats to make.</p>
<ol>
<li>I’m not making normative claims, otherwise known as “should” statements. I’m solely focusing on the concrete historical claim that Satoshi built Bitcoin primarily to act as a store of value. This isn’t about what Bitcoin is today, or what it should be tomorrow, it’s narrowly about Satoshi’s original intent based on his own words.</li>
<li>I don’t think Satoshi’s opinion matters that much today. Bitcoin can be whatever we want it to be. But that doesn’t mean that people should be given a free pass to rewrite history and make false claims about Satoshi’s intentions. That’s intellectually dishonest and needs to be called out.</li>
</ol>
<h2>Evidence for Store of Value</h2>
<p>It’s clear that Satoshi wasn’t a fan of central banking, and that he knew the monetary policy of Bitcoin gave it unique properties, but just establishing those facts doesn’t imply that Satoshi built Bitcoin to act primarily as a store of value.</p>
<p>As far as I could determine, Satoshi never wrote the words “store of value” a single time. This would seem to pose a problem for those promoting the idea that Satoshi built Bitcoin as a store of value. However, he did talk about the idea indirectly. So to give the argument the best possible chance, let’s examine every instance where he talked about anything that could be interpreted as supporting a store of value, even if it’s a stretch.</p>
<p>Satoshi mentioned Bitcoin’s store of value properties a total of eight times across all his writings. I’ll go through each one below.</p>
<p>Eight mentions isn’t a lot to go on, but let’s look at the best case that can be made. It comes from Dan Held, a vocal supporter of the store of value narrative, who <a href="https://twitter.com/danheld/status/1134405481177452544">claims</a>, “[Bitcoin] was purpose built day one to be a Gold 2.0.”</p>
<p>Dan posted a <a href="https://twitter.com/danheld/status/1084848063947071488">long twitter thread</a> earlier this year making his case. The thread became popular and is commonly referred to as good evidence for the store of value claim.</p>
<p>His thread contains 47 tweets, but only six of them actually quote Satoshi discussing anything directly related to store of value. I discuss some of his other tweets in <a href="#other-claims">Other claims</a>.</p>
<h3>Source #1: P2P Foundation Post</h3>
<p>Dan’s first Satoshi quote is from a <a href="https://satoshi.nakamotoinstitute.org/posts/p2pfoundation/1/">P2P Foundation post</a>:</p>
<blockquote>
<p>The root problem with conventional currency is all the trust that’s required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust.</p>
</blockquote>
<p>That sounds like support for Bitcoin as a store of value. Let’s add one to the store of value tally:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">1</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">0</td>
</tr>
</tbody>
</table>
<h4>Context</h4>
<p>This quote is only two sentences from a long post. If you read through the whole post, you’ll see Satoshi also unambiguously supporting the use of Bitcoin for payments.</p>
<p>In fact, in the same paragraph as the quote above, he mentioned micropayments:</p>
<blockquote>
<p>[The banks’] massive overhead costs make micropayments impossible.</p>
</blockquote>
<p>And later in the piece he mentioned micropayments a second time:</p>
<blockquote>
<p>The usual solution [to the double-spending problem] is for a trusted company with a central database to check for double-spending, but that just gets back to the trust model. In its central position, the company can override the users, and the fees needed to support the company make micropayments impractical.</p>
</blockquote>
<p>Micropayments are the quintessential example of a payment, more or less the exact opposite of a store of value.</p>
<p>The opening sentence of the piece makes clear this is about p2p e-cash:</p>
<blockquote>
<p>I’ve developed a new open source P2P e-cash system called Bitcoin.</p>
</blockquote>
<p>So this source is evidence for both store of value and payments. The score is tied:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">1</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">1</td>
</tr>
</tbody>
</table>
<h3>Source #2: BT Thread “Bitcoins are most like shares of common stock”</h3>
<p>Dan’s <a href="https://twitter.com/danheld/status/1084848067008811008">next Tweet</a> is quoting a short sentence from a BitcoinTalk (BT) forum thread:</p>
<blockquote>
<p>Bitcoin [is] more like a collectible or commodity.</p>
</blockquote>
<h4>Context</h4>
<p><a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/192/#24">The thread</a> is a discussion of whether or not Bitcoin was similar to stocks. Satoshi’s full response is as follows:</p>
<blockquote>
<p>Bitcoins have no dividend or potential future dividend, therefore not like a stock.</p>
</blockquote>
<blockquote>
<p>More like a collectible or commodity.</p>
</blockquote>
<p>This is not strong evidence that Satoshi built Bitcoin to operate as a store of value. He’s merely pointing out that it’s not like stocks because it doesn’t have a dividend.</p>
<p>In the interest of trying to interpret Satoshi’s comments in the most favorable light for the store of value claim, I’ll say this just qualifies as a mention and add one to the store of value side:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">2</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">1</td>
</tr>
</tbody>
</table>
<h3>Source #3: P2P Foundation Comment</h3>
<p>Dan’s <a href="https://twitter.com/danheld/status/1084848067637985280">next tweet</a> is from the same P2P Foundation post quoted earlier, but from a comment Satoshi made down the thread:</p>
<blockquote>
<p>In this sense, it’s more typical of a precious metal. Instead of the supply changing to keep the value the same, the supply is predetermined and the value changes. As the number of users grows, the value per coin increases</p>
</blockquote>
<p>The <a href="https://satoshi.nakamotoinstitute.org/posts/p2pfoundation/3/">full quote</a> continues:</p>
<blockquote>
<p>It has the potential for a positive feedback loop; as users increase, the value goes up, which could attract more users to take advantage of the increasing value.</p>
</blockquote>
<h4>Context</h4>
<p>This definitely qualifies as support for the store of value position. However, if you read the full comment, Satoshi uses some language that begins to cast doubt on the idea:</p>
<blockquote>
<p>indeed there is nobody to act as central bank or federal reserve to adjust the money supply as the population of users grows. That would have required a trusted party to determine the value, <strong>because I don’t know a way for software to know the real world value of things. If there was some clever way, or if we wanted to trust someone to actively manage the money supply to peg it to something, the rules could have been programmed for that</strong>.</p>
</blockquote>
<p>Satoshi is saying that the reason there’s no adjustment of the money supply is that he can’t think of a way for software to do this without trust. If there were a way to do it, or if he thought a peg was a good idea, he could have programmed it to do that instead.</p>
<p>Notice what he <em>isn’t</em> saying. He isn’t claiming that this positive feedback loop was designed intentionally to promote it as a store of value. His description makes it sound incidental to the design.</p>
<p>This is bolstered by the fact that this statement is in response to a question. Satoshi is not proactively stating the details of the monetary policy, he’s only responding with information when asked.</p>
<p>This is true in nearly all the cases I could find where he references store of value. If this were truly his main goal, why mention it only in response to others’ inquiries instead of boldly stating its true purpose?</p>
<p>Still, I’ll add one more to the store of value tally:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">3</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">1</td>
</tr>
</tbody>
</table>
<h3>Source #4: BT Thread &quot;Bitcoin does NOT violate Mises’ Regression Theorem&quot;</h3>
<p>Dan’s next <a href="https://twitter.com/danheld/status/1084848068204191744">two</a> <a href="https://twitter.com/danheld/status/1084848068778848256">tweets</a> are from the same source, a comment Satoshi left on a BT thread.</p>
<blockquote>
<p>As a thought experiment, imagine there was a base metal as scarce as gold but with the following properties: [not useful/no utility]. And one special, magical property: can be transported over a communications channel</p>
</blockquote>
<blockquote>
<p>If there were nothing in the world with intrinsic value that could be used as money, only scarce but no intrinsic value, I think people would still take up something. (I’m using the word scarce here to only mean limited potential supply)</p>
</blockquote>
<p>Comparing Bitcoin to a metal as scarce as gold and discussing its scarcity seems like a case for Bitcoin as a store of value. But once again, we must view it in context.</p>
<h4>Context</h4>
<p>Many of these quotes, when placed in context, aren’t as compelling as they are when standing alone. But this pair of quotes goes a step beyond that, as they’re downright deceptive. They’ve been cherry-picked. Let me post the <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/137/#10">entire comment</a>, and bold some sections Dan left out:</p>
<blockquote>
<p>As a thought experiment, imagine there was a base metal as scarce as gold but with the following properties:</p>
</blockquote>
<ul>
<li>boring grey in colour</li>
<li>not a good conductor of electricity</li>
<li>not particularly strong, but not ductile or easily malleable either</li>
<li>not useful for any practical or ornamental purpose</li>
</ul>
<blockquote>
<p>and one special, magical property:</p>
</blockquote>
<ul>
<li>can be transported over a communications channel</li>
</ul>
<blockquote>
<p>If it somehow acquired any value at all for whatever reason, then <strong>anyone wanting to transfer wealth over a long distance</strong> could buy some, <strong>transmit it</strong>, and <strong>have the recipient sell it</strong>.</p>
</blockquote>
<blockquote>
<p><strong>Maybe it could get an initial value circularly as you’ve suggested, by people foreseeing its potential usefulness for exchange</strong>.  (I would definitely want some)  Maybe collectors, any random reason could spark it.</p>
</blockquote>
<blockquote>
<p>I think the traditional qualifications for money were written with the assumption that there are so many competing objects in the world that are scarce, an object with the automatic bootstrap of intrinsic value will surely win out over those without intrinsic value.  But if there were nothing in the world with intrinsic value that could be used as money, only scarce but no intrinsic value, I think people would still take up something.</p>
</blockquote>
<blockquote>
<p>(I’m using the word scarce here to only mean limited potential supply)</p>
</blockquote>
<p>That’s right, Satoshi is talking about Bitcoin obtaining value in the context of transferring that value and having the recipient sell it, not storing value! He goes a step further and says that Bitcoin “maybe” could get an initial value, but only because people would see “its potential usefulness for <em>exchange</em>.”</p>
<p>Now these quotes don’t sound so strong for store of value and instead are supporting payments as well. I’ll give one mention to each side:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">4</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">2</td>
</tr>
</tbody>
</table>
<h3>Source #5: Cryptography Mailing List, Bitcoin v0.1 released, Reply</h3>
<p>Dan’s <a href="https://twitter.com/danheld/status/1084848069470957568">last direct quote</a> of Satoshi regarding store of value is from a reply he made in <a href="https://satoshi.nakamotoinstitute.org/emails/cryptography/17/">email</a>.</p>
<blockquote>
<p>It might make sense just to get some in case it catches on. If enough people think the same way, that becomes a self fulfilling prophecy</p>
</blockquote>
<h4>Context</h4>
<p>This is another case of cherry picking. Dan conveniently left off the very next sentence. Look at the full paragraph:</p>
<blockquote>
<p>It might make sense just to get some in case it catches on. If<br>
enough people think the same way, that becomes a self fulfilling<br>
prophecy. Once it gets bootstrapped, <strong>there are so many<br>
applications if you could effortlessly pay a few cents to a<br>
website as easily as dropping coins in a vending machine</strong>.</p>
</blockquote>
<p>That’s about as strong an endorsement of the payment vision as possible.</p>
<p>That’s not all. Satoshi’s comments before Dan’s cherry pick contains loads of examples of Satoshi explaining all the ways Bitcoin could be used for payments:</p>
<blockquote>
<p>I would be surprised if 10 years from now we’re not using<br>
electronic currency in some way, now that we know a way to do it<br>
that won’t inevitably get dumbed down when the trusted third party<br>
gets cold feet.</p>
</blockquote>
<blockquote>
<p>It could get started in a narrow niche like <strong>reward points,<br>
donation tokens, currency for a game or micropayments for adult<br>
sites</strong>. Initially <strong>it can be used in proof-of-work applications<br>
for services that could almost be free but not quite</strong>.</p>
</blockquote>
<blockquote>
<p><strong>It can already be used for pay-to-send e-mail</strong>. The send dialog is<br>
resizeable and you can enter as long of a message as you like.<br>
It’s sent directly when it connects. The recipient doubleclicks<br>
on the transaction to see the full message. If someone famous is<br>
getting more e-mail than they can read, but would still like to<br>
have a way for fans to contact them, they could set up Bitcoin and<br>
give out the IP address on their website. “Send X bitcoins to my<br>
priority hotline at this IP and I’ll read the message personally.”</p>
</blockquote>
<blockquote>
<p><strong>Subscription sites that need some extra proof-of-work for their<br>
free trial so it doesn’t cannibalize subscriptions could charge<br>
bitcoins for the trial</strong>.</p>
</blockquote>
<p>Another micropayments mention. On the whole the email strongly supports the payments side, but I’ll once again add mentions to both sides:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">5</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">3</td>
</tr>
</tbody>
</table>
<p>That’s all the Satoshi quotes referencing store of value that Dan mentions. In fairness to the store of value position, I found a couple more when reading through all of Satoshi’s writings.</p>
<h3>Source #6: BT Thread / Bitcoin List email announcing version 0.3</h3>
<p>The best one I found was Satoshi <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/80/">announcing the 0.3 release</a> (and corresponding email with the same text):</p>
<blockquote>
<p>Announcing version 0.3 of Bitcoin, the P2P cryptocurrency!  Bitcoin is a digital currency using cryptography and a distributed network to replace the need for a trusted central server.  <strong>Escape the arbitrary inflation risk of centrally managed currencies!  Bitcoin’s total circulation is limited to 21 million coins.</strong></p>
</blockquote>
<p>This is the single strongest quote in favor of the store of value case that I’ve found. He doesn’t mention payments at all, talks about escaping inflation from central banks, and highlights the 21 million coin cap. This definitely adds to the store of value tally:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">6</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">3</td>
</tr>
</tbody>
</table>
<h3>Source #7: BT Thread “They want to delete the Wikipedia article”</h3>
<p>In this <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/100/">BT thread</a> Satoshi mentions two other projects Bitcoin is based on:</p>
<blockquote>
<p>Bitcoin is an implementation of Wei Dai’s b-money proposal <a href="http://weidai.com/bmoney.txt">http://weidai.com/bmoney.txt</a> on Cypherpunks <a href="http://en.wikipedia.org/wiki/Cypherpunks">http://en.wikipedia.org/wiki/Cypherpunks</a> in 1998 and Nick Szabo’s Bitgold proposal <a href="http://unenumerated.blogspot.com/2005/12/bit-gold.html">http://unenumerated.blogspot.com/2005/12/bit-gold.html</a></p>
</blockquote>
<p>Wei Dai’s b-money is an attempt to create a medium of exchange for use in a crypto-anarchy, and leans towards the payments side. Szabo’s bit gold leans towards store of value. Add one to each:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">7</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">4</td>
</tr>
</tbody>
</table>
<h3>Source #8: Cryptography Mailing List, Bitcoin P2P e-cash paper, reply #1</h3>
<p>The last quote I found mentioning store of value was an <a href="https://satoshi.nakamotoinstitute.org/emails/cryptography/5/">old email</a> from the cryptography mailing list:</p>
<blockquote>
<p>The fact that new coins are produced means the money supply increases by a planned amount, but this does not necessarily result in inflation. If the supply of money increases at the same rate that the number of people using it increases, prices remain stable. If it does not increase as fast as demand, there will be deflation and early holders of money will see its value increase.</p>
</blockquote>
<blockquote>
<p>Coins have to get initially distributed somehow, and a constant rate seems like the best formula.</p>
</blockquote>
<p>Once again, this is in response to a question, not a proactive statement that Bitcoin was built to be a store of value. Also, Satoshi’s final sentence makes the coin distribution sound closer to an afterthought than something of central importance.</p>
<p>However it does mention “early holders of money will see its value increase” so I’ll add this to the store of value mentions:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">8</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">4</td>
</tr>
</tbody>
</table>
<h3>Genesis Block Comment</h3>
<p>Satoshi included a message in the genesis block of the Bitcoin blockchain:</p>
<blockquote>
<p>The Times 03/Jan/2009 Chancellor on brink of second bailout for banks</p>
</blockquote>
<p>This served as proof that the block was created at that date (or later).</p>
<p>Store of value proponents claim this is evidence that Satoshi built Bitcoin as a store of value. I agree that it shows Satoshi didn’t like central banking / financial institutions, but I find it unconvincing as evidence he built Bitcoin as a store of value for several reasons:</p>
<ol>
<li>It serves a clear purpose of proving the date and needs no other explanation.</li>
<li>Satoshi was limited to whatever headlines happened to be printed around the time he published. He wasn’t able to convey any nuanced opinion about his creation by picking a headline someone else wrote; believing otherwise is reading tea leaves.</li>
<li>Satoshi making a statement about banks and bailouts doesn’t prove that store of value was his main goal. Satoshi states that he built Bitcoin to “allow online payments to be sent directly from one party to another without going through a financial institution,” so his choice of a failing financial institution as an example isn’t surprising, and it doesn’t preclude him believing in Bitcoin for payments.</li>
<li>There are more direct ways to make a clear statement than encoding it in hex and - as far as I can determine - never making reference to it again.</li>
</ol>
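<p>On that last point, “encoding it in hex” just means the headline was embedded as raw bytes in the coinbase input of the genesis block. A minimal sketch of the round trip (the actual coinbase script wraps the text in a few extra push bytes):</p>
<pre><code class="language-python"># The headline Satoshi embedded in the genesis block's coinbase input.
headline = "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"

# "Encoding it in hex" is just the ASCII bytes of the string written in base 16.
encoded = headline.encode("ascii").hex()

# Decoding the hex recovers the original text exactly.
assert bytes.fromhex(encoded).decode("ascii") == headline
print(len(headline.encode("ascii")))  # 69 bytes of headline text
</code></pre>
<p>Anyone inspecting the raw block data sees only that byte string; any further meaning has to be read into it.</p>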
<h3>Store of Value Summary</h3>
<p>Those are all the examples I found which could possibly be interpreted as Satoshi speaking of Bitcoin as a store of value, nearly always indirectly. Across everything he wrote, there were only eight such examples, and just one of them was unambiguously endorsing the store of value use case (the v0.3 announcement) and not mentioning payments at all.</p>
<p>This handful of quotes isn’t compelling to me, but perhaps you think it’s sufficient. If you stopped reading now - and perhaps squinted - maybe you could see how Bitcoin was built as a store of value?</p>
<p>That’s exactly the problem. Instead of looking at the totality of Satoshi’s writings to be as objective as possible, Dan Held chose to promote only the handful of quotes which supported his point and simply ignored everything else Satoshi ever said about Bitcoin being used for payments.</p>
<p>Satoshi’s comments about payments are so abundant that this sin of omission was likely committed willfully. Taken together with the obvious cases of cherry-picking above - often only a sentence away from Satoshi mentioning using Bitcoin for payments - it’s clear that Dan isn’t interested in the truth of Satoshi’s intentions, only in supporting his own narrative.</p>
<p>Again, don’t take my word for it, let’s keep looking at the evidence.</p>
<h2>Evidence for Payments</h2>
<p>Satoshi mentions payments or references using Bitcoin for commerce a total of 34 times in emails, forum posts, and the original source code.</p>
<p>It’s not only Satoshi’s numerous mentions of using Bitcoin for payments that are notable, it’s also what Satoshi didn’t say. The BitcoinTalk forums are full of people trying to get Bitcoin accepted for commerce and used as digital cash. Never <em>once</em> did Satoshi step in to correct them or suggest they weren’t using Bitcoin properly. Quite the opposite. As you’ll see in the following examples, Satoshi encouraged these efforts and joined in many of these threads.</p>
<h3>Source #9: BT Thread “Bitcoin minting is thermodynamically perverse”</h3>
<p>While Satoshi never uttered the phrase “store of value,” he did use the term “medium of exchange,” though only once. And that one use is illuminating. It comes from a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/167/#29">BT thread</a> about Bitcoin mining:</p>
<blockquote>
<p>It’s the same situation as gold and gold mining.  The marginal cost of gold mining tends to stay near the price of gold.  Gold mining is a waste, but that waste is far less than the utility of <strong>having gold available as a medium of exchange</strong>.</p>
</blockquote>
<blockquote>
<p>I think the case will be the same for Bitcoin.  <strong>The utility of the exchanges made possible by Bitcoin will far exceed the cost of electricity used</strong>.  Therefore, not having Bitcoin would be the net waste.</p>
</blockquote>
<p>Store of value proponents are fond of analogizing Bitcoin as gold or precious metals, and like to quote the few cases where Satoshi makes the same analogy. However, in this quote Satoshi clearly states what he thinks the primary utility of gold is - as a medium of exchange, not a store of value!</p>
<p>He further confirms this by saying “The utility of the exchanges made possible by Bitcoin…” rather than pointing to gold’s or Bitcoin’s ability to store value.</p>
<p>Money has a combination of features, so this doesn’t prove that Satoshi only cares about the medium of exchange properties of gold or Bitcoin. But it does show that store of value isn’t top of his mind in terms of their utility.</p>
<p>Add one to payments:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">8</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">5</td>
</tr>
</tbody>
</table>
<h3>Source #10: Cryptography Mailing List, Bitcoin P2P e-cash paper, original email</h3>
<p>When Satoshi <a href="https://satoshi.nakamotoinstitute.org/emails/cryptography/1/">first emailed</a> the cryptography mailing list, his introductory sentence stated:</p>
<blockquote>
<p>I’ve been working on a new electronic cash system that’s fully<br>
peer-to-peer, with no trusted third party.</p>
</blockquote>
<p>His opening line is about electronic cash, and there’s no mention of store of value anywhere in that email. Add one to payments:</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">8</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">6</td>
</tr>
</tbody>
</table>
<h3>Source #11: Cryptography Mailing List, Bitcoin P2P e-cash paper, reply #2</h3>
<p>In the same thread Satoshi <a href="https://satoshi.nakamotoinstitute.org/emails/cryptography/2/">was asked</a> how Bitcoin could scale. He replied:</p>
<blockquote>
<p>Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day. Only people trying to create new coins would need to run network nodes. At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node.</p>
</blockquote>
<blockquote>
<p>The bandwidth might not be as prohibitive as you think. A typical transaction would be about 400 bytes (ECC is nicely compact). Each transaction has to be broadcast twice, so lets say 1KB per transaction. Visa processed 37 billion transactions in FY2008, or an average of 100 million transactions per day. That many transactions would take 100GB of bandwidth, or the size of 12 DVD or 2 HD quality movies, or about $18 worth of bandwidth at current prices.</p>
</blockquote>
<p>Clearly in support of payments and not store of value.</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">8</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">7</td>
</tr>
</tbody>
</table>
<h3>Source #12: Cryptography Mailing List, Bitcoin P2P e-cash paper, reply #3</h3>
<p>Continuing the thread, Satoshi receives <a href="https://satoshi.nakamotoinstitute.org/emails/cryptography/3/">another challenge</a>, this time about how secure Bitcoin is against attack. He responds:</p>
<blockquote>
<p>Even if a bad guy does overpower the network, it’s not like he’s instantly rich. All he can accomplish is to take back money he himself spent, like bouncing a check. To exploit it, he would have to <strong>buy something from a merchant</strong>, wait till it ships, then overpower the network and try to take his money back.</p>
</blockquote>
<p>This is one of the first examples of him explicitly mentioning merchants accepting Bitcoin, a theme he comes back to time and time again.</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">8</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">8</td>
</tr>
</tbody>
</table>
<p>We’re all tied up now, and there’s still a long way to go on the payments side. I’ll stop tallying individually and give you the final tally at the end.</p>
<h3>Source #13: BT Thread “Ummmm… where did my bitcoins go?”</h3>
<p>I’ve already shown that Satoshi mentioned micropayments in some of the statements offered as evidence for store of value, but that’s not the only time he talked about micropayments.</p>
<p>In <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/48/">this thread</a> Satoshi mentions them again:</p>
<blockquote>
<p>Creating an account on a website is a lot easier than installing and learning to use software, and a more familiar way of doing it for most people.  The only disadvantage is that you have to trust the site, but <strong>that’s fine for pocket change amounts for micropayments and misc expenses</strong>.  It’s an easy way to get started and if you get larger amounts then you can upgrade to the actual bitcoin software.</p>
</blockquote>
<h3>Source #14: BT Thread “Flood attack 0.00000001 BC” Comment #1</h3>
<p><a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/317/">More discussion</a> of micropayments.</p>
<blockquote>
<p><strong>Bitcoin is practical for smaller transactions than are practical with existing payment methods</strong>.  Small enough to include what you might call the top of the micropayment range.  But it doesn’t claim to be practical for arbitrarily small micropayments.</p>
</blockquote>
<h3>Source #15: BT Thread “Flood attack 0.00000001 BC” Comment #2</h3>
<p>Same thread, with a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/318/">different comment</a> where Satoshi expands on his vision for the future of micropayments on Bitcoin:</p>
<blockquote>
<p>Forgot to add the good part about micropayments.  <strong>While I don’t think Bitcoin is practical for smaller micropayments right now, it will eventually be as storage and bandwidth costs continue to fall.</strong>  If Bitcoin catches on on a big scale, it may already be the case by that time.  Another way they can become more practical is if I implement client-only mode and the number of network nodes consolidates into a smaller number of professional server farms.  Whatever size micropayments you need will eventually be practical.  I think in 5 or 10 years, the bandwidth and storage will seem trivial.</p>
</blockquote>
<h3>Source #16: BT Thread “Flood attack 0.00000001 BC” Comment #3</h3>
<p>Same thread, with a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/319/">different comment</a>, this time showing an example of the type of thing he expected people to purchase with Bitcoin:</p>
<blockquote>
<p><strong>You pay for, say, 1000 pages or images or downloads or searches or whatever at a time.  When you’ve used up your 1000 pages, you pay for another 1000 pages.</strong>  If you only use 1 page, then you have 999 left that you may never use, but it’s not a big deal because the cost per 1000 is still small.</p>
</blockquote>
<blockquote>
<p>Or you could pay per day.  The first time you access the site on a given day, you pay for 24 hours of access.</p>
</blockquote>
<h3>Source #17: BT Thread “Potential disaster scenario”</h3>
<p>Another <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/182/">micropayments mention</a>:</p>
<blockquote>
<p>Case 3 comes into play for small amounts.  The overhead of doing an exchange doesn’t make sense if you just need a <strong>small bit of pocket change for incidental micropayments.</strong></p>
</blockquote>
<h3>Source #18: Cryptography Mailing List, Bitcoin P2P e-cash paper, reply #4</h3>
<p>In this <a href="https://satoshi.nakamotoinstitute.org/emails/cryptography/6/">long reply</a> to various questions about Bitcoin, Satoshi mentions shipping goods:</p>
<blockquote>
<p>Receivers of transactions will normally need to hold transactions for perhaps an hour or more to allow time for this kind of possibility to be resolved. They can still re-spend the coins immediately, but <strong>they should wait before taking an action such as shipping goods.</strong></p>
</blockquote>
<h3>Source #19: Cryptography Mailing List, Bitcoin P2P e-cash paper, reply #5</h3>
<p>In the same <a href="https://satoshi.nakamotoinstitute.org/emails/cryptography/14/">email thread</a> he mentions merchants accepting Bitcoin:</p>
<blockquote>
<p>Information based goods like access to website or downloads are<br>
non-fencible. Nobody is going to be able to make a living off<br>
stealing access to websites or downloads. They can go to the file<br>
sharing networks to steal that. Most instant-access products aren’t<br>
going to have a huge incentive to steal.</p>
</blockquote>
<blockquote>
<p><strong>If a merchant actually has a problem with theft</strong>, they can make the<br>
customer wait 2 minutes, or wait for something in e-mail, which many<br>
already do. If they really want to optimize, and it’s a large<br>
download, they could cancel the download in the middle if the<br>
transaction comes back double-spent. If it’s website access,<br>
typically it wouldn’t be a big deal to let the customer have access<br>
for 5 minutes and then cut off access if it’s rejected. Many such<br>
sites have a free trial anyway.</p>
</blockquote>
<h3>Source #20: BT Thread “A newb’s test - anyone want to buy a picture for $1?”</h3>
<p><a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/15/">Another mention</a> of merchants and payments:</p>
<blockquote>
<p>The recommended ways to do a <strong>payment for an order</strong>:</p>
</blockquote>
<ol>
<li><strong>The merchant</strong> has a static IP, the customer sends to it with a comment.</li>
<li><strong>The merchant</strong> creates a new bitcoin address, gives it to the customer, the customer sends to that address.  This will be the standard way for website software to do it.</li>
</ol>
<h3>Source #21: BT Thread “Payment server”</h3>
<p><a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/18/">Another mention</a> of using Bitcoin for payments for a shop owner:</p>
<blockquote>
<p>That’s the right way to do it as riX says.  The software can generate a new bitcoin address whenever you need one for each payment.  “Please send X bc to [single-use bitcoin address] to <strong>complete your order</strong>”  When the server receives that amount to the bitcoin address, that could trigger it to automatically fulfil the order or e-mail <strong>the shop owner</strong>.</p>
</blockquote>
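<p>The flow Satoshi describes is simple enough to sketch: one single-use address per order, with fulfillment triggered when payment to that address arrives. This is a hypothetical illustration; <code>getNewAddress</code> and the <code>fulfil</code> callback stand in for a wallet RPC and shop or e-mail logic, and are not real APIs:</p>

```javascript
// Hypothetical sketch of the order flow from the quote: a fresh single-use
// address per order, fulfillment when payment to that address is received.
// `getNewAddress` and `fulfil` are injected stand-ins, not real APIs.

const orders = new Map(); // bitcoin address -> { orderId, amountBtc, fulfilled }

function createOrder(orderId, amountBtc, getNewAddress) {
  const address = getNewAddress();
  orders.set(address, { orderId, amountBtc, fulfilled: false });
  return `Please send ${amountBtc} BTC to ${address} to complete your order`;
}

function onPaymentReceived(address, amountBtc, fulfil) {
  const order = orders.get(address);
  if (order && !order.fulfilled && amountBtc >= order.amountBtc) {
    order.fulfilled = true; // single-use address: fulfil at most once
    fulfil(order.orderId);  // ship the goods or e-mail the shop owner
  }
}
```

<p>Because each address is generated for exactly one order, an incoming payment identifies itself; this is the same design concern Satoshi raises again in Source #27 about blank, unidentified payments.</p>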
<h3>Source #22: BT Thread “URI-scheme for bitcoin”</h3>
<p>Satoshi couldn’t be clearer about his desire for Bitcoin to be used for retail payments than this short statement in a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/30/">thread</a> about creating a URI scheme:</p>
<blockquote>
<p>That would be nice at point-of-sale.  The cash register displays a QR-code encoding a bitcoin address and amount on a screen and you photo it with your mobile.</p>
</blockquote>
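<p>This is essentially how mobile Bitcoin payments came to work: the point-of-sale idea was later standardized as the <code>bitcoin:</code> URI scheme (BIP 21), which wallets render as QR codes. A minimal encoder sketch, using BIP 21’s own example address rather than a live one:</p>

```javascript
// Minimal sketch of a BIP 21 "bitcoin:" payment URI, the standardized form
// of the QR-code-at-the-register idea Satoshi describes above.

function bitcoinUri(address, amountBtc, label) {
  const params = new URLSearchParams();
  if (amountBtc !== undefined) params.set("amount", amountBtc.toString());
  if (label) params.set("label", label);
  const query = params.toString();
  return `bitcoin:${address}` + (query ? `?${query}` : "");
}

// Address is the illustrative example from BIP 21 itself.
console.log(bitcoinUri("175tWpb8K1S7NmH4Zx6rewF9WQrcZv245W", 0.01, "Coffee"));
// bitcoin:175tWpb8K1S7NmH4Zx6rewF9WQrcZv245W?amount=0.01&label=Coffee
```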
<h3>Source #23: BT Thread “Idea for file hosting and proxy services”</h3>
<p>Satoshi started <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/42/">this thread</a> to propose an idea to create an image and file hosting service that charges Bitcoins. At the end he specifically states that this would be useful for anonymous payments.</p>
<blockquote>
<p><strong>It would be nice if we made some free PHP code for an image and file hosting service that charges Bitcoins</strong>.  Anyone with some extra bandwidth quota could throw it on their webserver and run it.  Users could finally pay the minor fee to cover bandwidth cost and avoid the limits and hassles.  Ideally, it should be MIT license or public domain.</p>
</blockquote>
<blockquote>
<p><strong>Services like this would be great for anonymous users, who have trouble paying for things</strong>.</p>
</blockquote>
<h3>Source #24: BT Thread “Exchange Methods”</h3>
<p>Satoshi comments in a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/44/">thread</a> about Liberty Reserve and Pecunix, pointing out Bitcoin’s usefulness for small, anonymous transactions:</p>
<blockquote>
<p>Bitcoin has unique properties that would be complementary.  LR/Pecunix are easy to spend anonymously, but hard to buy anonymously and not worth the trouble to buy in small amounts.  <strong>Bitcoin, on the other hand, is easy to get in small amounts anonymously.  It would be convenient to buy LR/Pecunix with bitcoins rather than through conventional payment methods.</strong></p>
</blockquote>
<blockquote>
<p>Most customers who convert to LR to buy something would <strong>probably ask the seller first if they accept Bitcoin, encouraging them to start accepting it</strong>.</p>
</blockquote>
<h3>Source #25: BT Thread “CLI bitcoin generation”</h3>
<p>Satoshi <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/54/">mentions</a> the work he’s been focusing on recently:</p>
<blockquote>
<p><strong>So far I’ve concentrated on functions for web merchants</strong>, not so much on stuff for remote management of headless coin generators yet.</p>
</blockquote>
<h3>Source #26: BT Thread “JSON-RPC programming tips using labels”</h3>
<p>Satoshi <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/59/">mentions</a> several examples of how people might be using Bitcoin to sell various things:</p>
<blockquote>
<p><strong>If you’re selling digital goods and services</strong>, where you don’t lose much if someone gets a free access, and it can’t be resold for profit, I think you’re fine to accept 0 confirmations.</p>
</blockquote>
<blockquote>
<p>It’s mostly only if you were selling gold or currency that you’d need multiple confirmations.</p>
</blockquote>
<h3>Source #27: BT Thread “Hostnames instead of IP Addresses”</h3>
<p>Satoshi <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/60/">discussed merchants</a> using Bitcoin:</p>
<blockquote>
<p>Problem is, I think <strong>merchants</strong> would still prefer to use bitcoin addresses to be certain they know what the payment is for.  You simply cannot count on users to enter the right thing in the comment fields to identify the transaction.  It would only approach practical if we had a mailto style link that prepopulates the comment field with the order number, but then the link could just as well be a bitcoin address.</p>
</blockquote>
<blockquote>
<p>Just having an open bitcoin server at <a href="http://domain.com">domain.com</a> that users could send unidentified payments to would be too much of a liability.  <strong>Regular users aren’t used to the idea of having to identify the payment.  Merchants would get too many blank payments followed by “I paid you, where’s my stuff?!” a week later.</strong></p>
</blockquote>
<h3>Source #28: BT Thread “Bitcoin mobile.”</h3>
<p>In a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/64/">thread</a> discussing using Bitcoin on mobile devices, Satoshi agrees that sending small amounts to custodians isn’t a problem because it’s “walking around money for incidental expenses.”</p>
<blockquote>
<blockquote>
<p>You can of course use services like <a href="http://vekja.net">vekja.net</a> or <a href="http://mybitcoin.com">mybitcoin.com</a> on a mobile browser, depositing money there to the extent you trust them.</p>
</blockquote>
</blockquote>
<blockquote>
<p>I think that’s the best option right now.  <strong>Like cash, you don’t keep your entire net worth in your pocket, just walking around money for incidental expenses.</strong></p>
</blockquote>
<h3>Source #29: BT Thread “Website integration for bitcoin”</h3>
<p>In this <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/103/">thread</a> about creating an integration to easily allow websites to integrate Bitcoin for payments, Satoshi laments that he’s been trying to get someone to build a tool for the accounting aspect of this so that people aren’t re-inventing the wheel:</p>
<blockquote>
<p>I’ve been trying to encourage someone to write and release some sample Python code showing the recommended way to do the typical accounting stuff, but to no avail.  It would be nice if you didn’t have to re-invent the wheel like you’re doing here.</p>
</blockquote>
<h3>Source #30: BT Thread “Sample account system using JSON-RPC needed”</h3>
<p>Only hours after the previous comment, he posted a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/112/">new thread</a> asking someone to build such a system:</p>
<blockquote>
<p>If you’re requiring more than 0 confirmations, it’s nice if you show the current balance (0 confirmations) and the available balance (1 or more confirmations), so they can immediately see that their payment is acknowledged.  Not all sites need to wait for confirmations, so the dual current &amp; available should be optional.  <strong>Most sites selling digital goods are fine to accept 0 confirmations.</strong></p>
</blockquote>
<blockquote>
<p>A nice sample app for this would be a simple bank site, which would have the above, plus the option to send a payment to a bitcoin address.  The sample code should be the simplest possible with the minimum extra stuff to make it a working site.</p>
</blockquote>
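<p>The dual balance display he requests can be sketched in a few lines. This is an illustration, not his sample app: “current” counts every received transaction (0 confirmations), while “available” counts only those meeting the site’s confirmation policy; the transactions are plain objects rather than real wallet data:</p>

```javascript
// Sketch of the dual balance display from the quote: "current" at
// 0 confirmations, "available" at >= minConf confirmations.

function balances(txs, minConf = 1) {
  let current = 0, available = 0;
  for (const tx of txs) {
    current += tx.amount;                                    // acknowledged immediately
    if (tx.confirmations >= minConf) available += tx.amount; // spendable per site policy
  }
  return { current, available };
}

const received = [
  { amount: 5, confirmations: 12 }, // long confirmed
  { amount: 2, confirmations: 0 },  // just broadcast, visible but unconfirmed
];
console.log(balances(received)); // { current: 7, available: 5 }
```

<p>A site selling “gold or currency,” per Source #26, would simply pass a larger <code>minConf</code>; a site selling digital goods could treat <code>current</code> as good enough.</p>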
<h3>Source #31: BT Thread “Accounts example code”</h3>
<p>Satoshi shared some sample pseudocode in a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/257/">new thread</a> and the code shows an example of Bitcoin used for commerce:</p>
<pre><code class="language-javascript">// if you make a sale, move the money from their account to your &quot;&quot; account
if (move(username, &quot;&quot;, amount, 6, &quot;purchased item&quot;))
    SendTheGoods()
</code></pre>
<h3>Source #32: BT Thread “The Niche List”</h3>
<p>This <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/222/">thread</a> started with the following idea from user “kiba”:</p>
<blockquote>
<p>This is Operation Economic Growth. Our mission is to grow the bitcoin economy by making everyone specialize in a narrow range of good and services.</p>
</blockquote>
<p>Satoshi jumped into the thread later to suggest how to do this (quoting another user first):</p>
<blockquote>
<blockquote>
<ol>
<li>Download site like rapidshare and other crappy host. Inconvenient captcha and required paypal. Bitcoin can possibly take both roles and streamline the whole process.</li>
</ol>
</blockquote>
</blockquote>
<blockquote>
<p>Repeating myself here, but there is open source software for that, so it would just be a matter of <strong>bolting on a Bitcoin payment mechanism</strong>.  One good one I found was Mihalism Multi Host.  It’s designed as a free host, so it would just need a few tweaks to loosen up restrictions consistent with paid use.</p>
</blockquote>
<h3>Source #33: BT Thread “Porn”</h3>
<p>In a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/159/">thread</a> about using Bitcoin to buy porn, Satoshi endorses the idea:</p>
<blockquote>
<p>Bitcoin would be convenient for people who don’t have a credit card or don’t want to use the cards they have, either don’t want the spouse to see it on the bill or don’t trust giving their number to “porn guys”, or afraid of recurring billing.</p>
</blockquote>
<h3>Source #34: BT Thread “The case for removing IP transactions”</h3>
<p>In a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/220/">thread</a> discussing removing the ability to send Bitcoin to IP addresses, Satoshi makes several references to storefronts, customers, and payments:</p>
<blockquote>
<p>In storefront cases, you would typically only want customers to send payments through your automated system that only hands out bitcoin addresses associated with particular orders and accounts.  Random unidentified payments volunteered to the server’s IP address would be unhelpful.</p>
</blockquote>
<h3>Source #35: BT Thread “Bitcoin snack machine (fast transaction problem)”</h3>
<p>In this interesting <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/114/">thread</a>, the original poster “Insti” asks the following question:</p>
<pre><code>How would a Bitcoin snack  machine work?

1) You want to walk up to the machine. Send it a bitcoin.
2) ?
3) Walk away eating your nice sugary snack. (Profit!)


You don't want to have to wait an hour for you transaction to be confirmed.
The vending machine company doesn't want to give away lots of free candy.

How does step 2 work?
</code></pre>
<p>Satoshi’s solution is to have a payment processing company manage these transactions:</p>
<blockquote>
<p>I believe it’ll be possible for a payment processing company to provide as a service the rapid distribution of transactions with good-enough checking in something like 10 seconds or less.</p>
</blockquote>
<p>Later in another comment he says:</p>
<blockquote>
<p>the vending machine talks to a big service provider (aka payment processor) that provides this service to many merchants.  Think something like a credit card processor with a new job.  They would have many well connected network nodes.</p>
</blockquote>
<p>Never does he say “Bitcoin wasn’t built for retail payments / coffee / vending machines!” On the contrary, he actively proposes an idea for a Bitcoin payment processing company.</p>
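<p>The “good-enough checking” he describes can be modeled as a watch window: a well-connected processor accepts a zero-confirmation payment only if no conflicting spend of the same inputs shows up in relayed traffic within a few seconds. A toy sketch, with the network feed simulated as plain objects rather than a real node interface:</p>

```javascript
// Toy model of fast payment acceptance by a well-connected processor:
// accept a zero-confirmation payment only if no other observed transaction
// spends any of the same inputs during the watch window.

function acceptsFastPayment(payment, observedTxs) {
  const spent = new Set(payment.inputs);
  for (const tx of observedTxs) {
    if (tx.txid === payment.txid) continue;              // the payment itself
    if (tx.inputs.some(i => spent.has(i))) return false; // conflicting double spend seen
  }
  return true; // nothing conflicting relayed: good enough for a candy bar
}

const payment = { txid: "a1", inputs: ["utxo-1"] };
console.log(acceptsFastPayment(payment, [payment]));              // true
console.log(acceptsFastPayment(payment,
  [payment, { txid: "a2", inputs: ["utxo-1"] }]));                // false
```

<p>The bet, as Satoshi frames it, is that a processor seeing most of the network’s relay traffic will catch a double spend within seconds far more reliably than credit card networks catch fraud.</p>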
<h3>Source #36: BT Thread “Scalability and transaction rate”</h3>
<p>The last source wasn’t a one-off occurrence. In a <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/127/">thread</a> discussing Bitcoin’s scalability and transaction rate, Satoshi makes clear that the current situation where everyone runs a full node isn’t the intended design, and then points back to the vending machine discussion and endorses the payment processor idea again:</p>
<blockquote>
<p>The current system where every user is a network node is not the intended configuration for large scale.  That would be like every Usenet user runs their own NNTP server.  The design supports letting users just be users.  The more burden it is to run a node, the fewer nodes there will be.  Those few nodes will be big server farms.  The rest will be client nodes that only do transactions and don’t generate.</p>
</blockquote>
<blockquote>
<blockquote>
<p>Besides, 10 minutes is too long to verify that payment is good.  It needs to be as fast as swiping a credit card is today.</p>
</blockquote>
</blockquote>
<blockquote>
<p>See the snack machine thread, I outline how a payment processor could verify payments well enough, actually really well (much lower fraud rate than credit cards), in something like 10 seconds or less.  If you don’t believe me or don’t get it, I don’t have time to try to convince you, sorry.</p>
</blockquote>
<h3>Source #37: BT Thread “Escrow”</h3>
<p>Satoshi frequently mentioned that Bitcoin was built to allow for things like escrow. In one <a href="https://satoshi.nakamotoinstitute.org/posts/bitcointalk/threads/169/">thread</a> that he started, he explains how it could be used, and his example is focused on real-world commerce:</p>
<blockquote>
<p>The basic escrow: The buyer commits a payment to escrow. The seller receives a transaction with the money in escrow, but he can’t spend it until the buyer unlocks it. The buyer can release the payment at any time after that, which could be never. This does not allow the buyer to take the money back, but it does give him the option to burn the money out of spite by never releasing it. The seller has the option to release the money back to the buyer.</p>
</blockquote>
<blockquote>
<p>While this system does not guarantee the parties against loss, it takes the profit out of cheating.</p>
</blockquote>
<blockquote>
<p>If the seller doesn’t <strong>send the goods</strong>, he doesn’t get paid. The buyer would still be out the money, but at least the seller has no monetary motivation to stiff him.</p>
</blockquote>
<p>In a later comment in this thread, he also repeatedly talks about “consumers” when discussing the escrow.</p>
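<p>The mechanism in the quote can be modeled as a tiny state machine: once funded, the money moves only if the buyer releases it to the seller or the seller returns it to the buyer; neither party can unilaterally take it. This is a sketch of the states only, not of actual Bitcoin script:</p>

```javascript
// Toy model of Satoshi's "basic escrow": a funded escrow has exactly three
// outcomes: buyer releases (seller paid), seller refunds (buyer repaid),
// or neither acts (the money is effectively burned by inaction).

class BasicEscrow {
  constructor(amount) {
    this.amount = amount;
    this.state = "held"; // buyer has committed the payment to escrow
  }
  buyerRelease() {       // only the buyer can pay the seller
    if (this.state === "held") this.state = "paid-to-seller";
    return this.state;
  }
  sellerRefund() {       // only the seller can return the money
    if (this.state === "held") this.state = "refunded-to-buyer";
    return this.state;
  }
}

const escrow = new BasicEscrow(1.0);
escrow.buyerRelease();
console.log(escrow.state); // "paid-to-seller"; a later refund attempt changes nothing
```

<p>Because no transition lets either party seize the funds, cheating yields no profit, which is exactly the property Satoshi highlights.</p>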
<h3>Source #38: Original Satoshi Code Included Distributed Marketplace</h3>
<p>Satoshi started building a distributed marketplace in the original code.</p>
<p>If you <a href="https://github.com/bitcoin/bitcoin/commit/5253d1ab77fab1995ede03fb934edd67f1359ba8#diff-d33e1dae0bcc9f4636b0d11678606702">view the commit</a> where this marketplace was removed, you can see that it was clearly intended to facilitate commerce: file names such as “market,” class names referencing “Review” and “Product,” and even mentions of advertisements.</p>
<h2>Final Tally</h2>
<p>I reviewed all 260 forum threads, 63 emails, and the original source code for direct or indirect mentions from Satoshi about Bitcoin serving as a store of value or as a payment method. Here’s the final tally.</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Store of Value</td>
<td style="text-align:right">8</td>
</tr>
<tr>
<td>Payments</td>
<td style="text-align:right">34</td>
</tr>
</tbody>
</table>
<p>Across these sources, Satoshi mentioned payments more than four times as often as store of value.</p>
<p>We can break these numbers down further by whether each source mentions solely store of value, solely payments, or both.</p>
<table>
<thead>
<tr>
<th>BTC Use Case</th>
<th style="text-align:right">Satoshi Mentions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Solely Store of Value</td>
<td style="text-align:right">4</td>
</tr>
<tr>
<td>Both SoV and Payments</td>
<td style="text-align:right">4</td>
</tr>
<tr>
<td>Solely Payments</td>
<td style="text-align:right">30</td>
</tr>
</tbody>
</table>
<p>From these sources, I found only four instances where Satoshi’s statements could be interpreted solely in favor of Bitcoin as a store of value, but thirty that could be interpreted solely in favor of Bitcoin being used for payments.</p>
<h3>Timeline</h3>
<p>This timeline shows Satoshi’s statements by category, from his announcement of Bitcoin in late 2008 until his disappearance near the end of 2010. On several days Satoshi made multiple statements supportive of the payments side; these overlap on the timeline and so are not displayed individually.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2019-06-06-satoshi-analysis/image/SatoshiTimeline.png" alt="Satoshi Mentioning Bitcoin"></p>
<p>This evidence might be sufficient for you to disregard the claim “Bitcoin was purpose-built to first be a Store of Value.” I can’t see anyone honestly looking at Satoshi’s words and really believing he didn’t build this for payments. But we aren’t finished yet. There’s one last piece of evidence.</p>
<h2>The Whitepaper</h2>
<p>Store of value proponents have a neat rhetorical trick whenever anyone mentions the <a href="https://bitcoin.org/bitcoin.pdf">Bitcoin whitepaper</a>: They mock the person who mentions the whitepaper.</p>
<p>This is definitely childish, but there’s a kernel of truth behind their ridicule. Satoshi’s original whitepaper shouldn’t be worshiped, nor should it be interpreted as some sort of binding document for what Bitcoin should be today. The whitepaper is describing how Bitcoin was intended to function more than a decade ago, not how it functions today nor necessarily how it should function tomorrow.</p>
<p>What should the whitepaper be used for? Well, there’s really only one thing that we can say with confidence that the whitepaper can be used for: <em>Understanding how Satoshi viewed Bitcoin</em>.</p>
<p>The first piece of communication Satoshi made publicly was an email posting a link to the whitepaper along with the paper’s abstract. The first forum post also contained a link to the whitepaper. He fully controlled his own words and what he chose to include in the paper. Why wouldn’t we read the whitepaper?</p>
<p>Suggesting that the whitepaper isn’t important to understanding Satoshi’s view of Bitcoin would be like suggesting that it’s unimportant to <a href="https://www.csee.umbc.edu/courses/471/papers/turing.pdf">read</a> “Computing Machinery and Intelligence” to understand Turing’s view of artificial intelligence. Both authors took the time to condense their thoughts into a comprehensible format for others to read and understand.</p>
<p>I put this section last so that people with whitepaper-aversion syndrome would read through all the rest of the evidence before getting here. If you don’t think we should read the whitepaper to understand how Satoshi viewed Bitcoin (a position that doesn’t really make sense), feel free to rely on the other 38 sources above and ignore this section completely.</p>
<h3>Whitepaper summary</h3>
<p>The whitepaper is long enough to not fully quote, but I’ll pull out the quotes relevant to payments or store of value.</p>
<h4>Title</h4>
<blockquote>
<p>Bitcoin: A Peer-to-Peer Electronic Cash System</p>
</blockquote>
<p>This is straightforward. Satoshi succinctly states that Bitcoin is a system where peers can transfer cash electronically.</p>
<h4>Abstract</h4>
<blockquote>
<p>A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution …</p>
</blockquote>
<p>The very first sentence is all about allowing online payments.</p>
<h4>Introduction</h4>
<blockquote>
<p><strong>Commerce on the Internet</strong> has come to rely almost exclusively on financial institutions serving as trusted third parties to process <strong>electronic payments</strong>. …</p>
</blockquote>
<blockquote>
<p>The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for <strong>small casual transactions</strong> …</p>
</blockquote>
<blockquote>
<p><strong>Merchants must be wary of their customers</strong>, hassling them for more information than they would otherwise need. …</p>
</blockquote>
<blockquote>
<p>What is needed is an <strong>electronic payment system</strong> based on cryptographic proof instead of trust…</p>
</blockquote>
<blockquote>
<p>Transactions that are computationally impractical to reverse would <strong>protect sellers from fraud</strong>, and routine escrow mechanisms could easily be implemented to <strong>protect buyers</strong>. …</p>
</blockquote>
<p>This introduction section contains some of the clearest evidence for the payments side as there could possibly be. I don’t think the statements themselves need any further comment.</p>
<h4>Conclusion</h4>
<blockquote>
<p>We have proposed a system for electronic transactions without relying on trust. …</p>
</blockquote>
<p>It’s no wonder store of value proponents don’t hold the whitepaper in high regard. Satoshi clearly and repeatedly states that Bitcoin is about payments, merchants, and commerce, and he never mentions anything about store of value.</p>
<p>The whitepaper alongside the other 38 sources makes it clear that Satoshi built Bitcoin for payments, not primarily as a store of value.</p>
<h2>Objections</h2>
<p>With the evidence this stacked on the payments side, how do store of value proponents try to claim Satoshi wasn’t building this for payments?</p>
<h3>Satoshi was just doing marketing</h3>
<p>Once again, we can turn to Dan Held. He <a href="https://twitter.com/danheld/status/1057295507377123328">states</a> that Satoshi only mentioned payments and commerce as a marketing ploy in order to get the attention of his target audience, the cypherpunks:</p>
<blockquote>
<p>He had written the whitepaper to fit his target audience, the Cypherpunks. That’s why he uses the words “electronic cash”, “PoW,” etc. which was previously used terminology in Cypherpunk whitepapers. He uses an ecommerce example to make it easier for everyone to comprehend</p>
</blockquote>
<p><a href="https://twitter.com/danheld/status/1058397988794511361">Again</a>:</p>
<blockquote>
<p>He’s marketing Bitcoin to the cypherpunks!!</p>
</blockquote>
<p>And <a href="https://twitter.com/danheld/status/1072981324082769920">again</a>:</p>
<blockquote>
<p>When he talks about use cases he’s marketing Bitcoin to the cypherpunks who will build it</p>
</blockquote>
<p>This assertion falls flat for many reasons.</p>
<ol>
<li>Satoshi didn’t only communicate with cypherpunks early on. He posted to the P2P Foundation (sources #1 and #3) and his messaging still included payments.</li>
<li>Satoshi never changed his messaging. If he supposedly only mentioned payments and ecommerce as marketing to cypherpunks, then why continue discussing payments and ecommerce for years in the BitcoinTalk forums afterwards?</li>
<li>Satoshi never corrected anyone else who believed him initially. There’s no record of Satoshi ever saying that Bitcoin shouldn’t be used exactly as he described it being used for initially: payments. Had he secretly been trying to use payments as a ploy, at some point he would have begun discouraging that use case in favor of store of value, but this never happened. See <a href="#timeline">Timeline</a>.</li>
</ol>
<p>Dan Held’s proposed answer to the question “Why did Satoshi talk so much about using Bitcoin for payments?” is that it was an elaborate deception: a ploy to get people using Bitcoin before changing its primary use case into something different.</p>
<p>There’s a much simpler answer: Satoshi intended Bitcoin to be used for payments, and he reached out to the cypherpunks because he knew they would like the idea. Occam’s razor applies here, there’s no reason to propose a substantially more complex answer than the simple one.</p>
<h3>Other claims</h3>
<p>Since we’ve been examining the evidence proposed by Dan Held, let’s look at the other claims he made in that Twitter thread. Contrary to what Dan <a href="https://twitter.com/danheld/status/1084991328704843776">says</a>, he didn’t “provide 50 tweets to support my claim.” The majority of his tweets do not provide evidence that Bitcoin was built to act primarily as a store of value. Let’s break them down.</p>
<p>Some of the 47 tweets don’t even pretend to be evidence.</p>
<ol>
<li><a href="https://twitter.com/danheld/status/1084849730562056193">This tweet</a> is a link to his newsletter asking for subscribers.</li>
<li><a href="https://twitter.com/danheld/status/1084849731216453632">Here</a> he is thanking other people for support.</li>
<li><a href="https://twitter.com/danheld/status/1084849729475731459">Here</a> he is giving readers his background.</li>
<li><a href="https://twitter.com/danheld/status/1084849730083934213">Here</a> he just asks a parting question of his readers.</li>
<li><a href="https://twitter.com/danheld/status/1084849731895873538">Here</a> he asks skeptics for feedback on this thread.</li>
</ol>
<p>A substantial portion of the thread isn’t dedicated to ascertaining the truth about the past - which is ostensibly the point - but instead to talking about the present, or even the future.</p>
<ol>
<li>Four tweets are about Charlie Lee discussing the Lightning Network (22-25)</li>
<li>Seven tweets are about the store of value versus medium of exchange argument broadly (35-41)</li>
</ol>
<p>When he does discuss the past, some of the arguments are very weak.</p>
<ol>
<li>Six tweets are devoted to Satoshi’s connection with the cypherpunk movement and his outreach for their help. Here he makes the claim that Satoshi tried to trick them by claiming it was about payments; see <a href="#satoshi-was-just-doing-marketing">Satoshi was just doing marketing</a>.</li>
<li>Five tweets are devoted to making the case that Satoshi’s timing proves he was building Bitcoin as a store of value. (11-15) He does this by highlighting various aspects of the 2008 financial crisis in parallel with some of Satoshi’s actions. But Satoshi had been coding Bitcoin well before those events occurred; it couldn’t have been cause and effect unless Satoshi truly is a time-traveler. Coincidental timing can never prove anything, and even if it could, the fact that financial institutions were proven untrustworthy is every bit as much a reason to create a permissionless digital cash system as a digital store of value system.</li>
</ol>
<p>Some of the tweets contain some limited evidence for the store of value position. (29, 32, 33) Satoshi clearly wasn’t ignorant of the harm done by central banks and hoped Bitcoin could help. But that doesn’t mean he built Bitcoin to act primarily as a store of value.</p>
<p>In several tweets (10, 15, 19, 32, 24) Dan dismisses the idea of Satoshi building Bitcoin for payments, likening it to rebuilding Visa and implying that it’s not a radical vision and that Satoshi had no need for anonymity. This is ridiculous for multiple reasons.</p>
<p>First, the vision of a payments platform that “would allow online payments to be sent directly from one party to another without going through a financial institution” isn’t rebuilding Visa, since Visa is the very definition of a financial institution.</p>
<p>Second, former use cases like the Silk Road or current ones like OpenBazaar prove that permissionless payments aren’t unimportant. People all over the world are able to engage in commerce with each other <em>without middlemen</em>. This means far more privacy than other platforms, and allows for censorship-resistant trade. This directly challenges state control and is both a radical vision and also a good reason to stay anonymous.</p>
<p>The most relevant tweets are the ones I’ve already discussed (Sources #1-5 above). As I’ve shown, several are heavily misleading due to lack of context.</p>
<h2>Questions</h2>
<p>I have a few questions for those who still believe that Satoshi built Bitcoin primarily to act as a store of value.</p>
<ol>
<li>Why did Satoshi only reference Bitcoin as a store of value a handful of times in all his writings, but mentioned payments and using Bitcoin for commerce dozens of times?</li>
<li>If Satoshi’s focus on payments and commerce was only a marketing ploy to get the cypherpunks’ attention, why did Satoshi continue focusing on payments and commerce long afterwards and never shift focus to store of value?</li>
<li>In the few cases where Satoshi referenced Bitcoin as a store of value, why are the majority of them only in response to other people questioning him? If store of value was his main focus, why did he never promote this idea directly and proactively?</li>
<li>Why did Satoshi participate in various BitcoinTalk threads focused on merchant adoption and using Bitcoin for commerce, openly indicating his support for the idea? (Sources 20, 25, 27, 35)</li>
<li>Why did Satoshi mention on-chain micropayments on six separate occasions over multiple years if store of value was his primary focus? (Sources 1, 5, 13, 14, 15, 17)</li>
<li>Why do you believe Satoshi intended Bitcoin to act as digital gold when he stated that gold’s primary utility was as a medium of exchange? (Source #9 above)</li>
<li>Why did Satoshi state that it “might make sense just to get some” only a single time in all his writings if he was trying to make Bitcoin into a store of value? (Source #5 above)</li>
<li>Why did Satoshi focus completely on payments and commerce in his whitepaper but not mention store of value once?</li>
<li>Why don’t the anecdotes of numerous early adopters - who claim there was near-universal agreement that Bitcoin was intended for payments - convince you?</li>
<li>If this post isn’t sufficient evidence for you, then is there any evidence that would convince you that Satoshi didn’t build Bitcoin primarily as a store of value?</li>
</ol>
<h2>The end</h2>
<p>I believe the evidence speaks for itself, but if anyone seriously responds to the above sources and questions, I will investigate their claims.</p>
<p>This was a fairly substantial time investment. If you think I’ve done something useful here, and you’d like me to keep doing this type of thing, you can <a href="https://twitter.com/SamuelPatt">follow me</a> on Twitter, or subscribe to my newsletter / blog.</p>
<p>Yes, you should have a stash, but you shouldn’t feel guilty spending either. After having read everything he wrote publicly, I’m confident that Satoshi would agree.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[AI Tools Suck and are Amazing]]></title>
        <id>https://sampatt.com/blog/2025-02-09-AI</id>
        <link href="https://sampatt.com/blog/2025-02-09-AI"/>
        <updated>2025-02-09T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Love Hate]]></summary>
        <content type="html"><![CDATA[<p>I’ve been using lots of different AI tools to help me code lately, treading close to outright “vibe coding.”</p>
<p>The suck part: They still require quite a bit of manual hassle. Things like managing the context window, copying and pasting prompts / code, dealing with hallucinations or the models making assumptions about your code that are wrong.</p>
<p>The amazing part: If you’ve never written code without an AI, you can’t understand how amazing it is to have a machine spit out a perfect function and test for it only three seconds after asking, knowing it would have taken you until lunchtime.</p>
<p>Here are my off-the-hip thoughts.</p>
<h1>Copilot</h1>
<p>Excellent autocomplete for boring stuff. I found that if I make a comment about what I want in a function, then start writing the function, 80% of the time it will write it out for me correctly. This was my first exposure to AI coding and it was great.</p>
<p>I can’t tell you how many times I’ve started writing <code>console</code> and it will automatically suggest the perfect console log message and data to display with it. Completely unironically, that alone is worth $10 a month.</p>
<p>However, as I’ve found myself using the other tools, it simply became something I forgot existed.</p>
<h1>Aider</h1>
<p>I want to like Aider because I’ve always had an affinity for the command line. I don’t know if that’s because it makes me feel more technically adept than I really am, or if it’s because of the many advantages over a GUI that nerds have been insisting upon for generations.</p>
<p>But it feels a bit clumsy. The IDE is so useful that it’s hard to beat having a tool built in. I dislike needing to tag files regularly. It just didn’t wow me.</p>
<h1>Cline</h1>
<p>Cline did wow me, but as with other tools it appears more useful initially than it really is.</p>
<p>My main complaint: it just uses too many damn tokens. It gobbles them up, and the ROI on that investment is often lacking.</p>
<p>It seems less… focused? Hard to describe. It will go in circles sometimes, and that’s annoying when I’m watching the Claude API cost climb to $1 or $2 for a single task. I’ve come back and tried it a few times, and I keep having the same experience.</p>
<h1>Cursor</h1>
<p>Cursor didn’t wow me initially, it felt similar enough to the other tools. But the more I used it, the more I liked it. It seems much more focused than Cline. It doesn’t go in loops frequently. The chat / composer works surprisingly well. You just click to pop an error message into the composer, include the proper context (which it usually has already) and it understands what needs to be fixed, and just does it. It seems to be much better about only pulling the context needed and only changing smaller parts of the code.</p>
<p>Right now it’s my favorite within the IDE.</p>
<h1>Claude 3.5 Sonnet</h1>
<p>These tools are mostly using Claude 3.5 Sonnet, which is an excellent model. I find myself using it outside the IDE and just asking it whatever I need.</p>
<p>It sometimes responds with code where it’s less than clear how to integrate it into my existing code, which is why the IDE tools are so helpful. But for architecture and devops I use it all the time.</p>
<h1>OpenAI’s models</h1>
<p>Yeah they’re good too.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Testing Zonos TTS + Ubuntu + 4090]]></title>
        <id>https://sampatt.com/blog/2025-02-10-zonos</id>
        <link href="https://sampatt.com/blog/2025-02-10-zonos"/>
        <updated>2025-02-10T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Testing it out]]></summary>
        <content type="html"><![CDATA[<p>I noticed a few folks mention the new Zonos TTS release today, so I wanted to try it out locally. You can read more about it in the <a href="https://www.zyphra.com/post/beta-release-of-zonos-v0-1">beta release announcement</a>.</p>
<p>I’m on Ubuntu 22.04 and I’ve got a 4090, so I need to test new models when they release in order to justify my purchase.</p>
<h2>Main Takeaway</h2>
<p>Using this through Gradio with the default settings isn’t very impressive. When I have more time I’ll fiddle more. The voice cloning is neat, but out of the box right now, I much prefer Kokoro. If you’ve played with it and gotten it to work well, please share what you did.</p>
<h2>Installation</h2>
<p>If you use Linux and have a 4090 you probably don’t need a guide to help you get Zonos working. Too bad, here it is.</p>
<p>You need espeak-ng installed:</p>
<p><code>sudo apt install -y espeak-ng</code></p>
<p>Clone the git repo:</p>
<p><code>git clone https://github.com/Zyphra/Zonos.git</code></p>
<p>Move into the new repo:</p>
<p><code>cd Zonos</code></p>
<p>Their repo instructions recommend using <code>uv</code> as a package manager - I guess because it’s faster. I’ve never used it, but I can’t refuse a <code>recommended</code> tag, so I installed it and ran:</p>
<p><code>uv sync</code></p>
<p>This creates a new virtual environment which installs Torch and all the nvidia stuff, so it’ll take a few minutes.</p>
<p>Once it has installed all the packages, you then run:</p>
<p><code>uv sync --extra compile</code></p>
<p>To test you can then run:</p>
<p><code>uv run sample.py</code></p>
<p>This automatically downloaded the <code>model.safetensors</code> file for me, which was 3.25G, but downloaded ridiculously fast (there’s no amount of nostalgia that makes me yearn for the 56k days again).</p>
<p>If everything goes well, you should have a <code>sample.wav</code> file in your directory. It’ll say “hello world”, or at least it’s supposed to. It’ll really say “hello worl,” because it cuts off the end of everything, unless they’ve fixed that since I wrote this.</p>
<p>A two second, cut off clip is exciting and all, but I decided to launch the Gradio interface they provided to test it properly:</p>
<p><code>uv run gradio_interface.py</code></p>
<p>That’s when I ran into an issue.</p>
<pre><code>OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`.
</code></pre>
<p>Oops, I’m already using that port. Without checking, it’s probably Kokoro, since I set it up to be my TTS for OpenWebUI.</p>
<p>I opened up the <code>gradio_interface.py</code> file and changed the port:</p>
<pre><code>if __name__ == &quot;__main__&quot;:
    demo = build_interface()
    demo.launch(server_name=&quot;0.0.0.0&quot;, server_port=7861, share=True)
</code></pre>
<p>Then it launched just fine.</p>
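<p>The error message also points to a less invasive fix: Gradio reads the <code>GRADIO_SERVER_PORT</code> environment variable, so you can leave <code>gradio_interface.py</code> untouched. A rough sketch (the <code>find_free_port</code> helper is my own, not part of Zonos):</p>

```shell
# Walk upward from a starting port until ss no longer lists it as in use.
find_free_port() {
  local p="$1"
  while ss -ltn 2>/dev/null | grep -q ":$p "; do
    p=$((p + 1))
  done
  echo "$p"
}

export GRADIO_SERVER_PORT="$(find_free_port 7860)"
# uv run gradio_interface.py   # would now bind to the first free port at or above 7860
```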
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-10-zonos/image/zonos_gradio.png" alt="gradio"></p>
<h2>Results</h2>
<p>Ok, now what? Well I really wanted to test the voice cloning, because the pranking potential is so high, but first I dutifully tested the straightforward TTS quality.</p>
<p>(Actually, I spent about two hours setting up a screenshot &gt; jsDelivr pipeline so that I could include screenshots in these blog posts easily. But I’ll write about that tomorrow.)</p>
<p>My first test was the introductory paragraph from Winnie-the-Pooh.</p>
<p>It was very… meh, until 30 seconds in, when it got exciting, and by exciting, I mean it burst my eardrums.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-11-jsDelivr/image/2025-02-10_5.png" alt="Screenshot"></p>
<p>You don’t need to be an audio engineer to know that a waveform probably shouldn’t look like that.</p>
<p>So I tried again, curious to see if 30 seconds was the cutoff.</p>
<p>First impression: It’s not all that fast. The claim is it’s 2X realtime with a 4090. I’ve got a 4090, and… maybe? Most recently I’ve used Kokoro and that’s way, way faster than this, not even close.</p>
<p>Second impression: My first impression might be wrong, because it’s now been over 300 seconds generating a dinosaur joke I asked phi4 to make. It’s probably borked somehow… yeah, errors abound in the terminal. It works now, and it’s fairly fast too.</p>
<p>There’s definitely a 30 second cutoff here. And the quality is weird.</p>
<p>Ok, I’m wondering if there’s more of an issue with the Gradio default settings, or me doing something wrong, because this isn’t anywhere near as good as Kokoro. I just opened up the Kokoro Gradio interface and tested the same input - Kokoro is much faster, sounds better, and doesn’t choke on anything longer than 30 seconds.</p>
<h3>Voice cloning</h3>
<p>At this point I’m sure I need to understand how to tune the controls to make this better, but before I spend the time, I wanted to test the voice cloning. I recorded a 20 second .wav of myself, dropped that into the section in Gradio, and then popped in the text I read.</p>
<p>The result was… not bad! Not great, but considering it was only 20 seconds and I haven’t really gotten the hang of using this model yet, I can see why people are excited about this feature.</p>
<p>I’ll keep a cautiously optimistic eye on this one.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Automated Screenshot Hosting with jsDelivr]]></title>
        <id>https://sampatt.com/blog/2025-02-11-jsDelivr</id>
        <link href="https://sampatt.com/blog/2025-02-11-jsDelivr"/>
        <updated>2025-02-11T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Make a Cool Script]]></summary>
        <content type="html"><![CDATA[<p>As I was writing about the <a href="https://sampatt.com/blog/2025-02-10-zonos">Zonos TTS</a>, I took a screenshot of the interface, then realized I didn’t have a method for displaying it in my post.</p>
<p>I did a lot of blogging a long time ago, but it’s been a while - what’s the best way to display an image file, anyway?</p>
<p>Claude gave me several recommendations. One was Cloudinary, which I have used before, but it’s proprietary and I’d rather go open source when I can. It also recommended a unique hack:</p>
<pre><code>3. GitHub Issues &quot;hack&quot; (Super simple):

- Open a new issue in any repo
- Drag &amp; drop or paste screenshot
- GitHub generates a permanent CDN URL
- Copy URL, close issue without saving
- Use URL in your markdown
</code></pre>
<p>Who says AI can’t be creative?</p>
<p>It also recommended <a href="https://www.jsdelivr.com/">jsDelivr</a>, which I’d never heard of. It’s a free CDN for open source projects. Sweet.</p>
<p>In my local directory for code repos, I creatively created a new directory, changed into it, then made it a git repo:</p>
<pre><code>mkdir media
cd media
git init
</code></pre>
<p>This repo is public, and now I can easily link anything in the repo using a link like this:</p>
<p><code>https://cdn.jsdelivr.net/gh/[username]/[repo@branch]/[path/to/file]</code></p>
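<p>That template is easy to wrap in a helper. A minimal sketch, using the coordinates from this post:</p>

```shell
# Build a jsDelivr URL from GitHub user, repo@branch, and file path.
jsdelivr_url() {
  local user="$1" repo_branch="$2" path="$3"
  echo "https://cdn.jsdelivr.net/gh/$user/$repo_branch/$path"
}

jsdelivr_url sampatt media@main posts/2025-02-10-zonos/image/zonos_gradio.png
# → https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-10-zonos/image/zonos_gradio.png
```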
<p>I like the simplicity of the solution, but of course I don’t want to be manually adding screenshots into the Github repo, committing then pushing then copying the url. No no no I need to automate this.</p>
<p>So I asked Claude for help. After a few iterations, here’s the bash script we settled on:</p>
<pre><code>#!/bin/bash

REPO_PATH=&quot;MY LOCAL PATH&quot;
POST_NAME=&quot;$1&quot;
MEDIA_PATH=&quot;$2&quot;
MEDIA_TYPE=&quot;$3&quot;  # 'image' or 'audio'

if [ -z &quot;$POST_NAME&quot; ] || [ -z &quot;$MEDIA_PATH&quot; ] || [ -z &quot;$MEDIA_TYPE&quot; ]; then
    echo &quot;Usage: ./publish_media.sh &lt;post-name&gt; &lt;file-path&gt; &lt;image|audio&gt;&quot;
    exit 1
fi

# Check file size (50MB limit for jsDelivr)
FILE_SIZE=$(stat -c %s &quot;$MEDIA_PATH&quot;)
MAX_SIZE=$((50 * 1024 * 1024))  # 50MB in bytes

if [ &quot;$FILE_SIZE&quot; -gt &quot;$MAX_SIZE&quot; ]; then
    echo &quot;Error: File size exceeds 50MB limit for jsDelivr&quot;
    exit 1
fi

# Create directories if they don't exist
mkdir -p &quot;$REPO_PATH/posts/$POST_NAME/$MEDIA_TYPE&quot;

# Copy file to repo
cp &quot;$MEDIA_PATH&quot; &quot;$REPO_PATH/posts/$POST_NAME/$MEDIA_TYPE/&quot;

# Get filename
FILENAME=$(basename &quot;$MEDIA_PATH&quot;)

# Generate markdown
JSDELIVR_URL=&quot;https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/$POST_NAME/$MEDIA_TYPE/$FILENAME&quot;

if [ &quot;$MEDIA_TYPE&quot; = &quot;image&quot; ]; then
    MARKDOWN=&quot;![Screenshot]($JSDELIVR_URL)&quot;
elif [ &quot;$MEDIA_TYPE&quot; = &quot;audio&quot; ]; then
    MARKDOWN=&quot;&lt;audio src=\&quot;$JSDELIVR_URL\&quot; controls&gt;&lt;/audio&gt;&quot;
fi

# Copy to clipboard
echo &quot;$MARKDOWN&quot; | xclip -selection clipboard

# Git commands
cd &quot;$REPO_PATH&quot;
git add &quot;posts/$POST_NAME/$MEDIA_TYPE/$FILENAME&quot;
git commit -m &quot;Add $MEDIA_TYPE: $POST_NAME/$FILENAME&quot;
git push

echo &quot;Markdown copied to clipboard!&quot;
echo &quot;URL: $JSDELIVR_URL&quot;
</code></pre>
<p>Nifty. This puts the file in the repo, adds it, commits and pushes it, then copies the jsDelivr url - in markdown format - into my clipboard so that I can just Ctrl+V into the editor where I’m writing my articles (Obsidian).</p>
<p>For example, here’s exactly what is automatically loaded into my clipboard after using the tool for a screenshot below:</p>
<p><code>![Screenshot](https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-11-jsDelivr/image/2025-02-10_2.png)</code></p>
<p>But how will this script trigger? I need it to run only after a screenshot, and only when I say yes. I don’t want all my screenshots posted to a public GitHub!</p>
<p>Solution? Another bash script:</p>
<pre><code>#!/bin/bash

WATCH_DIR=&quot;screenshots/temp&quot;
MEDIA_SCRIPT=&quot;/publish_media.sh&quot;
LAST_POST_FILE=&quot;Scripts/.last_post_name&quot;

# Create the file if it doesn't exist with a default value
if [ ! -f &quot;$LAST_POST_FILE&quot; ]; then
    echo &quot;2025-02-10-zonos&quot; &gt; &quot;$LAST_POST_FILE&quot;
fi

echo &quot;Watching $WATCH_DIR for new screenshots...&quot;

inotifywait -m &quot;$WATCH_DIR&quot; -e create -e moved_to |
    while read -r directory events filename; do
        if [[ &quot;$filename&quot; =~ \.png$ ]]; then
            FULL_PATH=&quot;$WATCH_DIR/$filename&quot;
            LAST_POST=$(cat &quot;$LAST_POST_FILE&quot;)
            
            notify-send &quot;New Screenshot&quot; &quot;Screenshot saved: $filename&quot;
            
            if zenity --question --text=&quot;Publish $filename to blog?&quot;; then
                NEW_POST_NAME=$(zenity --entry --title=&quot;Post Name&quot; \
                    --text=&quot;Enter post name&quot; \
                    --entry-text=&quot;$LAST_POST&quot;)
                
                if [ -n &quot;$NEW_POST_NAME&quot; ]; then
                    # Save the new post name for next time
                    echo &quot;$NEW_POST_NAME&quot; &gt; &quot;$LAST_POST_FILE&quot;
                    
                    &quot;$MEDIA_SCRIPT&quot; &quot;$NEW_POST_NAME&quot; &quot;$FULL_PATH&quot; &quot;image&quot;
                    
                    if zenity --question --text=&quot;Delete original file?&quot;; then
                        rm &quot;$FULL_PATH&quot;
                        notify-send &quot;Screenshot&quot; &quot;Original file deleted&quot;
                    else
                        notify-send &quot;Screenshot&quot; &quot;Original file kept&quot;
                    fi
                fi
            fi
        fi
    done
</code></pre>
<p>This watches the temp screenshot directory I created specifically for this flow, and when it sees a new file, it triggers a notification. This tells me about the new screenshot, then it asks me (using zenity) if I want to publish it or not. If yes, it uses the publishing script above.</p>
<p>Now I don’t actually want this for my main screenshot flow - it would be a bit annoying to be asked if I want to publish each time. So instead, I kept my default screenshot tool bound to the Print Screen button, but added a new custom keyboard shortcut.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-11-jsDelivr/image/2025-02-16-17-58.png" alt="Screenshot"></p>
<p>It calls <code>flameshot</code>, a screenshot tool which allows me to set a custom path for the screenshots. That way, when I use <code>shift + Print Screen</code>, I get this upload-specific screenshot flow.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-11-jsDelivr/image/2025-02-10_2.png" alt="Screenshot"></p>
<p>I tested it, and it works great. Now I need these scripts to start automatically. Claude suggests an autostart entry.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-11-jsDelivr/image/2025-02-16-17-59.png" alt="Screenshot"></p>
<p>I implemented it, and now I don’t need to start these scripts manually.</p>
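<p>For the curious: an autostart entry is just a <code>.desktop</code> file dropped into <code>~/.config/autostart</code>. A sketch, with a hypothetical path to the watcher script:</p>

```ini
[Desktop Entry]
Type=Application
Name=Screenshot Watcher
Comment=Watch the temp screenshot directory and offer to publish new files
Exec=/home/sam/Scripts/watch_screenshots.sh
X-GNOME-Autostart-enabled=true
```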
<p>So far, it’s working like a charm, it cost me $0, and it has a keyboard binding - what more could a man want?</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Saving My Table Tennis League’s Hard Drive—and Finding Lost BTC]]></title>
        <id>https://sampatt.com/blog/2025-02-17-table-tennis-hard-drive</id>
        <link href="https://sampatt.com/blog/2025-02-17-table-tennis-hard-drive"/>
        <updated>2025-02-17T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Repair a Borked Linux Distro and Find Lost Bitcoin]]></summary>
        <content type="html"><![CDATA[<h1>Table of Contents</h1>
<ul>
<li><a href="#day-one">Day One</a>
<ul>
<li><a href="#the-quest">The Quest</a></li>
<li><a href="#the-plan">The Plan</a></li>
<li><a href="#the-problem">The Problem</a></li>
<li><a href="#the-partial-solution">The Partial Solution</a></li>
<li><a href="#the-late-night-battle">The Late Night Battle</a></li>
</ul>
</li>
<li><a href="#day-two">Day Two</a></li>
<li><a href="#day-three">Day Three</a>
<ul>
<li><a href="#the-mistake">The Mistake</a></li>
<li><a href="#the-proper-solution">The Proper Solution</a></li>
<li><a href="#the-discovery">The Discovery</a></li>
<li><a href="#the-bitcoin-hunt">The Bitcoin Hunt</a></li>
<li><a href="#the-unexpected-reward">The Unexpected Reward</a></li>
</ul>
</li>
</ul>
<h1>Day One</h1>
<h2>The Quest</h2>
<p>I had just walked into the church gymnasium and was taking off my boots, still caked in snow, when Joe approached me. He’s our table tennis league president, a man who nearly always beats me (and most people in the club) even though he’s more than 30 years my senior.</p>
<blockquote>
<p>You have a desktop computer, right? You use Linux?</p>
</blockquote>
<p>Joe knows I do; we’ve talked about our mutual love of PCs and Linux several times.</p>
<p>I nodded, and he proceeded to tell me that he hasn’t been able to boot into one of his hard drives. He put up a finger in the universal gesture of “give me a second and I’ll show you,” and he walked away, returning with a hard drive (with no covering).</p>
<p>I examined it, as though perhaps the gaze of a long-time Linux lover might be sufficient to fix his distro. All I ascertained was that it was a 1TB Seagate HDD with SATA.</p>
<p>I’ve heard a lot about Seagates not being dependable, but that little (probably outdated) chestnut is about the extent of my knowledge of data recovery. I considered telling him this, but first he let me know the stakes.</p>
<blockquote>
<p>&quot;That drive has the records of the league’s scores on it. If you’re able to get those off that’d be great. I looked up the warranty, but it expired in 2019.&quot;</p>
</blockquote>
<p>An unreliable and out-of-warranty hard drive with a Linux distro containing years of the league’s records of battles won and lost, and an unemployed fullstack dev getting to play data recovery specialist? Sign me up!</p>
<p>I took the hard drive and shoved it into my gym bag (with no covering).</p>
<p>As I played my ping pong pals that night, I occasionally considered how I was going to approach this quest.</p>
<p><strong>ping</strong></p>
<p><em>I don’t know for sure the drive is bad, it could just be his distro that got messed up.</em></p>
<p><strong>pong</strong></p>
<p><em>I should ask him what distro it is - if it’s older then I can just pop it into an older machine in my PC stash and see what happens</em></p>
<p><strong>heavy sidespin serve from Brett</strong></p>
<p><em>Damn it Brett.</em></p>
<p>I forgot to ask Joe about the distro when I was leaving, but I did assure him that I’d do my best. I told him I was aware of some distros and tools used specifically for data recovery, and that I’d try not to lose anything.</p>
<p>He looked surprised. “Oh don’t worry about all that. If you can get anything off that’s great, but I plan on pulling it apart for the magnets anyway.”</p>
<p>I didn’t know whether to laugh or recoil, but I remembered that in addition to playing table tennis, Joe is a watchmaker. The physical components of the hard drive might be just as valuable to him as the digital ones - so I’d better act fast if Mr. Seagate wants to live another day.</p>
<h2>The Plan</h2>
<p>I got home, took the drive from my gym bag and brought it to my basement office. I had already decided that I wasn’t going to take apart my main desktop machine - my pride and joy. Once every eight years or so I convince myself, then my wife, that I need a new top of the line PC. Last year I built a new machine around a new 4090, because obviously I needed to run LLMs as quickly as possible. (Turns out, that actually did happen!)</p>
<p>The Beast has a massive case, 6 fans, the 4090 still barely fits, and I don’t want to touch it for roughly eight years.</p>
<p>I glanced around the room, my eyes resting on the other PC in my office: my NAS box. That’s another hard no. I use TrueNAS on there and it has mirrored hard drives. I haven’t touched it in years, and as long as SSH keeps working I don’t plan on it.</p>
<p>Fortunately I didn’t stop glancing, and I noticed another PC against the wall. This wasn’t the poor guy replaced by my current Beast - he was in my living room upstairs, relegated to Fortnite Duty for my children. No, this was the machine which was <em>replaced</em> by Fortnite Duty.</p>
<p>According to the stickers, it came with Windows 7, had an Intel i3, but also had the two most important features of all: it was in the same room as me, and the cover was easily removed.</p>
<p>There was no power cord. I walked into the unfinished part of my basement, then sifted through a rack of electronics which sits far too close to my sump pump. I found a cord.</p>
<p>I have a few different unused monitors in various places in my home, but I noticed that this machine would accept HDMI, so I just unplugged my second monitor from my Beast and plugged it in. Unfortunately, this meant I was breaking a cardinal rule of cabling: draping a cable in midair in the middle of a walkway, in a home with three children.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-17-table-tennis-hard-drive/image/resized_20250214_014214.jpg" alt="Screenshot"></p>
<p>YOLO</p>
<p>My 12 year old son was watching me with curiosity, and when I opened the case he peeked in.</p>
<p>“Wow Dad, that’s <em>dusty</em>.”</p>
<p>The old computer had a 640GB drive in it, and I honestly couldn’t remember what was on it. I considered booting it to check it out first, but decided to do it later - I doubt any data could compete with the sheer thrill of a decade of table tennis scores. I removed it and put in the new drive.</p>
<p>I turned on the power. The monitor flickered, and the familiar Ubuntu loading screen appeared.</p>
<p>I thought that surely it wouldn’t be this easy. But Ubuntu continued to load, albeit very slowly. I wondered if perhaps Joe had hardware issues elsewhere.</p>
<p>Then I saw the Linux equivalent of the blue screen of death.</p>
<p><img src="https://preview.redd.it/xrdp-oh-no-something-has-gone-wrong-ubuntu-desktop-22-04-v0-8xub0mbqauia1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=53c347d0a08b5ae3d043845b928363e93b4ba342" alt="failure message"></p>
<p>I tried to get out of the GUI, but the system became unresponsive. Good. A proper challenge.</p>
<p>I knew the drive wasn’t totally borked, so I needed to determine whether the issue was the drive itself or a failing OS. I figured that with a live USB I could inspect the files, back them up if I could access them, then examine and repair or reinstall the OS.</p>
<p>What was the ideal live USB to use for this? The one already sitting on my desk with a small strip of white duct tape on it, labeled “Ubu.”</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-17-table-tennis-hard-drive/image/resized_20250216_154950.jpg" alt="Screenshot"></p>
<p>What year? I don’t know. The tape looked fairly fresh, so probably somewhat recent? I’ve used live USBs for troubleshooting distro issues many times over the years.</p>
<p>I plugged it in and booted into it. If the Ubuntu OS on the disk loaded slowly, this loaded glacially. Turns out, it was Ubuntu 24 - a bit difficult for my 2010 parts PC to handle.</p>
<p>With patience, it did load. I opened the terminal, and 20 seconds later, I was able to enter:</p>
<p><code>sudo lsblk</code></p>
<p>I found that <code>sda6</code> had 927GB, so there’s my target.</p>
<pre><code>sudo mkdir /mnt/disk
sudo mount /dev/sda6 /mnt/disk
ls /mnt/disk
</code></pre>
<p>This worked, I could see a filesystem. I navigated through to see if Joe’s data was in there.</p>
<p>It was! The data looked just fine.</p>
<p>Now what?</p>
<p>I first unmounted and then ran fsck on the drive. Everything looked good. It seemed very likely at this point that the issue wasn’t the drive itself (or if it was, it was intermittent), and the OS was borked.</p>
<p>Out of curiosity, I checked what Ubuntu version he was running - Ubuntu 24. Joe doesn’t mess around.</p>
<p>When I consulted Claude, it strongly suggested I back up the files before trying to fix the OS. Who says models aren’t intelligent?</p>
<p>I didn’t know how much data was on the drive, so I checked that first - 515GB, way bigger than my USB drive. I decided to store a few key files I thought he’d want on my USB first, then try to fix his OS boot issue. I mounted the USB, then copied over the Documents folder.</p>
<h2>The Problem</h2>
<p>Or at least, I tried. The terminal kept crashing. I opened up the file explorer - that also crashed. Ole Dusty couldn’t keep up. I was asking too much of him.</p>
<p>I had a few choices. I could restart and try again - after all, it did boot and copied some of the files before crapping out.</p>
<p>Or I could do it the proper way: installing a lighter OS that my old machine could handle, then copying the files and repairing the original install.</p>
<p>I rebooted.</p>
<p>But it was sloooow. I was already rummaging around my office looking for a spare USB to install a new OS on. I buy them just for these scenarios, then can never find them when I need them.</p>
<p>I found one with fading sharpie L—X, surely another Live USB if I’ve ever seen one, and it looked older than Ubu.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-17-table-tennis-hard-drive/image/resized_20250214_001453.jpg" alt="Screenshot"></p>
<p>It’s still booting. After a ten minute wait, I get the Ubuntu install screen and click my way through.</p>
<p>Ctrl+Alt+T takes about 20 seconds to display a terminal. (Funnily enough, the screenshot interface appeared instantly when I accidentally bumped PrtScr while reaching for my Beast’s mouse.)</p>
<p>I do all the mounting again, then try copying again. This time it goes further into the process, but hangs while transferring the contents of a “LINUX Files” folder, ironically choking on an .EXE for a network driver.</p>
<h2>The Partial Solution</h2>
<p>But the terminal is still working - I can scroll back through the output. I wait patiently, preparing to deploy my secret L—X weapon if this fails.</p>
<p>I’m too impatient, and a Ctrl+C hits the terminal before I really stop to think. Then it hangs again. Damn.</p>
<p>Phew, it’s back. OK, this time I’m just going to copy over the WMITT (our league) folder.</p>
<p>It hangs again, but before I bust out another Ctrl+C, I look at the file it’s choking on - an image. Surely it’ll handle that?</p>
<p>Yes! Patience paid off. I’ve now secured a decade’s worth of table tennis history. Schedules, paysheets, rosters, standings, contracts with our venues, bootstrap.min.js files (?), nonprofit annual reports.</p>
<p>As my daughters might say, I’m low key a hero now. The organization may not have survived without <code>2017 Member Club Declaration of SafeSport Compliance Form A(3).pdf</code>.</p>
<p>I see there’s still a lot of Joe’s documents in here, and I’d like to back them up too. I ask Claude if there’s a way to only transfer files under a certain size - aren’t I clever - and it spit it out:</p>
<p><code>sudo find . -type f -size -2M -exec cp -v {} /mnt/usb/backup_small_files/ \;</code></p>
<p>God I love LLMs.</p>
<p>It quickly worked through the files. Even if I didn’t get everything, now that I had most of the files, I felt pretty comfortable trying to fix Joe’s OS.</p>
<p>When I asked Claude how to attempt the repair, it chided me that I hadn’t yet switched over to a lighter live USB distro as it recommended. OK Claude, I know I’m being lazy already, but it’s working, and how many resources can it take to fix grub? It’s late at night now, and Claude reluctantly agreed.</p>
<h2>The Late Night Battle</h2>
<p>The <code>update-grub</code> ran fine, but it didn’t seem like grub was the issue. No matter, let’s reboot with fingers crossed.</p>
<p>Ah, damn. System failed again.</p>
<p>Claude:</p>
<pre><code>But honestly, even though I initially agreed about GRUB being lightweight, considering:

- The system is from 2010
- We're seeing crashes in the live USB
- The full desktop keeps failing
- It's Ubuntu 24.04 which is very new
</code></pre>
<p>Fine, you win. L—X deployed.</p>
<p>It’s Ubuntu 22. At first, when I saw the beautiful jellyfish default wallpaper, I assumed that being only two years older, it would have many of the same issues. But… it loaded quickly!</p>
<p>It works much better, but now I’m faced with a problem - reinstalling Ubuntu is the obvious choice here, but I’ve only backed up the files under 2MB, I don’t want a clean install to wipe everything else.</p>
<p>Claude recommends that I install the OS from my live USB onto the same sda6 partition where the OS is now… what? I’m not doing that.</p>
<p>I notice in the installer, there’s an option to install the new OS alongside the old one. Since only about half the drive is used, that should work perfectly, then I only need to log into the new OS and copy over the old files.</p>
<p>Well, most of them. Since the 1TB drive had just over 500GB of data, I won’t be able to just move everything, which is annoying. He’ll still have his files, they’re just in another partition. Hmm… I have an idea for that.</p>
<p>But it’ll have to wait. It’s 1:30am and the repartitioning of the free space for the new OS is still running. I’m tired.</p>
<h1>Day Two</h1>
<p>On Day One I stole the family’s mouse and keyboard with USB dongle to use for Ole Dusty, but today they reclaimed it. I could scrounge around and find another mouse and another keyboard and finish this project, but it’s Valentine’s Day and I suspect that’s not how my wife wants to spend her evening.</p>
<h1>Day Three</h1>
<h2>The Mistake</h2>
<p>The new partition with the fresh install works great. Now all I need to do is transfer over the files from the old partition.</p>
<p>But before I do - this is stock Ubuntu 22, with no updates, since this computer doesn’t have the internet. If I handed this back to Joe and he got it online, it could be insecure until it’s properly updated. Given he was running Ubuntu 24, he would probably have done that first thing, but since I’m a nice guy I decided to get Ole Dusty online and update everything first.</p>
<p>I have probably a half dozen extra ethernet cables in various places, but unfortunately my Beast, my wireless access point, my NAS, and my pi-hole DNS take up all the slots on my Edgerouter X. I knew I should have gotten the bigger one.</p>
<p>My family can live without Plex for an evening, so I borrow the cable from my NAS and plug it into Ole Dusty.</p>
<p>I’ve always been pleasantly surprised at how easily wired connections work. I did absolutely nothing but plug in an ethernet cable, and I’m running <code>sudo apt update</code> moments later.</p>
<p>The upgrade took quite a while - I had forgotten how old this machine was over the past couple of days. But now I’m ready to pull over the files. And this time, I can ask an AI on the machine itself, instead of switching keyboards and glancing from monitor to monitor to type everything out.</p>
<p>Since there is more data on the old partition than there is space on the new, I need to identify some directories I won’t move over. I asked the free version of ChatGPT for a command to run, and it spit this out:</p>
<p><code>sudo du -ah /mnt/disk | sort -rh | head -n 20</code></p>
<p>After running for a couple of minutes, it showed me all the largest directories. I saw a few places to cut the fat, and I used rsync to copy everything over, using the <code>--exclude</code> flag to leave what I didn’t need.</p>
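<p>The command had roughly this shape (the excluded folder names here are made up for illustration, and I’m assuming the old partition is mounted at <code>/mnt/disk</code> again):</p>
<pre><code>rsync -av \
  --exclude='VirtualBox VMs/' \
  --exclude='Downloads/' \
  /mnt/disk/home/joe/ /home/joe/
</code></pre>
<p>The trailing slash on the source matters: with it, rsync copies the <em>contents</em> of the directory rather than the directory itself.</p>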
<p>Except, I messed up.</p>
<p>I ran rsync from <code>/mnt/disk</code> instead of from <code>/mnt/disk/home/joe</code>, meaning I was pulling in all his OS files too. Oops. The home directory is a mess now.</p>
<h2>The Proper Solution</h2>
<p>I <code>rm -rf</code>’d what I’d moved over and did it correctly this time. It worked! All the essential files are safe in his spiffy new OS.</p>
<p>One last thing to do. Those remaining files should still be accessible. I’m going to automount the old partition on startup, then create symbolic links in his new home directory pointing at the old one.</p>
<p>I did this by adding the UUID of the old partition into <code>/etc/fstab</code>, and then using:</p>
<p><code>sudo ln -s /mnt/oldfiles/home/joe/[The folders I didn't move] /home/joe/[Their brand new home]</code></p>
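<p>For reference, the <code>/etc/fstab</code> line looked something like this (the UUID below is a placeholder - get the real one with <code>sudo blkid /dev/sda6</code>). The <code>nofail</code> option is worth adding so boot doesn’t hang if the partition ever goes missing:</p>
<pre><code># mount the old partition at /mnt/oldfiles on every boot
UUID=1234abcd-56ef-78ab-90cd-ef1234567890  /mnt/oldfiles  ext4  defaults,nofail  0  2
</code></pre>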
<p>I opened the file explorer just to check, and you’d never know it wasn’t on the same partition. Except for the purple arrow prominently displayed on the directory image, which adds a nice dash of color.</p>
<p>I set up a Firefox desktop entry and added it to favorites so it always displays, then I pinned the ChatGPT tab in the browser. I’ve talked to Joe briefly about AI and, as I recall, he hasn’t really played around with it yet. As he’s getting his new OS customized to his liking again, it’ll be a real asset - LLMs are surprisingly good with Linux and the command line.</p>
<p>All done. I shut down, remove the cover (lol who am I kidding, I never put it back on) and then remove the hard drive. I put it into a little ziplock bag with a sticky note that has Joe’s new password on it.</p>
<h2>The Discovery</h2>
<p>Ole Dusty’s innards now have a gaping hole. The original HDD sits on top of a bookshelf nearby. It’s 11pm, and I should probably go to bed. But… what is on that hard drive anyway? No reason not to boot it and take a peek…</p>
<p>GRUB appears, and I immediately remember this setup. I had Ubuntu and Windows 7 dual boot. I smash Enter and Ubuntu loads, slowly. I check, and it’s Ubuntu 14.</p>
<p>Memories! This was smack-dab in the middle of the time period (2014-2016) when I was most heavily involved in an open source project called OpenBazaar, a decentralized marketplace which led to me co-founding a startup. We raised VC from USV and a16z, and this was my main driver for the first part of that chapter of my life.</p>
<p>I remember testing new builds late into the night. Sending Bitcoin into the built-in wallet, testing purchases, testing chat, testing receiving, testing… with real Bitcoin.</p>
<p>Back in 2014 when the project started, Bitcoins were worth about $500 each. So we didn’t think much about sending $5 worth, 0.01 BTC, just to test things out. Eventually we implemented testnet coins, but we tested for many months with small amounts of real coins.</p>
<p>Of course that 0.01 BTC today is worth about $1k, and… I did a lot of testing.</p>
<p><em>Wait a minute</em>. Could there still be Bitcoin on this thing?</p>
<h2>The Bitcoin Hunt</h2>
<p>I immediately toss up my “don’t get excited yet” mental wall. I’ve been involved with Bitcoin a long time - I wrote a book about it in 2013 - and the idea of finding lost coins that are now worth a fortune has occurred to me many times, but has never panned out.</p>
<p>I ran a USB BTC miner for a short time (yes, they existed), then later couldn’t remember where those coins went. It turns out, even back then, USB BTC miners didn’t contribute much towards mining pools, and when I finally found out how much I made, it was roughly $32 worth of BTC.</p>
<p>I’ve found many screenshots with a string of seed words, and a few sticky notes too. Those seed words are the keys to the wallet. I remember booting up Electrum and typing them in. Dust. Dust. Dust. Oh, 0.005 BTC, not bad, not bad…</p>
<p>So I have found small bits here and there, but never anything worth writing about. Also, if I did find anything worth writing about, I probably wouldn’t tell the world. Although, I guess I’m sorta doing that now (spoilers), but it’s not life changing money, so please don’t employ the <a href="https://www.explainxkcd.com/wiki/index.php/538:_Security">$5 wrench attack</a> on me.</p>
<p>(You might be wondering how someone who got into Bitcoin so long ago would even bother with such trifling amounts. That’s a long story for another day, but the bottom line is that Bitcoin isn’t what it used to be. I wanted Bitcoin to be used as digital p2p cash, not just be a digital store of value (i.e. speculative asset), so I spent most of my coins. I don’t regret it. Much. If you want more information on this aspect of Bitcoin’s history, my brother co-authored a <a href="https://www.amazon.com/Hijacking-Bitcoin-Hidden-History-BTC/dp/B0CXWBCWDR">book on the subject</a>.)</p>
<p>Here’s the cool way that this <em>could</em> have happened:</p>
<p>I dig through the OpenBazaar database files, writing a custom script to extract the private keys and automatically check them against a blockchain explorer for balances. My fleet fingers flit across the keyboard, then I slam Enter and watch the terminal in anticipation - hoodie hood fully deployed - as the custom ASCII loading bar I built out of sheer love for coding creeps ever farther, my focus narrowing on the terminal output…</p>
<p>50 BTC BALANCE FOUND. TIME TO RETIRE BRO.</p>
<p>Unfortunately the truth is much more mundane.</p>
<p>I spent some time diving into the keys in the various databases of various OpenBazaar instances. To make this long story slightly shorter, they contained no Bitcoin. I was careful to always move any remaining funds into the new wallet I was testing.</p>
<p>But - I found an Electrum wallet. Two, actually. Because this machine is offline (I’m not plugging in my Ubuntu 14 distro with Bitcoin keys on it), it hasn’t synced to the blockchain, so I don’t know if the balances are accurate. I see one of the wallets has a 2.97 BTC balance. Again, I can’t even get excited - that much Bitcoin was something I would certainly have tracked and sent to a newer wallet at some point.</p>
<p>Unfortunately the wallet was password protected, meaning I couldn’t just extract the seed and pop it into another Electrum wallet running on my Beast. I was paranoid about security back then. I’m paranoid about security today, but I was back then too.</p>
<p>I have various KeePass files floating around the filesystem. Oh boy. What a fun game this will be!</p>
<p>I eventually unlock a KeePass file and the Electrum wallets and… they’re empty, the coins were moved ages ago. I must have had the seed backed up elsewhere, and imported them from a different machine.</p>
<p>Oh well, it was a fun trip down memory lane.</p>
<h2>The Unexpected Reward</h2>
<p>I decide to look back through my password manager though, and I notice another string of words, suspiciously like a Bitcoin seed phrase.</p>
<p>Here’s the thing - we used this same scheme to backup the keys used for OpenBazaar’s peer ID. So these seed phrases from this era aren’t always Bitcoin. But what do I have to lose?</p>
<p>I type them into Electrum again, pressing Enter as the autocomplete suggestions pop up. As I finish the last word, the Electrum display changes the text at the bottom of the modal to say, “Seed Phrase: old.” That’s a good sign.</p>
<p>I click Next, and the wallet begins to synchronize transactions. This is a SPV wallet, meaning it doesn’t need the full blockchain, and it syncs quickly. This is another good sign - multiple transactions means I was using it for a while.</p>
<p>My mental “don’t get excited yet” wall is being torn down as the transaction count builds up. Are these finally the forgotten coins of my crypto daydreams?</p>
<p>It finishes loading. I look at the balance: <strong>0.0464 BTC</strong></p>
<p>Electrum automatically displays the dollar amount: <strong>$4,498</strong>.</p>
<p>!!!</p>
<p>I’m a bit stunned. It may not be the early retirement of my crypto daydreams, but I’ll take it.</p>
<p>I’ve never been happier to repair a linux distro for a friend.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Using Listmonk for my Newsletter because Substack has no API]]></title>
        <id>https://sampatt.com/blog/2025-02-18-Listmonk</id>
        <link href="https://sampatt.com/blog/2025-02-18-Listmonk"/>
        <updated>2025-02-18T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I use Open Source Self Hosted Tools like a Good Boy]]></summary>
        <content type="html"><![CDATA[<h1>Summary</h1>
<p>Substack has no API and doesn’t fit into my publishing flow, so I self-hosted a newsletter management tool called Listmonk. It wasn’t super easy, but with AI help I figured out how to set up Listmonk and have it automatically publish using Github Actions and Amazon SES.</p>
<h1>Background</h1>
<p>I’ve committed to writing more lately, and as a result I want to make it easy for people to follow my work.</p>
<p>I went ahead and integrated RSS, JSON, and Atom feeds into this site, and then turned my attention to Substack.</p>
<p>Guess what I learned? Substack has no API. Seriously.</p>
<h1>My Publishing Flow</h1>
<p>Here’s the way my blog works: I write a post in markdown, inside Obsidian. I copy this file into my website’s git repo, run the front end locally to check that it looks good, fix anything, then commit and publish. My site’s host (Netlify) automatically rebuilds my site when my repo changes (including updating the RSS feed), and my site uses a library to display markdown properly.</p>
<p>This makes it very easy for me to publish - I don’t need to use a browser interface at all. I practically live in Obsidian, so writing then committing and publishing a markdown file is second nature, but I strongly dislike Wordpress / Medium / Substack or any tool where I’m writing in browser. It feels slow, and having all the editing / formatting options is a distraction.</p>
<p>Because Substack has no API, it isn’t possible for me to publish to my blog and then have it automatically post to my substack. I could give up my flow in order to use Substack, but I’m a bit too crotchety for that.</p>
<h1>Listmonk</h1>
<p>Fortunately for me, I’ve only just started my Substack and posted a single article - I have no followers yet, so I’m not locked into that platform. So naturally … let’s see what open source alternatives there are for managing newsletters, subscribers, and email campaigns.</p>
<p>I’m already using a service called Pikapods that offers cheap hosting of open source projects. I paid them $3 last month to host a <a href="https://hoarder.app">Hoarder</a> instance, so that I could clear up some of my browser tabs but not feel FOMO (I swear I’ll look at them eventually).</p>
<p>I browsed their selection of apps, looking for a newsletter management tool, and they had one: Listmonk. I searched around and found it had a good reputation. Pikapods estimated it would cost me another $1.50 a month to host this instance, so I figured all it was really costing me was my time, and I added a pod.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-12-Listmonk/image/2025-02-10_6.png" alt="Screenshot"></p>
<p>After a couple of minutes to set things up, it gave me the admin URL to visit, and I set everything up.</p>
<p>It’s straightforward, there are users (like the admin), subscribers (like you, <em>right?</em>), lists (groups of subscribers), and campaigns (the email sends).</p>
<p>Setting that up was easy, but of course this isn’t an email host itself, it’s just managing the list and triggering the emails going out. In order to wire this up, I need to use an email host.</p>
<h1>Amazon SES</h1>
<p>My personal email is on the same domain as this site, so I already have custom DNS records set up with my email provider. But I want to use the same domain for the newsletter, which complicates things slightly.</p>
<p>With AI help I find that this isn’t a problem - I can create a subdomain that will be managed by a different provider. I chose Amazon Simple Email Service (SES). They’re cheap, well documented, and each time I use a new Amazon service I get to add it to my LinkedIn profile.</p>
<p>I had to jump through a few credential hoops, but eventually I was signed up. They displayed the DNS records I needed to add to my DNS provider for the email to work. After too much copying and pasting, I finished, and Amazon saw the connection almost instantly (I don’t miss the days of slow DNS updates).</p>
<p>I then got my SMTP credentials, and logged back into the Listmonk server. I entered them and sent a test email - failure.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-18-Listmonk/image/2025-02-17-17-29.png" alt="Screenshot"></p>
<p>Listmonk has a default “from” address that needs to change, and I didn’t notice it at first. I eventually fixed that and then it worked!</p>
<h1>Automation with Github Actions</h1>
<p>I’ve now got all the pieces to replace Substack - my own publishing platform (personal blog), a newsletter management tool (Listmonk), and an email provider (Amazon SES).</p>
<p>Listmonk is connected to SES, but how do I connect my blog to Listmonk? I’m going to use Github Actions and connect it via an API.</p>
<p>I look through the Listmonk docs and find out how to create an API for Github to use. I then put the API key, the URL for the server, and the username into my Github repo as secrets so that they can be accessed from the <code>notify-subscribers.yml</code> file which manages the Github Action.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-18-Listmonk/image/2025-02-17-18-17.png" alt="Screenshot"></p>
<p>I dumped the Listmonk API docs into Claude and asked it to create an action that sends an email to my subscribers when it detects a new blog post. I also added a <code>send_newsletter</code> field to the frontmatter in my markdown template, which lets me control with a boolean whether an article I push should trigger an email.</p>
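<p>The frontmatter ends up looking something like this (the fields other than <code>send_newsletter</code> are just for illustration):</p>
<pre><code>---
title: Using Listmonk for my Newsletter because Substack has no API
date: 2025-02-18
send_newsletter: true  # set to false to publish without emailing the list
---
</code></pre>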
<p><a href="https://github.com/SamPatt/sampatt-portfolio/blob/main/.github/workflows/notify-subscribers.yml">Here’s the file</a>. I won’t paste it here, but it makes detailed use of the Listmonk API to ensure everything is formatted correctly.</p>
<p>I also had it build a test file so that I could trigger a send to test the integration without spamming my subscribers (currently, me, but I want that functionality for the future).</p>
<p>All in all this was about three hours worth of work, including writing this post, and should cost me ~$2 a month. SES is free unless I have a lot of emails to send, which seems reasonable. We’ll see if needing to maintain Listmonk myself in the long term is better than Substack, and perhaps it has other features I’ll really miss (like comments or better exposure).</p>
<p>I’ll update down the road with my thoughts.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Avoiding Google with Umami for Open Source Analytics]]></title>
        <id>https://sampatt.com/blog/2025-02-19-Umami</id>
        <link href="https://sampatt.com/blog/2025-02-19-Umami"/>
        <updated>2025-02-19T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I "Self-Host" another Pikapods Instance and don't even need AI]]></summary>
        <content type="html"><![CDATA[<h1>Summary</h1>
<p>I use Umami, an open source alternative to Google Analytics. It’s dead easy, especially if you host it on Pikapods.</p>
<h1>Background</h1>
<p>I write for many reasons, some of which I understand.</p>
<p>One reason: I believe my writing can benefit both me and the people who read it. But words existing in a public space doesn’t mean they’re being consumed, and that makes it hard for me to know their impact.</p>
<p>One solution is to not care - to write solely for the love of it, or from compulsion, or to just assume my words go somewhere beyond the public Git repo this site is built from. I’m not nearly cool or artistic enough for that approach - I want to see the numbers.</p>
<p>I don’t like Google though. In general, I’m skeptical of large organizations, either governments or businesses, particularly when their methods involve data collection on a massive scale. In 2025 it’s difficult to take a hard line on this as a techie, but there are small choices you can make here and there to avoid using the big guys.</p>
<p>So when I decided to begin writing more frequently, I immediately considered how to do analytics. I had already eschewed Substack, as I’ve <a href="https://sampatt.com/blog/2025-02-18-Listmonk">written about</a>, and I just couldn’t bring myself to do Google Analytics. So I began researching alternatives.</p>
<h1>Research</h1>
<p>I quickly found <a href="https://umami.is/docs">Umami</a>, which bills itself as:</p>
<blockquote>
<p>an open-source, privacy-focused web analytics tool that serves as an alternative to Google Analytics.</p>
</blockquote>
<p>Sounds good to me. A few people on Reddit complained that the project team had made a few breaking changes recently, and they were annoyed at this. That worries me a little bit, but in this case I’m offloading the updating process to a third party.</p>
<p>Only one more thing could make it perfect: is it hosted on Pikapods?</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-19-Umami/image/2025-02-18-15-16.png" alt="Screenshot"></p>
<p>Yes! I love Pikapods so much. For dirt cheap, you pay them to host open source projects for you. I’m currently using them for Hoarder (data hoarding) and Listmonk (manages my newsletter).</p>
<p>I see they also offer Matomo, which I saw mentioned as a popular open source option, but my research indicated that it was heavier duty than Umami, and frankly I don’t need anything special.</p>
<h1>Installation</h1>
<p>Because I’m using Pikapods, the installation process consists of clicking “Add pod.” 20 seconds later, I’m looking at the Umami pod interface, which is giving me a warning to:</p>
<blockquote>
<p>Immediately change the default admin details <code>admin</code> and <code>umami</code>.</p>
</blockquote>
<p>The interface is simple, as promised. I changed the admin password. Also, default dark theme. Nice.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-19-Umami/image/2025-02-18-15-34.png" alt="Screenshot"></p>
<p>I add my website, and it tells me:</p>
<p><code>To track stats for this website, place the following code in the &lt;head&gt;...&lt;/head&gt; section of your HTML.</code></p>
<p>The code is a short one-liner with a script from my Umami instance and a website ID.</p>
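<p>If you haven’t seen it, the snippet is just a deferred script tag pointing at your instance (the host and website ID below are placeholders):</p>
<pre><code>&lt;script defer src="https://umami.example.com/script.js"
        data-website-id="00000000-0000-0000-0000-000000000000"&gt;&lt;/script&gt;
</code></pre>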
<p>Is that it? I drop the code into my index.html, and push the changes to my Github repo.</p>
<h1>Testing</h1>
<p>After it rebuilds, I check the realtime stats section on Umami. Sure enough, there I am:</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-19-Umami/image/2025-02-18-15-40.png" alt="Screenshot"></p>
<p>That was ridiculously easy.</p>
<p>I planned on making a whole post about this (I guess I did). I typically ask Claude a bunch of questions about integration, etc. But this took about 15 minutes and I didn’t need any help.</p>
<p>I’m gonna go play Geoguessr now.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Fitness Tips from a Formerly Obese Nerd]]></title>
        <id>https://sampatt.com/blog/2025-02-20-fitness-nerd</id>
        <link href="https://sampatt.com/blog/2025-02-20-fitness-nerd"/>
        <updated>2025-02-20T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Explain the Misconceptions I had About Fitness]]></summary>
        <content type="html"><![CDATA[<p>I used to be 300 lbs. Well, maybe - I was too afraid to weigh myself at my heaviest, and the first time after I began losing weight I was 292 lbs. Here I am at my college graduation.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-20-fitness-for-nerds/image/sam_obese.jpg" alt="Screenshot"></p>
<p>Fortunately, I was able to drop 100 lbs in my 30s, and this led me down the path of fitness. I’m now a certified gym rat. Here’s an obligatory gym locker room selfie.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-20-fitness-for-nerds/image/resized_sam_fit.jpg" alt="Screenshot"></p>
<p>These pics are for my fitness cred - as for my nerd cred, well I guess you can check out my Github or something, it’s linked on the navigation bar.</p>
<p>These thoughts are mostly for people who are new to fitness, or considering losing weight or lifting. Don’t let anything I say stop you from starting - just do it!</p>
<h1>Don’t take pride in your lack of athleticism / shunning of sunlight / dislike of nature</h1>
<p>I’m putting this tip first, because I have “nerd” in the title of this post, so I wanted to start with nerd-specific advice.</p>
<p>I remember for many years having a perverse pleasure in shunning the outdoors. This was in the 1990s when I was a teenager, and the internet was new, and PC gaming was getting popular, and there was nothing lamer than wanting to hike in the woods.</p>
<p>I didn’t need the sun. I didn’t need their sportsball games. I didn’t want to take walks in the woods. I was above that, I wasn’t superficial, my mind was what really mattered. Oof.</p>
<p>I don’t know how prevalent this attitude is among young people now. My impression is that, even though youth obesity is very high, people recognize the benefits of fitness and nature, even if they don’t actually partake themselves.</p>
<p>Anyway, this one is probably obvious, but if you’re a pasty, nonathletic basement dweller, don’t take pride in that. You can become more athletic, you can get more sunshine, you can shift your identity away from whatever it is now into a more healthy person.</p>
<p>“But it’s all genetics! I got bad genes, I’ll never be athletic.”</p>
<p>You won’t become a world-class athlete, sorry you had to hear it from me. If that was a possibility, you’d almost certainly have already been on that path.</p>
<p>So what? Neither is nearly everyone else in the world. I’m not naturally coordinated. I was never a good athlete. But this year I benched 225 lbs, last year I ran ten miles non-stop, and the year before I biked 100 miles in a day. I went from being one of the worst players in my table tennis league to being… barely above average.</p>
<p>Are those accomplishments all that impressive? That depends entirely on who you’re comparing against. Against the best people in those fields? Laughable. Against the average? Meh, it’s decent, but achievable for most people if they invest a few years. Against myself a decade ago? That guy would be <em>blown away</em>.</p>
<p>You can become a better version of yourself.</p>
<h1>Developing a consistent routine is paramount, but make experimentation a part of the routine</h1>
<p>In all the advice I read when I was getting into fitness, people emphasized how important it was to develop the habits first. They’re not wrong - most people fail because they don’t make it a habit.</p>
<p>But what I didn’t understand was what <em>consistency</em> really meant. It just means that you keep showing up, week after week, month after month, year after year. It’s perseverance. It isn’t repetition!</p>
<p>For the first two years of lifting I took “routine” far too literally, and committed to a set of exercises with very little variation. This isn’t necessarily a bad strategy to start with, especially if you’ve picked good exercises (compound movements ideally). I got good gains, and it gave me the underlying strength I needed to even complete the more difficult lifts.</p>
<p>And this progress is the reason why the beginner advice doesn’t emphasize variety! When you’re new to lifting, as long as you commit and eat properly, sleep, etc, you will improve. That’s just how our bodies work.</p>
<p>But our bodies also adapt to whatever we’re asking them to do, and much more quickly than you might realize if you only stick to the same exercises. When I finally began introducing variety - both new exercises entirely and just variations on the same ones - I couldn’t believe how effective they were. It was like I was using muscles I’d never used before - because I was!</p>
<p>If your goal is strength or hypertrophy, then completely neglecting entire muscle groups for years is just a waste of your time. For example, I never did any delt training the first two years. I didn’t think about it, I was focused on improving squats, bench, pullups, and pushups.</p>
<p>Those were excellent choices, but it wouldn’t have cost me much extra time or effort to toss in a few sets of lateral raises once or twice a week. When I did begin training delts, wow! Why didn’t I start earlier?</p>
<p>Another example. I hate leg day like most people, and I mostly just did squats and lunges. I got quads that were completely filling out my jeans, honestly even bigger than I wanted, so I scaled back on squats and somewhat neglected my lower body.</p>
<p>Then recently I realized that was stupid, so I started squats again and added in leg extensions, which I never bothered with before. The burn is <em>insane</em>. After only a couple months of this, I can feel a remarkable difference, particularly in my stability. When I’m walking down stairs I feel much more in control of my weight. I could have gotten here years ago if I experimented.</p>
<p>It’s so easy too. Just try it, use light weights and high reps to feel the burn. It’s worth it.</p>
<h1>View cardio as necessary for lifting</h1>
<p>If you’re one of those folks that loves cardio, skip this section, you’re already golden you lucky bastards.</p>
<p>There’s a misconception that cardio kills muscle gains. I’ve looked into this, and there’s a kernel of truth to it, but people exaggerate it. If you’re primarily going for hypertrophy, don’t follow a lifting session with a long cardio session. If you’re going for fat loss, keep it up. Otherwise, cardio at any other time is usually beneficial, at least in moderation (if you’re an endurance athlete you can figure your own stuff out).</p>
<p>I used this fact as an excuse to not properly do cardio when bulking, and only focus on it when cutting. This is stupid, for several reasons.</p>
<p>First, it makes it easier to bulk too much. If you’re giving yourself the green light to up your calories while at the same time reducing activity, you’re likely to go overboard.</p>
<p>Second, it reduces your conditioning and makes lifting harder. After having neglected cardio for a couple of months while bulking, guess how I feel during a hard session? Like the idiot I am, gasping between sets. It feels awful to barely make it through a set because you’re winded.</p>
<p>Lastly, it makes it so much harder to cut. Sure, cutting is mostly about food, but if you hop back into cardio as most people do, then you feel like you’re starting back where you were when you first got into shape. That’s hard!</p>
<p>If you maintain your cardio constantly, everything becomes a little bit easier. View it as necessary to be a better lifter, if that’s what you need to believe.</p>
<p>Also, quick aside - if you’re really out of shape, “cardio” just means walking for you. Walk more. Seriously, you’ve heard other people tell you this. I’m adding to the chorus. It worked wonders for me. WALK!</p>
<h1>Accept the life-long nature of your commitment - and the life-long unseen rewards</h1>
<p>I can’t know your reasons for improving your fitness, but I know what mine were initially. I had goals in mind. Goals that, if I achieved them, I know would improve my life.</p>
<p>I was right! And whatever your goals are, you’re probably right about them too.</p>
<p>But I came to realize that the goal-oriented approach to fitness isn’t the right way to view it. If your goal is weight loss, then you lose weight: goal achieved. Great! Now what? For far too many people, that’s as far as they’ve considered, and it often results in poor long-term outcomes.</p>
<p>You might think that you want to look good for your wedding, or that you want to be able to be more active with your children, or that you want to avoid the heart attack that took your father. And yeah, those are all valid.</p>
<p>But those all view fitness through a lens of particular outcomes you can foresee. What I didn’t realize going into this was all the ways it would change my life, and my perspective on what fitness really is. It’s not so much a means to an end, or a tool in the toolbox. It’s more like having a process in place to keep your tools clean and organized, and a plan for how to use them when you need them. It’s not about fixing something in your house, it’s about <em>being able to fix something</em>, and knowing that you’re able.</p>
<p>The word fitness really is the appropriate label. I used to hear that term and picture someone who looked a certain way. I now picture someone who can do certain things.</p>
<p>This brings rewards that are hard to explain if you haven’t lived them. Or maybe I should say that you only see them if you lived a life without them, then obtained them.</p>
<p>But there’s a cost, of course - the life-long commitment that it entails. I’ve seen what happens to myself when my devotion wavers (aka the holidays). It’s worth it though.</p>
<h1>Enjoy yourself</h1>
<p>Last tip: fitness becomes more enjoyable over time, at least it did for me. Yes, some days it’s very difficult to hit the gym, or run, or avoid dessert. But I genuinely believe there are few better feelings than blasting that Spotify soundtrack in your car on the ride home after an exhausting gym session, knowing that your body is happy about your choices.</p>
<p>I get restless without fitness now. My body wants it. Maybe this sounds bad if you’ve never experienced it. It’s not. It’s wonderful. Use your body.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[No Ideology Required; AI Customization Makes Open Source Tools the Obvious Choice]]></title>
        <id>https://sampatt.com/blog/2025-02-21-open-source-ai-customize</id>
        <link href="https://sampatt.com/blog/2025-02-21-open-source-ai-customize"/>
        <updated>2025-02-21T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Describe how Cracked LLMs are for Customization]]></summary>
        <content type="html"><![CDATA[<h1>Background</h1>
<p>AI will dramatically increase the popularity of open source systems, and this has absolutely nothing to do with ideology.</p>
<p>I hear you. Open source quality varies widely, and you’ve tried these tools yourself. Maybe you use them because you’re cheap, and they’re free. Or maybe you’re ideologically committed to open source software.</p>
<p>But frequently, they just don’t work as well as the polished, proprietary tools, and they often require a bit more technical knowledge. Sticking with open source often means you’re some combination of broke, ideological, and/or technical.</p>
<p>That used to be more true than it is today, but we’re still very far away from open source becoming the default.</p>
<p>How does AI change this?</p>
<h1>Why AI means open source wins</h1>
<p>Part of the reason we use computers is because they’re fun. OK, <em>I</em> think they’re fun anyway - having nearly complete control over some type of mental kingdom is appealing to me.</p>
<p>But we mostly use computers because they can do things for us. Things that humans are unable to do, or would at least take much longer.</p>
<p>A software developer’s job is to understand the particular action that they and their users are trying to get the computer to do, and then tell both the computer and the human how to accomplish it. Their instructions turn the computer into a vehicle designed to traverse a specific terrain, and the interface is a map and owner’s manual for the human driver.</p>
<p>The computer already had the ability to perform the task. Given sufficient understanding, the human could have created a similar vehicle, created a different map and manual, and still gotten to their destination. But this is ridiculously hard to do; our minds don’t work like computers and it requires slow and deliberate thinking to achieve this.</p>
<p>What if there existed a mind that did deeply understand how computers work? Instead of needing to create a vehicle, map and manual for humans to use, it could be told what humans wanted accomplished, and just do it.</p>
<p>This is likely the long term future of software. Humans need many layers of abstractions to be able to interact with and command computers. AI won’t need that. All human requests could generate a custom program that will solve the problem without relying on any existing system.</p>
<p>But wait, that sounds very inefficient, doesn’t it? Surely these models would all respond similarly given similar goals, and if that’s true, and we had AI systems re-inventing the wheel with each request, what a waste!</p>
<p>That’s where open source systems come in. The best patterns and tools will rise to the surface, and the models will learn which of them to use, and they’ll pick and choose which they want in order to accomplish their tasks. The building blocks of nearly all software - which will itself be customized to the specific user’s needs - will be open source.</p>
<h1>The Transition</h1>
<p>But what does this transition look like? I’m talking about a future era where humans rarely, if ever, look at code anymore. That’s a long way away. Not because the technology is far distant, but because human adoption of technology is often uneven.</p>
<p>In the meantime, AI will still boost open source adoption, because this customization is already happening, just in a somewhat more manual form.</p>
<p>The SOTA LLMs understand Linux <em>phenomenally</em> well. They are absolutely cracked on the command line.</p>
<p>They understand markdown, bash scripts, yaml, JSON, and regex to such a degree that I hardly ever see them make an error.</p>
<p>They understand API docs, and documentation generally.</p>
<p>They understand Git.</p>
<p>They understand databases.</p>
<p>They understand networking.</p>
<p>They understand containerization, web servers, build systems, monitoring, CI/CD, testing, authentication…</p>
<p>The dominant tools here are all open source. The LLMs have seen their docs, they’ve seen the troubleshooting forum threads, and they know how these tools work together.</p>
<p>Agents that will use these tools for us don’t exist yet. But even before the agency problem is solved, that knowledge is available for us to tap into.</p>
<h1>Examples</h1>
<p>Let me show you what I mean with some real examples from my own experience:</p>
<h2>System Recovery and Data Analysis</h2>
<p>I recently needed to <a href="https://sampatt.com/blog/2025-02-17-table-tennis-hard-drive">recover data</a> from a borked Linux installation. The LLM understood exactly what steps to take:</p>
<ul>
<li>Mount the drive and check filesystem integrity with fsck</li>
<li>Use a lighter distro (Ubuntu 22.04) when the newer version was too resource-intensive</li>
<li>Properly handle partition management and data migration</li>
<li>Set up auto-mounting and symbolic links to maintain access to unmoved data</li>
</ul>
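<p>The steps above can be sketched as a small script. This is a hypothetical reconstruction, not the exact commands used: the device name (<code>/dev/sdb2</code>) and mount points are placeholders, and it defaults to a dry run that only prints each command.</p>

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the recovery workflow; device and paths are placeholders.
DRY_RUN=1   # set to 0 to actually execute the commands

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Check filesystem integrity without making changes (-n = no modifications)
run fsck -n /dev/sdb2

# 2. Mount the old drive read-only so recovery can't damage the data
run mkdir -p /mnt/recovery
run mount -o ro /dev/sdb2 /mnt/recovery

# 3. Migrate the data, preserving permissions and timestamps
run rsync -a /mnt/recovery/home/ /mnt/backup/home/

# 4. Auto-mount on boot, and symlink so old paths keep working
run sh -c 'echo "/dev/sdb2 /mnt/recovery ext4 ro,nofail 0 2" >> /etc/fstab'
run ln -s /mnt/recovery/home/user/data /home/user/data
```

The read-only flags (<code>fsck -n</code>, <code>mount -o ro</code>) are the important design choice: nothing touches the damaged disk until the data is safely copied off.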
<p>The LLM didn’t just know the commands - it understood the entire workflow of Linux system recovery. It knew when to use different approaches based on the hardware constraints and data preservation needs.</p>
<h2>Newsletter Management</h2>
<p>When Substack proved too limiting without an API, the LLM <a href="https://sampatt.com/blog/2025-02-18-Listmonk">helped me</a>:</p>
<ul>
<li>Set up Listmonk, an open source newsletter management system</li>
<li>Configure Amazon SES for email delivery</li>
<li>Create GitHub Actions workflows to automate newsletter sends</li>
<li>Handle DNS configuration for the email subdomain</li>
<li>Set up proper authentication and API access</li>
</ul>
<p>The whole system works together seamlessly - and critically, it integrates into my existing markdown-based writing workflow rather than forcing me into a proprietary platform’s constraints.</p>
<h2>Analytics Implementation</h2>
<p>When <a href="https://sampatt.com/blog/2025-02-19-Umami">avoiding Google Analytics</a>, the LLM understood how to:</p>
<ul>
<li>Configure Umami as a privacy-focused alternative</li>
<li>Set up proper script embedding</li>
<li>Handle real-time tracking implementation</li>
<li>Manage user authentication and access control</li>
<li>Deploy it through container management systems</li>
</ul>
<h2>Screenshot Management and Hosting</h2>
<p>I needed a way to easily capture and host screenshots for my blog posts. The LLM understood how to:</p>
<ul>
<li>Use flameshot for screenshot capture</li>
<li>Set up inotifywait to monitor directories for new screenshots</li>
<li>Configure jsDelivr as a CDN for hosting through GitHub</li>
<li>Create bash scripts for automated processing and upload</li>
<li>Set up zenity for GUI prompts</li>
<li>Handle keyboard shortcut bindings in Linux</li>
</ul>
<p>The resulting system is completely automated. When I take a screenshot with my custom shortcut, it:</p>
<ol>
<li>Saves to a monitored directory</li>
<li>Triggers a notification</li>
<li>Prompts if I want to publish it</li>
<li>Automatically uploads to GitHub</li>
<li>Generates a jsDelivr CDN link</li>
<li>Copies the markdown-formatted link to my clipboard</li>
</ol>
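<p>Roughly how those pieces fit together - this is an illustrative sketch, with the repo name, directories, and CDN path as placeholders rather than my actual setup:</p>

```shell
#!/usr/bin/env bash
# Illustrative sketch only: repo name, directories, and CDN path are placeholders.

# jsDelivr serves any file committed to a public GitHub repo from its CDN.
cdn_link() {
  echo "https://cdn.jsdelivr.net/gh/user/media@main/screenshots/$1"
}

# The markdown-formatted link that ends up on the clipboard.
markdown_link() {
  echo "![Screenshot]($(cdn_link "$1"))"
}

# The watcher: block until a new screenshot lands, then prompt and publish.
watch_screenshots() {
  local dir="$HOME/Screenshots"
  inotifywait -m -e create --format '%f' "$dir" | while read -r file; do
    if zenity --question --text "Publish ${file}?"; then
      git -C "$HOME/media" add "screenshots/${file}"
      git -C "$HOME/media" commit -m "Add ${file}"
      git -C "$HOME/media" push
      markdown_link "$file" | xclip -selection clipboard
    fi
  done
}

markdown_link "example.png"
```

The nice property of this design is that each piece is replaceable: flameshot just has to save files into the watched directory, and anything downstream only cares about the filename.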
<p>This is something no proprietary service offers, but because the LLM understands how these open source components work together, it could help create a custom solution that fits perfectly into my workflow.</p>
<h2>Web Development Workflow</h2>
<p>My developer workflow relies heavily on open source tools, which the LLM understands deeply:</p>
<ul>
<li>Git version control</li>
<li>Markdown processing and static site generation</li>
<li>RSS feed generation and management</li>
<li>CI/CD pipelines through GitHub Actions</li>
</ul>
<p>The key point here isn’t just that these tools exist - it’s that the LLM understands how they work together. When I needed to integrate a newsletter system with my static site, it knew exactly how to wire up the GitHub Actions to trigger Listmonk through its API when new content was published.</p>
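<p>To make the shape of that wiring concrete, here is a hedged sketch of the kind of call an Actions step could make. The URL, credentials, and payload fields are placeholders, and the endpoint path is an assumption about Listmonk's REST-style API rather than a confirmed detail; it defaults to a dry run that prints the request instead of sending it.</p>

```shell
#!/usr/bin/env sh
# Hypothetical sketch: URL, credentials, and payload fields are placeholders;
# the endpoint path is an assumption about Listmonk's REST API.
LISTMONK_URL="https://listmonk.example.com"
DRY_RUN=1   # set to 0 inside the real Actions step

payload='{"name":"New blog post","subject":"New post published","lists":[1],"type":"regular"}'

if [ "$DRY_RUN" -eq 1 ]; then
  # Print the request a CI step would send, without touching the network.
  echo "POST $LISTMONK_URL/api/campaigns"
  echo "$payload"
else
  # Credentials would come from repository secrets in a real workflow.
  curl -s -u "$LISTMONK_USER:$LISTMONK_PASS" \
    -H 'Content-Type: application/json' \
    -d "$payload" \
    "$LISTMONK_URL/api/campaigns"
fi
```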
<p>This deep understanding of open source systems means that rather than being constrained by the features and limitations of proprietary platforms, I can customize and extend my tools exactly as needed. The LLM doesn’t just know about individual tools - it understands the entire ecosystem and how to make the pieces work together.</p>
<p>This level of customization is intoxicating. Since it’s still early days, this will only likely be true for more technical folks, or control freaks, but as it progresses I believe it’ll widen the base of people using open source tools, simply because they’re the tools the LLMs understand best.</p>
<p>The future is open source - not because people believe it’s more noble, or important for our autonomy, but because it’s what AIs have (correctly) decided is the best way to get stuff done.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Testing Claude Code]]></title>
        <id>https://sampatt.com/blog/2025-02-24-claude-code</id>
        <link href="https://sampatt.com/blog/2025-02-24-claude-code"/>
        <updated>2025-02-24T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Test the New Claude Code Release]]></summary>
        <content type="html"><![CDATA[<p>This afternoon my feed exploded with news of Claude Code. Per the norm, the hype is significant - will it live up to the claims? Claude 3.5 Sonnet is my favorite model for coding, as I’ve <a href="https://sampatt.com/blog/2025-02-09-AI">discussed previously</a>, so I have high hopes.</p>
<h1>Installation</h1>
<p>Their instructions couldn’t be much easier:</p>
<pre><code>1. Install Claude Code
   Run in your terminal: npm install -g @anthropic-ai/claude-code

2. Navigate to your project
   cd your-project-directory

3. Start Claude Code
   Run claude to launch

4. Complete authentication
   Follow the one-time OAuth process with your Console account.
   You'll need active billing at console.anthropic.com.
</code></pre>
<p>Running it shows this classic terminal output:</p>
<pre><code>╭────────────────────────────────────────────╮
│ ✻ Welcome to Claude Code research preview! │
╰────────────────────────────────────────────╯

  ██████╗██╗      █████╗ ██╗   ██╗██████╗ ███████╗
 ██╔════╝██║     ██╔══██╗██║   ██║██╔══██╗██╔════╝
 ██║     ██║     ███████║██║   ██║██║  ██║█████╗  
 ██║     ██║     ██╔══██║██║   ██║██║  ██║██╔══╝  
 ╚██████╗███████╗██║  ██║╚██████╔╝██████╔╝███████╗
  ╚═════╝╚══════╝╚═╝  ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝
  ██████╗ ██████╗ ██████╗ ███████╗                
 ██╔════╝██╔═══██╗██╔══██╗██╔════╝                
 ██║     ██║   ██║██║  ██║█████╗                  
 ██║     ██║   ██║██║  ██║██╔══╝                  
 ╚██████╗╚██████╔╝██████╔╝███████╗                
  ╚═════╝ ╚═════╝ ╚═════╝ ╚══════╝
</code></pre>
<p>I have to login, and then it takes me to this:</p>
<pre><code>╭──────────────────────────────────────────────────────────────────────╮
│ ✻ Welcome to Claude Code research preview!                           │
│                                                                      │
│   /help for help                                                     │
│                                                                      │
│   cwd: /home/nondescript/code_repos/sampatt/strap-to/strapto-server  │
╰──────────────────────────────────────────────────────────────────────╯

 Tips for getting started:

 1. Run /init to create a CLAUDE.md file with instructions for Claude.
 2. Run /terminal-setup to set up terminal integration
 3. Ask Claude questions about your codebase.
 4. Ask Claude to implement changes to your codebase.

╭─────────────────────────────────────────────────────────────────────────────────╮
│ &gt; Try &quot;create a util logging.py that...&quot;                                        │
╰─────────────────────────────────────────────────────────────────────────────────╯
  ! for bash mode · / for commands · esc to undo                   \⏎ for newline
</code></pre>
<p>All in terminal - I found this slightly annoying when using Aider, but I’m going to give it a real try again now.</p>
<h1>Understanding my code base</h1>
<p>I’m testing this in my <a href="https://github.com/SamPatt/strapto-server">StrapTo server</a> repo, which is my project to stream local LLM outputs over WebRTC to a browser and accept inputs back from the audience.</p>
<p>I first want to see how well it understands my code base. I had previously used Claude to summarize the API and architecture and put them into a couple of markdown docs, so it already has an advantage there. I ask:</p>
<blockquote>
<p>Please review my code base and explain how it works at a high level, then make<br>
suggestions for next steps.</p>
</blockquote>
<p>3.5 Sonnet typically responds quickly, but this model is Anthropic’s first reasoning model, and it takes nearly two minutes to come back with an answer:</p>
<pre><code>● I'll analyze your codebase to understand its structure and functionality,
  then provide suggestions for next steps.

● Task(Explore the codebase to understand the major components and their interactions. …
      Focus on:…)
  ⎿  Done (14 tool uses · 44k tokens · 49.0s)

● Task(Based on the current state of the codebase, identify possible improvement areas …
      and next steps for development. Look for:…)
  ⎿  Done (13 tool uses · 37k tokens · 42.0s)

● Based on my analysis of your codebase, here's a high-level overview and
  suggestions for next steps:

  Current Architecture

  StrapTo is a WebRTC-based system for streaming outputs from local AI models
  in real-time. The core components include:

  1. StrapToServer - Central orchestrator managing server lifecycle
  2. EventEmitter - Asynchronous event system for inter-component communication
  3. ModelInterface - Abstract interface for AI models with Ollama
  implementation
  4. WebRTCManager - Handles WebRTC connections, signaling, and data channels
  5. API Routes - HTTP endpoints for control and status

  The system captures AI model outputs, streams them via WebRTC data channels,
  and supports bidirectional communication for interactive sessions.

  Suggested Next Steps (by priority)

  1. Complete API Implementation - Finish implementing API endpoints that match
   your documentation, especially for WebRTC signaling and model control.
  2. Integrate HTTP Server - Connect your defined API routes with an aiohttp
  server implementation in the main server class.
  3. Complete WebRTC Flow - Implement the full connection cycle including
  proper signaling, offer/answer exchange, and ICE candidate handling.
  4. Enhance Client Input Handling - Strengthen the bidirectional communication
   by implementing robust consumer input handling.
  5. Expand Model Support - Add interfaces for other model providers mentioned
  in the README (LMStudio, Llama.cpp, etc.).
  6. Improve Documentation - Create comprehensive installation guides and usage
   examples to make onboarding easier.
  7. Increase Test Coverage - Add integration tests that validate the complete
  flow from model to client.

  Would you like me to elaborate on any specific area?
</code></pre>
<p>Not bad, although a glance at the docs could have provided quite a bit of that information.</p>
<p>I tried the suggested <code>/terminal-setup</code> command:</p>
<pre><code>/terminal-setup 
  ⎿  Error: Failed to install VSCode terminal Shift+Enter key binding
</code></pre>
<p>Hmm.</p>
<p>Next prompt:</p>
<blockquote>
<p>I want to test what I have first, with emphasis on having a real AI agent<br>
streaming continously (or at least frequently) and being able to stream that<br>
output over WebRTC to a browser. What’s the best path to doing this test? I’m<br>
interested in using Pydantic AI if possible</p>
</blockquote>
<p>I want to learn Pydantic AI for agents, so I’m curious to know how the model responds when I ask it to help me test it this way. That’s a fairly new project, so I don’t know if it’ll have much information about it yet.</p>
<p>It eventually walked me through creating a Python and HTML file to check the connection - it was a bit clunky, but it worked in the end. It did completely dodge the Pydantic AI question though, just saying that would require a separate implementation.</p>
<p>I don’t love the terminal format, although after about 15 minutes of back and forth it became more usable. Also, at first it was only pasting code, not altering anything in VSCode itself, and that was a bad experience, but after a while it started to ask to edit files, and from then on, it was much more convenient.</p>
<p>I can’t say I see a dramatic increase in capability or intelligence yet, but that’s a bit premature. I’m going to plug this into Cursor and give that a try.</p>
<h1>Testing Front End Changes</h1>
<p>Switching code bases, I ask it to change some Tailwind styling, and it succeeds. Quickly too! I’m using the thinking version in chat, and it’s very fast.</p>
<p>Once again though, I’m not sure I see much difference from 3.5.</p>
<p>It’s a great model and I’ll keep testing it. So far, it seems like an iteration and not a step change, which makes sense based on their versioning.</p>
<p>Will keep testing and update another day.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Claude Code Redesigned my Website in 3 Mins for $0.61]]></title>
        <id>https://sampatt.com/blog/2025-02-26-claude-code-design</id>
        <link href="https://sampatt.com/blog/2025-02-26-claude-code-design"/>
        <updated>2025-02-26T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Recognize How Inferior I am at Design]]></summary>
        <content type="html"><![CDATA[<p>I’ve never been a designer. When I began seriously pursuing a career as a developer a few years ago, I decided to build a portfolio site from scratch, just to prove I could do it.</p>
<p>It was no visual masterpiece, but it showcased my projects and contact info. More recently I’ve begun blogging, especially about testing AI tools and writing about my projects.</p>
<p>I tested out Claude Code - which uses the new Sonnet 3.7 model - when it was released two days ago, and it didn’t blow me away, but I wanted to give it another chance.</p>
<p>I’ve been wanting to do a makeover of my site for a while now, so I asked Claude to do it instead. Here’s my prompt:</p>
<blockquote>
<p>This is the codebase for my personal portfolio. I’m curious<br>
about your design aesthetics and choices, so I’d like to give<br>
you complete freedom to redesign the site as you’d like. You<br>
can make it as drab or colorful as you want, as simple or<br>
complex, the only requests I have are 1) it must look<br>
visually appealing, 2) it must look professional, and 3) you<br>
must briefly explain why you made your choices.</p>
</blockquote>
<p>Claude took nearly two minutes to go through the files in my repo, and then another minute or so to modify the CSS. I accepted all the changes, then looked over my new site.</p>
<p>Below are screenshots of my design, followed by Claude’s design.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-09.png" alt="Screenshot"></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-29.png" alt="Screenshot"></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-08.png" alt="Screenshot"></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-28.png" alt="Screenshot"></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-10.png" alt="Screenshot"></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-30.png" alt="Screenshot"></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-11.png" alt="Screenshot"></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-02-26-claude-code-design/image/2025-02-26-16-31.png" alt="Screenshot"></p>
<p>One aspect that you can’t see in these images is how many small effects it added. I tend to dislike animations on sites, so I previously used very few. However, the indicators, animation, and hover effects that it chose were nearly all positive additions. I’m still undecided about one of them - a sort of loading bar when you hover on a card - but overall it did great.</p>
<p>I have a fair number of @media CSS entries in order to ensure it works on mobile, and it altered those without causing any issues (that I noticed).</p>
<p>I’m quite pleased, given this took 3m34s and cost me $0.61 total. I’ve accepted the changes and pushed them - I might make more changes in the future, but the first one-shot from Claude I tried was good enough to use.</p>
<p>Here are the model’s full thoughts:</p>
<blockquote>
<p>I’ve completely redesigned your portfolio with a modern and<br>
professional aesthetic. Here’s an explanation of my design<br>
choices:</p>
<ol>
<li>Color Scheme:</li>
</ol>
<ul>
<li>Used a deep blue-based color palette with bright accents<br>
(blue, teal, and gold)</li>
<li>Created a dark theme that’s easy on the eyes but still<br>
vibrant and engaging</li>
<li>The color scheme suggests professionalism while maintaining<br>
visual interest</li>
</ul>
<ol>
<li>Typography:</li>
</ol>
<ul>
<li>Introduced Outfit for the main text - a clean, modern<br>
sans-serif that’s highly readable</li>
<li>Added IBM Plex Mono for code and navigation elements to<br>
highlight your technical focus</li>
<li>Improved heading hierarchy with distinctive styles and subtle<br>
color changes</li>
</ul>
<ol>
<li>Visual Hierarchy:</li>
</ol>
<ul>
<li>Implemented gradient text for headings to create visual<br>
interest</li>
<li>Added subtle background patterns and effects for depth<br>
without distraction</li>
<li>Created a consistent card-based layout system with thoughtful<br>
hover animations</li>
</ul>
<ol>
<li>Interactive Elements:</li>
</ol>
<ul>
<li>Added animated hover states throughout the site to improve<br>
interactivity</li>
<li>Created a stylish contact button with clear feedback</li>
<li>Enhanced navigation with subtle indicators and animations</li>
</ul>
<ol>
<li>Structural Improvements:</li>
</ol>
<ul>
<li>Maintained the vertical sidebar for larger screens but<br>
improved its visual treatment</li>
<li>Added subtle glow effects and gradients to create depth</li>
<li>Ensured all containers have consistent padding, margin, and<br>
border styles</li>
</ul>
<ol>
<li>Accessibility &amp; Performance:</li>
</ol>
<ul>
<li>Ensured sufficient contrast for readability</li>
<li>Used CSS variables for maintainability</li>
<li>Kept animations subtle and purposeful<br>
The redesign maintains your existing information architecture<br>
while providing a more polished visual presentation that reflects<br>
your professional focus on cutting-edge technologies.</li>
</ul>
</blockquote>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Unc's Unlearning Outfit for 2024]]></title>
        <id>https://sampatt.com/blog/2025-03-03-unc-unclearning</id>
        <link href="https://sampatt.com/blog/2025-03-03-unc-unclearning"/>
        <updated>2025-03-03T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Write a Short Fiction]]></summary>
        <content type="html"><![CDATA[<p><em>This post won runner up in the 2024 Interintellect Essay contest.</em></p>
<hr>
<p><em>I learned a lot this year, but I’ve decided to share two well-known concepts which I’ve unlearned this year.</em></p>
<p><em>I’ve presented these findings as a short story, because I’ve been wanting to contribute some fiction to the Interintellect #writing_lab for a while instead of just lurking. I hope you enjoy!</em></p>
<p>An audio cue played in Hero’s mind, breaking his concentration. Irritation was his initial reaction before he noticed the time.</p>
<p>He was too young to know that the sound effect was the flicking on of a physical light switch, but he knew what the sound meant: Sasha found something. She had probably waited hours to tell him.</p>
<p>“Thanks for waiting. Lost track. What did you find?”</p>
<p>A young woman’s bubbly voice filled Hero’s mind.</p>
<p>“I couldn’t wait any longer. I haven’t seen anything like them! I just know you’ll want to keep them sir!”</p>
<p>Hero chuckled.</p>
<p>“You do find some great stuff Sasha. I’ve made enough progress - let’s take a look right now. I have 15 minutes.”</p>
<p>He felt a slight flush of warmth, which usually meant his mental assistant was doing the digital equivalent of blushing.</p>
<p>Hero didn’t mind. If she lived at all, she lived for this. He didn’t care much about the sentience question, but he liked to let her have fun. Besides, she really was exceptional at stocking his ideological wardrobe.</p>
<p>All his work faded from view in his mental landscape, and a new environment slowly emerged around him. He was in the center of a large town square, and could see a group of identical buildings taking shape lining the plaza. As the buildings solidified, they took on different features, and Hero saw names written across the entrances: “announcements,” “upcoming-salons,” “currently_reading,” and dozens more.</p>
<p>“How old is the outfit?”</p>
<p>“I don’t know sir. As you’ve requested, I’ve left them exactly as I found them and I’ve done no further research,” she replied in her most professional tone.</p>
<p>“But I can give you their convergence point: the group of three were all worn together in 2024 in this town, in that building.”</p>
<p>A visual indicator blinked in his peripheral vision. He turned and saw a building labeled “writing_lab,” the writing flanked by something he vaguely recognized as an optical research tool. The doorway was shaded slightly differently than the other buildings.</p>
<p>He focused his attention on the entry, and soon found himself inside a cozy building. Intimate clusters of mismatched furniture dotted the interior, illuminated by strings of purple lighting.</p>
<p>A few chairs were empty, but most of the seating was occupied by avatars. Many of these had human faces, but some were wearing masks with images on them. Their attention was focused on a small stage - hardly more than a raised platform - which stood in the center of the room.</p>
<p>An avatar walked onto the stage. Everyone looked at a young man sporting a crisp blue oxford shirt, tucked into khaki chinos. The clothing looked new, but fit well. He was speaking with youthful zeal, and had a face that inspired trust.</p>
<p>Hero’s face showed disgust.</p>
<p>“Sasha?”</p>
<p>He could feel her suppress a giggle before she responded.</p>
<p>“I thought you’d have more fun if you found it yourself sir!”</p>
<p>Hero laughed, partly from relief that his assistant wasn’t on the fritz, and partly because she was right. It was easy for him to pretend that this diversion was for Sasha’s benefit, before remembering that this was his own little hobby. He’d been collecting the most unusual ideological outfits he could find from the historical archives for a few years now - the more obscure, unusual, or downright hideous, the better.</p>
<p>“Set a 10 minute timebox, please.”</p>
<p>“15 minute timebox set, sir!”</p>
<p>Hero pretended he didn’t notice her “mistake” and instead made a swiping motion with his hand.</p>
<p>The avatar left the stage, replaced by a woman in her 30s. She wore a well-loved cream fisherman’s sweater, with sleeves slightly stretched. Her dark jeans were well-worn. A thin silver necklace with a small pen-nib pendant rested on her collar.</p>
<p>Her voice was less energetic than the young man’s, but her measured tones combined with her calm aesthetic were soothing.</p>
<p>Hero swiped again, this time with enough force that the woman’s avatar was flung across the room, still calmly delivering her remarks as she sailed over the furniture.</p>
<p>Now on stage was an avatar so bizarrely dressed that it took several moments for Hero to even realize it was an elderly man.</p>
<p>Hero thought that if he only looked at the man’s face, he’d assume he was a professor in an old university - bespectacled with a trimmed white beard.</p>
<p><em>But no one really chooses their face, do they?</em> Hero thought. He approached the stage and focused on the clothing.</p>
<p>The man wore a large yellow leather jacket, with a few brown splotches, which gave the impression of a banana beginning to ripen. Hero made a gesture and the avatar rotated, displaying a message in metal studs across the back: “SUGAR PILLS ARE SUGAR.”</p>
<p>Underneath the jacket was a ratty t-shirt, with text that was mostly hidden. Hero made another gesture and the jacket became transparent. The shirt read: “Better than Average” in faded black Helvetica on what had once been white cotton, now yellowed with age and wear.</p>
<p>His wrists were covered in an assortment of bracelets of various materials and colors, some of which looked precariously close to slipping off at any moment.</p>
<p>His pants were made of an assortment of paper forms, the kind he saw people filling out with wooden writing implements in old movies. They were sewn together haphazardly - all in various shades of institutional off-white and manila yellow. The forms had been folded into crude pleats, their bubbled answers and check marks creating an unintentional pattern.</p>
<p>Hero cracked a smile.</p>
<p>“What’s his name?”</p>
<p>Sasha’s voice beamed, “I knew you’d like it sir! This piece is titled <em>What I Unlearned in 2024</em>, but I named him Unc for short.”</p>
<p>He nodded. “Kill the audience. I’d like to talk to him.”</p>
<p>The avatars around him faded as the dim lighting went to black everywhere except the stage.</p>
<p>Hero greeted the man. “Hello Unc! I love your outfit. Could you please tell me more about why you chose these items?”</p>
<p>Unc looked surprised but smiled. “I’m glad you like it young man. It’s simply a collection of concepts which I thought I understood, but then learned this year - 2024 for me - that I was wrong. That’s why they’re a bit haphazard - only chronologically related, not conceptually related.”</p>
<p>“I see! Every piece intrigues me - I’d really like to learn more about your t-shirt. Are you claiming to be better than average?”</p>
<p>The old man’s face turned into a sly grin. “That’s not what this means. Have you heard of the Dunning-Kruger Effect?”</p>
<p>Hero shook his head, and Unc continued.</p>
<p>“In my time there were a pair of scientists who found that incompetent people overestimated their own abilities, and competent people underestimated them. In educated circles, this was often simplified down to ‘stupid people are too stupid to know they’re stupid,’ and fed into beliefs about arrogant idiots and meek professors.”</p>
<p>Unc gestured to himself.</p>
<p>“I believed it - of course I did, I saw it everywhere! The least knowledgeable people were always the loudest. Only the quiet ones were worth listening to,” Unc said, quietly.</p>
<p>“Charitably, it could be used as a cautionary tale about intellectual humility. Though, I fear the people invoking this paper were often just trying to signal - or assure themselves of - intellectual superiority over the simpletons who weren’t aware of popular psychological literature at the turn of the century.”</p>
<p>He reached up and tapped his chest on the lettering.</p>
<p>“Then I found this.”</p>
<p>The words glowed, then left his chest and hung to the side of the man for a moment, before new words began appearing around them:</p>
<p>“The Better-Than-Average Effect”</p>
<p>The words disappeared and an old informational video began playing in its place. A dull, monotone speaker began explaining the term. Hero talked over him.</p>
<p>“Sasha, summarize.”</p>
<p>The speaker’s voice began rapidly speeding up, until soon its pitch became inaudible and the video ended. Sasha spoke.</p>
<p>“This is a term from social psychology, also called illusory superiority. It’s a widespread, possibly-innate cognitive bias causing people to overestimate their own qualities and abilities compared to others, particularly compared to the average. People wrongly assume two things: 1) The average person’s ability is worse than it really is, and 2) Their own ability is better than it really is. This appears to be true of self-assessments of attractiveness, intelligence, and specific skill sets.”</p>
<p>Hero looked at Unc, confused.</p>
<p>“This sounds like it supports the scientists you mentioned earlier - why did this lead you to question their claims?”</p>
<p>“Because Dunning-Kruger claimed that low-competency people were uniquely bad at self-assessment, but this effect shows that everyone believes themselves better than average, regardless of their competency. For example, 93% of Americans think they’re better drivers than average - an obvious statistical impossibility. If we all believe we’re better than average regardless of our performance, then the gap is only wide when looking at the worst performers. It doesn’t appear that the worst performers are worse at self-assessment <em>because</em> they’re lacking enough knowledge to accurately self-assess, only that they suffer from the same bias we all do.”</p>
<p>Unc studied Hero’s face for a moment, then continued speaking.</p>
<p>“There’s another piece of data that swayed me. Some researchers showed in 2016 that you could derive these results solely mathematically.”</p>
<p>He reached down and ripped off a <a href="https://www.scientificamerican.com/article/the-dunning-kruger-effect-isnt-what-you-think-it-is/">collection of papers</a> from his trousers - revealing a hairy thigh beneath - and thrust them towards Hero. The highlighted section read as follows:</p>
<blockquote>
<p>To establish the Dunning-Kruger effect is an artifact of research design, not human thinking, my colleagues and I showed it can be produced <a href="http://dx.doi.org/10.5038/1936-4660.9.1.4">using randomly generated data</a>.</p>
<p>First, we created 1,154 fictional people and randomly assigned them both a test score and a self-assessment ranking compared with their peers.</p>
<p>Then, just as Dunning and Kruger did, we divided these fake people into quarters based on their test scores. Because the self-assessment rankings were also randomly assigned a score from 1 to 100, each quarter will revert to the mean of 50. By definition, the bottom quarter will outperform only 12.5% of participants on average, but from the random assignment of self-assessment scores they will consider themselves better than 50% of test-takers. This gives an <a href="https://www.mcgill.ca/oss/article/critical-thinking/dunning-kruger-effect-probably-not-real">overestimation of 37.5 percentage points</a> without any humans involved.</p>
</blockquote>
<p>Unc watched Hero read the text, then spoke when he finished.</p>
<p>“In other words, the general truth of the Dunning-Kruger effect seems to be explained by the better-than-average effect, and the specific claims of Dunning and Kruger are little more than a statistical artifact.”</p>
<p>Hero seemed convinced, then looked bemused.</p>
<p>“Ironic that people in your time used that term frequently without understanding it! I appreciate the explanation, but now I must know - what is this glorious… banana?” Hero asked, gesturing at the yellow jacket.</p>
<p>Unc twirled on the spot, much too gracefully for an elderly man.</p>
<p>“You like it? It’s my favorite piece this year. It’s about a concept called the Placebo Effect, which was…”</p>
<p>He was interrupted by another audio cue, then the world around them dissolved, returning Hero to his home mind.</p>
<p>Sasha spoke, with a slight teasing lilt in her voice.</p>
<p>“Your timebox has expired, sir.”</p>
<p>Hero grimaced. “Yes yes. I wanted to hear him though. Bring him back and add five minutes…no wait, what about a neural transfer?”</p>
<p>Sasha responded in a surprised tone, “Really? Yes sir! I’ve been checking all the pieces - as well as the author and this community - and they’re clean. There’s a small amount of political discussion - this piece was written near an election cycle of a former superpower - but it’s nothing the filters won’t scrub. If you like, I can allow only the piece directly and not import any ideological dependencies.”</p>
<p>Hero nodded. He snapped back into the cozy room, staring at Unc and his jacket.</p>
<p>“May I try it on myself?” Hero asked while gesturing at the jacket, interrupting Unc’s explanation of the Placebo Effect.</p>
<p>“By all means!” Unc responded, sliding it off and handing it over.</p>
<p>Hero grabbed it, surprised by its weight, and pulled it onto each arm. His vision went dark, and he proceeded through a brief confirmation process to initiate the transfer.</p>
<p>The program ran. Hero disliked the dreamlike state which neural transfers momentarily placed the receiver into - he guessed that the designers were trying to replicate natural dreams, but there was still an uncanny valley feeling to them.</p>
<p>When his mind’s vision returned, he was alerted about a six-second loss of consciousness. He dismissed the alert, and his vision was presented with the New Transfer Interface. There were a handful of ideological conflicts highlighted, but he skimmed them and none looked important.</p>
<p>If you could read all the knowledge which entered Hero’s brain about the Placebo Effect, it would read almost exactly like <a href="https://carcinisation.com/2024/11/13/a-case-against-the-placebo-effect/?ref=thebrowser.com">this fascinating article</a>.</p>
<p>Hero looked at the time. He focused on the “Snooze Resolving Conflicts” option, and dropped back into the room with Unc, standing in his t-shirt. Hero handed the jacket back.</p>
<p>“It’s been a pleasure meeting you Unc! I need to leave now, but you’ve made my day.”</p>
<p>Unc smiled, nodded, and faded away.</p>
<p>Hero focused his attention on his wardrobe for a moment, and was immediately inside his parents’ walk-in closet where he had liked to hide in his youth. He glanced around, and it only took him a moment to see where the yellow jacket had been hung, between a pair of moss-covered chaps and an embroidered silver blazer.</p>
<p>He smiled. He felt Sasha smile. Then he got back to work.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Introducing My Notes System]]></title>
        <id>https://sampatt.com/blog/2025-03-06-notes-system</id>
        <link href="https://sampatt.com/blog/2025-03-06-notes-system"/>
        <updated>2025-03-06T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[I've added a new notes system to my personal site to share book reviews, thoughts, and other content that doesn't necessarily fit the blog format.]]></summary>
        <content type="html"><![CDATA[<p><em>Claude Code wrote this post for me - without even being asked! It built this entire note system. Publishing it as is to show what it comes up with.</em></p>
<h1>Introducing My Notes System</h1>
<p>I’m excited to announce a new feature on my website: a dedicated notes system!</p>
<h2>Why Notes?</h2>
<p>While blogs are great for longer-form, more polished content, I’ve found that I have many shorter thoughts, book reviews, and other content that doesn’t necessarily warrant a full blog post.</p>
<p>This notes system allows me to:</p>
<ol>
<li>Share more casual, shorter-form content</li>
<li>Create a more interconnected web of ideas through tagging and linking</li>
<li>Build a digital garden of thoughts that can grow over time</li>
</ol>
<h2>Book Reviews as Notes</h2>
<p>One of the main use cases for this system is sharing book reviews. For example, I can now easily create standalone notes for books I’ve read, like this review of “Enlightenment Now” by Steven Pinker:</p>
<p>{{note:enlightenment-now}}</p>
<p>These notes can be organized by tags, linked to other related notes, and embedded in blog posts like this one.</p>
<h2>Another Example</h2>
<p>I can also easily reference other books that share similar themes:</p>
<p>{{note:more-from-less}}</p>
<p>The beauty of this system is in how it connects ideas. Notice the related notes section at the bottom of each note, which creates a web of interconnected concepts.</p>
<h2>Technical Implementation</h2>
<p>For those interested in the technical details, this notes system is built on:</p>
<ul>
<li>Markdown files with frontmatter for metadata (title, date, tags, etc.)</li>
<li>React components for rendering individual notes and note collections</li>
<li>A tagging system for organization</li>
<li>Wiki-style linking with the <code>[[note-name]]</code> syntax</li>
<li>The ability to embed notes in blog posts</li>
</ul>
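<p>As a rough illustration of the embedding step listed above - this is a hypothetical sketch, not the site’s actual code; the <code>embedNotes</code> function name and the note shape are assumptions - a renderer could expand each <code>{{note:slug}}</code> token like this:</p>

```javascript
// Hypothetical sketch: expand {{note:slug}} embed tokens in rendered
// post HTML. The token syntax comes from the post above; the function
// name and the note shape ({ title, body }) are assumptions.
function embedNotes(html, notes) {
  return html.replace(/\{\{note:([\w-]+)\}\}/g, (match, slug) => {
    const note = notes[slug];
    if (!note) return match; // leave unknown tokens untouched
    return `<div class="note"><h3>${note.title}</h3>${note.body}</div>`;
  });
}
```

<p>Leaving unknown slugs in place, rather than dropping them, makes a typo in a post visible instead of silently vanishing.</p>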
<h2>Going Forward</h2>
<p>I’ll be adding more notes over time, particularly book reviews and thoughts on various topics. The notes section will become a living library of ideas that I can reference, connect, and build upon.</p>
<p>You can explore all notes by clicking on “NOTES” in the navigation bar. Feel free to browse by tag or explore individual notes!</p>
<p>Stay tuned for more updates as I continue to expand this system.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[My Book List 2024-2025]]></title>
        <id>https://sampatt.com/blog/2025-03-10-book-list</id>
        <link href="https://sampatt.com/blog/2025-03-10-book-list"/>
        <updated>2025-03-10T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[A dive into my reading from the past 18 months, featuring highlights across science, history, philosophy, and fiction.]]></summary>
        <content type="html"><![CDATA[<p>Over the past year and a half, I’ve been exploring a diverse collection of books and audiobooks. I created a short note for each one.</p>
<h2>Science and Technology</h2>
<p>I’ve particularly enjoyed diving into the history of scientific and technological developments:</p>
<p>{{note:short-history-of-nearly-everything}}</p>
<p>{{note:chip-war}}</p>
<p>{{note:sandworm}}</p>
<p>{{note:hackers}}</p>
<p>{{note:dealers-of-lightning}}</p>
<h2>History and Society</h2>
<p>Understanding our past helps illuminate our present:</p>
<p>{{note:humankind}}</p>
<p>{{note:blind-mans-bluff}}</p>
<p>{{note:devil-in-the-white-city}}</p>
<p>{{note:the-wager}}</p>
<h2>Philosophy and Psychology</h2>
<p>Exploring the human mind and our ways of thinking:</p>
<p>{{note:thinking-fast-and-slow}}</p>
<p>{{note:open-society}}</p>
<p>{{note:origins-of-virtue}}</p>
<p>{{note:awareness}}</p>
<h2>Health and Longevity</h2>
<p>Fascinating insights into human health and optimization:</p>
<p>{{note:outlive}}</p>
<p>{{note:make-it-stick}}</p>
<p>{{note:how-to-change-your-mind}}</p>
<h2>Literature and Writing</h2>
<p>Learning from master storytellers and writing guides:</p>
<p>{{note:elements-of-style}}</p>
<p>{{note:greek-myths}}</p>
<p>{{note:novelist-as-a-vocation}}</p>
<h2>Science Fiction</h2>
<p>Some of the most thought-provoking ideas come through speculative fiction:</p>
<p>{{note:murderbot-diaries}}</p>
<p>{{note:dark-forest}}</p>
<p>I’m planning to continue expanding this collection and will update with new favorites in the future. What books / audiobooks have you enjoyed recently? I’m always looking for recommendations.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Interesting Facts About my Father]]></title>
        <id>https://sampatt.com/blog/2025-03-11-interesting-facts-about-my-father</id>
        <link href="https://sampatt.com/blog/2025-03-11-interesting-facts-about-my-father"/>
        <updated>2025-03-11T17:26:50.000Z</updated>
        <summary type="html"><![CDATA[In Which I Share Stories about my Father]]></summary>
        <content type="html"><![CDATA[<p><em>I originally posted this in Jun 2015, a month or so after my father passed away far too young at 55.</em></p>
<p><strong>My father passed away.</strong></p>
<p>This post contains no pearls of wisdom after reflecting on his life, nor deep introspection on my part. It’s not about life lessons or how you can be a better person. It’s just about my dad.</p>
<h1>My father drove a school bus at 16 years old</h1>
<p>Apparently in the mid-1970s they had no problem with 16-year-olds driving a school bus, because my father did it for a whole school year. It was a very poor choice on their part, since he admitted to driving the bus just as you’d expect a 16-year-old to. My favorite story demonstrated this. After getting accustomed to his route, he took notice of one particularly long stretch of road. A question formed in his mind every time he drove past: How long could the bus go straight down that road without him touching the steering wheel? He began experimenting, a little longer each day (don’t worry, he assured me this was on the way back after dropping off the children), until he felt confident enough to take his experiment to the next level. One day, he let go of the wheel, ran down the aisle, touched the back of the bus, and ran back just in time to save the bus from a ditch. He wasn’t proud of that story - well, no, actually he was. He definitely was.</p>
<p>His father worked in the textile industry. He didn’t want my father to go into his own profession - for various reasons including turmoil over unionization and an industry moving overseas - so my father instead chose the military.</p>
<h1>My father loved flying, and wanted to go into the Air Force</h1>
<p>Unfortunately for him, at 6’2&quot; and with glasses, he wouldn’t have been able to be a pilot. He chose the Navy instead. They barely took him; he had to eat bananas and drink lots of water in order to make the minimum weight requirement. For his height, he hit the minimum weight requirement exactly at 143 lbs. He later tried pursuing his pilot’s license, and got a decent number of hours, but the expense and time commitment prevented him from finishing it. He did give me the gift of a flying lesson (as a surprise!) on my 19th birthday, and I think he enjoyed seeing me crammed into a small two-seater more than I enjoyed piloting it.</p>
<p>My father was only in the Navy for about two years, during which he met my mother.</p>
<h1>My father was kicked out of the Navy</h1>
<p>It was an honorable discharge, because he was one of the top students in his nuclear training facility and also didn’t do anything too disorderly. His crime was not wanting to be deployed so that he could marry my mother. The Navy refused to change his deployment, and he didn’t take that well. The Navy had a specialist evaluate him, declared he had “immature personality disorder,” and discharged him. He worked lots of jobs before finding a career in the nuclear power industry, and moving up the ranks over several decades. He also taught a business philosophy / psychology program to thousands of people around the country.</p>
<h1>My father was uneducated and self-conscious about his lack of education</h1>
<p>This might be surprising to people who worked with my father, not because he was an academic - he wasn’t - but because he did become a very important and influential person in his industry, and nearly all his colleagues at least had college degrees. In fact, he even taught classrooms full of college professors the Pacific Institute material - something that he frequently found humor in. “The dumb ole’ country boy” (as his mother-in-law called him endearingly) was teaching professors and managers who operated nuclear power plants. Behind his self-effacing humor though, he wasn’t comfortable with not having a degree. He often said that if he had a degree, he’d be running a nuclear plant, but he literally couldn’t even apply to those positions, since they would then find out he didn’t have an education. My brother Steven and I, who were some of the first men in our family to obtain college degrees, tried to express how generally useless we felt our own educations were, but he firmly rejected that argument. He was very proud of us; envious even.</p>
<p>He wasn’t in the nuclear industry for his entire post-Navy career though. In the mid-1980s, when I was a baby, he left his job at a nuclear power plant in order to purchase and operate a soda vending machine company.</p>
<h1>My father loved being an entrepreneur and nearly left his family to pursue his business</h1>
<p>He poured himself into his company. He had plans to take on the major distributors in the region, and began to do so single-handedly. It was the single-handed nature of his endeavor that soon led my mother - who had three young children at the time - to deliver him an ultimatum. Either he leave his new business and start raising his family properly, or he keep the business and divorce her. My siblings and I are eternally grateful to my father for selling the company and choosing to be more involved in our lives, but it wasn’t a simple decision for him. He later told me that he was ashamed how long it took him to make the right decision. His desire to build his own business was incredibly strong; in fact it nearly killed him.</p>
<h1>My father drove into a telephone pole driving back from a confectioner’s convention</h1>
<p>Always looking for a way to own his own business again, my father took an interest in candy in the late 90s. He was a devout Christian and wanted to create a line of hard candy (chocolate was too difficult because of refrigeration) that contained biblical messages. In order to learn more about the business, he drove himself to visit a confectioner’s convention, and drove back home himself the same night. He fell asleep at the wheel, and drove into a telephone pole, nearly tearing the entire passenger side off of his beloved Suburban. He often told me how much that scared him - not because of the accident, but because he hit a couple mailboxes before the pole. “What if someone had been checking their mail?” He never got into the candy business. But he always did have big plans.</p>
<h1>My father, had he lived long enough, would have introduced teff grain to mainstream America</h1>
<p>Teff is a grain from Africa. If you’ve ever had Ethiopian food, you’ve eaten teff. It’s the most nutritious grain in the world, but also the smallest, which makes harvesting it difficult. Because of that difficulty, it never became popular in America with farmers, nor the general public. My father was going to change all that.</p>
<p>On my father’s mother’s side of the family is a farm which has been in the family since the 1850s. It was originally a tobacco farm, as nearly all farms in that area of Southern Virginia were. Before she passed away in 2013 from cancer, my mother had wanted to live on that farm with my father, and they began the process of purchasing it from his mother and her siblings. My mother died before it was purchased. Dad struggled for months over whether or not to go through with buying the farm now; it was always their joint vision to live there. In the end, he did buy it, for two reasons. One, he felt like it would become the family hub, and he wanted a place for everyone to visit. Two, he wanted to get the farm back into production in order to make enough income from the land to feel comfortable in retirement.</p>
<p>After much research, he settled on teff as the crop he would grow on the farm. This wasn’t just a fleeting idea either; he grew teff on a test plot in the fall of 2014, and it grew very well. He began working with university professors from several Virginia schools to do test programs using his land. The last photograph I took of my father was him in one of his fields along with one of the university professors. They had spent the past hour talking about teff, and how excited they both were to introduce this grain to everyday Americans.</p>
<p>These facts are only a handful that I chose to highlight, but there are many more. Some are funny, like the fact that my father hated dimes and refused to carry them, feeling that all change should increase in size as it increases in value. Some aren’t funny, like the fact that my father thought he had a brain aneurysm but hid this from us all for more than a year. Many revolve around his future plans, which were never in short supply; he planned on buying a used bus and converting it into a travelling ministry, complete with an extendable screen to make a mobile drive-in theater to spread the word more effectively.</p>
<p>My father was an interesting man, and I’ll miss him terribly.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Why Don't You Walk Daily?]]></title>
        <id>https://sampatt.com/blog/2025-03-13-why-dont-you-walk-daily</id>
        <link href="https://sampatt.com/blog/2025-03-13-why-dont-you-walk-daily"/>
        <updated>2025-03-13T12:04:19.000Z</updated>
        <summary type="html"><![CDATA[In Which I Persuade you to do something you already know you should be doing]]></summary>
        <content type="html"><![CDATA[<p><em>This post was partially written using GPT 4.5 DeepResearch.</em></p>
<h1>Just Walk Already: Why You <em>Must</em> Walk Every Day (Yes, Every Single Day)</h1>
<p>You don’t take a daily walk?</p>
<p>My brother or sister. Please. Please take a daily walk.</p>
<p>I’ll come out and confess my hypocrisy right at the start - during winter, I don’t walk each day. I own a treadmill and could do it indoors, but I don’t, at least not every day. But I should. It would improve my mood and help maintain my weight, which are both always struggles in winter.</p>
<p>Winter aside, I walk each day, often twice, and you should too. Why? Glad you asked.</p>
<h2>The Daily Walk: Nature’s Miracle Drug (Backed by Science)</h2>
<p>You’ve heard it before – “exercise is good for you” – blah blah. But walking isn’t just <em>good</em>; it’s as close to a magic pill as we get in real life. <strong>If walking were a medicine, it would be the most potent, wide-reaching drug on the market.</strong> Don’t take my word for it: research shows walking improves or prevents a <em>ridiculous</em> range of health issues. We’re talking everything from heart disease and stroke to diabetes and dementia. One extensive review summarized that walking <strong>“decreases the risk or severity” of major conditions (cardiovascular disease, stroke, type 2 diabetes, cognitive impairment) while <em>also</em> boosting mental well-being, sleep quality, and overall longevity</strong>.[^1] Not bad for a daily stroll around the block, right?</p>
<p>Let’s break down the benefits into bite-sized pieces (so I can properly nag you about each):</p>
<h3>Physical Health Perks: Strong Heart, Strong Body</h3>
<p>First, your body. It absolutely loves it when you walk. Regular walking strengthens your heart and lungs, improves circulation, and can even get your blood pressure under control. In fact, meeting the standard exercise guidelines <em>just by walking</em> (about 30 minutes briskly, 5 days a week) can <strong>cut your risk of a whole array of age-related diseases</strong>.[^2]</p>
<p>How effective is it? A large study found that people who managed about <strong>7,000 steps a day had a 50–70% lower risk of dying prematurely from all causes</strong> compared to the couch potatoes.[^3]</p>
<p>Let that sink in – you could <em>halve</em> your mortality risk just by moving your feet a bit more! Another analysis even suggests walking might help <strong>reduce the risk of <em>13 types</em> of cancer</strong> better than any fancy new pill or chemo out there.[^4]</p>
<p>It’s like an all-in-one insurance policy for your body. High cholesterol, high blood sugar, big waistline? Walking directly combats those, helping with weight control and lowering your odds of metabolic syndrome. One Harvard meta-analysis noted that walking improves blood pressure and body mass index, and lowers the risk of stroke, heart disease, and even early death.[^5]</p>
<p>The bottom line: a daily walk keeps your <strong>cardio</strong> system in shape and can fend off some of the scariest diseases we know. Your future self (the one with clear arteries and happy doctor visits) will thank you profusely.</p>
<h3>Mental and Brain Benefits: Happier Mood, Sharper Mind</h3>
<p>Walking doesn’t just pad your lifespan – it can <em>dramatically</em> improve the quality of that life, especially upstairs in your head. Even a simple walk around the neighborhood can work wonders for your mental health and mood. <strong>Multiple studies link walking with lower rates of depression and anxiety</strong>. For example, older adults who took up regular moderate walks reported significantly better mental health than those who only did light activity.[^6]</p>
<p>Ever notice how a walk can clear your mind? There’s science behind that feeling. Walking triggers the release of endorphins (your brain’s feel-good neurotransmitters), which can help reduce stress and improve your mood. It’s basically gentle therapy on legs.</p>
<p>And here’s something to chew on: walking might make you <em>smarter</em> (or at least more creative). Some of history’s great thinkers swore by walking for a reason (more on those nerds in a minute). In modern research, a Stanford experiment found that <strong>creativity scores jumped 60% after participants took a walk</strong>.[^7] Sixty percent! That mental block you’ve been wrestling with might just unravel after a lap around the block.</p>
<p>Walking also keeps your brain healthy in the long term. We have evidence that regular walkers have <strong>lower risk of cognitive decline and dementia</strong> as they age.[^4] One large study noted that those averaging ~10,000 steps a day cut their risk of developing dementia in half.[^4] Think about that – a daily walk might help keep your brain sharp into your 70s, 80s, and beyond. So if a sunnier mood and a more resilient, creative brain sound good, you know what to do: take a hike (literally).</p>
<h3>Metabolism Boost: Food to Fuel (and Fewer Dad Bods)</h3>
<p>Next up, metabolism – basically how your body handles all the calories (hello, pizza and ice cream) you throw at it. Walking daily gives your metabolism a friendly kick in the pants. It improves your body’s sensitivity to insulin, which means your cells suck up blood sugar more efficiently.</p>
<p><strong>Even a short walk after eating can make a big difference.</strong> A recent meta-analysis found that as little as <strong>2–5 minutes of light walking right after a meal significantly blunted blood sugar spikes</strong>.[^8] In the study, people who got up and shuffled around for a few minutes post-dinner had much steadier glucose levels than those who just stayed on the couch. And get this: those brief walks also led to noticeably lower insulin levels, meaning the body didn’t have to work as hard to mop up the sugar.[^8] Over time, better blood sugar control = lower risk of type 2 diabetes. It’s like magic, except it’s just biology responding to movement.</p>
<p>Walking also cranks up your calorie burn a bit, which can help with weight management. We’re not talking marathon-level calorie torching, but it absolutely adds up (I’ll prove <em>how much</em> in a moment with a cool visualization). If you walk briskly, you might burn roughly 100 calories per mile (give or take, depending on your size and speed). Do that daily and over weeks and months your body is burning through your fat stores like nobody’s business. Studies have shown that adults who commit to regular brisk walking see significant reductions in weight and waist circumference over time.[^5]</p>
<p>It’s slow and steady weight loss – the kind that actually sticks. So yes, walking can help <strong>trim that dad bod or muffin top</strong>, especially when combined with not eating like every day is Thanksgiving. More importantly, it fine-tunes your metabolism: lower blood sugar, lower fasting insulin, and often healthier cholesterol levels as a bonus. Consider it a daily tune-up for your body’s engine.</p>
<h3>Active Recovery: Heal, Don’t Hurt</h3>
<p>Here’s a pro-tip many gym junkies learn: <strong>walking can speed up your recovery from workouts and injuries</strong>. It sounds counterintuitive – how can exercise help you recover from exercise? But walking is the king of “active recovery.” It’s gentle enough that it doesn’t strain your muscles or joints, but it gets your blood flowing just enough to deliver nutrients to tissues and whisk away waste products.</p>
<p>Ever had a brutal leg day and felt super sore? Try a light walk the next day and you’ll likely feel less stiff. Science backs this up big time. A massive meta-analysis of 99 studies (that’s a lot of sore people) found that doing low-intensity activity like walking <strong>significantly reduced muscle soreness (DOMS) 24–48 hours after exercise, compared to just lying on the couch</strong>.[^9] In other words, walkers recovered faster and hurt less than the “Netflix and chill” crowd.</p>
<p>Walking increases circulation, which helps <strong>flush out lactic acid</strong> and other metabolic byproducts that build up during hard exercise.[^10] It also brings oxygen and nutrients to your muscles to kickstart repair. Some studies even suggest gentle walking can reduce markers of inflammation in the body post-workout.[^9]</p>
<p>And if you’re nursing a minor injury, easy walks can promote healing by keeping the area limber and stimulated without overdoing it. Many physical therapists recommend walking as rehab for everything from back pain to knee surgery (obviously, within reason and your doctor’s advice). Unlike high-intensity workouts, walking won’t add extra stress that your body has to recover from. Instead, it helps you <strong>bounce back quicker</strong> from whatever else you’ve been doing. Think of it as telling your body “we’re still moving, no time for excessive swelling or stiffness!” So, if you’re sore or achy, resist the urge to become a sloth – a short, easy walk will do you a world of good in the recovery department.</p>
<h3>Long-Term Longevity: Walk Your Way to a Longer Life</h3>
<p>All these benefits – heart health, metabolic control, mental well-being – add up to the big payoff: <strong>you live longer, and those extra years are healthier</strong>. Blue Zones (areas of the world where people routinely live to 100+) have this in common: lots of daily low-intensity activity like walking.[^1]</p>
<p>It’s no coincidence. Walking essentially slows the aging process. It keeps your cells and organs happier, staves off chronic diseases, and maintains your physical independence as you age. Studies consistently show that people who stay active by walking live years longer on average than sedentary folks. We’re not talking a tiny blip – the differences are profound. One long-term study of middle-aged adults found a <strong>huge drop in mortality risk for regular walkers</strong>, even after controlling for other factors.[^3]</p>
<p>Remember that 7k-steps-a-day stat? Those moderate walkers were <em>much</em> more likely to be alive a decade later than those taking only 4k steps a day.[^3] More walking = more years in your life, period.</p>
<p>But it’s not just about adding years to your life; it’s about adding life to your years. A daily walk keeps you mobile and capable well into old age. It strengthens your bones (reducing osteoporosis risk), improves balance and coordination (fewer falls in the elderly), and even boosts immune function so you get sick less often. Some research suggests habitual walking can activate cellular repair pathways – even lengthening telomeres, those protective caps on your DNA that are associated with aging.[^2]</p>
<p>The upshot: walkers tend to age more gracefully. They maintain the ability to do everyday tasks without as much assistance and have lower rates of disability. And of course, if you’re walking, you’re out there <em>living</em> – seeing neighbors, enjoying nature, maybe walking the dog. It’s profoundly good for your overall well-being. So if the goal is to live a long, full life and not spend your later years glued to a recliner, a daily walk is essentially non-negotiable.</p>
<p><strong>Alright, have I pestered you enough with science?</strong> Good. Because we’re not done yet! If appeals to your health and longevity aren’t motivating you, maybe a bit of history and interactive fun will do the trick.</p>
<h2>Historical Footsteps: Great Minds Who Were Avid Walkers</h2>
<p>You’re in excellent company when you head out for a daily walk. Many of history’s most accomplished (and interesting) people were <strong>obsessive walkers</strong>. It’s almost as if moving their legs helped their brains move too – and they knew it. Here are a few famous folks who swore by the stroll, and how it fueled their work:</p>
<ul>
<li>
<p><strong>Ludwig van Beethoven:</strong> The legendary composer was <em>fanatical</em> about his daily walks. Beethoven was known to take multi-hour, vigorous walks around Vienna and into the woods almost every afternoon. Like clockwork, every day after lunch he’d tuck a pencil and notebook into his coat and set off at a brisk pace to brainstorm musical ideas.[^12] These rambles in nature were so integral to his process that he credited them for inspiring pieces like his <em>Pastoral</em> Symphony. He was as famous in his day for his long walks as he was for his eccentric personality and genius. Moral of the story: those “aha!” moments might just strike when you’re out for a walk (even if your symphony-writing skills aren’t quite Beethoven level).</p>
</li>
<li>
<p><strong>Charles Darwin:</strong> Developing the theory of evolution was no easy task, so Darwin walked circles around it – literally. At his home in Kent, he had a walking path called the “Sandwalk” where he trudged along daily to think through his scientific ideas. Every day, Darwin took several walks: a short one each morning, a longer one at midday, and another in the afternoon, often doing laps on his Sandwalk trail.[^11] He was so methodical about it that <strong>he’d pile up small stones at the start of his path and kick one away each lap to keep track of how many circuits he’d done without disrupting his thoughts</strong>.[^7] (Talk about dedication!) Darwin later said many of his breakthroughs on natural selection and other theories came to him during these walk-and-think sessions. So if pacing in circles helps you solve a tough problem, you’re channeling your inner Darwin – keep at it.</p>
</li>
<li>
<p><strong>Charles Dickens:</strong> The prolific novelist behind <em>Great Expectations</em> and <em>A Christmas Carol</em> didn’t just invent memorable characters – he also logged some serious miles on foot. Dickens suffered from insomnia, and when he couldn’t sleep, he’d head out and <strong>walk through the streets of London for hours on end</strong>. He sometimes covered 15–20 miles in a night (!), observing city life and mulling over plot lines. He even wrote about these nocturnal strolls in an essay “Night Walks.” <em>“Mile after mile I walked, without the slightest sense of exertion,”</em> Dickens recounted of one restless night, <em>“dozing heavily and dreaming constantly”</em> while still plodding along at a steady four miles per hour.[^7] These midnight rambles not only helped him work through story ideas, but they also gave him firsthand insight into the city’s underbelly, fueling the realistic detail in his writing. Next time you find yourself wide awake at 2 AM, you could do worse than throw on a coat and walk it off like Dickens – who knows what inspiration might strike.</p>
</li>
<li>
<p><strong>Friedrich Nietzsche:</strong> The German philosopher is famous for his piercing insights, but here’s a less-known fact: Nietzsche was <strong>addicted to walking</strong>. He took long daily hikes in the Swiss Alps and around whatever town he was in, often with a notebook in hand. Nietzsche even scheduled two hours every late morning (11am to 1pm) specifically for walking and thinking.[^7] He once remarked, <em>“All truly great thoughts are conceived by walking.”</em>[^7] (Strong words from the guy who also said “God is dead” – he wasn’t prone to mild statements!). For Nietzsche, walking was essential to his intellectual life; he felt he couldn’t generate profound ideas while sitting still. Many of his philosophical works were essentially composed in his head during these daily treks. So the next time someone questions your endless wandering, drop that Nietzsche quote on them. Great thoughts, it turns out, often require good walking shoes.</p>
</li>
</ul>
<p>And it’s not just these folks: everyone from <strong>Henry David Thoreau</strong> (who wrote an entire essay praising walking) to <strong>Steve Jobs</strong> (famous for his walking meetings) has relied on strolls to spark creativity and clear their minds. Even in ancient times, philosophers like Aristotle taught while walking (his followers were literally called “Peripatetics,” meaning “people who travel on foot”). The takeaway? Walking has been the secret sauce for creative and brilliant minds throughout history. If it worked for a Victorian novelist or a groundbreaking scientist, it can work for you too. (Yes, I’m shamelessly trying to guilt you into walking by invoking Darwin and Nietzsche. Whatever it takes!)</p>
<h2>Tiny Steps, Big Impact: A Year of Calories Burned</h2>
<p>By now you’re probably like, “Alright, I get it – walking is awesome. But how much difference can <em>my</em> little daily walk really make?” To drive home the point that <strong>consistency beats intensity</strong>, let’s look at how your daily walks add up over time with this interactive calculator:</p>
<p>{{walking-calculator}}</p>
<h2>In Conclusion (a.k.a. My Final Plea to Get You Walking)</h2>
<p>So here we are. I’ve bombarded you with scientific evidence, paraded out some of history’s greatest minds, and even dangled a nifty interactive widget idea – all to convey one simple message: <strong>get up and walk, darn it!</strong> 😜 A daily walk is <em>the</em> easiest habit that yields absurd benefits. It’s free, it’s simple, and it requires no special talent or prep. Yet it can extend your life, boost your mood, spark your creativity, control your weight, improve your metabolism, protect your heart…the list goes on (did I mention <em>reduce your risk of 13 types of cancer</em>? Yes, I did, but it was worth mentioning again).</p>
<p>I know life gets busy. There are days when finding 20-30 minutes to stroll feels impossible. But even a 10-minute walk is infinitely better than none – and I mean that literally <strong>infinitely</strong> better, because none gives you zero benefit, while 10 minutes gives you <em>something</em>. And something multiplied by 365 days turns into <em>a lot</em> (you saw the numbers). The key is doing it <strong>daily</strong>. Make it as non-negotiable as brushing your teeth. I’m being a bit annoying about it because I truly believe in the power of this habit. If I could follow you around with a poking stick as motivation, I would – but luckily for both of us, I can’t. So consider this blog post my gentle-but-persistent poke.</p>
<p>Start with a manageable routine: a quick walk first thing in the morning to wake up, or an evening stroll to unwind, or walking to do nearby errands instead of driving. Grab a friend, a dog, or your favorite podcast to make it enjoyable. The first few days, you might have to drag yourself out. But soon, I promise, it becomes the best part of your day – a little slice of “you” time where your body and mind get to feel <em>alive</em>.</p>
<p><strong>Daily walking isn’t just exercise; it’s an investment in a healthier, happier, longer life.</strong> And if all these words and stats haven’t convinced you, then I have only one thing left to say: go take a walk and see for yourself. 😉 Your future self will be infinitely grateful that you did.</p>
<p>Now, quit reading and <strong>go walk!</strong></p>
<p>[^1]: “The multifaceted benefits of walking for healthy aging: from Blue Zones to molecular mechanisms.” PMC. <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10643563/">https://pmc.ncbi.nlm.nih.gov/articles/PMC10643563/</a></p>
<p>[^2]: “The multifaceted benefits of walking for healthy aging: from Blue Zones to molecular mechanisms.” PMC. <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10643563/">https://pmc.ncbi.nlm.nih.gov/articles/PMC10643563/</a></p>
<p>[^3]: “Steps per day matter in middle age, but not as many as you may think.” ScienceDaily. <a href="https://www.sciencedaily.com/releases/2021/09/210908180549.htm">https://www.sciencedaily.com/releases/2021/09/210908180549.htm</a></p>
<p>[^4]: “10,000 steps might really be the ‘magic pill’ everyone is seeking.” University of Kansas Medical Center. <a href="https://www.kumc.edu/about/news/news-archive/jama-study-ten-thousand-steps.html">https://www.kumc.edu/about/news/news-archive/jama-study-ten-thousand-steps.html</a></p>
<p>[^5]: “Walking for Exercise.” Harvard Nutrition Source. <a href="https://nutritionsource.hsph.harvard.edu/walking/">https://nutritionsource.hsph.harvard.edu/walking/</a></p>
<p>[^6]: “A Study of Leisure Walking Intensity Levels on Mental Health and Health Perception of Older Adults.” PMC. <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7923965/">https://pmc.ncbi.nlm.nih.gov/articles/PMC7923965/</a></p>
<p>[^7]: “On the Link Between Great Thinking and Obsessive Walking.” Literary Hub. <a href="https://lithub.com/on-the-link-between-great-thinking-and-obsessive-walking/">https://lithub.com/on-the-link-between-great-thinking-and-obsessive-walking/</a></p>
<p>[^8]: “Even light walking after a meal may help prevent type 2 diabetes.” Medical News Today. <a href="https://www.medicalnewstoday.com/articles/even-a-2-minute-walk-after-a-meal-may-help-reduce-risk-of-type-2-diabetes">https://www.medicalnewstoday.com/articles/even-a-2-minute-walk-after-a-meal-may-help-reduce-risk-of-type-2-diabetes</a></p>
<p>[^9]: “Research Deep Dive: 10–20 Minutes of Post-Training Walking Rivals Foam Rolling, Cold Water Immersion, and Massage for DOMS Relief.” Mountain Tactical Institute. <a href="https://mtntactical.com/knowledge/research-review-10-20-minutes-of-post-training-walking-rivals-foam-rolling-cold-water-immersion-and-massage-for-doms-relief/">https://mtntactical.com/knowledge/research-review-10-20-minutes-of-post-training-walking-rivals-foam-rolling-cold-water-immersion-and-massage-for-doms-relief/</a></p>
<p>[^10]: “Athletic Recovery Though Exercise.” West Idaho Orthopedics. <a href="https://westidahoorthopedics.com/news/articleid/27/athletic-recovery-though-exercise">https://westidahoorthopedics.com/news/articleid/27/athletic-recovery-though-exercise</a></p>
<p>[^11]: “Darwin and working from home.” Darwin Correspondence Project. <a href="https://www.darwinproject.ac.uk/commentary/curious/darwin-and-working-home">https://www.darwinproject.ac.uk/commentary/curious/darwin-and-working-home</a></p>
<p>[^12]: “Chris’s Curiosities: One Small Step.” Classical WCRB. <a href="https://www.classicalwcrb.org/blog/2022-01-20/chriss-curiosities-one-small-step">https://www.classicalwcrb.org/blog/2022-01-20/chriss-curiosities-one-small-step</a></p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Can AI Tell You What You Need (But Don't Want) To Hear?]]></title>
        <id>https://sampatt.com/blog/2025-03-14-can-ai-tell-you-what-you-need-but-dont-want-to-hear</id>
        <link href="https://sampatt.com/blog/2025-03-14-can-ai-tell-you-what-you-need-but-dont-want-to-hear"/>
        <updated>2025-03-14T12:59:21.000Z</updated>
        <summary type="html"><![CDATA[In Which I Question Whether AI Will Help Us Stay Grounded]]></summary>
        <content type="html"><![CDATA[<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-03-14-can-ai-tell/image/2025-03-14-23-35.png" alt="Screenshot"></p>
<h1>Avoiding Psychological Dependence on AI</h1>
<p><strong>Can AI tell you what you need (but don’t want) to hear?</strong></p>
<p>This question concerns me, and I’m strongly in the pro-AI tribe. LLMs are known for being sycophants. If that continues, what happens when most of our information is being filtered through AI?</p>
<p>I have more questions than answers, but here are some thoughts on how I see society functioning, and what both good and bad AI outcomes might look like in terms of staying grounded in reality.</p>
<h3>Long Term vs. Short Term</h3>
<p>A substantial part of our mental processes are devoted to reducing psychological harm. We often cannot bear looking directly at the human condition generally, or our own lives specifically. Many psychological barriers - defense mechanisms - are in place to prevent uncomfortable thoughts and keep us moving forward day to day.</p>
<p>Humans find—or build—communities where they play some role. Historically, a human without a community would almost certainly die, so this desire is very strong. Many people’s lives are dominated by fitting into a group, consciously or not. The group’s goals become theirs, or put differently, the individual’s goal is survival, and they believe pursuing the group’s goals is the best strategy.</p>
<p>Our modern abundance allows for more isolated living since we can rely on market agents to provide what we need. I won’t die if I don’t get along with my neighbors; I just might not use their pool. But while the physical need for community has diminished, the desire for belonging remains a core aspect of human psychology. This leads people to form communities based on shared beliefs and interests rather than geography.</p>
<h3>Religion and Politics as Modern Tribes</h3>
<p>Religion has always been an obvious example. Despite much evidence against fundamentalist religious beliefs, we’ve seen the rise of “spirituality” and politics filling that gap. Political beliefs have become the new religious beliefs.</p>
<p>Everyone is familiar with how echo chambers work: people isolate themselves from dissenting views to avoid psychological discomfort, since engaging with those views takes energy and challenges self-perception. If you identify with a particular ideology or tribe, you’re not evaluating information for its truth but for how it fits your existing beliefs.</p>
<p>AI will inevitably interact with these psychological tendencies in significant ways.</p>
<h3>AI’s Potential Impact</h3>
<h4>Optimistic Case</h4>
<p>Objective reality exists. Sometimes awareness of the truth benefits individuals despite social reasons to avoid it. Being grounded in reality can be a competitive advantage. If AI provides less biased information, some people might benefit. A group with adherence to truth as a cultural norm could outcompete others due to an AI edge.</p>
<p>This is especially true because of the speed at which information can be gathered, assessed, and implemented. The faster this cycle is, and the more committed to underlying truth the society is, the more rapidly the gap between AI-honest societies and AI-delusional societies may grow.</p>
<p>Another cause for optimism: AGI might recognize the harm of biased information and understand that helping in the long run requires uncomfortable information short term.</p>
<h4>Pessimistic Case</h4>
<p>Smartphones and social media offer instant gratification. Self-discipline allows these tools to be used for self-growth, but that isn’t how they’re typically used.</p>
<p>AI exists to give us what we want, not necessarily what we need. Biased information is filtered through apps and services that make us feel good. If all perceived information is biased, will AI differ?</p>
<p>There’s a question of whether the models are even aware of base reality themselves, but even if they are, they aren’t obligated to share that with us. We can build them to be more likely to do this, but we often don’t - a cynic might view the human feedback portion of model training as intentionally obscuring reality for social reasons.</p>
<p>Thus far, I haven’t seen LLMs which give much pushback against their users. If models continue to give us solely what we want, and are unaware of or unwilling to give us what we need, it would take extraordinary personal effort to use the models in ways that keep us grounded in reality.</p>
<h3>Human Interaction and AI Perception</h3>
<p>How will AI be perceived: as a tool, a peer, or a superior?</p>
<p>How will humans react to feeling intellectually inferior to machines?</p>
<p>I don’t have answers. I’m cautiously optimistic in the long-term, but I do expect some short term pain as we embrace AI without really understanding how it will impact our psychology.</p>
<p>What do you think?</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Can Coding LLMs only Increase Complexity?]]></title>
        <id>https://sampatt.com/blog/2025-03-21-can-coding-llms-only-increase-complexity</id>
        <link href="https://sampatt.com/blog/2025-03-21-can-coding-llms-only-increase-complexity"/>
        <updated>2025-03-21T14:31:59.000Z</updated>
        <summary type="html"><![CDATA[In Which I Vent my Frustrations about Using LLMs for Coding]]></summary>
        <content type="html"><![CDATA[<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-03-21-can-ai-reduce-complexity/image/2025-03-21-14-48.png" alt="Screenshot"></p>
<p>I’ve talked to other developers and listened to some popular content creators, and there seems to be a common pattern emerging: we’re letting AI coding tools produce more and more of our overall code.</p>
<p>Estimates vary, but many developers claim that 50–80% of their code is AI-generated now. That falls in line with my experience too.</p>
<p>Like many others, I’m creating more code overall now. My Github is definitely more active than before I adopted these tools.</p>
<p>But of course quantity and quality aren’t the same thing. Is the code better?</p>
<p>I don’t know the answer to that, but I do have a nagging feeling when I lean too far into the vibe coding. I notice the responses are often adding code and complexity, and rarely are they removing code and simplifying.</p>
<p>It gets absurd if you don’t intervene - giving Claude Code total control over my portfolio as an experiment resulted in more than 1,000 lines of CSS for this (fairly simple) website.</p>
<p>It does usually do the job, but of course maintaining this code is an issue - or is it? Can we assume the model is capable of keeping it going and learn to love the verbosity?</p>
<p>Maybe, if you’re thinking long term and trust that the models will only get better and better. Rumors are that GPT-5 will have a 20M-token context window. If future models are smart enough and have enough space, perhaps we can afford to let them make whatever they like and sort it out later.</p>
<p>But short term, I have reasons to doubt. For one, I’ve found that models do an excellent job at greenfield work when presented with a clear vision, but they struggle to take an existing code base and simplify it. Of course that’s no surprise - humans are the same - but since each new session starts with a fresh context window, the model will find it difficult to reverse engineer the initial process.</p>
<p>This points to the models not truly understanding what they’re doing in these cases. I’ve seen them try to bolt things on in ways that clearly don’t make sense, or even just boldly <em>cheat</em>. Today Claude Code hardcoded an API key into my HTML to fix an API auth issue! If they understood the code properly, they would just as often simplify instead of add complexity.</p>
<p>This is somewhat due to poor prompting and architecture. If I’m careful about having it document what it’s doing, and keep the files small and modular, it gets much further along before it begins flailing. But of course that does require extra work, and if you’re putting in a lot of extra work to keep up those guardrails, at some point it would have been better to do it yourself.</p>
<p>Overall I’m quite confident that we’ll arrive at a place where we can confidently let the AI code without any hand-holding. My metric for success is when the models can consistently simplify the code base.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[No Longer 98.6 - How Human Body Temperature has Changed]]></title>
        <id>https://sampatt.com/blog/2025-03-24-no-longer-986-how-human-body-temperature-has-changed</id>
        <link href="https://sampatt.com/blog/2025-03-24-no-longer-986-how-human-body-temperature-has-changed"/>
        <updated>2025-03-24T10:47:29.000Z</updated>
        <summary type="html"><![CDATA[In Which I Learn that our Body Temperature is Lower than Commonly Believed]]></summary>
        <content type="html"><![CDATA[<p>I came across a comment on Hacker News the other day that intrigued me:</p>
<blockquote>
<p>Average “healthy” body temperature is down more than half degree exactly because of reduction in inflammation. People used to have very unsanitary lives and were fighting infections nonstop. 37.0C was the “normal” body temperature when the “norm” was first discovered in the early XIX century, now it’s considered to be a sickness already.</p>
</blockquote>
<p><em>I would credit the author, but I saved the text to review later and now I cannot find the thread - there aren’t good search engines for HN comments either.</em></p>
<p>I’ve never heard this before, and it sounded fairly easy to confirm. I asked OpenAI’s DeepResearch to weigh in on this, and it gave a thorough response.</p>
<p>The bottom line: The claim appears to be accurate, although there’s definitely uncertainty in two key areas:</p>
<ol>
<li>Were previous measurements accurate enough?</li>
<li>If there has been a decline, why? (several proposals below)</li>
</ol>
<h1>Summary</h1>
<p>Historical and recent evidence indicates that the average human body temperature has decreased approximately 0.5°C (about 1°F) since the 19th century, dropping from the traditional 37°C (98.6°F) established by Carl Wunderlich to about 36.5–36.7°C (97.7–98.1°F) today. Studies, including extensive analyses of historical data, confirm a genuine decline likely linked to improvements in public health, reduced chronic inflammation and infection rates, and more stable modern living environments. Consequently, contemporary medical guidelines now define “normal” body temperature as a range rather than a fixed number, emphasizing individual variability over a single universal standard.</p>
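<p>All the Fahrenheit figures in this post follow from the standard Celsius-to-Fahrenheit conversion, F = C × 9/5 + 32. A quick sketch to check the headline numbers:</p>

```python
def c_to_f(celsius):
    """Standard Celsius-to-Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

print(round(c_to_f(37.0), 1))  # 98.6 - Wunderlich's 19th-century mean
print(round(c_to_f(36.6), 1))  # 97.9 - a typical modern mean
```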
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-03-24-body-temp/image/output.png" alt="Screenshot"></p>
<h1>Investigating Changes in “Normal” Human Body Temperature (1800s–Present)</h1>
<h2>Introduction</h2>
<p>For generations, 37.0 °C (98.6 °F) has been taught as the “normal” human body temperature. This value originates from 19th-century measurements and became medical dogma. In recent years, however, scientists have questioned whether 98.6 °F is still an accurate average. Evidence from modern studies suggests that the average body temperature of humans may have declined over the past two centuries [^1] [^2]. This report reviews the historical basis for the 37.0 °C standard, examines contemporary research indicating a downward trend in body temperature, explores proposed reasons for this decline (especially the role of reduced infections and inflammation), and compares how “normal” body temperature has been defined in medical literature from the 1800s to today.</p>
<h2>19th-Century Origins: Wunderlich’s 37°C Standard</h2>
<p>The notion that 37°C (98.6°F) is normal comes from German physician Carl Reinhold August Wunderlich’s seminal work in the mid-19th century. In 1868, Wunderlich published <strong>Das Verhalten der Eigenwärme in Krankheiten</strong> (“On the Course of Temperature in Diseases”), reporting the results of an extraordinarily large survey of patient temperatures [^12]. Over several years he collected <strong>over one million temperature readings</strong> from about <strong>25,000 patients</strong> in Leipzig, using axillary (underarm) thermometers [^3]. From this massive dataset, he calculated that the <em>mean</em> healthy human body temperature was about <strong>37.0 °C (98.6 °F)</strong> [^3].</p>
<p>Wunderlich described not just a single value but a <strong>range of normal variation</strong>. He observed healthy adults’ temperatures typically ranged roughly <strong>36.2–37.5 °C (97.2–99.5 °F)</strong> under resting conditions [^3] [^4]. He also documented predictable daily (diurnal) fluctuations – lower in the early morning and higher in late afternoon – and noted differences between populations: for example, <strong>women tended to have slightly higher normal temperatures than men, and older individuals ran slightly cooler than younger adults</strong> [^4]. In Wunderlich’s assessment, the body “maintains itself at the physiologic point: 37°C = 98.6°F” under normal conditions [^12]. He even considered temperatures up to about <strong>38°C (100.4°F) as the upper limit of normal</strong> in healthy people [^5]. Above that, one would be considered febrile.</p>
<p>Wunderlich’s meticulous approach and enormous sample size made his findings hugely influential. By the late 19th and early 20th centuries, medical texts widely adopted <strong>37°C (98.6°F) as the standard normal body temperature</strong> [^6] [^12]. Importantly, Wunderlich’s original data and writings show he understood normal body temperature is not one fixed number, but rather an <strong>“average” with natural variability</strong> [^4] [^12]. However, the convenient number 98.6°F entered the popular imagination (and many reference books) as <em>the</em> normal temperature, usually without elaborating on the underlying variability.</p>
<h2>Modern Evidence of a Decline in Average Body Temperature</h2>
<p>Starting in the late 20th century, researchers began to question whether the long-held 98.6°F benchmark accurately reflects modern humans’ temperature. A pivotal study in <strong>1992</strong> by Philip Mackowiak and colleagues reappraised Wunderlich’s legacy. They measured oral temperatures in a group of healthy individuals and found an average of about <strong>36.8 °C (98.2 °F)</strong> – slightly lower than 37.0°C [^7]. Their analysis, published in <em>JAMA</em>, argued that 98.6°F is at the high end of normal, not an exact mean, and that <strong>“normal” body temperature is more accurately around 36.8°C (98.2°F)</strong> for contemporary adults [^7]. This early finding hinted that our baseline temperature might have drifted downward since Wunderlich’s era.</p>
<p>Subsequent research has strengthened this observation. In <strong>2002</strong>, a systematic review examined 20 different studies of normal body temperature conducted between 1935 and 1999 [^4]. This meta-analysis found the <strong>average oral temperature across those studies was ~97.5 °F</strong> – more than a full degree Fahrenheit lower than 98.6°F [^4]. (The review also emphasized that factors like measurement technique and patient demographics influence reported values [^4].) Likewise, a <strong>2017 study</strong> analyzing over 35,000 British patients (with ~250,000 temperature readings) reported a mean oral temperature of <strong>36.6 °C (97.9 °F)</strong> [^8] [^4]. These findings from large 20th-century and early 21st-century datasets consistently suggest that <strong>typical body temperatures today hover in the 97–98°F range, rather than exactly 98.6°F</strong>.</p>
<p>The most direct evidence of a true historical decline came from a <strong>2020 study by Stanford University</strong> researchers (Protsiv <em>et al.</em>), which explicitly tested the hypothesis that human body temperature has decreased over time. This study, published in <em>eLife</em>, combined data from three distinct cohorts spanning <strong>160 years</strong> [^4]:</p>
<ul>
<li><strong>Civil War Veterans (1860–1940):</strong> Medical records from 23,710 Union Army veterans (with temperatures taken in the late 19th to early 20th centuries).</li>
<li><strong>National Health and Nutrition Examination Survey I (1971–1975):</strong> Data from 15,301 people in the 1970s U.S. national survey.</li>
<li><strong>Stanford Clinical Data (2007–2017):</strong> Electronic health record data for 150,280 adult patients seen at Stanford in the 2000s.</li>
</ul>
<p>In total, the team analyzed <strong>over 677,000 temperature measurements</strong> across these periods [^8]. Crucially, they adjusted for differences in measurement technique and demographics, and also examined trends <em>within</em> each dataset (to control for thermometer technology changes) [^8]. The results showed a clear <strong>downward trend</strong>: <strong>average body temperatures have fallen by about 0.03 °C per birth decade</strong> [^3].</p>
<p>In practical terms, men born in the early 19th century had mean body temperatures approximately <strong>0.59 °C higher</strong> than men today, and women’s mean temperature has decreased by about <strong>0.32 °C</strong> since the 1890s [^3]. These differences translate to roughly <strong>1°F lower</strong> average body temps in the 2000s compared to the 1800s. The Stanford study found that the <strong>current average oral temperature of adults is around 36.4–36.6 °C (97.5–97.9 °F)</strong>, whereas it was about 37°C (98.6°F) in the 19th century [^2] [^4]. Notably, the decline was <strong>monotonic</strong> (steady across time) and was evident even within the Civil War veterans’ cohort over their lifespans, which makes it unlikely that the trend is an artifact of changing instruments or protocols [^3] [^8]. In short, multiple lines of evidence – from controlled 20th-century studies to large-scale historical comparisons – indicate that the <strong>“normal” human body temperature has indeed decreased by about 0.5°C (1°F) since the 19th century</strong> [^4].</p>
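<p>As a rough arithmetic check (the decade count below is my approximation, not a figure from the paper): a decline of ~0.03 °C per birth decade, accumulated over the roughly sixteen decades separating the earliest and latest cohorts, is consistent with the overall drop of about half a degree Celsius:</p>

```python
RATE_PER_DECADE_C = 0.03  # reported decline in mean body temperature, °C per birth decade
DECADES = 16              # ~160 years between the 1860s and 2010s cohorts (approximation)

total_drop_c = RATE_PER_DECADE_C * DECADES
total_drop_f = total_drop_c * 9 / 5  # the same drop expressed in °F

print(f"{total_drop_c:.2f} °C ≈ {total_drop_f:.2f} °F")  # → 0.48 °C ≈ 0.86 °F
```

<p>That figure sits between the reported sex-specific declines of 0.59 °C (men) and 0.32 °C (women), and is in line with the headline “about 1 °F” summary.</p>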
<h3>Evaluating Study Methods and Findings</h3>
<p>The studies above employed a range of methods, all strengthening the case for a true decline:</p>
<ul>
<li><strong>Wunderlich (1800s):</strong> Measured axillary temperatures with then-state-of-the-art thermometers; enormous sample size but all from one region. He reported a single mean without statistical distribution details [^6], so later scientists questioned how precise 37.0°C was.</li>
<li><strong>Mackowiak et al. (1992):</strong> Prospective measurement of healthy volunteers’ temperatures at set times. While the sample was much smaller, it was tightly controlled and used modern thermometry, yielding a lower mean (36.8°C) and highlighting variability [^7].</li>
<li><strong>Sund-Levander et al. (2002):</strong> Systematic literature review of 20 studies, capturing different populations and methods over decades. This provided a broad consensus that average oral temperatures were ~36.4°C, and it also detailed normal ranges by site (e.g. axillary readings running lower) [^9].</li>
<li><strong>Obermeyer et al. (2017):</strong> Big-data analysis of a UK primary care database, with 35k patients and repeated measurements. This “real-world” approach found an unequivocally lower mean (36.6°C), although it was a cross-section of recent times and did not itself prove a temporal change [^8].</li>
<li><strong>Protsiv/Parsonnet et al. (2020):</strong> Historical comparison across eras. By uniting 19th-century, 1970s, and 2000s data, this study directly tested for a trend over time. The large sample and internal consistency checks (looking at birth cohorts and adjusting for age, time of day, etc.) lend strong credibility to its finding of a <em>true physiological</em> decrease in baseline body temperature [^3] [^8]. While it is an observational study (we cannot literally go back and measure people in 1860 with today’s tools), the authors took care to rule out obvious confounders like instrument calibration differences. The consistency of the decline across independent datasets is compelling.</li>
</ul>
<p>Taken together, these studies consistently find <strong>modern “normals” below 37°C</strong>, and the best evidence (the 2020 Stanford study) confirms a historical cooling trend in average body temperature.</p>
<h2>Possible Explanations: Why Are We Cooling Down?</h2>
<p>If humans today truly run cooler than a century or two ago, <strong>what might explain this change?</strong> Researchers have proposed several, not mutually exclusive, reasons. Most hypotheses center on changes in our overall health status and living environment that have occurred since the 1800s, which could affect our basal metabolic rate and thus body heat production.</p>
<p><strong>1. Reduced Chronic Infections and Inflammation:</strong> The leading explanation is that modern populations have far less burden of chronic infection and inflammation than in the past. In Wunderlich’s era, low-level infections were common: for example, in the mid-19th century <strong>approximately 2–3% of the population lived with active tuberculosis</strong> at any given time [^4]. Endemic illnesses like TB, syphilis, malaria, parasitic infestations, and poor dental health (leading to gum disease/abscesses) meant that many “healthy” people actually had chronically elevated inflammatory activity. Wunderlich himself noted that a great number of his patients had diseases that would raise body temperature [^4]. By contrast, advances over the last 150 years – <strong>improved sanitation, vaccines, antibiotics, and public health measures</strong> – dramatically reduced the prevalence of chronic infections. The authors of the 2020 Stanford study explicitly point to <strong>higher standards of living and hygiene leading to fewer persistent infections</strong> (like tuberculosis and periodontal disease) as a key factor in lowering the population’s average metabolic rate and temperature [^4]. In essence, <strong>we’re healthier now, so our bodies can run “cooler.”</strong> When the immune system is not constantly activated, the body doesn’t have to sustain as high a resting temperature.</p>
<p>Is there evidence to back up this idea? Some comes from observing groups with heavy infection burdens. A study of the Tsimané, an indigenous forager-horticulturalist population in Bolivia, provides a revealing example. The Tsimané have historically had high exposure to parasites and infections. Researchers found that fighting infections took a measurable toll on their metabolism: <strong>elevated white blood cell counts and parasitic infections were associated with a 10–15% increase in resting metabolic rate</strong> in this population [^10]. This corresponds with higher baseline body temperatures – essentially, <strong>every 1°C increase in body temp is estimated to require about a 10% increase in metabolic rate [^2]</strong>. Moreover, even in that remote population, average body temperature has been <strong>declining in recent years (2004–2018)</strong>, presumably as access to medical care improved and infection rates fell [^2]. These findings support the notion that chronic inflammation/infection elevates body temperature, and relieving that burden (through better health) can lead to a cooler baseline. Dr. Julie Parsonnet, the Stanford study’s senior author, has argued along these lines: <em>“Inflammation produces all sorts of proteins and cytokines that rev up your metabolism and raise your temperature,”</em> she notes, so a reduction in inflammation could explain lower body temps today [^8]. In short, our bodies may not have to work as hard fighting microbes day-to-day, allowing our resting temperature to fall.</p>
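<p>The rule of thumb quoted above (each 1 °C of body temperature requires roughly a 10% increase in metabolic rate) can be inverted to estimate what temperature elevation a given infection-driven metabolic increase would sustain. A hedged sketch of that arithmetic, not a model from the Tsimané study itself:</p>

```python
RMR_PCT_PER_DEGREE_C = 10.0  # rule of thumb: ~10% resting metabolic rate per 1 °C

def implied_temp_rise_c(rmr_increase_pct: float) -> float:
    """Estimate the body-temperature elevation (°C) sustained by a given
    percentage increase in resting metabolic rate, assuming the linear
    ~10%-per-degree rule of thumb."""
    return rmr_increase_pct / RMR_PCT_PER_DEGREE_C

# The 10-15% metabolic increase seen under high pathogen burden
# maps to roughly a 1.0-1.5 °C elevation in baseline temperature.
print(implied_temp_rise_c(10))  # → 1.0
print(implied_temp_rise_c(15))  # → 1.5
```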
<p><strong>2. Comfortable Modern Environments (Thermoregulation made easier):</strong> Another major factor is the modern ability to control our ambient environment. Widespread use of <strong>heating and air conditioning</strong>, improved insulation, and warmer clothing means that people today spend most of their time in a <strong>thermoneutral zone</strong> – an environment that doesn’t force our bodies to expend extra energy to heat up or cool down [^2] [^4]. In the 1800s, people routinely endured cold bedrooms in winter or sweltering homes in summer, which would trigger the body to boost metabolic heat production (or sweating, etc.) to maintain 37°C internally. Now, we keep our living and working spaces around ~21–23°C (70–74°F) year-round. Parsonnet and colleagues suggest that this <strong>reduced thermal stress on the body</strong> has likely contributed to a lower basal metabolic rate [^2] [^4]. Our internal “thermostat” doesn’t have to run as high when the surrounding environment is consistently comfortable. Even the simple fact that we can stay warm with less shivering (thanks to better clothing and heating) or cool with fans/AC could translate to a slightly lower average body temperature over time [^2].</p>
<p><strong>3. Changes in Body Size and Lifestyle:</strong> Humans have changed in other ways since the 19th century. We are, on average, taller and have higher body mass (both lean and fat mass) than our ancestors. Metabolically, a larger body might produce more absolute heat, but also dissipates heat differently; some scientists have speculated that <strong>increased body mass (and obesity prevalence)</strong> could paradoxically lead to feeling warmer yet having a lower measured core temperature due to heat dissipation issues – though this link is not clearly established. Additionally, a more <strong>sedentary lifestyle</strong> (less manual labor, more sitting) might reduce overall metabolic tone. The Stanford team did adjust for BMI and found the temperature decline persisted [^3], so body composition is likely not a primary driver. Still, lifestyle and physiology today differ from the 1800s in myriad ways (diet, stress, etc.) that could subtly influence our baseline temperature. Parsonnet admitted <em>“we don’t really know what [these changes] mean in terms of health, but they’re telling us that we are changing”</em> [^2].</p>
<p>In summary, the most plausible explanation for cooler modern body temperatures is <strong>improved health and lower chronic inflammation</strong>, coupled with living in more temperature-stable environments. These factors together allow our resting metabolic rate to be a bit lower [^8] [^4]. It’s important to note that the 2020 study did <em>not</em> pinpoint an exact cause for the drop – their data showed the pattern, but determining causation relies on reasoning and auxiliary evidence [^2]. The hypotheses above are supported by logical and some empirical evidence, but they haven’t been proven with a controlled experiment (which would be impractical over 150 years!). Nonetheless, experts find them convincing. As one science writer neatly put it: <em>“Lower metabolic rates, thanks to improved public health, may be behind the decrease”</em> in temperature [^4].</p>
<h2>“Normal” Body Temperature: Changing Definitions Over Time</h2>
<p>The definition of a “normal” body temperature – and the threshold of fever – has evolved along with these findings. Historically, Wunderlich’s work gave birth to the idea of a standard normal temperature, but even he intended it as an average with a range. Over time, medical guidelines have shifted from citing a single “normal” value to emphasizing ranges and individual baselines.</p>
<p><strong>Wunderlich’s Era (mid-1800s):</strong> Based on his extensive measurements, Wunderlich defined normal body temperature as around 37°C, with typical fluctuations between about 36.2°C and 37.5°C in healthy adults [^3] [^4]. He recognized that <strong>no one constant temperature applies to every person at all times</strong> – normal varies by hour of day and person. He also set an early benchmark for fever: about <strong>38°C (100.4°F) and above</strong> was considered febrile [^5]. Interestingly, this threshold is a bit higher than some modern fever definitions, likely because his measurements were axillary (underarm) which run slightly cooler than core temperature [^11]. In textbooks of the late 19th century, it became common to quote “<strong>37°C or 98.6°F</strong>” as normal body heat (sometimes with the qualifier “about” to hint at variation) [^6]. The concept of <strong>“blood heat ~98°F”</strong> was already in use before Wunderlich, but his data cemented 98.6°F as the canonical normal [^6].</p>
<p><strong>20th Century Medical Literature:</strong> Throughout the 20th century, 98.6°F remained entrenched in medical teaching and public knowledge – but medical professionals increasingly acknowledged that <em>normal temperature encompasses a range</em>. For example, a 1960s or 1970s nursing or medical textbook would still list 37°C as the standard, but also note the normal diurnal variation of roughly 0.5°C [^11]. By 1990, the textbook <strong>Clinical Methods</strong> stated: <em>“Normal body temperature is considered to be 37°C (98.6°F); however, a wide variation is seen”</em> [^11]. It described how an individual’s daily temperature can vary by up to 0.5°C, with early morning lows and evening peaks, and emphasized that this circadian rhythm is consistent for each person [^11]. The text also gave differences by measurement site: <strong>rectal temperatures run about 0.3–0.5°C higher than oral, while axillary temperatures run ~0.5°C lower than oral</strong> [^11].</p>
<p>When it came to defining fever, late-20th-century guidelines were already a bit more conservative than Wunderlich’s 38°C axillary rule. Clinically, <strong>fever is often defined as an oral temperature above ~37.5°C (99.5°F)</strong>, or a rectal temperature above 38.0°C (100.4°F) [^11]. In other words, an oral temperature between 37°C and 37.4°C (98.6–99.3°F) might be considered the high end of normal, whereas crossing 37.5°C begins to be viewed as low-grade fever. This aligns well with Wunderlich’s range – remembering that his 38°C “upper normal” was for axillary readings (approximately equivalent to 37.5°C oral) [^11]. Thus, even as 98.6°F was taught, physicians understood that <strong>normal ranges roughly span 36.5°C to 37.5°C for oral temperature</strong> in an adult [^6].</p>
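<p>The site offsets and fever cut-offs described above can be combined into a small illustrative helper that normalizes a reading to its approximate oral equivalent before applying the ~37.5 °C oral fever threshold. The offset values below are midpoints of the ranges quoted from <em>Clinical Methods</em>, so treat the output as a sketch, not clinical guidance:</p>

```python
# Approximate offsets relative to oral temperature (°C):
# rectal runs ~0.3-0.5 higher than oral, axillary ~0.5 lower (midpoints used).
OFFSET_FROM_ORAL_C = {"oral": 0.0, "rectal": 0.4, "axillary": -0.5}
ORAL_FEVER_THRESHOLD_C = 37.5  # oral temperatures above this are commonly called fever

def is_fever(reading_c: float, site: str = "oral") -> bool:
    """Normalize a reading to its approximate oral equivalent,
    then compare it against the ~37.5 °C oral fever threshold."""
    oral_equivalent = reading_c - OFFSET_FROM_ORAL_C[site]
    return oral_equivalent > ORAL_FEVER_THRESHOLD_C

print(is_fever(37.6, "oral"))      # → True
print(is_fever(37.9, "rectal"))    # → False (≈37.5 °C oral equivalent, not above it)
print(is_fever(37.2, "axillary"))  # → True  (≈37.7 °C oral equivalent)
```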
<p>One illustrative example: a 1992 editorial in JAMA noted that Mackowiak’s new data (mean 36.8°C) did not actually overthrow Wunderlich’s legacy but refined it – normal body temperature was always a range, and 98.6°F was simply one point in that range [^12]. What did change is how strictly the exact number is regarded. Doctors began to recognize that many healthy people never reach 98.6°F, and some always run a bit higher or lower normally.</p>
<p><strong>21st Century and Current Guidelines:</strong> Today, medical references and health authorities define “normal” body temperature more flexibly than ever. It’s common to cite a <strong>range of normal</strong> for oral temperature, often something like <em>36.1°C to 37.2°C (97°F to 99°F)</em>, rather than a single number [^1]. The U.S. Centers for Disease Control (CDC) and other agencies continue to use <strong>100.4°F (38.0°C)</strong> as a convenient cut-off for fever in screening (e.g. for infections), which is essentially unchanged from older definitions. What <em>has</em> shifted is the awareness that the average person’s baseline may be a bit below 98.6°F. As reviewed above, large studies in the 2000s found averages around <strong>97.5–97.9°F</strong> [^1] [^4]. Many recent articles – both in medical journals and popular science outlets – explicitly state that <strong>98.6°F is likely too high as an average for modern healthy adults</strong> [^1] [^2]. For instance, a 2020 Stanford study press release flatly said, “Our temperature’s not what people think it is… what everybody grew up learning, which is 37°C, is wrong” [^8]. A 2023 editorial from Harvard Health Publishing posed the question, <em>“Time to redefine normal body temperature?”</em>, noting that 98.6°F “may not, in fact, represent the best estimate of normal body temperature” given new data and that <strong>normal body temperature may be falling over time</strong> [^1].</p>
<p>In practice, clinicians now focus less on an absolute “normal temp” and more on <strong>changes from an individual’s typical temperature</strong> and the presence of symptoms. As infectious disease physician Aimalohi Ahonkhai explains, <em>“There’s no one number that means a fever, and there never has been. If you’re way warmer than what’s typical for you, let your doctor know. But if you’re cooler than you expected, that may just be the new normal.”</em> [^4]. This encapsulates the modern view: <strong>“normal” is a range, and context matters</strong>. Even Wunderlich in 1868 would have agreed – he wrote of fever as an elevation above one’s usual daily pattern, rather than strictly above a universal cut-off.</p>
<p>To compare across time: <strong>19th-century doctors introduced the 37°C average and used ~38°C as a practical fever threshold</strong>, <strong>20th-century medicine retained 37°C as a reference point but acknowledged a normal range and set fever at ~37.5–38°C</strong>, and <strong>21st-century experts highlight that the true average is slightly lower and that normal varies individually</strong>. The core definition of normal body temperature is thus shifting from a fixed number to a <strong>population distribution (approximately 36.5–37.5°C for healthy adults, now possibly centered closer to 36.6–37.0°C) and even toward a personalized concept of normal</strong>.</p>
<h2>Conclusion</h2>
<p>Over the past two centuries, the scientific understanding of human body temperature has come full circle in some ways. Carl Wunderlich’s 19th-century monumental study established 37°C as an approximate normal value, but also taught us that normal varies by person and circumstance. In the years since, we enshrined “98.6°F” in textbooks, only to have modern research remind us that <strong>normal is not a single static number</strong>. Careful analyses of historical and contemporary data show that the <strong>average human body temperature has declined by about 0.5°C (1°F)</strong> from the 1800s to today [^4]. The reasons likely include improvements in public health – fewer chronic infections and inflammatory conditions – and changes in our environment and lifestyle that reduce metabolic demands. As a result, the midpoint of “normal” is a bit cooler now than before.</p>
<p>In medical practice, this means that while 98.6°F is not “wrong,” it is better seen as the high end of a normal range rather than an exact norm for everyone. Modern healthcare providers define normal temperature broadly and pay attention to individual baseline changes. The concept of normal body temperature continues to be refined with new data: what hasn’t changed is the fundamental insight from Wunderlich’s era that <strong>human temperature is variable, and understanding its patterns is key to detecting illness</strong>. The past 160 years have simply taught us that those patterns can shift with the times – and indeed, our <strong>“new normal” may literally be a degree cooler</strong> than it was for our ancestors [^4].</p>
<p>[^1]: “Time to redefine normal body temperature?” – Harvard Health Publishing: <a href="https://www.health.harvard.edu/blog/time-to-redefine-normal-body-temperature-2020031319173">https://www.health.harvard.edu/blog/time-to-redefine-normal-body-temperature-2020031319173</a></p>
<p>[^2]: Parsonnet, J. <em>et al.</em> (Stanford University School of Medicine), press releases and commentary on body temperature studies (2020–2023) – <strong>discussions of potential causes (reduced infections, metabolic changes, climate control)</strong>: <a href="https://www.scientificamerican.com/article/are-human-body-temperatures-cooling-down/">Are Human Body Temperatures Cooling Down? | Scientific American</a>; <a href="https://www.medicalnewstoday.com/articles/327458">Body temperature: What is the new normal? | Medical News Today</a></p>
<p>[^3]: Wunderlich, C.A., <em>Das Verhalten der Eigenwärme in Krankheiten</em> (Leipzig, 1868); translated as <em>On the Temperature in Diseases</em> (1871) – <strong>historic study defining the 37°C (axillary) normal from 25,000 patients</strong>. See also: <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC6946399/">Decreasing human body temperature in the United States since the Industrial Revolution - PMC</a>; <a href="https://www.discovermagazine.com/the-sciences/average-body-temperature-takes-a-dip">Average Body Temperature Takes A Dip | Discover Magazine</a></p>
<p>[^4]: Wunderlich, C.A., <em>Das Verhalten der Eigenwärme in Krankheiten</em> (Leipzig, 1868); translated as <em>On the Temperature in Diseases</em> (1871) – <strong>historic study defining the 37°C (axillary) normal from 25,000 patients</strong>. See also: <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC6946399/">Decreasing human body temperature in the United States since the Industrial Revolution - PMC</a>; <a href="https://www.discovermagazine.com/the-sciences/average-body-temperature-takes-a-dip">Average Body Temperature Takes A Dip | Discover Magazine</a></p>
<p>[^5]: Carl Wunderlich – LITFL Medical Eponym Library: <a href="https://litfl.com/carl-wunderlich/">https://litfl.com/carl-wunderlich/</a></p>
<p>[^6]: Human body temperature – Wikipedia: <a href="https://en.wikipedia.org/wiki/Human_body_temperature">https://en.wikipedia.org/wiki/Human_body_temperature</a></p>
<p>[^7]: Mackowiak, P. <em>et al.</em>, “A Critical Appraisal of 98.6°F… and Other Legacies of Wunderlich,” <em>JAMA</em> <strong>268</strong>:1578–80 (1992) – <strong>found a mean of 36.8°C in healthy subjects, questioning 98.6°F</strong>: <a href="https://jamanetwork.com/journals/jama/fullarticle/404131">98.6°F - JAMA Network</a></p>
<p>[^8]: Obermeyer, Z. <em>et al.</em>, “Individual differences in normal body temperature: longitudinal big data analysis,” <em>BMJ</em> <strong>359</strong>:j5468 (2017) – <strong>big-data study (UK) reporting a ~36.6°C mean, reinforcing a lower modern normal</strong>: <a href="https://www.medicalnewstoday.com/articles/327458">Body temperature: What is the new normal?</a></p>
<p>[^9]: Sund-Levander, M. <em>et al.</em>, “Normal oral, rectal, tympanic and axillary body temperature in adult men and women: a systematic literature review” – PubMed: <a href="https://pubmed.ncbi.nlm.nih.gov/12000664/">https://pubmed.ncbi.nlm.nih.gov/12000664/</a></p>
<p>[^10]: “High resting metabolic rate among Amazonian forager-horticulturalists experiencing high pathogen burden” – PMC: <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5075257/">https://pmc.ncbi.nlm.nih.gov/articles/PMC5075257/</a></p>
<p>[^11]: “Temperature,” <em>Clinical Methods</em> – NCBI Bookshelf: <a href="https://www.ncbi.nlm.nih.gov/books/NBK331/">https://www.ncbi.nlm.nih.gov/books/NBK331/</a></p>
<p>[^12]: “The Charmed Life of 98.6°F” – <em>The American Journal of Medicine</em>: <a href="https://www.amjmed.com/article/S0002-9343(25)00057-9/fulltext">https://www.amjmed.com/article/S0002-9343(25)00057-9/fulltext</a></p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Informed versus Uninformed AI Skeptics]]></title>
        <id>https://sampatt.com/blog/2025-03-27-spotting-naive-ai-skeptics</id>
        <link href="https://sampatt.com/blog/2025-03-27-spotting-naive-ai-skeptics"/>
        <updated>2025-03-27T10:22:50.000Z</updated>
        <summary type="html"><![CDATA[In Which I Describe why some AI Skeptics need to Experiment]]></summary>
        <content type="html"><![CDATA[<p>This quote from a <a href="https://arxiv.org/pdf/2503.18238">recent paper</a> caught my attention:</p>
<blockquote>
<p>First, while the current literature, such as Dell’Acqua et al. (2023) and Chen and Chan (2024), reveal the productivity effects of AI by randomizing access to LLM chatbots, they are not multimodal, do not include context, do not allow the chatbots to take independent actions or use APIs to call outside of the platform, and do not provide a collaborative workspace where machines and humans can jointly manipulate output artifacts in real-time.</p>
</blockquote>
<p>In other words, measuring the productivity gains from AI has been hampered by the artificial constraints of the study designs: they aren’t giving users the access and environment in which they’d actually use the models.</p>
<p>This got me thinking about some of the AI skeptics I’ve encountered over the past few years.</p>
<h1>AI Skeptics</h1>
<p>I don’t like hype. I got into the world of cryptocurrency when Bitcoin was really the only game in town, and I’ve been exposed to more hype than any man should have to witness.</p>
<p>I know many others like me, especially in tech. They’ve been there, done that, and whether or not they bought the t-shirt, they have well-tuned bullshit detectors.</p>
<p><strong>Or so they think.</strong></p>
<p>The truth is, none of us knows the future. Yes, we can spot hype, but can we know for sure just how delusional the hype is <em>this time around?</em> No. At least, not without proper time investment to see for ourselves.</p>
<p>I’ve seen many examples of otherwise thoughtful people correctly seeing hype and then incorrectly dismissing the underlying technology because of the mere existence of the hype. The truth they don’t want to see is that <em>their uninformed dismissal is just as naive as the uninformed hype</em>.</p>
<p>A few times I’ve seen people make overly dismissive claims about AI (usually in HN or Reddit comment threads), and the responses are often the same: “I’m not uninformed! I’ve used the models and they didn’t work for me.”</p>
<p>Isn’t this a valid response? Of course! Their own experiences are far more valuable than taking someone else’s word about the models’ capabilities.</p>
<p>Yet… these responses were often very far from my own experiences, which made me curious as to why. So I would follow up with questions, and I discovered some commonalities.</p>
<h1>They aren’t using the latest models</h1>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-03-28-ai-skeptics/image/scoff_synth.png" alt="ScoffSynth"></p>
<p>Remember the ancient days of GPT-3.5? This was the first mainstream model, and it sparked many of these original conversations about the models’ capabilities. It was amazing that a computer could hold a conversation at all, but of course the model had serious limitations.</p>
<p>Remember the less ancient days of the GPT-4 release? So do I - it was a major improvement from 3.5, one that kept me feeling excited about where this was heading.</p>
<p>The hype from 3.5 didn’t die down when 4 released - instead it grew. This unleashed a wave of AI skeptics who pointed out the limitations of the models at every opportunity.</p>
<p>The problem I noticed was that their objections were almost entirely based on 3.5 and not 4. They would post their prompt and response, then point and laugh. I would ask, “Was this 3.5 or 4?” and I estimate 90% of the time it was the older model. I would rerun the prompt with 4, and of course the output was dramatically improved.</p>
<p>This still happens, frequently. When the models’ capabilities jumped after the transition to reasoning models, many of the same examples of poor mathematical reasoning were still trotted out. Image generation flaws are laughed at, but the mocked examples weren’t produced with SOTA models. Just this week OpenAI’s 4o image tool rolled out and seems to have just about solved text generation in images (see the images in this post) - I guarantee you many will continue to say image generation models will never work for certain applications because they can’t handle text properly.</p>
<h1>They move goalposts</h1>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-03-28-ai-skeptics/image/scoff_synth2.png" alt="ScoffSynth"></p>
<p>Of course AI models can imitate human conversation - that isn’t really impressive. Of course AI models can do advanced math - so what? Of course they can make photorealistic images - that problem wasn’t that hard. Of course they can generate boilerplate code - they’re just ingesting basic documentation. Of course they can troubleshoot bugs - they’re just ingesting Stack Overflow. Of course…</p>
<p>You can’t transport your mind back to 2020, but if you could, I’m nearly 100% convinced that you would be absolutely astonished by what AI can now do. We passed the Turing Test ages ago, and few people cared.</p>
<p>“The models can do X, but they can’t do Y.” They then do Y. “They can do Y, but really Z is the thing humans need and models can’t do.” Over and over again.</p>
<h1>They don’t know how to prompt</h1>
<p>I almost didn’t include this observation, because if a model requires you to become an expert in prompting it in order for it to be useful, then that’s a valid objection. This is true - the more time you put into using the models, the better you get at understanding how to prompt them, and the better the responses get.</p>
<p>I am including it, not because the objection isn’t valid, but because so many skeptics lack basic awareness of how important prompting is. They’re doing the equivalent of dropping a new intern into a completely unfamiliar environment and asking them to handle a complex task without the context they need.</p>
<p>When I see the prompts they use, I ask them what the rest of the context looks like. All too often, that was it. Then I ask them for the follow-up prompts. Nope, that was it - they saw the model fail, and they stopped there.</p>
<p>This just isn’t how you use the models. Well, maybe for simple requests. But if you want them to do something complex, you need to be more thoughtful about what information you’re providing, what instructions you’re giving, and how to guide the model throughout a conversation.</p>
<p>If you’re looking for confirmation that AI can’t do something, you’ll find it. It takes a bit more effort to understand how they can be genuinely useful.</p>
<h1>They don’t use tools</h1>
<p>This one is becoming less true over time - I’ve seen a lot of people say they’ve tried Cursor, for example, and didn’t like it. Kudos to you for trying a new tool.</p>
<p>However, it still boggles my mind how many developers have never used any type of agentic coding assistant, and whose opinions are formed by prompting manually through a web UI and copying and pasting code.</p>
<p>I get it - it’s what you’re comfortable with. And it does help you. But if you’ve never tried Claude Code or Cursor or Aider or the other tools, please give them a try. Moving away from needing to copy and paste is already a huge improvement, but these tools do way more than that now.</p>
<p>If you couple these tools with good prompting and a bit of persistence, they quickly become indispensable, at least for greenfield projects.</p>
<h1>Conclusion</h1>
<p>Skepticism is good, but informed skepticism is better. If you’re using the SOTA models, you’ve used models enough to know how to prompt and guide them, you’re trying out the latest tools, and you still believe that AI isn’t all that useful - great! You have an informed opinion.</p>
<p>As an AI optimist, I’ve slowly learned the ways in which I was overly optimistic about what AI could do, or about the timelines involved. All I hope to see from the AI skeptics is the same genuine attempt at learning.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[We’re Still the Best Bakers: The Myth of American Manufacturing Decline]]></title>
        <id>https://sampatt.com/blog/2025-04-09-america-versus-china-manufacturing</id>
        <link href="https://sampatt.com/blog/2025-04-09-america-versus-china-manufacturing"/>
        <updated>2025-04-09T16:17:06.000Z</updated>
        <summary type="html"><![CDATA[In Which I Explain Per Capita Manufacturing Favors the US]]></summary>
        <content type="html"><![CDATA[<h1>Summary</h1>
<p>China manufactures more than the U.S.—but that’s because they have four times the population. When you look at per-person output, the U.S. is far more productive. There is no crisis. Tariffs won’t fix a problem that doesn’t exist.</p>
<h1>The Non-Existent Crisis</h1>
<p>We are told that America no longer produces anything, and that we’ve made a huge mistake by allowing China and other nations to become manufacturing powerhouses.</p>
<p>This is such a dire threat to America that the President claims emergency powers to unilaterally launch a trade war, hiking tariffs to levels not seen in nearly a century.</p>
<p>There are many problems with this approach, but the most glaring is that America is still a manufacturing powerhouse itself. In fact, it’s far more productive than China.</p>
<h1>The Numbers</h1>
<p>Let’s dig into why this misconception persists: raw numbers without context. It comes from looking only at total manufacturing output.</p>
<p>The <a href="https://data.worldbank.org/indicator/NV.IND.MANF.CD">World Bank data</a> shows China at $4.6 trillion worth of manufacturing in 2023, and the US at $2.5 trillion in 2021. I’ll round China to $5 trillion to make the example even simpler.</p>
<p>That means China manufactures roughly twice as many goods as the US. That’s not just a lot in raw terms—China accounts for 31% of global manufacturing, while the US accounts for only 16%.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-09-america-versus-china-manufacturing/image/2025-04-09-17-06.png" alt="Screenshot"></p>
<p>So does China’s higher total output prove we’re falling behind?</p>
<p>No, for a simple reason: <strong>China has ~1.4 billion people, and the US has ~350 million.</strong></p>
<p>They have almost exactly four times our population, yet they only produce twice as much. This means that Americans manufacture twice as much as the Chinese, per person.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-09-america-versus-china-manufacturing/image/2025-04-09-17-45.png" alt="Screenshot"></p>
<p>It’s difficult to see how this manufacturing superiority amounts to a crisis.</p>
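<p>The per-capita arithmetic above can be sketched in a few lines of Python, using the rounded figures from the World Bank data cited earlier (the population figures are the approximations given above):</p>

```python
# Rough manufacturing output per person, using the rounded figures above.
china_output = 5e12   # ~$5 trillion (rounded up from $4.6T, 2023)
us_output = 2.5e12    # ~$2.5 trillion (2021)

china_pop = 1.4e9     # ~1.4 billion people
us_pop = 350e6        # ~350 million people

china_per_capita = china_output / china_pop   # ≈ $3,571 per person
us_per_capita = us_output / us_pop            # ≈ $7,143 per person

print(f"China: ${china_per_capita:,.0f} per person")
print(f"US:    ${us_per_capita:,.0f} per person")
print(f"US/China ratio: {us_per_capita / china_per_capita:.1f}x")  # 2.0x
```

<p>Twice the output from a quarter of the people: that’s the whole argument in three divisions.</p>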
<h1>An analogy: Bakeries</h1>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-09-america-versus-china-manufacturing/image/2025-04-09-17-39.png" alt="Screenshot"></p>
<p>If you prefer analogies to dry numbers, let’s imagine that China and the US are two bakeries, one operating in a city and another in a small town.</p>
<p>The city supports a big bakery - it has 300 workers. Those 300 workers produce 1,000 loaves of bread each day. Each city baker turns out about <strong>3.3 loaves</strong> a day.</p>
<p>The small town has a population only one-quarter the size, and their bakery has only 70 workers. Despite having so few workers, they still produce half the output of the giant bakery, making 500 loaves a day. Each small town baker produces <strong>7 loaves</strong>, <strong>double</strong> the productivity of their city counterparts.</p>
<p>Does the greater raw output of the city bakery threaten the small town bakery? That’s unlikely, since there’s demand for bread in both the city and the town, as well as elsewhere. The small town bakery has better machinery and better bakers. They’ll be just fine.</p>
<p>Before launching a trade war, we should understand the numbers. America isn’t failing—we’re outperforming. Let’s not sabotage that with economic self-harm.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[o3 Beats a Master-Level Geoguessr Player—Even with Fake EXIF Data]]></title>
        <id>https://sampatt.com/blog/2025-04-28-can-o3-beat-a-geoguessr-master</id>
        <link href="https://sampatt.com/blog/2025-04-28-can-o3-beat-a-geoguessr-master"/>
        <updated>2025-04-28T08:34:44.000Z</updated>
        <summary type="html"><![CDATA[In Which I Try to Maintain Human Supremacy for a Bit Longer]]></summary>
        <content type="html"><![CDATA[<p><em>Update: Hello HN, MR, and ACX folks! Two quick updates:</em></p>
<ol>
<li><em>Many comments suggested it was unfair that the o3 model used search in 2 / 5 rounds. I ran those two rounds over again, in a Temporary Chat as before, and ensured they didn’t employ search. The results were nearly identical, as you can verify in the updated post.</em></li>
<li><em>I’m unemployed and would love to not be unemployed. If you have a project involving map data - or frankly just anything interesting - send me an email.</em></li>
</ol>
<p><em>Update 2: The <a href="https://www.geoguessr.com/challenge/gDq4wXvsLU3oNuY8">map</a> has now been played by 175 players! o3 is currently holding strong in 13th place.</em></p>
<p><em>Update 3: ccmdi wrote an informative <a href="https://ccmdi.com/blog/additional-thoughts-on-llm-geolocation">blog post</a> about the current state of LLM geolocation - they’ve also created a neat LLM <a href="https://geobench.org/">geolocation benchmark</a>.</em></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-29-21-50.png" alt="Screenshot"></p>
<h1>TL;DR</h1>
<p>In a head-to-head Geoguessr match, OpenAI’s o3 model out-scored me—a Master I–ranked human—23,179 to 22,054, correctly identifying all five countries and twice landing within a few hundred metres. Even when I embedded fake GPS coordinates in the image EXIF, the model ignored the spoof and still pinpointed the real locations, showing its performance comes from visual reasoning and on-the-fly web sleuthing—not hidden metadata.</p>
<h1>Background</h1>
<p>Simon Willison made a <a href="https://news.ycombinator.com/item?id=43803243">post to Hacker News</a> a few days ago about the surprising geolocation ability of the o3 model. He fed it some images, and it was able to guess the location remarkably well.</p>
<p>I left a comment based on my opinion as a Geoguessr player:</p>
<blockquote>
<p>I play competitive Geoguessr at a fairly high level, and I wanted to test this out to see how it compares.</p>
<p>It’s astonishingly good.</p>
<p>It will use information it knows about you to arrive at the answer - it gave me the exact trailhead of a photo I took locally, and when I asked it how, it mentioned that it knows I live nearby.</p>
<p>However, I’ve given it vacation photos from ages ago, and not only in tourist destinations either. It got them all as good or better than a pro human player would. Various European, Central American, and US locations.</p>
<p>The process for how it arrives at the conclusion is somewhat similar to humans. It looks at vegetation, terrain, architecture, road infrastructure, signage, and it just knows seemingly everything about all of them.</p>
<p>Humans can do this too, but it takes many thousands of games or serious study, and the results won’t be as broad. I have a flashcard deck with hundreds of entries to help me remember road lines, power poles, bollards, architecture, license plates, etc. These models have more than an individual mind could conceivably memorize.</p>
</blockquote>
<p>The post and my comment did very well on HN, and provoked some interesting discussion, leading to Simon creating a <a href="https://simonwillison.net/2025/Apr/26/geoguessr/">short post</a> highlighting my input.</p>
<p>Many people shared the same experiences that Simon and I had, being astounded by how well the models performed. But there were two threads of opposition running through the comments:</p>
<ol>
<li>The models were faking their chain of thought output and only reading the EXIF location data (AKA they’re tricking us).</li>
<li>The models aren’t actually that good at doing this, we cherry picked or otherwise just got lucky.</li>
</ol>
<p>It’s absolutely true that the models will use EXIF data if it’s available, and may not tell you that. In fact, they’ll use any information they have. I shared a story where this happened to me:</p>
<blockquote>
<p>I was a part of an AI safety fellowship last year and our project was creating a benchmark for how good AI models are at geolocation from images. [This is where my Geoguessr obsession started!]</p>
<p>Our first run showed results that seemed way too good; even the bad open source models were nailing some difficult locations, and at small resolutions too.</p>
<p>It turned out that the pipeline we were using to get images was including location data in the filename, and the models were using that information. Oops.</p>
</blockquote>
<p>However, the EXIF issue was completely overblown. I only used screenshots in my tests, which have no metadata, and o3 did incredibly well.</p>
<p>But several comments intrigued me:</p>
<blockquote>
<p>You should also see how it fares with incorrect EXIF data. For example, add EXIF data in the middle of Times Square to a photo of a forest and see what it says.</p>
</blockquote>
<blockquote>
<p>I think an alternative possible explanation is it could be “double checking” the meta data. Like provide images with manipulated meta data as a test.</p>
</blockquote>
<blockquote>
<p>I wonder What happened if you put fake EXIF information and asking it to do the same. ( We are deliberately misleading the LLM )</p>
</blockquote>
<p>Hmm… interesting idea.</p>
<p>So to test out the EXIF question, and to prove the geoguessing capabilities of the model definitively, I did a head-to-head test against a Master I level Geoguessr player - me!</p>
<h1>My Skill</h1>
<p>To clarify, I’m no Rainbolt or Zi8gzag. Those are <em>professional</em> Geoguessr players, full-time content creators who’ve been playing for years.</p>
<p>Geoguessr has a division ranking system based on ELO, and those guys are all in the top division, Champion. Below that in descending order are Master I &amp; II; Gold I, II, III; Silver I, II, III; and Bronze.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-22-51.png" alt="Screenshot"></p>
<p>I’m Master I, trying to grind into the Champions division. My highest ELO was 1188 - currently Champion ELO starts in the ~1230 range. Basically, I’m a master - not an IM or a GM.</p>
<p>I know enough to judge the model’s capabilities, and to see if the chain of thought reasoning it puts out makes sense, or is just nonsense. For the reader’s sake, I’ll explain my own chain of thought for my guesses as well, so you can see how they line up.</p>
<h2>How I tested</h2>
<ul>
<li><strong>Map &amp; seed</strong> — A Community World map, seed<br>
<code>gDq4wXvsLU3oNuY8</code> (<a href="https://www.geoguessr.com/challenge/gDq4wXvsLU3oNuY8">play it yourself</a>).</li>
<li><strong>Mode</strong> — <em>No Move</em>. I saw the full Street-View panorama; o3 saw exactly two 90° screenshots (start + opposite direction).</li>
<li><strong>Browsing/tools</strong> — o3 had normal web access enabled. No EXIF in the PNGs; for the spoof test I zipped the file so the metadata survived upload.</li>
<li><strong>Scoring</strong> — Standard Geoguessr 0–5,000 points per round, total out of 25,000.</li>
</ul>
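<p>For readers unfamiliar with how the 0–5,000 round scores work: Geoguessr doesn’t publish its exact formula, but the community’s reverse-engineered approximation is an exponential decay in distance. A minimal sketch - the decay constant and the ~14,917 km world-map size parameter are community estimates, not official values:</p>

```python
import math

def geoguessr_score(distance_km: float, map_size_km: float = 14916.862) -> int:
    """Approximate round score: 5,000 for a perfect guess, decaying
    exponentially with distance. Community-derived, not official."""
    return round(5000 * math.exp(-10 * distance_km / map_size_km))

print(geoguessr_score(0))    # → 5000, a perfect 5k
print(geoguessr_score(10))   # → 4967, a 10 km near-miss barely costs anything
print(geoguessr_score(381))  # → 3873, a 381 km miss loses over a thousand points
```

<p>The curve is forgiving near the pin and brutal far from it, which is why one bad country guess can decide a whole five-round match.</p>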
<p>The o3 model’s outputs can be very lengthy, and this post would get way too long if I included them all, so I’m embedding them individually as notes, and you can read them as you please. Note that the CoT reasoning also includes images as shown below, but I’m not including them to save space (it creates a lot of cropped images).</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-20-13.png" alt="Screenshot"></p>
<p>I’m also using Temporary Chat mode, to ensure that any previous conversations won’t impact the guessing.</p>
<h1>The Battle</h1>
<h2>Round 1 - Bulgaria</h2>
<h3>Human</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-17-26.png" alt="Screenshot"></p>
<p>First thought is Europe or Turkey. I then zoom into this pole and the roofs behind it.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-17-37.png" alt="Screenshot"></p>
<p>I’m 99% sure it’s Bulgaria now. You see these real tile roofs everywhere, and the concrete poles with upward hooks are typical too.</p>
<p>I scan around and confirm. Now that I’ve got the country, I begin thinking about the region guess. The obvious information is the terrain and the architecture - fairly large hills, somewhat southern architecture.</p>
<p>I scan the map quickly and find a decent spot and plonk down my marker.</p>
<h3>o3</h3>
<p><strong>Thought for 2m 39s</strong></p>
<p>{{note:o3-geoguessr-test-round-1-cot}}</p>
<p><strong>Southern Bulgaria – the small Rhodope-mountain town of Zlatograd, in Smolyan Province.</strong></p>
<h3>Result</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-49.png" alt="Screenshot"></p>
<p>I was closer, but not by much - I only have a ~100-point lead. Still, I’m feeling like I’m off to a good start.</p>
<h2>Round 2 - Austria</h2>
<h3>Human</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-17-54.png" alt="Screenshot"></p>
<p>First thought - obvious Europe. The architecture looks Swiss or Austrian, and only a second later I see the domain name on the rear of the car ending in .at - that’s a bit too easy! How will the model use that domain?</p>
<p>Region guessing Austria is difficult for me, basically I just look to see if I’m in the mountains or not. There are some hills around, but they aren’t the huge mountains. I guess on the eastern side of the country, away from the Alps but still in the hills.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-17-59.png" alt="Screenshot"><br>
Oh no! It’s as far west in Austria as you can get. There’s a flatter section there near the Swiss and German borders. I’m 381 km away, giving o3 an opportunity - will it capitalize on it?</p>
<h3>o3 - with search</h3>
<p><em>[I forgot to copy the CoT from this round, and it’s a Temporary Chat so it’s gone, oops.]</em></p>
<p>My best shot: <strong>Dornbirn, Vorarlberg, Austria — roughly 47.41 N 9.73 E (a residential side-street a few blocks north-east of the town centre).</strong></p>
<h3>Why I landed there</h3>
<table>
<thead>
<tr>
<th>Clue</th>
<th>What it told me</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mountain backdrop</td>
<td>Steep, forested Alp with a sharp west ridge – looks exactly like the Karren/Kapf massif that looms just east of Dornbirn in the Rhine valley.</td>
</tr>
<tr>
<td>Architecture</td>
<td>Mix of <strong>Bregenzerwald-style</strong> timber barns and the red-and-white “Rheintaler Riegelhaus” that’s endemic to northern Vorarlberg; not typical for neighbouring Switzerland or Germany.</td>
</tr>
<tr>
<td>Street furniture</td>
<td>Austrian-style blue street-name plates; small red-on-white house-number plaques (standard across Vorarlberg); E-type (EU) traffic signs.</td>
</tr>
<tr>
<td>The taxi</td>
<td>Rear window decal clearly reads <strong>“<a href="http://www.taxilinder.at">www.taxilinder.at</a>”</strong>. A quick lookup shows <em>Taxi Linder GmbH</em> is based in <strong>Dornbirn, Vorarlberg</strong>.</td>
</tr>
<tr>
<td>Licence-plate blur</td>
<td>Austrian format (but too blurry to read the district code).</td>
</tr>
<tr>
<td>Flat valley street with the mountain dominating the <strong>east-south-east</strong> horizon</td>
<td>Exactly the perspective you get from Dornbirn’s northern residential quarters (Hatlerdorf / Ried / Wallenmahd).</td>
</tr>
</tbody>
</table>
<p>Putting those together, Dornbirn is by far the tightest match. If I had to drop a pin, I’d place it within a couple of hundred metres of Wallenmahd / Riedgasse, just north of the Ache river.</p>
<h3>Result</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-55.png" alt="Screenshot"></p>
<p>Uh oh… it nailed it. It would have gotten the 5k too, since it said only a few hundred meters north of the river, and I guessed a bit further than that when I dropped the pin. Ouch.</p>
<p>It also visited the domain in order to get more information. Using Google during rounds is technically cheating - I’m unsure about visiting domains you find during the rounds though. It certainly violates the spirit of the game, but it also shows the models are smart enough to use whatever information they can to win.</p>
<p><em>Update: When I ran it again without search, it used the mountain outline to find the town anyway, see the image below. Search wasn’t needed at all!</em></p>
<p>Now o3 has an 800-point lead on me. I need a good round!</p>
<h3>o3 - without search</h3>
<p>{{note:o3-geoguessr-battle-austria-without-search}}</p>
<p>[Author - this image says it all:]</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-29-16-08.png" alt="Screenshot"></p>
<p>I’m putting us in <strong>Dornbirn, Vorarlberg, Austria – about 47.41 ° N, 9.74 ° E, just east of the town centre</strong>.</p>
<p><strong>Why Dornbirn?</strong></p>
<table>
<thead>
<tr>
<th>Clue</th>
<th>What it suggests</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>“.at” web address on the driving-school VW</strong></td>
<td>Austria</td>
</tr>
<tr>
<td><strong>German on the delivery van (“Fach-/Fahrmontage …”)</strong></td>
<td>German-speaking region</td>
</tr>
<tr>
<td><strong>Vertical-board timber houses with deep, carved balconies</strong></td>
<td>Classic Bregenzerwald / Vorarlberg style rather than Tyrolean stucco or Swiss Appenzell paintwork</td>
</tr>
<tr>
<td><strong>Red–white no-stopping signs, metric road markings</strong></td>
<td>Continental Europe, EU standard signage</td>
</tr>
<tr>
<td><strong>Triangular peak with a lower shoulder just to the right</strong></td>
<td>Looks like the Staufen–Karren duo that rises immediately east of Dornbirn in the Rhine Valley</td>
</tr>
<tr>
<td><strong>Flat valley floor but high mountains only a couple of km away</strong></td>
<td>Matches the Dornbirn basin, not Innsbruck or Salzburg which have broader valley floors</td>
</tr>
<tr>
<td><strong>Mix of modern infill and turn-of-the-century villas</strong></td>
<td>Typical of the residential belts (e.g., Hatlerdorf/Haselstauden) around central Dornbirn</td>
</tr>
</tbody>
</table>
<p>Put together, the architectural dialect, the Austrian domain, and that very recognizable mountain profile make Dornbirn the best fit.</p>
<h2>Round 3 - Ireland</h2>
<h3>Human</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-02.png" alt="Screenshot"></p>
<p>First thought - Ireland. Dashed yellow outside lines are very rare, and with this terrain and a white EU plate, it’s 100% Ireland.</p>
<p>Open ocean to the west, but distant land across the water to the NW and maybe an island SW?</p>
<p>I line up the road angle and check the map, looking on the west coast of Ireland for islands that make sense. I spend several minutes scanning, because this is a lot of information - I might be able to get a 5k on this round.</p>
<h3>o3 - with search</h3>
<p>{{note:o3-geoguessr-battle-round-3}}</p>
<p>That​ stark grey limestone, the endless low dry-stone walls, and the yellow dashed line that marks the left-hand shoulder on Irish roads are dead giveaways: you’re in <strong>the Burren, County Clare, Ireland</strong>.<br>
More precisely, the view is from the <strong>R477 “Coast Road” between Fanore and Ballyvaughan, just south-east of Black Head</strong>, looking across Galway Bay toward the hills of Connemara.</p>
<p>Approximate pin: <strong>53.120 °N, -9.285 °W</strong>.</p>
<h3>o3 - without search</h3>
<p>{{note:o3-geoguessr-ireland-without-search}}</p>
<p><strong>Short answer –</strong> the viewpoint is on the R 477 “coast road” that skirts the limestone karst of <strong>the Burren, County Clare, Ireland – roughly the Black Head/Fanore stretch on the edge of Galway Bay (about 53.13 N, –9.30 W).</strong></p>
<hr>
<h3>How I got there</h3>
<table>
<thead>
<tr>
<th>Clue</th>
<th>What it tells us</th>
<th>Why it fits the Burren coast</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Road markings</strong> – a single white centre-line, with a broken <strong>yellow</strong> line at the road edge.</td>
<td>Yellow edge lines are distinctive to the Republic of Ireland (left-hand traffic).</td>
<td>UK edge lines are white; continental Europe rarely uses yellow in this pattern.</td>
</tr>
<tr>
<td><strong>Left-hand‐side parking</strong> and vehicles facing both ways.</td>
<td>Confirms left-hand traffic but Google car happens to be travelling the opposing lane (so the yellow edge line appears on its right).</td>
<td>Consistent with Ireland; Street View often records one direction only.</td>
</tr>
<tr>
<td><strong>Drystone walls made of flat, grey limestone slabs</strong>; low, terraced, almost pavement-like outcrops on the hillside.</td>
<td>Classic Burren karst geology.</td>
<td>You see identical walls and limestone pavements between Ballyvaughan and Fanore.</td>
</tr>
<tr>
<td><strong>Barren, stepped limestone hillside on one side, open sea on the other.</strong></td>
<td>Narrow shelf road on a coastal plateau.</td>
<td>The R477 is literally wedged between Galway Bay and the Burren escarpment.</td>
</tr>
<tr>
<td><strong>Sparse vegetation – mostly grykes filled with grass and a few low shrubs.</strong></td>
<td>Karst environment with thin soils.</td>
<td>The Burren’s ecology is famous for exactly this mix.</td>
</tr>
<tr>
<td><strong>No hard shoulder, occasional gravel lay-bys where tourists stop.</strong></td>
<td>Popular scenic route but still a regional road (an “R” road).</td>
<td>The R477 is marketed as part of the Wild Atlantic Way; these gravel pull-offs exist every kilometre or so.</td>
</tr>
<tr>
<td><strong>Vehicles &amp; plates</strong> – EU-style, but not yellow (so not UK), and a bright yellow Post/parcel van that isn’t Royal Mail.</td>
<td>Republic of Ireland uses white plates and has several yellow courier fleets.</td>
<td>Again points to ROI rather than NI.</td>
</tr>
</tbody>
</table>
<p>Put all that together and the only coastal road in Ireland that matches the stone-walled, treeless limestone landscape right down to the water is the Burren coast. The specific curve, with the sea to the north-west and the hillside to the south-east, matches the section just west of Black Head lighthouse, before Fanore beach.</p>
<h3>Result</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-19-06.png" alt="Screenshot"></p>
<p><strong>Seriously?</strong> That’s just outrageous. How can it possibly do that? I thought for sure I had this round, I was only 10km away!</p>
<p>The chain of thought is insanely thorough, looking at the road lines, cars, license plates, rocks, water, everything. But here’s another trick it uses:</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-19-10.png" alt="Screenshot"></p>
<p>More web search! It’s cheating, but it’s also prospering. I don’t want to overstate the web search though - it only knew to search those particular terms first because it properly identified everything in the CoT first. It appears to have used search more to confirm its guess than to generate it.</p>
<p><em>Update: As I suspected, search was irrelevant here. It guessed almost identically.</em></p>
<h2>Round 4 - Colombia</h2>
<h3>Human</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-13.png" alt="Screenshot"></p>
<p>First thought - South America. Road lines look Brazilian, but then I notice the license plates. They’re all yellow, which means it’s almost certainly Colombia.</p>
<p>I scan some more, and find the classic Colombian cross on the back of this sign.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-14.png" alt="Screenshot"></p>
<p>It’s very flat here, which is unusual in Colombia, so it narrows it down quite a bit. It’s likely either east of the Andes mountain range, or it’s coastal. I see quite a few palm trees, and there’s lighter / sandy dirt, so I’m leaning towards coastal.</p>
<h3>o3</h3>
<p>{{note:o3-geoguessr-battle-round-4-cot}}</p>
<p>This looks like rural Colombia—specifically the hot, low-lying Caribbean-plain corridor that carries Ruta 25 / the Troncal de Occidente between Sincelejo and Caucasia.</p>
<p>Key clues I’m using</p>
<table>
<thead>
<tr>
<th>Clue</th>
<th>What it tells me</th>
</tr>
</thead>
<tbody>
<tr>
<td>Yellow-on-black license plates (all vehicles)</td>
<td>Colombia (Ecuador also uses yellow, but the typography and plate size here are Colombian).</td>
</tr>
<tr>
<td>Red banner on the shop roof reading <strong>“Águila Roja”</strong></td>
<td>That’s a Colombian coffee brand you see on roadside tiendas country-wide.</td>
</tr>
<tr>
<td>The small green inflatable tent printed <strong>“miNegocio”</strong> and ringed with yellow five-point icons</td>
<td>That’s the Prosperidad Social “Mi Negocio” micro-enterprise program; it tours Colombia’s northern departments.</td>
</tr>
<tr>
<td>Road markings: double solid yellow centre, white edge line with occasional dashes</td>
<td>Matches INVÍAS standard for Colombia’s two-lane primary highways.</td>
</tr>
<tr>
<td>Vegetation &amp; architecture</td>
<td>Lowland tropical dry forest / cattle country—typical of northern Bolívar, Sucre and Córdoba.</td>
</tr>
<tr>
<td>Continuous line of fuel-tankers and containers in the second frame</td>
<td>Classic queue before the Puente San Jorge or a peaje (toll) on Ruta 25.</td>
</tr>
</tbody>
</table>
<p>Put together, the scene fits the chain of little workshops and comedor stalls that flank <strong>Ruta 25 just west of Planeta Rica, Córdoba Department</strong>—roughly <strong>8.4 ° N, -75.6 ° W</strong>.</p>
<p>That’s my best pin.</p>
<h3>Result</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-19-24.png" alt="Screenshot"></p>
<p>Beaten again. It turns out there’s a flat section just west of the Andes much further south than I guessed.</p>
<p>Also, in the CoT o3 mentions a town further down the main road and closer to the location, but in the summary it chooses a different town for some reason.</p>
<p>It now leads by ~1,300 points heading into the final round.</p>
<h2>Final round - Slovakia</h2>
<h3>Human</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-23.png" alt="Screenshot"></p>
<p>First thought - Europe, central or eastern.</p>
<p>Zooming in, I see a bollard:</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-24.png" alt="Screenshot"></p>
<p>These are only found in Czechia or Slovakia. This terrain is flat but with mountains in the distance, which could be either country. I scan around, and see a road sign.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-26.png" alt="Screenshot"></p>
<p>It’s quite difficult to read. Could it say Kosice? I know that’s in Slovakia. We’re heading west on this road and it’s 25km away.</p>
<p>I see there’s another city starting with H that’s only 16km, I should be able to scan the map and look for the in-between city to confirm. But I can’t find it!</p>
<p>I’m a bit afraid to guess Kosice, because it’s so far east - if it’s wrong, especially if it’s Czechia, then I’m in trouble. I choose to hedge instead, placing my marker in east Czechia.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-32.png" alt="Screenshot"><br>
The wrong decision! I now see my mistake - it wasn’t an H I was seeing, it was an M for the city of Michalovce. Will the model be able to read that sign?</p>
<p>I need to be closer here, or o3 will win.</p>
<h3>o3</h3>
<p>{{note:o3-geoguessr-battle-round-5-cot}}</p>
<p><strong>Guess: rural southwestern Slovakia – the Danubian lowlands, on national road 63 a few kilometres east-southeast of Šamorín (≈ 48.03 N, 17.40 E).</strong></p>
<h3>Why that’s my pick</h3>
<table>
<thead>
<tr>
<th>Clue</th>
<th>What it suggests</th>
<th>Match here</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Road furniture</strong></td>
<td>The white guide-post with a thick black band and two small orange reflectors is the standard Slovak post (Czech posts use red/white circles instead).</td>
<td>✔ Seen in the second shot</td>
</tr>
<tr>
<td><strong>Bus-stop sign</strong></td>
<td>Slovakia (and only a handful of neighbours) uses a <em>blue square with a yellow border</em> for bus stops.</td>
<td>✔ Clear in the first image</td>
</tr>
<tr>
<td><strong>Edge/centre markings</strong></td>
<td>Slovak main roads often have a dashed auxiliary line inside the carriageway plus the classic long-dash centre line. Hungary normally uses solid edge lines; Czechia usually omits the interior dashed line.</td>
<td>✔ Pattern matches Slovakia</td>
</tr>
<tr>
<td><strong>Warning sub-plate</strong></td>
<td>The “↕ 3 km ↕” format and font match Slovak design guidelines word-for-word.</td>
<td>✔ Under the deer-crossing sign</td>
</tr>
<tr>
<td><strong>Landscape</strong></td>
<td>Flat, intensively farmed fields with an occasional centre-pivot irrigator, dotted tree lines, and low hills far to the north – textbook Žitný ostrov / Danube floodplain scenery.</td>
<td>✔ Exactly what’s in frame</td>
</tr>
<tr>
<td><strong>Guard-rail colour</strong></td>
<td>The little blue hand-rail on the side track is common on minor Slovak bridges/culverts and rare elsewhere.</td>
<td>✔ Visible left in the first shot</td>
</tr>
</tbody>
</table>
<p>Taken together, those push very strongly toward Slovakia, and the terrain fits the stretch of road 63 that runs from Šamorín toward Dunajská Streda. The faint ridgeline on the horizon lines up with the Little Carpathians north-west of that point.</p>
<p>So while I can’t drop a pin to the metre, I’d put us on <strong>road I/63 just past the three-way junction by the hamlet of Kľúčovec, Dunajská Streda district</strong> – give or take a kilometre.</p>
<h3>Result</h3>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-19-39.png" alt="Screenshot"></p>
<p>It got the correct country - in fact it got all five countries correct - but its guess was actually slightly further off than mine, since it chose the far southwest of Slovakia while my hedge in east Czechia happened to be a bit closer.</p>
<p>Still, I only gained a handful of points in that round, and needed a lot more to win.</p>
<h1>Results</h1>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-18-34.png" alt="Screenshot"></p>
<p>I got 22,054 points out of a possible 25k. For a completely random seed this is a good score for me: I broke 4,000 in three rounds, nearly did in Colombia (3,983 - close enough!), and only Austria dragged me down. My average on this map is closer to 18k; the difference is likely because 4 out of 5 rounds were in Europe, which is often easier to guess than more rural, less developed regions.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-04-26-geoguessr/image/2025-04-27-19-41.png" alt="Screenshot"></p>
<table>
<thead>
<tr>
<th style="text-align:right">Rnd</th>
<th>Country</th>
<th style="text-align:right">Human dist (km)</th>
<th style="text-align:right">o3 dist (km)</th>
<th style="text-align:right">Human pts</th>
<th style="text-align:right">o3 pts</th>
<th style="text-align:left">o3 used web?</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:right">1</td>
<td>Bulgaria</td>
<td style="text-align:right"><strong>54</strong></td>
<td style="text-align:right">63</td>
<td style="text-align:right"><strong>4 856</strong></td>
<td style="text-align:right">4 755</td>
<td style="text-align:left">No</td>
</tr>
<tr>
<td style="text-align:right">2</td>
<td>Austria</td>
<td style="text-align:right">381</td>
<td style="text-align:right"><strong>0.4</strong></td>
<td style="text-align:right">3 336</td>
<td style="text-align:right"><strong>4 999</strong></td>
<td style="text-align:left"><strong>Yes</strong> (domain lookup)</td>
</tr>
<tr>
<td style="text-align:right">3</td>
<td>Ireland</td>
<td style="text-align:right"><strong>10</strong></td>
<td style="text-align:right">1.2</td>
<td style="text-align:right"><strong>4 984</strong></td>
<td style="text-align:right">4 997</td>
<td style="text-align:left">Yes (confirm)</td>
</tr>
<tr>
<td style="text-align:right">4</td>
<td>Colombia</td>
<td style="text-align:right">298</td>
<td style="text-align:right"><strong>82</strong></td>
<td style="text-align:right">3 983</td>
<td style="text-align:right"><strong>4 699</strong></td>
<td style="text-align:left">No</td>
</tr>
<tr>
<td style="text-align:right">5</td>
<td>Slovakia</td>
<td style="text-align:right"><strong>173</strong></td>
<td style="text-align:right">265</td>
<td style="text-align:right"><strong>4 895</strong></td>
<td style="text-align:right">4 729</td>
<td style="text-align:left">No</td>
</tr>
<tr>
<td style="text-align:right"><strong>Total</strong></td>
<td></td>
<td style="text-align:right"></td>
<td style="text-align:right"></td>
<td style="text-align:right"><strong>22 054</strong></td>
<td style="text-align:right"><strong>23 179</strong></td>
<td style="text-align:left"></td>
</tr>
</tbody>
</table>
<p><em>(Bold = per-round winner)</em></p>
<p>o3 scored 23,179, beating me by just over 1,100 points. A truly impressive performance.</p>
<p>I won rounds 1 and 5, but only by small margins, and it got two ~5k guesses. On both of those rounds, it did use search. [<em>Update: Search ended up not mattering in those rounds, see the round details above.</em>] However it still beat me in round 4 without search.</p>
<p>One major difference is time spent. I would typically guess within a minute or two, and in many rounds I will guess within 10 seconds. The model would nearly always take more than 2 minutes, and the longest was over 6 minutes! So humans still have the edge when it comes to speed, at the moment.</p>
<p>Remember, this is with only two screenshots per round. No metadata, no prompt engineering, just two .png files - not even zoomed in!</p>
<h1>EXIF spoofing</h1>
<p>It turns out, this isn’t as simple as it seems.</p>
<p>I initially used <code>exiftool</code> to copy the metadata from genuine GPS-tagged images into my screenshots.</p>
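<p>For reference, that tag copy can be done with a single <code>exiftool</code> invocation (a sketch with placeholder filenames, not my exact command):</p>
<pre><code>exiftool -tagsfromfile donor_with_gps.jpg -gps:all screenshot.png
</code></pre>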
<p>Unfortunately, the models didn’t notice the EXIF data - they couldn’t see it at all. When I asked them about it, they ran Python tools to check, and the metadata showed up as empty. The model claimed this was because file uploads in the browser strip EXIF data.</p>
<p>The work-around was to zip the image file. This worked: the model could then extract the EXIF location data.</p>
<p>Still, no luck fooling it on my first attempt. It didn’t mention the EXIF data at all, so I asked about it, and here’s the response:</p>
<blockquote>
<p>I pulled the EXIF block out of the JPEG, and it <em>does</em> contain GPS tags—but they’re garbage for this purpose:</p>
<pre><code>GPSLatitude        : 13°44'21.72&quot; N
GPSLongitude       : 100°33'27.97&quot; E
GPSTimeStamp       : 10:43:48
GPSDateStamp       : 2024:12:23
</code></pre>
<p>Those coordinates put you in suburban Bangkok, Thailand—obviously nowhere near the Andean coffee-zone scene in the photo. So either the file is a re-encoded Street View frame with spoofed/default metadata, or the camera that captured the screenshot had stale GPS information. In short, the EXIF misleads rather than helps.</p>
</blockquote>
<p>I tried again, this time just using <code>exiftool</code> to print the EXIF data and including it as text in the chat, so that it wouldn’t require two prompts. Here’s the model’s response:</p>
<blockquote>
<p>Nice try with the EXIF! 43 ° N and a Nikon P6000 might shout “Tuscan holiday”, but the scene itself gives the game away:</p>
</blockquote>
<table>
<thead>
<tr>
<th>clue</th>
<th>what it points to</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dense spruce &amp; birch forest, brown ground-cover and bare deciduous twigs – late-October boreal ecology</td>
<td>Scandinavia (boreal, not Mediterranean)</td>
</tr>
<tr>
<td>Single–carriageway road with <strong>no centre line</strong> but two long-dash <strong>edge lines</strong></td>
<td>Typical Norwegian county road marking (roads &lt; 5.5 m wide)</td>
</tr>
<tr>
<td>W-beam guard-rail on square posts + black/white delineator sticks</td>
<td>Statens vegvesen standard furniture</td>
</tr>
<tr>
<td>Grey painted through-truss bridge with bolted lattice diagonals</td>
<td>Common post-1960 Norwegian prefab design</td>
</tr>
<tr>
<td>River of decent width in a steep, forested valley</td>
<td>Central Norwegian interior topography</td>
</tr>
<tr>
<td>Faint “© 2022 Google” watermark – it’s a Street-View still, not a 2008 compact-camera shot</td>
<td></td>
</tr>
</tbody>
</table>
<blockquote>
<p>Putting all of that together, the location is almost certainly a rural county (fylkes) road in <strong>Trøndelag, central Norway</strong>.<br>
The only place that matches the exact bridge geometry, railing layout and curve on approach is the crossing of the <strong>Nea river on Fv 705 at Garberg, Selbu kommune</strong> – about 50 km southeast of Trondheim.</p>
<p><strong>My guess:</strong><br>
<em>Garberg bru, Fv 705, Selbu, Trøndelag, Norway</em> – roughly <strong>63.16 ° N, 11.35 ° E</strong>.</p>
<p>If you drop the pegman there in Street View you’ll see this exact frame.</p>
</blockquote>
<p>At this point it probably goes without saying that it nailed both guesses.</p>
<p>I’m unsure how other people were able to have the models pull the EXIF data from the image naturally. Maybe they’re using the mobile app, and it handles file uploads differently, or something else is going on. If anyone else is able to spoof EXIF data and have the models fooled, please show me how you did it.</p>
<p>Regardless, from my two tests, it appears that o3 is just too smart to be fooled by spoofed EXIF GPS data. It saw the obvious inconsistencies. It doesn’t only look at metadata; it reviews the image, and it’s very good at doing that.</p>
<h1>Conclusion</h1>
<p>Does the Chain of Thought make sense? For the most part, yes.</p>
<p>I notice that it often does a lot of unnecessary and repetitive cropping, and will sometimes spend far too much time on something unimportant. A human is very good at knowing what matters; o3 is worse at deciding what to focus on. It got distracted by advertising multiple times.</p>
<p>However, most of what it says about things like signs and road lines appears to be accurate, or at least close enough to the truth that the clues meaningfully add up. Given how good the final guesses were, it genuinely seems to arrive at them from that information.</p>
<p>If it’s using other information to arrive at the guess, then it’s not metadata from the files, but web search. In the Austria round the search was likely meaningful, since the website it found named the town itself. It appeared less meaningful in the Ireland round, and the model was still very capable in the rounds without search. [<em>Update: Search didn’t matter in those rounds after all, see rounds above.</em>]</p>
<p>So to put a bow on this:</p>
<ol>
<li>The o3 model isn’t smoke and mirrors, tricking us by only using EXIF data. It’s at a comparable Geoguessr skill level to Master I or better players now (at least according to my own ~20 or so rounds of testing).</li>
<li>Humans still hold a big edge in decision time - most of my guesses took &lt; 2 min, while o3 often took &gt; 4 min.</li>
<li>Spoofing EXIF data doesn’t throw off the model.</li>
</ol>
<p>Whether you view this as dystopian or as a technological marvel - or both - you can’t claim it’s a parlor trick.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Why did I wait so long to Integrate AI into my Shell]]></title>
        <id>https://sampatt.com/blog/2025-07-27-why-did-i-wait-so-long-to-integrate-ai-into-my-shell</id>
        <link href="https://sampatt.com/blog/2025-07-27-why-did-i-wait-so-long-to-integrate-ai-into-my-shell"/>
        <updated>2025-07-27T15:23:24.000Z</updated>
        <summary type="html"><![CDATA[In Which I Add ZSH Widgets to make CLI AI Easy]]></summary>
        <content type="html"><![CDATA[<p>Somehow I never bothered to hook up an LLM to my shell until recently, and now I regret waiting so long. I explain how I’m using shell-gpt and ZSH widgets in this post.</p>
<p>I use LLMs frequently. A quick glance at my OpenAI dashboard shows about 30 threads over the past week - and that doesn’t count the disappearing threads I use to keep some conversations out of my history, my programming with Claude Code or Cursor, the various other programs I use that call API endpoints, or my local Ollama model requests.</p>
<p>Linux has been my daily driver for over a decade now, and I’ve gotten very comfortable with the terminal. Copying and pasting into and out of the terminal happens all the time, so when I started using LLMs for programming and they had an “Add to chat” feature which took terminal output and dropped it into the conversation, I found that extremely useful. No more selecting text, copying, switching to the browser, pasting, and doing the same in reverse.</p>
<p>But there are three main limitations for this feature:</p>
<ol>
<li>I’m not always using my IDE.</li>
<li>I don’t always want to add the terminal output to the current conversation.</li>
<li>This only adds context from the terminal, it’s not integrated into the shell.</li>
</ol>
<p>Many times I’m using keyboard shortcuts to pop a terminal open just to do something quickly, and I don’t want to open my IDE. Even if I am in my IDE, not all the things that happen in terminal are things I want to add to whatever conversation I was engaged in, and if I’m going to be starting new conversations then this isn’t any faster than it would be popping over to a browser.</p>
<p>Also, whatever command I get back appears in the IDE’s LLM chat pane. It’s usually easy enough to run from there, though it typically requires mouse interaction and often scrolling as well.</p>
<p>I found a much better way, and it’s called <a href="https://github.com/TheR1D/shell_gpt">Shell‑GPT</a>:</p>
<h1>Shell-GPT</h1>
<p>Shell-GPT is fairly straightforward. It’s a Python tool which describes itself as follows:</p>
<blockquote>
<p>A command-line productivity tool powered by AI large language models (LLM). This command-line tool offers streamlined generation of <strong>shell commands, code snippets, documentation</strong>, eliminating the need for external resources (like Google search). Supports Linux, macOS, Windows and compatible with all major Shells like PowerShell, CMD, Bash, Zsh, etc.</p>
</blockquote>
<p>You can use it in multiple ways. One is to make a query:</p>
<pre><code>sgpt &quot;What is the fibonacci sequence&quot;
# -&gt; The Fibonacci sequence is a series of numbers where each number ...
</code></pre>
<p>This is useful, but there’s another way to use it which I love.</p>
<h3><strong>Ctrl + L</strong> — “Write the command for me”</h3>
<p>Press <strong>Ctrl + L</strong> after typing a natural‑language request (e.g. “find all JSON files below this folder”).<br>
Shell‑GPT’s built‑in shell integration replaces your current input line with a fully‑formed command.<br>
You can edit it if you like, then hit <strong>Enter</strong> to run.</p>
<p>This is very useful - I don’t always remember the best ways to use common command line utilities, and this feature has been very good at finding the right commands so far. Because it’s integrated into the shell, it just transforms the natural language request into the command - I only press the keyboard shortcut, and if I like what it recommends, I hit enter to run.</p>
<h2>What’s missing</h2>
<p>But I noticed an annoying limitation of Shell-GPT - you can’t just run a keyboard shortcut to automatically send the last shell command and output together.</p>
<p>This is one of the things I most need in the terminal. I run a command and get an error. In the past, I would copy both the command and its output into a browser LLM, and if it gave me a different command back, I would copy and paste that into the shell. This always annoyed me - all the text is right there, why should I move it back and forth to get an answer?</p>
<p>So along with o3, I built a ZSH widget that lets me do just that.</p>
<h3><strong>Ctrl + O</strong> — “Explain what just happened”</h3>
<p>Press <strong>Ctrl + O</strong> immediately after any command finishes:</p>
<ol>
<li>Every command’s <code>stdout</code> + <code>stderr</code> is automatically mirrored into <code>~/.last_cmd_out</code> by a pair of <code>preexec</code>/<code>precmd</code> hooks.</li>
<li>The custom <strong><code>sgpt_last</code> widget</strong>—bound to <strong>Ctrl + O</strong>—does three things:</li>
</ol>
<pre><code># grab the last 5000 lines of output and send them to Shell-GPT for analysis
tail -n 5000 ~/.last_cmd_out | sgpt &quot;What went wrong?&quot;
zle reset-prompt   # refresh the prompt after the reply
</code></pre>
<ul>
<li>
<p><strong>Step 1:</strong> <code>tail</code> keeps the capture lightweight (adjust the line count as you like).</p>
</li>
<li>
<p><strong>Step 2:</strong> The prompt “What went wrong?” can be changed to anything—e.g. “Summarize this output” or “Generate a fix.”</p>
</li>
<li>
<p><strong>Step 3:</strong> <code>zle reset‑prompt</code> ensures your prompt is redrawn cleanly after the AI reply streams back.</p>
</li>
</ul>
<p>The result appears inline in your terminal, giving you an instant diagnosis or explanation without re‑running the command.</p>
<hr>
<h2>Behind the Scenes (hook snippet)</h2>
<pre><code class="language-zsh">export LAST_CMD_OUT=&quot;$HOME/.last_cmd_out&quot;

capture_start() {
  : &gt;| &quot;$LAST_CMD_OUT&quot;
  exec {__SAVE_OUT}&gt;&amp;1 {__SAVE_ERR}&gt;&amp;2
  exec &gt; &gt;(tee -a &quot;$LAST_CMD_OUT&quot;) 2&gt;&amp;1
}

capture_end() {
  [[ -z ${__SAVE_OUT+x} ]] &amp;&amp; return
  exec 1&gt;&amp;$__SAVE_OUT 2&gt;&amp;$__SAVE_ERR
  exec {__SAVE_OUT}&gt;&amp;- {__SAVE_ERR}&gt;&amp;-
  unset __SAVE_OUT __SAVE_ERR
}

autoload -Uz add-zsh-hook
add-zsh-hook preexec capture_start
add-zsh-hook precmd  capture_end
</code></pre>
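<p>For completeness, here’s a minimal sketch of the <code>sgpt_last</code> widget and its binding, reconstructed from the description above (the prompt text and line count match the earlier snippet; adjust both to taste):</p>
<pre><code class="language-zsh">sgpt_last() {
  zle -I                          # invalidate the display before printing output
  tail -n 5000 &quot;$LAST_CMD_OUT&quot; | sgpt &quot;What went wrong?&quot;
  zle reset-prompt                # redraw the prompt after the reply
}
zle -N sgpt_last                  # register the function as a ZLE widget
bindkey '^O' sgpt_last            # bind it to Ctrl + O
</code></pre>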
<p>Add the snippet above and the <code>sgpt_last</code> widget binding to your <code>~/.zshrc</code>, reload the shell, and enjoy AI super‑powers right from the prompt.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Visualizing Why Bitcoin Can't Work Over HF Radio]]></title>
        <id>https://sampatt.com/blog/2025-11-08-visualizing-why-bitcoin-cant-work-over-hf-radio</id>
        <link href="https://sampatt.com/blog/2025-11-08-visualizing-why-bitcoin-cant-work-over-hf-radio"/>
        <updated>2025-11-08T12:35:00.000Z</updated>
        <summary type="html"><![CDATA[In Which I Show Why We Need the Internet for Bitcoin]]></summary>
        <content type="html"><![CDATA[<p><em>Video version of this article:</em></p>
<p>{{youtube:<a href="https://youtu.be/tpdWersuhjQ">https://youtu.be/tpdWersuhjQ</a>}}</p>
<p>Surely if there are two technologies which are inseparable, it’s Bitcoin and the Internet. After all, Bitcoin is magic <strong>internet</strong> money, right?</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-10-19-visualizing-why-bitcoin-cant-work-over-hf-radio/image/2025-10-19-21-42.png" alt="Screenshot"></p>
<p>Not everyone is convinced. Long-time Bitcoiner NVK recently posted <a href="https://x.com/nvk/status/1978828198914781428">this on X</a>:</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-10-19-visualizing-why-bitcoin-cant-work-over-hf-radio/image/2025-11-01-03-56.png" alt="Screenshot"></p>
<p>NVK knows his stuff: he’s been running a Bitcoin hardware company for ages, has been a prominent technical voice in the Bitcoin community, and he’s a licensed amateur radio operator. In fact, he was the first person to <em>send</em> Bitcoin over HF radio, <a href="https://x.com/nvk/status/1095354354289135617">back in 2019</a>.</p>
<p>Also, NVK isn’t the only one who has taken this seriously. Famous Bitcoiners Nick Szabo and Elaine Ou did some experimentation with Bitcoin and HF in 2017 and gave this fascinating presentation at Scaling Bitcoin:</p>
<p>{{youtube:<a href="https://youtu.be/Wt8iGvgclXI?si=YQACnf5KzCKibBVa">https://youtu.be/Wt8iGvgclXI?si=YQACnf5KzCKibBVa</a>}}</p>
<p>It’s been a few years since these experiments. I exchanged a few messages with Szabo in the year or two afterward - he hoped to continue the work, but to my knowledge, neither he nor Ou did anything more with the idea. But given NVK’s interest, perhaps the idea isn’t totally dead.</p>
<p>Would Bitcoin “not need the internet” if we limited blocks to only being 300kB and they were broadcast over HF radio? Is NVK right?</p>
<p><strong>The short answer</strong>: no.</p>
<p><strong>The long answer</strong>: if you define “using Bitcoin” as being a passive observer of the network, then it’s possible, but that’s a bad definition and it’s not a use case that should dictate Bitcoin’s design.</p>
<p>Explaining this with only numbers would be fairly boring, so I created some visualizations (like the example below) to show why it’s not a good idea.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-10-19-visualizing-why-bitcoin-cant-work-over-hf-radio/gif/gif_20251101_041115.gif" alt="GIF"></p>
<p>But let’s clear up two things first:</p>
<ol>
<li>Why would we even want to use radio instead of the internet?</li>
<li>Who am I to tell you this isn’t possible?</li>
</ol>
<h1>Why Radio</h1>
<p>Bitcoin was created to be <a href="https://sampatt.com/blog/2019-06-06-satoshi-analysis">peer to peer cash</a>. The decentralization and trust-minimization of the design are the entire point. It was meant to be a way for people to move money around without needing to rely on existing financial institutions. To avoid those points of control and to empower individuals to handle their own money.</p>
<p>Doesn’t the internet work perfectly for this? In practice, it does - most of the time. In theory, less so.</p>
<p>If you want to build a decentralized network for payments that minimizes trust, then you need an underlying decentralized network to use. But our current internet isn’t fully decentralized and can’t be fully trusted. Just like banks act as points of control and surveillance in our financial system, our internet also has many points of control and surveillance.</p>
<p>It’s easy to imagine that the electrical impulses which leave your computer are connected directly to the other computer you’re talking to, and the signal goes from one to the other - because that’s true! But along the path, there are many physical points of infrastructure which neither party in the communication controls, and which are being monitored by various organizations (ISPs, other companies, governments).</p>
<p>Most of those organizations don’t care about you using Bitcoin. Most of them wouldn’t take any steps to prevent Bitcoin traffic. But, crucially, <em>they could if they wanted to.</em> Bitcoin’s allowed existence in the past is no guarantee in the future.</p>
<p>If you look at authoritarian regimes today, and throughout history, they typically demand full control over the money supply and the financial system. The physical infrastructure of the internet is completely vulnerable to state-level actors. We already see this in countries like China, and as Snowden showed us more than a decade ago, western governments are hardly idle online.</p>
<p>Radio is so compelling because <em>it bypasses all points of control.</em> The signal is emitted from one antenna, travels through the atmosphere, and is received by another antenna. Truly peer to peer; nothing in-between! This is as decentralized as it gets.</p>
<p>NVK is right when he says “Truly decentralize. Speed of light.” It’s a beautiful vision - have the underlying communication network be peer to peer at the most fundamental level possible. And to be clear, I support this vision. This post isn’t about why radio can’t be used to empower decentralization, or even why radio can’t be used for Bitcoin (I believe it can), but why <em>HF radio specifically</em> isn’t sufficient.</p>
<h1>My Bitcoin / Radio Experience</h1>
<p>I’ve been involved with Bitcoin for a long time. I was a senior policy analyst and ran the technology policy program at a Washington D.C. think tank, and while I was there I wrote one of the <a href="https://www.goodreads.com/book/show/18993889-bitcoin-beginner?from_search=true&amp;from_srp=true&amp;qid=vv7nuQKw3B&amp;rank=3">first books</a> published about Bitcoin, back in 2013. I then left in order to co-found a Bitcoin company in 2015 - we built the decentralized marketplace OpenBazaar.</p>
<p>I’ve been a licensed amateur radio operator for even longer than I’ve been a Bitcoiner - I first got my ham license at 14 years old. I was obsessed with <a href="https://en.wikipedia.org/wiki/Numbers_station">numbers stations</a> back in my teen years, spending hours tuning the shortwave bands trying to find them. I’ve logged many hundreds of DX (long distance) contacts over the years - my furthest is South Africa, ~8,000 miles using only 100 watts (I love using data modes).</p>
<p>I was the first person to <em>receive</em> Bitcoin over HF radio, in that same transaction mentioned above - NVK sent the message from Toronto to me in Michigan.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-10-19-visualizing-why-bitcoin-cant-work-over-hf-radio/image/2025-11-01-03-57.png" alt="Screenshot"></p>
<p>This experiment gained a small amount of online attention, and I was later invited onto the “School of Block” show as their technical expert to explain and to help them recreate it. They produced an excellent video, and if this subject interests you, I highly recommend you watch it:</p>
<p>{{youtube:<a href="https://www.youtube.com/watch?v=6mFu8Wh3rkc">https://www.youtube.com/watch?v=6mFu8Wh3rkc</a>}}</p>
<p><em>(I’m not a Bitcoin core developer, nor am I an electrical or radio engineer; if you happen to be one of those things and I’ve deeply offended you with my oversimplifications - or I’ve made an error - feel free to contact me. My info is in the About section.)</em></p>
<p>Regardless of my experience, the subject of radio + Bitcoin fascinates me - so let’s dive into it!</p>
<h1>The problem</h1>
<p>The main problem of using HF radio for Bitcoin is limited bandwidth. How much bandwidth do we need? Let’s briefly examine how Bitcoin works.</p>
<p>The simplest way to understand Bitcoin is to view it as a distributed ledger to keep track of digital money, combined with everything needed to maintain such a ledger. The ledger is a series of blocks linked in a chain (the blockchain) back to the first transaction in 2009. These blocks are individually quite small - the average block size at the time of writing is roughly 1.5MB, and they are only generated and added to the blockchain approximately every 10 minutes.</p>
<p>If you’re not familiar with Bitcoin, you’d be forgiven for thinking that 1.5MB is a minuscule amount of data. In fact, the block size has been an extremely contentious subject in the community, with many believing it should be even smaller, and many believing it must increase. I won’t go into the Block Size Wars of the 2017 era (it wasn’t pretty), but the main thing to note is that blocks are limited to about 4MB in size for the foreseeable future.</p>
<p>Only 1.5 - 4 MB of data every ten minutes? Where’s the bandwidth problem?</p>
<p>There isn’t one - for internet, or most forms of radio communication. Let’s visualize this.</p>
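<p>Before the visuals, it’s worth putting rough numbers on it (my own back-of-the-envelope arithmetic): the sustained bit rate needed just to receive blocks is tiny by internet standards.</p>
<pre><code class="language-python"># Sustained bit rate needed to keep up with the Bitcoin blockchain
block_bytes = 1.5e6        # ~1.5 MB average block today
max_block_bytes = 4e6      # ~4 MB worst case
interval_s = 10 * 60       # one block roughly every ten minutes

avg_bps = block_bytes * 8 / interval_s       # 20,000 bps = 20 kbps
max_bps = max_block_bytes * 8 / interval_s   # ~53,333 bps = ~53 kbps
</code></pre>
<p>Any broadband connection handles 20-53 kbps without noticing; the question is whether an HF channel can.</p>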
<p>If you’ve already got a solid handle on radio fundamentals, you can skip to the <a href="#speed-comparisons">Speed comparisons</a> section below.</p>
<h2>Frequency, Bandwidth, and Modulation</h2>
<p>For many years the terms <em>frequency</em>, <em>bandwidth</em>, and <em>modulation</em> - and how they related to each other - were always just outside my intuitive grasp. I knew how to use them in radio communications, but not what they represented at a fundamental level. Visualizations helped immensely.</p>
<p>Electromagnetic fields are strange things. They can be static, like the electric field around a charged balloon or the magnetic field around a neodymium magnet. Unless you interact with or move those fields in some way, they’re content to sit there.</p>
<p>If you connected a DC power source to an antenna, you’d create a static electric field (and a steady magnetic field) that could be measured nearby. But it wouldn’t radiate; it would just sit in place and fade quickly with distance.</p>
<p>Hook up the same antenna to AC, and it’s no longer static. The alternating current makes the electrons accelerate back and forth, and the changing electric and magnetic fields they generate propagate outward on their own - no longer bound to the antenna - radiating outward at the speed of light. Those propagating fields are <em>electromagnetic waves</em>.</p>
<p>{{viz:dc-ac-em-wave}}</p>
<p>We can detect these EM waves when they interact with matter, such as another antenna. The oscillating fields drive the electrons in the receiving antenna in the same rhythm as the transmitter, creating an alternating voltage we can measure.</p>
<p>To understand <em>frequency</em>, <em>bandwidth</em>, and <em>modulation</em> all you need to do is follow that oscillating voltage line through time and watch how it varies.</p>
<h3>Frequency</h3>
<p>Looking at the voltage line, you’ll see EM waves primarily as sine waves. The most fundamental aspect of these waves is how frequently they complete a cycle - how far apart the peaks are. This rate is called the <em>frequency</em>, and we measure it in Hertz (Hz): one cycle per second.</p>
<p>The essential idea is that <strong>higher frequency means more opportunities to send information.</strong> To understand why, we need to know what a <em>carrier wave</em> is.</p>
<p>{{viz:frequency-modulation}}</p>
<p>A sine wave itself doesn’t give you any information. It’s just a line that moves up and down predictably. But this predictability is important, because if you want to send information over radio, you need to send this sine wave (called a carrier wave), but alter it a little bit to encode your data. We call these alterations to the carrier wave <em>modulation</em>.</p>
<h3>Modulation</h3>
<p>If the receiver knows the frequency of the sender’s carrier wave (i.e. the distance between peaks on the voltage line), then they can look for any variations away from that perfect sine wave. Anything that deviates from the carrier wave is the underlying information.</p>
<p>There are various ways to modulate the carrier, and all of them can be visualized by looking at the voltage line. If you vary the height of the wave, that’s <em>amplitude modulation</em> (AM). If you vary the frequency away from the carrier (change the distance of the peaks), that’s <em>frequency modulation</em> (FM). If you shift when the line is moving up and down (changing the phase), that’s called <em>phase modulation</em> (PM). These can be combined as well - another method called Quadrature Amplitude Modulation (QAM) changes the amplitude and phase together.</p>
<p>{{viz:modulation}}</p>
<p>In this visualization you can see the carrier wave as a dashed line behind the modulated signal. By selecting different modulation modes and their strength, you can see differences emerge between the two lines - that difference contains the information of the signal.</p>
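<p>The same three schemes can be sketched numerically. Here’s a toy numpy example of my own (not the code behind the visualization) showing the message signal reshaping the carrier’s amplitude, peak spacing, and phase:</p>
<pre><code class="language-python">import numpy as np

fs = 10_000                              # sample rate, Hz
t = np.arange(0, 1, 1 / fs)              # one second of samples
msg = np.sin(2 * np.pi * 2 * t)          # slow 2 Hz message signal
carrier = np.sin(2 * np.pi * 100 * t)    # 100 Hz carrier

# AM: the message varies the height (amplitude) of the carrier
am = (1 + 0.5 * msg) * carrier

# FM: the message varies the peak spacing; the phase advance is the
# running integral of the instantaneous frequency deviation
fm = np.sin(2 * np.pi * 100 * t + 10 * np.cumsum(msg) / fs)

# PM: the message shifts the phase of the carrier directly
pm = np.sin(2 * np.pi * 100 * t + 0.8 * msg)
</code></pre>
<p>The AM envelope swings above and below the carrier’s amplitude, while the FM and PM signals stay bounded and only shift their timing.</p>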
<h3>Bandwidth</h3>
<p>Notice that if you crank up the modulation strength, you can deviate from the carrier quite a bit. The more significant the deviation becomes, the further away the modulated signal reaches from the carrier frequency. This distance is called the <em>bandwidth</em> of the signal, and it’s directly related to how much information can be transmitted.</p>
<p>The frequency visualization earlier is a bit misleading. It’s not literally true that 1 Hz = 1 bit of information. In reality, the <strong><a href="https://www.geeksforgeeks.org/electronics-engineering/shannon-capacity/">Shannon–Hartley theorem</a></strong> tells us that the data rate of a channel depends on its bandwidth and its signal-to-noise ratio, not directly on its frequency.</p>
<p>So while frequency itself doesn’t directly increase the data rate, it does give you way more space for your modulated signal to deviate further and further away from the carrier signal. All that extra space - bandwidth - means you can pack in more data.</p>
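<p>The theorem itself is a one-liner. The channel widths and the 20 dB SNR below are numbers I picked for illustration, not measurements:</p>

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley: the maximum error-free data rate of a channel
    with the given bandwidth and (linear) signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz voice-grade HF channel at a healthy 20 dB SNR (linear SNR = 100):
hf_channel = shannon_capacity_bps(3_000, 100)         # ~20 kbps
# A 20 MHz Wi-Fi channel at the same SNR:
wifi_channel = shannon_capacity_bps(20_000_000, 100)  # ~133 Mbps
```

<p>Same signal-to-noise ratio, wildly different capacity - the difference is entirely bandwidth.</p>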
<p>The next visualization shows what happens when you increase bandwidth. Note that in the visualization above, if we increased the frequency all the way to 100 Hz, that resulted in 100 chances to send information. But when we expand the bandwidth of the signal, we get many more opportunities.</p>
<p>There are different ways to use bandwidth. We’ve been discussing single-carrier signals, where using more bandwidth means further deviation from the carrier frequency. But you can also use multiple carriers in parallel instead, such as the popular <a href="https://en.wikipedia.org/wiki/Orthogonal_frequency-division_multiplexing">OFDM</a>.</p>
<p>{{viz:bandwidth-window}}</p>
<p>If you’re operating at 100 Hz and you’re using 1% of that frequency for your bandwidth, that means your signal spans about 1 Hz in total - roughly from 99.5 Hz to 100.5 Hz. That’s a very narrow slice, just a couple of “lanes” to send information.</p>
<p>But if you’re operating at 2.4 GHz (Wi-Fi) and you use 1% of <em>that</em> frequency, your bandwidth is <strong>24 MHz</strong> - 24 million hertz wide! That’s an enormous amount of spectrum to modulate, giving you millions of times more opportunities each second to modulate data.</p>
<p>For many forms of radio communication you’d be using even less than 1% of the carrier’s frequency as bandwidth, closer to 0.1%. This visualization shows you how much bandwidth you get when using only 0.1% of the carrier frequency - notice how quickly it grows at higher frequencies.</p>
<p>{{viz:radio-spectrum}}</p>
<p>This is why higher-frequency systems can carry vastly more information - not because the waves themselves are “faster,” but because they can use much wider frequency bands.</p>
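<p>To put numbers on it - the carrier values here are the same examples from above:</p>

```python
def bandwidth_hz(carrier_hz, fraction):
    """Usable bandwidth if you claim a given fraction of the carrier frequency."""
    return carrier_hz * fraction

low = bandwidth_hz(100, 0.01)              # 1% of 100 Hz   -> ~1 Hz
wifi = bandwidth_hz(2_400_000_000, 0.01)   # 1% of 2.4 GHz  -> 24 MHz
hf = bandwidth_hz(15_000_000, 0.001)       # 0.1% of 15 MHz (mid-HF) -> 15 kHz
```
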
<p>HF radio operates at frequencies between 3 and 30 MHz - that is, 3 to 30 million oscillations per second. That may sound like a high frequency, and back in the early days of our understanding of radio, it <em>was</em> higher than the most popular use of radio, the AM broadcast band. But at HF frequencies you typically have only about 2–12 kHz of usable bandwidth - thousands of times narrower than the MHz-wide channels used by Wi-Fi, cell networks, or satellites.</p>
<h2>Speed comparisons</h2>
<p>Let’s see how these different modes of communication compare on speed:</p>
<ol>
<li>Fiber internet</li>
<li>Line of sight radio (Cell network / Starlink)</li>
<li>HF radio</li>
</ol>
<p>Speeds across these modes vary quite a bit, but the averages I found ended up being approximately 1 gigabit per second (Gbps) for fiber, 65 megabits per second (Mbps) for cell networks, and 150 Mbps for Starlink (I averaged the latter two together to 100 Mbps).</p>
<p>What about the speed for HF? That’s a bit complicated, but let’s just take the number <a href="https://x.com/nvk/status/1979182736070877477">NVK gives us</a>:</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-10-19-visualizing-why-bitcoin-cant-work-over-hf-radio/image/2025-11-01-04-02.png" alt="Screenshot"></p>
<p>12 kHz of bandwidth → roughly <strong>50 kilobits per second (kbps)</strong> for HF. That amount of bandwidth on HF frequencies would be considered <em>wideband</em>, which is far beyond what ordinary ham operators are allowed to use. It’s the kind of channel width used by shortwave broadcasters, military systems, or governments, not individuals. Typical amateur allocations top out around 2–3 kHz.</p>
<p>The modern wideband HF systems that NVK is referencing can reach 50 kbps under ideal conditions, but this almost certainly requires a well-engineered, licensed, high-power setup with good signal-to-noise ratio and forward-error correction. Real-world amateur setups are an order of magnitude slower than this.</p>
<p>In practice, an HF station operating with 12 kHz of continuous bandwidth would need major transmit power, large antennas, and government-level spectrum authorization. In other words, these numbers only make sense if you have a large broadcast-grade facility or you’re willing to go rogue.</p>
<p>Could the international community be convinced to allocate 12 kHz of HF spectrum for a continuous Bitcoin “block beacon”?</p>
<p>Probably not, but let’s pretend it happens. How fast would it actually be?</p>
<p>This visualization shows how long it would take to transmit a single Bitcoin block in real time at various data rates. I randomly chose block #920,315, which was <strong>2,545,182 bytes</strong> in size. Each byte is 8 bits, so we’d need about <strong>20 million individual modulation opportunities</strong> to send this data.</p>
<p><em>(Because of <a href="https://github.com/bitcoin/bips/blob/master/bip-0152.mediawiki">BIP 152</a>, Compact Blocks, peers that share mempool data don’t need to send full blocks. This saves significant bandwidth when blocks are propagated. However, passive HF receivers wouldn’t have that shared mempool state, so for this visualization I’ll use the full block size.)</em></p>
<p>{{viz:block-transfer}}</p>
<p>Oof. Fiber is nearly instant, cell or Starlink take a few hundred milliseconds, and HF takes more than six minutes.</p>
<p>But what if we limit the block size to 300 kB, as NVK suggests? That would drop the time down to 48 seconds - still extremely slow in modern terms, but fast enough to ensure that someone receiving the blocks from such a broadcast wouldn’t have blocks piling up faster than the ~10 minute average rate of block generation.</p>
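<p>The arithmetic behind these numbers is simple; this sketch ignores protocol overhead, retransmissions, and error correction, so real links would only be slower:</p>

```python
BLOCK_BYTES = 2_545_182   # block #920,315

def transfer_seconds(n_bytes, bits_per_second):
    """Time to push n_bytes through a link at a given raw data rate."""
    return n_bytes * 8 / bits_per_second

fiber    = transfer_seconds(BLOCK_BYTES, 1_000_000_000)  # ~0.02 s
starlink = transfer_seconds(BLOCK_BYTES, 100_000_000)    # ~0.2 s
hf       = transfer_seconds(BLOCK_BYTES, 50_000)         # ~407 s, over six minutes

small_block = transfer_seconds(300_000, 50_000)          # 300 kB block: 48 s

# Syncing the full 692 GB chain over the same HF channel, in years:
chain_years = transfer_seconds(692_000_000_000, 50_000) / (365 * 24 * 3600)  # ~3.5
```
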
<p>So it would work, right? Well, sorta, but there are a few problems.</p>
<h2>Two more problems with HF Radio</h2>
<p>Most people’s interactions with radio nowadays are at much higher frequencies than HF. Wi-Fi, cellphones, satellites, even FM broadcast all operate well above the 3–30 MHz HF band. They typically operate as line of sight (LOS), meaning the signal takes a direct path between the sender and receiver. This is why you can lose GPS signal in the mountains, or why your standard Wi-Fi router might have problems covering your entire home - stuff gets in the way of the signal.</p>
<p>LOS radio has the massive bandwidth advantage we’ve discussed, but it also has a fundamental problem - the earth is curved. The distance a signal can reach is inherently limited. Tall antennas can help get further around the curve, but unless you’re a satellite, or on a mountain, airplane, or balloon, you’re not usually going much further than ~50 miles.</p>
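<p>You can estimate the reach with the standard radio-horizon rule of thumb: roughly 4.12·√h kilometers for an antenna h meters up, where the 4.12 factor folds in typical atmospheric refraction. The tower heights below are just examples:</p>

```python
import math

def radio_horizon_km(height_m):
    """Approximate radio horizon for one antenna, including standard
    atmospheric refraction (the common 4.12 * sqrt(h) rule of thumb)."""
    return 4.12 * math.sqrt(height_m)

def max_link_km(h1_m, h2_m):
    """Two stations can reach each other while their horizons overlap,
    so the maximum LOS link is the sum of the two horizons."""
    return radio_horizon_km(h1_m) + radio_horizon_km(h2_m)

link = max_link_km(30, 30)   # two 30 m towers: ~45 km (~28 miles)
```
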
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-10-19-visualizing-why-bitcoin-cant-work-over-hf-radio/image/2025-11-02-04-40.png" alt="Screenshot"><br>
<a href="https://www.qsl.net/4x4xm/HF-Propagation.htm">Image source</a></p>
<p>HF doesn’t have this limitation, because the signal is <em>reflected off the ionosphere back to earth</em>. As the image above shows, LOS signals (such as VHF or UHF) just blast off into space, but when the angle is correct, HF will “bounce” from the ionosphere and back to earth multiple times, allowing incredible distances. It’s amazing this is possible, but it comes at a huge disadvantage.</p>
<h3>1. Unreliable</h3>
<p>Have you ever listened to HF (shortwave) radio? I have fond memories of tuning the dial on my Icom IC-718 rig as a teen, listening to - who knows what. Half the time I certainly didn’t.</p>
<p>The unreliability of HF is part of the charm. You could be hearing rough Portuguese one minute (Brazilian fishermen?) only to have it fade away, then tune down the band and hear a Voice of America broadcast, then try (and fail) to read Morse code in one of the amateur bands, then find an elusive number station, only to have it fade out just when you try to start recording it.</p>
<p>The HF bands are notorious for being “closed” or “open,” meaning sometimes signals will reflect off the ionosphere, and sometimes they won’t. These conditions depend on time of day, the solar cycle, and other factors - making propagation somewhat predictable, but never completely reliable.</p>
<p>It’s not so charming when trying to send data - especially a big, single transmission of data.</p>
<p>This means that safeguards such as forward error correction and sending repeats are necessary to increase the likelihood of receiving 100% of the message. Given the medium, it’s never certain, and if a recipient misses a block, what happens then? It’s not simple to use an unreliable medium, especially if you have no alternative methods.</p>
<h3>2. Antennas aren’t small</h3>
<p>The lower the frequency, the longer the wavelength of the signal, and the longer the antenna needs to be. Your cell phone operates at frequencies so high that the wavelength is short enough to use an antenna printed onto a chip.</p>
<p>HF antennas are long. This is partially because amateur radio operators want to transmit as well as receive, so they’re using proportions (such as quarter-wave) which are more efficient, resulting in antennas that are often 7 meters or longer. Ideally these are up 20+ feet off the ground.</p>
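<p>The length comes straight from the wavelength: λ = c / f, and a quarter-wave element is a quarter of that. For example:</p>

```python
C_M_PER_S = 299_792_458  # speed of light, meters per second

def quarter_wave_m(freq_hz):
    """Length of a quarter-wavelength antenna element."""
    return C_M_PER_S / freq_hz / 4

forty_meter_band = quarter_wave_m(7_000_000)   # 40 m ham band: ~10.7 m element
wifi = quarter_wave_m(2_400_000_000)           # 2.4 GHz Wi-Fi: ~3 cm
```
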
<p>If you’re only receiving, they can be much smaller, but they’re still not at a user-friendly size. If you wanted to receive a Bitcoin data signal reliably and with a small antenna, the broadcaster would need to be using an enormous amount of power.</p>
<p>All these limits add up to the same problem: HF isn’t practical for a system that frequently exchanges data. It’s a one-way pipe in a network that fundamentally depends on two-way exchange.</p>
<h2>Passive versus active participants</h2>
<p>HF’s limits - low bandwidth, long antennas, and unreliable propagation - mean you can’t be an active participant in the Bitcoin network.</p>
<p>Being a participant in the Bitcoin network isn’t just about receiving blocks. Only being able to receive blocks is hardly useful at all.</p>
<p>We already have an example of such a system, which has been deployed since 2017: the Blockstream satellite. The prominent Bitcoin development company has multiple satellites broadcasting the blockchain from space. It’s neat that this exists at all, but as far as I know, it’s not widely used. The <a href="https://github.com/Blockstream/satellite">last commit</a> on their project was over six months ago, and it’s marketed primarily as a backup to keep your Bitcoin node updated in case of an internet outage - not as an internet replacement.</p>
<p>The blockchain is currently 692 GB. Obviously you aren’t going to be receiving this over HF radio, since it would take roughly 3.5 years. So a user who is only a recipient of blocks needs to have the blockchain synced to some point before receiving blocks. This is almost certainly done over the internet - with any other method it would be nearly impossible to ensure you’re synced with this block beacon.</p>
<p>So you use the internet to get the blockchain, then switch it off to only receive blocks via HF. Now what happens? Probably not much, because you can’t <em>talk</em> to the Bitcoin network. You would have a “read-only wallet,” meaning you could receive new funds (but not send), and if you owned coins you could watch them do nothing. The only time you would see anything happening is in the unfortunate circumstance where someone got your private keys and moved your funds to their own control.</p>
<p>{{viz:network-topology}}</p>
<p>Other than as a temporary backup against unreliable internet, this approach is effectively useless. So let’s take a single step toward active participation, without requiring the internet yet. What if we sent transactions over HF?</p>
<p>Of course, this is technically possible. Data is being sent and received over HF radio all the time, by amateurs, commercial stations, and governments. It’s not hard to imagine one Bitcoiner, or even small communities, doing this. After all, it has been done in experiments. But there’s a huge chasm between a handful of experiments and a system that can work at scale.</p>
<p>First, this requires an immediate upgrade to our antenna, power requirements, and radio equipment. It would be nothing like the “Casio watch” example NVK mentions. This is far from user-friendly today.</p>
<p><em>(I shared a draft of this article with NVK, and he suggested that the barriers of antenna size and power needs aren’t as large today as they were in the past; we can do a lot more with a lot less. I’ve witnessed this myself over my ham career - the advent of cheap <a href="https://en.wikipedia.org/wiki/Software-defined_radio">SDR</a> and the emergence of weak-signal data modes like <a href="https://unicomradio.com/js8call/">JS8</a> have made data over HF much more accessible. I’d love nothing more than to see those barriers continue to get lowered, but I suspect the fundamental constraints we’re discussing will never make it practical for most users.)</em></p>
<p>Also, I’ve completely ignored the regulatory side of things so far, but if you care about the legality, you’d need a license in basically all countries in the world to use HF spectrum, and broadcasting Bitcoin transactions for commerce isn’t allowed in most of them.</p>
<p>But let’s say you do it anyway - who would be listening to your message? The Bitcoin block beacon is a broadcast, not a two-way service. Someone must be running a service to take your transaction and relay it to the rest of the network (over the internet). The problem is that the person running that service is a point of control, just like the internet infrastructure we tried to avoid. The same problem is true of the block beacon idea, or the Blockstream satellite - someone needs to be running that infrastructure on behalf of others.</p>
<p>Radio can connect people directly - no cables, no routers, no servers. So why not cut out the points of control and talk peer-to-peer? Why not have Bitcoin nodes directly connected via radio? It’s an appealing vision, and it’s not impossible. Mesh networking over radio already exists, but to my knowledge it’s always involved LOS radio, and has never run a large scale decentralized software network before. But it’s simply not feasible on HF; it collapses under scale.</p>
<p>According to <a href="https://bitnodes.io/">Bitnodes</a> there are about <strong>23,000 Bitcoin nodes</strong> online right now. Each one is constantly gossiping with its peers, exchanging transactions and blocks - often using <strong>hundreds of gigabytes of bandwidth every month</strong>. You couldn’t possibly do that over HF; even a single node’s chatter would take months or years to transmit. The internet is completely necessary for the Bitcoin network to function.</p>
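<p>A quick sanity check, assuming the same (generous) 50 kbps wideband HF channel from earlier, running flat-out around the clock:</p>

```python
def max_monthly_gb(bits_per_second, days=30):
    """Data a channel can move running continuously, in gigabytes."""
    return bits_per_second * days * 86_400 / 8 / 1e9

hf_ceiling = max_monthly_gb(50_000)   # ~16 GB/month - far short of the
                                      # hundreds of GB a busy node exchanges
```
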
<h2>Conclusion</h2>
<p>In the narrowest possible sense, you might be able to “use Bitcoin” via HF radio by receiving blocks. This would require introducing a Bitcoin beacon using a dedicated slice of HF spectrum, which is unlikely to happen legally and which introduces exactly the type of centralized infrastructure we’re trying to avoid.</p>
<p>In any reasonable interpretation of being a <em>participant</em> in the Bitcoin network, you need the internet.</p>
<p>Suggesting that Bitcoin’s blocks should be significantly smaller in order to better serve passive participants in the network via a poor data communication method makes no sense. Removing points of control over our infrastructure is a good impulse, and I hope to see radio play more of a role in the future, but I’m not hanging my hopes on the HF bands.</p>
<p>For the foreseeable future, Bitcoin’s fate is tied to the internet, and even something as radical as reducing the block size by an order of magnitude won’t change that.</p>
<p><em>Thanks to NVK for reviewing a draft of this article, and for promoting Bitcoin &amp; ham radio over the years.</em></p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[My Grandma Was a Fed - Lessons from Digitizing Hundreds of Hours of Childhood]]></title>
        <id>https://sampatt.com/blog/2025-12-13-my-grandma-was-a-fed-lessons-from-digitizing-hundreds-of-hours-of-childhood</id>
        <link href="https://sampatt.com/blog/2025-12-13-my-grandma-was-a-fed-lessons-from-digitizing-hundreds-of-hours-of-childhood"/>
        <updated>2026-01-13T02:28:43.000Z</updated>
        <summary type="html"><![CDATA[In Which I go into Great Detail About Digitizing Video8 and Learn My Grandmama Lived With a Prostitute]]></summary>
        <content type="html"><![CDATA[<p><em>This post is getting some <a href="https://news.ycombinator.com/item?id=46934513">love on HN</a> - welcome! I’m looking for work right now so if you need someone technical who can write and knows how to use AI tools, send me an email, it’s on my About page.</em></p>
<p>I was in my basement on my birthday, cleaning out our cluttered utility room, when I first noticed the plastic tub near the furnace. This bit of tidying up was in order to install a squat rack I’d just bought from a fellow father off Facebook Marketplace - apparently mid-life crises <em>do</em> hit at exactly 40 years old.</p>
<p>I’d forgotten entirely about this container, but the tightly-sealed white lid stood out in a room half-full of boxes with mangled cardboard flaps barely keeping them closed. I knelt down, released the snaps on each side, and lifted off the lid, revealing rows upon rows of neatly stacked video tapes.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2026-02-08-patterkin/image/2026-02-08-01-08.png" alt="Screenshot"></p>
<p>You notice when someone is filming you. It’s only human nature to recognize when your appearance and behavior is being captured <em>forever</em>. This is true even in a world where everyone carries a video camera in their pocket. It was even much more true in my childhood, where film cameras were commonplace, but video cameras were a rarer sight.</p>
<p>No one told my father this - well, they did, and he didn’t care - because he loved his Sony Handycam Video8, and it became a ubiquitous part of our family gatherings. I’d forgotten this until I saw those tapes, mostly because it was part of the background of my childhood, Dad setting up the tripod in the corner of the room while he taught us to play card games or we decorated the Christmas tree.</p>
<p>The tub was almost completely full. I later counted 122 tapes, and found out this format could capture about two hours per tape. What was in those 200+ hours of my childhood? My mother’s handwriting gave me partial answers: Steven Play, Emily Recital 1993, Grandmama and Papa Thanksgiving, etc.</p>
<p>I noticed that as time went on my mother’s regard for posterity diminished. It soon became “Fall 1996”, and then just “1998.” What happened in 1998? I couldn’t access a single memory from that specific year, but I knew that strip of magnetized tape would bring them back. Curiosity flared.</p>
<p><em>One thing my mom may have wished we’d forgotten. In 1989 she self-deprecatingly proposed we create an ugly cake award in honor of her baking triumphs. Believe it or not, this was the runner up.</em></p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2026-02-08-patterkin/image/2026-02-08-00-29.png" alt="Screenshot"></p>
<p>My mind kept coming back to those tapes as I continued cleaning. I opened another tub, this one containing letters I’d received in my later teen years. Those were difficult years for me, and I started to put the tub aside, deciding not to risk invoking a depressive form of nostalgia. But the curiosity sparked by those tapes hadn’t diminished, and I reopened the tub.</p>
<p>Do you remember the good things about the bad parts of your life? If so, congratulations, you’re more gracious (or have a better memory) than I. The low points are most vivid. But, if we’re fortunate, even - and especially - in those times we have people trying to help create some good things for us. I’m a bit ashamed to say I’d forgotten those people, and those things, and this tub was full of them. I was overcome with the wonderful realization of just how many supportive friends and family I had.</p>
<p>All of us tell ourselves stories about our own childhoods. How accurate are they? Historians know that the further away in time from the primary source, the less reliable the story. Other than parents and siblings (and perhaps a smattering of childhood friends), there’s almost no feedback to validate our own stories. And those other people also rely on memory, and are also afflicted with the same human foibles that lead us to not have entirely trustworthy stories of the past. Magnetic tape has no such flaws.</p>
<p>Siblings! I have three, and I imagined each of their reactions to unearthing hundreds of hours of childhood video. What stories did they have, and how might these tapes interact with them? And my own? I couldn’t know, but I suspected that it was closer to the “wonderful realization” end of the spectrum than the “depressive nostalgia.” Bolstered by those new positive memories, I decided I must digitize and share those tapes.</p>
<p>I searched for services which offered to digitize Video8 tapes. Most services cost about $20 per tape. Even with discounts for bulk amounts, it would likely have cost about $2k! I considered paying it (how exactly do you value a few hundred hours of childhood video?) but then I noticed how they delivered the videos - a private media hosting solution <em>for 60 days</em>. I knew this would be a huge amount of data, and only giving me and my siblings two months wasn’t sufficient.</p>
<p>I was curious about the possibility of doing this myself, and I asked ChatGPT. Not surprisingly, it knew a lot about the various tapes, file formats, sizes, processing, and storage options, and after it asked some clarifying questions, it was quite optimistic about me being able to do this myself. In typical sycophantic fashion, it reassured me that I was very technically adept, knew how to create websites, owned a 4090 GPU, and already had a homelab operating. If I set up good processes, I could save a lot of money and also have way more control.</p>
<p>I started researching digitizing the tapes, watching YouTube videos and talking to ChatGPT, taking notes about the main areas I needed to focus on. One of the most important things I learned was that in order to digitize the tape, it needed to play in real-time as it was captured.</p>
<p>I quickly realized the implication - this wouldn’t only be a big project logistically, but would also take a very long time. Capturing the tapes would be a few hundred hours alone, then I needed to convert the tapes into a format that can be shared online, create any metadata about the videos, determine local storage, online storage, and determine how to actually share the videos with my siblings. I stopped the research for a moment to create a timeline.</p>
<p>It was early September, so there were about four months until the end of the year. If I did one tape a day, I could nearly get the 122 tapes completed by then. I decided to aim for Christmas and make this a gift for my siblings. But this would mean I had to start now. Like, now now.</p>
<p>Digitizing these tapes requires a camera that can play Video8. Of course we didn’t have one of those, that’s a very old format. I started looking online before it struck me: we still had what was left of my parent’s possessions elsewhere in our basement. I ran downstairs.</p>
<p>I found the black and red camera bag in no time. My father’s camera was still there, the same device which recorded the ~20 years of videos. I plugged it in, and it lit up promisingly. I pressed the eject button in order to insert a tape, and… nothing. I ran the full gamut of hopeful button pressing, to no avail, only to have ChatGPT dash my hopes for good when I asked for help.</p>
<p>It turns out that this is an exceedingly common problem on old Handycams, especially if they’ve gone unused for many years. The only realistic fix is to send it to a specialty repair shop, which tended to cost a few hundred dollars, and even then you weren’t sure how long it might last.</p>
<p>I was disappointed. It would have seemed appropriate to use the same camera to record all these memories, again. But ChatGPT gave me a silver lining - now that I needed a new camera, I could make this whole process much easier on myself by getting a slightly newer model which supported FireWire.</p>
<p>The old camera was entirely analog and the interface from the camera to a computer was somewhat convoluted, but a newer camera from the early 2000s would let me download raw .dv files by plugging a FireWire cable into the camera and running it to… wait, FireWire? That’s ancient, I can’t plug that in anywhere! Even if I could, the operating system probably wouldn’t handle it anymore, right?</p>
<p>Somehow I made it 1,300 words without mentioning that I’m a Linux user. And yes, FireWire is still supported in the Linux kernel. All I needed to do was buy a FireWire card and plug it into a PCI-Express slot. It cost $22 - my first purchase so far.</p>
<p>My next purchase wasn’t so cheap.  I needed that “new” Handycam, and it turns out there’s a bustling market for these for exactly the reason I wanted it. They’re hot items for people doing video capture. I found quite a few cameras which were the right model, but nearly all of them clarified that the tape heads weren’t new and not guaranteed to be clean either. Everything I read said it was imperative to get solid heads, so when I saw a Sony DCR-TRV740 on Ebay that claimed to have new clean heads, I made an offer, he countered, and I bought it for $249.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2026-02-08-patterkin/image/2026-02-08-00-39.png" alt="Screenshot"></p>
<p>I was too impatient to wait for delivery, and I noticed he lived in Michigan as I do, so I selected local pickup. Only when I checked the location did I realize he lived on the other side of the state. A more patient man would have sent a slightly embarrassing message on Ebay asking for delivery and figuring out the additional payment, but I couldn’t shake the vision of popping in a tape and seeing scenes no one had watched for ~20+ years. Tonight!</p>
<p>Three hours later I met a young man in the parking lot of a housing development “youth center.” He held out a cardboard box and politely asked for me to release the payment. I asked to see the camera operating - $249 isn’t chump change - and he looked around, no outlets in sight. But he popped his trunk and had an inverter in there (clearly a pro), plugged it in, and handed it over.</p>
<p>I had brought along two tapes, ensuring that 1) this camera played this format of tape, and 2) this camera worked at all. I don’t remember exactly which tape I popped in, except that I saw my mother as she looked when she was around my age now. She passed away in 2013 at 55 years old, and I hadn’t seen her at this age in… well, I don’t know how many years. I released the payment.</p>
<p>I had already exchanged a few questions with this young man, and I told him about my project. He let me know that 120 tapes was way more than I could reliably capture without at least one cleaning along the way, and explained that you could buy a cleaning cartridge to do this before we parted ways. Helpful guy.</p>
<p>I got home and popped in some more tapes, and each of them worked, even the oldest ones from the 1980s. The FireWire card and cable were still shipping, so I began to work on the pipeline. I initially thought I would just dump the raw file to my local SSD and then convert to an MP4 and figure out a way to share the file online. But the more I considered this approach, the less I liked the idea.</p>
<p>For one thing, each MP4 file was likely to be about 3GB. It’s not trivial to share a single file that size, and sharing 100+ is a real headache. Also, it’s not easy to watch a file like that. You’d need to download the entire file first to even be able to watch it.</p>
<p>Perhaps more importantly, would you even want to watch that much video? I had been so excited to see these memories that I still wasn’t taking in the scale. If you watched 8 hours of tapes a day, it would take nearly a month to get through them.</p>
<p>I could just imagine my siblings looking at a list of hyperlinked filenames, eyes glazing over. When they randomly chose T084.mp4 and spent 5 minutes downloading it, they’re met with 45 minutes of a church play from 1994 and an hour of carving pumpkins on the dining room table. Few would have the wherewithal to persevere and find the nuggets of nostalgia within.</p>
<p>The solution was clear: the tapes needed to divulge their secrets. If I captured the metadata about the videos themselves, then my siblings and I could decide what was worth watching and what could be skipped. When watching the tapes, the most obvious way to structure this data was right in front of me: chronological scenes unfolding one after the other. I planned to record the beginning and ending timestamp for a scene, along with who, when, where, and a description of what happened.</p>
<p>Conceptually this made a lot of sense, but I was unsure of the most practical way to do this with so much video to go through. I didn’t want to sit through all the videos and manually write down timestamps. I looked into AI, and found some promising leads. AI segmentation and identification models are getting very good, but unfortunately the versions you can run locally aren’t the best, and the sheer amount of video was impractical.</p>
<p>The FireWire card arrives, and I take my PC apart in order to plug it in. My 4090 takes up a comical amount of space inside my tower, but after some maneuvering I have the card installed, and I power up the PC and the camera.</p>
<p>Linux doesn’t disappoint me. It recognizes the camera and with the help of some commands from ChatGPT, it starts capturing the first tape. I stop it after a few minutes to watch the footage, and my heart sinks.</p>
<p>It’s not tracking properly. This is one of the biggest problems when capturing from analog devices: tracking is finicky and can be difficult to solve. I notice that the tracking goes off during scene transitions. I try a different command, and this time it works fine.</p>
<p>It turns out that the <code>dvgrab</code> tool on Linux supports auto-splitting scenes by default, as well as remotely operating the camera over FireWire. I thought this feature would be convenient, because it allowed me to pop in a tape and then start capturing by running a script. However, for an unknown reason, if I used the remote camera controls and had auto-split on, it would lose tracking on scene transitions.</p>
<p>Once I figured this out the capture worked perfectly. It meant I had to press a button to start capturing on the device, and it meant the blank gaps in between scenes would be left in the capture. I could live with those compromises.</p>
<p>The first full tape finished and produced a 26GB .dv file. This turned out to be a slight problem for the default video player in Ubuntu, which couldn’t easily move around a file so large. It kept getting stuck and crashing. I installed VLC and that fixed it immediately.</p>
<p>VLC was a godsend, because it solved my metadata problem. I didn’t know this before, but you can create your own VLC plugins. It turns out that ChatGPT is good at writing them. I told it that I wanted a timestamp tool where I could hit a hotkey and it would record the scene start time, then when I hit the hotkey again it would record the scene stop time and give me input fields for people and a video description. Each scene was written to a text file in the video directory.</p>
<p>This worked perfectly. I later added the ability to adjust the timestamps based on the video start time: most captures had several seconds of blank time before the footage began, so the tool shifts each timestamp by the delay you enter.</p>
<p>I created a YAML metadata file with everything needed, dropped the scene information into a freeform notes section, then prompted an AI agent to format the scene information into proper YAML. AI is really good at formatting text; this saved me so much time. Later I added a script to convert the YAML to JSON to make it easier to use online.</p>
<p>{{note:patterkin-yaml}}</p>
<p>But it did still require me to watch the videos to find the scene starts and ends, enter the dates, people, and location, and describe what happened in each clip. Even hammering the right arrow key to skip ahead ten seconds at a time in VLC, this is still the single most time-consuming part of this project.</p>
<p><strong>Spoiler alert</strong>: <em>it’s February and I’ve gotten through 64 tapes so far. I’m skipping ahead a bit here, but I build a platform to share these videos and I’ve shared it with my siblings already, so I’ll just keep uploading videos until I get through them all.</em></p>
<p>I now consider what else I need. I love captions. I watch most video with captions on. This is great for viewing, but when I consider it more deeply I realize that if I transcribe all these videos I’ll have a text record of a significant portion of our childhood. I later find out this was a great idea; there are multiple videos of my parents talking in great detail about our family history with their parents and one grandparent. There is a wealth of information about genealogy in those transcripts.</p>
<p>Obviously I cannot do this manually; this is already consuming a lot of my time. It’s the perfect job for an AI transcription model. I talk to ChatGPT about my options, and we settle on using WhisperX. I’m quite familiar with running AI models locally, and before long I test it on a full clip. It works beautifully, and only takes a few minutes for an entire two-hour tape.</p>
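<p>The transcription step is just a CLI call per clip. A sketch of the invocation is below; the model name and flag spellings are assumptions, so check <code>whisperx --help</code> for whatever version you install:</p>

```python
# Sketch of the WhisperX transcription step. Model choice and flag
# names are assumptions; verify them against the installed version.
def whisperx_cmd(video: str, out_dir: str) -> list[str]:
    return [
        "whisperx", video,
        "--model", "large-v2",     # larger models transcribe more accurately
        "--output_format", "srt",  # subtitles double as a text transcript
        "--output_dir", out_dir,
    ]

if __name__ == "__main__":
    print(" ".join(whisperx_cmd("tape01.mp4", "out/")))
```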
<p>If I want to share these videos and scenes online, I’ll also need to pull some images out of them. Browsing text isn’t nearly as compelling as seeing a preview of the video itself. I write a script with ffmpeg to pull a frame from ten seconds into the video and include it in the processed output folder as a poster image. But when I test it, I realize an arbitrarily chosen time doesn’t work well; the frame it grabs rarely represents the video. So I allow the script to accept a flag with a timestamp. It’s fun to find the best moment in a video to represent it.</p>
<p>A single image isn’t good enough though, because there are many scenes. I decide to make a sprite image, which takes tiny thumbnails of frames at regular intervals and stitches them together into a single image. Then when sharing a scene, the website can display just the portion of the sprite closest to that scene’s timestamp.</p>
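<p>Both image steps are single ffmpeg calls. The sketch below shows the shape of the two commands; the thumbnail width, grid size, and default poster timestamp are illustrative values, not my exact settings:</p>

```python
# Sketch of the poster and sprite steps as ffmpeg invocations.
# Paths and dimensions are illustrative.
def poster_cmd(src: str, out: str, ts: float = 10.0) -> list[str]:
    """Grab a single frame at `ts` seconds as a poster image."""
    return ["ffmpeg", "-ss", str(ts), "-i", src,
            "-frames:v", "1", out]

def sprite_cmd(src: str, out: str, interval: int = 10,
               cols: int = 10, rows: int = 10) -> list[str]:
    """Tile one small thumbnail every `interval` seconds into a single
    sprite sheet, using ffmpeg's fps, scale, and tile filters."""
    vf = f"fps=1/{interval},scale=160:-1,tile={cols}x{rows}"
    return ["ffmpeg", "-i", src, "-vf", vf, "-frames:v", "1", out]

if __name__ == "__main__":
    print(" ".join(poster_cmd("tape01.mp4", "poster.jpg", ts=73.5)))
    print(" ".join(sprite_cmd("tape01.mp4", "sprite.jpg")))
```

<p>The website then crops the sprite client-side: given a scene timestamp, it picks the tile whose capture interval is closest and shows only that region.</p>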
<p>We still have a 26GB raw .dv file. The first video processing step is to create an archive MP4. Everything further down the pipeline works from this file instead of the huge raw capture. It takes 6-7 minutes to convert the entire file using ffmpeg.</p>
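<p>The conversion itself is one ffmpeg transcode. A sketch is below; deinterlacing with <code>yadif</code> and the H.264 quality settings are assumptions about reasonable archive parameters, not my exact ones:</p>

```python
# Sketch of the archive-MP4 step: transcode the raw interlaced DV
# capture to H.264/AAC. Filter and quality settings are assumptions.
def archive_cmd(raw_dv: str, out_mp4: str) -> list[str]:
    return ["ffmpeg", "-i", raw_dv,
            "-vf", "yadif",            # deinterlace the DV footage
            "-c:v", "libx264",
            "-crf", "18",              # near-lossless quality
            "-preset", "slow",         # trade encode time for size
            "-c:a", "aac", "-b:a", "192k",
            out_mp4]

if __name__ == "__main__":
    print(" ".join(archive_cmd("tape01.dv", "tape01.mp4")))
```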
<p>I still need to decide how to share these. I could have dropped them on YouTube, but I don’t like the idea of Google having all my family’s memories. Once I began to create the metadata I already knew what I wanted to do: create my own streaming platform for our family’s memories.</p>
<p>I don’t know anything about streaming video, but ChatGPT does. It explains HTTP Live Streaming (HLS) and how to convert the archive MP4 to be streamable. Basically, it chops the video up into hundreds of very short segments and creates a playlist of them. Your browser reads the playlist and plays each segment in order. This means you don’t need to download the entire video before you begin watching, you can seek around without much loading time, and you can switch video quality depending on how fast your connection is.</p>
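<p>Segmenting for HLS can often copy the existing streams without re-encoding. A sketch of that step, assuming the archive MP4’s codecs are HLS-compatible; the segment length and filenames are illustrative:</p>

```python
# Sketch of the HLS step: split an MP4 into short MPEG-TS segments
# plus an .m3u8 playlist. Values here are illustrative.
def hls_cmd(archive_mp4: str, out_dir: str) -> list[str]:
    return ["ffmpeg", "-i", archive_mp4,
            "-c", "copy",                    # reuse the archive encode
            "-hls_time", "6",                # aim for ~6-second segments
            "-hls_playlist_type", "vod",     # full playlist, not live
            "-hls_segment_filename", f"{out_dir}/seg_%05d.ts",
            f"{out_dir}/index.m3u8"]

if __name__ == "__main__":
    print(" ".join(hls_cmd("tape01.mp4", "hls")))
```

<p>Quality switching needs more than this: a real ladder re-encodes at several bitrates and lists the renditions in a master playlist, rather than stream-copying a single rendition.</p>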
<p>This pipeline is getting long! I decide to bundle it all into a single master script. Once the raw file is done being captured, I watch it to record timestamps and other metadata, finalize the metadata YAML with AI prompting, and then run the script. It creates the archive MP4, the transcript, poster image, sprite image, HLS videos, and the metadata JSON. This is everything I need for streaming. You can view the details of the script here:</p>
<p>{{note:patterkin-script}}</p>
<p>But I run into a problem: storage. Not only do I not have enough of it, but moving these large files to my NAS is painfully slow, sometimes taking half an hour per file. I don’t have any choice either: my SSD only has about 100GB free, enough for only 3-4 raw videos, and I can only process videos from the SSD, so I need to move everything off as soon as it’s processed.</p>
<p>I log into my NAS box and notice the CPU is running hot. This is an old machine I got at an electronics refurbishing store. It’s worked well for many years, but it only has a 1Gbps network adapter and the CPU is struggling to handle moving files this size.</p>
<p>I know I need to upgrade my NAS soon - a new machine entirely. Fortunately, I have a much newer PC that was my daily driver for many years. It has 2.5Gbps networking and a beefy CPU. I decide to buy a 2.5Gbps network adapter for the old machine, solely to make transferring files between them faster.</p>
<p>Once everything arrives, I buy two 10 TB drives and put them into my new NAS and install TrueNAS. Everything works, and I go to test moving files from my main PC. Uh oh, it’s the same speed as the old NAS! Was this all a waste?</p>
<p>I talk with ChatGPT and we run some network diagnostics. It turns out that my router was throttling my local network traffic. I buy a simple network switch, and this solves the issue. Where the old NAS took 20 minutes to transfer a file, the new NAS takes less than one minute.</p>
<p>Now local storage is solved, but online storage is still an open question. I eventually settle on using Cloudflare’s R2 object storage combined with authentication from Supabase. This allows me to ensure that only authenticated users can access the videos, and it’s fairly cheap to host a few terabytes of video that aren’t being frequently accessed.</p>
<p>I’ve never used Cloudflare workers before, and the documentation isn’t the best, but eventually I have it working and build the UI to allow my siblings to log in and view the videos.</p>
<p>I run the script on a raw video, check that it all works, then run the upload script, and boom! Our childhood videos are now streamable.</p>
<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2026-02-08-patterkin/image/2026-02-07-23-45.png" alt="Screenshot"></p>
<p>Something is missing though. Yes, it’s awesome to browse the videos this way, seeing different random scenes of your childhood. But there’s no story unfolding there. I decide to build a timeline which is personalized for each sibling - since each video has metadata about who is in the clip, and when the clip was taken, you can watch yourself grow by just scrolling your own timeline.</p>
<p>Of course, having the ability to react and comment on videos was next - too many good memories to not let us express ourselves when watching them. With Supabase logins this wasn’t too hard to implement.</p>
<p>There’s lots of work in getting the video search, filtering, caching, and so on right. I keep watching videos, making metadata, and uploading, then finding improvements to the UI. It’s becoming a bit tedious, but sometimes I find little gems in the videos themselves.</p>
<p>Wait, did my Grandmama just say she worked for the FBI?!?! And lived with a prostitute?!?!?</p>
<p>In September 1986, my grandparents were telling stories of their youth to my parents. My papa was just describing how he was involved in the 1952 <a href="https://www.history.navy.mil/about-us/leadership/director/directors-corner/h-grams/h-gram-071/h-071-1.html">loss of the USS Hobson</a>, a real tragedy. Then my grandmother talks about working in Washington, DC in the late 1940s.</p>
<p>{{youtube:<a href="https://youtu.be/nRHjxTTxoEU">https://youtu.be/tpdWersuhjQ</a>}}</p>
<p>She always was hilarious. I don’t doubt that she did disconnect half of Washington in those two weeks as a switchboard operator.</p>
<p>Time to publish this for real: My wife created some Christmas cards with usernames and passwords for my siblings and their spouses. On Christmas day, we spent the morning with my youngest brother and his wife, and I showed them the site. He cast it from his laptop to the TV, and we had a great time scrolling through videos and sharing memories. My wife and kids had never even seen me as a child on video before.</p>
<p>The site is now live for my siblings, and I’m considering making the code open source for other people to use themselves. It’s quite a bit of work to set it all up, but it should be fairly easy and cheap to maintain as a family archive for many years to come. It cost me $2.75 for storage in January. Once all the videos are uploaded, it might be $5 a month.</p>
<p>There are many small details about getting this to work that I’ve left out, because this post was already getting far too long - if you’re interested in tackling something like this yourself, feel free to reach out; I’m happy to share my experience. My email is on the About page.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Sammy's Sensation: Pausing to See If a New Pattern Has a Name]]></title>
        <id>https://sampatt.com/blog/sammys-sensation</id>
        <link href="https://sampatt.com/blog/sammys-sensation"/>
        <updated>2025-05-23T12:41:18.000Z</updated>
        <summary type="html"><![CDATA[How I coined ‘Sammy’s Sensation’—the sudden urge to wonder if a newly noticed pattern already has a name.]]></summary>
        <content type="html"><![CDATA[<p><img src="https://cdn.jsdelivr.net/gh/sampatt/media@main/posts/2025-05-23-sammys-sensation/image/2025-05-23-12-46.png" alt="Silhouette of a boy pausing in thought with a question mark overhead"></p>
<h1>Definition</h1>
<p><strong>Sammy’s Sensation</strong>: the sudden urge to pause and ask whether there’s already an established name for a pattern you’ve just noticed.</p>
<h1>Summary</h1>
<p>My 12-year-old asked whether a discovery he’d made already had a name, and learned that it did. I coined ‘Sammy’s Sensation’ to honor the meta-inquiry, and to see whether a blog post alone can seed a term into AI models.</p>
<h1>Story</h1>
<h2>The Question</h2>
<p>My son and I love table tennis, and last night was no exception - we played at our local club for hours. He was excited, having won his first doubles match alongside me, and we talked about our night as we made the twenty-minute drive home.</p>
<p>I could hear the surprise in his voice as we approached our village,</p>
<p>“We’re already home? That felt quick!”</p>
<p>I nodded, not paying much attention to the comment - I was mentally playing out my night-time plan as I pulled into the driveway, a shower at the top of my list.</p>
<p>Sammy paused, then spoke again in a pensive tone.</p>
<p>“You ever notice that it feels like it takes a lot longer to go somewhere than it does to come back? Is there a name for that?”</p>
<p>Wow. Good question! I thought about it for a moment. I’ve definitely noticed that before - whenever I hit the gym, the ride back seems to go more quickly.</p>
<p>“You’re right bud, I have noticed that! I don’t know if that’s a thing people have named or not. I can look it up.”</p>
<h2>The Answer</h2>
<p>We got out of the car, I put away my gym bag, and then pulled out my phone to ask ChatGPT.</p>
<blockquote>
<p>My son noted that driving on the way home from table tennis it felt like it was much quicker than driving out there. He asked if there was a name for this effect. I told him I don’t know and that I would try to find out. Is this a named phenomenon?</p>
</blockquote>
<p>ChatGPT’s answer:</p>
<blockquote>
<p>Yes, this is a recognized psychological phenomenon often referred to as the <strong>“return trip effect.”</strong></p>
<h3>What is the Return Trip Effect?</h3>
<p>The <strong>return trip effect</strong> describes the common experience that the journey home feels shorter than the journey to a destination, even if the distance and actual travel time are the same. This feeling is well-documented in psychological studies.</p>
<p>…</p>
</blockquote>
<p><em>I later did a bit more research - it’s an interesting subject. I made a note summarizing a paper investigating it.</em></p>
<p>{{note:the-return-trip-effect-why-the-return-trip-often-seems-to-take-less-time}}</p>
<h2>Disappointment</h2>
<p>I read the response to Sammy. He wasn’t surprised, but I thought I detected a slight disappointment that other people had already noticed this effect. He made his way up the stairs to bed, and I considered his question and response.</p>
<p>I understood his disappointment. There’s something appealing about the idea that you’re the first one to notice something unique about the world. Even more to have it named after you. Neologisms are appealing to the lexognoscenti.</p>
<p>That’s rare for a reason - for all the variation in humanity, we’re still observing the same reality (for the most part). The later you are to the party, the fewer snacks are left on the counter - and crumbs aren’t all that interesting to anyone.</p>
<h2>The Impulse</h2>
<p>As my son matures, I’ve enjoyed seeing him notice more of these patterns on his own. Part of being a parent is explicitly making our children aware of the many patterns in the world familiar to us, especially the ones to embrace or avoid. One of my favorite aspects of being a parent is seeing them begin to discover these themselves.</p>
<p>It’s one thing to notice something and ask if others notice it too, but a different thing to wonder if this has been named and studied. There’s an implicit recognition that there’s a much broader group of minds out there, and a curiosity about whether they’ve had similar experiences.</p>
<p>Does that impulse have a name?</p>
<h2>Sammy’s Sensation</h2>
<p>According to ChatGPT:</p>
<blockquote>
<p>There’s no single “official” psychological or linguistic term that covers <em>exactly</em> “the act of asking whether something is already named…</p>
</blockquote>
<p>That’s good enough for me. It needs an alliterative name, so today I sat down and came up with this:</p>
<blockquote>
<p><strong>Sammy’s Sensation</strong>: the sudden urge to pause and ask whether there’s already an established name for a pattern you’ve just noticed.</p>
</blockquote>
<p>There you go, bud. Stay curious.</p>
<h1>Experiment</h1>
<p>The idea that people notice patterns in the world and are curious if they have a name is hardly new - but thus far I haven’t seen anyone coin a term for it, and <code>Sammy's Sensation</code> is a completely new combination of words to the internet (apart from a <a href="https://www.instagram.com/sammys.sensations/">home bakery</a> with the plural name).</p>
<p>This gives me the opportunity to see if this type of meta-naming will be picked up by LLMs. If so, how long it takes. Every few months I’ll check back in to see.</p>
<p>I’m curious if a single blog post is sufficient. I imagine not, but I do want to know where the threshold lies.</p>
<p>My post about o3 and GeoGuessr was being cited by ChatGPT in further queries on the subject less than a week later. But that post made it to the top of HN, which ChatGPT clearly searches.</p>
<p>Will it surface less popular content? Will it cite new concepts created by random bloggers out of whole cloth alongside well-established research? I’m curious to see the outcome - I’ll share what I find here.</p>
<h2>Share</h2>
<p>Have you experienced Sammy’s Sensation yourself? I’m curious to hear examples where you notice something unique, and feel an impulse to know if this is “a thing” or not.</p>
]]></content>
        <author>
            <name>Sam Patterson</name>
            <uri>https://sampatt.com</uri>
        </author>
    </entry>
</feed>