<?xml version="1.0" encoding="utf-8"?>



<feed xmlns="http://www.w3.org/2005/Atom"
    xmlns:fh="http://purl.org/syndication/history/1.0"
    xmlns:at="http://purl.org/atompub/tombstones/1.0">

    <title>Publ: Development Blog</title>
    <subtitle>A personal publishing system for the modern web</subtitle>
    <link href="http://publ.beesbuzz.biz/blog/feed?tag=caching" rel="self" />
    <link href="http://publ.beesbuzz.biz/blog/feed" rel="current" />
    <link href="https://busybee.superfeedr.com" rel="hub" />
    
    
    <link href="http://publ.beesbuzz.biz/blog/" />
    <fh:archive />
    <id>tag:publ.beesbuzz.biz,2020-01-07:blog</id>
    <updated>2020-02-05T13:23:28-08:00</updated>

    
    <entry>
        <title>Caching stats update</title>
        <link href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update" rel="alternate" type="text/html" />
        <published>2020-02-05T13:23:28-08:00</published>
        <updated>2020-02-05T13:23:28-08:00</updated>
        <id>urn:uuid:39ff7921-798e-5013-a96e-7bfdb8ccf119</id>
        <author><name>fluffy</name></author>
        <content type="html">
<![CDATA[
<p>A few weeks ago I discovered that <a href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes">caching wasn&rsquo;t actually being used most of the time</a>, and took some stats snapshots for future comparison.</p><p>Now that Publ has been running with correct caching for a while, let&rsquo;s see how things have changed!</p>

<h2 id="287_h2_1_Caveats"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h2_1_Caveats"></a>Caveats</h2><p>These stats are based on overall site usage, so they include both manual browsing and search crawlers, feed readers, and the like. Simply looking at the cache statistics doesn&rsquo;t paint a very clear picture of the actual performance improvements; in the stats, 10 users being able to quickly load a fresh blog entry will be far overshadowed by a single search engine spidering the entire website and thrashing the cache, but those 10 users are, to me, far more important.</p><h2 id="287_h2_2_Measurements"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h2_2_Measurements"></a>Measurements</h2><h3 id="287_h3_3_Throughput"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h3_3_Throughput"></a>Throughput</h3><p>Here&rsquo;s a measurement of how much traffic the cache actually sees:</p><p><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/4d/c3a4/memcached_bytes-week-20191231_dd3a3e801a_320x180.png" width="320" height="180" srcset="http://publ.beesbuzz.biz/static/_img/4d/c3a4/memcached_bytes-week-20191231_dd3a3e801a_320x180.png 1x, http://publ.beesbuzz.biz/static/_img/4d/c3a4/memcached_bytes-week-20191231_dd3a3e801a_640x360.png 2x" loading="lazy" alt="memcached_bytes-week-20191231.png" title="Cache throughput, December 31 2019"></a><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/d5/ab7a/memcached_bytes-week-20200205_d7b44703ac_320x180.png" width="320" height="180" srcset="http://publ.beesbuzz.biz/static/_img/d5/ab7a/memcached_bytes-week-20200205_d7b44703ac_320x180.png 1x, http://publ.beesbuzz.biz/static/_img/d5/ab7a/memcached_bytes-week-20200205_d7b44703ac_640x360.png 2x" loading="lazy" alt="memcached_bytes-week-20200205.png" title="Cache throughput, February 5 
2020"></a></p><p>The first graph shows that before I fixed the caching, very little was being written to the cache, but the amount being read from it was pretty steady. As soon as the fix was made and the cache was being written to, amazingly enough it started actually receiving traffic. In the initial spike of activity, the read and write rates were about the same, which seems plausible for a cache that&rsquo;s being filled in with a relatively low hit rate. There&rsquo;s a steady read rate of around 40K/second and a steady write rate of around 8K/sec &ndash; most of that being internal routines that were being written to the cache, uselessly.</p><p>The second graph (post-fix) shows a cache that&rsquo;s actually being actively used. There&rsquo;s an average write rate of 12K/sec, and a read rate of 17K/sec. There are also several write spikes at around 25K/sec, which I suspect are due to search crawler traffic.</p><h3 id="287_h3_4_Allocation"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h3_4_Allocation"></a>Allocation</h3><p>This is where things get a bit more useful to look at &ndash; how much stuff is actively being held in the cache?</p><p><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/50/5b4f/memcached_counters-week-20191231_98eea433c8_320x189.png" width="320" height="189" srcset="http://publ.beesbuzz.biz/static/_img/50/5b4f/memcached_counters-week-20191231_98eea433c8_320x189.png 1x, http://publ.beesbuzz.biz/static/_img/50/5b4f/memcached_counters-week-20191231_98eea433c8_640x377.png 2x" loading="lazy" alt="memcached_counters-week-20191231.png" title="Memory allocation, December 31 2019"></a><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/8a/414e/memcached_counters-week-20200205_6c17cef995_320x189.png" width="320" height="189" 
srcset="http://publ.beesbuzz.biz/static/_img/8a/414e/memcached_counters-week-20200205_6c17cef995_320x189.png 1x, http://publ.beesbuzz.biz/static/_img/8a/414e/memcached_counters-week-20200205_6c17cef995_640x377.png 2x" loading="lazy" alt="memcached_counters-week-20200205.png" title="Memory allocation, February 5 2020"></a></p><p>Before the cache fix, the answer to that was, &ldquo;Not much.&rdquo; The cache was averaging a size of a mere 868KB, and after I flipped the caching fix over, it jumped up considerably. During my testing of the fix, the size would spike up substantially and then drop down as cache items got evicted.</p><p>After the cache fix, the allocation went way up. It never went below 2MB, and during the write spikes it would jump up to 7MB or so. This is still far short of the 64MB I have allocated for the cache process.</p><h3 id="287_h3_5_Commands-results"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h3_5_Commands-results"></a>Commands/results</h3><p>Here&rsquo;s what is actually happening in terms of the cache hits and misses:</p><p><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/3e/0b68/memcached_rates-week-20191231_11ae089b24_320x202.png" width="320" height="202" srcset="http://publ.beesbuzz.biz/static/_img/3e/0b68/memcached_rates-week-20191231_11ae089b24_320x202.png 1x, http://publ.beesbuzz.biz/static/_img/3e/0b68/memcached_rates-week-20191231_11ae089b24_640x403.png 2x" loading="lazy" alt="memcached_rates-week-20191231.png" title="Commands, December 31 2019"></a><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/aa/32c5/memcached_rates-week-20200205_afb89652ea_320x202.png" width="320" height="202" srcset="http://publ.beesbuzz.biz/static/_img/aa/32c5/memcached_rates-week-20200205_afb89652ea_320x202.png 1x, 
http://publ.beesbuzz.biz/static/_img/aa/32c5/memcached_rates-week-20200205_afb89652ea_640x403.png 2x" loading="lazy" alt="memcached_rates-week-20200205.png" title="Commands, February 5 2020"></a></p><p>Before, the graph shows an average of 44 hits per second, and 0.63 misses per second. The GET and SET rates are (unsurprisingly) more or less the same.</p><p>After, we see much more interesting patterns &ndash; and not in a good way. It&rsquo;s averaging only 13 hits per second, and 0.8 misses per second, but that&rsquo;s an average. Eyeballing the graph it looks like the miss rate spikes at about the same time as the incoming traffic spikes, and outside of those spikes the hit rate is around 13 and the miss rate is&hellip; too small to reasonably estimate.</p><h3 id="287_h3_6_Page-load-time"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h3_6_Page-load-time"></a>Page load time</h3><p>When I made the change I also started monitoring the load time of a handful of URLs, which is <em>interesting</em>:</p><p><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/e1/2ff5/http_loadtime-week-20200205_2696cef180_320x197.png" width="320" height="197" srcset="http://publ.beesbuzz.biz/static/_img/e1/2ff5/http_loadtime-week-20200205_2696cef180_320x197.png 1x, http://publ.beesbuzz.biz/static/_img/e1/2ff5/http_loadtime-week-20200205_2696cef180_640x395.png 2x" loading="lazy" alt="http_loadtime-week-20200205.png" title="page load time"></a></p><p>What&rsquo;s interesting about these graphs is that Munin loads those URLs once every 5 minutes &ndash; which happens to be the cache timeout, and so that does a lot to explain the rather chaotic nature of the load time graph, especially on the Atom feed (minimum of 113ms, maximum of 45 seconds, average of 12 seconds). The Atom feed is probably the most loadtime-intense page on my entire website, and would most strongly benefit from caching. 
This graph tells me that based on the average vs. max times, the Atom feed is getting a hit rate of around 25%. That isn&rsquo;t great.</p><h2 id="287_h2_7_Conclusions"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h2_7_Conclusions"></a>Conclusions</h2><p>Aggregate memcached stats aren&rsquo;t really that useful for determining cache performance at this scale.</p><p>More to the point, the cache <em>as currently configured</em> probably isn&rsquo;t really making much of a difference. Items are falling out of the cache before they&rsquo;re really being reused.</p><h2 id="287_h2_8_Next-steps"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h2_8_Next-steps"></a>Next steps</h2><p>It&rsquo;s worth noting that the default memcached expiry time is 5 minutes (which also happens to be how I had my sites configured), which feels like a good tradeoff between content staleness and performance optimization. However, Publ <a href="https://github.com/PlaidWeb/Publ/commit/6ae4ae5731da46027ced9f0ea381dad66e3584a4#diff-650397549bec3d65892e233d5bd328f6R113">soft-expires all cached items</a> when there&rsquo;s a content change, so the only things that should linger with a longer expiry time are things like the &ldquo;5 minutes ago&rdquo; human-readable times on entries, which really don&rsquo;t matter if they&rsquo;re outdated.</p><p>As an experiment I will try increasing the cache timeout to an hour on all of my sites and see what effect that has. My hypothesis is that the allocation size and hit rate will both go up substantially, and the average page load time will go <em>way</em> down, with (much smaller) hourly spikes and otherwise a very fast page load (except for when I&rsquo;m making content changes, of course).</p><p>I&rsquo;m also tempted to try setting the default expiry to 0 &ndash; as in, never expire, only evict &ndash; and see what effect that has on performance. 
I probably won&rsquo;t, though &ndash; it would have an odd effect on the display of humanized time intervals and make that way too nondeterministic for my taste.</p><h2 id="287_h2_9_Update-Initial-results"><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update#287_h2_9_Update-Initial-results"></a><mark>Update:</mark> Initial results</h2><p>Even after just a few hours it becomes <em>pretty obvious</em> what effect this change had:</p><p><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/86/7de4/apache_processes-pinpoint=1580849415-1580957415_c6b236631a_320x193.png" width="320" height="193" srcset="http://publ.beesbuzz.biz/static/_img/86/7de4/apache_processes-pinpoint=1580849415-1580957415_c6b236631a_320x193.png 1x, http://publ.beesbuzz.biz/static/_img/86/7de4/apache_processes-pinpoint=1580849415-1580957415_c6b236631a_640x386.png 2x" loading="lazy" alt="apache_processes-pinpoint=1580849415,1580957415.png" title="Apache process counts"></a><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/07/92c4/http_loadtime-pinpoint=1580849415-1580957415_9500d1701e_320x197.png" width="320" height="197" srcset="http://publ.beesbuzz.biz/static/_img/07/92c4/http_loadtime-pinpoint=1580849415-1580957415_9500d1701e_320x197.png 1x, http://publ.beesbuzz.biz/static/_img/07/92c4/http_loadtime-pinpoint=1580849415-1580957415_9500d1701e_640x395.png 2x" loading="lazy" alt="http_loadtime-pinpoint=1580849415,1580957415.png" title="Page load time"></a><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/41/ada6/memcached_bytes-pinpoint=1580849415-1580957415_82bcce1c79_320x180.png" width="320" height="180" srcset="http://publ.beesbuzz.biz/static/_img/41/ada6/memcached_bytes-pinpoint=1580849415-1580957415_82bcce1c79_320x180.png 1x, 
http://publ.beesbuzz.biz/static/_img/41/ada6/memcached_bytes-pinpoint=1580849415-1580957415_82bcce1c79_640x360.png 2x" loading="lazy" alt="memcached_bytes-pinpoint=1580849415,1580957415.png" title="memcached throughput"></a></p><p>The actual effect is a bit surprising, though; I would have expected the quiescent RAM allocation to be closer to the peak, and for the incoming (<code>SET</code>) traffic to be spikier after that as well. I wonder if improved site performance caused a malfunctioning spider to stop hammering my site quite so much, or something. I do know there are a bunch of spiders that have historically been pretty aggressive.</p><p>Of course the most important metric &ndash; page load time &ndash; has ended up <em>exactly</em> as I expected, with it dropping to an average of 2ms for everything and it only being that high because of hourly spikes. I guess the fact that Munin is still seeing the spikes means that Munin is keeping my cache warm (for a handful of pages), so, thanks Munin!</p><p><a href="http://publ.beesbuzz.biz/blog/287-Caching-stats-update"><img src="http://publ.beesbuzz.biz/static/_img/38/811e/http_loadtime-pinpoint=1580929669-1580958694_5cb55b7b33_320x197.png" width="320" height="197" srcset="http://publ.beesbuzz.biz/static/_img/38/811e/http_loadtime-pinpoint=1580929669-1580958694_5cb55b7b33_320x197.png 1x, http://publ.beesbuzz.biz/static/_img/38/811e/http_loadtime-pinpoint=1580929669-1580958694_5cb55b7b33_640x395.png 2x" loading="lazy" alt="http_loadtime-pinpoint=1580929669,1580958694.png" title="Munin keeping the cache warm"></a></p><p>Maybe I should set the cache expiration to a prime number so that it is less likely to be touched on an exact 5-minute interval.</p>
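<p>For what it's worth, picking a near-prime expiry is trivial. Here's a quick sketch (my own throwaway helper, nothing that exists in Publ) that finds the first prime at or above a target number of seconds:</p>

```python
def is_prime(n: int) -> bool:
    # Trial division is plenty fast for numbers in this range.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def prime_expiry(seconds: int) -> int:
    # Return the first prime at or above the requested expiry, so the
    # cache timeout can't phase-lock with pollers on round intervals.
    n = seconds
    while not is_prime(n):
        n += 1
    return n
```

<p>With this, an hour-ish expiry becomes <code>prime_expiry(3600)</code>, which is 3607 seconds, and the 5-minute case becomes <code>prime_expiry(300)</code>, which is 307.</p>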

]]>
        </content>
    </entry>
    
    <entry>
        <title>v0.5.12 released, and lots of documentation fixes</title>
        <link href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes" rel="alternate" type="text/html" />
        <published>2019-12-31T00:02:13-08:00</published>
        <updated>2019-12-31T00:02:13-08:00</updated>
        <id>urn:uuid:2a1ac309-6a63-58a3-af88-c284086ec640</id>
        <author><name>fluffy</name></author>
        <content type="html">
<![CDATA[
<h2 id="304_h2_1_Release-notes"><a href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes#304_h2_1_Release-notes"></a>Release notes</h2><p>Today I got a fire lit under me and decided to do a bunch of bug fixing and general performance improvements.</p><p>Changes since v0.5.11:</p>
<ul>
<li>Fixed a micro-optimization which was causing some pretty bad cache problems (I really should write a blog entry about this but tl;dr micro-optimizations are usually bugs in disguise)</li>
<li>Fixed an issue which was causing the page render cache to not actually activate most of the time (you <em>know</em> there&rsquo;s going to be a ramble about this below&hellip;)</li>
<li>Fixed a bunch of spurious log messages about nested transactions</li>
<li>Refactored the way that <code>markup=False</code> works, making it available from all Markdown/HTML contexts</li>
<li>Changed <code>no_smartquotes=True</code> to <code>smartquotes=False</code> (<code>no_smartquotes</code> is retained for template compatibility) (although I missed this on <code>entry.title</code>; I&rsquo;ve already <a href="https://github.com/PlaidWeb/Publ/commit/004fb47a3c53830081579e6ae5c1133f1ca2581e">committed a fix</a> for the next version)</li>
<li>Improved the way that the page render cache interacts with templates</li>
<li>Fixed an issue where changing a template could cause problems until the cache expired</li>
</ul>
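<p>For the curious, the usual pattern for a rename like the <code>smartquotes</code> one looks something like this; the function name and the quote handling here are an illustrative sketch, not Publ's actual implementation:</p>

```python
def render(text, smartquotes=True, no_smartquotes=None):
    # Honor the deprecated no_smartquotes flag when an old template
    # still passes it; otherwise the new smartquotes flag applies.
    if no_smartquotes is not None:
        smartquotes = not no_smartquotes
    if smartquotes:
        # Stand-in for the real smart-quote processing.
        text = text.replace("'", "\u2019")
    return text
```

<p>Old templates that pass <code>no_smartquotes=True</code> keep working, while new ones can say <code>smartquotes=False</code> and get the same behavior.</p>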
<h2 id="304_h2_2_Documentation-improvements"><a href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes#304_h2_2_Documentation-improvements"></a>Documentation improvements</h2>
<ul>
<li>The <a href="http://publ.beesbuzz.biz/manual/deploying/1278-Self-hosting-Publ">Apache/nginx deployment guide</a> is vastly improved:

<ul>
<li>Now it uses UNIX domain sockets instead of localhost ports, making service provisioning a bit easier</li>
<li>The systemd unit is now a user unit instead of a system unit, which improves security and also allows for gentler service restarts</li>
</ul></li>
<li>The <a href="http://publ.beesbuzz.biz/manual/deploying/441-Continuous-deployment-with-git">git deployment guide</a> has been updated per the above, and also some of the code snippets are cleaned up</li>
<li>The information about <a href="http://publ.beesbuzz.biz/html-processing">HTML processing</a> and <a href="http://publ.beesbuzz.biz/image-renditions">image renditions</a> has been consolidated and cleaned up</li>
<li>Information about <a href="http://publ.beesbuzz.biz/manual/706-User-authentication">private posts</a> and <a href="http://publ.beesbuzz.biz/manual/formats/1341-User-configuration-file">user configuration</a> has also been cleaned up somewhat</li>
<li>Also lots of updates to the <a href="https://github.com/PlaidWeb/Publ-templates-beesbuzz.biz/">beesbuzz.biz Publ templates</a></li>
</ul>
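<p>For reference, a user-level unit along these lines is roughly what the deployment guide describes; all of the paths and names below are placeholders, so adjust them for your own install:</p>

```ini
# ~/.config/systemd/user/publ.service (illustrative; adjust paths/names)
[Unit]
Description=Publ site

[Service]
# Bind gunicorn to a UNIX domain socket instead of a localhost port;
# %t expands to the user's runtime directory (e.g. /run/user/1000).
ExecStart=/home/user/publ/.venv/bin/gunicorn --bind unix:%t/publ.sock app:app
WorkingDirectory=/home/user/publ
Restart=always

[Install]
WantedBy=default.target
```

<p>Enable it with <code>systemctl --user enable --now publ</code>, point Apache/nginx's proxy at the socket, and use <code>loginctl enable-linger</code> so the unit keeps running when you log out.</p>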


<h2 id="304_h2_3_The-caching-stuff"><a href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes#304_h2_3_The-caching-stuff"></a>The caching stuff</h2><p>So, once upon a time, the page render cache was caching at the response level, rather than the render level, which seemed like a good idea at the time. But then I realized this was bad: if the request came from a browser that could potentially receive a <a href="https://httpstatuses.com/304">not modified response</a>, caching the full response would break things badly. So, in that situation it just turned the render cache off.</p><p>This of course had the silly side effect of making the rendition cache not active in precisely the situation when it should most be active!</p><p>Later I had refactored the rendition cache to cache at the render level, with the request routing and response (which are cheap) always evaluated and only the page render itself cached. But I forgot to remove the check above.</p><p>So, all this time, the caching system was only being used for caching&hellip; stuff that didn&rsquo;t really benefit from being cached. Like low-level file lookups, which aren&rsquo;t exactly a performance hog (and could lead to rather unfortunate issues with template locations being out-of-date until cache expiry took place).</p><p>Anyway, after getting the cache to actually work, I also realized there were a few things I could do to make stale cached renditions no longer linger. 
Previously, the cache key that&rsquo;s generated for a rendition just involved (essentially) the file paths of the relevant items in the URL; category templates would know about the template&rsquo;s file path and the category path, and entry templates would additionally know about the entry ID, and then at a global level it would also know about the request&rsquo;s base URL (so it would cache different hostnames and schemes differently, which also had the nice side-effect of eliminating key conflicts if two sites were configured with the same memcached key prefix but I digress).</p><p>Well, first I realized it was pretty trivial to have entries and templates express their file fingerprint as part of their cache key, so changes to templates and entries would cause immediate cache misses &ndash; meaning instant updates on the next page load. But this would only apply to content updates on entry pages, not on category pages.</p><p>So I started to go down a rabbit hole where updates to entries would also update the cache key for the category itself, which caused indexing to take a lot more time and also required storing metadata about <em>all</em> categories (and not just ones with configuration metadata) in the database, and this had a few other annoying side-effects (meaning bugs) that had to be ironed out. And it still wouldn&rsquo;t help to update category pages which change due to an update to an entry in a different category.</p><p>Then I realized that the easiest thing to do would be to have the latest file modification be part of the cache key; any content file update would then basically invalidate the entire page render cache. Given that most sites only update very infrequently this seemed like a nice tradeoff. So I started implementing that&hellip;</p><p>&hellip;and then realized that in the early days of me adding caching to Publ, <em>I had already implemented that</em> since I thought it would be useful, and it was just not being used at all! 
(And I had even touched this code when I was adding mypy annotations to everything, but didn&rsquo;t even think about it&hellip;)</p><p>So, now a bit of functionality that&rsquo;s been there all along theoretically makes the rendition cache a lot faster, even around site resets. Neat.</p><p>In any case, after all this work I decided to do some benchmarking. I used <a href="https://github.com/Gabriel439/bench"><code>bench</code></a> to time rendering the Publ tests index page, and the results were interesting:</p>
<ul>
<li><p>No cache</p><figure class="blockcode"><pre><span class="line"><span class="line-content">time                 90.92 ms   (85.06 ms .. 97.29 ms)</span></span>
<span class="line"><span class="line-content">                     0.993 R²   (0.985 R² .. 1.000 R²)</span></span>
<span class="line"><span class="line-content">mean                 87.33 ms   (86.27 ms .. 90.70 ms)</span></span>
<span class="line"><span class="line-content">std dev              2.868 ms   (968.0 μs .. 4.970 ms)</span></span>
</pre></figure></li>
<li><p>SimpleCache (in-process object store)</p><figure class="blockcode"><pre><span class="line"><span class="line-content">time                 37.22 ms   (36.19 ms .. 38.11 ms)</span></span>
<span class="line"><span class="line-content">                     0.999 R²   (0.998 R² .. 1.000 R²)</span></span>
<span class="line"><span class="line-content">mean                 38.10 ms   (37.39 ms .. 40.58 ms)</span></span>
<span class="line"><span class="line-content">std dev              2.433 ms   (469.3 μs .. 4.620 ms)</span></span>
<span class="line"><span class="line-content">variance introduced by outliers: 19% (moderately inflated)</span></span>
</pre></figure></li>
<li><p>MemcacheD</p><figure class="blockcode"><pre><span class="line"><span class="line-content">time                 38.38 ms   (37.95 ms .. 39.06 ms)</span></span>
<span class="line"><span class="line-content">                     0.999 R²   (0.999 R² .. 1.000 R²)</span></span>
<span class="line"><span class="line-content">mean                 38.21 ms   (37.92 ms .. 38.51 ms)</span></span>
<span class="line"><span class="line-content">std dev              570.3 μs   (428.0 μs .. 762.1 μs)</span></span>
</pre></figure></li>
</ul>
<p>So, at least on that fairly simple test, the tests index page runs about 2x faster with a cache present than without. (MemcacheD is a little slower than SimpleCache, but that&rsquo;s to be expected, as it has to serialize/deserialize objects over the network. Frankly I&rsquo;m surprised it&rsquo;s only that small of a difference!)</p><p>Then I decided to benchmark the main page of <a href="https://beesbuzz.biz/">my personal website</a>, which is rather more complicated. Running locally I got these results:</p>
<ul>
<li><p>No cache</p><figure class="blockcode"><pre><span class="line"><span class="line-content">time                 280.0 ms   (274.8 ms .. 284.9 ms)</span></span>
<span class="line"><span class="line-content">                     1.000 R²   (0.999 R² .. 1.000 R²)</span></span>
<span class="line"><span class="line-content">mean                 278.0 ms   (277.1 ms .. 279.7 ms)</span></span>
<span class="line"><span class="line-content">std dev              1.548 ms   (749.5 μs .. 2.023 ms)</span></span>
<span class="line"><span class="line-content">variance introduced by outliers: 16% (moderately inflated)</span></span>
</pre></figure></li>
<li><p>SimpleCache</p><figure class="blockcode"><pre><span class="line"><span class="line-content">time                 19.32 ms   (19.19 ms .. 19.42 ms)</span></span>
<span class="line"><span class="line-content">                     1.000 R²   (1.000 R² .. 1.000 R²)</span></span>
<span class="line"><span class="line-content">mean                 19.28 ms   (19.21 ms .. 19.38 ms)</span></span>
<span class="line"><span class="line-content">std dev              201.5 μs   (138.9 μs .. 289.7 μs)</span></span>
</pre></figure></li>
<li><p>MemcacheD</p><figure class="blockcode"><pre><span class="line"><span class="line-content">time                 20.85 ms   (20.62 ms .. 21.13 ms)</span></span>
<span class="line"><span class="line-content">                     0.999 R²   (0.998 R² .. 1.000 R²)</span></span>
<span class="line"><span class="line-content">mean                 20.57 ms   (20.44 ms .. 20.74 ms)</span></span>
<span class="line"><span class="line-content">std dev              341.0 μs   (254.2 μs .. 511.7 μs)</span></span>
</pre></figure></li>
</ul>
<p>So, yeah, 14x faster&hellip; And my site feels way more responsive now, too, at least when Pushl isn&rsquo;t thrashing the cache.</p><p>Time will tell just how much of a difference this makes in practical terms; I&rsquo;ve had <a href="http://munin-monitoring.org/">munin</a> monitoring my MemcacheD for a while and the graphs made it look like it was pretty effective but it was of course not actually monitoring anything useful. But here&rsquo;s some graphs of the last week:</p><p><a href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes"><img src="http://publ.beesbuzz.biz/static/_img/4d/c3a4/memcached_bytes-week-20191231_dd3a3e801a_320x180.png" width="320" height="180" srcset="http://publ.beesbuzz.biz/static/_img/4d/c3a4/memcached_bytes-week-20191231_dd3a3e801a_320x180.png 1x, http://publ.beesbuzz.biz/static/_img/4d/c3a4/memcached_bytes-week-20191231_dd3a3e801a_640x360.png 2x" loading="lazy" alt="memcached_bytes-week-20191231.png" title="MemcacheD bytes"></a><a href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes"><img src="http://publ.beesbuzz.biz/static/_img/50/5b4f/memcached_counters-week-20191231_98eea433c8_320x189.png" width="320" height="189" srcset="http://publ.beesbuzz.biz/static/_img/50/5b4f/memcached_counters-week-20191231_98eea433c8_320x189.png 1x, http://publ.beesbuzz.biz/static/_img/50/5b4f/memcached_counters-week-20191231_98eea433c8_640x377.png 2x" loading="lazy" alt="memcached_counters-week-20191231.png" title="MemcacheD counters"></a><a href="http://publ.beesbuzz.biz/blog/304-v0.5.12-released-and-lots-of-documentation-fixes"><img src="http://publ.beesbuzz.biz/static/_img/3e/0b68/memcached_rates-week-20191231_11ae089b24_320x202.png" width="320" height="202" srcset="http://publ.beesbuzz.biz/static/_img/3e/0b68/memcached_rates-week-20191231_11ae089b24_320x202.png 1x, http://publ.beesbuzz.biz/static/_img/3e/0b68/memcached_rates-week-20191231_11ae089b24_640x403.png 2x" loading="lazy" 
alt="memcached_rates-week-20191231.png" title="MemcacheD rates"></a></p><p>In a week or so I&rsquo;ll see what they&rsquo;re like and if there&rsquo;s any difference. I&rsquo;m also just realizing that my &ldquo;HTTP load time&rdquo; graph isn&rsquo;t actually very useful so I need to configure Munin more appropriately.</p><p>I&rsquo;m also not entirely sure what those semi-regular spikes in MemcacheD traffic have been; it&rsquo;s unfortunately not easy to tell what individual things are using MemcacheD since it&rsquo;s just a big ol&#39; global key-value store, more or less.</p>
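<p>As an aside, the fingerprint-in-the-cache-key trick described under &ldquo;The caching stuff&rdquo; above boils down to something like this sketch (my own illustration, not Publ's actual code): any edit to a relevant file changes its fingerprint, which changes the key, which is an instant cache miss.</p>

```python
import hashlib
import os


def file_fingerprint(path):
    # Size + mtime is a cheap stand-in for a content hash; it changes
    # whenever the file is rewritten.
    st = os.stat(path)
    return f"{st.st_size}-{st.st_mtime_ns}"


def render_cache_key(base_url, template_path, entry_path=None):
    # Bake each relevant file's fingerprint into the key, so an edit
    # produces an immediate miss instead of waiting for cache expiry.
    parts = [base_url, template_path, file_fingerprint(template_path)]
    if entry_path is not None:
        parts += [entry_path, file_fingerprint(entry_path)]
    return hashlib.sha1("|".join(parts).encode("utf-8")).hexdigest()
```

<p>The base URL in the key is what keeps different hostnames and schemes (and different sites sharing a memcached instance) from colliding, as described above.</p>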

]]>
        </content>
    </entry>
    

    
</feed>