{
    "version": "https://jsonfeed.org/version/1",
    "title": "Jon George",
    "home_page_url": "https://jg.dev",
    "feed_url": "https://jg.dev/feed.json",
    "description": "Engineering leader & software architect. Currently building Explodex (AI ordnance identification) and SoftwareDB.",
    "icon": "https://jg.dev/512x512.png",
    "author": {
        "name": "feed@jg.dev",
        "url": "https://jg.dev"
    },
    "items": [
        {
            "id": "https://jg.dev/writing/zero-trust-in-code",
            "content_html": "<p>Zero trust is often discussed in terms of networks, devices, and infrastructure because these are usually the first line of defense against external threats. But as modern software development moves toward microservices, APIs, infrastructure-as-code, cloud architectures, and AI-driven systems, the boundaries between infrastructure and application are blurring — and the consequences of getting security wrong are getting higher, especially for teams building in regulated or mission-critical environments. Here I provide a brief introduction to zero trust and use cases for applying its concepts directly to software application code.</p><br><h3>What is zero trust?</h3><br><p><em>Zero trust</em> is a set of three principles aimed at protecting digital assets, services, and network infrastructure:</p><br><ul><br><li>Assume a breach has occurred</li><br><li>Apply least-privilege access</li><br><li>Perform explicit and continuous verification</li><br></ul><br><p>In practice this may take the form of microsegmentation, granting access to users for only the resources needed, and continuously monitoring and verifying users and devices connected to the network.</p><br><p>The concept of zero trust assumes that everything and everyone is a potential threat until the user and/or device has been authenticated and authorized to access a resource. After the initial authentication, zero trust environments continuously validate access attempts to ensure that only authenticated and authorized users, devices, and systems can interact with resources on a protected network.</p><br><p>Building on this concept, a <em>zero trust architecture</em> applies zero trust principles to a network or system. These principles align with <a href=\"https://csrc.nist.gov/publications/detail/sp/800-207/final\">NIST SP 800-207</a>, the federal standard for zero trust architecture, and are increasingly foundational to frameworks like CMMC and NIST SP 800-171 that govern how organizations handle controlled unclassified information (CUI) and other sensitive data.</p><br><h3>Software development and zero trust</h3><br><p>The concepts of zero trust are usually applied to network and infrastructure layers for safeguarding digital assets and resources. But what about the software applications that run on zero trust networks?</p><br><p>Applying zero trust to software application code provides a more holistic approach that improves security at every layer of the stack — from network access to application logic. By ensuring that our application logic applies principles of least-privilege access, continuous authentication, and continuous authorization, we can create more secure software to protect against modern attack vectors. For teams operating in regulated environments or building systems where downtime, data leakage, or unauthorized access have real-world consequences, this isn&#39;t optional — it&#39;s table stakes.</p><br><h3>The software development life cycle</h3><br><p>The <a href=\"https://www.redhat.com/en/topics/devops/what-is-devsecops\">DevSecOps approach</a> treats security as a shared responsibility across all stages of software development. But we all know that security is sometimes deprioritized when teams are under pressure to deliver quickly. When faced with a decision between security and time-to-market, it&#39;s the responsibility of the team to communicate the trade-offs effectively to stakeholders. If a product release can&#39;t be delayed in favor of security and a compromise is the next best option, queue up the technical debt for the next opportunity to release.</p><br><p>Introducing automation into the software development life cycle can also help reduce the overhead of developing and reviewing code for security. There are a host of tools available to perform <a href=\"https://owasp.org/www-community/controls/Static_Code_Analysis\">static code analysis</a> and <a href=\"https://owasp.org/www-community/Vulnerability_Scanning_Tools\">vulnerability scanning</a> that can help identify issues early on in the process.</p><br><p>Equally important — and often under-invested — is <strong>auditability</strong>. Structured logging, tamper-evident audit trails, and end-to-end traceability of sensitive actions aren&#39;t just compliance checkboxes. They&#39;re what let you answer the hard questions after an incident, demonstrate due diligence to regulators, and build the kind of operational trust that demanding customers require.</p><br><h3>Insider threats</h3><br><p>Software development teams often have access to critical systems and sensitive data. If a developer with elevated permission is compromised, the entire application and its data is at risk. Zero trust limits the potential damage by applying least privilege access to ensure that developers and systems can only access what is strictly necessary for their specific tasks. Code repositories and environments should be segmented to limit unnecessary access across teams and services. A front-end developer working on a user interface should not have access to sensitive back-end databases unless absolutely required.</p><br><p>In environments that handle CUI or regulated data, segmentation becomes even more consequential. Environments should be separated by data sensitivity, access should be provisioned per task rather than per role, and privileged actions should require explicit justification and leave a paper trail.</p><br><h3>Supply chain risks</h3><br><p>Modern software is built on the shoulders of giants; it involves using third-party libraries, frameworks, and APIs. These external dependencies can introduce security vulnerabilities that affect the integrity of the entire application or system. Zero trust requires strict monitoring of all external code and libraries that are integrated into the development process. <a href=\"https://cloud.google.com/software-supply-chain-security/docs/dependencies#dependency-tools\">Dependency management tools</a> can help you understand and evaluate the security posture of your project and its dependencies.</p><br><p>Maintaining a software bill of materials (SBOM) is increasingly expected — and in federal and defense contexts, often required. An SBOM gives you a defensible record of every component in your software, which is essential when a new CVE drops and you need to know, within hours, whether you&#39;re exposed.</p><br><h3>API security</h3><br><p>APIs are often a target for attackers because they provide a direct path to sensitive data or services. Zero trust ensures that every API call is authenticated, authorized, and encrypted — even within internal systems. Internal and external APIs should be equally secured to ensure no implicit trust exists within the system. For example, internal APIs should use token-based authentication in the same manner as external APIs.</p><br><h3>AI and ML systems</h3><br><p>AI and machine learning systems introduce security considerations that traditional application security doesn&#39;t fully address. A model is effectively new attack surface: training data can be poisoned, prompts and inputs can be weaponized to extract data or manipulate behavior, and the model&#39;s outputs are probabilistic — which means they cannot be trusted blindly the way we trust deterministic code.</p><br><p>Zero trust applied to AI systems means:</p><br><ul><br><li><strong>Treat model outputs as untrusted input.</strong> Validate, sanitize, and constrain what downstream systems accept from a model, especially when the output drives a decision or an action.</li><br><li><strong>Verify training data provenance.</strong> Know where your data came from, who touched it, and when. Apply the same supply chain rigor you apply to code dependencies.</li><br><li><strong>Continuously monitor for drift and adversarial behavior.</strong> Models degrade and can be attacked. Instrument them the way you instrument a production service.</li><br><li><strong>Audit model decisions.</strong> In high-stakes domains — medical, defense, financial — you need to be able to explain, after the fact, why a model produced a given output. Build the logging and traceability in from the start.</li><br></ul><br><p>For systems that inform operational decisions in contested or time-critical environments, these aren&#39;t theoretical concerns. They&#39;re the difference between a tool commanders can rely on and one they can&#39;t.</p><br><h3>Conclusion</h3><br><p>Zero trust provides a comprehensive, forward-thinking framework for ensuring that software remains secure in the face of modern challenges. Although the zero trust model has historically been applied to networks and infrastructure, the most interesting work now lives at the seam between infrastructure and application — and increasingly, at the seam between application and AI.</p><br><p>Applying zero trust principles to software development is essential for building secure, resilient, auditable, and compliant systems. By prioritizing least-privilege access, continuous verification, strict segmentation, and end-to-end auditability within our applications, we can bolster our cybersecurity strategy, reduce our attack surface, and build software that holds up in the environments that need it most.</p><br>",
            "url": "https://jg.dev/writing/zero-trust-in-code",
            "title": "Secure software development using zero trust",
            "summary": "Zero trust usually means networks and infrastructure. Here's what it looks like applied to application code and what it means for AI systems and regulated environments.",
            "image": "https://jg.dev/images/writing/zero-trust-in-code/og.png",
            "date_modified": "2026-04-15T00:12:12.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Security"
            ]
        },
        {
            "id": "https://jg.dev/writing/process-1-million-plus-records-in-servicenow",
            "content_html": "<p>Every ServiceNow developer eventually needs to run a scripted operation across hundreds of thousands of records. Maybe you&#39;re backfilling a computed field, reclassifying CIs, reconciling data between tables, or cleaning up records that should have been purged years ago. You write a GlideRecord loop, paste it into Background Scripts, and hit Run. Then one of three things happens. The script times out. The instance slows to a crawl. Or both.</p><br><p>This is a real problem, but the answer isn&#39;t obvious because the right approach depends on what you&#39;re actually doing to each record. A simple field update and a complex reclassification are fundamentally different operations, and the tooling that handles one will choke on the other.</p><br><p>This post is a decision framework. I&#39;ll walk through every approach I&#39;ve used, when each one works, and when it breaks down. Then I&#39;ll go deep on the pattern I reach for when nothing else fits, event-driven recursion with nested batching.</p><br><h2>Know what you&#39;re dealing with</h2><br><p>Before choosing an approach, answer two questions:</p><br><p><em>Does your operation require scripting logic per record?</em> If you&#39;re setting field A to value B across every record that matches condition C, that&#39;s a static update. If you&#39;re computing a value based on related records, calling a script include, doing cross-table lookups, or applying conditional logic that varies per record, that&#39;s a scripted operation. </p><br><p><em>How many records do you need to process?</em><br>Hundreds, thousands, hundreds of thousands, millions? The approaches that work at 5,000 records will fail at 500,000. Scale changes the physics.</p><br><p>This distinction determines which tools are automatically off the table (pun intended).</p><br><h2>The approaches, in order of complexity</h2><br><h4>Update All from list view</h4><br><p>Filter a list, right-click a column header, select Update All, set the new value. It works. It&#39;s fast. No scripting required. This is the right choice for small to moderate datasets — a few thousand records — where you&#39;re setting fields to a uniform value.</p><br><p>If you&#39;re in a pinch, you can use update all on a larger number of records, just be prepared for your session to be locked while you wait for the operation to complete. You&#39;ll also need to re-run the operation on remaining records after it times out. </p><br><p>Update All breaks down the moment you need per-record logic, and it doesn&#39;t scale well past a few thousand records because the UI transaction will time out. If your update involves removing field values, you can set string values to <code>NULL</code>, but date fields can&#39;t be cleared.</p><br><p>Use it when: you need a quick, uniform field update on a filtered set you can see in a list.</p><br><p>Skip it when: you need any logic beyond &quot;set X to Y,&quot; or you&#39;re operating on more than a few thousand records.</p><br><h4>Data Management (Update Jobs / Delete Jobs)</h4><br><p>ServiceNow&#39;s System Data Management module is a no-code tool for bulk updates and deletes. You configure conditions, specify field values, and execute. It handles batching internally, you can schedule it, and it has built-in rollback, which is actually useful.</p><br><p>For static updates, Data Management is the right answer. If your operation is something like &quot;set category to &#39;network&#39; where description contains &#39;network&#39; and category is not &#39;network&#39;,&quot; use Update Jobs. Don&#39;t over-engineer it. The rollback capability alone justifies using it over a script.</p><br><p>But Data Management only supports setting fields to static values. It doesn&#39;t support computed values, cross-table lookups, conditional transforms, or any per-record scripting logic. I&#39;ve seen people try to use JavaScript syntax in the value fields — it doesn&#39;t work well. The moment your operation requires a script, Data Management is out.</p><br><p>There are also operational caveats. Updating all records in a table can temporarily lock the table, preventing inserts and updates. And the scheduling feature has been unreliable in my experience. Jobs sometimes sit in a &quot;New&quot; state and never fire, while &quot;Execute Now&quot; works fine. Not a dealbreaker, but worth knowing.</p><br><p>Use it when: you need a bulk field update with static values and want built-in rollback.</p><br><p>Skip it when: your operation requires per-record scripting logic.</p><br><h4>Background script with setLimit or chooseWindow</h4><br><p>This is where most developers land first for scripted operations. Write a GlideRecord loop, use <code>setLimit()</code> or <code>chooseWindow()</code> to process a subset, run it, adjust the window, run it again recursively or manually.</p><br><p>It works at moderate scale — tens of thousands of records. But the entire background script runs in a single transaction, and the transaction timeout clock starts when you hit Run. Calling functions recursively within the script doesn&#39;t reset the timer. The default background script quota is 4 hours, and transaction quota rules may enforce tighter limits depending on your instance configuration.</p><br><p>For larger datasets, this approach means manually re-running the script over and over, adjusting your query window each time. That&#39;s too much babysitting. It&#39;s fine for a one-time operation on 50,000 records. It&#39;s not viable for 1M+.</p><br><p>Use it when: you have a scripted operation on a moderate dataset (under ~100K records) and you&#39;re willing to babysit a few runs.</p><br><p>Skip it when: the dataset is large enough that re-runs become impractical, or when you need the operation to run unattended.</p><br><h4>Scheduled job on a recurring interval</h4><br><p>The next step up: create a scheduled job that processes a batch of records on a fixed interval. Run every 5 minutes, process 500 records per run. Walk away.</p><br><p>The problem is the interval math. You have to set the interval longer than your worst-case batch execution time to prevent jobs from piling up. If you&#39;re conservative (and you should be, because instance load varies), you waste time when the instance is idle.</p><br><p>Tim Woodruff illustrated this well in his <a href=\"https://snprotips.com/blog/2018/10/11/how-to-do-massive-slow-database-operations-without-slowing-down-your-instance\">SNProTips article on event-driven recursion</a>: if each operation takes 10-60 seconds and you&#39;re doing 20 records per batch with a conservative interval, processing 100,000 records could take 83 days. That&#39;s not a typo.</p><br><p>Scheduled jobs are the right tool for ongoing maintenance. It&#39;s great for a nightly cleanup or a weekly reconciliation. It&#39;s the wrong tool for one-off large-scale operations where you want the work done as fast as the instance allows.</p><br><p>Use it when: you need a recurring maintenance operation that processes records in perpetuity.</p><br><p>Skip it when: you have a one-off large-scale operation and need it done in hours or days, not months.</p><br><h4>Event-driven recursion</h4><br><p>This is the pattern I reach for when every other option has a dealbreaker: the operation requires per-record scripting, the dataset is too large for manual re-runs, and a scheduled job is too slow.</p><br><p>The core idea, originally documented by Woodruff, is simple: register an event, create a script action that processes a batch and then fires the same event again, and kick it off with a background script. </p><br><p>Each event fires a new script action in a <strong>new transaction</strong>, which genuinely resets the timeout clock. The next batch only fires after the current one completes, so there&#39;s no pile-up. And because each event goes to the back of the event queue, the instance self-throttles as needed. Batches run slower when the queue is busy, faster when it&#39;s idle.</p><br><p>This is the pattern I use the most, with some refinements I&#39;ll cover below.</p><br><h2>Implementing event-driven recursion</h2><br><p>The pattern has three components: </p><br><ol><br><li>an event registration</li><br><li>a script action</li><br><li>and a one-time background script to kick off the first event</li><br></ol><br><h4>Step 1: Register the event</h4><br><p>Navigate to <strong>System Policy &gt; Events &gt; Registry</strong> and create a new event. Give it a descriptive name, something like <code>custom.batch.backfill_field</code>. Add a short blurb about what triggers the event in the <code>Fired by</code> field. And add a description.</p><br><h4>Step 2: Create the script action</h4><br><p>Navigate to <strong>System Policy &gt; Events &gt; Script Actions</strong> and create a new record. Set the <strong>Event name</strong> to the event you registered. Set <strong>Order</strong> to 1000 (low priority - you don&#39;t want this competing with critical event processing).</p><br><p>Here&#39;s the what the Script Action looks like:</p><br><pre><code class=\"language-js\">/**<br> * Event-driven recursive batch processor.<br> * Processes records in batches, firing a new event after each<br> * batch to reset the transaction timeout and prevent instance<br> * degradation.<br> *<br> * @param {number} batchSize - Records to process per event <br> * (via event.parm1)<br> *<br> * @param {number} totalProcessed - Running count of processed<br> * records (via event.parm2)<br> */<br>(function process(batchSize, totalProcessed) {<br><br>    var EVENT_NAME = &#39;custom.batch.backfill_field&#39;; <br>    var TABLE = &#39;incident&#39;;<br>    var QUERY = &#39;u_custom_field=NULL&#39;;<br><br>    batchSize = parseInt(batchSize, 10) || 5000;<br>    totalProcessed = parseInt(totalProcessed, 10) || 0;<br><br>    var gr = new GlideRecord(TABLE);<br>    gr.addEncodedQuery(QUERY);<br>    gr.setLimit(batchSize);<br>    gr.query();<br><br>    if (!gr.hasNext()) {<br>        gs.info(<br>\t        &#39;EDR Complete: &#39; + totalProcessed + <br>\t        &#39; total records processed.&#39;,<br>\t        &#39;BatchProcessor&#39;<br>\t    );<br>        return; // No more records. Stop the recursion.<br>    }<br><br>    var batchCount = 0;<br>    while (gr.next()) {<br>    <br>        // ------- YOUR PROCESSING LOGIC HERE -------<br>        gr.setValue(&#39;u_custom_field&#39;, computeValue(gr));<br>        gr.setWorkflow(false); // Skip business rules if safe<br>        gr.update();<br>        // -------------------------------------------<br>        <br>        batchCount++;<br>    }<br><br>    totalProcessed += batchCount;<br>    gs.info(<br>\t    &#39;EDR Progress: &#39; + totalProcessed +<br>\t    &#39; records processed. Firing next batch.&#39;, <br>\t    &#39;BatchProcessor&#39;<br>    );<br><br>    gs.eventQueue(EVENT_NAME, gr, batchSize, totalProcessed);<br><br>})(event.parm1, event.parm2);<br></code></pre><br><p>A few things to note about this template:</p><br><p><strong>The query must exclude already processed records.</strong> This is what makes the pattern resumable. If your operation sets a field from NULL to a computed value, query for records where that field is still NULL. Each batch processes records that match the query, transforms them so they no longer match, and the next batch picks up where this one left off. If the process fails mid-run, you restart it and it automatically skips records that were already processed.</p><br><p><strong><code>setWorkflow(false)</code> is your friend, when appropriate.</strong> Every <code>gr.update()</code> fires every business rule, workflow, and/or notification on that table. On a table with heavy automation, this can be the difference between processing 5000 records per batch and 50. But only suppress workflows if your operation genuinely doesn&#39;t need the side effects. If you&#39;re reclassifying CIs and downstream processes need to react to the class change, leave workflows on and reduce your batch size.</p><br><p>Also note that <code>setWorkflow(false)</code> means that the <code>Updated</code> date/time and <code>Updated by</code> fields will not be updated.</p><br><p><strong>Log progress consistently.</strong> When this runs across a million records, you need to know where it is. The <code>gs.info</code> calls with a consistent source tag (<code>BatchProcessor</code>) let you filter the system log and watch progress in real time.</p><br><h4>Step 3: Kick it off</h4><br><p>Run this once in Background Script or Fix Script to fire the first event:</p><br><pre><code class=\"language-javascript\">gs.eventQueue(<br>\t&#39;custom.batch.backfill_field&#39;, <br>\tnew GlideRecord(&#39;incident&#39;), <br>\t5000, <br>\t0<br>);<br></code></pre><br><p>The first parameter is the batch size (5000 records per event), the second is the starting count (0). The script action handles everything from here.</p><br><p>When running this from a Fix Script, the record updates from the script action will <strong>not</strong> be recorded for roll back.</p><br><h2>Nested batching: batches within the batch</h2><br><p>Here&#39;s a refinement I use that goes beyond the base pattern. Within a single script action execution, I process records using a recursive function that operates on sub-batches.</p><br><pre><code class=\"language-javascript\">(function process(batchSize, totalProcessed) {<br><br>    var EVENT_NAME = &#39;custom.batch.backfill_field&#39;;<br>    var TABLE = &#39;incident&#39;;<br>    var QUERY = &#39;u_custom_field=NULL&#39;;<br>    var SUB_BATCH_SIZE = 1000;<br><br>    batchSize = parseInt(batchSize, 10) || 50_000;<br>    totalProcessed = parseInt(totalProcessed, 10) || 0;<br><br>    var batchCount = 0;<br><br>    function processSubBatch() {<br>        var gr = new GlideRecord(TABLE);<br>        gr.addEncodedQuery(QUERY);<br>        gr.setLimit(SUB_BATCH_SIZE);<br>        gr.query();<br><br>        if (!gr.hasNext()) {<br>            return false; // No more records<br>        }<br><br>        while (gr.next()) {<br>        <br>            // ------- YOUR PROCESSING LOGIC HERE -------<br>\t        gr.setValue(&#39;u_custom_field&#39;, computeValue(gr));<br>\t        gr.setWorkflow(false); // Skip business rules if safe<br>\t        gr.update();<br>\t        // -------------------------------------------<br>\t        <br>            batchCount++;<br>        }<br><br>        // If we haven&#39;t hit the outer batch limit, recurse<br>        if (batchCount &lt; batchSize) {<br>            return processSubBatch();<br>        }<br>        return true; // More records exist<br>    }<br><br>    var moreRecords = processSubBatch();<br><br>    totalProcessed += batchCount;<br>    gs.info(<br>\t    &#39;EDR Progress: &#39; + <br>\t    totalProcessed + &#39; records processed (&#39; + <br>\t    batchCount + &#39; this batch).&#39;<br>    );<br><br>    if (moreRecords) {<br>        gs.eventQueue(<br>\t        EVENT_NAME, <br>\t        new GlideRecord(TABLE), <br>\t        batchSize, <br>\t        totalProcessed<br>\t    );<br>    }<br>    else {<br>        gs.info(<br>\t        &#39;EDR Complete: &#39; + <br>\t        totalProcessed + &#39; total records processed.&#39;<br>        );<br>    }<br><br>})(event.parm1, event.parm2);<br></code></pre><br><p>The outer <code>batchSize</code> controls how many records are processed per event (per transaction). <br>The inner <code>SUB_BATCH_SIZE</code> controls how many records are queried and processed per recursive function call within that transaction.</p><br><p>I want to be transparent about why I use this structure. In my experience, I&#39;ve consistently observed that using recursive sub-batch calls within a single script action lets me process significantly more records per event, roughly 30,000-50,000 per event compared to 10,000-15,000 with a flat loop. But I haven&#39;t been able to isolate exactly why.</p><br><p>A few hypotheses. The recursive function calls may scope variables more tightly, allowing the JavaScript engine to garbage collect more efficiently between sub-batches. The transaction quota manager may check elapsed time at specific intervals rather than continuously, and the execution profile of the recursive structure may squeeze more work between checks. Or it could be a measurement artifact? Instance load varies, and without controlled benchmarking, the difference could be coincidental.</p><br><p>I use the pattern anyway because the throughput improvement has been consistent enough across multiple operations to be worth including. But I&#39;m presenting it as a technique worth testing in your environment, not a guaranteed optimization. If you benchmark this and find a clearer explanation, I&#39;d like to hear about it.</p><br><h2>Operating the pattern in production</h2><br><p>The mechanics of the pattern are straightforward. The operational details are where things get real.</p><br><h4>Monitoring progress</h4><br><p>Filter the system log by your source tag and sort by timestamp. You&#39;ll see a running count of processed records with timing between entries. If the gap between log entries suddenly increases, the instance is under load and the event queue is backing up. If entries stop appearing entirely, something went wrong. Check the most recent log entry for error context.</p><br><p>For long-running operations, I&#39;ll sometimes add a GlideAggregate call at the start of each batch to count remaining records matching the query. This gives you an estimated completion percentage, not just a running total.</p><br><h4>Handling failures</h4><br><p>The beauty of the query-based approach is that it&#39;s inherently resumable. If a batch fails partway through, the records that were already processed no longer match the query. You just fire the event again and it picks up from where it stopped.</p><br><p>Design your query so that processing a record removes it from the result set. This is the single most important design decision in the pattern.</p><br><h4>Event queue considerations</h4><br><p>Each batch fires one event. If you&#39;re processing a million records at 50,000 per batch, that&#39;s 20 events over the course of the operation. This is fine for the default event queue.</p><br><p>But if your batch size is small (say, 50 records because each operation is expensive) and your dataset is large, you&#39;ll generate thousands of events. At that scale, you can clog the default event queue and delay processing of other events like notifications, integrations, anything that runs through the same queue. ServiceNow provides a mechanism to create dedicated event queues for exactly this scenario. Move your batch processing event to its own queue so it doesn&#39;t compete with production event processing.</p><br><h4>Business rules and side effects</h4><br><p>I mentioned <code>setWorkflow(false)</code> earlier. Here&#39;s when to think carefully about it.</p><br><p>If you&#39;re doing a data cleanup or backfill where no downstream systems need to react then use <code>setWorkflow(false)</code>. The performance difference is dramatic. On a table with 10+ business rules, turning off workflows can increase your throughput by an order of magnitude.</p><br><p>If you&#39;re doing a reclassification or status change where other systems, notifications, or audit trails need to reflect the change, then leave workflows on. Reduce your batch size to compensate, and accept that the operation will take longer. Correctness beats speed.</p><br><p>There&#39;s no universal answer. Know your table&#39;s automation landscape before you decide.</p><br><h4>Testing before you commit</h4><br><p>Never run a batch operation at scale without validating the logic on a small subset first.</p><br><p>Set your initial batch size to 10. Fire one event. Check the 10 records it processed. Are the values correct? Did the right business rules fire (or not fire)? Did anything unexpected happen in the system log?</p><br><p>Then bump it to 100. Then 1,000. Watch the instance performance dashboard. If you&#39;re seeing degradation, reduce the batch size. The pattern is self-throttling via the event queue, but a batch that&#39;s too large can still spike resource usage within a single transaction.</p><br><p>The batch and sub-batch sizes are levers and knobs that allow you to fine tune the processing  performance. There are variations in performance between sub-prod and production environments to consider. From my experience, production usually performs better than sub-prod, but your mileage may vary.</p><br><h2>When to use what</h2><br><p>If this post gives you one thing, make it this:</p><br><p><strong>Static field update on any number of records →</strong> Data Management Update Jobs. No scripting needed, rollback included.</p><br><p><strong>Scripted operation on under ~100K records →</strong> Background script with <code>setLimit()</code>. Manual, but manageable.</p><br><p><strong>Scripted operation on 100K+ records →</strong> Event-driven recursion. Set it up once, kick it off, monitor progress.</p><br><p><strong>Recurring maintenance operation →</strong> Scheduled job. Built for this.</p><br><p>The event-driven recursion pattern isn&#39;t clever. The skill is recognizing when you need it and when a simpler tool will do the job.</p><br>",
            "url": "https://jg.dev/writing/process-1-million-plus-records-in-servicenow",
            "title": "Process 1M+ records in ServiceNow without killing your instance",
            "summary": "A decision framework for large-scale data operations in ServiceNow, and the event-driven recursion pattern I use when nothing else works.",
            "image": "https://jg.dev/images/writing/process-1-million-plus-records-in-servicenow/og.png",
            "date_modified": "2026-03-13T00:00:00.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Engineering"
            ]
        },
        {
            "id": "https://jg.dev/writing/cognitive-debt",
            "content_html": "<p>Software teams have always had someone who understood the code first. Somebody built the system, carried the mental model, and everyone else had to catch up. Cognitive debt isn&#39;t new, but AI-assisted development is reshaping how it compounds, who owns it, and what it takes to resolve. Today, working code can ship without anyone ever having held the mental model. You can now ship working code where nobody ever held the mental model. The person who prompted it understands the intent but not necessarily the implementation. That&#39;s not a different degree of the old problem. It&#39;s a different kind of problem entirely.</p><br><h2>What cognitive debt actually is</h2><br><p>Cognitive debt is what accumulates when your codebase outpaces your team&#39;s ability to reason about it. It&#39;s not the lack of understanding itself, it&#39;s the latent liability that the difference creates. The debt comes due when something breaks, requirements change, or you need to extend functionality and nobody can work from first principles because they never built the mental model.</p><br><p>This is distinct from technical debt, and the distinction matters. Technical debt is visible. You can search for TODOs, measure complexity, see the dependency graph. Cognitive debt is invisible until it isn&#39;t. The codebase looks clean, the tests pass, and when something unexpected happens, nobody knows why it works the way it does. </p><br><p>That invisibility is what makes cognitive debt dangerous. Technical debt announces itself. Cognitive debt hides until something breaks and the person who needs to fix it can&#39;t explain how it works.</p><br><h2>What changes with AI, and what doesn&#39;t</h2><br><p>The pre-AI version of cognitive debt was a structural problem. Someone builds an entire system, leaves or gets hit by a bus, and suddenly the team owns code they can&#39;t reason about. The industry&#39;s mitigation strategies: documentation, code review, pair programming, knowledge sharing sessions. These are all attempts to transfer mental models between people. They&#39;re cognitive debt repayment strategies, and they worked because the knowledge existed somewhere. You just had to move it.</p><br><p>With AI-generated code, there&#39;s no one to transfer knowledge from. The &quot;author&quot; never held the mental model. The person who prompted it understands the intent but not necessarily the implementation. That&#39;s a qualitatively different starting point.</p><br><p>But here&#39;s where I need to be precise, because it&#39;s easy to overstate this: AI code generation tools can also produce explanations, diagrams, and conversations that help a developer build a mental model of what was generated. </p><br><p>Unlike a departed teammate, AI is always available to explain its own work. You can ask it to walk through the logic, explain tradeoffs, diagram the architecture, identify edge cases. It&#39;s an infinitely patient thought partner that never gets tired of questions.</p><br><p>So AI doesn&#39;t automatically create cognitive debt. It creates cognitive debt when developers skip the understanding step.</p><br><p>That makes this more of a discipline problem than a tooling problem. </p><br><p>The old version was structural. Knowledge got siloed because transferring it was expensive and slow. </p><br><p>The new version is behavioral. Understanding is available on demand, but you have to want it.</p><br><h2>Why the behavioral version is harder</h2><br><p>Here&#39;s what changes when cognitive debt shifts from structural to behavioral. </p><br><p>Your existing interventions stop working. You can mandate code reviews. You can block production releases with missing documentation or failing tests. These are structural interventions for a structural problem, and they work reasonably well.</p><br><p>Code reviews catch things a reviewer can see - obvious bugs, style violations, architectural red flags. It doesn&#39;t catch a developer who approved code they don&#39;t fully understand. A reviewer can look at AI-generated code, find nothing obviously wrong, and approve it, without anyone in the transaction building a durable mental model of what it does. The process ran correctly. The debt is still there.</p><br><p>You can require documentation. But a developer who doesn&#39;t understand the code they&#39;re shipping will write documentation that describes the intent, not the implementation. The documentation will look fine and fail you when you need it most.</p><br><p>And you can&#39;t mandate curiosity. You can&#39;t force someone to actually internalize an explanation rather than skim it. The behavioral version creates a new failure mode that structural version does not: the developer who thinks they understand the code because they prompted it, reviewed the output, and it looked reasonable. They&#39;re carrying invisible debt and they don&#39;t know it. The person who inherits a departed colleague&#39;s code at least knows they&#39;re in unfamiliar territory.</p><br><p>This has implications beyond individual practice. If the defining skill in an AI-augmented world is the willingness to slow down and understand (not just the speed at which you can generate), that changes what &quot;good&quot; looks like. The fastest prompter isn&#39;t the best developer. The one who consistently builds mental models of what they ship is. That&#39;s a harder thing to screen for in hiring, harder to measure in performance reviews, and harder to cultivate in a culture that celebrates velocity above almost everything else.</p><br><h2>Cognitive debt compounds</h2><br><p>There&#39;s a reason that makes this urgent rather than merely interesting - cognitive debt compounds.</p><br><p>When you ship code you don&#39;t understand, you make implicit assumptions about how it behaves. You don&#39;t know exactly where those assumptions are because you never reasoned through the implementation. When you build on top of that code with more AI-generated code, your new code inherits and depends on those unexamined assumptions. Each layer narrows your ability to reason about the layers beneath it, because the foundation is opaque and you&#39;ve built on top of the opacity. Each new layer of debt makes the layers beneath it harder to pay back, because now you can&#39;t reason about the foundation you&#39;re building on.</p><br><p>Technical debt compounds differently and more visibly. You watch the dependency graph get tangled, the build times creep up. Cognitive debt compounds silently. The codebase can look perfectly clean with passing tests and tidy architecture diagrams while the team&#39;s ability to reason about unexpected behavior degrades with every commit.</p><br><p>This is why &quot;I&#39;ll understand it later&quot; is more expensive than it sounds. Later, you&#39;ll be trying to understand unfamiliar code under pressure, with multiple additional layers of unfamiliar code on top of it, while something is broken in production. The cost of understanding goes up every day you defer it. And because cognitive debt is invisible, you won&#39;t know how much you owe until the fire is already burning.</p><br><h2>The sedan, not the Ferrari</h2><br><p>If pre-AI development were synonymous with &quot;horse and carriage&quot; days, then post-AI is like having a Ferrari. Raw capability. Maximum speed. You can generate in 30 seconds what would have taken two hours.</p><br><p>What I&#39;m suggesting is that we drive a sedan. Faster than a horse, but not as fast as a Ferrari.</p><br><p>On my team, if I use AI to generate code, I&#39;ll have one of my teammates grill me on it before the code ever sees our test environment. Not a cursory review...something closer to an interrogation. What are the edge cases? Why this approach over alternatives? What happens when this input is null? What assumptions does this make about the data?</p><br><p>Either I understand the code well enough to defend it, or I don&#39;t. We find out right there.</p><br><p>It&#39;s code review, which every serious team already does. The difference is the intent behind it. It checks whether the author actually understands what they pushed, directly addressing cognitive debt.</p><br><p>Knowledge transfer has always been a first-class reason code review the practice exists. So I want to be clear that this isn&#39;t some new process I invented. What I&#39;m describing is code review accounting for a failure mode it wasn&#39;t originally calibrated to catch.</p><br><p>Without this, tgshe author sounds confident. The reviewer finds nothing obviously wrong. The debt enters anyway. Existing review culture tends to catch shallow code. It&#39;s less equipped to catch shallow understanding of code that isn&#39;t shallow because the two used to be harder to decouple. You generally couldn&#39;t submit well-structured, well-tested code you didn&#39;t understand. Now you can.</p><br><p>So the practice is to review with a specific additional question: </p><br><blockquote><br><p><em>Does the author actually own this, or do they just own the output?</em></p><br></blockquote><br><p>When someone can&#39;t defend the code, it doesn&#39;t move forward. They go back, use the AI to walk through the implementation, ask about the tradeoffs it made. Then they come back and we go again. Sometimes they return with the same code and a better understanding. Sometimes the interrogation reveals the AI&#39;s approach was wrong and they come back with something different. Both outcomes are the system working.</p><br><p>This step is slower than just shipping. But it catches the invisible debt before it enters the codebase and starts compounding. And the result is a team where everyone can reason about the code they own.</p><br><h2>A note on reading code vs. writing code</h2><br><p>There&#39;s an assumption worth questioning: that understanding AI-generated code produces an equivalent mental model to writing the code yourself.</p><br><p>It might not. Writing code and reading code are different cognitive activities. When you write code, you make dozens of decisions every minute. You choose this approach over that one, you handle the edge case explicitly, you feel the resistance when a design doesn&#39;t fit. That friction is load-bearing. It&#39;s how you build durable understanding.</p><br><p>When you read code - even carefully, even with an AI walking you through it - you&#39;re reconstructing intent rather than experiencing the decisions as they were made. You can end up with an accurate understanding of what the code does without a feel for why it was shaped that way, or where it&#39;s fragile.</p><br><p>I don&#39;t think this means AI-assisted development produces systematically worse mental models. A developer who uses AI to generate code, then deliberately uses the same AI to stress-test their understanding, can end up knowing the code well. Sometimes better than if they&#39;d written it quickly under time pressure. But the equivalence isn&#39;t automatic, and treating it as automatic is how you end up thinking you understand something you don&#39;t</p><br><h2>Understanding takes time</h2><br><p>The current AI discourse is almost entirely about speed. Ship faster, build faster, iterate faster. But understanding takes time. We know this from spaced learning and distributed practice. We experience it every time we spend enough time with something until it finally clicks. These are biological processes, not workflow steps you can optimize away.</p><br><p>AI can compress the production of code. It cannot compress the cognition required to own it. Cognitive debt is the disconnect between those two speeds. Ship enough code you don&#39;t understand, and the interest compounds quietly.</p><br><p>But AI <em>can</em> change how productively you spend that cognitive time. Picture a spectrum:</p><br><p>At one end, you prompt, get code, and ship it unread. Maximum debt, and you don&#39;t even know you&#39;re carrying it. The invisibility is total.</p><br><p>In the middle, you prompt, read the code, and roughly follow the approach. Moderate debt. You know the shape but not the edge cases. You&#39;ll recognize the debt when it comes due, but you won&#39;t pay it down quickly.</p><br><p>At the other end, you prompt, then use the AI to walk you through the logic, interrogate tradeoffs, and stress-test failure modes. Minimal debt, and potentially <em>less</em> debt than writing it yourself.</p><br><p>That last point is worth sitting with. A developer who deliberately uses AI to learn the code it generated can build a stronger mental model than one who wrote it manually at 5pm on a Friday to hit a deadline and barely remembers how it works by Monday. AI doesn&#39;t just create the debt problem. It can also be the best tool available for paying it down if you choose to use it that way.</p><br><h2>The real question</h2><br><p>AI doesn&#39;t create cognitive debt. Instead, it shifts who&#39;s responsible for managing it from teams and organizational structures to individual developers making choices that noone else can see. This used to be a structural problem. It was something teams dealt with through onboarding, code review, required documentation etc. These work reasonably well because the problem is visible and the interventions are enforcable. </p><br><p>The new paradigm now it lands squarely on the individual developer. Are you using AI to skip understanding or are you using it to accelerate toward understanding? That choice determines whether you&#39;re accumulating invisible debt or paying it down in real time. And because the consequences are invisible until something breaks, there&#39;s no external feedback loop to catch you if you&#39;re making the wrong choice. </p><br><p>That&#39;s uncomfortable. It means the same tool can make you significantly better or significantly worse as a developer, depending on a discipline that&#39;s hard to screen for, hard to meaasure, and hard to mandate. The organizations that figure out how to cultivate it through culture, incentives, review processes that check for understanding, will have a real advantage. The ones that don&#39;t will accumulate debt they can&#39;t see until it&#39;s already expensive.</p><br><p>The speed gains from AI are real, and take the time to understand doesnt cancel out that speed. Thirty seconds of generation plus twenty minutes of deliberate comprehension is still faster than two hours of manual code writing. You delivered faster and you own what you shipped</p><br><p>The question is whether you&#39;re choosing to spend those twenty minutes.</p><br><hr><br><p><em>This is the third and final post in a series about the cognitive side of software engineering. The first was <a href=\"https://jg.dev/writing/slow-cook-your-ideas\">Slow cook your ideas</a>, about using LLMs as thinking partners to develop stronger mental models before building. The second was <a href=\"https://jg.dev/writing/the-five-dimensions-of-cognitive-work\">The five dimensions of cognitive work</a>, about the invisible cognitive costs that no productivity metric captures.</em></p><br>",
            "url": "https://jg.dev/writing/cognitive-debt",
            "title": "Cognitive debt",
            "summary": "AI doesn't automatically create cognitive debt. It creates cognitive debt when developers skip the understanding step.",
            "image": "https://jg.dev/images/writing/cognitive-debt/og.png",
            "date_modified": "2026-03-09T00:00:00.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Engineering"
            ]
        },
        {
            "id": "https://jg.dev/writing/the-five-dimensions-of-cognitive-work",
            "content_html": "<p>I was on a call with thirty people, leading a solutioning discussion, and my brain just stopped. Not gradually. Mid sentence. The system I&#39;d been holding in my head, the one only I fully understood, the one I&#39;d been carrying context on for months, just collapsed. What came out of my mouth was something like &quot;uhh durr, my brain just stopped working, sorry guys.&quot; Thirty people on the line. Silence. Then I tried relentlessly to reload everything back in my mind to pick up the thread of my own thinking and resume like it hadn&#39;t happened.</p><br><p>That moment wasn&#39;t about being tired. I wasn&#39;t sleep deprived or burned out in the emotional sense. I&#39;d hit a different kind of wall, one that software engineering hits in a way most jobs don&#39;t, and one we barely have language for.</p><br><p>In my last post I talked about <a href=\"https://jg.dev/writing/slow-cook-your-ideas\">slow cooking your ideas</a> and ended with a concept I&#39;d been thinking about: cognitive endurance. But as I kept pulling on that thread, I realized endurance is only one dimension of something larger. Software engineering is cognitive work, and we have almost no vocabulary for what that actually means.</p><br><p>We have language for output. Velocity, throughput, cycle time, etc. We have entire dashboards dedicated to measuring what came out the other end. But we have almost nothing for the work that happens before the fingers hit the keyboard. The four hours you spent holding a system in your head to find a one-line fix? That doesn&#39;t show up anywhere.</p><br><p>That&#39;s a problem. Not just because it&#39;s unfair to the people doing the work, but because it makes us bad at planning, bad at protecting our most valuable resource, and bad at understanding why some tasks take ten times longer than they &quot;should.&quot;</p><br><p>Some teams have started to develop this language on their own, even if they don&#39;t realize it. On my team, the word you hear every day is &quot;tracking.&quot; Someone will say &quot;Ok, I&#39;m tracking,&quot; meaning they understand, they&#39;ve loaded the context. Or &quot;I wasn&#39;t tracking that,&quot; meaning it wasn&#39;t in their mental model. Anytime I hear the word tracking, I know someone&#39;s context just got updated. Something was added, removed, or both. It&#39;s a small word, but it&#39;s doing real cognitive work: it tells the room whether information landed or didn&#39;t without anyone having to explain the mechanics of why.</p><br><p>That&#39;s one word. I think we need a whole vocabulary.</p><br><h2>Five dimensions</h2><br><p>Over the past few years I&#39;ve become aware of five distinct dimensions of cognitive work. I didn&#39;t learn these from a textbook. I learned them from hitting walls, watching teammates hit walls, and slowly building a vocabulary for what was happening.</p><br><p><strong>Cognitive load</strong> is how complex a task is. Not how long it takes or how many lines of code it produces, how many interacting pieces you need to reason about simultaneously. Debugging a timing issue between two server side components might result in a one line fix. The load was enormous. The output was trivial. If you&#39;ve ever finished a task and felt completely drained despite having &quot;barely written any code,&quot; you were experiencing high load with low output, and no productivity metric in the world would have captured what that cost you.</p><br><p>If you map load against output, you get four quadrants. </p><br><p><em>High load, high output</em> is the dream. It&#39;s complex work that produces visible results. People get promoted for this. </p><br><p><em>High output, low load</em> is the grind. This is repetitive work that fills up a commit history without taxing the brain. It looks productive and it is, just not in the way that grows you. </p><br><p><em>Low load, low output</em> is downtime, and every team needs some. </p><br><p><em>High load, low output</em> is where the invisible work has the greatest impact. You spent the whole day reasoning through a system, tracing dependencies, weighing tradeoffs, and the only artifact is a one-line fix or a decision not to change anything at all. This is some of the most valuable work in software engineering, and it&#39;s the quadrant that&#39;s completely invisible to every productivity metric we use.</p><br><p><strong>Cognitive capacity</strong> is how much you can hold in your mind at once. Think of it like RAM. A computer doesn&#39;t get slower because the task is harder. It gets slower when it runs out of memory and starts swapping to disk. Your brain does the same thing. When you exceed your capacity, things start falling out. You forget the edge case you were tracking. You lose the thread of why you made a decision three files ago. The system you were holding in your head collapses, and you have to rebuild it from scratch. But unlike a computer, you can&#39;t check your resource monitor or task manager app. You rarely know you&#39;ve exceeded capacity until something drops.</p><br><p>This is different from load. Load is a property of the task. Capacity is a property of you, on this day, in this context. The same person can have vastly different capacity on a Monday morning after a quiet weekend versus a Thursday afternoon after three days of back to back meetings.</p><br><p>Senior engineers seem like they&#39;re seeing around corners. They&#39;re not. They&#39;re rotating the problem faster than you can track. That&#39;s <strong>mental agility</strong>, the ability to view a problem from multiple perspectives and shift between them fluidly. Can you think about this from the user&#39;s perspective, then the database&#39;s perspective, then the API perspective, then back to the user&#39;s? Can you hold the current implementation in your head while simultaneously reasoning about an alternative architecture? Agility is what separates &quot;I think this will work&quot; from &quot;I can see three ways this breaks and two ways to prevent it.&quot;</p><br><p><strong>Cognitive endurance</strong> is the ability to sustain high-load cognitive work over extended periods. Not in bursts. Continuously. This is the dimension I teased at the end of my last post, and the one I think is most underappreciated because the tasks that demand it most are often the ones that look the least productive from the outside.</p><br><p>I was on a team of four when we hired fifteen people all at once. Getting them up to speed while continuing to develop and ship was probably the highest-effort cognitive task of my career. For eighteen months, nearly every workday involved teaching, coaching, sharing context, reviewing code, creating diagrams and documentation, making and explaining design decisions, repeating things until they clicked, helping people find their place on the team, all while developing and shipping my own features. In a post-AI world, this may not sound like a lot. It was a lot.</p><br><p>The hardest part wasn&#39;t any individual day. It was that it didn&#39;t stop. Eighteen months of operating above your baseline, carrying context that only you hold, being the person the team turns to for both the problem definition and the solution. There were many times I felt like quitting, but I&#39;m not a quitter and I love my team. I endured. And after those eighteen months, people began operating with more autonomy and high agency. They started contributing ideas that were sometimes better than my own. The investment paid off, but endurance is what it cost.</p><br><p>The call with thirty people where my brain collapsed? That happened during this period. I was the only person on the team who truly understood a complex problem. Being the sole carrier of that cognitive burden is difficult when you&#39;re the only person on the team, period. With over a dozen people depending on you to bring the right problem, the right solution, and all of the thought-work on how to deliver? The weight compounds. You show up every day with a smile, and then one day on a call your brain just quits mid-sentence.</p><br><p>Maybe this is normal for a seasoned engineering manager or director. I was a software developer thrown into the deep end of managing people and technical systems at the same time. Nobody told me the job was going to be an endurance event.</p><br><p><strong>Context switching cost</strong> is the toll of moving between cognitive tasks. Every developer knows this one intuitively, but I think we underestimate how it interacts with the other four. Switching between two low-load tasks is cheap. Switching between two high-load tasks is devastating. You lose time by dumping everything out of your cognitive capacity and reloading a completely different system, with no guarantee you&#39;ll get the first one all back. During that eighteen-month stretch, I was context switching between teaching, designing, coding, and managing dozens of times a day. Each switch had a cost. The costs accumulated.</p><br><p>Then there&#39;s forced context switching, the kind you have no control over. You&#39;ve loaded a full system into your brain. You&#39;re tracking multiple conditions and side effects, headphones on, deep in it. And someone taps you on the shoulder to tell you something completely unrelated. Everything you were holding just scattered. The interruption took five seconds. Rebuilding the mental state takes thirty minutes, if you can get it all back at all.</p><br><p>Random side note: This is the reason that despite having a multitude of apps installed on my phone, the only notifications I allow are phone calls and text messages. And when I&#39;m deep in work, even those are disabled. It&#39;s self defense. Every notification is a forced context switch, and every forced context switch has a cognitive cost that the person/app sending the notification never has to pay.</p><br><h2>Cognitive proprioception</h2><br><p>There&#39;s a sixth concept here that doesn&#39;t fit neatly into the five dimensions, but I think it might be the most important one: cognitive proprioception.</p><br><p>Physical proprioception is your body&#39;s awareness of where it is in space. You can close your eyes and touch your nose because your body knows where your hand is without looking. Cognitive proprioception is the same thing for your mental state. It&#39;s the ability to sense where you are cognitively at any given moment. Am I approaching my capacity ceiling? Is this task high-load or am I just tired? Do I need a break, or should I push through because I&#39;m close to cracking the problem?</p><br><p>Early in my career I couldn&#39;t distinguish between those states. I&#39;d push until I crashed, not realizing I&#39;d been past my limit for an hour. Sometimes I&#39;d spend all day trying to solve a problem, and only after reaching a breaking point, decide to take a walk outside. Then boom. As I&#39;m on my walk, the solution I was looking for hits me. Barbara Oakley calls this &quot;diffuse mode.&quot;</p><br><p>In 2016 after taking <a href=\"https://www.coursera.org/learn/learning-how-to-learn\">Barbara Oakley&#39;s course, &quot;Learning how to learn,&quot;</a> I increased my cognitive proprioception and cognitive awareness. Now I can usually feel when I&#39;m approaching the edge, and I can make better decisions about how to spend what&#39;s left. That awareness doesn&#39;t prevent the crash, but it lets me see it coming and choose how to respond.</p><br><p>Proprioception develops with experience. So does raw capacity. A junior engineer&#39;s ceiling for holding system complexity is lower than a senior&#39;s, not because they&#39;re less intelligent, but because they haven&#39;t built the mental scaffolding yet. Years of loading systems into your head, tracing data flows, debugging across layers, that practice rewires how you reason about complexity. These cognitive properties aren&#39;t fixed. They&#39;re more like muscles. The more time you spend working them, the stronger they get and the more aware of them you become.</p><br><p>But even the strongest muscles have limits. The crash still comes. It just comes later, and you see it coming sooner.</p><br><h2>The invisible work</h2><br><p>If you&#39;re leading a team and you don&#39;t have a model for cognitive work, you&#39;re going to mismanage it. You&#39;re going to look at someone&#39;s light commit history and assume they had an unproductive day. You&#39;re going to schedule a &quot;quick&quot; architecture review between two deep debugging sessions and not understand why both suffer. You&#39;re going to estimate tasks by output complexity when you should be estimating by cognitive load.</p><br><p>A senior engineer staring at the ceiling for two hours is sometimes the most productive person on the team. They&#39;re not slacking. They&#39;re loading a system into their head, rotating it, finding the fault line, and preparing to make a precise intervention that will save everyone else days of work. But if your productivity metrics only measure output, that engineer looks like they&#39;re doing nothing.</p><br><p>Here&#39;s the flip side: when a team does cognitive work <em>together</em>, something happens that can&#39;t happen alone. I&#39;ve seen it in design sessions and architecture reviews where the whole team is genuinely thinking together, not presenting to each other, but instead building on each other&#39;s reasoning in real time. Someone throws out a undercooked idea, maybe even a bad one, and it sparks something in someone else. That spark leads to a direction nobody would have reached individually. The &quot;trash&quot; idea turns out to be the catalyst for the best idea in the room. And because everyone participated in the cognitive process of getting there, everyone feels individually invested. They understand the reasoning. They own the outcome. The quality of what gets built afterward is noticeably higher than when one person designs in isolation and hands down the plan.</p><br><p>I&#39;m known at work for writing elegant code. But to me simplifying code isn&#39;t just an aesthetic choice, it&#39;s a capacity investment. Every abstraction that reduces cognitive load for the next reader is giving them back capacity they can spend on the actual problem instead of on understanding my code. The tolerance for complexity in a codebase should be inversely proportional to how many people need to touch it. It&#39;s not thats simple code is &quot;better,&quot; but because every additional person who needs to reason about that code multiplies the cognitive cost.</p><br><h2>Cognitive currency</h2><br><p>Here&#39;s a question I keep coming back to: should we be accounting for cognitive cost the way we account for time and effort?</p><br><p>When an engineer estimates a task, they&#39;re usually estimating how long it will take or how many story points it&#39;s &quot;worth.&quot; But those estimates rarely capture what the task will actually cost the person doing it. A two-point story that requires holding three services in your head simultaneously costs more than a five-point story that&#39;s just tedious repetition. The points are higher on the second one, but the cognitive price is higher on the first. And the person who finishes the two-pointer will be more depleted than the person who finishes the five-pointer, which means whatever they pick up next will take longer than it should.</p><br><p>What if we had a way to talk about cognitive currency: how much cognitive capacity a task will cost, not just how much time? Not as a formal estimation framework, but as a shared language that a team can use when planning. &quot;This ticket is low effort but high cognitive cost&quot; is a sentence that would change how a team sequences work, protects focus time, and understands why someone needs a lighter afternoon after a heavy morning.</p><br><p>We track every other kind of engineering cost. Compute costs. Infrastructure costs. Opportunity costs. The one cost we never track is the cognitive cost to the humans doing the work. And it&#39;s the one that most directly determines quality, velocity, and retention.</p><br><h2>What changes</h2><br><p>If teams actually adopted this language, three things would shift.</p><br><p>First, estimation gets more honest. When you can say &quot;this is a high-load, high-endurance task&quot; instead of just &quot;this is hard,&quot; you can plan around it. You can protect the time. You can avoid scheduling a context-switch landmine in the middle of it.</p><br><p>Second, invisible work becomes visible. The four hours someone spent holding a system in their head to find a one line fix stops looking like an unproductive morning and starts looking like what it was... the most cognitively expensive work on the team that week.</p><br><p>Third, people get better at managing themselves. Once you have the vocabulary, you develop the proprioception. You start to notice when you&#39;re approaching your ceiling. You learn which kinds of work drain your endurance fastest. You stop pushing past the cliff because you finally have a name for what&#39;s happening on the other side of it.</p><br><p>We track lines of code, tickets closed, PRs merged. We don&#39;t track the cognitive cost of the work that produced them. I don&#39;t think we&#39;ll get better at managing cognitive work until we have better language for it. This is the vocabulary I wish I&#39;d had ten years ago.</p><br><hr><br><p><em>This is the second post in a series I&#39;m writing about the cognitive side of software engineering. The first was <a href=\"https://jg.dev/writing/slow-cook-your-ideas\">Slow cook your ideas</a>, about using LLMs as thinking partners to develop stronger mental models before building. Next I want to explore what cognitive debt looks like at the team level, when an entire organization loses the ability to reason about its own systems.</em></p><br>",
            "url": "https://jg.dev/writing/the-five-dimensions-of-cognitive-work",
            "title": "The five dimensions of cognitive work",
            "summary": "We track lines of code, tickets closed, PRs merged. Nobody tracks the four hours you spent holding a system in your head to find a one line of code fix.",
            "image": "https://jg.dev/images/writing/the-five-dimensions-of-cognitive-work/og.png",
            "date_modified": "2026-02-21T14:29:34.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Engineering"
            ]
        },
        {
            "id": "https://jg.dev/writing/the-myths-and-realities-of-open-source-software-security",
            "content_html": "<p>Given enough eyeballs, all bugs are shallow. That&#39;s Linus&#39;s Law, the idea that open source software is inherently more secure because anyone can inspect the code. It&#39;s one of the most repeated claims in software, and it contains a critical assumption that rarely gets examined: it assumes the eyeballs show up.</p><br><p>Most of the time, they don&#39;t.</p><br><h3>The attention problem</h3><br><p>The case for open source security starts with transparency. Anyone can read the code. Anyone can find a vulnerability, submit a patch, or flag a concern. For projects like Linux and Kubernetes, backed by institutional funding, staffed by paid maintainers, and scrutinized by thousands of contributors, this model works exceptionally well. These projects have formal security processes, dedicated response teams, and the resources to act fast when something breaks.</p><br><p>But most open source software looks nothing like Linux. Most of it is maintained by one or two people in their spare time, with no budget, no security audit process, and no guarantee that anyone is reviewing pull requests with adversarial thinking. The transparency is there in theory. The attention is not.</p><br><p>This is the gap that Linus&#39;s Law doesn&#39;t account for. Openness creates the <em>possibility</em> of scrutiny. It doesn&#39;t create scrutiny itself. That requires people, time, expertise, and money, all of which are scarce and unevenly distributed across the open source ecosystem.</p><br><h3>Two case studies in under-investment</h3><br><p><strong>Heartbleed</strong> is the clearest example. OpenSSL powers a significant portion of the world&#39;s encrypted web traffic. For years, it was maintained by a handful of volunteers operating on a minimal budget. When the Heartbleed vulnerability was disclosed in 2014, it exposed a uncomfortable truth: critical internet infrastructure was running on volunteer labor. The bug wasn&#39;t hard to find, it had been sitting in the codebase for over two years. Nobody with the right expertise was looking.</p><br><p><strong>Log4j</strong> tells a similar story. Log4Shell, disclosed in late 2021, affected a Java logging library embedded in millions of applications. The vulnerability had existed for years. Log4j was widely used, but &quot;widely used&quot; and &quot;widely reviewed&quot; are not the same thing. Developers pulled it into their dependency trees without thinking about who was maintaining it or whether anyone was auditing it for security flaws.</p><br><p>Both cases are often cited as open source failures. They&#39;re better understood as <em>funding</em> failures. The code was open. The investment wasn&#39;t there.</p><br><h3>Supply chain attacks as a second-order effect</h3><br><p>Once you understand that most open source projects are under-watched, the rise of supply chain attacks becomes predictable. Attackers understand the attention economy better than most defenders do.</p><br><p>Instead of targeting well-guarded applications directly, threat actors go after the dependencies, small libraries and packages buried deep in the dependency tree where nobody is looking closely. They insert malicious code into a package that gets pulled into thousands of downstream applications automatically. The openness of the ecosystem, combined with the trust developers place in their dependency chains, creates a wide and largely unmonitored attack surface.</p><br><p>Modern applications routinely depend on hundreds of open source packages, each with their own dependency trees. The surface area is enormous. Tools like Dependabot and Snyk help by flagging known vulnerabilities, but they can&#39;t catch a compromised maintainer or a malicious contribution that passes code review in an under-resourced project. The tooling helps at the edges. The core problem is structural.</p><br><h3>What actually makes open source secure</h3><br><p>When open source security works, it works extremely well, often better than proprietary alternatives. But the variable isn&#39;t openness. It&#39;s investment.</p><br><p>The projects with strong security track records share common traits: institutional backing, paid maintainers, formal vulnerability disclosure processes, and enough community engagement to sustain genuine code review. Linux has the Linux Foundation. Kubernetes has the CNCF and major corporate sponsors. After Heartbleed, the Core Infrastructure Initiative (now the Open Source Security Foundation) was created specifically to fund critical projects that the market had neglected.</p><br><p>These are the projects where Linus&#39;s Law actually holds. Not because the code is open, but because the economics support sustained attention. Transparency is a necessary condition for community-driven security. It is not a sufficient one.</p><br><p>This distinction matters for how organizations evaluate open source dependencies. The question isn&#39;t &quot;is this open source or proprietary?&quot; The question is: who maintains this, how is it funded, how quickly do they respond to security disclosures, and is the level of investment proportional to how critical this software is to my stack?</p><br><p>Active communities, frequent commits, responsive maintainers, and institutional support are better security indicators than the license on the repository.</p><br><h3>Scaling attention with AI</h3><br><p>If the core problem is that human attention can&#39;t keep pace with the volume of open source code, AI-assisted security tooling is the most credible attempt at closing that gap. Large language models and machine learning systems can scan codebases continuously, flag anomalous patterns in pull requests, and detect vulnerability classes across thousands of projects simultaneously, without needing funding for a full-time security team on every repo.</p><br><p>This is already happening at scale. Google&#39;s OSS-Fuzz project, augmented with AI-driven fuzzing, has identified hundreds of vulnerabilities across critical open source projects that human reviewers missed. GitHub&#39;s CodeQL and Copilot Autofix are performing automated vulnerability detection and suggesting remediations directly within pull request workflows. These tools don&#39;t replace human judgment, but they provide a layer of continuous scrutiny that most projects could never afford to staff.</p><br><p>For under-resourced maintainers, the ones at the center of the attention problem, AI tooling could be transformative. A solo maintainer who can&#39;t afford a security audit can still run AI-assisted static analysis on every commit. That doesn&#39;t solve the funding problem, but it meaningfully raises the floor for baseline security across the ecosystem.</p><br><p>The complication is that this dynamic cuts both ways. AI is also accelerating the volume of code entering open source. Developers using code generation tools are contributing more code, faster, and not all of it is being fully reasoned through before it ships. If the existing problem is that maintainers are overwhelmed by the review burden, AI-generated contributions could intensify that pressure, more code entering the pipeline, potentially carrying subtle flaws that look clean on the surface.</p><br><p>Attackers benefit from the same tools. LLMs can help craft more sophisticated malicious contributions, code that passes cursory review, obfuscates its intent, or exploits trust patterns in under-monitored dependency chains. The supply chain attack surface described earlier becomes harder to defend when the quality of adversarial contributions improves.</p><br><p>The honest assessment is that AI doesn&#39;t resolve the open source security problem. It reshapes it. Automated tooling raises the ceiling for what well-resourced projects can catch and lowers the barrier for small projects to participate in basic security hygiene. But it also raises the sophistication of threats and increases the volume of code that needs review. The projects that will benefit most are the ones that adopt these tools deliberately, integrating AI-assisted scanning into their workflows rather than treating it as a passive safety net.</p><br><h3>The wrong question</h3><br><p>The open source security debate is usually framed as a binary: is open source more secure than proprietary software, or less? That framing misses the point entirely. Proprietary software has its own long history of unpatched vulnerabilities, opaque security practices, and slow disclosure timelines. The difference is that when proprietary software fails, you can&#39;t see why. When open source fails, the postmortem is public.</p><br><p>The more productive question is whether a given project, open source or otherwise, has the governance, funding, and community to match its level of criticality. A well-funded open source project with active maintainers, a formal security process, and AI-assisted tooling in its pipeline will outperform a proprietary product with a skeleton crew and no external audit. A solo-maintained open source library with no funding will not, though the floor is rising as automated security tools become more accessible.</p><br><p>Open source gives you the transparency to make that judgment. AI tooling is starting to give under-resourced projects the means to act on it. Whether that shifts the economics enough to close the attention gap is the question that will define open source security for the next decade.</p><br>",
            "url": "https://jg.dev/writing/the-myths-and-realities-of-open-source-software-security",
            "title": "The myths and realities of open source software security",
            "summary": "\"Many eyes\" sounds reassuring until you look at who's actually watching. What really makes open source secure (and what doesn't).",
            "image": "https://jg.dev/images/writing/the-myths-and-realities-of-open-source-software-security/og.png",
            "date_modified": "2026-02-21T00:14:00.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Security"
            ]
        },
        {
            "id": "https://jg.dev/writing/slow-cook-your-ideas",
            "content_html": "<p>After you have been marinating on a thought for some time, there’s something sweet about putting it on a slow simmer. An idea you didn’t rush to the terminal with, instead one that sat in conversation for days or weeks, getting turned over, pressure-tested, shaped, and then built only after it was fully cooked.</p><br><p>That’s how I’ve been working lately. I use LLMs as a thinking partner to develop ideas slowly before I write a single line of code. And I’ve noticed two things: the output is higher quality, and I have a much stronger mental model of what I’m building before I build it.</p><br><p>The process has a rhythm to it. I’ve started calling it <strong>zoom out, zoom in, build, repeat</strong>.</p><br><h2>The macro cycle</h2><br><p>Every project starts with big, blurry questions. What is this thing? Who is it for? What’s the core interaction? What makes it different from the twelve other things that look like it?</p><br><p>These aren’t questions you answer in one sitting. They’re the kind of questions that benefit from a slow back-and-forth over days. I’ll open a conversation with Claude and describe the shape of an idea. Not the implementation, the intent. What I want someone to feel when they use it. What problem I keep running into that made me want to build it in the first place.</p><br><p>The LLM pushes back. It asks clarifying questions. It points out contradictions I glossed over. It suggests framings I hadn’t considered. And because the conversation persists, I can leave, think, come back, and pick it up again with more clarity.</p><br><p>This is the zoom-out phase. You’re circling the idea from altitude, getting the proportions right before you commit to anything.</p><br><h2>The micro cycle</h2><br><p>Once the big picture feels solid, I zoom in. Way in. Now I’m talking about data models, component hierarchies, API shapes, edge cases. I’m asking things like “if a user has 200 tools in their stack, what does pagination look like here?” or “should this be a server component or a client component given that I need scroll tracking?”</p><br><p>This is where the LLM earns its keep differently. At the macro level it’s a thinking partner. At the micro level it’s a rubber duck with opinions. I describe what I’m about to build in detail, and the act of describing it, combined with the model’s responses, forces me to confront decisions I would have otherwise deferred until I was halfway through implementation and stuck.</p><br><p>The micro cycle is where hand-waving dies. You can’t describe a feature to an LLM in vague terms and get useful feedback. You have to be specific. And that specificity is the whole point.</p><br><h2>Then you build</h2><br><p>After a few rounds of zooming out and zooming in, something clicks. The idea isn’t just an idea anymore. It’s a plan with clear edges. I know what I’m building, why each piece exists, and where the tricky parts are.</p><br><p>When I finally open my editor, the building phase is faster and more decisive. I’m not figuring things out as I go. I’ve already had the arguments with myself (via the LLM) about whether this should be a modal or a page, whether the state lives in the URL or in context, whether the MVP needs that feature or not.</p><br><p>I still hit surprises during implementation. But they’re smaller surprises. The structural decisions were already made during the slow cook.</p><br><p>And to be clear, this isn’t one pass through the cycle per project. A single feature might go through three or four iterations. I’ll zoom out on the concept, zoom in on the data model, build it, realize the UX feels wrong, zoom back out to reconsider the interaction pattern, zoom in on a different component structure, and build again. The cycle is fractal. It operates at the project level, the feature level, and sometimes even at the level of a single component that I can’t get right.</p><br><h2>Then you repeat</h2><br><p>Here’s the part that took me a while to internalize: the cycle doesn’t end after you ship v1. Building reveals new questions that conversation alone couldn’t surface. Users do things you didn’t expect. Performance bottlenecks show up where you assumed things would be fast. A feature you thought was essential turns out to be noise.</p><br><p>So you zoom back out. You take what you’ve learned from building and bring it back to the conversation. “Here’s what happened when I shipped this. Here’s what users are actually doing. Here’s what I got wrong.” And the slow cook starts again, now with better ingredients.</p><br><p>The full loop looks like this:</p><br><p><strong>Zoom out</strong>, Explore the idea. Challenge assumptions. Get the shape right.<br><strong>Zoom in</strong>, Go deep on specifics. Force precision. Resolve ambiguity.<br><strong>Build</strong>, Execute with confidence because you’ve already done the thinking.<br><strong>Repeat</strong>, Learn from what you built and start the conversation again.</p><br><h2>Why slow is fast (but not the fastest)</h2><br><p>The instinct with LLMs is to go faster. Generate code immediately. Scaffold a project in minutes. And for known problems with clear solutions, that speed is great.</p><br><p>But for new ideas, the ones that don’t have an obvious shape yet, speed is a trap. You end up building the first thing that comes to mind, then spending twice as long refactoring when you realize the foundation was wrong.</p><br><p>Slow cooking with an LLM inverts this. You spend more time in conversation and less time in rework. The thinking phase is longer but the building phase is shorter and cleaner. Total time to a good outcome goes down, not up.</p><br><p>I want to be honest about where this sits on the speed spectrum. This process is significantly faster than pre-AI development. Before LLMs, the slow cook still happened, it just happened in your head, on whiteboards, in docs no one read, and in conversations with coworkers who had their own stuff to do. Having an always-available thinking partner that can engage deeply with your specific problem compresses weeks of noodling into days.</p><br><p>But it’s not as fast as handing everything to AI and accepting whatever comes back. If you prompt an agent to build your feature from a one-line description and ship whatever it generates, you’ll beat me to deployment every time. The question is what you’re left with afterward, which brings me to the part I’ve been thinking about most.</p><br><h2>The cognitive debt problem</h2><br><p>There’s a term gaining traction that I think captures the real risk of AI-first development: cognitive debt. It’s what accumulates when you build things you don’t understand.</p><br><p>And it’s accumulating fast. Spotify’s co-CEO Gustav Söderström recently told investors that <a href=\"https://techcrunch.com/2026/02/12/spotify-says-its-best-developers-havent-written-a-line-of-code-since-december-thanks-to-ai/\">the company’s most senior engineers haven’t written a single line of code since December</a>, they only generate and supervise it. Entire teams are optimizing their codebases for AI readability rather than human readability. Developers are fully outsourcing code generation to models and becoming dependent on them not just for productivity, but for comprehension of their own systems.</p><br><p>The result is a growing number of people who are shipping software they cannot reason about. They can prompt for changes. They can ask the model to fix bugs. But they’ve lost, or never built, the mental model of how their system actually works. They can’t trace a request through their own architecture. They can’t predict what will break when they change something. They’re operating on faith in the model, not understanding of the code.</p><br><p>This is where I think the slow cook matters most. The zoom out, zoom in, build, repeat cycle isn’t just a workflow preference. It’s a defense against cognitive debt. Every time you zoom out, you’re reinforcing your understanding of why the system exists. Every time you zoom in, you’re engaging with how it works at a granular level. Every time you build, you’re testing that understanding against reality. And every time you repeat, you’re updating your mental model with what you learned.</p><br><p>You end up with something that pure code generation can’t give you: a codebase you actually hold in your head. You can reason about it, extend it, debug it, and explain it, because you spent time thinking about it, not just prompting for it.</p><br><p>I’m not arguing against using AI to write code. I use it constantly. I’m arguing against using AI as a substitute for understanding what you’re building. There’s a difference between leveraging a tool and surrendering to one.</p><br><h2>The LLM as a slow cooker</h2><br><p>I don’t think of Claude as a code generator. I think of it as the most patient conversation partner I’ve ever had. It doesn’t get bored when I revisit the same idea for the fifth time with a slightly different framing. It doesn’t lose context when I disappear for three days and come back with new constraints. It doesn’t judge me for changing my mind.</p><br><p>That patience is what makes the slow cook work. Good ideas need time and pressure. They need someone to keep asking “but why?” and “what about this case?” until the mushy parts firm up.</p><br><p>If you’re building something new and you find yourself jumping straight from idea to implementation, try slowing down. Open a conversation. Describe what you want to build without asking for code. Zoom out, zoom in, build, repeat.</p><br><p>Your ideas will be better for it. And so will your code.</p><br><hr><br><p><em>I’ve been reading a lot about cognitive debt, and I think there’s a related concept that doesn’t get enough attention: cognitive endurance, the ability to sustain deep understanding of a system as it grows in complexity, and the cognitive load that comes with it. <a href=\"https://jg.dev/writing/the-five-dimensions-of-cognitive-work\">Read the next post</a>.</em></p><br>",
            "url": "https://jg.dev/writing/slow-cook-your-ideas",
            "title": "Slow cook your ideas",
            "summary": "I’ve been using LLMs to simmer ideas before I build. The result is better output and a stronger mental model.",
            "image": "https://jg.dev/images/writing/slow-cook-your-ideas/og.png",
            "date_modified": "2026-02-15T12:00:00.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Engineering"
            ]
        },
        {
            "id": "https://jg.dev/writing/send-cyber-junk-to-your-pi-hole-2",
            "content_html": "<p>A family member came over for movie night. Before the movie started, we sat through the usual barrage of ads. Except these weren&#39;t the usual ads. These were clearly, specifically, <em>personally</em> targeted at my guest, the kind of ads that made the room go quiet for a beat too long.</p><br><p>It was awkward. And afterward, I couldn&#39;t stop thinking about how it happened.</p><br><h3>My Theory</h3><br><p>My family member had connected his phone to my home wifi earlier that evening. That&#39;s normal everyone does it. His phone, now on my network, started doing what phones do: talking to ad servers, syncing data, pinging analytics endpoints. His advertising profile, built from his browsing habits, app usage, and who knows what else, became associated with my home IP address. When the streaming service served ads, it pulled from that profile. On my TV. In front of everyone.</p><br><p>I wanted to see if I was right. And I wanted to make sure it never happened again.</p><br><h3>What I found</h3><br><p>I started monitoring DNS requests on my home network, and what I found was jarring. Devices I thought were idle were constantly reaching out to ad servers, analytics endpoints, and domains I&#39;d never heard of. My wife&#39;s computers were making requests to dozens of tracking domains even while they were in sleep mode.</p><br><p>Every one of those requests is a data point. Enough data points and ad networks can build a profile tied to your IP address, your devices, and anyone who connects to your network. That&#39;s how my guest&#39;s ad profile bled into my living room.</p><br><h3>Enter Pi-hole</h3><br><p><a href=\"https://github.com/pi-hole/pi-hole/tree/master\">Pi-hole</a> is an open source DNS server that runs on a computer in your home network. Your devices use DNS to translate domain names into IP addresses. Pi-hole intercepts that step; it filters DNS queries based on a configured list of domains known for ads, malware, trackers, and more. When a device tries to reach a domain, Pi-hole checks it against a blocklist before the request ever leaves your network. If the domain is on the list, Pi-hole returns nothing. The request dies silently.</p><br><h3>What it actually looks like</h3><br><p>After running Pi-hole for a week, my dashboard showed that roughly 52% of all DNS queries on my network were being blocked. Most of those were ad servers and telemetry endpoints. A few were domains I&#39;d never seen before, quietly collecting data from devices I forgot were even connected.</p><br><p>My wife&#39;s computers got noticeably quieter. Ads on free streaming services didn&#39;t disappear entirely since some are baked into the video stream itself. But the targeted ones dropped off. The creepy &quot;how did they know that&quot; factor went away.</p><br><h3>What Pi-hole won&#39;t do</h3><br><p>Pi-hole isn&#39;t a silver bullet, and I&#39;d rather be honest about that than oversell it.</p><br><p><strong>Devices can bypass it.</strong> DNS settings are per-device. If a device is hardcoded to use Google&#39;s DNS (8.8.8.8) or Cloudflare (1.1.1.1), it skips your Pi-hole entirely. Some smart TVs and streaming devices do this. You can mitigate it with router-level firewall rules that redirect all DNS traffic to your Pi-hole, but that&#39;s an extra step.</p><br><p><strong>IP addresses bypass DNS altogether.</strong> If an app already knows the IP address of the server it wants to reach, it doesn&#39;t need to look up a domain name. No DNS query means Pi-hole never sees it.</p><br><p><strong>Unknown domains slip through.</strong> Pi-hole can only block what&#39;s on the list. New tracking domains get created constantly. Keeping your blocklists updated helps, but there will always be gaps.</p><br><p>For these reasons, Pi-hole should be one layer in your setup, not the only layer. Pair it with a browser ad blocker, a VPN if that matters to you, and basic network hygiene.</p><br><h3>Worth it?</h3><br><p>Absolutely. Pi-hole is free, runs on minimal hardware (a Raspberry Pi, an old laptop, a Docker container), takes maybe 30 minutes to set up, and requires almost no maintenance once it&#39;s running. I run Pi-hole on an old Dell XPS from 2016.</p><br><p>More importantly, it changed how I think about my home network. Before Pi-hole, I had no visibility into what my devices were doing. Now I do. And the next time someone connects to my wifi for movie night, I&#39;m not worried about what the TV is going to show us.</p><br><p>I&#39;ve since added a Ubiquiti Dream Router 7 to my setup, and what I found there makes the Pi-hole discoveries look tame. More on that soon.</p><br>",
            "url": "https://jg.dev/writing/send-cyber-junk-to-your-pi-hole-2",
            "title": "Send cyber junk to your Pi-hole: Protect your home network",
            "summary": "How an embarrassing movie night led me to discover what my home network was leaking, and how Pi-hole fixed it.",
            "image": "https://jg.dev/images/writing/send-cyber-junk-to-your-pi-hole-2/og.png",
            "date_modified": "2026-02-12T00:02:12.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Security"
            ]
        },
        {
            "id": "https://jg.dev/writing/choosing-a-programming-language-for-your-next-project",
            "content_html": "<p>In a greenfield project, where there are no pre-existing code constraints, the selection of a programming language is a foundational decision with broad implications. The criteria to consider when choosing a language for a new project involves balancing theory with practicality and provides a stable foundation for scalability, efficiency, and sustainability.</p><br><h3>Project requirements</h3><br><p>Selecting a programming language should begin with a detailed analysis of project requirements and the language paradigms that can best address them. A language&#39;s syntax and underlying model significantly impacts its suitability for specific types of applications:</p><br><p>Functional languages are well-suited for tasks requiring higher-order functions, immutability, and statelessness, often found in data-centric applications or environments needing formal verification.</p><br><p>Object oriented languages are useful for projects that benefit from encapsulation, inheritance, and modularity, such as enterprise software systems with complex structures.</p><br><p>Procedural languages might be preferred for systems level programming, where control flow and state management are central.</p><br><p>For instance, a real time control system demands a language with low latency and fine-grained memory management, such as C or Rust, due to their close-to-hardware capabilities. A language that aligns with your project requirements will streamline the software design and implementation.</p><br><h3>Team considerations</h3><br><p>The skill level and background of the development team should influence the choice of language. Also, the time required to master the programming language should also be considered. A language’s complexity, syntactic clarity, and paradigm will influence both initial productivity and long term maintainability. If the team has experience with imperative languages, introducing a purely functional language might slow progress due to the paradigm shift required.</p><br><p>Languages that promote best practices such as type safety, memory safety, and modular design can lead to more maintainable and error-resistant code. Languages like Rust enforce memory safety without a garbage collector, which can prevent common errors while encouraging developers to write more efficient code. Adopting a language that promotes structured and maintainable code can reduce technical debt over time, an essential consideration for any greenfield project.</p><br><h3>Ecosystem</h3><br><p>The surrounding ecosystem and support for a language are also critical in facilitating reliable development. An extensive, mature ecosystem often indicates a language’s stability and the breadth of existing resources, such as libraries and frameworks, which can be leveraged for specific functionality.</p><br><p>Libraries simplify implementing standard functionalities like encryption, machine learning, and database management. For instance, Python’s libraries for data processing and machine learning, such as Pandas and TensorFlow, make it a preferred language for data-intensive projects.</p><br><p>Languages backed by active academic and industry communities are often more sustainable, with ongoing advancements and up to date practices. This support also reduces risks since it&#39;s more likely that challenging problems have documented solutions.</p><br><h3>Scalability and performance</h3><br><p>Scalability is a central concern for many modern applications, particularly in fields that handle large amounts of data or support millions of concurrent users. Selecting a language with efficient concurrency models or fine grained control over system resources can be essential for highly scalable applications. Key considerations include:</p><br><p>Languages like Go and Erlang offer concurrency models well-suited for networked applications that handle numerous simultaneous processes. For instance, Go facilitates lightweight concurrency which is ideal for web servers. Languages with explicit memory management allow precise control, which can be critical for resource constrained environments. Rust’s ownership model is particularly innovative, enforcing memory safety while avoiding garbage collection.</p><br><p>Balancing efficiency with practical performance ensures that the selected language will serve the application well as it scales. In particular, highly optimized languages like C and C++ may be necessary for applications where low-level performance tuning is essential.</p><br><h3>Security</h3><br><p>Languages with active security communities provide ongoing support, tools, and best practices to developers, which can improve application security posture over time. Security is increasingly vital in software design, particularly as regulatory demands intensify and application complexity grows. Some languages are inherently more secure due to their memory management techniques, type safety, or error handling paradigms:</p><br><p>Strongly typed languages can prevent a variety of runtime errors, reducing the risk of defects and vulnerable software. Languages like Rust prioritize memory safety, minimizing the likelihood of common vulnerabilities, such as buffer overflows, which are prevalent in unmanaged languages like C. </p><br><h3>Conclusion</h3><br><p>Selecting a programming language for a new project requires a strategic evaluation of the practical application and project specific requirements. By carefully analyzing the language’s alignment with project needs, team expertise, ecosystem, scalability, maintainability, and security, you can create a robust foundation that will sustain the project’s growth and adaptability over time. Language selection is both an art and a science - thoughtful choices in the early stages of a new project set the stage for innovation, stability, and success.</p><br>",
            "url": "https://jg.dev/writing/choosing-a-programming-language-for-your-next-project",
            "title": "Choosing a programming language for your next project",
            "summary": "A framework for greenfield decisions when there's no legacy code forcing your hand. Trade-offs over trends.",
            "image": "https://jg.dev/images/writing/choosing-a-programming-language-for-your-next-project/og.png",
            "date_modified": "2024-11-05T00:35:29.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Engineering"
            ]
        },
        {
            "id": "https://jg.dev/writing/protect-modern-web-apis",
            "content_html": "<p>APIs are one of the most common threat vectors, <a href=\"https://www.gartner.com/en/documents/5471595\">according to a 2024 Gartner report</a>. APIs provide a pathway for data exchange and facilitate access to critical resources. They are the lifeline of modern web applications, bridging the gap between various services, platforms, and devices. This makes APIs prime targets for bad actors looking for valuables. Here we&#39;ll share practices that every developer should be familiar with to secure their web APIs.</p><br><ol><br><li>Authentication and authorization</li><br><li>Data encryption in transit and at rest</li><br><li>Validation and rate limiting</li><br><li>API gateway and web application firewalls</li><br><li>Testing and deployment pipeline</li><br></ol><br><h3>Common threats to APIs</h3><br><p>Compromised APIs can lead to compromised data, service outages, loss of public trust, and damage to your reputation. Before diving into the specifics, I believe it&#39;s important to understand the most common types of threats that target APIs today.</p><br><h5>Injection attacks</h5><br><p>Injection attacks have been around for ages, yet they still present a major issue for modern web applications. Injection attacks occur when untrusted inputs are improperly handled.</p><br><h5>No encryption or weak encryption</h5><br><p>Sensitive data is easily exposed during transmission between client and server. Appropriate encryption makes it much more difficult to decipher data being transmitted on a network.</p><br><h5>Denial of service attacks</h5><br><p>APIs without proper rate limiting can be overwhelmed by thousands, or millions, of request per second from a bad actor. The increased malicious network traffic within a very short period of time can bring down a system that isn&#39;t equipped to defend against this type of attack.</p><br><h5>Broken authentication</h5><br><p>Improper implementations of authentication mechanisms allow unauthorized access to resources and data.</p><br><h3>Authentication and authorization</h3><br><p>OAuth 2.0 and OpenID Connect (OIDC) remain the industry standards for handling API authentication and authorization. OAuth 2.0 provides delegated access to specific resources on behalf of a user. OIDC is an authentication mechanism to verify a user&#39;s identity on top of OAuth 2.0. Together they offer secure and flexible access management solutions. Avoid relying solely on API keys for endpoints with sensitive data. Always pair API keys with a strong authentication method like OAuth. Use short-lived access tokens and refresh tokens to minimize risks if a token is exposed.</p><br><p>Role-based access controls enforce access restrictions based on user roles, allowing different levels of access based on predefined categories. Attribute-based access controls provide more precise rules for allowing access based on attributes such as location or device type. Combining both types of controls provides broad, but nuanced, access policies aligning with security best practices.</p><br><h3>Data encryption in transit and at rest</h3><br><p>Encryption is essential. APIs frequently exchange sensitive information and should be secured using TLS 1.2 or higher. This ensures that data is encrypted while in transit between clients and minimizes the risk of interception from &quot;man in the middle&quot; attacks. For data that&#39;s stored on servers, use symmetric (i.e. AES-256) for fast, secure encryption of stored data. Hashing algorithms like <code>bcrypt</code> can be used to secure stored credentials like tokens and passwords. Even if a breach occurs, encryption renders the data unreadable.</p><br><h3>Validation and rate limiting</h3><br><p>Without proper input validation, APIs are susceptible to injection attacks. Never trust inputs by default and always implement validation on both the client and server. Consider using validation libraries that filter or sanitize inputs. Additionally, protect APIs from being overwhelmed by an unmanageable number of repeated and concurrent requests by using rate limiting. Setting reasonable thresholds for the frequency that a user can access your API will prevent abuse and interruption of the service.</p><br><h3>API gateway and web application firewalls</h3><br><p>In cloud infrastructure, API gateways are the central access point for APIs. Cloud service providers typically offer rate limiting, user authentication, and centralized logging and monitoring. Cloud providers also offer web application firewalls to filter malicious traffic based on predefined rules. Using an API gateway and a WAF together allows you to enforce security policies consistently across APIs.</p><br><h3>Deployment pipeline</h3><br><p>API security must be integrated into your CI/CD pipeline. This makes it seamless to automate security scanners to catch vulnerabilities early in the development cycle. You may also consider static and dynamic analysis tools to detect potential issue in your code before deployment and in the running application.</p><br><h3>Conclusion</h3><br><p>API security is a shared responsibility that requires attention at every layer. It&#39;s important to stay vigilant and aware of emerging threats and evolving security standards to keep APIs secure. The practices outlined in this articles provides a powerful toolkit to prevent potential threats. Security is no longer and option, it&#39;s a responsibility.</p><br>",
            "url": "https://jg.dev/writing/protect-modern-web-apis",
            "title": "Techniques for protecting web APIs from common threats in 2024",
            "summary": "APIs are the most common threat vector in 2024. Here's what actually stops attacks in production.",
            "image": "https://jg.dev/images/writing/protect-modern-web-apis/og.png",
            "date_modified": "2024-11-01T23:18:29.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Security"
            ]
        },
        {
            "id": "https://jg.dev/writing/from-black-box-to-open-book-using-net-logs-to-understand-your-browsers-behavior",
            "content_html": "<p>Software engineers understand the importance of transparency in a system. When it comes to network communication, the most critical insights are often hidden in mundane technical details. We need visibility into software behavior to effectively troubleshoot, evaluate, and secure our systems. Network logs offer a look under the hood into a browser&#39;s inner workings. It reveals insights that can help identify performance issues, improve web app security, and ensure user privacy. We will walk through why net logs are useful, how to collect them, and how to use them to understand your browser&#39;s behavior.</p><br><h3>Why net logs matter</h3><br><p><a href=\"https://www.chromium.org/developers/design-documents/network-stack/netlog/\">Chrome&#39;s net-export</a> feature allows us to record every HTTP/HTTPS request made and every response received by the browser as it interacts with websites. The logs contain details about the URLs accessed, request and response times, headers, and resources loaded. By inspecting these details we can pin point long load times, unencrypted requests, issues with CORS requests, DNS queries, SSL certificates, and any external resource a site calls (analytics scripts, trackers, etc.).</p><br><h3>How to use net-export</h3><br><p>Net-export is currently a feature available to Chromium-based browsers. This includes the latest MS Edge, Vivaldi, Brave, Google Chrome, and others. It works by clicking a button to start recording network data to a file, interacting with a website, then stopping the recording. Afterward, the data is output to a <code>.json</code> file that can be used to inspect the data. There are several tools available, like <a href=\"https://netlog-viewer.appspot.com/\">Netlog-viewer</a>, to view, analyze, and interact with the JSON data in a more user friendly manner. Here are steps to use net-export:</p><br><ul><br><li>Open a chrome-based browser and navigate to chrome://net-export.</li><br><li>Click &quot;Start logging to disk&quot; and save the <code>.json</code> file.</li><br><li>In a separate tab or window, navigate to a site or reproduce an issue you might have.</li><br><li>Go back to your net-export tab and click &quot;Stop logging&quot;.</li><br></ul><br><h3>Analyzing the data</h3><br><p>You now have a raw record of browser network traffic. Chrome net logs are intricate and contain valuable details for analyzing browser behavior. Each entry in the log has an event type. Each event is linked with a source ID which groups related events in the same session or request flow. Below I&#39;ve included a description of some key events that may be of interest.</p><br><p><code>URL_REQUEST_START_JOB</code> starts an HTTP job for a URL request.</p><br><p><code>HTTP_STREAM_REQUEST</code> starts an HTTP stream request.</p><br><p><code>HTTP_STREAM_REQUEST_BOUND_TO_JOB</code> links an HTTP stream request to a specific job like an API call or a JavaScript <code>fetch()</code>.</p><br><p><code>HTTP_TRANSACTION_SEND_REQUEST</code> indicates when an HTTP request is sent.</p><br><p><code>HTTP_TRANSACTION_READ_RESPONSE_HEADERS</code> marks the receipt of HTTP response headers.</p><br><p><code>SSL_CONNECT</code> begins the SSL/TLS handshake process.</p><br><p><code>SOCKET</code> events represent TCP connection attempts, including DNS resolution and SSL handshake events.</p><br><p><code>SSL_HANDSHAKE_MESSAGE</code> logs each handshake in the SSL/TLS process.</p><br><p><code>CERT_VERIFIER_JOB</code> verifies the SSL certificate.</p><br><p><code>TCP_CONNECT_ATTEMPT</code> attempts to establish a TCP connection.</p><br><p><code>TCP_CONNECT</code> logs the start of a TCP connection.</p><br><p><code>SOCKET_POOL_CONNECT_JOB</code> represents a job connecting to a socket.</p><br><p><code>SOCKET_IN_USE</code> shows the socket being used for communication.</p><br><p><code>QUIC_SESSION</code> logs a <a href=\"https://en.wikipedia.org/wiki/QUIC\">QUIC session</a>.</p><br><h3>Privacy and security considerations</h3><br><p>When handling net logs, remember they contain sensitive information, including session cookies, access tokens, and personal data. Store logs in a secure location and encrypt it. Also consider redacting sensitive headers or payloads that may contain personal data before sharing.</p><br><h3>Closing thoughts</h3><br><p>Net-export allows us to unravel the network intricacies of browser behavior. By understanding net logs we gain insight into every request and response exchanged between our browser and the internet. This sheds light on security risks, privacy concerns, and performance issues. Next time that you are troubleshooting an issue or working on a security assessment, remember that net logs are waiting to be explored.</p><br>",
            "url": "https://jg.dev/writing/from-black-box-to-open-book-using-net-logs-to-understand-your-browsers-behavior",
            "title": "From black box to open book: Understand your browser's behavior using net logs",
            "summary": "Your browser knows more than it shows you. How to use network logs to troubleshoot, evaluate, and secure what's really happening.",
            "image": "https://jg.dev/images/writing/from-black-box-to-open-book-using-net-logs-to-understand-your-browsers-behavior/og.png",
            "date_modified": "2024-10-31T09:23:06.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Tooling"
            ]
        },
        {
            "id": "https://jg.dev/writing/5-practical-tips-for-secure-software-development",
            "content_html": "<p>Even the best developers fall prey to the common pitfalls that leave their code vulnerable to attacks. We all want to write performant, reliable, and secure code, but security can feel like an afterthought when we&#39;re up against competing priorities and tight deadlines. By following a few guidelines you can greatly reduce security risks in your applications without it becoming too overwhelming.</p><br><h3>Security-conscious developer starter pack</h3><br><ul><br><li>Sanitize user input and use output encoding</li><br><li>Avoid hard coding sensitive information</li><br><li>Practice the principle of least privilege</li><br><li>Regularly update dependencies</li><br><li>Use error handling</li><br></ul><br><h3>Sanitize user input and use output encoding</h3><br><h5>Input sanitization</h5><br><p>Many programming languages provide libraries for sanitizing inputs. These libraries are the preferred tool for input sanitization compared to creating custom solutions, which may be prone to human error. Take <a href=\"https://jg.dev/writing/zero-trust-in-code\">a zero trust approach</a> by always assuming breech. Treat data from users and even from your own database, as malicious. For example, when working with HTML text inputs, sanitize the data by stripping away any script tags or unexpected character sequences.</p><br><p>It&#39;s best practice to implement input sanitization on both the client side and the server side. Client side code can be changed by the user or someone with malicious intent, but including sanitization on the client can help reduce the load of the server. Make sure that you do not modify the data after sanitization, else the data becomes dirty and untrusted again.</p><br><h5>Output encoding</h5><br><p>While sanitization &quot;cleans&quot; data as it comes in from the user, output encoding transforms data before it is sent to the browser. The encoding process should ideally occur on the server at the point when data is being prepared for the browser. This reduces the likelihood of executable code being injected into a payload from your system. Output encoding takes different forms depending on the context, but always involves encoding data being sent from a server to a client.</p><br><p>Ideally, input sanitization occurs as soon as possible after the data processing has begun. In contrast, output encoding would ideally occur as late as possible before data is sent to the browser. Don&#39;t encode or change data that&#39;s already been encoded. Encoding data multiple times may not yield the intended results.</p><br><h3>Avoid hard-coding sensitive information</h3><br><p>Some software applications need to use API keys to access an API or database passwords to access a database. Sensitive information should never be stored in your code, and definitely not in a public repository. This is a common mistake that can have devastating consequences. Use environment variables to store sensitive information. Environment variables are accessible by your code, but is not leaked if your code is exposed.</p><br><p>Most cloud providers support environment files, so you can easily load secrets without embedding them in your codebase. Cloud providers sometimes also have &quot;secrets management&quot; tools which provide robust encryption and controls for sensitive data. Remember, if you need to store sensitive data, keep it out of your code and do not commit it to source control.</p><br><h3>Practice the principle of least privilege</h3><br><p>Grant only the minimal level of access required for users, applications, and processes. It&#39;s a fundamental concept in security that often gets ignored when things get busy. Only grant users and services access that is absolutely necessary. For actions like updating user data, assign specific roles that can perform such operations. This way, if an account with minimal privileges is compromised, the impact is contained. The principle of least privilege is the best way to protect your application from internal and external threats.</p><br><h3>Regularly update dependencies</h3><br><p>A package that is outdated may have known security vulnerabilities that attackers can exploit. Use automated tools like Dependabot or <code>npm audit</code> to scan your dependencies for known vulnerabilities and suggest fixes. Applying patches and updating dependencies regularly keeps your application secure and up to date with the latest security patches.</p><br><h3>Use error handling</h3><br><p>Error messages are essential for debugging, but can also become a treasure trove of information for an adversary. Avoid overly detailed error messages in production to prevent attackers from gathering internal details about your system. Log errors in a way that is only accessible to developers using tools like Sentry or ELK (Elasticsearch, Logstash, and Kibana). These can capture the necessary information for debugging without leaking sensitive data. Balancing clarity with caution helps users while keeping important details hidden.</p><br><h3>Conclusion</h3><br><p>By adopting these tips into your development, you&#39;ll find that security becomes a natural part of your development process. Secure software development is a habit that starts with writing secure code and building safer applications. Our responsibility to put security first becomes more important as technology evolves. I hope this brings you a little closer to making security a part of your dev flow!</p><br>",
            "url": "https://jg.dev/writing/5-practical-tips-for-secure-software-development",
            "title": "5 practical tips for secure web development",
            "summary": "Security feels like an afterthought when deadlines are tight. A few guidelines that fit into the way you already work.",
            "image": "https://jg.dev/images/writing/5-practical-tips-for-secure-software-development/og.png",
            "date_modified": "2024-10-31T00:15:28.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Security"
            ]
        },
        {
            "id": "https://jg.dev/writing/infrared-cameras-what-you-need-to-know",
            "content_html": "<p>Infrared cameras are becoming the norm for logging into laptops and mobile devices. They provide added convenience for authenticating and presence detection using facial recognition, but they also present new avenues for exploitation. This post discusses three things that users should consider when purchasing or using devices with IR sensors.</p><br><h3>What is infrared?</h3><br><p>Infrared light is a section of the electromagnetic spectrum just past the color red that is closely related to heat. When we refer to heat radiating from an object, we are often talking about infrared radiation. When infrared radiation hits an object, it increases the vibration of the molecules within the object. This is why we feel the warmth of the sun, a fire, or a cup of tea. Infrared waves transfer energy to our skin and we perceive this as heat.</p><br><p>Unlike some species in the animal kingdom, humans cannot <em>see</em> infrared, but we can <em>detect</em> infrared through heat. Because infrared is directly related to temperature, infrared cameras can measure temperature by detecting infrared radiation.</p><br><h3>Benefits of infrared cameras for authentication</h3><br><p>Due to the relationship between infrared and heat, infrared cameras can work without ambient light, at night, and in dark environments - this is where typical cameras fail. Infrared cameras can emit a controlled pulse of infrared light to illuminate a face and process the data received by the sensor for authentication. This is crucial for reliable facial recognition.</p><br><p>The heat detection capability of infrared sensors adds an extra layer of security by distinguishing between live human skin and inanimate objects. Additionally, by detecting how infrared light bounces off of the skin, infrared cameras can also provide three dimensional (depth) information about the user&#39;s face or the environment.</p><br><p>These features make infrared facial recognition far less susceptible to spoofing attempts or bypassing the system using masks, photos, or other replicas that lack the heat profile of a human face. The heat emitted by a user&#39;s face confirms life, an important factor in defending against fraud.</p><br><h3>The threat of unauthorized access</h3><br><p>Infrared cameras on laptop and mobile devices are just as susceptible to unauthorized access as normal built-in cameras. If you use a camera cover on your laptop webcam, consider covering your infrared camera as well. While manufacturers are making solid attempts at protecting authentication related data generated by infrared sensors, do your due diligence to ensure that no unauthorized applications are accessing your infrared camera unless they have explicit permission.</p><br><h3>Control your camera settings</h3><br><p>The most popular consumer devices with integrated infrared cameras are the Apple iPhone, several Android phones, and Window&#39;s laptops. Most infrared cameras are enabled by default and used when you set up facial recognition software like Window&#39;s Hello. Take some time to periodically review and adjust your camera settings. Both iOS and Windows operating systems allow you to control which applications have access to your cameras and there are multiple ways to manage access to your camera. Below is a list of options to manage or disable your integrated camera.</p><br><ul><br><li>Physical hardware - This requires opening your device and physically disabling the camera and/or sensors by disconnecting them from your system. This option is most practical if you are handy and do not need to use your integrated camera. This option could void your warranty and is not ideal for mobile devices.</li><br><li>BIOS - Some laptop firmware allows you to disable the camera in BIOS. This allows you to greatly reduce the likelihood of unauthorized access to your camera by applications installed on the machine. This configuration can be reversed but it does require rebooting your computer.</li><br><li>Registry - On Window&#39;s computers, camera access can be disabled in the registry. This is a configuration layer in the operating system where access to your integrated camera can be controlled. The Window&#39;s registry should not be updated unless you know what you are doing.</li><br><li>Device manager - On Window&#39;s computers, you may choose to disable your camera using the device manager. The device manager is a little more user friendly than the registry option, but provides the same effect.</li><br><li>System settings - On mobile devices and laptops, this is the most common way to manage camera access. Android, iOS, and Window&#39;s operating systems allow you to choose which applications may access your camera. You can also check privacy settings for additional controls to manage your camera.</li><br><li>Camera covers - Some laptop devices come with physical shutters, but take a closer look as some devices only include a shutter for the normal camera and not the infrared camera. You can also pick up camera covers for your laptop or phone on Amazon for pretty cheap - just make sure you cover both camera for the most effectiveness.</li><br></ul><br><h3>Conclusion</h3><br><p>Infrared sensors provide accurate, secure, reliable, and efficient biometric authentication on laptops and mobile devices. They improve the overall user experience by adapting to low-light and no-light environments. By capturing your face&#39;s unique depth and heat characteristics, these sensors can also tell the difference between a human&#39;s face and a mask or photo of a human face. </p><br><p>Infrared cameras are just as vulnerable as any other webcam. But you can mitigate potential security and privacy concerns through periodically managing your camera setting. There are several settings and configurations that you can use to maintain control over when and how your infrared cameras are used.</p><br>",
            "url": "https://jg.dev/writing/infrared-cameras-what-you-need-to-know",
            "title": "Infrared cameras: What you need to know",
            "summary": "IR cameras are everywhere now — laptops, phones, doorbells. Three things users should think about before trusting them.",
            "image": "https://jg.dev/images/writing/infrared-cameras-what-you-need-to-know/og.png",
            "date_modified": "2024-10-29T23:06:16.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Privacy"
            ]
        },
        {
            "id": "https://jg.dev/writing/delaying-the-latest-software-update-think-twice",
            "content_html": "<p>Some organizations avoid updating their third party applications for fear of introducing new defects or changes that require development, testing, deployment, and end user training. After all, the current version has been tested and works great, right? As someone who is averse to change, I&#39;m here to urge you to think again. Here are some factors to consider when &quot;waiting until the kinks are worked out&quot; in the latest software version.</p><br><h3>Why do we need software updates?</h3><br><p>To non-software engineers, software can seem like a black box where things magically happen, but that&#39;s hardly the case. In fact, it&#39;s more like this:</p><br><p><img src=\"https://jg.dev/images/writing/delaying-the-latest-software-update-think-twice/image-3.png\" alt=\"\"><br>Source: <a href=\"https://xkcd.com/2347/\">https://xkcd.com/2347/</a></p><br><p>Most modern software applications are composed of software built by other individuals and organizations. For most of us, the code that we write leverages other software projects to accomplishes some task in a way that meets our use case. This perspective is critical to understanding why we need software updates in the first place - change.</p><br><h5>Changing software dependencies</h5><br><p>Software is rarely perfect, and what is perfect today may not be perfect tomorrow. A single software application may depend on hundred, sometimes thousands of other pieces of software. Software dependencies are developed and maintained by other companies or people who may not be involved in our project in any way.</p><br><p>It&#39;s up to software teams to monitor their dependencies and adapt their software to the changes down stream. Third party software teams must also monitor their software dependencies and adapt to changes. Additionally, third party software may also change as a result of any the reasons below.</p><br><h5>Changing threat landscape</h5><br><p>As technology evolves, so do the ways in which bad guys exploit software systems. Security is one of the most critical reasons that software is updated. As vulnerabilities are discovered, software teams update the code to eliminate vulnerabilities. The newer, more secure code needs to be installed on all devices using the application.</p><br><h5>Resolve defects</h5><br><p>As software defects are identified and reported, the software team fixes the defect to improve the quality of their software. Defects may be introduced through changes in hardware or software dependencies or a mistake in configuration or business logic. In either case, resolving the defect will improve stability and result in a smoother experience.</p><br><h5>Enhancements</h5><br><p>As people use the application, software teams may request feedback on how the application can be improved. The result of this feedback is usually new code that provides additional capabilities or makes the application easier to use. Enhancement may also take the form of optimization which could result in performance improvements.</p><br><h5>Updates to legal requirements</h5><br><p>Software systems that must meet certain legal requirements may be updated to ensure their application remains compliant with new or updated laws and regulations. Regulations for software are usually in place to protect users and their data by providing specific requirements for how data is collected, what types of data may be collected, notification to the user of data collection, and how the data is stored and transmitted.</p><br><h3>Common reasons to delay software updates</h3><br><p>I was a huge fan of Apple iPhone in the early days. I felt so strongly about iPhones that I purchased brand new iPhone 3Gs for family members who swear by iPhones still to this day. However, my passion for the iPhone quickly turned into frustration after several automatic software updates changed the user interface. After being forced to learn and adopt a new experiences many times over the years, I finally gave up the iPhone for good in favor of Android.</p><br><h5>Change aversion or learning curve</h5><br><p>Some people and organizations avoid updates simply because they are used to the current version. Updating the software may require readjustment or retraining that users just are not ready for.</p><br><h5>Cost of testing and deployment</h5><br><p>For organizations, updating software often involves testing the new version to make sure it works well within their systems. This usually requires significant time and resources, especially in larger organizations where testing and deployment are complex.</p><br><h5>Waiting for stable versions</h5><br><p>Updates can sometimes cause unintended side effects. Some users prefer to wait until early defects in a new release are fixed to avoid potential disruption or instability. They may hold off on installing an update until a few minor releases have addressed any initial issues.</p><br><h5>Hardware constraints</h5><br><p>Some older devices may struggle to handle newer updates as a result of increased resource consumption. We have seen companies like <a href=\"https://learn.microsoft.com/en-us/troubleshoot/windows-client/installing-updates-features-roles/processor-not-supported-together-with-windows-version\">Microsoft stop supporting five year old hardware</a> with Windows updates, and planned obscelescence <a href=\"https://kluwerlawonline.com/api/Product/CitationPDFURL?file=Journals%5CEuCML%5CEuCML2018044.pdf\">tactics used by Apple and Samsung</a>.</p><br><h3>Factors to consider when evaluating your options</h3><br><p>Some software creators continue to support older versions of their software. There are a handful of strategies for supporting multiple software versions, but the most common is an N - 1 or N - 2 strategy. If &quot;N&quot; is the current version of a software, support is also provided for the previous version or the previous two versions. This buffer allows more flexibility for users to plan and prepare for an update according to their own schedule and budget. Delay too long, and you may find yourself in a position where a major, forced upgrade becomes inevitable to maintain security or functionality. When you have the option to delay a software update, make sure you know the end-of-support timeline and consider the following factors.</p><br><h5>Security</h5><br><p>It is a race against the clock for attackers to exploit known vulnerabilities. By staying on older version, your systems are left unprotected against known threats that could expose sensitive data or allow unauthorized access. Review the release notes to determine the criticality and nature of updates. Prioritize updates with security patches by setting up alerts or implementing automated patching where possible to shield yourself or your organization from emerging threats. If you can&#39;t update right away, limit sensitive transactions on unsupported versions as they are more vulnerable to attacks.</p><br><h5>Supply chain attacks</h5><br><p>A software supply chain attacks occurs when a vulnerability in a software dependency is exploited. Supply chain vulnerabilities are a significant concern when dealing with third party applications. Software vendors are expected to actively monitor their own supply chains and release updates to secure components impacted by upstream vulnerabilities. Implement policies to evaluate and prioritize updates from third party vendors to ensure that your organization&#39;s defenses remain current.</p><br><h5>Long term support versions</h5><br><p>Long term support (LTS) versions are stable releases of software that maintained for a longer period of time than regular releases. LTS versions do not guarantee total security and waiting for the next LTS release could leave you without the latest security standards, especially when dealing with applications that need to handle sensitive information. Consider adopting a hybrid approach, balancing the stability of LTS releases with regular updates to keep your software secure and up to date.</p><br><h5>Compatibility with other systems</h5><br><p>Software is interconnected. Update may include enhancements that make applications compatible with other updated systems or programs. Map our the applications and systems in your ecosystem that have dependencies, and regularly update them to maintain compatibility.</p><br><h3>Final thoughts</h3><br><p>Holding off on software updates may avoid short-term disruption, but the risks of delaying should not be overlooked. Establish a balanced approach that considers both stability and security. Review release notes to understand the impact of an update and plan for a smooth implementation. Software is designed to improve over time and software updates include the latest defenses against vulnerabilities. While it&#39;s smart to assess each update individually, a regular update schedule is a good idea. So next time an update is available, think twice before hitting &quot;Remind me later.&quot;</p><br>",
            "url": "https://jg.dev/writing/delaying-the-latest-software-update-think-twice",
            "title": "Delaying the latest software update? Think twice",
            "summary": "Skipping updates feels safe until it isn't. Why \"the current version works fine\" is a more dangerous stance than it sounds.",
            "image": "https://jg.dev/images/writing/delaying-the-latest-software-update-think-twice/og.png",
            "date_modified": "2024-10-29T01:18:43.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Engineering"
            ]
        },
        {
            "id": "https://jg.dev/writing/the-rise-of-low-code-and-citizen-developers-will-developers-become-redundant-2",
            "content_html": "<p>Low code and no code platforms can make app creation feel like designing a slide deck rather than coding a software project, giving rise to a new crop of creators - citizen developers. For many, this change brings excitement. For others, it may raise some existential questions. Are seasoned developers about to be replaced? No. Is there room for professional developers in a world where anyone can code? Yes!</p><br><h3>What are low code platforms?</h3><br><p>Low code and no code platforms aren&#39;t about coding in the traditional sense. Instead, you get a visual interface with drag-and-drop components and pre-built templates. Users can connect databases, create forms, and automate workflows with just a few clicks. This is pretty powerful as a team can go from idea to a functional prototype in a few hours rather than weeks. For small applications or dashboards, these tools are a game changers.</p><br><h3>The role of citizen developers</h3><br><p>The term &quot;citizen developer&quot; is a way to describe people who, without formal training in coding, use these platforms to create business solutions. This approach tries to address a problem in many organizations - the demand for software outpaces what development teams can deliver. Citizen developers help keep organizations adaptable, especially for quick solutions that don&#39;t need complex architecture or back-end code.</p><br><h3>Why low code platforms won&#39;t replace traditional developers</h3><br><p>As someone who has developed pro code on low code platforms for a variety of use cases and organizations, there are limitations that professional developers need to address.</p><br><ul><br><li>Integrations and interoperabilityLow code platforms might offer basic integrations out-of-the-box, but these are typically limited in depth and flexibility. For enterprise level applications that need to work with legacy systems, external or custom databases, seasoned developers are essential to ensure a smooth and reliable connection.</li><br><li>Complexity and scalabilityAs projects grow in complexity it becomes a challenge for low code solutions to keep up. This is where experienced developers really shine, and it&#39;s unlikely that low code will fully replicate that expertise anytime soon.</li><br><li>Security and complianceFor industries where security and compliance are non-negotiable, citizen developers may find themselves in uncharted territory. Following security protocols and remaining compliant is a critical task that falls outside the scope of citizen development.</li><br><li>CustomizationOrganizations quickly realize that out-of-the-box templates, workflows, and interfaces do not meet all of their business needs. To adapt to a company&#39;s unique business processes, low code platforms typically offer some customization options. Professional developers are still required to write custom code for organizations to get the most out of the system.</li><br></ul><br><h3>How developers fit into a low code world</h3><br><p>So where do developers fit into the picture? Low code tools may take on some of the lighter development work, but developers are critical in taking these tools to the next level. Here is how we can stay relevant.</p><br><ul><br><li>Governance and oversightTo ensure quality, security, and consistency across the board, developers play a key role in establishing guidelines and reviewing developed applications.</li><br><li>Innovation and strategyLow code frees up developers from handling every small detail about every small project. This means we can focus more on strategic, high-impact initiatives like large scale data processing, integrations, and complex workflows - areas where low code can&#39;t make an impact.</li><br><li>Core infrastructureSince low code platforms don&#39;t make themselves, the foundational elements requires developers who understand the intricacies of data management, security, and scalability. These platform still need robust databases, reliable servers, and secure cloud infrastructure.</li><br><li>Hybrid developmentCreating a blend of low code and traditional code is valuable for teams looking to get the most out of their platform. Embedding pro code into a low code application can facilitate highly complex business processes, data processing, user interface components, integrations, and automation.</li><br></ul><br><h3>Conclusion</h3><br><p>If you are developer, don&#39;t worry. You&#39;re not going anywhere. The rise of low code platforms and citizen developers is a shift, not a replacement. It is a shift that is exciting and empowering for all of us creating tomorrow&#39;s technology.</p><br>",
            "url": "https://jg.dev/writing/the-rise-of-low-code-and-citizen-developers-will-developers-become-redundant-2",
            "title": "The rise of low code and citizen developers: Will developers become redundant?",
            "summary": "Low-code makes app creation feel like building a slide deck. What that means for the people who write actual code.",
            "image": "https://jg.dev/images/writing/the-rise-of-low-code-and-citizen-developers-will-developers-become-redundant-2/og.png",
            "date_modified": "2024-10-25T23:33:23.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Industry"
            ]
        },
        {
            "id": "https://jg.dev/writing/data-privacy-how-developers-can-prepare-for-the-future",
            "content_html": "<p>As technology advances, so do the ways in which our personal data is collected, stored, and shared. The evolution of data collection has prompted law makers to be more privacy-conscious and invested in when, how, and why attributes about us are collected. As software engineers, we are at the forefront of these changes. Our work impacts how data is managed, and we have a responsibility to anticipate and adapt to privacy demands. Here is a look at the future of privacy, why it matters, and how developers can prepare.</p><br><h3>The future</h3><br><p>New technologies mean more data. With the increasing number of smart devices and the infusion of AI into our daily lives, organizations are collecting, buying, selling, and using more data than ever before. Large companies, like Google, Apple and Mozilla, are making shifts away from third-party cookies in the browser in favor of newer technologies. Software regulation and compliance requirements are evolving with laws such as <a href=\"https://gdpr.eu/what-is-gdpr/\">GDPR</a>, <a href=\"https://www.ftc.gov/business-guidance/resources/complying-coppa-frequently-asked-questions#A.%20General%20Questions\">COPPA</a>, and <a href=\"https://oag.ca.gov/privacy/ccpa\">CCPA</a> being introduced with pretty significant penalties if not followed. With documentaries like &quot;The Social Dilemma,&quot; and things like the Cambridge Analytica scandal, people are becoming more aware of the importance protecting their right to privacy. Taking steps to protect that right is another hurdle we&#39;ll save for another day.</p><br><h3>Why data privacy matters</h3><br><p>Trust is everything. Users want to know that they can trust the apps and platforms they interact with. A lack of confidence in a digital product or service due to mishandled or leaked data could mean the end of the road for that company. While <a href=\"https://www.securityweek.com/linkedin-hit-with-310-million-euro-fine-for-data-privacy-violations-from-irish-watchdog/\">fines for violating privacy laws are steep</a>, privacy shouldn&#39;t just be about avoiding fines. Instead, it could be leveraged to differentiate a product or service, turning compliance into a competitive advantage. </p><br><p>Additionally, data privacy is an extension of the fundamental right to privacy. Protecting data is critical for maintaining personal autonomy, freedom from surveillance, and safeguarding against misuse or exploitation. Without data privacy, the broader right to privacy cannot be fully realized, threatening the integrity of democratic societies.</p><br><h3>How to prepare</h3><br><h5>Don&#39;t collect what you don&#39;t need</h5><br><p>Privacy starts at the inception of a product. Evaluate and make considerations for the data elements that you really need, and ignore everything else. You don&#39;t need to protect data that you don&#39;t have. Also, giving users the ability to control their data, preferably by <strong>opting in</strong> rather than <strong>opting out</strong>, makes privacy settings more intuitive and accessible.</p><br><h5>Use encryption and anonymization</h5><br><p>If you are handling user data of any kind, encryption is your best friend. Encrypt data in transit and at rest, making sure that only authorized users and devices have access to it. For analytics where you don&#39;t need to identify individual users, anonymize the data. It&#39;s a win-win, you get the data you need, and users&#39; privacy stays in tact.</p><br><h5>Use a zero trust approach</h5><br><p><a href=\"https://jg.dev/writing/zero-trust-in-code\">Trust no one by default, and always verify.</a> Verify every user and device before granting access to a resource. Striking a balance between security and convenience can be challenging, but zero trust provides a solution to the traditional security-convenience trade off.</p><br><p><img src=\"https://jg.dev/images/writing/data-privacy-how-developers-can-prepare-for-the-future/Screen-Shot-2021-11-01-at-1.00.52-PM-1024x343.png\" alt=\"\"></p><br><p><img src=\"https://jg.dev/images/writing/data-privacy-how-developers-can-prepare-for-the-future/Screen-Shot-2021-11-01-at-1.02.43-PM-1024x394.png\" alt=\"\"><br>Source: <a href=\"https://blogs.vmware.com/euc/2021/11/zero-trust-avoids-the-traditional-security-versus-convenience-tradeoffs.html#:~:text=It%20literally%20changes%20the%20relationship,or%20a%20lot%20of%20security.\">VMware End-User Computing Blog</a></p><br><h5>Make privacy usable</h5><br><p>Who enjoys jumping through hoops just to feel safe online? No one. We should aim to make privacy as user friendly as possible. Create concise privacy policies in <a href=\"https://www.plainlanguage.gov/\">plain language</a>. Simplify privacy settings so they are easy to find, understand, and use. Let users know in real-time when data is being collected or shared.</p><br><h5>Be cautious with AI</h5><br><p><a href=\"https://www.securityweek.com/user-outcry-as-slack-scrapes-customer-data-for-ai-model-training/\">AI features are being integrated more frequently into software systems</a>. Things like facial recognition and predictive analytics collect large amounts of data without users even realizing it. Developers should be knowledgeable about the algorithms used in third-party AI solutions to ensure that they do not infringe on user privacy rights. The key here is transparency. Succinctly and clearly let users know what data is being used by AI and how it is processed.</p><br><h3>Wrapping up</h3><br><p>The future of privacy is one where developers are responsible for safeguarding user&#39;s privacy and advocating for users&#39; right to decide what data is collected, stored, and shared. Just because technology can do something doesn&#39;t mean that it should. Embrace &quot;privacy by design&quot; by building systems that prioritize greater transparency and security. Our role as technologists is a privilege, and the decisions we make today will echo for eternity.</p><br>",
            "url": "https://jg.dev/writing/data-privacy-how-developers-can-prepare-for-the-future",
            "title": "Data privacy: How developers can prepare for the future",
            "summary": "Privacy laws are catching up to how data actually gets collected. What engineers should be doing now before they're forced to.",
            "image": "https://jg.dev/images/writing/data-privacy-how-developers-can-prepare-for-the-future/og.png",
            "date_modified": "2024-10-24T20:49:14.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Privacy"
            ]
        },
        {
            "id": "https://jg.dev/writing/are-larger-monolithic-functions-actually-better",
            "content_html": "<p>Deciding whether to decompose code into smaller functions significantly impacts the maintainability, readability, and sometimes the performance of our software projects. A blanket, one-size-fits-all, approach is rarely appropriate for engineering decisions and this topic is no exception. Are monolithic functions better? It depends. Below you&#39;ll find a strategy to evaluate when to break down functions or methods into smaller ones.</p><br><h3>Programming languages 101</h3><br><p>Software applications contain instructions that tell a computer what actions to perform. For example: display a website, send an email, etc. These instructions are communicated to the computer in the form of 1&#39;s and 0&#39;s - machine code. Creating software by writing instructions in machine code is challenging and infeasible (although possible) for most of us.</p><br><p>To make it easier for humans to tell computers what to do, some really smart people invented programming languages. Programming languages allow us to write instructions for a computer in a way that is more natural to humans when compared to writing only 1&#39;s and 0&#39;s. To run a program, the instructions that we write in a programming language must be translated to machine code, and machine code is executed on the computer.</p><br><p>This post is primarily concerned with high-level programming languages that are multiple layers of abstraction away from machine code. Since programs can have hundreds or thousands of lines of code, we structure our code in a way that&#39;s easy for us to read and understand. Decomposition is one of the strategies used to structure code for maintainability, readability, and extensibility.</p><br><h3>Illustrative analogies</h3><br><p>The best way to understand the two ends of the spectrum is with examples.</p><br><h5>Overly monolithic</h5><br><p>You pick up a book and begin to read. There are no chapters, no table of contents, and no clear breaks between different parts of the plot. Although it&#39;s possible to read the entire book, it would be difficult to follow. For larger books, it&#39;d become increasingly difficult. If the author needed to revise one part of the story, they would have to sift through the entire book to find the relevant section since there is no clear structure to navigate.</p><br><h5>Overly decomposed</h5><br><p>On the other hand, you pick up a second book and begin to read. Every chapter is broken down into dozens of tiny subsections, sometimes just a few sentences long. Each subsection focuses on a minute detail like a single line of dialogue or a brief description of a scene. You have to constantly jump between sections to piece together a narrative and the constant breaks disrupt the flow of the story. If the author needed to revise one part of the story, they would have to touch multiple tiny subsections, making the process unnecessarily complex.</p><br><h3>Strategy for deciding when to decompose</h3><br><p>A strategy for deciding when to decompose code provides us with guardrails to ensure that we never end up at the extreme of either end of the spectrum. We create more consistency in our decision making process which translates to increased likelihood of consistently achieving your desired result.</p><br><p>Breaking down a code solution into more manageable parts seems intuitive, but there are several things to consider before making a decision. Here are some factors to consider in order of highest to lowest priority.</p><br><ul><br><li>Is the decomposition a dependency for something else?If another piece of code or another system is dependent on our code being decomposed, that might be an indicator of a larger issue related to code structure. In this case we should absolutely consider decomposition.</li><br><li>Does the code base follow an established pattern of modularity?Consistency is priority. If there is an existing pattern for function decomposition, consider decomposition. Changing or improving on the pattern is a separate discussion.</li><br><li>Will decomposition improve clarity and maintainability?The ultimate purpose of decomposition is to change how we structure our code. We determined that no other part of the system is dependent on our decomposition, so the only other reason to consider decomposition is to improve readability. If decomposition will not help someone else understand the code, consider keeping the monolithic function.</li><br><li>Is the decomposition part of a rapid prototype or rapid development effort?True prototypes might evolve into a production product, but there&#39;s no guarantee that it will. Avoid over-engineering or solving for problems that don&#39;t exist. If speed is the top priority and this is early-stage code, consider keeping the monolithic function.</li><br><li>Are you decomposing tightly coupled logic?If multiple pieces of logic are tightly coupled, it may not make sense to separate them. Forcing decomposition solely for the sake of modularity results in convoluted solutions. Consider keeping the monolithic function.</li><br><li>Is this logic used in multiple places, or is it reusable in other parts of the code?If the logic can be reused elsewhere in the code, encapsulate it in a function. Consider decomposition.</li><br><li>Does the function do a single, straightforward thing? Sometimes straightforward things like database transactions, or processing a file, requires multiple steps. Since we have determined that the logic is not used in multiple parts of the code, it&#39;s acceptable to break the Single Responsibility Principle here. Consider keeping the monolithic function.</li><br><li>Is cyclomatic or cognitive complexity high?If we reached this point and cyclomatic or cognitive complexity is high, consider decomposition to make your code easier for others to digest.</li><br><li>Is the function large? Large is subjective. <a href=\"https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Developing_Safety-Critical_Code\">NASA&#39;s Power of 10 rules</a> says to avoid functions larger than a printed page. This does not account for font size, but you or your team may elect to choose a more appropriate, concrete threshold. Consider decomposition of large functions to make your code easier for others to digest.</li><br></ul><br><p><img src=\"https://jg.dev/images/writing/are-larger-monolithic-functions-actually-better/Decision-Tree-for-Function-Decomposition--simplified-.svg\" alt=\"\"></p><br><h3>Benefits of decomposed functions</h3><br><ul><br><li>Function length - Lengthy functions are often frowned upon and are shortened through decomposition into smaller functions.</li><br><li>Cyclomatic complexity - Cyclomatic complexity is a measure of the number of separate, independent paths that a program can take. While decomposition does not reduce cyclomatic complexity, it can make code easier to understand.</li><br><li>Single responsibility principle - A function that does multiple things is more difficult to maintain. Decomposition involves extracting functionality into separate functions such that each function does one thing well. Additionally, functions that perform a single task are easier to test.</li><br><li>Don&#39;t repeat yourself principle - Using similar logic in multiple places bloats your code, making it more difficult to maintain. Decomposition extracts common logic into a single function that can be reused.</li><br></ul><br><h3>Benefits of monolithic functions</h3><br><ul><br><li>Speed - When speed is a priority, keeping things monolithic can allow for faster iterations. In the case of rapid development and prototyping, starting with a monolithic approach upfront can help avoid premature abstraction. Allow patterns to emerge and let that inform your decomposition.</li><br><li>Simplicity - Forcing decomposition for the sake of modularity can result in over-engineered solutions. If a large function contains tightly coupled logic, it may make more sense to keep that logic together instead of separating it.</li><br><li>Atomic tasks - Tasks that are naturally atomic may not benefit from being decomposed. For example, decomposing a function that handles a single, straightforward transaction could lead to unnecessary complexity without improving clarity or maintainability.</li><br><li>You ain&#39;t guna need it principle - If your function works well and there is no compelling reason or need to break it apart, don&#39;t do it.</li><br></ul><br><h3>Conclusion</h3><br><p>Extremes within software development are usually not optimal as there are always trade offs associated with every decision, including when and how we break our code into smaller pieces. Use and modify the model presented above or come up with your own decision making process to help you decide when to break down large functions. A model will help you evaluate relevant decision factors when determining, arrive at an answer with greater consistency, and have a concrete justification for your decision.</p><br><p>The best way to decide if you should decompose a function is highly dependent on the context of the project, the team, and the code. Thoughtful decomposition leads to more maintainable, scalable, and testable code, but only when it is applied judiciously and aligned with the overall architecture and goals of the project.</p><br>",
            "url": "https://jg.dev/writing/are-larger-monolithic-functions-actually-better",
            "title": "Are larger, monolithic functions actually better?",
            "summary": "The \"extract everything into small functions\" reflex isn't always right. When keeping things together actually makes code better.",
            "image": "https://jg.dev/images/writing/are-larger-monolithic-functions-actually-better/og.png",
            "date_modified": "2024-10-23T23:37:49.000Z",
            "author": {
                "name": "feed@jg.dev",
                "url": "https://jg.dev"
            },
            "tags": [
                "Engineering"
            ]
        }
    ]
}