<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Back to Basic on Kianoosh's Blog</title><link>https://kianoosh.dev/tags/back-to-basic/</link><description>Recent content in Back to Basic on Kianoosh's Blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Thu, 12 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://kianoosh.dev/tags/back-to-basic/index.xml" rel="self" type="application/rss+xml"/><item><title>Processes and Threads: A Quick Reminder</title><link>https://kianoosh.dev/posts/2026-02-12-processes-and-threads-a-quick-reminder/</link><pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate><guid>https://kianoosh.dev/posts/2026-02-12-processes-and-threads-a-quick-reminder/</guid><description>&lt;p>In earlier posts, we spent some time exploring OS cache hierarchies and storage device latencies. But all of that really clicks when we consider processing power.&lt;/p>
&lt;p>A computer is, after all, a system that takes input data and produces output data. The entire cache and storage hierarchy exists so that the processor can read and write that data efficiently.&lt;/p>
&lt;p>&lt;a href="https://planetscale.com/blog/processes-and-threads">In this excellent blog post&lt;/a>, Ben Dicken explains processes and threads with fantastic visualizations. Reading it will make you want to revisit topics such as ARM64 instructions, FLOPS, and how assemblers generate binaries; it also helps make sense of context switching and time slicing in an operating system.&lt;/p></description></item><item><title>Storage Architecture: From Tape to SSD</title><link>https://kianoosh.dev/posts/2026-02-11-storage-architecture-from-tape-to-ssd/</link><pubDate>Wed, 11 Feb 2026 00:00:00 +0000</pubDate><guid>https://kianoosh.dev/posts/2026-02-11-storage-architecture-from-tape-to-ssd/</guid><description>&lt;p>In a &lt;a href="https://kianoosh.dev/posts/2026-01-26-feeling-the-speed-understanding-the-cache-hierarchy">previous post&lt;/a>, we talked about the fundamentals of caching and briefly touched on how &lt;strong>SSDs are much faster than HDDs&lt;/strong>, yet &lt;strong>still much slower than memory&lt;/strong> (RAM and CPU caches).&lt;/p>
&lt;p>But the real question is: &lt;strong>why?&lt;/strong> 🗿
Every time you can answer a “why” question, you move one level deeper in understanding a domain — and this is one of those moments.&lt;/p>
&lt;p>In &lt;a href="https://planetscale.com/blog/io-devices-and-latency">this excellent blog post&lt;/a>, &lt;strong>Ben Dicken&lt;/strong> dives into the internals of &lt;strong>Tape Storage, Hard Disk Drives, and Solid-State Drives&lt;/strong>, tracing their evolution through history — all the way to the modern paradigm of &lt;strong>separating computation from storage&lt;/strong>, and why this architectural choice isn’t always the right solution.&lt;/p></description></item><item><title>Feeling the Speed: Understanding the Cache Hierarchy</title><link>https://kianoosh.dev/posts/2026-01-26-feeling-the-speed-understanding-the-cache-hierarchy/</link><pubDate>Mon, 26 Jan 2026 00:00:00 +0000</pubDate><guid>https://kianoosh.dev/posts/2026-01-26-feeling-the-speed-understanding-the-cache-hierarchy/</guid><description>&lt;p>While reading about the query processing layer in ClickHouse, I came across this detail:&lt;/p>
&lt;blockquote>
&lt;p>To keep CPU caches hot, the plan contains hints that the same thread should process consecutive operators in the same lane.&lt;/p>
&lt;/blockquote>
&lt;p>This got me thinking: CPU caches are clearly critical for performance, but what exactly makes them so special? Are they really that much faster than RAM to justify this added complexity?&lt;/p>
&lt;p>It reminded me of a design review meeting where I confidently suggested computing directly over data on disk because “SSDs are fast enough.” (Yes… shame on me 😵‍💫)&lt;/p></description></item><item><title>Software Modularity: Trivial Concept, Yet Still Rarely Done Right!</title><link>https://kianoosh.dev/posts/2026-01-07-software-modularity-trivial-concept-yet-still-rarely-done-right/</link><pubDate>Wed, 07 Jan 2026 00:00:00 +0000</pubDate><guid>https://kianoosh.dev/posts/2026-01-07-software-modularity-trivial-concept-yet-still-rarely-done-right/</guid><description>&lt;p>If you&amp;rsquo;ve ever read any software engineering blog or book, you&amp;rsquo;ve probably seen the word &lt;strong>&amp;ldquo;Modularity&amp;rdquo;&lt;/strong> mentioned many times. But ask most engineers what modularity really means, what benefits it brings, and how to actually break a system into modules — and you&amp;rsquo;ll often get vague or unclear answers.&lt;/p>
&lt;p>To find clear and timeless answers, we need to go back to the 1960s and 70s, when these ideas were first introduced and refined. In his famous 1972 paper (over 8,000 citations, and still cited 250+ times in 2025!), David Parnas tackled a problem that had gone unsolved until then: &lt;strong>the criteria for decomposing software into modules.&lt;/strong>&lt;/p></description></item></channel></rss>