Re-post from the Storage Networking blog, August '05
We've been doing a lot of thinking lately about the blocks in block storage. At some level, blocks make sense: it's natural to break the disk media into fixed-size sectors. Disks have done this for years, and up until the early 1990s, disk drives had very little intelligence and could only store and retrieve data that was pre-formatted into their native sector size. The industry standardized on 512-byte sectors, and file systems and I/O stacks were all designed to operate on these fixed blocks.
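To make that concrete, here's a rough sketch of the arithmetic every file system and I/O stack ends up doing under this model: turning a byte-level request into whole 512-byte sectors. The helper name and the numbers are mine, purely for illustration, not from any real I/O stack.

```c
#include <stdio.h>
#include <stdint.h>

#define SECTOR_SIZE 512  /* the industry-standard block size */

/* Illustrative helper: translate a byte-level file request into the
 * range of 512-byte logical block addresses (LBAs) the stack must
 * actually read from the device. */
static void byte_range_to_lbas(uint64_t offset, uint64_t length,
                               uint64_t *first_lba, uint64_t *lba_count)
{
    uint64_t end = offset + length;                  /* one past the last byte */
    *first_lba = offset / SECTOR_SIZE;               /* round down to a sector */
    uint64_t last_lba = (end + SECTOR_SIZE - 1) / SECTOR_SIZE;  /* round up */
    *lba_count = last_lba - *first_lba;
}

int main(void)
{
    uint64_t lba, count;
    /* A 100-byte read at byte offset 1000 still costs whole sectors. */
    byte_range_to_lbas(1000, 100, &lba, &count);
    printf("read LBA %llu, %llu sector(s)\n",
           (unsigned long long)lba, (unsigned long long)count);
    return 0;
}
```

Notice that even a 100-byte read costs two full sectors here; the fixed block is the unit of currency no matter what the application actually wants.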
Now fast-forward to today. Disk drives have powerful embedded processors, with spare silicon real estate where even more intelligence could be added. Servers use RAID arrays with very powerful embedded computers that internally operate on RAID volumes, with data partitioned into stripes much larger than 512-byte blocks. These arrays use their embedded processors to emulate the 512-byte block interface of a late-1980s disk drive. Then, over on the server side, we still have file systems mapping files down to these small blocks as if they were talking to an old drive.
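Here's a sketch of the kind of translation the array's firmware does behind that 512-byte facade. The stripe-unit size, disk count, and simple striped (RAID-0 style) layout are assumptions I've picked purely for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define SECTOR_SIZE 512
#define STRIPE_UNIT (64 * 1024)  /* assumed 64 KiB stripe unit per disk */
#define NUM_DISKS   4            /* assumed array width, for illustration */

/* Illustrative sketch of what array firmware does behind the 512-byte
 * block facade: map an incoming LBA onto a member disk and an offset
 * within that disk, using round-robin striping. */
static void lba_to_stripe(uint64_t lba, unsigned *disk, uint64_t *disk_offset)
{
    uint64_t byte_addr   = lba * SECTOR_SIZE;
    uint64_t stripe_unit = byte_addr / STRIPE_UNIT;  /* which stripe unit */
    uint64_t unit_offset = byte_addr % STRIPE_UNIT;  /* offset inside it */

    *disk        = (unsigned)(stripe_unit % NUM_DISKS);           /* round-robin */
    *disk_offset = (stripe_unit / NUM_DISKS) * STRIPE_UNIT + unit_offset;
}

int main(void)
{
    unsigned disk;
    uint64_t off;
    lba_to_stripe(300, &disk, &off);  /* LBA 300 = byte address 153600 */
    printf("LBA 300 -> disk %u, byte offset %llu\n",
           disk, (unsigned long long)off);
    return 0;
}
```

The array's natural unit is the stripe, yet the interface it exports forces every request through 512-byte addresses that it must map right back out again.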
This is what I'm wondering about: is it time to stop designing storage subsystems that pretend to look like an antique disk drive? And is it time to stop writing file systems and I/O stacks designed to spoon-feed data to these outdated disks?