unfortunately it is reflected by B) - let's start with a single-disk example ...
sector size is usually 4KB on a modern OS with large disks ... now storing a tiny text file of say 200 bytes requires 4096 bytes of space on the disk (one sector), while storing a larger photoshop file of 1 MB requires 256 sectors, each of them fully filled ...
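a quick back-of-the-envelope sketch in python (just an illustration, assuming the 4096-byte sectors from above) shows the slack space:

    SECTOR = 4096  # assumed sector / allocation unit size in bytes

    def sectors_needed(file_size_bytes):
        # a file always occupies whole sectors, so round up
        return (file_size_bytes + SECTOR - 1) // SECTOR

    for size in (200, 1024 * 1024):  # the 200-byte text file and the 1 MB photoshop file
        n = sectors_needed(size)
        allocated = n * SECTOR
        print(f"{size:>8} bytes -> {n:>3} sectors = {allocated} bytes on disk "
              f"({allocated - size} bytes wasted)")

which prints 1 sector (3896 bytes wasted) for the text file and 256 fully filled sectors for the 1 MB file.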
to read data an application asks the filesystem, the filesystem (or OS) asks the hard disk controller, the HD controller asks the disk.
for the text file a sector is read by the disk, the controller forwards it to the filesystem, which forwards the file to the application (roughly 2,000% overhead for reading).
for the larger file a (hopefully) consecutive run of sectors is read by the disk, which will try to use intelligent read-ahead with its cache; the controller forwards it to the filesystem, which forwards it to the application (almost no overhead for reading).
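the same sketch, extended to express that overhead as a percentage of what the application actually asked for (again only an illustration, 4096-byte sectors assumed):

    SECTOR = 4096

    def read_overhead_percent(file_size_bytes):
        # bytes physically read from disk vs. bytes the application wanted
        sectors = (file_size_bytes + SECTOR - 1) // SECTOR
        bytes_read = sectors * SECTOR
        return (bytes_read - file_size_bytes) / file_size_bytes * 100

    print(f"200-byte text file : {read_overhead_percent(200):.0f}% overhead")         # ~1950%, roughly 2,000%
    print(f"1 MB photoshop file: {read_overhead_percent(1024 * 1024):.0f}% overhead")  # 0%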
obviously there is even more involved since the OS has to request memory space and store the data there, but data can't be read faster than the platters spin below the heads and one or more sectors can be read.
to overcome this limit of throughput, striping allows writing (and reading) in parallel on two or more disks, which is done in blocks, usually 64K, but this works well only with multiples of 64K ... e.g. 256K across 4 disks writes about as fast as 64K on a single disk (almost ...).
neither the application nor the filesystem knows anything about which byte has been stored where. given the volume is formatted with 4K sectors (by the filesystem), it just knows the data is somewhere within a 4 x 64K = 256K block which now needs to be read from 4 disks, forwarded to the controller, the filesystem, the application ...
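to make the mapping concrete, here is a small sketch of a RAID-0-style layout with the numbers from above (4 data disks, 64K stripe unit - both assumptions, your controller may differ):

    STRIPE_UNIT = 64 * 1024              # 64K stripe unit (assumed)
    DISKS = 4                            # 4 data disks (assumed, no parity)
    STRIPE_WIDTH = STRIPE_UNIT * DISKS   # 256K per full stripe

    def locate(logical_offset):
        # map a logical byte offset on the volume to (disk number, offset on that disk)
        stripe = logical_offset // STRIPE_WIDTH
        disk = (logical_offset % STRIPE_WIDTH) // STRIPE_UNIT
        disk_offset = stripe * STRIPE_UNIT + logical_offset % STRIPE_UNIT
        return disk, disk_offset

    # a single 4K filesystem block lands on exactly one disk ...
    print(locate(200 * 1024))            # -> (3, 8192)
    # ... while a full 256K stripe keeps all four disks busy in parallel
    for off in range(0, STRIPE_WIDTH, STRIPE_UNIT):
        print(locate(off))               # -> (0, 0), (1, 0), (2, 0), (3, 0)

so a tiny read touches one disk only and gains nothing from striping, while a full 256K read is spread across all four disks at once.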
imagine the overhead of reading little portions, and depending on the quality of the raid controller (whether it *passes requests through* or attempts useless caching operations) the *perceived* performance is better or worse compared to a single disk ...
hth, christian