Apache serves cached content faster from its disk cache (mod_disk_cache) than from its memory cache (mod_mem_cache). This seems counter-intuitive, as I expected that delivering content from RAM would be significantly faster than accessing a file off a slower disk drive. However, after a conversation with “chipig” in the Apache IRC channel, I have seen the light.
If you use mod_mem_cache, when Apache receives a request from a browser for a file, that file’s contents must first be read into memory, and then sent out to the client via a communications endpoint (the socket). This process of copying the data into user-space RAM, and then back into a kernel buffer to send it, is wasteful and time-consuming. The server doesn’t actually want the contents of the file; it merely wants to send those contents to the browser.
If you use the mod_disk_cache module, Linux can use the sendfile() system call. sendfile() eliminates the need for the server to read a file before sending it: the server passes the file descriptor and the communications endpoint to sendfile(), and the OS reads and sends the file itself. Thus, the server never has to issue a read() or dedicate memory to the file contents, and the OS can use its file-system cache to efficiently cache files that clients request.
And because the copying is done in the kernel, these disk accesses are buffered by the kernel’s page cache, so frequently requested files end up being served from RAM anyway. This raises the effective speed of the disk cache even further.