AVAILABLE
WIKIBOOKS

ART
- Great Painters
BUSINESS&LAW
- Accounting
- Fundamentals of Law
- Marketing
- Shorthand
CARS
- Concept Cars
GAMES&SPORT
- Videogames
- The World of Sports

COMPUTER TECHNOLOGY
- Blogs
- Free Software
- Google
- My Computer

- PHP Language and Applications
- Wikipedia
- Windows Vista

EDUCATION
- Education
LITERATURE
- Masterpieces of English Literature
LINGUISTICS
- American English

- English Dictionaries
- The English Language

MEDICINE
- Medical Emergencies
- The Theory of Memory
MUSIC&DANCE
- The Beatles
- Dances
- Microphones
- Musical Notation
- Music Instruments
SCIENCE
- Batteries
- Nanotechnology
LIFESTYLE
- Cosmetics
- Diets
- Vegetarianism and Veganism
TRADITIONS
- Christmas Traditions
NATURE
- Animals

- Fruits And Vegetables



ARTICLES IN THE BOOK

  1. Adobe Reader
  2. Adware
  3. Altavista
  4. AOL
  5. Apple Macintosh
  6. Application software
  7. Arrow key
  8. Artificial Intelligence
  9. ASCII
  10. Assembly language
  11. Automatic translation
  12. Avatar
  13. Babylon
  14. Bandwidth
  15. Bit
  16. BitTorrent
  17. Black hat
  18. Blog
  19. Bluetooth
  20. Bulletin board system
  21. Byte
  22. Cache memory
  23. Celeron
  24. Central processing unit
  25. Chat room
  26. Client
  27. Command line interface
  28. Compiler
  29. Computer
  30. Computer bus
  31. Computer card
  32. Computer display
  33. Computer file
  34. Computer games
  35. Computer graphics
  36. Computer hardware
  37. Computer keyboard
  38. Computer networking
  39. Computer printer
  40. Computer program
  41. Computer programmer
  42. Computer science
  43. Computer security
  44. Computer software
  45. Computer storage
  46. Computer system
  47. Computer terminal
  48. Computer virus
  49. Computing
  50. Conference call
  51. Context menu
  52. Creative commons
  53. Creative Commons License
  54. Creative Technology
  55. Cursor
  56. Data
  57. Database
  58. Data storage device
  59. Debuggers
  60. Demo
  61. Desktop computer
  62. Digital divide
  63. Discussion groups
  64. DNS server
  65. Domain name
  66. DOS
  67. Download
  68. Download manager
  69. DVD-ROM
  70. DVD-RW
  71. E-mail
  72. E-mail spam
  73. File Transfer Protocol
  74. Firewall
  75. Firmware
  76. Flash memory
  77. Floppy disk drive
  78. GNU
  79. GNU General Public License
  80. GNU Project
  81. Google
  82. Google AdWords
  83. Google bomb
  84. Graphics
  85. Graphics card
  86. Hacker
  87. Hacker culture
  88. Hard disk
  89. High-level programming language
  90. Home computer
  91. HTML
  92. Hyperlink
  93. IBM
  94. Image processing
  95. Image scanner
  96. Instant messaging
  97. Instruction
  98. Intel
  99. Intel Core 2
  100. Interface
  101. Internet
  102. Internet bot
  103. Internet Explorer
  104. Internet protocols
  105. Internet service provider
  106. Interoperability
  107. IP addresses
  108. IPod
  109. Joystick
  110. JPEG
  111. Keyword
  112. Laptop computer
  113. Linux
  114. Linux kernel
  115. Liquid crystal display
  116. List of file formats
  117. List of Google products
  118. Local area network
  119. Logitech
  120. Machine language
  121. Mac OS X
  122. Macromedia Flash
  123. Mainframe computer
  124. Malware
  125. Media center
  126. Media player
  127. Megabyte
  128. Microsoft
  129. Microsoft Windows
  130. Microsoft Word
  131. Mirror site
  132. Modem
  133. Motherboard
  134. Mouse
  135. Mouse pad
  136. Mozilla Firefox
  137. Mp3
  138. MPEG
  139. MPEG-4
  140. Multimedia
  141. Musical Instrument Digital Interface
  142. Netscape
  143. Network card
  144. News ticker
  145. Office suite
  146. Online auction
  147. Online chat
  148. Open Directory Project
  149. Open source
  150. Open source software
  151. Opera
  152. Operating system
  153. Optical character recognition
  154. Optical disc
  155. Output
  156. PageRank
  157. Password
  158. Pay-per-click
  159. PC speaker
  160. Peer-to-peer
  161. Pentium
  162. Peripheral
  163. Personal computer
  164. Personal digital assistant
  165. Phishing
  166. Pirated software
  167. Podcasting
  168. Pointing device
  169. POP3
  170. Programming language
  171. QuickTime
  172. Random access memory
  173. Routers
  174. Safari
  175. Scalability
  176. Scrollbar
  177. Scrolling
  178. Scroll wheel
  179. Search engine
  180. Security cracking
  181. Server
  182. Simple Mail Transfer Protocol
  183. Skype
  184. Social software
  185. Software bug
  186. Software cracker
  187. Software library
  188. Software utility
  189. Solaris Operating Environment
  190. Sound Blaster
  191. Soundcard
  192. Spam
  193. Spamdexing
  194. Spam in blogs
  195. Speech recognition
  196. Spoofing attack
  197. Spreadsheet
  198. Spyware
  199. Streaming media
  200. Supercomputer
  201. Tablet computer
  202. Telecommunications
  203. Text messaging
  204. Trackball
  205. Trojan horse
  206. TV card
  207. Unicode
  208. Uniform Resource Identifier
  209. Unix
  210. URL redirection
  211. USB flash drive
  212. USB port
  213. User interface
  214. Vlog
  215. Voice over IP
  216. Warez
  217. Wearable computer
  218. Web application
  219. Web banner
  220. Web browser
  221. Web crawler
  222. Web directories
  223. Web indexing
  224. Webmail
  225. Web page
  226. Website
  227. Wiki
  228. Wikipedia
  229. WIMP
  230. Windows CE
  231. Windows key
  232. Windows Media Player
  233. Windows Vista
  234. Word processor
  235. World Wide Web
  236. Worm
  237. XML
  238. X Window System
  239. Yahoo
  240. Zombie computer
 



MY COMPUTER
This article is from:
http://en.wikipedia.org/wiki/Cache_memory

All text is available under the terms of the GNU Free Documentation License: http://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License 

Cache

From Wikipedia, the free encyclopedia


In computer science, a cache (pronounced /kæʃ/, like the English word cash) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive (usually in terms of access time) to fetch or compute relative to reading the cache. Once the data is stored in the cache, future use can be made by accessing the cached copy rather than re-fetching or recomputing the original data, so that the average access time is lower.

Caches have proven extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality, but this article primarily deals with data that are accessed close together in time. The data might or might not be located physically close to each other.

Operation

Diagram of a CPU memory cache

A cache is a block of memory for temporary storage of data likely to be used again. The CPU and hard drive frequently use a cache, as do web browsers.

Put simply, a cache is a temporary storage area where frequently accessed data can be stored for rapid access.

A cache is made up of a pool of entries. Each entry has a datum (a piece of information), which is a copy of a datum in some backing store. Each entry also has a tag, which specifies the identity of the datum in the backing store of which the entry is a copy.

When the cache client (a CPU, web browser, or operating system) wishes to access a datum that it presumes to be in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired datum, the datum in the entry is used instead. This situation is known as a cache hit. For example, a web browser might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag and the contents of the web page are the datum. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.

The alternative situation, when the cache is consulted and found not to contain a datum with the desired tag, is known as a cache miss. The datum fetched from the backing store during miss handling is usually inserted into the cache, ready for the next access.
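The hit and miss logic described above can be sketched in a few lines of Python. This is only an illustration, not part of the original article: backing_store is a hypothetical function standing in for the slow source (a disk read, a network fetch, or a computation), the dictionary key plays the role of the tag, and the value plays the role of the datum.

    class SimpleCache:
        def __init__(self, backing_store):
            self.backing_store = backing_store  # slow fetch function: tag -> datum
            self.entries = {}                   # tag -> cached datum
            self.hits = 0
            self.misses = 0

        def get(self, tag):
            if tag in self.entries:             # cache hit: use the cached copy
                self.hits += 1
                return self.entries[tag]
            self.misses += 1                    # cache miss: go to the backing store
            datum = self.backing_store(tag)     # expensive fetch or computation
            self.entries[tag] = datum           # insert, ready for the next access
            return datum

        def hit_rate(self):
            total = self.hits + self.misses
            return self.hits / total if total else 0.0

For instance, a toy web-page cache could be built as SimpleCache(fetch_page), where fetch_page is any function that downloads a page given its URL; the URL is the tag and the page contents are the datum.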

If the cache has limited storage, it may have to eject some other entry in order to make room. The heuristic used to select the entry to eject is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the least recently used entry (see cache algorithms), as sketched below. More sophisticated caches weigh how frequently entries are used against the size of the stored contents, as well as the latencies and throughputs of both the cache and the backing store. This works well for large amounts of data, long latencies, and slow throughputs, such as those of a hard drive or the Internet, but it is not efficient for caching main memory (RAM).
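As a rough sketch (again not from the article), an LRU replacement policy can be expressed with an ordered dictionary that evicts its oldest entry once a fixed, arbitrary capacity is reached:

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, backing_store, capacity=4):
            self.backing_store = backing_store   # slow fetch function: tag -> datum
            self.capacity = capacity             # maximum number of entries
            self.entries = OrderedDict()         # ordered from least to most recently used

        def get(self, tag):
            if tag in self.entries:
                self.entries.move_to_end(tag)    # hit: mark as most recently used
                return self.entries[tag]
            datum = self.backing_store(tag)      # miss: fetch from the backing store
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False) # evict the least recently used entry
            self.entries[tag] = datum
            return datum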

When a datum is written to the cache, it must at some point be written to the backing store as well. The timing of this write is controlled by what is known as the write policy. In a write-through cache, every write to the cache causes a write to the backing store. Alternatively, in a write-back cache, writes are not immediately mirrored to the store. Instead, the cache tracks which of its locations have been written over (these locations are marked dirty). The data in these locations is written back to the backing store when that data is evicted from the cache. For this reason, a miss in a write-back cache will often require two memory accesses to service: one to retrieve the needed datum, and one to write replaced data from the cache to the store.

Data write-back may be triggered by other policies as well. The client may make many changes to a datum in the cache, and then explicitly notify the cache to write back the datum.
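The write-through and write-back policies, together with an explicit write-back of a single entry, can be sketched as follows. This is an illustration only; a plain dictionary stands in for the backing store.

    class WriteCache:
        def __init__(self, backing_store, write_through=False):
            self.backing_store = backing_store   # e.g. a dict standing in for slow storage
            self.write_through = write_through
            self.entries = {}                    # tag -> datum
            self.dirty = set()                   # tags written in the cache, not yet in the store

        def write(self, tag, datum):
            self.entries[tag] = datum
            if self.write_through:
                self.backing_store[tag] = datum  # write-through: mirror the write immediately
            else:
                self.dirty.add(tag)              # write-back: just mark the location dirty

        def flush(self, tag):
            # Explicit write-back of one dirty entry, as when the client notifies the cache.
            if tag in self.dirty:
                self.backing_store[tag] = self.entries[tag]
                self.dirty.discard(tag)

        def evict(self, tag):
            # On eviction, dirty data must reach the backing store before the entry is dropped.
            self.flush(tag)
            del self.entries[tag]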

The data in the backing store may be changed by entities other than the cache, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols.

Applications

CPU caches

Main article: CPU cache.

Small memories on or close to the CPU chip can be made faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, and modern general-purpose CPUs inside personal computers may have as many as half a dozen, each specialized to a different part of the problem of executing programs.

Disk buffer

(also known as disk cache or cache buffer)

Hard disks have historically often been packaged with embedded computers used for control and interface protocols. Since the late 1980s, nearly all disks sold have these embedded computers and either an ATA, SCSI, or Fibre Channel interface. The embedded computer usually has a small amount of memory, used to store the bits going to and coming from the disk platter.

The disk buffer is physically distinct from and is used differently from the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the embedded computer in the disk drive, while the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, from two to sixteen megabytes, whereas the page cache is generally all unused physical memory, which in 2006 could be as much as 4 GB on desktop computers and 8 GB on servers. While data in the page cache is reused multiple times, the data in the disk buffer is typically never reused. In this sense, the terms disk cache and cache buffer are misnomers; the embedded computer's memory is more appropriately called the disk buffer.

The disk buffer has multiple uses:

Read-ahead / read-behind 
When executing a read from the disk, the disk arm moves the read/write head to (or near) the correct track, and after some settling time the read head begins to pick up bits. Usually, the first sectors to be read are not the ones that have been requested by the operating system. The disk's embedded computer typically saves these unrequested sectors in the disk buffer, in case the operating system requests them later.
Speed matching 
The speed of the disk's I/O interface to the computer almost never matches the speed at which the bits are transferred to and from the hard disk platter. The disk buffer is used so that both the I/O interface and the disk read/write head can operate at full speed.
Write acceleration 
The disk's embedded microcontroller may signal the main computer that a disk write is complete immediately after receiving the write data, before the data are actually written to the platter. This early signal allows the main computer to continue working even though the data has not actually been written yet. This can be somewhat dangerous: if power is lost before the data are permanently fixed in the magnetic media, the data will be lost from the disk buffer, and the file system on the disk may be left in an inconsistent state. On some disks, this vulnerable period between signaling the write complete and fixing the data can be arbitrarily long, as the write can be deferred indefinitely by newly arriving requests. For this reason, the use of write acceleration can be controversial. Consistency can be maintained, however, by using a battery-backed memory system in the disk controller for caching data, although this is typically found only in high-end RAID controllers. Alternately, the caching can simply be turned off when the integrity of data is deemed more important than write performance.
Command queuing 
Newer SATA and most SCSI disks can accept multiple commands while any one command is in operation. These commands are stored by the disk's embedded computer until they are completed. Should a read reference the data at the destination of a queued write, the write's data will be returned. Command queuing is different from write acceleration in that the main computer's operating system is notified when data are actually written onto the magnetic media. The OS can use this information to keep the filesystem consistent through rescheduled writes.

Other caches

CPU caches are generally managed entirely by hardware. Other caches are managed by a variety of software. The cache of disk sectors in main memory is usually managed by the operating system kernel or file system. The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library.

Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable.

A cache of recently visited web pages can be managed by your web browser. Some browsers are configured to use an external proxy web cache, a server program through which all web requests are routed so that it can cache frequently accessed pages for everyone in an organization. Many internet service providers use proxy caches to save bandwidth on frequently-accessed web pages.

Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a "Cached" link next to each search result. This is useful when web pages are temporarily inaccessible from a web server.

Another type of caching is storing computed results that will likely be needed again, known as memoization. An example of this type of caching is ccache, a program that caches the output of compilation in order to speed up subsequent compilations of the same code.
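As an illustration of memoization (using Python's standard functools.lru_cache decorator rather than ccache itself), results are cached keyed by the function's arguments, so repeated calls with the same argument are served from the cache instead of being recomputed:

    from functools import lru_cache

    @lru_cache(maxsize=None)      # cache every distinct call's result
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(30))                # computed once; later calls to fib(30) hit the cache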

The difference between buffers and cache

The terms are not mutually exclusive and the functions are frequently combined; however, there is a difference in intent. A buffer is a temporary storage location where a large block of data is assembled or disassembled. This may be necessary for interacting with a storage device that requires large blocks of data, when data must be delivered in a different order than that in which it is produced, or merely desirable when small blocks are inefficient. The benefit is present even if the buffered data is written to the buffer once and read from the buffer once.

A cache, on the other hand, hopes that the data will be read from the cache more often than it is written there. Its purpose is to reduce accesses to the underlying storage.

See also

  • Cache algorithms
  • Cache coloring
  • CPU cache
  • Web cache
  • Data grid
Retrieved from "http://en.wikipedia.org/wiki/Cache"