Windows 2000 memory usage

Memory addresses below 0x80000000 are assigned to user-mode modules, including the Win32 subsystem, and the remaining 2 GB are reserved for the kernel. An alternative model features a 3-GB address space for user processes and a 1-GB space for the kernel; this option exploits a feature of some Intel CPUs. In this chapter, I will ignore these special configurations.
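
To see this split from user mode, a Win32 program can ask the system for the boundaries of its application address space. Below is a minimal sketch using the documented GetSystemInfo API; on a standard configuration, the reported maximum lies just below 0x80000000.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Lowest and highest addresses available to user-mode
       allocations; on a standard 2-GB/2-GB configuration the
       maximum is just below 0x80000000. */
    printf("user space: %p - %p\n",
           si.lpMinimumApplicationAddress,
           si.lpMaximumApplicationAddress);
    return 0;
}
```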

Before delving into the technical details of the i386 architecture, let's travel back in time to the year 1978, when Intel released the mother of all PC processors: the 8086. I want to restrict this discussion to the most significant milestones. If you want to know more, Robert L. Hummel's programmer's reference is an excellent starting point (Hummel 1992). It is a bit outdated now because it doesn't cover the newer features of the Pentium family; however, this leaves more space for important information about the basic i386 architecture.

Although the 8086 was able to address 1 MB of Random Access Memory (RAM), an application could never "see" the entire physical address space because of the restriction of the CPU's address registers to 16 bits.

This means that applications were able to access a contiguous linear address space of only 64 KB, but this memory window could be shifted up and down in the physical 1-MB space with the help of a set of 16-bit segment registers.

Each segment register defined a base address in 16-byte increments, and the linear addresses in the 64-KB logical space were added as offsets to this base, effectively resulting in 20-bit addresses.
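
As a concrete illustration, the following sketch computes a physical address from a segment:offset pair the way the 8086 did; the pair B800:0000, the classic color text-mode video buffer, is used purely as a familiar example.

```c
#include <stdio.h>

/* 8086 real-mode address translation: the 16-bit segment value is
   shifted left by 4 bits (i.e., multiplied by 16) and the 16-bit
   offset is added, yielding a 20-bit physical address. */
static unsigned long physical_address(unsigned short segment,
                                      unsigned short offset)
{
    return ((unsigned long)segment << 4) + offset;
}

int main(void)
{
    /* B800:0000 maps to physical address 0xB8000. */
    printf("0x%05lX\n", physical_address(0xB800, 0x0000));
    return 0;
}
```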

The first PC CPU to break the 1-MB barrier was the 80286, which introduced Protected Mode: a memory model in which physical addresses were no longer generated by simply adding a linear address to a shifted segment base. To retain backward compatibility with the 8086 and 8088, the 80286 still used segment registers, but they did not contain physical segment addresses after the CPU had been switched to Protected Mode. Instead, they provided a selector, comprising an index into a descriptor table. The target entry defined a 24-bit physical base address, allowing access to 16 MB of RAM, which seemed like an incredible amount then.

However, the 80286 was still a 16-bit CPU, so the limitation of the linear address space to 64-KB tiles still applied. The breakthrough came in 1985 with the i386 CPU. Fortunately, the descriptor structure contained some spare bits that could be reclaimed. While moving from 16-bit to 32-bit addresses, the size of the CPU's data registers was doubled as well, and powerful new addressing modes were added. This radical shift to 32-bit data and addresses was a real benefit for programmers, at least theoretically.
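
For reference, here is a sketch of the 8-byte segment descriptor layout defined in Intel's manuals. The last two bytes, reserved and unused on the 80286, are exactly the spare bits the i386 reclaimed to widen the base to 32 bits and the limit to 20 bits.

```c
#include <stdint.h>

/* i386 segment descriptor (8 bytes). On the 80286, the last two
   bytes were reserved; the i386 reclaimed them for base bits 24-31,
   limit bits 16-19, and new flags such as the granularity bit,
   which scales the limit in 4-KB units to cover the full 4 GB. */
#pragma pack(push, 1)
typedef struct {
    uint16_t limit_0_15;        /* limit bits 0..15                      */
    uint16_t base_0_15;         /* base  bits 0..15                      */
    uint8_t  base_16_23;        /* base  bits 16..23                     */
    uint8_t  access;            /* segment type, DPL, present bit        */
    uint8_t  limit_16_19_flags; /* limit 16..19 + AVL/L/DB/G (reclaimed) */
    uint8_t  base_24_31;        /* base  bits 24..31 (reclaimed)         */
} SEGMENT_DESCRIPTOR;
#pragma pack(pop)
```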

Practically, it took several years longer before the Microsoft Windows platform was ready to fully support the 32-bit model. Whereas Windows 3.x still exposed the segmented 16-bit memory model to applications, the 32-bit Windows platforms finally presented a flat linear address space. Internally, of course, segmentation was still active, as I will show later in this chapter, but the entire responsibility for managing segments finally had been moved to the operating system. Another essential new feature was the hardware support for paging or, more precisely, demand-paged virtual memory.

This is a technique that allows memory to be backed up by a storage medium other than RAM—a hard disk, for example. With paging enabled, the CPU can access more memory than physically available by swapping out the least recently accessed memory contents to backup storage, making space for new data.

Theoretically, up to 4 GB of contiguous linear memory can be accessed this way, provided that the backup medium is large enough, even if the installed physical RAM amounts to just a small fraction of that memory. Of course, paging is not the fastest way to access memory, and it is always good to have as much physical RAM as possible. But it is an excellent way to work with large amounts of data that would otherwise exceed the available memory.

For example, graphics and database applications require a large amount of working memory, and some wouldn't be able to run on a low-end PC system if paging weren't available. In the paging scheme of the i386, memory is subdivided into pages of 4-KB or 4-MB size. The operating system designer is free to choose between these two options, and it is even possible to mix pages of both sizes. Later I will show that Windows 2000 uses such a mixed page design, keeping the operating system in 4-MB pages and using 4-KB pages for the remaining code and data.

The pages are managed by means of a hierarchically structured page-table tree that indicates for each page where it is currently located in physical memory.

This management structure also contains information on whether the page is actually in physical memory in the first place.
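
The decomposition of a 32-bit linear address into these table indexes takes only a few lines of C. This sketch assumes the standard 4-KB page layout; for 4-MB pages, the directory entry maps the page directly and the low 22 bits serve as the offset.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t linear = 0x80001234;             /* arbitrary example address */

    uint32_t pdi    = (linear >> 22) & 0x3FF; /* 10-bit page-directory index */
    uint32_t pti    = (linear >> 12) & 0x3FF; /* 10-bit page-table index     */
    uint32_t offset =  linear        & 0xFFF; /* 12-bit offset within page   */

    printf("PDI=%lu PTI=%lu offset=0x%03lX\n",
           (unsigned long)pdi, (unsigned long)pti, (unsigned long)offset);
    return 0;
}
```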

If a page has been swapped out to the hard disk, and some module touches an address within this page, the CPU generates a page fault, similar to an interrupt generated by a peripheral hardware device. Next, the page-fault handler inside the operating system kernel will attempt to bring this page back into physical memory, possibly writing other memory contents to disk to make space. Usually, the system will apply a least-recently-used (LRU) schedule to decide which pages qualify to be swapped out.
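
To make the principle concrete, here is a toy LRU replacement policy over three page frames. It is only a sketch of the general idea; Windows 2000's actual working-set management is considerably more refined.

```c
#include <stdio.h>

#define FRAMES 3                 /* deliberately tiny "physical memory" */

static int page_of[FRAMES] = { -1, -1, -1 }; /* -1 = frame still empty */
static unsigned long last_used[FRAMES];
static unsigned long now;

/* Access a page: on a hit, refresh its timestamp; on a fault, evict
   the frame whose page was touched least recently. */
static void touch(int page)
{
    int i, victim = 0;
    now++;
    for (i = 0; i < FRAMES; i++)
        if (page_of[i] == page) { last_used[i] = now; return; } /* hit */
    for (i = 1; i < FRAMES; i++)
        if (last_used[i] < last_used[victim]) victim = i;
    printf("page fault: %d replaces %d\n", page, page_of[victim]);
    page_of[victim] = page;
    last_used[victim] = now;
}

int main(void)
{
    /* Page 2 is evicted to make room for 4, then the least recently
       used page 3 is evicted when 2 is reloaded. */
    int refs[] = { 1, 2, 3, 1, 4, 2 };
    int i;
    for (i = 0; i < 6; i++) touch(refs[i]);
    return 0;
}
```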

By now it should be clear why this procedure is sometimes referred to as demand paging: physical memory contents are moved to the backup storage and back on software demand, based on statistics of the memory usage of the operating system and its applications. The address indirection layer represented by the page tables has two interesting implications. First, there is no predetermined relationship between the addresses used by a program and the addresses found on the physical address bus of the CPU chip.

If you know that a data structure of your application is located at, say, address 0x00140000, you still don't know anything about the physical address of your data unless you examine the page-table tree.

It is up to the operating system to decide what this address mapping looks like. Moreover, the address translation currently in effect is unpredictable, in part because of the probabilistic nature of the paging mechanism. Fortunately, knowledge of physical addresses isn't required in most application cases.

This is something left to developers of hardware drivers. The second implication of paging is that the address space is not necessarily contiguous. Depending on the page-table contents, the 4-GB space can comprise large "holes" where neither physical nor backup memory is mapped, as the sketch below demonstrates.
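
A user-mode program can make these holes visible by walking its own address space with the documented VirtualQuery API; regions reported in the MEM_FREE state are precisely the holes.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *p = NULL;

    /* Walk the user address space region by region. */
    while (VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        printf("%p %10lu KB %s\n",
               mbi.BaseAddress,
               (unsigned long)(mbi.RegionSize / 1024),
               mbi.State == MEM_FREE    ? "free (hole)" :
               mbi.State == MEM_RESERVE ? "reserved"    : "committed");

        p = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
        if (p == NULL)          /* wrapped around the 4-GB space */
            break;
    }
    return 0;
}
```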

If an application tries to read from or write to such an address, it is aborted immediately by the system. Later in this chapter, I will show in detail how Windows 2000 spreads its available memory over the 4-GB address space.

The i386 was followed by a series of compatible successors, most notably the i486 and the Pentium family. Along with higher clock frequencies, these newer models contain optimizations in other areas. For example, the Pentium features a dual instruction pipeline that enables it to execute two operations at the same time, as long as these instructions don't depend on each other.

For example, if instruction A modifies a register value, and the consecutive instruction B uses the modified value for a computation, B cannot be executed before A has finished.

But if instruction B involves a different register, the CPU can execute A and B simultaneously without adverse effects. This and other Pentium optimizations have opened a wide field for compiler optimization. In the context of i386 memory management, three sorts of addresses must be distinguished, termed logical, linear, and physical addresses in Intel's system programming manual for the Pentium (Intel 1999c).

From my experience, Windows will not use more than 4 GB of RAM, simply because it wasn't designed to.

Back in those days, RAM cost an arm and a leg per gigabyte. You might be able to utilize all 6 GB on your server, but it depends: Windows 2000 is a 32-bit OS, so all address pointers are 32 bits, which caps the virtual address space at 4 GB. It is more complicated than that, though. In Windows 2000, 2 GB of the address space (or 3 GB if the /3GB boot switch is used) is reserved for the application to use.

I searched around the Cacti forums and this seems to be a common problem, but I can't find a good solution.

So far, what I have found is this: according to one forum post, monitoring physical memory should work on Windows 2000 SP4. The problem is that this software costs money, can affect server performance, and would have to be deployed to a large number of servers. My Cacti server is running on Windows, so I figured this might work. Why is this so painful? Surely there is a more straightforward way to monitor physical memory usage on Windows?
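
For what it's worth, one lightweight alternative to commercial agents is a tiny probe built on the Win32 GlobalMemoryStatusEx API, which has been available since Windows 2000; a poller such as Cacti could parse its output. This is only a sketch, not a ready-made Cacti data source.

```c
#define _WIN32_WINNT 0x0500      /* GlobalMemoryStatusEx needs Windows 2000+ */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);    /* must be set before the call */

    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    /* %I64u is the MSVC-era format specifier for unsigned __int64. */
    printf("total:%I64u KB used:%I64u KB load:%lu%%\n",
           ms.ullTotalPhys / 1024,
           (ms.ullTotalPhys - ms.ullAvailPhys) / 1024,
           ms.dwMemoryLoad);
    return 0;
}
```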

Any tips or suggestions would be appreciated.

Common (and generally safe) options: there are a few very quick and easy ways to increase the performance of a Windows system, although what works very well for one user may not work at all for another. Adding RAM is almost always safe and almost always results in increased system performance, albeit only up to a certain point: the law of diminishing returns applies, and the more RAM you add, the smaller the performance improvement over each increment will be.

Consider that, eventually, adding more RAM will simply result in a plateau on a system performance graph.

The paging file: size and location DO matter. The paging file has been around since the inception of Windows.


