Memory acquisition and the pagefile(s)

In the past, I have discussed how, in reality, there may be as many as 16 pagefiles on a single host. The next question is, "How much data could be contained in all these pagefiles?" Why does this matter? Well, the more data in the pagefiles, the longer they will take to acquire.
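
As a quick aside, if you want to see how many pagefiles a given host actually has configured, the list lives in the registry under the Memory Management key. Here is a minimal sketch, assuming a Windows host and Python's standard winreg module; each entry in the PagingFiles value names a pagefile along with its initial and maximum sizes in MB.

```python
# Minimal sketch: list the pagefiles configured in the registry.
# Each PagingFiles entry looks like "C:\pagefile.sys <initial MB> <max MB>".
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

for entry in paging_files:
    if entry:  # REG_MULTI_SZ values often end with an empty string
        print(entry)
```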

The size of the pagefiles usually depends on the amount of RAM in the host. If you allow Windows to configure the pagefile(s) automatically, it will typically recommend a total pagefile size of 1.5 times the amount of RAM. Here is an example of the recommended settings on a host with 3.5 GB of memory.

The recommended total pagefile size is 5,371 MB, or approximately 1.5 times 3.5 GB. However, you can also configure the pagefiles manually. Some websites suggest making the pagefile(s) as large as 3 times the size of RAM, which is the maximum size Microsoft has suggested for better performance on Windows XP.
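
To make the arithmetic concrete, here is a small sketch of the 1.5x rule of thumb. The 3,581 MB figure is an assumption on my part; it is roughly the usable RAM a "3.5 GB" host might report, and it makes the numbers line up with the 5,371 MB recommendation above.

```python
# Sketch of the "1.5 x RAM" rule of thumb (ram_mb is a hypothetical value).
ram_mb = 3581                        # ~3.5 GB of usable RAM, in MB (assumed)
recommended_mb = int(ram_mb * 1.5)   # Windows' typical recommendation
print(f"Recommended total pagefile size: {recommended_mb:,} MB")
# -> Recommended total pagefile size: 5,371 MB
```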

As pagefiles get bigger, they will take longer to acquire. Let's look at how large they could be on x64 / EM64T, generically referred to as 64-bit. On 64-bit Windows hosts, 32 bits, or 2^32 possible values, are used to represent the offset into the pagefile where a page was stored. Each page in the pagefile is 4096 bytes, or 2^12. We know there can be as many as 16 pagefiles, or 2^4. Putting it all together:

(Possible Pagefile Offsets) * (Page Size) * (Maximum Number of Pagefiles) = Total Size of Paging Data

(2^32) * (2^12 bytes) * (2^4) = Total Size of Paging Data

2^48 bytes = Total Size of Paging Data

281,474,976,710,656 bytes = Total Size of Paging Data

256 TB = Total Size of Paging Data
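
If you want to sanity-check that math, the same calculation takes only a few lines of Python:

```python
# Reproduce the worst-case paging-data calculation from the text.
max_offsets   = 2 ** 32  # 32-bit offset -> 2^32 addressable pages per pagefile
page_size     = 2 ** 12  # 4,096-byte pages
max_pagefiles = 2 ** 4   # as many as 16 pagefiles

total_bytes = max_offsets * page_size * max_pagefiles
print(f"{total_bytes:,} bytes")        # 281,474,976,710,656 bytes
print(f"{total_bytes // 2 ** 40} TB")  # 256 TB
```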

Now, I know 256 TB is not going to be typical, but acquiring even 4 GB to 12 GB of paging files can take a long time. The pagefiles are in use and locked by the operating system, so to gain access, tools typically parse the file system to locate the sectors that back the pagefiles and read those sectors directly. This prolongs the time required to acquire the files.
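
To put rough numbers on that, here is a back-of-the-envelope sketch. The throughput figures are hypothetical; sector-level reads through a file system parser are often far slower than ordinary file copies, and remote acquisition over a network is slower still.

```python
# Back-of-the-envelope acquisition time for a given size and throughput.
# The throughput numbers below are assumptions, not measurements.
def acquisition_minutes(size_gb: float, throughput_mb_s: float) -> float:
    return size_gb * 1024 / throughput_mb_s / 60

for size_gb in (4, 12):
    for rate in (20, 5):  # hypothetical local vs. remote read rates, MB/s
        print(f"{size_gb} GB at {rate} MB/s: "
              f"{acquisition_minutes(size_gb, rate):.0f} minutes")
```

At an assumed 5 MB/s, even 12 GB of paging files takes roughly 40 minutes to pull.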

Next time in this series, we will discuss more about acquisition time and its implications for the paging files. If this series is boring you, the memory forensics class at Black Hat contains more hands-on applications and use cases. This year, Aaron LeMasters, author of Web Historian 2.0, will be helping with the class. I hope to see you there.