Removing Disabled Users from Membership of Groups in Active Directory

Let's discuss a scenario where one needs to perform routine housekeeping on the groups inside AD, so that they are free of disabled or deleted members. In every organisation, after a user is offboarded, remnants of the account linger as group memberships. For this reason, among many others, group memberships should be kept accurate and current.

When was the account last modified?

While drafting a strategy, we must keep in mind that some users are offboarded on a temporary basis, or may be hired back after a brief period of time. Removing the memberships of these users would create administrative overhead later, so it is better not to remove their memberships in the first place. This can be achieved to some extent by making use of the WhenChanged attribute of a user account object fetched by a Get-ADUser query.

Where are the disabled user accounts kept?

Next, we might also have a special OU that holds all the disabled users, which we can use as a SearchBase in a Get-ADUser query, or to filter the distinguished names of members returned by a Get-ADGroupMember query.
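Putting these two ideas together, a minimal sketch of such a filter might look like the following (the OU path and the three-month threshold below are placeholder assumptions to be adapted to your environment):

```powershell
# Hypothetical OU and threshold -- adjust to your environment
$searchBase = "OU=DisabledUsers,DC=example,DC=com"
$cutoff = (Get-Date).AddMonths(-3)

# Disabled users in the holding OU that have been untouched for 3+ months
Get-ADUser -Filter 'Enabled -eq $false' -SearchBase $searchBase -Properties whenChanged |
    Where-Object { $_.whenChanged -le $cutoff }
```

Both methods below apply this same set of criteria, just starting from different ends.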

Method 1:

The breakdown of the method is explained in points below.

1> We start with all groups in AD, and every group's members are enumerated.

2> Each member is then matched against the below requirement set:

  • The user account should be disabled.
  • The user account should not have been modified within a certain period from now.
  • The user account should reside in a certain OU.

3> After the search has been narrowed, we proceed to remove these users from all their corresponding group memberships.

4> A log file can also be generated for reference.

Import-Module ActiveDirectory

$OutFile = "C:\DisabledUserMembership_$(Get-Date -f yyyy-MM-dd).csv"

#Enumerate all groups in the domain and display each name on screen.
foreach ($group in (Get-ADGroup -Filter *)) {
    Write-Host $group.Name -ForegroundColor "green"

    #For each group, check each member and keep only users residing in the given OU.
    foreach ($member in (Get-ADGroupMember -Identity $group)) {
        if ($member.objectClass -eq "user" -and ($member.distinguishedName.Contains("OU= ,OU= ,DC= ,DC= "))) {

            #Further shortlist only those users which have not been changed in the last 3 months; the period can be modified as needed.
            $date = ((Get-Date).AddMonths(-3))
            $user = Get-ADUser -Identity $member.distinguishedName -Properties whenChanged | Where-Object {$_.whenChanged -le $date}

            #Check whether the shortlisted user is currently disabled; if so, remove the group membership and display the user's name.
            if ($user.enabled -eq $false) {
                Write-Host $user.Name
                Remove-ADGroupMember -Identity $group -Members $user -Confirm:$false
                $OutInfo = $user.name + ";" + $group.name
                Add-Content -Value $OutInfo -Path $OutFile
            }
        }
    }
}

Method 2:

A different approach from the one explained above, and I personally like this one better.

1> We start with all users in AD.

2> Then we refine the fetched users by matching them against our three requirements mentioned before.

3> After the search has been narrowed, we proceed to remove these users from all their corresponding group memberships.

4> A log file can also be generated for reference, like before.

Import-Module ActiveDirectory
$OutFile = "C:\DisabledUserMembership_$(Get-Date -f yyyy-MM-dd).csv"
$date = ((Get-Date).AddMonths(-3))
$SearchBase = "OU= ,OU= ,DC= ,DC= "
$Users = Get-ADUser -Filter * -SearchBase $SearchBase -Properties MemberOf,whenChanged | Where-Object {$_.whenChanged -le $date}

Foreach ($User in $Users){
    If ($User.enabled -eq $false){
        #The MemberOf attribute of the user object gives the groups
        #which the account is a member of.
        $Groups = $User.MemberOf
        Foreach ($Group in $Groups){
            $OutInfo = $User.sAMAccountName + ";" + $Group
            #Display the output on screen.
            $OutInfo
            #Append the output to the out file.
            Add-Content -Value $OutInfo -Path $OutFile
        }
        #Remove the user from all groups it is a member of.
        $User.MemberOf | Remove-ADGroupMember -Members $User -Confirm:$false
    }
}

As can be seen, the second method is sleeker, and hence may prove to be a better solution to our requirement.

This script can then be scheduled to run daily in Task Scheduler to keep the group memberships fresh.
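One way to register such a daily task from PowerShell itself is with the ScheduledTasks module; this is a sketch, and the script path and run time below are assumptions:

```powershell
# Hypothetical script path and schedule -- adjust as needed
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
           -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Remove-DisabledMembers.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am

# Register the task; it should run under an account with rights to modify group membership
Register-ScheduledTask -TaskName "CleanDisabledGroupMembers" -Action $action -Trigger $trigger
```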


Processing windows event viewer logs with Powershell

While working with Windows Event Viewer, it's better to make use of the command

Get-WinEvent

This is more powerful than the Get-EventLog command, which has a more limited scope.

Two major points of difference (courtesy: Managing event logs in PowerShell):

  • Get-WinEvent gives you much wider and deeper reach into the event logs. It can access log providers directly as well as tap into Windows event tracing logs. That said, it’s easier to delve into the content of classic event log entries with Get-EventLog.
  • For remoting, Get-WinEvent uses the built-in Windows event log remoting technology instead of PowerShell remoting. Thus, you’ll find that remote log queries run faster with Get-WinEvent than with Get-EventLog.

A comparison between the two can also be found in Processing Event Logs in PowerShell.

$starttime = (Get-Date).AddDays(-27)
$endtime = (Get-Date).AddDays(-1)
Get-WinEvent -FilterHashtable @{logname='security';StartTime=$starttime;EndTime=$endtime} -MaxEvents 10 | select *

Piping Get-WinEvent to select * reveals all the search parameters that we can make use of. We can also limit the search by including the switch -MaxEvents, which will show only the latest N events. The concept of the hashtable with reference to Get-WinEvent is explained well in Advanced Event Log Filtering Using PowerShell.

Message : The computer attempted to validate the credentials for an account.

Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon Account: usr
Source Workstation: SERVER01
Error Code: 0xC0000064
Id : 4776
Version : 0
Qualifiers :
Level : 0
Task : 14336
Opcode : 0
Keywords : -9218868437227405312345452
RecordId : 27939335804
ProviderName : Microsoft-Windows-Security-Auditing
ProviderId : 548XX25-5X78-4XX4-X5ba-3e3b0XXd
LogName : Security
ProcessId : 612
ThreadId : 156552
MachineName : SERVER01.DOMAIN.CORP
UserId :
TimeCreated : 10.09.2017 14:09:45
ActivityId :
RelatedActivityId :
ContainerLog : security
MatchedQueryIds : {}
Bookmark : System.Diagnostics.Eventing.Reader.EventBookmark
LevelDisplayName : Information
OpcodeDisplayName : Info
TaskDisplayName : Credential Validation
KeywordsDisplayNames : {Audit Failure}
Properties : {System.Diagnostics.Eventing.Reader.EventProperty, System.Diagnostics.Eventing.Reader.EventProperty,
System.Diagnostics.Eventing.Reader.EventProperty, System.Diagnostics.Eventing.Reader.EventProperty}

We should note here that when searching with the Get-WinEvent command, we must provide at least one of the below three search parameters, along with any combination of the rest of the parameters.

LogName -> This can be seen in the "General" tab of the preview pane of the Windows Event Viewer console.

ProviderName -> This can be seen in the "Details" tab of the preview pane of the Windows Event Viewer console.

Path
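As a quick illustration, each of the three can anchor a FilterHashtable query on its own; the log name, provider name, and file path below are examples to be substituted:

```powershell
# By log name
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4776} -MaxEvents 5

# By provider name
Get-WinEvent -FilterHashtable @{ProviderName='Microsoft-Windows-Security-Auditing'} -MaxEvents 5

# By path to an archived .evtx file (example path)
Get-WinEvent -FilterHashtable @{Path='C:\LOGS\Archive-Security.evtx'; Id=4776}
```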

Searching through old archived evtx files

Making use of Path parameter

If we have archived old Event Viewer log files (of the .evtx format) on a network path or local hard drive, then in order to scan through them for a particular event ID, we will find ourselves in a difficult situation with very few tools at our disposal. In this situation the third parameter, Path, will help us.

$starttime = (Get-Date).AddDays(-27)
$endtime = (Get-Date).AddDays(-22)
$ls = gci "C:\LOGS\*" |
Where {$_.LastWriteTime -gt $starttime -and $_.LastWriteTime -lt $endtime}

foreach ($l in $ls){
if ($l.name -notmatch "Temp"){
$a = Get-WinEvent -FilterHashtable @{Path="$l";id="4738"} -ErrorAction SilentlyContinue |
select message, id, level, providername, logname, processid, threadid, machinename, userid, timecreated, containerlog |
where {$_.message -like '*user1*'}
$a
$a | Out-File "C:\log.txt" -Append
}}

Parsing through a string in message field of an event log

In the above example we have parsed through the text in the Message field of the event logs corresponding to event ID 4738, looking for an account "user1" that was changed. We couldn't have done this parsing from the hash table, hence we resorted to the use of Where-Object.

We will wrap up the discussion here 🙂

Migrating from Windows Server 2012 R2 to Windows Server 2016

As a first step in migrating an old Windows server to a new Windows Server 2016, it is always better to chalk out a strategy or a checklist. This will help ensure no hiccups arise in the middle of the process.

In this direction, the first step can be listing out the roles and features installed on the server. At this stage we would also identify the things that will not automatically come over to the new server once it is brought into the domain and promoted.

In my example I am considering a server "server1" installed with Windows Server 2012 R2. It is an additional domain controller and also has the DNS and DHCP roles installed on it. "server1" is in a DHCP failover relationship (hot standby) with another DC, "server2".

Following are the things that we need to manually configure on the new server, which is painstaking.

1> During the roles and features installation stage, we need to manually install the same roles and features.

2> Conditional Forwarders (if any) configured at server level.

3> DHCP server level configurations.

4> IP configuration, which in this case will be the same as the old server's.

Now we will try to automate these steps one by one. There surely are other ways to do this, like the command line and GUI approaches, but we will restrict ourselves to PowerShell here.

1> We can perform the below steps to automate the installation of roles and features, since selecting them all manually can take time and can also lead to errors.

First, using PowerShell, we need to create an answer file from the already existing server "server1", containing a list of all installed roles and features.

Get-WindowsFeature |
? { $_.Installed -AND $_.SubFeatures.Count -eq 0 } |
Export-Clixml \\server3\c$\ServerFeatures.xml

Second, from an elevated PowerShell session on the new server "server3", we will import the same answer file, and it will automatically install all the roles and features in that file.

$ServerFeatures = Import-Clixml c:\ServerFeatures.xml
foreach ($feature in $ServerFeatures) {
Install-WindowsFeature -Name $feature.name
}

2> We can make use of the PowerShell module DnsShell to export DNS conditional forwarders from "server1" to "server3". First we should download the module, which is also compatible with PowerShell version 4.0, and then import it as follows.

import-module "C:\Program Files (x86)\WindowsPowerShell\Modules\DnsShell"

To make the module available to all users logging into the system, it should be placed in "C:\Program Files (x86)\WindowsPowerShell\Modules\".

Get-DnsZone -ZoneType Forwarder -Server server1 | New-DnsZone -Server server3

In this way we pipe the DNS conditional forwarder settings to the new server in one command.

3> Now we will look into transferring the server-level configurations that we might have on the old server "server1". We will first execute the below command in PowerShell to create a backup of the DHCP configuration and leases.

Export-DhcpServer -ComputerName server1 -File "c:\dhcpexport.xml" -Leases

Then we can import this exported file on the new server, either by specifying the UNC path of the file or by first manually copying the file to a new location. Kindly note that we need to specify here the DHCP backup directory, which can be seen in the properties of the DHCP server in the DHCP console.

Import-DhcpServer -ComputerName server3 -File C:\dhcpexport.xml -BackupPath C:\dhcpbackup\ -ServerConfigOnly

We need not export other configurations, as the old server "server1" is in a DHCP failover relationship with "server2". Hence, when we reconfigure the failover between "server2" and "server3", the complete configuration will get replicated to the new server "server3", except the server-level configuration. We can refer to this link for setting up a DHCP failover.
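A minimal sketch of re-creating the hot-standby failover relationship from PowerShell might look like this; the relationship name and scope ID are placeholder assumptions:

```powershell
# Hypothetical relationship name and scope ID -- replace with the actual scope(s)
Add-DhcpServerv4Failover -ComputerName server2 -Name "server2-server3" `
    -PartnerServer server3 -ScopeId 10.0.0.0 -ServerRole Active
```

Specifying -ServerRole puts the relationship into hot standby mode rather than load balance.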

4> Before the old server "server1" is demoted, unjoined from the domain and shut down permanently, we should export the IP configuration of the server and store it in a migration store folder.

Export-SmigServerSetting -IPConfig -Path "c:\ipexport\" -Verbose

It would also be better to export the IP configuration to a text file from the command line.

IPConfig -all >  c:\ipconfig.txt

Now we can import the configuration on the new server "server3" by first copying both "ipconfig.txt" and the migration store folder to "server3" and then executing the below command in PowerShell. We need to perform this step only when the old server "server1" has been completely removed. We must also know the MAC addresses of the source and destination NICs.

Import-SmigServerSetting -IPConfig All -SourcePhysicalAddress "XX-XX-XX-XX-XX-XX" -TargetPhysicalAddress "YY-YY-YY-YY-YY-YY" -Path "c:\ipexport\" -Verbose

For detailed IP configuration migration steps, we can refer to this TechNet link: IP Configuration: Migrating IP Configuration Data.

This concludes our discussion here.

Working with csv and xlsx files using powershell Part-2

Continuing from where we left in part-1, we’ll dive further in this topic.

Copy a range of cells from one excel sheet and paste in another

[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$xl1 = New-Object -ComObject Excel.Application
$xl1.visible = $true
$wb1 = $xl1.workbooks.open($xlsx3)
$xl1.Worksheets.item(1).name = "Worksheet1"
$ws1 = $wb1.worksheets.Item(1)

$ws1.Activate()

$lr1 = $ws1.UsedRange.rows.count
$r1 = $ws1.Range("A1:F$lr1")
$r1.copy()

[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$xl2 = New-Object -ComObject Excel.Application
$xl2.visible = $true
$wb2 = $xl2.workbooks.open($xlsx4)
$ws2 = $wb2.worksheets.Item(1)
$ws2.activate()

$r2 = $ws2.Range("A1:F$lr1")
$ws2.Paste($r2)

$wb1.save()
$wb2.save()
$xl1.Workbooks.Close()
$xl2.Workbooks.Close()
$xl1.Quit()
$xl2.Quit()

We can change the cell selection as per our need, e.g.

$lr1 = ($ws1.UsedRange.rows.count)-1

This will take the total row count 'r' and subtract one. As a result the variable '$lr1' will store the numeric value 'r-1'. After this we can just specify the selection range as

$r1 = $ws1.Range("A1:F$lr1")

This will select all the cells containing data, starting from cell 'A1' up to the second-to-last row. Instead of two steps, it can be combined into one as

$r1 = $ws1.Range("A1:F$($ws1.UsedRange.rows.count-1)")

We can paste the selection to any place in the excel by altering the range as below.

$lr2 = $r1.Rows.Count

This will store the count of rows contained in the copy selection that we have made and store it in variable ‘$lr2’.

$r2 = $ws2.Range("A2:F$lr2")
$ws2.Paste($r2)

Note that the source and destination cell ranges should be exactly the same size.

While we are discussing the Excel COM object and PowerShell, the PowerShell Excel Cookbook Ver 2 can't go without mention. It covers almost everything one can come across while working to automate Excel with PowerShell.

Inserting values in specific cells

$ws1.cells.item(1,2) = "Hello"

This will store the string 'Hello' in cell 'B1'. The index (1,2) corresponds to the cell at the intersection of the first row and second column.

Sorting a column

$rsort1=$ws1.usedrange
$rsort2 = $ws1.Range("A2")
[void]$rsort1.sort($rsort2, 1)

First we select all the cells where data is present and store this range in the variable '$rsort1'. After this we pick the cell in the column from which onwards we want the sorting to be done. So in this case, rows will be sorted on column 'A', starting from row '2', in ascending order because of the parameter '1'.

Summing values in each column and displaying the sum at the bottom of respective column

$r1 = $ws1.usedRange
$row1 = $r1.rows.count
$Sumrow1 = $row1 + 1
$r2 = $ws1.Range("B2:B$row1")
$r3 = $ws1.Range("C2:C$row1")
$r4 = $ws1.Range("D2:D$row1")
$r5 = $ws1.Range("E2:E$row1")
$r6 = $ws1.Range("F2:F$row1")
$functions = $xl1.WorkSheetfunction
$ws1.cells.item($Sumrow1,1) = "Total"
$ws1.cells.item($Sumrow1,2) = $functions.sum($r2)
$ws1.cells.item($Sumrow1,3) = $functions.sum($r3)
$ws1.cells.item($Sumrow1,4) = $functions.sum($r4)
$ws1.cells.item($Sumrow1,5) = $functions.sum($r5)
$ws1.cells.item($Sumrow1,6) = $functions.sum($r6)
[void]$r1.entireColumn.Autofit()

First we store the count of all the rows containing data in the variable $row1. Assuming that the first row contains the headers, we select from the 2nd row to the end. After this we make use of the 'WorksheetFunction' property of the Excel object '$xl1' that we have created, to call the 'sum' function. We then sum the values in columns 'B' to 'F' and store the totals at their bottom. After the operation, we can 'autofit' the entire data range. Alternatively, the column width can be adjusted by specifying it explicitly, as below.

$ws1.Columns.Item('a').columnWidth = 20

Sometimes, if the datatype of the values is set to text/general, the summing operation might fail. So we have to set the datatype prior to this, as follows.

$ws1.Columns.Item('a').Numberformat="0"
$ws1.Columns.Item('b').Numberformat="0"
$ws1.Columns.Item('c').Numberformat="0"
$ws1.Columns.Item('d').Numberformat="0"
$ws1.Columns.Item('e').Numberformat="0"
$ws1.Columns.Item('f').Numberformat="0"

Adding chart to worksheet

Let's first select the chart, and the chart type, that we will put in the worksheet.

$ch = $ws1.shapes.addChart().chart
$ch.chartType = 58

Then we will set the data range that we want to chart. We can also give the chart a suitable title.

$ch.setSourceData($range1)
$ch.HasTitle = $true
$ch.ChartTitle.Text = "My chart"

Now we will select the cell range where we want to place the chart. If we want to adjust the chart size, it can be done by altering the range as below.

$RngToCover = $ws1.Range("A$($range1.Rows.Count + 5):F$($range1.Rows.Count + 23)")
$ChtOb = $ch.Parent
$ChtOb.Top = $RngToCover.Top
$ChtOb.Left = $RngToCover.Left
$ChtOb.Height = $RngToCover.Height
$ChtOb.Width = $RngToCover.Width

Cell formatting

Below are some basic formatting operations that can be done on a cell range.

$range.select()
$range.HorizontalAlignment = -4108
$range.VerticalAlignment = -4107
$range.BorderAround(1,4,1)
$range.WrapText = $false
$range.Orientation = 0
$range.AddIndent = $false
$range.IndentLevel = 0
$range.ShrinkToFit = $false
$range.ReadingOrder = -5002
$range.MergeCells = $false
$range.Borders.Item(12).Weight = 2
$range.Borders.Item(12).LineStyle = 1
$range.Borders.Item(12).ColorIndex = -4105

Working with csv and xlsx files using powershell Part-1

This is part of a series of posts discussing various operations that can be done on .csv and .xlsx files using PowerShell.

Changing delimiter of csv file:

Import-Csv -Delimiter "," -Path $p|Export-Csv -Delimiter ";" -Path $p1 -NoTypeInformation

Replacing a character from a column in csv file:

Import-Csv -Delimiter ";" -Path $p1 -Header "1","2","3"|
Select-Object *| ForEach-Object {
"{0};{1};{2}" -f $_.1,($_.2).replace("Mr.",""),$_.3>> $p2
}

Removing header from a csv file:

Import-Csv -Delimiter ";" -Path $p2 -Header "1","2","3"|
Select-Object -Skip 1 >> $p3

Inserting headers in a headerless csv file:

import-csv -Delimiter ";" -Path $p4 -Header "Team","Player name","Status"|
Export-Csv -Delimiter ";" -Path $p5 -NoTypeInformation

Converting csv into xlsx file:

# Create a new Excel workbook with one empty sheet
[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$excel = New-Object -ComObject excel.application
$excel.DisplayAlerts=$false
$workbook = $excel.Workbooks.Add(1)
$worksheet = $workbook.worksheets.Item(1)

# Build the QueryTables.Add command and reformat the data
$delimiter = ";"
$TxtConnector = ("TEXT;" + $p5)
$Connector = $worksheet.QueryTables.add($TxtConnector,$worksheet.Range("A1"))
$query = $worksheet.QueryTables.item($Connector.name)
$query.TextFileOtherDelimiter = $delimiter
$query.TextFileParseType = 1
$query.TextFileColumnDataTypes = ,1 * $worksheet.Cells.Columns.Count
$query.AdjustColumnWidth = 1

# Execute & delete the import query
$query.Refresh()
$query.Delete()

# Save & close the Workbook as XLSX.
$workbook.SaveAs($xlsx)
$excel.Quit()

Filter columns of an excel file:

Now we will make use of a module PSExcel which makes data manipulation very easy.

import-module "C:\Program Files (x86)\WindowsPowerShell\Modules\PSExcel-master\PSExcel"

[threading.thread]::CurrentThread.CurrentCulture = 'en-US'

Import-XLSX $xlsx -sheet Sheet1 |
Where-Object {($_.'Team' -in @("KKR","KVIP","CSK"))`
-and ($_.'Status' -notin @("bench","not selected"))}|
Export-XLSX $xlsx1

Create Pivot table from a worksheet of an excel file:

Import-XLSX $xlsx1 -sheet Sheet1 |
Export-XLSX $xlsx2 -PivotRows "Team" -PivotColumns "Status" -PivotValues "Player name" -ChartType pie

Merging 2 excel worksheets over a common column

$lt = Import-XLSX $xlsx1 -Sheet Sheet2 -header "Name","Employee Number","Department"

$rt = Import-XLSX $xlsx2 -Sheet Worksheet2 -header "Name","Salary"

Join-Object -Left $lt -Right $rt -LeftJoinProperty "Name" -RightJoinProperty "Name" -type AllInBoth -RightProperties "Salary"|
Export-XLSX $xlsx3 -WorksheetName Mergedoutput

The other options of joining are:

1> -AllInLeft

2> -AllInRight

3> -AllInBoth

4> -OnlyIfInBoth

I’ll try to explore more operations that can be done specifically with xlsx files in part-2 of this series.

Collect basic and dynamic disk details from remote computers

In this post we will discuss some PowerShell scripts to get details like used space, free space, total size, etc. of all the drives on remote computers. The list of remote computers can be fed to the script as a text file. Let's discuss the scripts one by one.

Using Win32_LogicalDisk

$CSVFile = "C:\Disk space report.csv"
"Computer Name;Drive Name;Volume Name;Size(GB);Free(GB);Used(GB);Used(%)" | out-file -FilePath $CSVFile -append

#domain admin's credentials
$cre = get-credential
$machines = get-content "C:\serverset.txt"
foreach ($machine in $machines) 
{
$disk=Get-WmiObject Win32_LogicalDisk -filter "DriveType=3" -computer $machine -Credential $cre| Select SystemName,DeviceID,VolumeName,Size,freespace
foreach ($d in $disk)
{
#to set language pack for current thread as en-US
[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$DeviceID=($d.DeviceID).Replace(":", "")

$VolumeName=$d.VolumeName
#Round(($d.size/1gb), 2) rounds the value to two decimal places
$size=[Math]::Round(($d.size/1gb), 2)

$freespace=[Math]::Round(($d.freespace/1gb), 2)
#Used space is simply the total size minus the free space
$usespace=[Math]::Round((($d.size - $d.freespace)/1gb), 2)

$usespacepc=[Math]::Round(((($d.size - $d.freespace)/$d.size)*100), 2)

$exporttocsv =$machine+";"+$DeviceID+";"+$d.VolumeName+";"+$size+";"+$freespace+";"+$usespace+";"+$usespacepc
$exporttocsv | Out-File -Append $CSVFile
  }
}

Note that while on one hand we can get free space quite easily, we need to perform calculations to get used space.

Using Get-PSDrive

Now we will try to do the same thing, but this time without doing any calculation, which can result in higher speed and accuracy. We'll make use of the Get-PSDrive command to get used space and free space directly. However, we still need the Get-WmiObject command to get the total size of the drive and the volume name.

$CSVFile = "C:\Disk space report.csv"
"Computer Name;Drive Name;Volume Name;Size(GB);Free(GB);Used(GB);Used(%)" | out-file -FilePath $CSVFile -append
Write-Host "Computer Name;Drive Name;Volume Name;Size(GB);Free(GB);Used(GB);Used(%)"

$cre = get-credential
$machines = get-content "C:\serverset.txt"
foreach ($machine in $machines) {
#=================================================== 
#---To get Volume names and Total size, we will-----
#---use Win32_LogicalDisk, and to get Used Space----
#---and free space, we will use Get-PSDrive---------
#===================================================
$disk=Get-WmiObject Win32_LogicalDisk -filter "DriveType=3" -computer $machine -Credential $cre | Select Size,DeviceID,VolumeName

$disk1=Invoke-Command -ComputerName $machine {Get-PSDrive} -Credential $cre | Select-Object PSComputerName,Name,Used,Free,Provider|
where-object Provider -match "FileSystem"

foreach ($d in $disk)
{
[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$deviceid=($d.DeviceID).Replace(":", "")

foreach ($d1 in $disk1)
{

if ($d1.Name -match $deviceid){
$VolumeName = $d.VolumeName
[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$size=[Math]::Round(($d.size/1gb), 2)

$freespace=[Math]::Round(($d1.free/1gb), 2)

$usespace=[Math]::Round(($d1.used/1gb), 2)

$usespacepc=[Math]::Round((($d1.used/$d.size)*100), 2)

$exporttocsv =$d1.PSComputerName+";"+$deviceid+";"+$VolumeName+";"+$size+";"+$freespace+";"+$usespace+";"+$usespacepc
$exporttocsv
$exporttocsv| Out-File -Append $CSVFile
Break }
    }
  }
}

Using Win32_Volume

Now we will see how we can make use of Win32_Volume to get the desired output.

$CSVFile = "C:\Disk space report.csv"
"Computer name,Drive name,Size(GB),Free(GB),Used(GB),Used(%)" | out-file -FilePath $CSVFile -append
$machines = get-content "C:\Users\umurarka\Desktop\servers.txt"
$cre = Get-Credential
foreach ($machine in $machines) 

{
$disk = Get-WmiObject -ComputerName $machine -Class Win32_Volume -Credential $cre -ErrorAction SilentlyContinue
foreach ($d in $disk)
{

$size=[Math]::Round(($d.Capacity/1gb), 2)

$freespace=[Math]::Round(($d.freespace/1gb), 2)

$usespace=[Math]::Round((($d.Capacity - $d.freespace)/1gb), 2)

$usespacepc=[Math]::Round(((($d.Capacity - $d.freespace)/$d.Capacity)*100), 2)

$exporttocsv = $machine+","+$d.DriveLetter+","+$size+","+$freespace+","+$usespace+","+$usespacepc
$exporttocsv| Out-File -Append $CSVFile
  }
}

A Fast Way of getting disk space report through UNC paths

Using the above methods sometimes comes with problems like access denied. To overcome these challenges, we can use UNC paths. This approach is also many times faster than any of the above methods.

#===================================================
#---function for getting disk space from UNC--------
#===================================================
[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
function getDiskSpaceInfoUNC($p_UNCpath, $p_unit = 1GB, $p_format = '{0:N1}')
{
    # unit, one of --> 1kb, 1mb, 1gb, 1tb, 1pb
    $l_typeDefinition = @' 
        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] 
        [return: MarshalAs(UnmanagedType.Bool)] 
        public static extern bool GetDiskFreeSpaceEx(string lpDirectoryName, 
            out ulong lpFreeBytesAvailable, 
            out ulong lpTotalNumberOfBytes, 
            out ulong lpTotalNumberOfFreeBytes); 
'@
    $l_type = Add-Type -MemberDefinition $l_typeDefinition -Name Win32Utils -Namespace GetDiskFreeSpaceEx -PassThru

    $freeBytesAvailable     = New-Object System.UInt64 # differs from totalNumberOfFreeBytes when per-user disk quotas are in place
    $totalNumberOfBytes     = New-Object System.UInt64
    $totalNumberOfFreeBytes = New-Object System.UInt64

    $l_result = $l_type::GetDiskFreeSpaceEx($p_UNCpath,([ref]$freeBytesAvailable),([ref]$totalNumberOfBytes),([ref]$totalNumberOfFreeBytes)) 

    $totalBytes     = if($l_result) { $totalNumberOfBytes    /$p_unit } else { '' }
    $totalFreeBytes = if($l_result) { $totalNumberOfFreeBytes/$p_unit } else { '' }

    New-Object PSObject -Property @{
        Path      = $p_UNCpath
        Total     =  [math]::round($totalBytes , 2 , 1)
        Used      =  [math]::round($totalBytes -  $totalFreeBytes , 2 , 1)
        Used_percent      = [math]::round((($totalBytes -  $totalFreeBytes)/$totalBytes)*100 , 2 , 1)
        Free      = [math]::round($totalFreeBytes , 2 , 1)
    } 
}

#===================================================
$CSVFile = "C:\Disk space report.csv"
$machines = get-content "C:\servers.txt"
foreach ($machine in $machines) 
 
{
#===================================================
#---With the help of Win32_Share, we will filter----
#---and refine our search results to include only---
#---the drive letters, e.g. C$----------------------
#===================================================
$ds = get-WmiObject -class Win32_Share -computer $machine -ErrorAction SilentlyContinue|
? {($_.description -NotLike "*printer*") -and ($_.name -notlike "SYSVOL") -and ($_.name -like "?$")} |
 select Name

 foreach ($d in $ds) {
 $p = join-path \\$machine\ $d.name

[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
getDiskSpaceInfoUNC $p |ft -AutoSize
getDiskSpaceInfoUNC $p |ft -AutoSize -HideTableHeaders| Out-File -Append $CSVFile

  }
}

What to do if computer has some dynamic disks also?

The above PowerShell commands will only list the basic disks in the system. But if the system also has dynamic disks, then we are left with only a small set of options, described below. This is part of Microsoft's strategy of phasing out dynamic disks, as can be seen in the link Tip of the Day: Dynamic Disks and Windows PowerShell.

Win32_DiskPartition

Get-WmiObject Win32_DiskPartition -filter "Type='Logical Disk Manager'" |select *|ft -autosize

A more detailed query.

[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
Get-WMIObject Win32_DiskPartition | `
Sort-Object DiskIndex, Index | `
Select-Object -Property `
@{Expression = {$_.DiskIndex};Label="Disk"},`
@{Expression = {$_.Index};Label="Partition"},`
@{Expression = {$_.Caption};Label="Caption"},`
@{Expression = {$_.Type};Label="Type"},`
@{Expression = {Get-DriveLetter($_.__PATH)};Label="Drive"},`
@{Expression = {$_.BootPartition};Label="BootPartition"},`
@{Expression = {"{0:N3}" -f ($_.Size/1Gb)};Label="Size_GB"},`
@{Expression = {"{0:N0}" -f ($_.BlockSize)};Label="BlockSize"},`
@{Expression = {"{0:N0}" -f ($_.StartingOffset/1Kb)};Label="Offset_KB"},`
@{Expression = {"{0:N0}" -f ($_.StartingOffset/$_.BlockSize)}; Label="OffsetSectors"},`
@{Expression = {IF (($_.StartingOffset % 64KB) -EQ 0) {" Yes"} ELSE {" No"}};Label="64KB"}|ft -AutoSize|out-file "C:\output.txt"

Note that Get-DriveLetter in the above query is a custom helper function rather than a built-in cmdlet, so that column can be omitted if no such function is defined. To distinguish between basic and dynamic disks in the output, note the entry under the heading "Type":

A basic disk is the one with the entry "Installable File System".

A dynamic disk is the one with the entry "Logical Disk Manager".

Diskpart

Running the command "List Disk" inside diskpart gives a listing of all disks attached to the system.

But to output the same to a text file, we need a workaround.

Step1> Create a text file (e.g. script.txt) in Notepad and save it on the C:\ drive, with the following command written inside.

List Disk

Step2> From a command prompt, execute the below command to get the output.

diskpart /s C:\script.txt > C:\output.txt

Getting folder owners’ details from remote computers

In this short article, we will try to get information about the owners of folders on remote computers and the permissions that they have on them.

The below script starts at the top of the C: drive and then recursively gathers ownership and permission information about all files and subfolders within.

gci "C:\*" -Recurse | Get-Acl |
Select-Object Path, Owner -ExpandProperty Access |
Select-Object @{n="Path";e={$_.Path.Replace("Microsoft.PowerShell.Core\FileSystem::","")}}, Owner, IdentityReference, FileSystemRights, AccessControlType, IsInherited |
Export-Csv "C:\folder_owner_output.csv" -NoTypeInformation

This post has been majorly inspired by this blog post Find file / folder owner information using PowerShell or command prompt 

I recently got hold of another script that can be really useful if we are just interested in getting folder names and their owners. It can also get owner details of folders at a particular level in a folder hierarchy. In my case I want folder owner details at the second level below the root folder only, using a UNC path. Including "$_.PSIsContainer -eq $True" restricts the search base to folders only and excludes files.

$arr = @()
gci "\\server01\*\*" | ? {$_.PSIsContainer -eq $True} | % {
$obj = New-Object PSObject
$obj | Add-Member NoteProperty Name $_.Name
$obj | Add-Member NoteProperty Owner ((Get-ACL $_.FullName).Owner)
$arr += $obj
}
$arr
$arr | Export-CSV -notypeinformation "C:\folder_owner_output.csv"
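If we additionally want to know how many folders each owner holds, the collected objects can be summarized in place; a small sketch using the same $arr from the script above:

```powershell
# Group the collected folder objects by owner and count them.
$arr | Group-Object Owner |
    Select-Object Name, Count |
    Sort-Object Count -Descending
```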

Outlook Automation using PowerShell

Although Microsoft does not recommend unattended automation of its Office products, we can leverage PowerShell to automate some common Outlook-related operations.

Searching for a particular subject line and then downloading its attachments

# use a file-system-safe date format (the short-date format "d"
# can contain "/" characters, which are invalid in file names)
$date = get-date -format yyyy-MM-dd

$filepath = "C:\attachments\"
$attr = $date + "_report.csv"
$path = Join-Path $filepath $attr

$olFolderInbox = 6
[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$outlook = new-object -com outlook.application;
$ns = $outlook.GetNameSpace("MAPI");
$inbox = $ns.GetDefaultFolder($olFolderInbox)
$messages = $inbox.items

foreach($message in $messages){
$msubject = $message.subject
write-host "Messages Subject: $msubject"

if ($msubject -like "*Daily Report*"){
$message.attachments|foreach {
Write-Host $_.filename
Write-Host "Attachment is saved at $path"
$_.saveasfile(($path))}
$TargetFolder = $inbox.Folders.Item('Processed')
$message.Move($TargetFolder)
break
}else{continue}
}

Prior to running the above script we must create a folder named "Processed" under the Outlook "Inbox". Also, scheduling this in Task Scheduler can be troublesome, because Outlook COM automation requires the account running the script to be logged on.
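One workaround is an interactive-only scheduled task; a minimal sketch, assuming a hypothetical task name and script path (the /it switch makes the task run only while the account has an interactive session open):

```powershell
# Hypothetical task name and script path; /it restricts the task to run
# only while the specified user is logged on interactively.
schtasks /create /tn "DailyOutlookReport" /sc daily /st 08:00 /it `
    /tr "powershell.exe -File C:\scripts\get-attachments.ps1"
```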

Sending Mail

$From = "abc@company.com"
$to = "xyz@company.com"
$Cc = "pqr@company.com"

$Subject = "Daily Report"
$Body = "PFA The Excel Report containing details."
$SMTPServer = "server01.company.corp"

Send-MailMessage -From $From -to $To -Cc $Cc -Subject $Subject `
-Body $Body -SmtpServer $SMTPServer -Attachments $path

write-host "Mail sent with report attached"

In the above script we have not made use of the Outlook COM object. As a result it can run as an unattended script scheduled in Task Scheduler, and execution is fast.

A similar result can also be achieved with the .NET System.Net.Mail classes directly, as can be seen below.

$Username = "MyUserName";
$Password= "MyPassword";

function Send-ToEmail([string]$email, [string]$attachmentpath){

    $message = new-object Net.Mail.MailMessage;
    $message.From = "YourName@gmail.com";
    $message.To.Add($email);
    $message.Subject = "subject text here...";
    $message.Body = "body text here...";
    $attachment = New-Object Net.Mail.Attachment($attachmentpath);
    $message.Attachments.Add($attachment);

    $smtp = new-object Net.Mail.SmtpClient("smtp.gmail.com", "587");
    $smtp.EnableSSL = $true;
    $smtp.Credentials = New-Object System.Net.NetworkCredential($Username, $Password);
    $smtp.send($message);
    write-host "Mail Sent" ; 
    $attachment.Dispose();
 }
Send-ToEmail  -email "reciever@gmail.com" -attachmentpath $path;

The above script has been adapted from a solution posted at how to send email with powershell

Working with DHCP failover information in a Windows Server 2012 environment

In this article, we will discuss a method to get information about the DHCP failover configuration in our domain and the corresponding DHCP server names (primary and secondary). Along with the DHCP server names, we also get the scope ID, subnet mask, and the start and end of the range. All the collected data is then appended to a CSV file.

A point to note is that prior to Windows Server 2012, DHCP failover was only possible in a Windows Server based cluster environment. From Windows Server 2012 onwards, DHCP failover can be configured very easily from the DHCP console between two nodes. The link Step-by-Step: Configure DHCP for Failover gives the background of what I mentioned here, along with some more information. The following discussion therefore applies only to Windows Server 2012 setups or newer.

[threading.thread]::CurrentThread.CurrentCulture = 'en-US'
$CurrentDate = Get-Date -format dddd
$CSVFile = "C:\Dhcp_report_$CurrentDate.csv"
$machines = Get-DhcpServerInDC

"DHCPServer,SecondaryDHCPServer,ScopeName,ScopeID,Subnetmask,Startrange,Endrange" |out-file -FilePath $CSVFile -append

foreach ($machine in $machines)
{
$DHCPName = $machine.dnsname
if ($DHCPName -notin @("server01.company.corp","server02.company.corp")){
$AllScopes = Get-DhcpServerv4Scope -ComputerName $DHCPName
foreach ($scope in $AllScopes)
{
$ScopeName = $scope.Name
$ScopeId = $scope.ScopeId
$failoverscope = Get-dhcpserverv4failover -ComputerName $DHCPName -ScopeId $ScopeId -ErrorAction SilentlyContinue
if ($failoverscope) {
$SecDHCP = ($failoverscope.SecondaryServerName)
}
else {
$SecDHCP = "no failover"
}
$Subnetmask = $scope.subnetmask
$Startrange = $scope.startrange
$Endrange = $scope.endrange
$OutInfo = ($DHCPName).Replace(".company.corp","") + "," + ($SecDHCP).Replace(".company.corp","") + "," +$ScopeName + "," + $ScopeId+ "," + $Subnetmask + "," + $Startrange + "," + $Endrange
Write-Host $OutInfo
Add-Content -Value $OutInfo -Path $CSVFile
}
}
}
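Instead of hand-building CSV lines by string concatenation, the same data can be emitted as objects and piped to Export-Csv, which handles quoting and headers; a sketch using the same variable names as the script above:

```powershell
# Build one object per scope and let Export-Csv do the formatting.
$row = [PSCustomObject]@{
    DHCPServer          = $DHCPName -replace '\.company\.corp$'
    SecondaryDHCPServer = $SecDHCP  -replace '\.company\.corp$'
    ScopeName           = $ScopeName
    ScopeID             = $ScopeId
    Subnetmask          = $Subnetmask
    Startrange          = $Startrange
    Endrange            = $Endrange
}
$row | Export-Csv $CSVFile -NoTypeInformation -Append
```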

DHCP Server Cmdlets in Windows PowerShell gives a full list of PowerShell cmdlets associated with DHCP.

Alerting if a change of failover is detected

Using the same cmdlets, we shall create another script that first collects all authorised DHCP servers in the domain, then sequentially scans through a list of servers to see whether each is authorised. It also detects whether there has been a change of failover state from the main DHCP server to its partner. Finally, if the main DHCP server's failover state is active, it checks that the DHCP services are up and running.

We can even insert a code block for sending mail if an unexpected change of failover state is detected. The script can be scheduled in Task Scheduler to perform this check regularly and catch unwanted changes early.

$authdhcps = Get-DhcpServerInDC | select -ExpandProperty dnsname
$machines = get-content "c:\servers.txt"
foreach ($machine in $machines)
{
 $fqdn = [System.Net.Dns]::GetHostByName("$machine").HostName
 if ($authdhcps -contains $fqdn){
  write-host $machine "is an authorised dhcp server"
  $a = Get-DhcpServerv4Failover -ComputerName $machine

  if ($a.ServerRole -eq "Active"){
   $p = $a.PartnerServer
   Write-Host "$machine is active and $p is in standby. Now checking DHCP server and client service status."
   $dhcpServices = Get-Service -ComputerName $machine -Name *dhcp*

   foreach ($dhcpService in $dhcpServices){
    if ($dhcpService.Status -ne "Running"){
     Write-Host "Starting" $dhcpService.DisplayName "service on $machine"
     Start-Service $dhcpService
    }
    else{
     Write-Host $dhcpService.DisplayName "service is already running on $machine"
    }
   }
  }
  else{
   $p = $a.PartnerServer
   Write-Host "$machine is standby; now checking if partner server $p is an authorised dhcp server"

   if ($authdhcps -contains [System.Net.Dns]::GetHostByName("$p").HostName){
    write-host $p "is an authorised dhcp server. Now checking DHCP server and client service status."
    $dhcpServices = Get-Service -ComputerName $p -Name *dhcp*

    foreach ($dhcpService in $dhcpServices){
     if ($dhcpService.Status -ne "Running"){
      Write-Host "Starting" $dhcpService.DisplayName "service on $p"
      Start-Service $dhcpService
     }
     else{
      Write-Host $dhcpService.DisplayName "service is already running on partner $p"
     }
    }
   }
   else{
    write-host $p "is not an authorised dhcp server"
   }
  }
 }
 else{
  write-host $machine "is not an authorised dhcp server"
 }
}

Lastly, note that there are two ways of getting the FQDN (DNS name) of a server, one used in each of the two scripts above.

First is,

$DHCPName = $machine.dnsname

And second will be,

[System.Net.Dns]::GetHostByName("$p").HostName

Another thing that can often create confusion is that prior to Windows Server 2012, the Services console would show the status of a running service as "Started".

From Windows Server 2012 onwards, the status column of a running service shows "Running".
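In PowerShell itself this difference does not matter; a minimal sketch, assuming it runs on the machine being checked:

```powershell
# Get-Service exposes the .NET ServiceControllerStatus enumeration, whose
# value for a healthy service is "Running" on old and new servers alike,
# so scripts should always compare against "Running".
$svc = Get-Service -Name Dhcp   # the DHCP Client service
if ($svc.Status -eq 'Running') {
    Write-Host "DHCP client service is running"
}
```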


Editing folder permissions with PowerShell

In this post, we will discuss modifying folder NTFS permissions without the GUI on a Windows platform. Modifying NTFS permissions manually through the GUI is understandable for a small environment, but it can get really hectic and time consuming in a large one. Let's have a look at the following PowerShell script, which is a step towards automating this task.


$CSVFile = "C:\log.csv"
$machines = get-content "C:\servers.txt"
foreach ($machine in $machines)
{
<# get all network path names of shared folders from another text file, 
then process through each path recursively
sample path file:
   \\$machine\Users
   \\$machine\Departments
Instead of inputting text file, we can also use:
$paths = gi "H:\Users\*","H:\Departments\*" #>
 $paths = (get-content "C:\paths.txt").replace('$machine', $machine)
 foreach ($path in $paths)
  {
<# -----part-1 (disable inheriance)-----
To change access of an object on folder which has been inherited
from parent folder, we need to disable the inheritance first.
So, traverse one level down in each path, then check there only for 
folders, then disable inheritance at that level. #>
   get-childitem -Path $path\* | ? { $_.PsIsContainer } | % {
<# We could have also used $acl = Get-Item $_ | get-acl
But this would not work in cases where the account who is launching 
the script is not an owner of the folder whose permissions are being 
changed. So we need to tell powershell to only look for 'Access' by 
using a piped code structure. #>    
   $acl = (Get-Item $_).GetAccessControl('Access')
<# first arguement ($true) will disable inheritance and 
second arguement ($false) will remove all ACEs inherited  
from parent folder. #>
   $acl.SetAccessRuleProtection($true, $false)
   Set-Acl -path $path -aclObject $acl | Out-Null
<# ----part-2 changing premissions of an account---
To avoid the problem of ownership, here also we first
try to tell powershell that we are only interested in 
accesses of each acl and not anything extra, by using
a loop structure. #>
   $acl2 = Get-Acl -path $path
   $accesses = $acl2.Access
# get ace from each acl recursively
   foreach ($access in $accesses)
    {
     $ids = $access.IdentityReference.Value
     foreach ($id in $ids)
      {
       if ($id -eq "$machine\User")
        {
<# when conditions true, remove all permissions for 
the mentioned account only #>
         $acl2.RemoveAccessRule($access) | Out-Null
        }
      }             
<# provide Read and Execute permission at parent or root folders level 
to the same account #>
     Set-Acl -path $path -aclObject $acl2 | Out-Null
     $permission = $name,'ReadandExecute','ContainerInherit,ObjectInherit','None','Allow'
     $aclrule = New-Object system.security.accesscontrol.filesystemaccessrule -ArgumentList $permission
     $acl.SetAccessRule($aclrule)
     Set-Acl -path $path -aclObject $acl2 | Out-Null
     $OutInfo = $machine+","+$path+","+$aclrule.IdentityReference+","+$aclrule.FileSystemRights
     Write-Host $OutInfo             
     Add-Content -Value $OutInfo -Path $CSVFile                                
    }
  }
}

Here the inheritance flag is set for both container and object, whereas the propagation flag is None.

The following table will help in setting the propagation and inheritance flags.

    ╔═════════════╦═════════════╦═══════════════════════════════╦════════════════════════╦══════════════════╦═══════════════════════╦═════════════╦═════════════╗
    ║             ║ folder only ║ folder, sub-folders and files ║ folder and sub-folders ║ folder and files ║ sub-folders and files ║ sub-folders ║    files    ║
    ╠═════════════╬═════════════╬═══════════════════════════════╬════════════════════════╬══════════════════╬═══════════════════════╬═════════════╬═════════════╣
    ║ Propagation ║ none        ║ none                          ║ none                   ║ none             ║ InheritOnly           ║ InheritOnly ║ InheritOnly ║
    ║ Inheritance ║ none        ║ Container|Object              ║ Container              ║ Object           ║ Container|Object      ║ Container   ║ Object      ║
    ╚═════════════╩═════════════╩═══════════════════════════════╩════════════════════════╩══════════════════╩═══════════════════════╩═════════════╩═════════════╝
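As an illustration, here is a minimal sketch (with a hypothetical account name) mapping the "sub-folders and files" column of the table onto a FileSystemAccessRule:

```powershell
# Hypothetical account; the rule applies to sub-folders and files only:
# Inheritance = ContainerInherit|ObjectInherit, Propagation = InheritOnly.
$permission = 'DOMAIN\SomeUser','ReadAndExecute',
              'ContainerInherit,ObjectInherit','InheritOnly','Allow'
$aclrule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList $permission
```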

We can speed things up by employing the psexec utility with the above PowerShell script, as discussed in my previous posts. Note that Get-Acl and Set-Acl fail for folders the current user does not own: Set-Acl tries to write the owner field of the security descriptor as well, which is certainly not the intention in most cases.

For further reading, refer to this TechNet link: Windows PowerShell Tip of the Week, where [System.Security.AccessControl.FileSystemRights] (a .NET Framework file system rights enumeration) and security descriptors are discussed in greater detail.
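To see every right the enumeration defines, its names can be listed directly:

```powershell
# Enumerate all values of the FileSystemRights enumeration,
# e.g. ReadAndExecute, Modify, FullControl, ...
[System.Enum]::GetNames([System.Security.AccessControl.FileSystemRights])
```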

There is another approach to the above scenario, which makes use of the PowerShell module "NTFSSecurity" (File System Security PowerShell Module 4.2.3 by Raimund Andree). We can achieve the desired effect by proceeding as follows.


<# First we need to import the module "NTFSSecurity".
If we want the module to be usable by all users on the system,
then it should be placed under the path:
C:\Program Files (x86)\WindowsPowerShell\Modules\NTFSSecurity #>
Import-Module "C:\Program Files (x86)\WindowsPowerShell\Modules\NTFSSecurity"
<# This will list all the commands which are
available in this module #>
Get-Command -Module NTFSSecurity
<# This will disable inheritance only one level below the root
"test" folder. Remove the \* from the path if inheritance is to
be disabled at all levels inside. #>
get-item "\\server1\test\*" | Disable-NTFSAccessInheritance
<# Now we first get the access of the account whose access we want to
change, then we remove its access from the folder.
Then we add the desired access for the same account on the folder. This
is particularly useful when we want to remove the special permissions
granted to an account, which may present a security risk. #>
Get-Item -Path "\\server1\test" | Get-NTFSAccess -Account Users | Remove-NTFSAccess
Add-NTFSAccess -Path "\\server1\test" -Account Users -AccessRights ReadAndExecute -AppliesTo ThisFolderAndSubfoldersOneLevel

Before proceeding with the solutions presented in this post, take a full backup of the ACLs of the folder in question using the icacls command. The TechNet blog How to Back Up and Restore NTFS and Share Permissions provides all the details required for a quick backup and restore of ACLs.
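A minimal sketch of such a backup, assuming hypothetical folder and backup file paths:

```powershell
# Save the ACLs of the whole tree (/t) and continue on errors (/c).
icacls "D:\Shares\Dept" /save "C:\acl_backup.txt" /t /c

# Restore later; note that /restore is run against the PARENT folder,
# because the saved file stores paths relative to it.
icacls "D:\Shares" /restore "C:\acl_backup.txt"
```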

After wrestling with these and many other solutions, sometimes it feels like sticking to the basics does the job best. A simple command line approach is both fast and accurate on a large folder structure. We can use icacls to do the same things we have discussed so far with far less effort. For this purpose I am sharing this link: iCACLS.exe (2003 sp2, Vista+), which covers all the applications of this command.