In computing, the same-origin policy (SOP) is a crucial component of the web application security model. Under this policy, a web browser permits scripts contained in a first web page to access data in a second web page only if both pages share the same origin, where an origin is defined by the combination of URI scheme, host name, and port number.
This safeguard prevents malicious scripts from gaining unauthorized access to sensitive data on another web page through the Document Object Model (DOM).
This is particularly significant for modern web applications that rely on HTTP cookies to maintain user sessions, because servers act on cookie information to reveal sensitive data or to perform actions that alter the application's state. To uphold data confidentiality and integrity, the client side must maintain a strict separation between content from unrelated sites.
Notably, the same-origin policy applies only to scripts; resources such as images, CSS, and dynamically loaded scripts can be embedded across origins through the corresponding HTML tags (web fonts are a notable exception). Unfortunately, attacks can take advantage of the fact that the same-origin policy does not apply to these HTML tags.
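To make the origin check concrete, here is a small sketch of my own (the URLs are illustrative, not from any particular application) that derives an origin from the scheme, host, and port and compares a few URLs:

function origin(u: string): string {
  // The URL API exposes the three components the same-origin policy compares.
  const { protocol, hostname, port } = new URL(u);
  return `${protocol}//${hostname}:${port || (protocol === "https:" ? "443" : "80")}`;
}

// Same origin: scheme, host, and port all match; only the path differs.
console.log(origin("http://www.example.com/dir/page.html") === origin("http://www.example.com/other/page.html")); // true

// Different origin: the scheme differs (http vs. https).
console.log(origin("http://www.example.com/") === origin("https://www.example.com/")); // false

// Different origin: the host differs (www vs. docs).
console.log(origin("http://www.example.com/") === origin("http://docs.example.com/")); // false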
Cross-Site Scripting (XSS) is a type of security vulnerability that occurs when an attacker injects malicious scripts into web pages that are then viewed by other users. This can happen when a web application does not properly validate or sanitize user input before including it in the output it generates.
There are three main types of XSS attacks:
Stored XSS (Persistent XSS): In this scenario, the malicious script is permanently stored on the target server. It is then served to users whenever they access a particular page or resource, making it a persistent threat. For example, an attacker might inject a script into a forum post or a comment on a website.
Reflected XSS (Non-Persistent XSS): In this case, the injected script is included in the URL and is reflected off the web server to the user’s browser. The user typically receives a malicious link and, when they click on it, the script is executed. Reflected XSS attacks are often embedded in phishing emails or malicious websites.
DOM-based XSS: This type of XSS involves manipulation of the Document Object Model (DOM) in the user's browser. Instead of exploiting a vulnerability on the server, the attacker abuses client-side code that writes attacker-controlled data into the DOM, causing unintended behavior (a minimal example follows this list).
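To make the DOM-based case concrete, here is a minimal vulnerable sketch of my own (the element ID and parameter name are hypothetical): the page takes attacker-controlled data from the URL fragment and writes it into the DOM with innerHTML, so a crafted link executes script without the payload ever reaching the server.

// Vulnerable pattern: data from the URL fragment is written straight into the DOM.
const params = new URLSearchParams(window.location.hash.slice(1));
const name = params.get("name") ?? "guest";
// A link whose fragment carries something like #name=<img src=x onerror=alert(1)> runs the attacker's script here.
document.getElementById("greeting")!.innerHTML = "Welcome, " + name;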
Impact of XSS Attacks:
Session hijacking: Attackers can steal session cookies, allowing them to impersonate users and perform actions on their behalf.
Defacement of websites: Attackers may modify the content of web pages to display offensive or misleading information.
Theft of sensitive information: Malicious scripts can capture keystrokes or other sensitive data entered by users on compromised pages.
Distribution of malware: Attackers can use XSS to deliver and execute malware on users' devices.
To prevent XSS attacks, developers should implement proper input validation and output encoding. Input validation ensures that user input meets the expected criteria, while output encoding ensures that any user input displayed on a web page is encoded so the browser renders it as text rather than executing it as script. Additionally, defense-in-depth measures such as a Content Security Policy (CSP) can help mitigate the risk of XSS attacks.
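As a rough sketch of what output encoding looks like in practice (my own example, not tied to any particular framework), the function below replaces the characters that are significant in HTML before user input is echoed back, and the trailing comment shows the kind of restrictive CSP response header that acts as a second line of defense:

// Encode characters with special meaning in HTML so user input renders as text, not markup.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Example: an injected script tag is rendered harmless.
console.log(escapeHtml('<script>alert("xss")</script>'));
// -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;

// A restrictive policy such as:
//   Content-Security-Policy: default-src 'self'
// blocks inline scripts even if an encoding step is missed somewhere; the exact policy depends on the application.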
Every ClearPass deployment needs a way to authenticate its administrative users. In this guide, I'll explain a simple method of using Active Directory for ClearPass admin logins. I use Active Directory because it is already widely deployed, which lets us integrate seamlessly without standing up a new system or relying solely on the local admin database in ClearPass.
ClearPass Operator Login – Duplicate the Current Service
ClearPass uses its own authentication process for administrators. When you click the login button on the ClearPass login page, ClearPass generates a TACACS+ request and authenticates the user against a service. The default service for this purpose is “[Policy Manager Admin Network Login Service]”:
Copy this service and place it at the top of the services list. Rename the service, modify the Authentication Sources to include Active Directory, and change the “Strip Username Rules” section so that it looks like the screenshot below:
Create and Apply Role Mapping and Enforcement
Create a role mapping policy to map Active Directory group membership to the appropriate role for administrative access.
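As a rough sketch (the group and role names here are placeholders, not taken from my screenshots), a single role-mapping rule could look like this: if Authorization:<your AD source>:memberOf CONTAINS “ClearPass-Admins”, assign a role such as “CP-Super-Admin”. The enforcement policy then ties that role to the appropriate TACACS+ admin enforcement profile.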
I am using the default enforcement policy that came with the cloned default service:
This saves me time, but feel free to build your own rules and policies. Just remember to include the conditions mentioned above in your policy as a fallback, so that you can still rely on the local admin account in case of a disaster. Along the same lines, change the default admin account password to a long, complex one and store it safely.
Log out of ClearPass and log in again using an AD account. To be safe, test the login from a different browser without logging out of your current session first.
If you can log in successfully, everything is configured correctly. You can then disable the old service by clicking the green light at the end of its row; it will turn red.
Additionally, check the login with the built-in account to ensure the fallback plan is functioning correctly.
In EAP-TLS, a digital certificate replaces the username and password used by PEAP. If a user is disabled in AD but still holds a certificate issued by ClearPass or an internal PKI, access will be granted the next time the user authenticates: when ClearPass receives the request, it checks the PKI infrastructure via OCSP or CRL to verify whether the certificate has been revoked, but it does not check AD to see whether the account is disabled. If OCSP is configured correctly and the user's certificate has been revoked, access will be denied. If revoking user certificates cannot be done promptly, the userAccountControl attribute should be used to keep disabled users off the network.
To accomplish this, we must first change the LDAP query at CPPM > Configuration > Authentication > Sources > click on the proper source > Attributes > click Authentication > Filter Query:
While this step is optional, it allows users to be located using either sAMAccountName or userPrincipalName (UPN), which is the prevailing approach for generating user certificate Common Names (CNs). Modifying the Filter Query to incorporate both UPN and sAMAccountName eliminates the need for “Strip Username Rules.”
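For reference, a filter along these lines follows the common pattern (this is a sketch to adapt to your environment, not the exact string from my lab):

(&(|(sAMAccountName=%{Authentication:Username})(userPrincipalName=%{Authentication:Username}))(objectClass=user))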
2. Next, we must add the LDAP attribute named “userAccountControl” to the Active Directory authentication source. This is done under CPPM > Configuration > Authentication > Sources > click on the proper source > click Authentication > click to add a new entry:
The following is a list of the common userAccountControl flags. In this scenario, the only flag we care about is 512 – Enabled Account. In the policy we are about to write, every authentication request will query AD to retrieve the value of userAccountControl. If the user account has a value of 512, access will be granted; if userAccountControl returns anything other than 512, access will be denied.
512 – Enabled Account
514 – Disabled account
544 – Account Enabled – Require user to change password at first logon
4096 – Workstation/server
66048 – Enabled, password never expires
66050 – Disabled, password never expires
262656 – Smart Card Logon Required
532480 – Domain controller
3. Next, we need to create a role mapping that uses the userAccountControl attribute. We also check that the client certificate was issued by the trusted certificate authority. Setting up both conditions is straightforward: the first checks whether the certificate was issued by the internal CA, and the second checks the user's authorization data for a userAccountControl value of 512. If both checks pass, the user receives a role that grants internal network access.
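As a rough sketch (the CA name and authentication source name are placeholders, not taken from my screenshots), the two conditions could look like this:

Certificate:Issuer-CN EQUALS <your internal issuing CA>
Authorization:<your AD source>:UserAccountControl EQUALS 512

If both conditions match, the rule assigns the role used for internal network access; anything else falls through and the enforcement policy denies it.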
4. Finally, once userAccountControl is included in a service, Access Tracker will show the account status for each authenticating user. The two RADIUS requests below show a user with account status 512 – Enabled.
This article does a great job of summarizing the process already, but I wanted to describe a few caveats and explain the formatting of the script a bit more clearly:
Here is an example config of this with all of the required components:
config system automation-action
edit "AutomatedConfigBackup"
unset script
set script "execute backup config ftp \"/Fortinet_Backups/FortigateBackup.conf\" 10.10.10.10 \"Domain\\UserHere\" PasswordHere"
next
end
config system automation-trigger
edit "AutomatedConfigBackup"
set trigger-type scheduled
set trigger-hour 22
set trigger-minute 58
next
end
edit "AutomatedConfigBackup_FTP"
set trigger "AutomatedConfigBackup"
config actions
edit 1
set action "AutomatedConfigBackup"
set required enable
next
end
All of this is easy enough to follow, except for the format of the backup script command itself. Let's analyze it further to see why it is formatted this way:
set script "execute backup config ftp \"/Fortinet_Backups/FortigateBackup.conf\" 10.10.10.10 \"Domain\\UserHere\" PasswordHere"
Because the entire script is wrapped in quotes for the FortiOS “set script” command, you have to use the backslash (\) escape character so that the quotes around the path and the username are stored exactly as written.
The path must be enclosed in quotes for the command to work. For a Windows FTP server, use forward slashes (/) in the path and a backslash between the domain and the username (domain\user).
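Putting that together, once FortiOS strips the outer quotes and the escape characters, the command that actually runs on the schedule is:

execute backup config ftp "/Fortinet_Backups/FortigateBackup.conf" 10.10.10.10 "Domain\UserHere" PasswordHere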
From the documentation:
Special characters
The following characters cannot be used in most CLI commands: <, >, (, ), #, ', and "
If one of those characters, or a space, needs to be entered as part of a string, it can be entered by using a special command, enclosing the entire string in quotes, or preceding it with an escape character (backslash, \).
If you are using the SSL VPN web portal and you are allowing access to resources that are reachable over a site-to-site VPN tunnel from the FortiGate, there are several considerations to keep in mind.
With the web portal, the firewall proxies the connection from the user to the actual resource. If the resource is on the local network and the firewall has a route to the destination, it will source the traffic from one of its interfaces that is “closest” to the end device. This may be the local LAN/internal interface.
However, if the resource is reachable only across a site-to-site VPN tunnel, the firewall will try to source this traffic from the IP “closest” to the site-to-site VPN tunnel. If no IP address is explicitly configured on the VPN tunnel interface itself, it will choose the public IP used for the VPN tunnel termination. In many cases, that public IP isn't allowed to pass traffic across the actual VPN, so the connection is rejected.
The best thing to do is to assign an IP address to the tunnel interface itself. Once this is configured, the firewall will source the proxied traffic from that address. Additionally, you must make sure this IP address is allowed across the VPN: check the proxy IDs, the routes on the other end, and any security rules on both ends that control traffic access.
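On a FortiGate, that could look something like the sketch below (the interface name and addresses are placeholders; pick a small subnet that both ends agree on and that is permitted across the VPN):

config system interface
edit "SiteB-VPN"
set ip 10.255.255.1 255.255.255.255
set remote-ip 10.255.255.2 255.255.255.255
next
end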
Some organizations may choose to set up their remote access VPN so that it connects automatically before a user logs into the machine. There can be a number of reasons for this, but in this case the VPN client must use a machine certificate to authenticate to the VPN.
I have encountered the following situation with this pre-logon VPN setup. A machine running GlobalProtect with always-on, pre-logon settings reboots and sits at the Windows login screen. The machine successfully authenticates to the VPN using a machine certificate. Now the user logs into the machine. What should happen is that the GlobalProtect client detects the user login and changes the VPN session from “pre-logon” to the actual user logged into the machine. That user portion of the connection can authenticate with single sign-on credentials or with a user certificate.
This process is well documented and should happen seamlessly. However, the documentation doesn't mention that the GlobalProtect agent will actually make another DNS request for the GlobalProtect gateway. Let's assume the gateway address is vpn.example.com. Prior to any VPN connection, the machine uses whatever DNS servers are configured, say 8.8.8.8 and 1.1.1.1. Public DNS resolves vpn.example.com to its public IP and the connection goes through successfully.
When the user logs in, the machine is already connected to the VPN and is therefore using internal DNS servers. Because GlobalProtect makes a fresh DNS request for the gateway after the user logs in, that request fails to resolve the external gateway address and GlobalProtect fails to connect. I confirmed this by analyzing the logs of an affected system and seeing the DNS request fail to resolve.
There are two ways to fix this. One is to add a DNS A record to the internal DNS servers that maps vpn.example.com to its public IP. The other is to create a DNS split-tunnel entry in the GlobalProtect settings to exempt vpn.example.com from being resolved by the DNS servers over the tunnel, so that it is resolved by whatever DNS servers are configured on the end system instead.
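For the first option, the internal record is just an ordinary A record that points the gateway name at its public address. In BIND-style zone syntax (the address below is a documentation placeholder), it would look like this:

vpn.example.com.    IN    A    203.0.113.10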
I have encountered situations where a customer needs to route all traffic, including Internet-bound traffic, from a branch location back to a main location to ensure consistent security policies, content filtering, or some other requirement.
In this scenario, let's assume there is a firewall and an ISP connection at the branch location. Normally, the firewall would have a default route pointing to the ISP gateway for all Internet traffic. When you add a site-to-site VPN tunnel on a firewall that uses route-based VPNs and want to send all traffic across this tunnel, you must create a default route that points to the tunnel interface. Fortinet and Palo Alto are examples of firewalls that use route-based VPNs where this applies.
Because you are changing the default route to point at the site-to-site VPN, it is critical to also add a specific host route for the VPN peer's public IP with the ISP gateway as the next hop. This ensures the firewall still has a route to reach the VPN peer itself; otherwise it would try to reach the peer through the tunnel, and the tunnel would never come up.
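On a FortiGate, for example, the pair of routes might look like the following sketch (the interface names and addresses are placeholders; 203.0.113.50 stands in for the VPN peer's public IP and 198.51.100.1 for the ISP gateway):

config router static
edit 1
set dst 0.0.0.0 0.0.0.0
set device "HQ-VPN"
next
edit 2
set dst 203.0.113.50 255.255.255.255
set gateway 198.51.100.1
set device "wan1"
next
end

Here edit 1 is the new default route out the tunnel interface, and edit 2 is the host route that keeps the VPN peer reachable via the ISP.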
Background: In the past, when using the agentless method of User-ID with a Palo Alto firewall, WMI was used for the firewall to connect to a domain controller and parse Windows security logs to find user to IP mappings. However, on June 14, 2022, Microsoft released patch KB5004442 for Windows Server to address the vulnerability described in CVE-2021-26414. This patch essentially breaks the WMI connection from the firewall to the server. This is described well here: https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA14u000000wkkfCAA
The new solution is to use WinRM in place of WMI. This process is very well documented, so I won't go into detail here, but I did want to mention that I had to add the service account used for this to the “Remote Management Users” group in Active Directory; otherwise it wouldn't work properly and would show “Access Denied” for the server. The links below are helpful in configuring this, and only one of them mentions the Remote Management Users group.