Recently, I had a chance to work with FortiIdentity Cloud, which is Fortinet’s replacement for FortiToken Mobile. In this use case, I simply needed MFA with FortiTokens and SMS, and have not used any of the other features of FortiIdentity Cloud. One user had a very old phone that did not support the FortiToken app, so SMS was required. Keep in mind that SMS is considered a poor choice for MFA security, and app-based tokens are strongly preferred. Only local accounts were used; no centralized user repository like RADIUS or Active Directory was involved.
What I found is that SMS authentication can be configured only by the root FortiIdentity Cloud admin account. Delegated administrators have limited visibility and cannot modify the Authentication Type for all users.
Steps to enable SMS authentication for a user:
Create or sync the user on the FortiGate. This can be a FortiGate admin user or VPN user locally created on the firewall.
The user will automatically appear in FortiIdentity Cloud.
Log into FortiIdentity Cloud using the root admin account.
This account is required to view all synchronized users.
Open the user profile and change Authentication Type → SMS.
Note: All new users default to FortiToken.
Save the configuration. The user can now authenticate with SMS OTP.
Note: Nothing special needs to be done on the FortiGate when creating this user. You don’t have to specify SMS authentication via the CLI; just add the phone number. The FortiGate will default to FortiToken, and you don’t have to change this on the firewall itself. The switch to SMS happens solely in FortiIdentity Cloud in this case.
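For reference, creating such a local VPN user from the CLI might look like the sketch below. The user name and phone number are made up, and the exact two-factor keyword (here fortitoken-cloud) can vary by FortiOS version, so treat this as an outline rather than exact syntax:

```text
config user local
    edit "jsmith"
        set type password
        set passwd "SomeStrongPassword"
        set two-factor fortitoken-cloud
        set sms-phone "+15551234567"
    next
end
```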
If you’ve ever tried connecting your MacBook to a T‑Mobile iPhone hotspot only to find that FortiClient, GlobalProtect, or other VPNs refuse to pass traffic or connect at all, you’re not alone. The hotspot provides perfectly normal internet access, but the VPN either fails to connect or connects and passes no traffic.
Here’s the short version of why it happens and the simple workaround that fixes it.
The Cause: T‑Mobile’s IPv6‑Only Hotspot Path
T‑Mobile’s mobile network is largely IPv6‑only, and when your Mac connects to an iPhone hotspot, it often receives only an IPv6 address, not a real IPv4 lease. To keep IPv4‑only apps working, the carrier uses DNS64/NAT64 translation mechanisms that synthesize IPv6 addresses for IPv4 destinations. This system works for normal browsing but breaks many enterprise VPN clients, which often expect either:
A real IPv4 path, or
A fully IPv6‑capable VPN gateway (many aren’t)
As a result, the VPN may connect but fail to pass traffic—or fail during negotiation—when used over a T‑Mobile hotspot.
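To make the translation concrete, DNS64 embeds the IPv4 address of a destination inside an IPv6 address, typically using the well-known 64:ff9b::/96 prefix. A minimal Python sketch of that synthesis:

```python
import ipaddress

# DNS64 well-known prefix (RFC 6052 / RFC 6147)
WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the DNS64 prefix."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(WELL_KNOWN_PREFIX.network_address) + int(v4))

print(synthesize("192.0.2.33"))  # 64:ff9b::c000:221
```

An IPv4-only VPN client handed an address like 64:ff9b::c000:221 instead of 192.0.2.33 has no IPv4 path to use, which is exactly where the trouble starts.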
The Easy Workaround
Fortunately, the fix is simple:
Force IPv4 on the Mac for this hotspot SSID.
You can do this by disabling IPv6 on the Wi‑Fi interface whenever you join the hotspot. Once IPv6 is off, the Mac stops using DNS64/NAT64, and the VPN behaves normally again. This avoids the synthetic IPv6 addressing that causes the VPN tunnel to break.
A Smooth Way to Automate It
Instead of toggling settings manually, macOS lets you create a dedicated Network Location:
Create a new location, e.g., “T‑Mobile Hotspot IPv4”
With that location selected, open your Wi‑Fi interface → Details → TCP/IP
Set Configure IPv6 → Off
Switch to this location whenever you join the hotspot
Now you have a one‑click way to force IPv4 whenever you tether.
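If you prefer the command line, the same toggle can be scripted with macOS’s built-in networksetup tool. The service name “Wi-Fi” is an assumption; confirm yours first:

```shell
# List network services to confirm the Wi-Fi service name
networksetup -listallnetworkservices

# Turn IPv6 off for the Wi-Fi service before joining the hotspot
sudo networksetup -setv6off Wi-Fi

# Restore automatic IPv6 configuration when you're done tethering
sudo networksetup -setv6automatic Wi-Fi
```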
Wrap‑Up
The problem isn’t your Mac, your VPN client, or even your employer’s firewall—it’s simply how IPv6‑only networks interact with VPN protocols that weren’t built with translation layers in mind. For now, flipping your Mac into IPv4‑only mode on that hotspot SSID is the cleanest and most reliable solution.
In computing, the same-origin policy (SOP) is a crucial component of the web application security model. This policy dictates that a web browser allows scripts from an initial web page to access data on a second web page, but only if both pages share the same origin, determined by a combination of URI scheme, host name, and port number.
This safeguard prevents malicious scripts from gaining unauthorized access to sensitive data on another web page through the Document Object Model (DOM).
This is particularly significant for modern web applications relying on HTTP cookies for user sessions, where servers use cookie information to disclose sensitive details or execute actions that alter the application’s state. To uphold data confidentiality and integrity, a strict separation of content from unrelated sites is essential on the client-side.
Notably, the same-origin policy pertains only to scripts, allowing resources like images, CSS, and dynamically-loaded scripts to be accessed across origins through corresponding HTML tags, although fonts pose a notable exception. Unfortunately, attacks can exploit the absence of the same-origin policy for HTML tags.
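In code, the origin check described above amounts to comparing a (scheme, host, port) tuple; a minimal Python sketch:

```python
from urllib.parse import urlsplit

def origin(url: str):
    """Return the (scheme, host, port) tuple that defines a URL's origin."""
    parts = urlsplit(url)
    # Fall back to the scheme's default port when none is specified
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

print(same_origin("https://example.com/page1", "https://example.com:443/page2"))  # True
print(same_origin("https://example.com/", "http://example.com/"))                 # False (scheme differs)
print(same_origin("https://example.com/", "https://api.example.com/"))            # False (host differs)
```

Note that a subdomain is a different origin, even though people often think of it as “the same site.”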
Cross-Site Scripting (XSS) is a type of security vulnerability that occurs when an attacker injects malicious scripts into web pages that are then viewed by other users. This can happen when a web application does not properly validate or sanitize user input before including it in the output it generates.
There are three main types of XSS attacks:
Stored XSS (Persistent XSS): In this scenario, the malicious script is permanently stored on the target server. It is then served to users whenever they access a particular page or resource, making it a persistent threat. For example, an attacker might inject a script into a forum post or a comment on a website.
Reflected XSS (Non-Persistent XSS): In this case, the injected script is included in the URL and is reflected off the web server to the user’s browser. The user typically receives a malicious link and, when they click on it, the script is executed. Reflected XSS attacks are often embedded in phishing emails or malicious websites.
DOM-based XSS: This type of XSS involves the manipulation of the Document Object Model (DOM) in a user’s browser. Instead of exploiting a vulnerability on the server, the attacker manipulates the client-side code, making changes to the DOM and causing unintended behavior.
Impact of XSS Attacks:
Session hijacking: Attackers can steal session cookies, allowing them to impersonate users and perform actions on their behalf.
Defacement of websites: Attackers may modify the content of web pages to display offensive or misleading information.
Theft of sensitive information: Malicious scripts can capture keystrokes or other sensitive data entered by users on compromised pages.
Distribution of malware: Attackers can use XSS to deliver and execute malware on users’ devices.
To prevent XSS attacks, developers should implement proper input validation and output encoding. Input validation ensures that user input meets the expected criteria, while output encoding ensures that any user input displayed on a web page is properly encoded to prevent script execution. Additionally, the use of secure coding practices, such as Content Security Policy (CSP), can help mitigate the risk of XSS attacks.
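As a minimal illustration of output encoding, Python’s standard library html.escape neutralizes injected markup before it is placed into a page. The render_comment helper here is hypothetical, just to show the idea:

```python
import html

def render_comment(user_input: str) -> str:
    # Output-encode user input so injected markup is displayed as text,
    # not executed by the browser
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

payload = "<script>alert(document.cookie)</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```

The injected <script> tag arrives in the page as inert text rather than executable code, which is the whole point of output encoding.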
When configuring ClearPass, you must authenticate your administrative user. In this guide, I’ll explain a simple method to utilize Active Directory for ClearPass admin login. I use Active Directory since it’s widely used, allowing us to seamlessly integrate without establishing new systems or relying on the admin database in ClearPass.
ClearPass Operator Login – Duplicate the Current Service
ClearPass utilizes its own authentication process. When you click the login button on the ClearPass login page, ClearPass generates a TACACS request and authenticates the user using a service. The default service for this purpose is “[Policy Manager Admin Network Login Service]”:
Copy this service and place the copy at the top of the service list. Rename the service, add Active Directory to the Authentication Sources, and adjust the “Strip Username Rules” section so it looks like the screenshot below:
Create and Apply Role Mapping and Enforcement
Create a role mapping policy to map Active Directory group membership to the appropriate role for administrative access.
I am using the default Enforcement policy that was on the default cloned service:
This saves me time, but feel free to establish your own rules and policies. Remember to include the conditions mentioned above in your policy as a fallback; this ensures you can rely on the local admin account in case of a disaster. Also, change the default admin account password to a secure, complex one and store it safely.
Log out from ClearPass and log in again using an AD account. For added security, use a different browser to test the login without logging out first.
If you can log in successfully, the configuration is correct. You can then deactivate the old default service by clicking the green light at the end of its row; it will turn red.
Additionally, check the login with the built-in account to ensure the fallback plan is functioning correctly.
In EAP-TLS, a digital certificate replaces the username and password used by PEAP. If a user is disabled in AD but still holds a certificate issued from ClearPass or an internal PKI, access will still be granted the next time the user authenticates: ClearPass checks only whether the certificate has been revoked, using OCSP or CRL against the PKI infrastructure, and does not check AD for account disablement. If OCSP is configured correctly and the user’s certificate has been revoked, access will be denied. If revoking user certificates cannot be completed promptly, userAccountControl should be employed to prevent disabled users from gaining access to the network.
To accomplish this, first we must change the LDAP query at CPPM > Configuration > Authentication > Sources > click on the proper source > Attributes > Authentication > Filter Query:
While this step is optional, it allows users to be located using either sAMAccountName or userPrincipalName (UPN), which is the prevailing approach for generating user certificate Common Names (CNs). Modifying the Filter Query to incorporate both UPN and sAMAccountName eliminates the need for “Strip Username Rules.”
2. Next, we must include the LDAP attribute named “userAccountControl” in the server settings for the Active Directory Authentication Source. This is done under CPPM > Configuration > Authentication > Sources > click on the proper source > Authentication > click to add a new entry:
The following is a list of the common userAccountControl flags. In this scenario, the only flag we care about is 512 – Enabled Account. In the policy we are about to write, every authentication request will query AD to retrieve the value of userAccountControl. If the user account has a status of 512, access will be granted; if userAccountControl returns anything other than 512, access will be denied.
512 – Enabled Account
514 – Disabled account
544 – Account Enabled – Require user to change password at first logon
4096 – Workstation/server
66048 – Enabled, password never expires
66050 – Disabled, password never expires
262656 – Smart Card Logon Required
532480 – Domain controller
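Note that these values are sums of individual bit flags: 514 is 512 + 2, where 2 is the ACCOUNTDISABLE bit, and 66048 is 512 + 65536, the “password never expires” bit. The exact-match policy on 512 described above is therefore deliberately strict; a bitwise check, sketched in Python below, would also accept enabled accounts such as 66048:

```python
# Common userAccountControl bit flags (values per Microsoft's AD documentation)
ACCOUNTDISABLE       = 0x0002    # 2
NORMAL_ACCOUNT       = 0x0200    # 512
DONT_EXPIRE_PASSWORD = 0x10000   # 65536

def is_enabled(uac: int) -> bool:
    # An account is disabled whenever the ACCOUNTDISABLE bit is set,
    # regardless of which other flags are also present
    return not (uac & ACCOUNTDISABLE)

print(is_enabled(512))    # True  (NORMAL_ACCOUNT only)
print(is_enabled(514))    # False (512 + 2)
print(is_enabled(66048))  # True  (512 + 65536, password never expires)
print(is_enabled(66050))  # False (514 + 65536)
```

If your environment uses accounts with “password never expires” set, keep this in mind: an exact match on 512 would deny those enabled accounts, so choose the comparison that fits your policy.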
3. Next, we need to create a Role Mapping using the userAccountControl attribute. We also check that the client certificate was issued by the trusted Certificate Authority. Setting up both conditions is straightforward: the policy first checks that the certificate was issued from the internal CA, then that the user authorizes with a userAccountControl of 512. If both checks pass, the user receives a role for internal network access.
4. Finally, once userAccountControl is activated in a service, Access Tracker will present the account status for each authenticating user. The two RADIUS requests below depict a user with account status 512 – Enabled.
This article does a great job of summarizing this process already, but I wanted to describe a few caveats and explain the formatting of the script more clearly:
Here is an example config of this with all of the required components:
config system automation-action
    edit "AutomatedConfigBackup"
        unset script
        set script "execute backup config ftp \"/Fortinet_Backups/FortigateBackup.conf\" 10.10.10.10 \"Domain\\UserHere\" PasswordHere"
    next
end
config system automation-trigger
edit "AutomatedConfigBackup"
set trigger-type scheduled
set trigger-hour 22
set trigger-minute 58
next
end
config system automation-stitch
    edit "AutomatedConfigBackup_FTP"
        set trigger "AutomatedConfigBackup"
        config actions
            edit 1
                set action "AutomatedConfigBackup"
                set required enable
            next
        end
    next
end
All of this is easy enough to follow along, except for the format of the backup script and command itself. Let’s analyze this further to see why it is formatted as such:
set script "execute backup config ftp \"/Fortinet_Backups/FortigateBackup.conf\" 10.10.10.10 \"Domain\\UserHere\" PasswordHere"
Because the entire script is itself wrapped in quotes for the “set script” command, the inner quotes around the path and the username must be escaped with a backslash (\") so FortiOS stores them exactly as written.
Quotes around the path are required for the command to work. Use the forward slash (/) in the FTP path, and a backslash for domain\user (written as \\ inside the quoted script, since the backslash itself must also be escaped).
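After FortiOS processes the escapes, the command the automation action actually executes is (using the example values from above):

```text
execute backup config ftp "/Fortinet_Backups/FortigateBackup.conf" 10.10.10.10 "Domain\UserHere" PasswordHere
```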
From the documentation:
Special characters
The following characters cannot be used in most CLI commands: <, >, (, ), #, ‘, and “
If one of those characters, or a space, needs to be entered as part of a string, it can be entered by using a special command, enclosing the entire string in quotes, or preceding it with an escape character (backslash, \).
Stacking Aruba CX 6300 switches allows you to manage a group of switches as a single entity, increasing network availability and simplifying management. Aruba calls stacking VSF or Virtual Switching Framework.
Prerequisites:
Two or more Aruba CX 6300 switches
Appropriate stacking cables
Refer to https://www.arubanetworks.com/techdocs/AOS-CX/10.05/HTML/5200-7324/index.html#book.html for more information.
Steps:
First, you’ll want to ensure you have a means to console into the switch, either with a USB-C cable or via the management port using its default DHCP client. The console port is USB-C only, so you’ll need that cable. The console chipset appears to be built into the switch itself and should be identified automatically by your operating system, after which you can open the port in SecureCRT or whatever terminal emulator you use.
Next, you will want to ensure all switches in the stack are running the same version of code to begin with. You can upgrade the switches once they are stacked to speed up the upgrade process.
Wire the stacking cables between switches, starting with port 50 on the top (master) switch to port 49 on the next switch, and repeat that pattern for all switches in the stack.
From the Master switch run:
conf t
vsf start-auto-stacking
The stack should form and the member switches will reboot.
Wait for switches to fully come back up.
Verify the stack using this command: show vsf
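Auto-stacking maps the wiring to configuration for you, but for reference, the equivalent manual VSF configuration looks roughly like the sketch below. The member numbers and link ports are assumptions based on the wiring pattern above, and the syntax can vary by AOS-CX release:

```text
conf t
vsf member 1
    link 1 1/1/50
vsf member 2
    link 1 1/1/49
```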
Check the orange plastic labels that slide out of the front panel of the switch to ensure the switch members are in proper order by verifying the MAC address of each switch in the stack.
Next, the switches can be upgraded from the Web GUI by using the management IP of the stack.