Horizon Security Audit

Hello! A long summer has passed, and it's time to blog some more. One of our customers decided to run a security audit of Horizon View. So we sat down with their specialist and started to go over how the Connection Server works with users. And he had a surprise for me: I had never thought about the details of user management and believed it all came from Active Directory. I was wrong - the Connection Server does have several "cached" accounts. One of them is the account used to work with vCenter, which can be a local non-AD vCenter user account. Another is a local SQL account used to store the Horizon Events logs. So thanks to my EUC colleagues and Sarah Swatman, who pointed out these things inside the AD LDS database that accompanies every Connection Server instance. I wrote up the details of connecting to the local AD LDS DB in a KB page.

Once you are there, navigate to CN=<uuid>,OU=VirtualCenter,OU=Properties, right-click the object and select Properties to find the pae-VcUserPassword attribute.
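If you prefer the command line, the same data can be pulled with a quick PowerShell query against AD LDS. A minimal sketch, assuming the RSAT ActiveDirectory module is available on the Connection Server (DC=vdi,DC=vmware,DC=int is the default Horizon AD LDS naming context):

# List the VirtualCenter entries together with the cached vCenter account attribute
Get-ADObject -Server 'localhost:389' `
    -SearchBase 'OU=VirtualCenter,OU=Properties,DC=vdi,DC=vmware,DC=int' `
    -LDAPFilter '(objectClass=*)' -Properties pae-VcUserPassword |
    Select-Object Name, pae-VcUserPassword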

The Cluster Master Secret, also called the Key Vault Master Key, is currently used to protect secret/sensitive data in LDAP. Encrypted values always have this format: {keyname-algname:algversion}, and the encryption algorithm in use is AES.

Back in the article, I also pointed out where Console admin roles are attached to Active Directory users. So in case you get similar questions from your security team, now you know where to find everything!

 

A common issue with every AirWatch installation is the AWCM external/internal certificate. The logic is simple: every connection to AWCM should be protected by a valid (not self-signed!) certificate. But on the internal side we have a local FQDN, which usually differs from the name under which the AirWatch "Device Services + AWCM" host (also called the Front-End Server, or FE) is published on the external network.

So the formal way is to issue a separate certificate for the internal connections, which will be valid for all other AirWatch components (mainly the Console and the Connectors). But everybody wants to avoid all this certificate mayhem. So the answer is to send everybody to the external URL: we already have a valid certificate there, so let's reuse it. A simple and brutal way is to add an "<internal ip> <external dns name>" line to the hosts file on every affiliated server - see the one-liner below. A more intelligent way, though, is split-DNS.
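For reference, the hosts-file variant as a one-liner (the IP is a placeholder for your internal FE address; run elevated on each affiliated server):

# Map the external name to the internal IP in the local hosts file
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.10 mdm.company.com"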

Usually we have a Windows Domain Controller acting as DNS, so we will use it to create a separate zone for mdm.company.com that resolves the externally signed name to the internal IP for all internal servers.

Create split-DNS for single hosts

Since DNS is organized as a hierarchy, you can make the internal DNS server authoritative for just a sub-tree of the domain - mdm.company.com. Queries for the parent zone company.com will still be resolved normally through its forwarders or the root servers. So instead of creating a zone for the whole namespace, create a zone for the single host:

  • Add a new primary zone;
  • Don’t allow dynamic updates to the zone;
  • Create a new A/AAAA record for the host.

When creating an A/AAAA record:

  • Leave the name field empty;
  • Don’t create a PTR record;
  • Point it to the internal IP of the host.
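The same steps can be scripted with the DnsServer PowerShell module on the domain controller. A minimal sketch, with placeholder zone name and IP:

# Create the single-host zone and forbid dynamic updates
Add-DnsServerPrimaryZone -Name 'mdm.company.com' -ReplicationScope 'Domain' -DynamicUpdate 'None'
# "@" stands for the zone apex, i.e. the empty record name = mdm.company.com itself
Add-DnsServerResourceRecordA -ZoneName 'mdm.company.com' -Name '@' -IPv4Address '10.0.0.10'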

Test the record with nslookup:

nslookup mdm.company.com

In case AWCM is still not reachable, check this article.

Happy deploying!

The current situation with the virus and people evacuating to work remotely from home has brought a renaissance to the world of VDI. This gave an opportunity for some new companies to try to reinvent the wheel by taking what is available in the open-source community - the KVM hypervisor, the SPICE protocol, an OpenStack/OpenNebula orchestrator, some scotch tape and the legendary free beer - and mixing it all together to create the "new breed of VDI".

Does it work? - It does (if constructed correctly). But it comes with a price. OpenStack is not the easiest thing to configure; on several occasions I have witnessed companies trying to deploy it, with mediocre results at best. And then there are the specific functions businesses often require. Niche things like 3D graphics in VDI - not an option with the SPICE protocol. Other niche but still important things: using physical PCs for VDI. This is a complex topic in Horizon, but even more so with KVM.

Then comes the wide topic of peripheral devices: printers, scanners, tokens, etc. This is quite limited in the original SPICE protocol. SPICE does not allow SSO from the VDI client to the VDI desktop. Most device redirection is raw bi-directional USB pass-through, which cuts peripheral devices off from the VDI client (you cannot share a device in this mode - it is "taken" by the VDI desktop as the single owner) and generates additional network traffic (lots of it! Especially printers and scanners transmitting raw text files and TIFF images). The lack of smartcard support rules out features such as two-factor authentication to the VDI desktop.

I could go on and on with the details, but it is needless. The VDI market has long been stable, and entering it successfully is not easy, to say the least. So weigh carefully what your specific requirements are and which features you need. If you start developing even a few custom features on top of a community-based VDI, it will usually end up costing more than a commercial VDI, and support will rest mainly on the shoulders of the small group of developers and admins who deployed the solution - a business risk of its own.


This week I had several requests along the lines of "can we control our very special SAP app with MDM?"

So SAP once had their own MDM called SAP Afaria, and the fact that they dropped it does not change the fact that they are quite diligent about supporting MDM as a whole in their apps.

You can't always find it easily in their docs, but usually it works. A few things I found myself searching for:


This week I had some "good old days" experience with a Horizon deployment at a customer site. For the last couple of years all I did was AirWatch pilots, so it was refreshing to check out what's new in the latest Horizon myself. As always, it all starts with sending deployment requirements to the customer; usually they say "we prepared everything" and in reality ignore 95% of what you asked them to do upfront.

This time I had a more or less responsible customer, but trouble came from the "golden image" they had promised to prepare for the pilot. I did not think to double-check the distribution they used, and we installed the Horizon Agent without a second thought. After a reboot, Windows crashed into a blue screen with total corruption of its boot procedure.

- Where did you get this image from, guys?
- Well, we downloaded it from somewhere, but it's Windows 10, right? It's a brand new thing, and we even patched it!

Well, it turned out the "brand new" Windows 10 was actually build 1511. Patching did not help - it and Horizon simply don't get along. So a note for the future: besides all the other stuff, always check which Win10 build the customer used for their gold image. VMware has a KB on this right here - https://kb.vmware.com/s/article/2149393
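A quick way to check the build before installing the agent is this one-liner on the candidate gold image (the ReleaseId value exists on builds from 1511 onward):

# Prints the Windows 10 release, e.g. 1511, 1607, 1709
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').ReleaseId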

Happy deploying!

In short - yes, you do!

But what's going on here anyway?

We are trying to notify a mail client from our on-prem Exchange that there is fresh new email to pick up. To do this, we need to send a PUSH message to the device through the platform vendor's cloud (APNs for Apple, FCM for Google). MS Exchange cannot send PUSH messages itself, so VMware built a special server for this - ENSv2. It goes to Exchange impersonating the email user, looks for new email, and then sends a notification that there is something out there. So it can send PUSH messages to APNs/FCM directly, right? Potentially it can, but then you would have to register every single ENSv2 instance in APNs/FCM. This is not convenient: you can have a cluster of ENS servers, you can move them around, and every time you would have to re-register them. To make this more convenient, VMware built its own cloud service - the Cloud Notification Service (CNS) - which collects messages from all ENS servers around the world and then sends them to APNs/FCM itself, handling the registration on its own. I show all this on a schema in the KB page.

To connect ENSv2 to CNS, you need a CNS certificate. It is a single generic cert - you do not have to generate one for every ENS. Normally you just download it from my.workspaceone.com. For a PoC you can actually take it from the KB page; it's available in the lower right section of the page, below the Important Tools section. It's the same file you get from the official portal - I try to grab a fresh version whenever one becomes available, once every few years.

The thing most people stumble on (and the reason I had several calls last week) is that the official doc does not clearly state why you also need to open a support ticket. The reason is simple protection of the CNS: VMware does not want this service to be DDoS-ed, spoofed or brought down in any other way - that would stop mail notifications for all clients around the globe. So they have some kind of firewall there, allowing only a whitelist of customer ENS servers through. To get on this list, you have to raise a support ticket, tell them who you are and send them the ENS certificate that the console generates during ENSv2 configuration.

So that's all. And in case things get ugly, check out the ENSv2 troubleshooting page!

AirWatch applications and the SDK check the system for compromised (jailbreak/root) status. When they detect by some symptoms that the device has been compromised, this status is transmitted to the Console, and an action is taken. Or not: there is a switch tucked away in the depths of the settings which allows AirWatch to ignore compromised devices.

It is in Settings → Apps → Settings and Policies → Security Policies.


Usually this option is enabled, and in a normal situation (and in a production environment!) it should stay enabled. But I have encountered several situations when some strange rugged devices from China needed to be tested, and AirWatch reported outright that they were "compromised" and did not even allow enrollment. With this switch DISABLED, you can enroll a compromised device and manage it as normal. The only downside is that the option applies to all devices in the Organization Group, so the best choice is to have a separate Test group for such devices.

Let's Encrypt public certificates (issued by the nonprofit ISRG) are cool, but renewing them every 3 months can be a pain. There are several ways to automate this, and the latest I discovered is to outsource the procedure: it turns out there is a DNS provider, https://porkbun.com/, that does it for you. Just download the brand-new certificates every 3 months and insert them where they belong, without additional fuss. Sometimes it takes some conversion to insert the certificates properly - see the example below. I have written a couple of articles to find the conversion commands fast: one for IDM certificates, and a paragraph on securing an Nginx reverse proxy with certificates on my other portal.
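As a typical illustration of such a conversion (file names are placeholders): if a component wants a single PFX bundle instead of the separate PEM files Let's Encrypt issues, openssl can produce one in a single command:

# Combine the private key, the certificate and the chain into one PFX bundle
openssl pkcs12 -export -inkey privkey.pem -in cert.pem -certfile chain.pem -out bundle.pfx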

Recently I spent a day troubleshooting a not-so-common piece of AirWatch together with a customer - the Remote File Storage, or RFS. And then the news of Personal Content going End-Of-Life hit us. So let's look at the topic in detail. There are currently (until the 3rd of January 2020, at least) two kinds of storage in AirWatch for the two kinds of documents:

  • For corporate documents, which the company may want to protect while distributing them to workers' phones, we have the VMware Content client. It is a client for corporate portals such as SharePoint, Alfresco or any other portal speaking the WebDAV or CMIS protocols, or even simple file shares. VMware Content brings these documents to the mobile device and allows them to be opened and read there (see Content Locker file support). The documents themselves stay on the original portal; VMware Content temporarily downloads and caches just a subset of them on the mobile device;
  • For workers' personal documents, there is another tool - Content Locker Sync - which allows the user to upload any documents to AirWatch and see them synced to all enrolled devices. The documents themselves are stored in the SQL database on the AirWatch server. If there are too many such documents, a separate storage solution - the RFS - is needed.

It all looks straightforward until we get to macOS and Windows notebooks. For Windows, we have the corporate VMware Content client, but it simply dumps the documents into a folder on the HDD. No inner protection used. Why? Because we can (and should) configure BitLocker by central policy to protect the entire contents of the Windows machine's HDD. So it is not the VMware Content client that serves as the "protected container" - it is the Windows machine itself!

For macOS the situation is the strangest of all: historically there is only a Content Locker Sync client for this OS. It seems macOS users do not need corporate content. macOS also has partition-level cryptography on the HDD, controlled by central policy, but out of the box we can only sync personal content there.

What clients often try to do is merge VMware Content and Content Locker Sync - point them to the same network file share, for instance (deploy RFS, point it at a network file share, then install VMware Content and point it at the same share). This will not work at all, because the RFS technology stores its many files as blocks. What the user will see on the file share are strange files with long IDs - the representation of block data on an object-level file system.

The viable solution to this missing functionality is to use native portal clients. Examples: for a SharePoint portal there is a Teams client on Windows and macOS; for a network file share there is the old-school method of a WebDAV envelope. On macOS and Windows we can use these to access corporate data with all the native restrictions configured at the portal level, and then propagate these files further to mobile devices using the VMware Content client.

So currently, until the VMware/AirWatch developers present something new, this seems like the best answer: use corporate portal or repository clients to sync documents with macOS and Windows while securing the HDD with a central crypto policy, and use the VMware Content client for mobile devices. See the list of Content Locker integration portals.

Podcast stream of news

There are lots of ways to deliver fresh news. Together with my colleague & VMware architect Alex Malashin, we decided to try recording podcasts on the latest VMware Cloud & EUC news & headlines (currently in Russian) and send them directly to your earbuds! The next step is to invite guests to talk about the cool things happening in the industry.

Follow our new project on https://digital-work.space/podcast

A quick question flew in: when adding user groups as local admins of enrolled MacBooks during macOS domain join, how should such groups be listed in the AirWatch console? (Profile → macOS → Device Profile → Directory → Administrative → Group Names).

Official doc does not clarify this.

Answer: groups should be listed by simple name: example.local/Restricted Objects/Roles/Role-EndUserAdmins becomes just role-enduseradmins - AirWatch will find the group itself via a recursive search. If several groups need to be listed, separate them with commas.

Samsung KNOX devices

Usually, when someone asks me which Samsung devices are good enough for some Restriction Profile, I have to do some hard googling to find the SAFE (KNOX) version on the Samsung support site. So I decided to put the link here to save you time.

Devices by KNOX versions: https://www.samsungknox.com/en/knox-platform/supported-devices

Lately I had a heated dialogue with the Citrix tech guys at one of our clients about publishing their apps. It seems there are two ways to integrate Citrix with Identity Manager:

  1. Use StoreFront - the Citrix web portal in front of the XenApp farm. IDM can impersonate a user, go to StoreFront under its access policy, and StoreFront will present what that user is allowed to see. Citrix engineers see this method as more secure: it leaves their policies alone and keeps StoreFront in control of user access. So this is the default way they recommend doing the integration;
  2. Use direct PowerShell access from IDM into the Citrix broker (sketched below). This gives IDM admin access to the Citrix XenApp farm to collect the full app list. IDM then presents a subset of this list to users based on its own access policy. In this method, IDM acts as the central corporate portal and basically circumvents StoreFront, making it obsolete.
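To give a feel for what method 2 builds on, here is a rough sketch of the kind of broker query IDM performs, using the Citrix broker snap-in on a Delivery Controller (illustrative only - the exact calls IDM makes may differ):

# Enumerate the published applications in the XenApp farm
Add-PSSnapin Citrix.Broker.Admin.V2
Get-BrokerApplication | Select-Object Name, PublishedName, Enabled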

So it is a matter of a political decision, and of who takes responsibility for either keeping Citrix in a separate silo or fully integrating it into a central system. Either way, it is a tough choice!




Antivirus Kills AirWatch

Last year I deployed a demo stand of AirWatch at a customer site, and all went well until they called a week later and said AirWatch had dropped dead. Well, life is life - nothing too unusual about AirWatch dropping dead from time to time - so I asked them to send in some logs, which they did not do. The customer was convinced it was something with their DB server, so they pleaded with me to come and just re-deploy everything on a fresh DB.

So I came by today and opened the logs first, and saw AWCM complaining like this:

Exception Details: java.io.IOException: An existing connection was forcibly closed by the remote host

This did not look like a database issue, and the AWCM status/stats pages were working correctly. So I opened the Cloud Connector VM and decided to reinstall. And when I went to Control Panel → Uninstall, it hit me in the eye: there was an antivirus in the list. It was not there when I installed AirWatch - I was sure of it!

"Why, what's the problem?" - customer admins asked? - "Antivirus gets installed automatically by domain policy on all Windows machines, servers included". When I deployed AirWatch, the VMs were freshly-made, and the policy did not have time to work. But it did in a week of time. And when the anti-virus got there, first thing it saw was suspicious network activity of AirWatch, and blocked it. Then it saw a suspicious admin console on the AirWatch Admin Console role server - and blocked it. 

Needless to say, when the antivirus was uninstalled for the sake of the demo, AirWatch came back to life that same instant, and that was the challenge solved.

Moral of the story: when doing a test deployment of AirWatch or Horizon in a semi-production environment, always ask what kind of domain policies affect the servers' Windows OS, and try to exclude the VMs from all the infrastructure security stuff during the demo phase. Better yet, build demo environments in an isolated demo zone - and may the Digital Force always be with you!

There is a simple setup with IDM which many customer admins like to implement in proof-of-concept projects and later migrate directly into production. In this setup, IDM is deployed in the DMZ and protected by some load balancer: F5 BigIP, Citrix NetScaler, KEMP ADC or whatever. Let's take F5 as an example:


The challenge here is to configure the load-balancing appliance correctly, which some admins fail to do. The configuration often used is a simple "SSL pass-through". What this leads to is unauthenticated users getting access to the API endpoints of IDM. IDM responds to these requests and reveals a lot of inside information, which may compromise security. Here are some examples of such links:

https://<IDM External URL>/SAAS/jersey/manager/api/
https://<IDM External URL>/SAAS/jersey/manager/api/system/health/calculators
https://<IDM External URL>/SAAS/jersey/manager/api/system/health
https://<IDM External URL>/SAAS/jersey/manager/api/messaging/health
https://<IDM External URL>/SAAS/API/1.0/REST/system/health/calculators
https://<IDM External URL>/SAAS/API/1.0/REST/system/health
https://<IDM External URL>/AUDIT/API/1.0/REST/system/health

In the worst-case scenario, the admin console will also be available externally (the /SAAS/admin/* link).
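A quick way to verify the exposure (and later the fix) from outside the perimeter - the hostname is a placeholder; with plain SSL pass-through, the health endpoint happily answers unauthenticated requests:

# After mitigation this should be blocked (4xx), not return a JSON health report
Invoke-WebRequest -Uri 'https://idm.company.com/SAAS/jersey/manager/api/system/health' -UseBasicParsing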

To mitigate this, L7 URL filtering - whitelisting the externally reachable URLs - must be configured on the load-balancing appliance. A second option is to create two VIP interfaces on the load-balancing appliance and close all the vulnerable URLs to external access. Here is the example schema: