Blog

This week I had several requests along the lines of "can we control our very special SAP app with MDM?"

SAP once had their own MDM called SAP Afaria, and the fact that they dropped it does not change the fact that they remain quite diligent about supporting MDM as a whole in their apps.

You can't always find it easily in their docs, but usually it works. A few things I found myself searching for:


This week I had a "good old days" experience with a Horizon deployment at a customer site. For the last couple of years all I did were AirWatch pilots, so it was refreshing to check out what's new in the latest Horizon myself. As always, it all starts with sending deployment requirements to the customer, and usually they say "we prepared everything" and in reality ignore 95% of what you asked them to do upfront.

This time I had a more or less responsible customer, but trouble came from the "golden image" they promised to prepare for the pilot. I did not think to double-check the distribution they used, and we installed the Horizon agent without a second thought. And after a reboot Windows just crashed into a blue screen with total corruption of its boot procedure.

- Where did you take this image from, guys?
- Well, we downloaded it from somewhere, but it's Windows 10, right? It's a brand new thing, and we even patched it!

Well, it turned out the "brand new" Windows 10 was actually build 1511. And patching it did not help: that build and Horizon just mutually don't like each other. So a note for the future - besides all the other stuff, always check the Win10 build the customer used for their golden image. VMware has a KB on this right here - https://kb.vmware.com/s/article/2149393
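If you want a quick sanity check of the build right on the golden image before installing the agent, something along these lines works (a minimal sketch: it just reads the release and build values Windows 10 keeps in the registry, the same information winver shows):

# Quick check of the Windows 10 release/build on a golden image.
# Sketch only: reads the version values Windows 10 keeps in the registry.
import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
)

try:
    release, _ = winreg.QueryValueEx(key, "ReleaseId")   # e.g. "1511", "1709"
except FileNotFoundError:
    release = "pre-1511"                                 # the value only exists from 1511 onward
build, _ = winreg.QueryValueEx(key, "CurrentBuild")      # e.g. "10586" for 1511

print(f"Windows 10 release {release}, build {build}")
if release == "1511":
    print("Careful: 1511 is the build that did not get along with the Horizon agent")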

Happy deploying!

In short - yes, you do!

But what's going on here anyway?

We are trying to notify a mail client that there is fresh new email waiting for it on our on-prem Exchange. To do this, we need to send a PUSH message to the device using a platform vendor cloud (APNs for Apple, FCM for Google). MS Exchange cannot send PUSH messages itself, so VMware has built a special server for this - ENSv2. It goes to Exchange impersonating the email user, looks for new email, and then sends notifications that there is something waiting. So it can send PUSH messages into APNs/FCM directly, right? Potentially it can, but then you would have to register every single instance of ENSv2 in APNs/FCM. This is not convenient, since you can have a cluster of ENS servers, you can also move them around etc., and every time you would have to re-register them. To make this more convenient, VMware has built its own cloud service - the Cloud Notification Service (CNS) - which collects all messages from all ENS servers around the world and then sends them to APNs/FCM itself, handling the registration on its own. I show all this on a schema in the KB page.
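To make the flow easier to picture before looking at the schema, here is a rough conceptual sketch in Python of one polling cycle. Everything in it is a made-up stand-in for illustration - the real ENSv2 is a Windows service talking EWS and HTTPS, not this little loop:

# Conceptual sketch of one ENSv2 notification cycle (illustration only;
# all function names below are hypothetical, not real ENS/CNS APIs).

def new_mail_arrived(exchange_url, user):
    """ENSv2 impersonates the mail user and asks Exchange whether new mail is waiting."""
    ...  # EWS request made on behalf of the user

def send_to_cns(cns_url, device_token, payload):
    """ENSv2 hands the notification to VMware's cloud (CNS); CNS is the only component
    registered with APNs/FCM and forwards the actual PUSH message from there."""
    ...  # HTTPS POST to the CNS endpoint

def polling_cycle(exchange_url, cns_url, subscriptions):
    for user, device_token in subscriptions:
        if new_mail_arrived(exchange_url, user):
            # ENSv2 never talks to APNs/FCM directly, so individual ENS servers
            # do not need their own APNs/FCM registration - only CNS does.
            send_to_cns(cns_url, device_token, {"type": "new-mail"})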

To connect ENSv2 to CNS, you need a CNS certificate. It is a single generic cert; you do not have to generate one for every ENS. In fact, you just download it from my.workspaceone.com. But for a PoC you can actually take it from the KB page: it's available in the lower right section of the page, below the Important Tools section. It's the same file you get from the official portal; I try to check and grab a fresh version of it when one becomes available, once every several years.

The thing most people stumble on (and the reason I had several calls last week) is that the official doc does not seem to clearly state why you also need to open a support ticket. The reason is simple protection of the CNS: VMware does not want this service to be DDoS-ed, spoofed or run down in any other way - that would stop mail notifications for all clients around the globe. So they have some kind of firewall there, allowing only a whitelist of customer ENS servers to get through. To be registered on this list, you do have to raise a support ticket, tell them who you are, and send them the ENS certificate the console generates in the process of configuring ENSv2.

So that's all. And in case things get ugly, check out the ENSv2 troubleshooting page!

AirWatch applications and the SDK check the system for compromised (jailbreak/root) status. When they detect symptoms that the device has been compromised, this status is transmitted to the Console, and an action is taken. Or not: there is a switch tucked away in the depths of the settings which allows AirWatch to ignore compromised devices.

It is in Settings → Apps → Settings and Policies → Security Policies.


Usually this option is enabled, and in a normal situation (and in a production environment!) it should stay enabled. But several times I have encountered situations where some strange rugged devices from China needed to be tested, and AirWatch reported outright that they were "compromised" and did not even allow enrollment. With this switch set to DISABLED, you can enroll a compromised device and manage it as normal. The only downside is that this option applies to all devices in the Organization Group, so the best choice is to have a separate test group for such devices.

Let's Encrypt free public certificates are cool, but renewing them every 3 months can be a pain. There are several ways to automate this, and the latest I discovered is to outsource the procedure: it turns out there is a DNS provider, https://porkbun.com/, that does it for you. Just download the brand new certificates every 3 months and put them where they belong, without additional fuss. Sometimes it takes some conversion to install the certificates properly. I have written a couple of articles to find the conversion commands fast: one for IDM certificates, and a paragraph on securing an Nginx reverse proxy with certificates on my other portal.
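As an illustration of the kind of conversion I mean: IIS and several VMware components want a single PKCS#12 (.pfx) bundle rather than the separate PEM files Let's Encrypt hands out. A minimal sketch, assuming the usual privkey.pem/fullchain.pem file names and an openssl binary available on the machine:

# Wrap the Let's Encrypt PEM pair into a single .pfx for import into IIS/IDM.
# Assumes the standard Let's Encrypt file names and openssl in PATH.
import subprocess

subprocess.run(
    [
        "openssl", "pkcs12", "-export",
        "-inkey", "privkey.pem",      # private key from Let's Encrypt
        "-in", "fullchain.pem",       # leaf certificate plus intermediate chain
        "-out", "bundle.pfx",         # resulting bundle to import
        "-passout", "pass:ChangeMe",  # protect the .pfx with a password
    ],
    check=True,
)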

Recently I spent a day troubleshooting a not-so-common piece of AirWatch together with a customer - the Remote File Storage, or RFS. And then the news of Personal Content going End-Of-Life hit us. So let's look at the topic in detail. There are currently (until the 3rd of January 2020 at least) 2 kinds of storage in AirWatch for the 2 kinds of documents:

  • For corporate documents, which the company may want to protect while distributing them to workers' phones, we have the VMware Content client. It is a client for corporate portals like SharePoint, Alfresco or some other portal using the WebDAV or CMIS protocols, or even simple file shares. VMware Content brings these documents to the mobile device and allows them to be opened and read there (see Content Locker file support). The documents themselves are stored on the original portal, and VMware Content temporarily downloads and caches just a subset of them on the mobile device;
  • For workers' personal documents there is another tool - Content Locker Sync, which allows the user to upload any documents to AirWatch and see them synced to all enrolled devices. The documents themselves are stored in the SQL database on the AirWatch server. If there are too many such documents, a separate storage solution, the RFS, is needed.

It all looks straightforward until we get to macOS and Windows notebooks. For Windows we have the corporate VMware Content client, but it simply dumps the documents into a folder on the HDD. No built-in protection is used. Why? Because we can (and should) configure BitLocker by central policy to protect all the contents of the Windows machine's HDD. So it is not the VMware Content client that serves as the "protected container" - it is the Windows machine itself!

For macOS the situation is the strangest of all: historically there is only a Content Locker Sync client for this OS. It seems macOS users do not need corporate content. macOS also has encryption at the HDD partition level (FileVault), controlled by central policy, but out of the box we can only sync personal content there.

What clients often try to do is merge VMware Content and Content Locker Sync together - for instance, point them to the same network file share (deploy RFS, point it at a network file share, then install VMware Content and point it at the same share). This will not work at all, because the RFS technology uses block storage to hold lots of files. So what the user will see on the file share are strange files with long IDs - the representation of block data on an ordinary file system.

The viable solution to this missing functionality is to use native portal clients. Examples: for a SharePoint portal there is the Teams client on Windows and macOS; for a network file share there is the old-school method of a WebDAV wrapper. On macOS and Windows we can use these to access corporate data with all the native restrictions configured at the portal level, and then propagate these files further to mobile devices using the VMware Content client.

So currently, until the VMware/AirWatch developers present something new, this seems like the best answer: use corporate portal or repository clients to sync documents with macOS and Windows while securing the HDD with a central crypto policy, and use the VMware Content client for the mobile devices. See the list of Content Locker integration portals.

Podcast stream of news

There are lots of ways to bring you fresh news. Together with my colleague and VMware architect Alex Malashin, we decided to try recording podcasts on the latest VMware Cloud & EUC news and headlines (currently in Russian) and send them directly to your earbuds! The next step is to invite guests to talk about cool things happening in the industry.

Follow our new project on https://digital-work.space/podcast

A quick question flew in: when adding user groups as local admins of enrolled MacBooks during macOS domain join, how should such groups be listed in the AirWatch console? (Profile → macOS → Device Profile → Directory → Administrative → Group Names).

The official doc does not clarify this.

Answer: group names should just be listed by their simple name, so example.local/Restricted Objects/Roles/Role-EndUserAdmins becomes just role-enduseradmins - AirWatch will find the group itself by recursive search. If several groups need to be listed, separate them with commas.

Samsung KNOX devices

Usually, when someone asks me which Samsung devices are good enough for some Restriction Profile, I need to do some hard googling to find the SAFE (KNOX) version on the Samsung support site. So I decided to put this link here to save you time.

Devices by KNOX versions: https://www.samsungknox.com/en/knox-platform/supported-devices

Lately I had a heated dialogue with the Citrix tech guys at one of our clients about publishing their apps. It seems there are two ways to integrate Citrix with Identity Manager:

  1. Use StoreFront - the Citrix web portal in front of the XenApp farm. IDM can impersonate a user and go to StoreFront, which applies its access policy and presents what the user is allowed to see. Citrix engineers see this method as more secure: it leaves their policies alone and keeps StoreFront in control of user access. So this is the default way they recommend doing the integration;
  2. Use direct PowerShell access from IDM into the Citrix broker. This gives IDM admin access to the Citrix XenApp farm, where it collects the full app list. IDM then presents a subset of this list to users based on its own access policy. In this method, IDM acts as the central corporate portal and basically circumvents StoreFront, making it obsolete.

So it comes down to a political decision: who takes responsibility for either keeping Citrix in a separate silo or fully integrating it into a central system. Either way, it is a tough choice!




Antivirus Kills AirWatch

Last year I deployed a demo stand of AirWatch at one of my customers, and all went well until they called a week later and said AirWatch had dropped dead. Well, life is life, nothing too unusual about AirWatch dropping dead from time to time, so I asked them to send in some logs, which they did not do. The customer was convinced it was something with their DB server, so they pleaded with me to come and just re-deploy everything on a fresh DB.

So I came by today and opened the logs first, and saw AWCM complaining like this:

Exception Details: java.io.IOException: An existing connection was forcibly closed by the remote host

This did not look like a database issue, but the AWCM status/stats pages were working correctly. So I opened the Cloud Connector VM and decided to reinstall. And when I went to Control Panel → Uninstall, it hit me right in the eye: there was an antivirus in the list. It was not there when I installed AirWatch, I was sure of it!

"Why, what's the problem?" - customer admins asked? - "Antivirus gets installed automatically by domain policy on all Windows machines, servers included". When I deployed AirWatch, the VMs were freshly-made, and the policy did not have time to work. But it did in a week of time. And when the anti-virus got there, first thing it saw was suspicious network activity of AirWatch, and blocked it. Then it saw a suspicious admin console on the AirWatch Admin Console role server - and blocked it. 

Needless to say, when the antivirus was uninstalled for the sake of the demo, AirWatch came back to life the same instant, and that was the challenge solved.

Moral of the story: when doing a test deployment of AirWatch or Horizon in a semi-production environment, always ask what kind of domain policies affect the Windows servers, and try to exclude the VMs from all infrastructure security stuff during the demo phase. Better yet - run demo environments in an isolated demo zone, and may the Digital Force always be with you!

There is a simple setup with IDM which many customer admins like to implement in proof-of-concept projects and later migrate directly into production. In this setup, IDM is deployed in the DMZ and protected by some load balancer: F5 BIG-IP, Citrix NetScaler, KEMP ADC or whatever. Let's take F5 as an example:


The challenge here is to configure the load balancing appliance correctly, which some admins fail to do. The configuration often used is a simple "SSL pass-through". What this leads to is unauthenticated users getting access to the API endpoints of IDM. IDM responds to these requests and reveals a lot of inside information, which may compromise security. Here are some examples of such links:

https://<IDM External URL>/SAAS/jersey/manager/api/
https://<IDM External URL>/SAAS/jersey/manager/api/system/health/calculators
https://<IDM External URL>/SAAS/jersey/manager/api/system/health
https://<IDM External URL>/SAAS/jersey/manager/api/messaging/health
https://<IDM External URL>/SAAS/API/1.0/REST/system/health/calculators
https://<IDM External URL>/SAAS/API/1.0/REST/system/health
https://<IDM External URL>/AUDIT/API/1.0/REST/system/health

In the worst case scenario, the admin console will also be available externally (/SAAS/admin/* link).
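A quick way to verify whether your balancer really filters these paths is to probe them from an outside network. A minimal sketch (uses the third-party requests library; idm.example.com is a placeholder for your external IDM URL):

# Probe the IDM endpoints from outside the perimeter: anything answering 200
# without authentication should be filtered on the load balancer.
import requests

EXTERNAL_URL = "https://idm.example.com"  # placeholder: your IDM external URL
PATHS = [
    "/SAAS/jersey/manager/api/",
    "/SAAS/jersey/manager/api/system/health/calculators",
    "/SAAS/jersey/manager/api/system/health",
    "/SAAS/jersey/manager/api/messaging/health",
    "/SAAS/API/1.0/REST/system/health/calculators",
    "/SAAS/API/1.0/REST/system/health",
    "/AUDIT/API/1.0/REST/system/health",
    "/SAAS/admin/",
]

for path in PATHS:
    r = requests.get(EXTERNAL_URL + path, timeout=10, allow_redirects=False)
    verdict = "EXPOSED" if r.status_code == 200 else "filtered/other"
    print(f"{r.status_code:>3}  {verdict:<15} {path}")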

To mitigate this, L7 URL filtering (a whitelist of externally allowed URLs) must be configured on the load balancing appliance. A second option is to create two VIP interfaces on the load balancing appliance and close all the vulnerable URLs to external access. Here is an example schema:





So basically I said it all in the title: recently I had several questions from customers and partners about ENS needing a public certificate. The official docs are very vague on this topic, and I sincerely thought ENS does not need one, since logically ENS is supposed to take new email notifications from Exchange and send them via the Apple cloud as a PUSH notification to the mobile device.

In reality this is not entirely true: before ENS starts sending notifications, it needs to register the device as a subscriber to future notifications. ENS cannot send the subscription data itself in a PUSH message. What it can send is a request for the device to come visit it and register a subscription. So the device receives a PUSH message and goes to ENS to make the subscription. And since it goes to ENS, ENS has to be published to the Internet, and it has to have a valid public certificate installed in its IIS bindings.
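A quick way to check that the certificate you bound in IIS really validates as a public trusted one (the way a device out on the Internet sees it) is a plain TLS handshake with the default trust store. A minimal sketch; ens.example.com stands in for your published ENS hostname:

# TLS handshake against the ENS external name using the system trust store.
# It fails on a self-signed, expired or mismatched certificate - exactly
# the cases a mobile device would also reject.
import socket
import ssl

host = "ens.example.com"  # placeholder: your published ENSv2 hostname
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        issuer = dict(item[0] for item in cert["issuer"])
        print("Handshake OK, issued by:", issuer.get("organizationName"))
        print("Valid until:", cert["notAfter"])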

Read more on ENS troubleshooting in my KB article and tread lightly in the MDM infrastructure!


Some time ago I had an issue on my hands: a client had installed AirWatch in production, and next they started semi-production usage of an Identity Manager (IDM) 3-node cluster together with Horizon View desktops. They are fans of strictness everywhere, with users entering not just their account names in login forms, but full-blown UserPrincipalNames (UPNs). So when they installed IDM, at the Directory connection stage they requested that the 'username' attribute be changed to the UPN. All worked fine until they tried to integrate IDM with AirWatch. This detail broke the integration, and I had to do something about it.

Our PSO told me there is no legal way to do this, just reinstall IDM from scratch. But I got curious: what about 'illegal' (undocumented) ways?

So the root of the problem is that the IDM Console does not let you delete stuff: the Active Directory cannot be removed because an active Connector is attached to it, the Connector cannot be removed because an Identity Provider (IdP) is associated with it (the Built-in IdP), and the Built-in IdP cannot be dissociated because it is locked to the Connector - and we are back where we started.

To break this circle, I opened the 'saas' database. What I discovered is that the IDM DB is very neat and logical compared to AirWatch's. It follows the console controls very closely: if there is a tab called 'Connectors' in the console, you can be sure to find a corresponding saas.Connector table, the Directories tab has a saas.Directories table, and so on.

So I searched for a 'weak link' in the chain of dependencies and found it: the place where the IdP is associated with the Connector. All I had to do was delete this row in a small table, saas.IdpJoinConn:

SELECT * FROM saas.saas.IdpJoinConn;
 
-- Check the ID of your IdP before doing this!!
 
DELETE FROM saas.saas.IdpJoinConn
WHERE idIdentityProvider = N; -- set N to the ID of your IdP

Once this row was gone, the Connector could be deleted, then the Directory, and the puzzle unraveled itself, saving a lot of time that would have gone into reinstalling the IDM cluster from scratch. As always, I documented my investigation in great detail in a KB page.

I hope this story proves interesting to you and inspires you to check out the IDM database. So stay tuned and let the Digital God light your path!


Several times while deploying AirWatch in a PoC, I have had security guys come and request an audit, especially before opening network ports to the outside. Usually they have a security scanner with them, and it finds things on the IIS server of the AirWatch Directory Services server role. I collect the details on AirWatch hardening in the Knowledge Base. One of the fun and nasty things happened when the security team came to the IT project owners and demanded hardening before deploying AirWatch at all. What they did was turn off cipher suites on a Windows golden image with IIS, and then they cloned that image for the different AirWatch servers.

Nobody mentioned that such hardening had been done when the client provided the Windows virtual machines, and it took our PSO a long time to figure out why AirWatch AWCM refused to receive certificates from the MS CA. They simply could not agree on a cipher protocol, because all the available variants had been turned off by security.

Another incident happened when the security team audited Identity Manager. In the process they requested the sshuser login/password to check for SSH exploits. As a result, they scrambled the catalina.options file of IDM and broke the web server: all web interfaces on IDM went down, and we had lots of fun trying to figure out why. We solved it by resetting the options file.

So what lesson does this teach us? Be very vigilant when a client decides to do an audit. Stay very close and request a log of every action they take on your AirWatch deployment.