Let's Encrypt public certificates from the non-profit ISRG are cool, but renewing them every 3 months can be a pain. There are several ways to automate this, and the latest I discovered is to outsource the procedure: it turns out there is a DNS provider, https://porkbun.com/, that does the renewal for you. Just download the brand-new certificates every 3 months and put them where they belong, without additional fuss. Sometimes it takes some conversion to install the certificates properly. I have written a couple of articles to help find the conversion commands fast: one for IDM certificates, and a paragraph on securing an Nginx reverse proxy with certificates on my other portal.
Recently I spent a day troubleshooting a not-so-common piece of AirWatch together with a customer - the Remote File Storage, or RFS. And then the news of Personal Content going End-Of-Life hit us. So let's look at the topic in detail. There are currently (until the 3rd of January 2020, at least) 2 kinds of storage in AirWatch for the 2 kinds of documents:
- For corporate documents, which the company may want to protect while distributing them to workers' phones, we have the VMware Content client. It is a client for corporate portals like SharePoint, Alfresco or other portals using the WebDAV or CMIS protocols, or even simple file shares. VMware Content brings these documents to the mobile device and allows them to be opened and read there (see Content-Locker files support). The documents themselves are stored at the original portal, and VMware Content temporarily downloads and caches just a subset of them on the mobile device;
- For workers' personal documents, there is another tool - Content Locker Sync, which allows the user to upload any documents to AirWatch and see them synced to all enrolled devices. The documents themselves are stored in the SQL database on the AirWatch server. If there are too many such documents, a separate storage solution, the RFS, is needed.
It all looks straightforward, until we get to macOS and Windows notebooks. For Windows, we have the corporate VMware Content client, but it simply dumps the documents into a folder on the HDD. No inner protection is used. Why? Because we can (and should) configure BitLocker by central policy to protect the entire contents of the Windows machine's HDD. So it is not the VMware Content client that serves as the "protected container" - it is the Windows machine itself!
For macOS the situation is the strangest of all: historically there is only a Content Locker Sync client for this OS. It seems macOS users do not need corporate content. macOS also has disk-partition-level encryption controlled by central policy, but out of the box we can only sync personal content there.
What customers often try to do is merge VMware Content and Content Locker Sync together - point them at the same network file share, for instance (deploy RFS, point it at a network file share, then install VMware Content and point it at the same share). This will not work at all, because RFS stores its many files as blocks. So what the user will see on the file share are strange files with long IDs - the presentation of block data on an object-level file system.
The viable solution to this missing functionality is to use native portal clients. Examples: for a SharePoint portal there is a Teams client on Windows and macOS; for a network file share there is the old-school method via a WebDAV envelope. On macOS and Windows we can use these to access corporate data with all the native restrictions configured at the portal level, and then propagate these files further to mobile devices using the VMware Content client.
So currently, until VMware/AirWatch developers present something new, this seems like the best answer: use corporate portal or repository clients to sync documents with macOS and Windows, while securing the HDD with a central crypto policy, and use the VMware Content client for mobile devices. See the list of Content Locker Integration portals.
There are lots of ways to bring fresh news. Together with my colleague and VMware architect Alex Malashin, we decided to try recording podcasts on the latest VMware Cloud & EUC news and headlines (currently in Russian) and send them directly to your earbuds! The next step is to invite guests to talk about the cool things happening in the industry.
Follow our new project on https://digital-work.space/podcast
A quick question flew in: when adding user groups as local admins of enrolled MacBooks during macOS domain join, how should such groups be listed in the AirWatch console? (Profile -> macOS -> Device Profile -> Directory -> Administrative -> Group Names).
The official doc does not clarify this.
Answer: group names should be listed by simple name alone: example.local/Restricted Objects/Roles/Role-EndUserAdmins becomes just role-enduseradmins - AirWatch will find the group itself by recursive search. If there are several groups to list, separate them with commas.
Usually, when someone asks me which Samsung devices are good enough for some Restriction Profile, I have to do some hard googling to find the SAFE (KNOX) version on the Samsung Support Site. So I decided to put this link here to save you time.
Devices by KNOX versions: https://www.samsungknox.com/en/knox-platform/supported-devices
Lately I had a heated dialogue with Citrix tech guys at one of our clients about publishing their apps. It seems there are two ways to integrate Citrix with Identity Manager:
- Use Storefront - the Citrix web portal in front of the XenApp farm. IDM can impersonate a user and go to Storefront, which applies its access policy and presents what the user can see. This method is seen as more secure by Citrix engineers; it leaves their policies alone and keeps Storefront in control of the user access policies. So this is the default way they recommend doing the integration;
- Use direct PowerShell access from IDM into the Citrix broker. This gives IDM admin access into the Citrix XenApp farm to collect the full app list. IDM then presents a subset of this list to users based on its own access policy. In this method, IDM acts as the central corporate portal and basically circumvents Storefront, making it obsolete.
So it is a political decision about who takes responsibility: either keep Citrix in a separate silo, or fully integrate it into a central system. Either way, it is a tough choice!
Last year I deployed a demo stand of AirWatch at one of our customers, and all went well until they called a week later and said AirWatch had dropped dead. Well, life is life, nothing too unusual about AirWatch dropping dead from time to time, so I asked them to send in some logs, which they did not do. The customer was convinced it was something with their DB server, so they pleaded with me to come and just re-deploy everything on a fresh DB.
So I came by today and opened the logs first, and saw AWCM complaining like this:
This did not look like a database issue. But the AWCM status/stats pages were working correctly, so I opened the Cloud Connector VM and decided to reinstall. And when I went to Control Panel → Uninstall, it hit me in the eye: there was an antivirus in the list. It was not there when I installed AirWatch, I was sure of it!
"Why, what's the problem?" - customer admins asked? - "Antivirus gets installed automatically by domain policy on all Windows machines, servers included". When I deployed AirWatch, the VMs were freshly-made, and the policy did not have time to work. But it did in a week of time. And when the anti-virus got there, first thing it saw was suspicious network activity of AirWatch, and blocked it. Then it saw a suspicious admin console on the AirWatch Admin Console role server - and blocked it.
Needless to say, when the antivirus was uninstalled for the sake of the demo, AirWatch came back to life that same instant, and the challenge was solved.
Moral of the story: when doing a test deployment of AirWatch or Horizon in a semi-production environment, always ask what kind of domain policies affect the Windows OS of the domain servers, and try to exclude the VMs from all infrastructure security stuff during the demo phase. Better yet, build demo environments in an isolated demo zone, and may the Digital Force always be with you!
There is a simple setup with IDM which many customer admins like to implement in proof-of-concept projects and later migrate directly into production. In this setup, IDM is deployed in the DMZ and protected by some load balancer: F5 BIG-IP, Citrix NetScaler, KEMP ADC or whatever. Let's take F5 as an example:
The challenge here is to configure the load-balancing appliance correctly, which some admins fail to do. The configuration often used is a simple "SSL pass-through". What this leads to is unauthenticated users having access to the API endpoints of IDM. IDM responds to these requests and reveals a lot of inside information, which may compromise security. Here are some examples of such links:
https://<IDM External URL>/SAAS/jersey/manager/api/
In the worst case scenario, the admin console will also be available externally (/SAAS/admin/* link).
To mitigate this, L7 URL filtering (whitelisting of allowed external URLs) must be configured on the load-balancing appliance. A second option is to create two VIP interfaces on the appliance and close all the vulnerable URLs to external access. Here is the example schema:
So basically I said it all in the title: I have recently had several questions from customers and partners about ENS needing a public certificate. The official docs are very vague on this topic, and I sincerely thought ENS did not need one, since logically ENS is supposed to take new E-Mail notifications from some Exchange and send them via the Apple cloud as PUSH notifications to the mobile device.
In reality this is not entirely true: before ENS starts to send notifications, it needs to register a device as a subscriber to future notifications. ENS cannot send the subscription data itself in a PUSH message. What it can send is a request for the device to visit it and register a subscription. So the device receives a PUSH message and goes to ENS to make a subscription. And since the device goes to ENS, this means ENS has to be published to the Internet, and it has to have a valid public certificate installed in its IIS bindings.
Read more on ENS troubleshooting in my KB article and tread lightly in the MDM infrastructure!
Some time ago I had an issue on my hands: a client had installed AirWatch in production, and next they started semi-production usage of an Identity Manager (IDM) 3-node cluster together with Horizon View desktops. They are fans of strictness everywhere, so users enter not just their account names in login forms, but full-blown UserPrincipalNames (UPNs). So when they installed IDM, at the Directory connection stage they requested the 'username' attribute be changed to the UPN. All worked fine until they tried to integrate IDM with AirWatch. This detail broke the integration, and I had to do something about it.
Our PSO told me there is no legal way to do this - just reinstall IDM from scratch. But I got curious: what about 'illegal' (undocumented) ways?
So the root of the problem is that the IDM Console does not let you delete stuff: the Active Directory cannot be removed because an active Connector is attached to it, the Connector cannot be removed because an Identity Provider (IdP) is associated with it (the BuiltIn IdP), and the BuiltIn IdP cannot be dissociated because it is locked to the Connector - and we are back to the beginning.
To break this circle, I opened the 'saas' database. What I discovered is that the IDM DB is very neat and logical compared to AirWatch's. It follows the console controls very closely: if there is a tab called 'Connectors' in the console, you can be sure to find a corresponding saas.Connector table, the Directories tab has a saas.Directories table, and so on.
So I searched for a 'weak link' in the chain of dependencies and found it: the place where the IdP is associated with the Connector. All I had to do was delete this row in a small table, saas.IdpJoinConn:
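In the spirit of the above, the removal boils down to a single DELETE. This is a sketch only: the column name below is an assumption, so inspect the actual schema of saas.IdpJoinConn first, and back up the 'saas' database before touching anything.

```sql
-- Sketch only: the column name is hypothetical; verify against your schema.
-- Inspect the join table first to find the row linking the BuiltIn IdP
-- to the stuck Connector, then remove that one row.
SELECT * FROM saas.IdpJoinConn;

DELETE FROM saas.IdpJoinConn
WHERE idConnector = 42;   -- hypothetical id of the stuck Connector
```

Once the row is gone, the circular dependency in the console is broken.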
Once this was gone, the Connector could be deleted, then the Directory, and the puzzle unraveled itself, saving a lot of time on reinstalling the IDM cluster from scratch. As always, I documented my investigation in great detail on a KB page.
I hope this story proves interesting to you and inspires you to check out the IDM database. So stay tuned, and let the Digital God light your path!
Several times while deploying AirWatch in a PoC, I have had security guys come and request an audit, especially before opening network ports to the outside. Usually they bring a security scanner, and it finds things on the IIS server of the AirWatch Directory Services server role. I collect the details on AirWatch hardening in the Knowledge Base. One of the fun and nasty incidents happened when the security team came to the IT project owners and demanded hardening before deploying AirWatch at all. What they did was turn off cipher suites on a Windows gold image with IIS, and then they cloned the image for the different AirWatch servers.
Nobody mentioned that such hardening had been done when the client provided the Windows virtual machines, and it took our PSO a long time to figure out why AirWatch AWCM refused to receive certificates from the MS CA. They simply could not agree on a cipher protocol, because all the available variants had been turned off by security.
Another incident was when the security team audited Identity Manager. In the process they requested the sshuser login/password to inspect SSH exploits. As a result, they scrambled the catalina.options file of IDM and broke the web server: all web interfaces on IDM went down, and we had lots of fun trying to figure out why. We solved it by resetting the options file.
So what lesson does this teach us? Be very vigilant when a client decides to do an audit. Stay very close, and request a log of every action they take on your AirWatch deployment.
It so happened these days that the Email Notification Service (ENS - we will be talking about the new ENS v2 in this blog post) is at the center of the action for me. How does ENS work? Let's talk it over using iOS as an example: on iOS, our mail client (Boxer) is forced to sleep while it is not open, and it cannot truly receive any messages or notifications from our corporate mail server. What happens is that a special ENS server is installed in the datacenter next to the mail server, and it polls the mail server once in a while on behalf of a user to check if there is new EMail. If the EMail server reports that there is indeed a new message, ENS sends a message over to the Apple cloud (so it needs Internet access), which in turn sends it as a PUSH notification to the device.
The device maintains a channel with Apple servers and receives the PUSH at the iOS system level. The iOS system cannot process the message itself - it has to map it to some application, the Boxer client. iOS draws the appropriate icon on the notification and updates the "unread messages" badge on Boxer. As I already said, Boxer knows nothing of any new EMail or notifications - it is all done at the system level. Only once it is opened does Boxer actually go to the mail server and show the actual new messages.
That was the big story, now for some details. In order for ENS to request messages from the mail server on behalf of a user, it must store some credentials for that user to present as proof of authentication. For this, ENS has its own SQL database. In it you can find the user accounts encoded as IDs (UserInfoID), their certificates and private keys, the corresponding devices, and info on how many unread letters each user has (what to write on the badge). But it is not obvious which UserInfoID stands for which user. There is a simple trick to figure this out: in Boxer there is a default notification sound for incoming EMail (note.m4r), which is synced with SQL (the ENS.dbo.DeviceInfo table). If you change the sound to some rare one, it will change in SQL and reveal that user/device.
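A minimal sketch of the trick: only the database and table names come from above; the column names in the query are assumptions, so check the real schema of ENS.dbo.DeviceInfo before relying on them.

```sql
-- After changing the Boxer notification sound on one device from the default
-- note.m4r to some rare one, the odd row points at that user/device.
SELECT UserInfoID, DeviceId, NotificationSound   -- column names are assumptions
FROM ENS.dbo.DeviceInfo
WHERE NotificationSound <> 'note.m4r';
```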
You can then use the UserInfoID to trace what is going on: how many unread letters ENS picked up for this user, what certificate it uses, etc. I have collected some more troubleshooting tips for ENS over here. Don't forget to back up the ENS DB before you try stuff with it: if it breaks, all enrolled users with EMail will have to reinstall Boxer on their devices. Have fun!
As you obviously know, AirWatch is essentially a SQL database and an API Server, with some front-ends and message dispatch components hanging around. In my article last month I discussed some quick code to get into the API. Let's look a bit closer at the database.
SQL with AirWatch starts with a set of design decisions: how should the database be deployed? The general rule of thumb here is to avoid clustering: it is a vastly better choice to have a single SQL instance on a metrocluster than several SQL instance nodes. By the way, this is also what Microsoft says: marketing talk aside, an MS SQL cluster risks falling apart and requires a lot of attention. I have collected other recommendations as well - from the AirWatch team (green rows of the table) and from Microsoft, based on SQL for a very large SCCM deployment (yellow rows).
There are several important tables to look at in AirWatch SQL. Beyond those, when the AirWatch Admin Console makes a query into the database, it often launches a complex function which joins several tables together. And here is where we can poke our nose in using the SQL Profiler. When installing MS SQL, you need to check the "Management tools - Advanced" component to get the SQL Profiler; it then becomes available as a separate tool.
The idea of SQL Profiler is quite simple: it intercepts all queries to the SQL database, shows them, and lets you copy a query and replay it manually. To help find things in a constant stream of requests, you can do some filtering. For example, Admin Console requests can be filtered by putting Webconsole in the query source field.
A little use case to illustrate how this can come in handy: at one company, an admin mistakenly uploaded a new version of an internal app to the console. The new version was still in beta and was supposed to be uploaded to AirWatch in the test zone, but the admin did not know this and hurled the thing onto the production server. The beta app did not work the way it was supposed to, but its users were several company top managers who were on holiday at the time, so the errors came as a surprise a month and 1-2 weeks later. And this was the key to the problem: when the investigation started into who uploaded the defective app and when, the client used the standard console **HUB -> Reports and Analytics -> Events -> Console Events** to see a report, and saw nothing. Why? Because the farthest back the console allows searching is "Last Month". And, as I said, the incident happened earlier.
So using SQL Profiler they intercepted the query the AirWatch Admin Console sends to the database:
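The exact query is version-specific, but its shape was roughly like the hedged sketch below. The procedure and parameter names here are assumptions; only the MaximumRows parameter and the date scope are real knobs mentioned in this story.

```sql
-- Hypothetical sketch of replaying a Console Events search with a widened
-- scope. Procedure and parameter names are assumptions; capture the real
-- ones with SQL Profiler in your own environment.
EXEC dbo.ConsoleEvent_Search
    @StartDate   = '2019-01-01',  -- pushed back beyond the "Last Month" limit
    @EndDate     = '2019-12-31',
    @MaximumRows = 1000;          -- raised from the console default
```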
It is easy to see above that if we change the dates of the search scope and raise the MaximumRows result to some bigger number (1000?), we may see much more than the Admin Console can show us. It worked perfectly and allowed us to see the exact time the defective app was uploaded and who did it.
Have fun, and don't forget to back up your DB regularly before doing cool stuff!
I had an interesting experience this week with a customer trying to deploy some custom apps. The problem they encountered proved deeper than it first looked: the apps got deployed by AirWatch and worked on the device, but the AirWatch Console reported a failed installation, or just left the status hanging as "In Progress". As it turned out, the programmers had compiled the apps with an unsupported build tool version, which produced a different app metadata table that AirWatch could not understand.
First useful tip: AirWatch supports targeted logging, i.e. getting Agent logs from the device. The trick is that log verbosity can be raised by going to Admin → Diagnostics and putting the general logs into verbose mode. No docs say anything about this, but it does affect the verbosity level of the logs in AirWatch mobile apps.
Second useful tip: you can see what metadata got into AirWatch and onto the device when uploading the application to the Console and later deploying the app. To see the metadata table that got onto the device, you need to join a couple of SQL tables:
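As a hedged illustration of such a join, the sketch below shows the idea only: all table and column names in it are assumptions, not the real AirWatch schema - find the actual tables with SQL Profiler while uploading and deploying the app.

```sql
-- Sketch only: replace the hypothetical names with the real tables you
-- discover via SQL Profiler during app upload/deployment.
SELECT a.Name, a.BundleId, m.MetadataKey, m.MetadataValue
FROM dbo.Application a
JOIN dbo.ApplicationMetadata m
  ON m.ApplicationId = a.ApplicationId
WHERE a.Name LIKE '%MyCustomApp%';   -- hypothetical app name
```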
More on deployment scenarios and troubleshooting app metadata using SQL is written in a separate article.
By the way, there are several ways to do the upload automatically! There is a Jenkins plugin for AirWatch, which is the preferred way, but you can also use a curl command. Both ways have their quirks and need deep configuring. An article on curl upload will help you start. Have fun!
A new iPhone XS Max is out on the market, some top manager has already bought it, and the big question is: will AirWatch support it?
What does "support" mean: we have the Agent support on device and we have Console support (compliance rules, smart groups etc. based on device model). We can check out what devices does the console support by going to AirWatch SQL and doing something like:
dbo.DeviceModelInfo is a neat little table which, surprisingly, contains more Apple device models than Android ones. For Android devices, what matters is whether AirWatch has a driver for the device (called a Platform OEM Service); a list of those can be found in the open documentation for the current version of AirWatch. Other interesting tables in AirWatch can be found on this page of the knowledge base.