Microsoft introduced Azure Function Apps in March 2016. The service lets developers write event-driven code that executes when triggered by events inside Azure services. Developers can leverage Azure Functions to build HTTP-based APIs that are accessible to a variety of applications. Pricing scales with usage: you pay only for the resources consumed while your functions execute.
The distinguishing feature of Azure, compared to Amazon or Google, is the ability to run Windows-based containers. Both environments, whether you use Linux or Windows, support compiled binaries and interpreted languages. Each function has a unique Git endpoint for CI/CD integration.
Over the past few weeks, following my write-up on Google Cloud Functions, I’ve been exploring best practices, configuration issues, and potential vulnerabilities as they relate to serverless apps on Microsoft Azure.
Developers can use external modules and libraries to reduce development time. These extras are often maintained on a free-time basis and thus may contain vulnerable code or, even worse, malicious code.
Most package managers have a vulnerability feed that helps with scheduling package upgrades. The .NET package manager, NuGet, doesn’t have a publicly available vulnerability feed, which makes tracking vulnerabilities harder.
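Without an official feed to consume, even a plain inventory of your NuGet dependencies helps: the list can be cross-checked against public advisories by hand or in CI. A minimal sketch (the helper name and sample file are mine, but `packages.config` is NuGet's standard dependency manifest):

```python
# Sketch: list NuGet dependencies from a packages.config file so they can be
# checked against advisory databases (NuGet exposes no public vulnerability feed).
import xml.etree.ElementTree as ET

def list_nuget_packages(config_xml: str) -> list:
    """Return (id, version) pairs from a packages.config document."""
    root = ET.fromstring(config_xml)
    return [(pkg.get("id"), pkg.get("version")) for pkg in root.iter("package")]

sample = """<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Newtonsoft.Json" version="9.0.1" targetFramework="net461" />
</packages>"""

print(list_nuget_packages(sample))  # [('Newtonsoft.Json', '9.0.1')]
```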
Innocently including vulnerable code in functions is an extremely easy mistake to make. Developers need to carefully review the libraries they use: go through the code manually when needed, pull updates often, and keep track of future releases of the libraries.
There are third-party solutions, such as Twistlock, that can integrate into the development build pipeline and scan external libraries for malware and vulnerabilities.
Continuous deployment with Git repository
Azure allows you to assign a Git repository (Bitbucket, GitHub, or others) to functions. This way developers can maintain code in GitHub, while Azure pulls code changes and updates the function accordingly.
The next step after breaking into your function’s code repository would be changing the code of your production functions. I suggest setting up two-factor authentication (2FA) on your repository account to make it harder to steal an account with access to the repository. Don’t forget bots and automation accounts; they are more likely to be vulnerable than normal user accounts because they usually don’t support 2FA. If 2FA is not supported, make sure your bots use strong passwords.
Yet another mitigation is to deploy automatically only to a staging environment, and then promote the code to production manually.
Shared disk space
Functions in the same web app share a common home directory. The directory holds information about function execution such as logs, authorization tokens, and other OS-dependent information.
The directory is accessible at D:\home from Windows functions and at /home from Linux functions. It’s important to know that some of the subdirectories under the home folder are shared and persistent across functions in the same web app. This means that one function can access the files of other functions; since the directory is persistent, the functions don’t even need to run at the same time!
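To make the risk concrete, here is a small sketch of the pattern, using a temporary directory as a stand-in for the shared /home or D:\home so the snippet runs anywhere:

```python
# Sketch: one function leaves data behind that another function (or a later
# invocation) in the same web app can read.
import pathlib
import tempfile

# Stand-in for the shared, persistent home directory (/home or D:\home).
home = pathlib.Path(tempfile.gettempdir())
marker = home / "persistent_marker.txt"

# "Function A" writes state that outlives its own execution...
marker.write_text("planted by function A")

# ...and "function B", invoked later in the same web app, reads it back.
print(marker.read_text())  # planted by function A
```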
I want to emphasize that once a function is compromised, it’s relatively easy to gain persistence using these directories. The scope of this isolation leakage is limited to the web app. Unfortunately there is no easy way to avoid the leakage, as it is an integral part of the Azure Function Apps design. A simple workaround is to segment the functions across multiple function apps; this way each app is limited to its own web app and cannot interfere with the others. This solution requires duplicating the configuration, since deployment configuration is set per web app and not per function.
During penetration testing, a security researcher tries to gather as much information as possible about the target that an attacker could exploit. Azure, by default, includes the unhandled exception traceback in the HTTP response. Attackers can use this information to sharpen their attack and identify the vulnerable service or code inside the function. To avoid exposing this information, set a top-level exception handler that doesn’t return the traceback to the user.
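A minimal sketch of such a handler in Python (the function names are illustrative; in a real Azure Function you would return an HTTP response object rather than a tuple):

```python
# Sketch: log the full traceback server-side, but return only a generic
# message to the HTTP client so internals are not leaked.
import logging
import traceback

def handle_request(payload):
    # Simulated bug deep inside the function's business logic.
    raise ValueError("secret internal detail")

def safe_entry_point(payload):
    try:
        return 200, handle_request(payload)
    except Exception:
        # Keep the traceback for operators in the server logs...
        logging.error("unhandled exception:\n%s", traceback.format_exc())
        # ...but never echo it back to the caller.
        return 500, "Internal server error"

status, body = safe_entry_point({})
print(status, body)  # 500 Internal server error
```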
By default, authentication is disabled for new functions. It’s possible to enable authentication and authenticate users against Google, Facebook, Microsoft, and other identity providers.
Functions are somewhat limited in terms of networking. When the function code executes inside a Linux container, it is assigned an IP on the subnet 188.8.131.52/32 of the Docker virtual container network. Inside the function, local port 80 is exposed to allow incoming connections from the reverse proxy. These incoming connections drive the HTTP trigger mechanism. The use of a reverse proxy implies that the function is protected by some kind of throttling mechanism.
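One generic way to check which address a function instance was assigned is to let the OS route a UDP socket and read back the chosen source address; connecting a UDP socket sends no packets. This is a portable sketch, not an Azure-specific API:

```python
# Sketch: discover the container-local IP address by asking the OS which
# source address it would use to reach an external host.
import socket

def local_ip() -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # UDP connect() only selects a route; no traffic is sent.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        # No route available (e.g. fully isolated network namespace).
        return "unknown"
    finally:
        s.close()

print(local_ip())
```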
Outgoing TCP and UDP connections are allowed.
Windows functions are hardened in terms of networking: I could not enumerate the network interfaces or IP addresses, although outgoing traffic is allowed.
Registry and WMI
The Windows registry is partly accessible on Windows containers. Basic access can be gained through PowerShell. The registry seems quite hardened and no leaks were found between the function container and the host.
WMI is short for Windows Management Instrumentation; it lets software interact with the Windows operating system in an easy manner. For example, WMI can be used to list all the IP addresses of a system and also to change them. Inside the Windows container, the interesting WMI classes are not available, and enumeration of WMI classes and namespaces is blocked as well. This leaves us with no access to WMI. I assume it can break some software (or PowerShell scripts) that relies on WMI, but it also gives an attacker far fewer options to expand outside of the function.
During the research I used Kudu a lot for debugging. Kudu is not a security tool, but it is a very useful debugging tool.
For each Azure web app, which may contain one or more functions, there is a corresponding Kudu website for debugging and managing the functions. Kudu was developed by Microsoft, and it allows debugging of both Windows- and Linux-based functions. It is accessed by browsing to https://*.scm.azurewebsites.net, where the wildcard is your web app name.
With Kudu, you can download Docker logs, browse your home directory files (the shared directory), see your function’s Git endpoint, and much more. Kudu’s functionality varies between the Windows and Linux environments, as the deployment model and features of Azure Functions also vary by operating system.
Most of Kudu’s functionality is exposed through a REST API.
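For example, file browsing lives under /api/vfs/ and deployment history under /api/deployments in Kudu's REST surface. A sketch that builds an authenticated request with Python's standard library (the app name and credentials are placeholders; Kudu accepts the web app's basic-auth deployment credentials):

```python
# Sketch: construct an authenticated request against a web app's Kudu site.
# The request is only built here, not sent.
import base64
import urllib.request

def kudu_request(app: str, path: str, user: str, password: str) -> urllib.request.Request:
    # Each web app has a companion Kudu site at <app>.scm.azurewebsites.net.
    url = f"https://{app}.scm.azurewebsites.net/{path.lstrip('/')}"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")  # deployment credentials
    return req

req = kudu_request("myapp", "/api/vfs/site/wwwroot/", "deployuser", "s3cret")
print(req.full_url)  # https://myapp.scm.azurewebsites.net/api/vfs/site/wwwroot/
```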
Microsoft Azure Functions provides a distinct platform that allows, for the first time, execution of code inside a Windows container in a serverless model. The platform has very nice features and has made big progress in the past year. I’m looking forward to seeing the project’s progress this year, especially once it is no longer in preview. I hope these best practices and research results provide deeper insight into the potential risks of building serverless applications.
Follow us on @twistlocklabs for more interesting security reviews!