Ingress to Kubernetes apps deployed on Azure Kubernetes Service (AKS)

In this guide, you'll launch a new cluster with Azure Kubernetes Service (AKS) and a demo app. You'll then add the ngrok Kubernetes Operator to route public traffic directly to your demo app through an encrypted, feature-rich tunnel for a complete proof of concept.

In the end, you'll have learned enough to deploy your next production-ready Kubernetes app with AKS, with the ngrok Kubernetes Operator giving you access to additional features, like observability and resiliency, with no extra configuration complexity.

Here is what you'll be building with:

  • The ngrok Kubernetes Operator: ngrok's official controller for adding secure public ingress and middleware execution to your Kubernetes apps with ngrok's Cloud Edge. With ngrok, you can manage and secure traffic to your apps at every stage of the development lifecycle while also benefitting from simpler configurations, security, and edge acceleration.
  • Azure Kubernetes Service (AKS): A managed Kubernetes environment from Microsoft. AKS simplifies the deployment, health monitoring, and maintenance of cloud native applications, whether you deploy them in Azure, in on-premises data centers, or at the edge. With 40 regions, you should be able to deploy a cluster close to your customers.

What you'll need

  • An Azure account with permissions to create new Kubernetes clusters.
  • An ngrok account.
  • kubectl and Helm 3.0.0+ installed on your local workstation.
  • The ngrok Kubernetes Operator installed on your cluster (a minimal Helm install sketch follows this list).
  • A reserved domain, which you can get in the ngrok dashboard or with the ngrok API.
    • You can choose from an ngrok subdomain or bring your own custom branded domain, like https://api.example.com.
    • We'll refer to this domain as <NGROK_DOMAIN>.
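
If you haven't installed the Operator yet, a minimal Helm install sketch looks like the following. The repository URL and chart name follow ngrok's published Helm chart; the credential value names are assumptions, so confirm them against the chart's documented values.

    # Add ngrok's Helm repository and install the Operator into its own namespace.
    helm repo add ngrok https://charts.ngrok.com
    helm repo update

    # NGROK_API_KEY and NGROK_AUTHTOKEN come from your ngrok dashboard; the
    # credentials.* value names are assumptions -- check the chart's values file.
    helm install ngrok-operator ngrok/ngrok-operator \
      --namespace ngrok-operator \
      --create-namespace \
      --set credentials.apiKey=$NGROK_API_KEY \
      --set credentials.authtoken=$NGROK_AUTHTOKEN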

Create your cluster in AKS

Start by creating a new managed Kubernetes cluster in AKS. If you already have one, you can skip ahead to deploying the demo app.

  1. Go to the Kubernetes services section in your Azure console and click Create → Create a Kubernetes cluster.

  2. Configure your new cluster with the wizard. The default options are generally safe bets, but there are a few you might want to look at depending on your requirements and budget:

    • Cluster preset configuration: You can choose from production-ready, dev/test, and other presets.
    • Region: The data center where AKS will deploy your cluster—pick a region geographically close to your primary customers and/or your organization.
    • AKS pricing tier: The Free tier works well for clusters with fewer than 10 nodes, and you can always upgrade to the Standard tier for production workloads after deployment.
  3. Click Review + create and wait for Azure to validate your configuration. If you see a Validation failed warning, check the errors; they're likely related to quota limits. When it's ready, click Create. Grab a cup of coffee; deployment will take a while.

  4. When AKS completes the deployment, click Go to deployment, then Connect, which will show you options for connecting to your new cluster with kubectl. Follow the instructions to use the Cloud shell or Azure CLI, then double-check AKS has successfully deployed your cluster's underlying services:

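    For example, you can verify nodes and services from your shell (these commands are illustrative; any equivalent check works):

      # Confirm the nodes are Ready and list the services AKS created
      # across all namespaces.
      kubectl get nodes
      kubectl get services --all-namespaces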

Deploy a demo microservices app

To showcase how this integration works, you'll deploy the AKS Store app, which uses a microservices architecture: a frontend UI talks to API-like services, which pass data to RabbitMQ and MongoDB in the backend. To showcase AKS itself, you'll deploy this demo app directly in the Azure Portal.

tip

If you prefer the CLI, save the YAML below to a .yaml file on your local workstation and deploy with kubectl apply -f ....

  1. Click Create → Apply a YAML.

  2. Copy and paste the YAML below into the editor.

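    The following is an abridged sketch for illustration only: it shows just the store-front Deployment and Service, with the image path and ports assumed from Microsoft's Azure-Samples aks-store-demo quickstart. The complete manifest in that sample also defines order-service, product-service, and rabbitmq, which the next step expects to find in the default namespace.

      # Abridged sketch: only the store-front pieces are shown here.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: store-front
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: store-front
        template:
          metadata:
            labels:
              app: store-front
          spec:
            containers:
              - name: store-front
                # Image path assumed from the Azure-Samples aks-store-demo project.
                image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
                ports:
                  - containerPort: 8080
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: store-front
      spec:
        selector:
          app: store-front
        ports:
          - port: 80
            targetPort: 8080
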
  3. Click Add to deploy the demo app. To double-check that the services deployed successfully, click Workloads in the Azure Portal and look for store-front, rabbitmq, product-service, and order-service in the default namespace. If you prefer the CLI, you can run kubectl get pods for the same information.

Add ngrok's Kubernetes ingress to your demo app

Next, you'll configure and deploy the ngrok Kubernetes Operator to expose your demo app to the public internet through the ngrok Cloud Edge.

  1. In the Azure Portal, click Create → Apply a YAML.

  2. Copy and paste the YAML below into the editor. This manifest defines how the ngrok Kubernetes Operator should route traffic arriving on <NGROK_DOMAIN> to the store-front service on port 80, which you deployed in the previous step.

    tip

    Make sure you replace the NGROK_DOMAIN value in the host field of the YAML below with the domain you reserved earlier (<NGROK_DOMAIN>).

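    The manifest below is a minimal sketch: the store-ingress name, the ngrok ingress class, and the store-front backend come from this guide, and the remaining fields are standard Ingress boilerplate to adapt to your setup.

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: store-ingress
        namespace: default
      spec:
        ingressClassName: ngrok
        rules:
          # Replace <NGROK_DOMAIN> with the domain you reserved in "What you'll need".
          - host: <NGROK_DOMAIN>
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: store-front
                      port:
                        number: 80
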
  3. Click Add to deploy the ingress configuration.

    You can check on the status of the ingress deployment in the Azure Portal at Services and ingresses → Ingresses. You should see the store-ingress name and your ngrok subdomain. If you need to edit your ingress configuration in the future, click on the ingress item and then the YAML tab.

  4. Navigate to your reserved domain, e.g. https://<NGROK_DOMAIN>, in your browser to see the demo app in action. Behind the scenes, ngrok's Cloud Edge routed your request into the ngrok Kubernetes Operator, which then passed it to the store-front service.

    (Screenshot: the AKS Store demo app, now accessible from the public internet.)

Add OAuth authentication to your demo app

Now that your demo app is publicly accessible through the ngrok Cloud Edge, you can quickly layer on additional capabilities, like authentication, without configuring and deploying complex infrastructure. Let's see how that works for restricting access to individual Google accounts or any Google account under a specific domain name.

With ngrok's Traffic Policy system and the oauth action, OAuth protection is managed entirely at the Cloud Edge: you don't need to add extra services to your cluster or alter routes, and ngrok's edge authenticates and authorizes every request before allowing ingress to your endpoint.

To enable the oauth action, you'll create a new NgrokTrafficPolicy custom resource and apply it to your entire Ingress with an annotation. You can also apply the policy to just a specific backend or as the default backend for an Ingress—see our doc on using the Operator with Ingresses.

  1. Edit your existing ingress YAML with the following. Note the new annotations field and the NgrokTrafficPolicy CR.

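    The manifest below is a minimal sketch. The oauth-policy name is just an example, and the NgrokTrafficPolicy apiVersion and the k8s.ngrok.com/traffic-policy annotation are assumptions based on the Operator's CRDs, so confirm them against the version you installed.

      # "oauth-policy" is an example name; adjust it to suit your conventions.
      apiVersion: ngrok.k8s.ngrok.com/v1alpha1
      kind: NgrokTrafficPolicy
      metadata:
        name: oauth-policy
        namespace: default
      spec:
        policy:
          on_http_request:
            - actions:
                # Require a Google login before any request reaches the cluster.
                - type: oauth
                  config:
                    provider: google
      ---
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: store-ingress
        namespace: default
        annotations:
          # Attach the NgrokTrafficPolicy above to every route on this Ingress.
          k8s.ngrok.com/traffic-policy: oauth-policy
      spec:
        ingressClassName: ngrok
        rules:
          - host: <NGROK_DOMAIN>
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: store-front
                      port:
                        number: 80
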
  2. When you open your demo app again, you'll be asked to log in via Google. That's a start, but what if you want to restrict access to just yourself or your colleagues?

  3. You can use Traffic Policy expressions, written in CEL, to reject OAuth logins whose email address isn't under example.com. Update the NgrokTrafficPolicy portion of your manifest as shown below, changing example.com to your own domain.

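    Again, a minimal sketch of just the updated NgrokTrafficPolicy (the Ingress is unchanged). The expression assumes the oauth action exposes the authenticated email as actions.ngrok.oauth.identity.email, and the rejection is shown here with a deny action; verify both against ngrok's Traffic Policy reference.

      apiVersion: ngrok.k8s.ngrok.com/v1alpha1
      kind: NgrokTrafficPolicy
      metadata:
        name: oauth-policy
        namespace: default
      spec:
        policy:
          on_http_request:
            - actions:
                - type: oauth
                  config:
                    provider: google
            - expressions:
                # Assumed variable: the oauth action's authenticated email address.
                # This rule matches logins that are NOT under example.com...
                - "!actions.ngrok.oauth.identity.email.endsWith('@example.com')"
              actions:
                # ...and rejects them before they reach the cluster.
                - type: deny
                  config:
                    status_code: 403
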
  4. Check out your deployed app once again. If you log in with an email that doesn't match your domain, ngrok rejects your request. Authentication... done!

What's next?

You've now used the open source ngrok Kubernetes Operator to add public ingress to a demo app on a cluster managed in AKS without having to worry about complex Kubernetes networking configurations. Because ngrok abstracts ingress and middleware execution to its Cloud Edge, you can follow a similar process to route public traffic to your next production-ready app.

For next steps, explore our Kubernetes docs for more details on how the Operator works, different ways to integrate ngrok with an existing production cluster, and more advanced features like bindings and endpoint pooling.