Deploy a multicloud API gateway
Whether it's because of an acquisition, a mandate to make your cloud-native architecture more redundant, or a need to access cloud-specific services, you need to make multiple API services available from more than one cloud behind a single hostname, like api.example.com.
To help you sail through the challenges of any multicloud deployment, you're looking for an API gateway platform that lets you:
- Use your existing Kubernetes clusters and networking
- Consistently apply security and traffic management policy in one place
- View API requests and responses on a single pane of glass
- Stop worrying about cloud-specific tools and implementations.
Enter ngrok's multicloud API gateway.
What you'll learn
In this tutorial, you'll learn how to implement ngrok as a multicloud API gateway in a Kubernetes environment with these broad steps:
- Deploy the ngrok Kubernetes Operator to your clusters in multiple clouds.
- Set up a single hostname attached to a cloud endpoint, which routes traffic by pathname to internal agent endpoints running in your clusters on multiple clouds.
- Add traffic management rules, like JWT validation and rate limiting, to all or some of your API services.
In the end, you'll get a single hostname, like api.example.com, that routes to API services on multiple clouds based on pathname—plus essential traffic management policies like authentication and rate limiting. Your architecture will look like this:
What you'll need
- kubectl and Helm installed on your local workstation.
- Two or more cloud providers.
  - We'll refer to them as cloud A and cloud B.
- A Kubernetes cluster on each cloud.
- A reserved domain, which you can get in the ngrok dashboard or with the ngrok API.
  - You can choose an ngrok subdomain or bring your own custom branded domain, like https://api.example.com.
  - We'll refer to this domain as <YOUR_NGROK_DOMAIN>.
- An Auth0 account (free is fine) for creating an API and generating JWTs.
Deploy demo API services (optional)
If you want to quickly wire up a POC using ngrok, we recommend using our demo API service, which responds with details about your requests.
The two manifests below provision pods and resources for abc-app on cloud A and xyz-app on cloud B, respectively, and configure each container's environment variables so the services respond uniquely—to "prove" you've actually gone multicloud.
- Cloud A: abc-app
- Cloud B: xyz-app
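The demo manifests didn't render here, so here's a minimal sketch of the shape each one takes, assuming a placeholder image and environment variable (neither is ngrok's actual demo image—swap in your own). The Service name and port (abc-app-service, 4000) match what the agent endpoints reference later in this guide.

```yaml
# Sketch of the cloud A manifest: a Deployment plus a ClusterIP Service for abc-app.
# The image and MESSAGE env var are placeholders, not ngrok's real demo API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: abc-app
  template:
    metadata:
      labels:
        app: abc-app
    spec:
      containers:
        - name: abc-app
          image: your-registry/demo-api:latest # placeholder image
          ports:
            - containerPort: 4000
          env:
            - name: MESSAGE # placeholder variable that makes responses identify cloud A
              value: "Hello from abc-app on cloud A"
---
apiVersion: v1
kind: Service
metadata:
  name: abc-app-service
spec:
  selector:
    app: abc-app
  ports:
    - port: 4000
      targetPort: 4000
```

The cloud B manifest mirrors this one, with abc-app swapped for xyz-app (and xyz-app-service) and an environment variable value that identifies cloud B.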
Apply these with kubectl apply -f ... in the respective cluster on each cloud.
Deploy the ngrok Kubernetes Operator
Add the ngrok Kubernetes Operator repo to Helm.
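If the commands didn't load above, this is the usual pair, assuming ngrok's public chart repository at charts.ngrok.com:

```bash
# Add ngrok's Helm repository and refresh the local index
helm repo add ngrok https://charts.ngrok.com
helm repo update
```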
Set up a few environment variables to help apply the ngrok Kubernetes Operator:
- NGROK_AUTHTOKEN: An ngrok authtoken—you can use either your default authtoken or create a new authtoken for this deployment.
- NGROK_API_KEY: An API key created in the ngrok dashboard to associate with your deployment.
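For example:

```bash
# Replace with your own values from the ngrok dashboard
export NGROK_AUTHTOKEN="<YOUR_NGROK_AUTHTOKEN>"
export NGROK_API_KEY="<YOUR_NGROK_API_KEY>"
```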
Install the ngrok Kubernetes Operator into a new ngrok-operator
namespace.
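A sketch of the install command; the credentials.apiKey and credentials.authtoken value names reflect the ngrok-operator chart as commonly documented, but double-check them against the chart version you install:

```bash
helm install ngrok-operator ngrok/ngrok-operator \
  --namespace ngrok-operator \
  --create-namespace \
  --set credentials.apiKey=$NGROK_API_KEY \
  --set credentials.authtoken=$NGROK_AUTHTOKEN
```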
Repeat the Operator installation for cloud B.
Create internal agent endpoints
These endpoints are private to your account and can only receive traffic
forwarded with the forward-internal
Traffic Policy action, which means they're
never publicly accessible.
The ngrok Kubernetes Operator comes with an AgentEndpoint
CRD that helps you
quickly map specific upstream services to ngrok endpoints.
If you're bringing your own API services instead of using the demo API, you'll
need to change the url
and upstream.url
fields based on your architecture.
- Cloud A: abc-app
- Cloud B: xyz-app
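Here's a sketch of the two AgentEndpoint resources. The cloud A values (https://abc-cloud-a.internal and http://abc-app-service.default:4000) match what this guide references later; the cloud B counterparts (xyz-cloud-b.internal, xyz-app-service) and the apiVersion are assumptions—verify them against the CRDs installed by your Operator version.

```yaml
# Cloud A: maps the internal endpoint to the abc-app Service
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: abc-app-agent-endpoint
  namespace: default
spec:
  url: https://abc-cloud-a.internal
  upstream:
    url: http://abc-app-service.default:4000
```

```yaml
# Cloud B: same shape, with assumed names for the xyz service
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: xyz-app-agent-endpoint
  namespace: default
spec:
  url: https://xyz-cloud-b.internal
  upstream:
    url: http://xyz-app-service.default:4000
```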
Apply these with kubectl apply -f ... in the respective cluster on each cloud.
Create a cloud endpoint
Cloud endpoints are persistent, always-on endpoints that you can manage with the ngrok dashboard or API.
You centrally control your traffic management and security policy on your cloud endpoint, which operates as the "front door" of your multicloud API gateway, then forward traffic to your API services in multiple clouds. That's much easier than synchronizing policies using cloud-specific tools, since they're all configured and managed in different ways.
- Dashboard
- API
Hop over to the Endpoints section of the ngrok dashboard and click + New.
Leave the Binding value Public, then enter the domain name you reserved earlier. Click Create Cloud Endpoint.
The ngrok
CLI provides a helpful wrapper around the ngrok
API, which you can use to create a cloud endpoint and
apply a file containing Traffic Policy rules.
Because every cloud endpoint must contain a Traffic Policy rule, create a
new file named policy.yaml
on your local workstation with the following
YAML, which is temporary until you add proper routing.
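A minimal placeholder policy works here—for example, a custom-response action that answers every request until you add routing (confirm the action's config field names against the Traffic Policy reference):

```yaml
on_http_request:
  - actions:
      - type: custom-response
        config:
          status_code: 200
          content: "Multicloud API gateway placeholder"
```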
Create a cloud endpoint on <YOUR_NGROK_DOMAIN>, passing your policy.yaml file as an option.
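A sketch of the call; the exact flag names vary by CLI version, so confirm them with ngrok api endpoints create --help:

```bash
ngrok api endpoints create \
  --url "https://<YOUR_NGROK_DOMAIN>" \
  --bindings public \
  --traffic-policy "$(cat policy.yaml)"
```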
You'll get a 201
response—save the value of id
, as you'll need it
again later to continue configuring the Traffic Policy applied to your
cloud endpoint. We'll refer to it as <CLOUD_ENDPOINT_ID>
.
Route traffic to your services
Your front door is ready, but it currently doesn't have any logic for routing traffic to your API services in multiple clouds.
Enter our Traffic Policy system, which lets you filter traffic based on its properties and take action as it passes through ngrok's global network. Two important concepts of Traffic Policy to note:
- Phases are the distinct points in the lifecycle of a request where you can filter and take action. For this use case, we're using on_http_request, which activates when ngrok receives an HTTP request over an established connection.
- Expressions define when to run your actions. They're written in Common Expression Language, and must evaluate to true to run the corresponding action.
The rules below:
- Filter for requests arriving only on https://<YOUR_NGROK_DOMAIN>/abc and forward them to your internal agent endpoint in cloud A.
- Filter for requests arriving only on https://<YOUR_NGROK_DOMAIN>/xyz and forward them to your internal agent endpoint in cloud B.
You can also route by other properties, like subdomains and headers.
- Dashboard
- API
Copy and paste the rules below into your cloud endpoint's Traffic Policy editor
in the dashboard. If you're bringing your own API services instead of using the
demo API, you'll need to change /abc
and /xyz
to match your services' paths
and the url
for your internal agent endpoints.
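Here's a sketch of those rules. The req.url.path variable and the cloud B internal URL (https://xyz-cloud-b.internal) are assumptions—match them to your agent endpoints and the Traffic Policy reference:

```yaml
on_http_request:
  # Route /abc traffic to the internal agent endpoint in cloud A
  - expressions:
      - req.url.path.startsWith('/abc')
    actions:
      - type: forward-internal
        config:
          url: https://abc-cloud-a.internal
  # Route /xyz traffic to the internal agent endpoint in cloud B
  - expressions:
      - req.url.path.startsWith('/xyz')
    actions:
      - type: forward-internal
        config:
          url: https://xyz-cloud-b.internal
```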
Hit Save to lock in the new policy.
Update your existing policy.yaml
file with the YAML below. If you're
bringing your own API services instead of using the demo API, you'll need to
change /abc
and /xyz
to match your services' paths and the url
for
your internal agent endpoints.
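The rules are the same ones shown in the Dashboard tab above—only where they live (your local policy.yaml file) differs.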
Update your cloud endpoint.
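A sketch, with the same caveat about CLI flag names as above:

```bash
ngrok api endpoints update <CLOUD_ENDPOINT_ID> \
  --traffic-policy "$(cat policy.yaml)"
```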
At this point, your multicloud API gateway is up and running! Give it a try, won't you?
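For example:

```bash
curl https://<YOUR_NGROK_DOMAIN>/abc
```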
You should get a response from the abc-app demo service in cloud A, including details about your request and the environment variable that identifies it. Run curl https://<YOUR_NGROK_DOMAIN>/xyz and you should see the equivalent response from xyz-app in cloud B.
Add traffic management policies
Your multicloud API gateway routes traffic, but it doesn't yet do the essential work of an API gateway: offloading non-functional requirements from your services.
One great feature of ngrok's building blocks of endpoints and Traffic Policy rules is that they're composable—you can reuse them, chain them, and apply them at multiple stages in the lifecycle of an API request.
With the shape you've already created, you can centrally manage certain policies, like authentication, on your cloud endpoint, then compose additional rules onto specific services.
Validate JWTs on all APIs and requests
API authentication is too important not to apply consistently across all your APIs and requests. That's where the cloud endpoint's role as the always-on front door to all your routes comes in handy—you can apply the jwt-validation action once for dependable AuthN, no matter how many services you end up deploying behind your multicloud API gateway.
ngrok's JWT validation action helps you:
- Give your end users many ways to access your APIs.
- Ensure only requests containing the correct access token, specified by an Authorization: Bearer ... header, can access any of your APIs.
- Add claims to tokens for authorization and fine-grained access control, where a specific token may only have access to a certain API (service_access: abc) or apply RBAC (features: read).
- Use a single credential for end users who need to access multiple upstream services.
- Offload all this logic from your API services and run it in ngrok's network.
You can use any OAuth provider for JWT validation, but let's quickly cover the process with Auth0.
- Log in to your Auth0 tenant dashboard.
- Select Applications > APIs, then + Create API.
- Name your API whatever you'd like.
- Replace the value of the Identifier field with <YOUR_NGROK_DOMAIN>.
- Leave the default values for JSON Web Token (JWT) Profile and JSON Web Token Signing Algorithm.
- Click Create.
- Navigate to your application and click on the Test tab, where you can find a signed, fully functional JWT and examples of how to programmatically generate more.
The rule below builds on top of the previous cloud endpoint policy to:
- Reject requests missing a token with a 401 Unauthorized error.
- Reject requests with an invalid token with a 403 Forbidden error.
- Forward requests with a valid token to one of your internal agent endpoints based on the pathname.
You'll need to change the variables accordingly—if you're not sure where to find this information, we have a full integration guide with more details.
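Here's a sketch of the combined policy. The jwt-validation config below (issuer and JWKS URLs, audience, token location) is illustrative—pull the real values from your Auth0 API settings and confirm the field names against ngrok's jwt-validation action reference; the routing rules are unchanged from earlier.

```yaml
on_http_request:
  # Validate the Auth0-issued JWT before any routing happens
  - actions:
      - type: jwt-validation
        config:
          issuer:
            allow_list:
              - value: "https://<YOUR_AUTH0_TENANT>.us.auth0.com/"
          audience:
            allow_list:
              - value: "https://<YOUR_NGROK_DOMAIN>"
          http:
            tokens:
              - type: access_token
                method: header
                name: Authorization
                prefix: "Bearer "
          jws:
            allowed_algorithms:
              - RS256
            keys:
              sources:
                additional_jkus:
                  - "https://<YOUR_AUTH0_TENANT>.us.auth0.com/.well-known/jwks.json"
  # Route validated requests by pathname, as before
  - expressions:
      - req.url.path.startsWith('/abc')
    actions:
      - type: forward-internal
        config:
          url: https://abc-cloud-a.internal
  - expressions:
      - req.url.path.startsWith('/xyz')
    actions:
      - type: forward-internal
        config:
          url: https://xyz-cloud-b.internal
```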
Apply the updated policy in either the dashboard or via the ngrok API.
Rate limit specific API services
Let's say one of your services (like abc-app, if you're following along with the demo service) needs additional protection from unintentional misuse and malicious attacks.
The rate-limit Traffic Policy action allows you to reject requests with a 429 error code once a user or group has exceeded your customizable threshold, and the AgentEndpoint CRD allows you to define a Traffic Policy for just that endpoint.
The rule below builds on top of your AgentEndpoint CR from earlier to:
- Allow up to 10 requests per IP in a 60s window.
- Reject requests that exceed the rate limiting capacity with a 429 error response.
- Forward all other requests to the upstream API service at http://abc-app-service.default:4000.
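A sketch of the updated cloud A AgentEndpoint with an inline Traffic Policy. The trafficPolicy layout and the rate-limit config keys are assumptions against recent Operator and Traffic Policy versions—verify them against the CRD schema and the rate-limit action reference:

```yaml
apiVersion: ngrok.k8s.ngrok.com/v1alpha1
kind: AgentEndpoint
metadata:
  name: abc-app-agent-endpoint
  namespace: default
spec:
  url: https://abc-cloud-a.internal
  upstream:
    url: http://abc-app-service.default:4000
  trafficPolicy:
    inline:
      on_http_request:
        # Reject clients that exceed 10 requests per 60s window with a 429
        - actions:
            - type: rate-limit
              config:
                name: abc-app-per-ip-limit
                algorithm: sliding_window
                capacity: 10
                rate: 60s
                bucket_key:
                  - conn.client_ip
```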
Apply the updated manifest with kubectl apply -f ...
.
Ready to test your rate limit in action? Run the below command after replacing
<YOUR_NGROK_DOMAIN>
and the path, if relevant.
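For example, assuming you've exported a valid Auth0 access token as TOKEN:

```bash
# Fire 20 requests and print only the status codes; the last ones should be 429
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer $TOKEN" \
    "https://<YOUR_NGROK_DOMAIN>/abc"
done
```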
You'll see a few normal responses until you hit the rate limit, and then you'll
see 429
errors. Run the same command on the /xyz
path and you won't see the
same errors, since you've applied this policy only to the
https://abc-cloud-a.internal
agent endpoint.
If you want all your APIs to have a consistent rate limiting strategy, you can
move the rule to your cloud endpoint above the jwt-validation
action.
What's next?
You've now brought your multicloud APIs online with ngrok's API gateway, which also automatically gives you features like DDoS protection and global load balancing. Plus, you've added global AuthN with JWTs and explored composing Traffic Policy rules on multiple endpoints.
Not a bad start—and it probably wasn't nearly as tough as you thought it would be, either.
That said, your journey into multicloud API gateway with ngrok is just beginning. Next up, we recommend you:
- Check out your Traffic Inspector (documentation) to observe, modify, and replay requests across your API gateway.
- Explore other opportunities to manage and take action on API traffic in our Traffic Policy documentation.