Configure Secure Access to Remote IoT Devices

ngrok is a universal gateway, which means it allows you to create secure ingress to any app, IoT device, or service without networking expertise.

This guide will walk you through an example scenario using ngrok to set up a secure, controlled remote access solution for IoT devices. The solution will enable you to grant trusted parties access to critical systems without exposing those systems to the public internet or relying on complex VPN setups.

What you'll need

  • An ngrok account. If you don't have one, sign up.
  • An ngrok agent configured on your local machine. See the getting started guide for instructions on how to install the ngrok agent.
  • An ngrok API Key. You'll need an account first.

Example scenario

Consider a situation where a network of smart factories is coming online, each with IoT-connected machines, telemetry sensors, and a real-time monitoring dashboard.

In this scenario, each factory's network blocks inbound connections, but the technicians need temporary access to the dashboard. The telemetry API and sensor database must remain permanently accessible from the company's cloud, and access to the dashboard must be authenticated via Azure AD.

Each factory only needs one ngrok agent running.

Why only one ngrok agent per factory?

Traditionally, you might assume that every device inside the factory needs its own ngrok agent, but this isn't necessary. A single ngrok agent is installed on a network-accessible machine inside the factory, and it:

  1. Acts as a central gateway (jumpbox) that can reach any machine on the local network, eliminating the need for multiple agents.
  2. Creates Internal Endpoints so that each API, database, and dashboard is securely exposed inside ngrok, never publicly visible.
  3. Uses Cloud Endpoints for controlled access, where external cloud apps can access only what they need, and the dashboard is only started when requested.
  4. Runs as a background service configured to automatically start on boot, restart after crashes, and log events.
  5. Dynamically manages tunnels, since the agent API can start and stop tunnels as needed.

This setup minimizes security risks, simplifies deployment, and ensures continuous uptime for mission-critical services.

Understanding cloud and internal endpoints

Internal endpoints make services inside the factory network reachable within ngrok without exposing them publicly. They can:

  1. Only receive traffic from cloud endpoints or internal services that explicitly route traffic to them.
  2. Not be accessed directly from the internet.
  3. Be used for telemetry APIs, databases, and dashboards.

Here's an example: The factory's telemetry API runs on a local server (192.168.1.100:8080). Instead of exposing it publicly, you can create an internal endpoint:

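As a minimal sketch, you could start the internal endpoint straight from the agent; the .internal hostname (api.internal here) is an assumption you'd choose yourself, and the exact flag depends on your agent version:

```bash
# Expose the local telemetry API as an internal endpoint.
# URLs ending in .internal are only reachable through ngrok,
# never from the public internet.
ngrok http 192.168.1.100:8080 --url https://api.internal
```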

Now this API is only accessible inside ngrok's private network.

A cloud endpoint is a permanent, externally accessible entry point into the factory network. Cloud endpoints are:

  1. Managed centrally via the ngrok API or dashboard.
  2. Always on and not tied to the lifecycle of the agent.
  3. Not routed to the agent by default; you must configure them to forward traffic to internal endpoints.
  4. Used for exposing services to external cloud apps securely.

For example, the factory's telemetry API is accessible via https://factory.example.com/api, but instead of exposing the API directly, a cloud endpoint forwards traffic to its internal endpoint:

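One way that routing could look, as a sketch of the cloud endpoint's Traffic Policy (assuming the internal endpoint above is https://api.internal):

```yaml
# Traffic Policy attached to the cloud endpoint at https://factory.example.com
on_http_request:
  - actions:
      # Route every matched request to the internal endpoint
      - type: forward-internal
        config:
          url: https://api.internal
```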

Define internal endpoints in ngrok.yml

After installing the ngrok agent, define all required internal endpoints inside the ngrok configuration file, which is at /etc/ngrok.yml on Linux or C:\ngrok\ngrok.yml on Windows.

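Here's a sketch for agent config version 3; the endpoint names, .internal hostnames, and upstream addresses are assumptions, so check the field names against the agent configuration reference for your version. The dashboard is deliberately left out because it's started on demand later via the agent API.

```yaml
# /etc/ngrok.yml (sketch)
version: 3
agent:
  authtoken: <YOUR_BOT_USER_AUTHTOKEN>

endpoints:
  # Telemetry API: always on, reachable only inside ngrok
  - name: telemetry-api
    url: https://api.internal
    upstream:
      url: http://192.168.1.100:8080

  # Sensor database: TCP internal endpoint
  - name: sensor-db
    url: tcp://db.internal:5432
    upstream:
      url: tcp://192.168.1.101:5432
```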

Install ngrok as a background service

Now, install and start the service:

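For example, on Linux (macOS and Windows use the same subcommands; adjust the config path):

```bash
# Register ngrok with the OS service manager using the config file
sudo ngrok service install --config /etc/ngrok.yml

# Start the service and all endpoints defined in the config
sudo ngrok service start
```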
tip

In most cases, installing ngrok as a service requires administrator privileges.

This will start all tunnels defined in the configuration file, ensure ngrok runs persistently in the background, and integrate with native OS service tooling.

Reserve a TCP address for your TCP-based cloud endpoint

When you reserve a TCP address, you can create a TCP cloud endpoint that binds to that address. Reserved TCP addresses are available on ngrok's pay-as-you-go plan.

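A sketch using the platform API's Reserved Addresses endpoint; the description and region values are illustrative:

```bash
curl -X POST https://api.ngrok.com/reserved_addrs \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{"description": "factory1 sensor database", "region": "us"}'
```

The response contains the reserved address (something like 1.tcp.ngrok.io:12345), which you'll bind the TCP cloud endpoint to later.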

Reserve a custom wildcard domain

Creating a custom wildcard domain allows you to create endpoints and receive traffic on any subdomain of your domain. Wildcard domains are available on ngrok pay-as-you-go plans once you verify your account with support. It can be helpful to create a separate subdomain for each factory you wish to connect to.

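A sketch using the Reserved Domains endpoint; *.api.acme.com is an assumed wildcard chosen to match the factory1.api.acme.com example used later:

```bash
curl -X POST https://api.ngrok.com/reserved_domains \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{"domain": "*.api.acme.com", "description": "per-factory telemetry endpoints"}'
```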

Create a bot user and authtoken for the factory

Create a bot user so that you can create an agent authtoken independent of any user account. A bot user represents a service account, which allows each customer network to have its own authtoken. If one authtoken is compromised, only that customer network is affected rather than all of them.

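A sketch using the Bot Users endpoint; the name is illustrative:

```bash
curl -X POST https://api.ngrok.com/bot_users \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{"name": "factory-1-agent"}'
```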

Navigate to the Authtokens section of your ngrok dashboard, create an authtoken for the bot user, and assign it an ACL rule so that it can only create endpoints bound to the reserved wildcard domain for that factory's telemetry API.
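If you'd rather script this step, here's a sketch against the Credentials API; the bind ACL rule format and how the token is attached to the bot user are assumptions to verify against the API reference:

```bash
# Create an authtoken that can only bind endpoints on *.api.acme.com.
# Associate it with the bot user per the dashboard flow described above.
curl -X POST https://api.ngrok.com/credentials \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{"description": "factory 1 agent token", "acl": ["bind:*.api.acme.com"]}'
```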

Create cloud endpoints for always-on API and database access

Since the telemetry API and database must always be accessible, create permanent cloud endpoints that route traffic to their internal endpoints. Start with an HTTPS cloud endpoint for the API, created via the ngrok platform API.

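A sketch using the Endpoints API; the URL follows the examples above (the /api path is omitted here for simplicity), api.internal is the internal endpoint defined earlier, and traffic_policy is passed as a JSON-encoded string:

```bash
curl -X POST https://api.ngrok.com/endpoints \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "cloud",
    "url": "https://factory1.api.acme.com",
    "description": "factory 1 telemetry API",
    "traffic_policy": "{\"on_http_request\":[{\"actions\":[{\"type\":\"forward-internal\",\"config\":{\"url\":\"https://api.internal\"}}]}]}"
  }'
```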

Now factory1.api.acme.com/api permanently forwards traffic to api.internal inside the factory. Next, create a TCP cloud endpoint for the sensor database because it's on a TCP-based internal endpoint.

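A sketch of the TCP cloud endpoint, assuming the reserved address is 1.tcp.ngrok.io:12345 and the database's internal endpoint is tcp://db.internal:5432 (both illustrative); restrict-ips applies the allow and deny lists before the connection is forwarded:

```bash
curl -X POST https://api.ngrok.com/endpoints \
  -H "Authorization: Bearer $NGROK_API_KEY" \
  -H "Ngrok-Version: 2" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "cloud",
    "url": "tcp://1.tcp.ngrok.io:12345",
    "description": "factory 1 sensor database",
    "traffic_policy": "{\"on_tcp_connect\":[{\"actions\":[{\"type\":\"restrict-ips\",\"config\":{\"allow\":[\"203.0.113.0/24\"],\"deny\":[\"192.0.2.0/24\"]}},{\"type\":\"forward-internal\",\"config\":{\"url\":\"tcp://db.internal:5432\"}}]}]}"
  }'
```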

Now your reserved TCP address forwards TCP connections to the factory database, allowing access only from 203.0.113.0/24 and denying traffic from 192.0.2.0/24.

Enable on-demand web dashboard access

Since the web dashboard should only be online when needed, use ngrok's agent API to dynamically start and stop tunnels. As a technician, start the tunnel by running:

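As a sketch, talk to the agent's local API, which listens on 127.0.0.1:4040 by default; the tunnel name, upstream address, and domain below are assumptions:

```bash
# Start the dashboard tunnel on demand
curl -X POST http://127.0.0.1:4040/api/tunnels \
  -H "Content-Type: application/json" \
  -d '{"name": "dashboard", "proto": "http", "addr": "192.168.1.102:3000", "domain": "app.factory.com"}'

# Tear it down again when the session is over
curl -X DELETE http://127.0.0.1:4040/api/tunnels/dashboard
```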
note

Ensure your chosen domain has the proper TLS/SSL certs provisioned in order to create a secure, working connection.

Now, app.factory.com is live—only for this session.

Secure API access with Google OAuth and mTLS with Traffic Policy

Navigate to your newly created telemetry API cloud endpoint in the Endpoints tab of the dashboard and apply a Traffic Policy. Make sure you have certificates on hand, or generate them using the instructions in our terminate-tls docs.

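Here's a sketch of that Traffic Policy; the CA certificate is a placeholder, and the terminate-tls and oauth config keys should be checked against the Traffic Policy action docs:

```yaml
# Terminate TLS and require client certificates signed by your CA (mTLS)
on_tcp_connect:
  - actions:
      - type: terminate-tls
        config:
          mutual_tls_certificate_authorities:
            - |-
              -----BEGIN CERTIFICATE-----
              ...your CA certificate...
              -----END CERTIFICATE-----

# Require a Google login, then route to the internal endpoint
on_http_request:
  - actions:
      - type: oauth
        config:
          provider: google
      - type: forward-internal
        config:
          url: https://api.internal
```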

Now only Google-authenticated users can access the API, and the connection is secured with mTLS.

Welcome to your secure, connected, and automated factory

You officially made it! You now have a system that allows you to seamlessly and securely access remote services across your enterprise. Let's recap what you've built:

  1. One ngrok agent per factory, with no need for multiple installs.
  2. An always-online API and database, securely available via cloud endpoints.
  3. A web dashboard that spins up on demand, with authentication via Google OAuth and mTLS security.
  4. An agent API that dynamically manages tunnels with automatic provisioning and deprovisioning.
  5. ngrok running as a background service, so it's reliable and restarts automatically.