With each new data breach it becomes clearer that organizations can better protect themselves and their users’ data by adopting a Zero Trust Network Access (ZTNA) security model. Applied correctly, ZTNA minimizes lateral movement across the organization’s resources, limiting access to sensitive information and reducing the impact of a potential attack. While most stakeholders strive for improved network security practices, asking organizations to re-architect their corporate networks in order to enable zero trust policies is a tall order. At Bowtie, we believe that organizations should be able to achieve a stronger security posture and adopt a ZTNA model without starting over.
Bowtie provides the capability to define granular networking policies for each segment of an organization's network, without needing to send users' traffic beyond their own trusted network or to restructure that network in any way. This starts by deploying the Bowtie Controller – a software-based, self-contained networking appliance – within the organization’s private network. The Controller is responsible for brokering access between the organization’s users and devices and its data, applications, and private networks. We have designed the Bowtie Controller image so that it can be deployed in all major cloud environments or as a virtual machine for on-premises infrastructure.
Preparing the deployment
A Controller simply needs to be positioned in a segment of the network that is reachable both by clients outside the network and by the applications and hosts within it. With this in mind, those who would prefer to get up and running quickly can jump ahead to our deployment outline to get started.
For those facing potential network hurdles, below is an overview of the approaches that are typically used when deploying Bowtie Controllers.
- The first approach involves deploying the Controllers within a subnet that is reachable from the public internet but also able to reach private subnets within the same network. In this model the Controllers may be assigned their own public-facing IP addresses and will accept traffic from the public internet. All unnecessary ports are closed, and all traffic entering the private network through the Controllers is authenticated and authorized. The Controllers operate as the front door to the network and are therefore responsible for creating routes to the private subnets where the organization’s servers, applications, and private data reside.
- The second approach involves placing the Controllers within a private subnet in the network, and then relying on load balancing/NAT gateway devices to forward traffic to the Bowtie cluster. From there, Bowtie will instantiate routes to other private subnets within the same site or networking segment.
The first approach, while simpler and requiring less infrastructure (and therefore lower infrastructure costs), exposes the Bowtie Controllers directly to the public internet. Some organizations prefer to use dedicated ingress, making the second approach a better fit.
A third approach acts as a hybrid of the two already discussed. A single Controller operates as a “lighthouse” with which all other “submerged” Controllers communicate. For example, if an organization has both a US-West network and a US-East network, a public-facing Controller can be stood up within each region, and all other Controllers deployed within the networking segments of those regions will communicate with devices through the associated regional Controller. This approach not only conserves IP addresses, it also minimizes the infrastructure overhead required to stand up a Bowtie mesh network.
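To make the hybrid model concrete, here is a small Python sketch of the topology it produces. This is purely illustrative – it does not reflect Bowtie's internals, and all hostnames and region names are hypothetical – but it shows why the lighthouse pattern conserves public IP addresses: devices always enter through their region's single public-facing Controller, no matter how many submerged Controllers sit behind it.

```python
# Conceptual sketch of the "lighthouse" topology: one public-facing
# Controller per region relays traffic to the private ("submerged")
# Controllers behind it. All names here are hypothetical.

REGIONS = {
    "us-west": {
        "lighthouse": "lighthouse-usw.example.com",            # public-facing
        "submerged": ["c1.usw.internal", "c2.usw.internal"],   # private only
    },
    "us-east": {
        "lighthouse": "lighthouse-use.example.com",
        "submerged": ["c1.use.internal"],
    },
}

def entry_point(region: str) -> str:
    """Devices connect through their region's lighthouse Controller."""
    return REGIONS[region]["lighthouse"]

def public_ips_required() -> int:
    """IP conservation: one public address per region, regardless of
    how many submerged Controllers sit behind each lighthouse."""
    return len(REGIONS)
```

Adding more submerged Controllers to a region changes capacity but not the number of public entry points, which is the appeal of this model for larger meshes.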
In the next section, we will walk through deploying a set of Bowtie Controllers using the first deployment model, as this keeps the setup simple and straightforward. If interested in the second style of deployment, however, we suggest having a look at our control-plane Terraform module. This was designed specifically for AWS environments and includes support for network load balancers and instance failover through auto-scaling groups.
Deploying with Terraform
Now that we have laid the groundwork, let us jump into a deployment. For cloud-based environments, we have a Terraform “quick-start” module that will help deploy a high-availability Bowtie cluster within the given cloud environment in just a few minutes. The virtualization deployment process will vary depending on the hypervisor technology being used, but the principle remains the same.
To get started, head over to our starter-kits and select the kit corresponding to where your network resides (e.g. AWS, Azure, or GCP). Clone the repository, and then start filling out the terraform.tfvars file according to your organization’s account information, network specifications, and naming conventions. Once finished, follow the instructions outlined in the readme file to apply the configuration.
For reference, here’s an example of a complete .tfvars file for deploying within GCP:
```hcl
# Shared variables
project = "bowtie"
region  = "us-east1"

# Network module
create_vpc     = true # set to false to use a pre-existing VPC, identified by name
create_subnets = true # set to false to use pre-existing subnets, identified by name
vpc_name       = "example-vpc"
subnet_names   = ["example-vpc-subnet-1", "example-vpc-subnet-2"]
subnet_cidrs   = ["10.10.1.0/24", "10.10.2.0/24"]
subnet_regions = ["us-east1", "us-west1"]

# Compute module
machine_type    = "e2-medium"
network_tier    = "STANDARD"
image_ver       = "24-05-004" # see https://api.bowtie.works/platforms/GCP for latest version
dns_zone_name   = "gcp.bowtie.example.com"
controller_name = ["c0", "c1"]
external_ips    = ["35.211.195.10", "35.212.197.140"]

# Bowtie module
ipv4_range = "10.0.0.0/16"
```
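Since the subnet names, CIDRs, and regions are paired positionally, a quick sanity check before running terraform apply can catch copy-paste mistakes early. The standalone Python script below is a hypothetical helper (it is not part of the starter kits) that pulls the list variables out of a terraform.tfvars file and verifies that they line up and parse:

```python
import ipaddress
import re

def parse_lists(tfvars_text: str) -> dict:
    """Extract `name = ["a", "b"]` style list assignments from tfvars text."""
    lists = {}
    for name, body in re.findall(r'(\w+)\s*=\s*\[([^\]]*)\]', tfvars_text):
        lists[name] = re.findall(r'"([^"]*)"', body)
    return lists

def check_tfvars(tfvars_text: str) -> list[str]:
    """Return a list of human-readable problems; an empty list means all checks pass."""
    lists = parse_lists(tfvars_text)
    problems = []

    # subnet_names, subnet_cidrs, and subnet_regions are paired by position,
    # so they must all be the same length.
    lengths = {k: len(lists.get(k, [])) for k in
               ("subnet_names", "subnet_cidrs", "subnet_regions")}
    if len(set(lengths.values())) != 1:
        problems.append(f"subnet list lengths differ: {lengths}")

    for cidr in lists.get("subnet_cidrs", []):
        try:
            ipaddress.ip_network(cidr)
        except ValueError:
            problems.append(f"invalid CIDR: {cidr}")

    for ip in lists.get("external_ips", []):
        try:
            ipaddress.ip_address(ip)
        except ValueError:
            problems.append(f"invalid IP: {ip}")

    return problems
```

Running check_tfvars over the contents of your terraform.tfvars before applying turns a confusing plan-time error into an immediate, readable message.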
This configuration deploys the surrounding infrastructure that will support the Controllers as well as the Controllers themselves.
This includes:
- A new VPC
- A subnet per instance, each in a separate region to support a high-availability posture
- New firewall rules that open only the ingress and egress ports and protocols necessary for operation
- The instances that the Controller images will be installed on
You may notice in the readme that beyond the GCP infrastructure, we are also including Bowtie-specific configuration details, which will be used to bootstrap the Controller configuration via a local cloud-init file. Using cloud-init in our bundles allows administrators to fully bootstrap a Controller deployment without needing to spend time navigating through a setup wizard to get the installation across the finish line.
It is worth noting that in many cases, much of this infrastructure will already be present to support other applications within your cloud sites. In those instances, the repository can act as a guide for the image details, recommended hardware specifications, and cloud-init components to build out your own Terraform configuration file.
Next steps
With the Bowtie Controllers deployed, you now have the capability to enforce granular access policies for your private data and resources. To start implementing these policies, visit your Controller’s hostname in a web browser, log in, and then head to the Policies page. There, you’ll define your resources, organize them into resource groups, and create policies that allow or deny users and devices access to those groups. Be sure to have a look at our documentation for additional guidance and best practices when it comes to policy creation and management.
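Conceptually, each policy ties a user or device group to a resource group with an allow or deny effect. The sketch below is purely illustrative – it does not reflect Bowtie's actual policy engine or API – but it captures the deny-by-default evaluation that this kind of model implies, with explicit denies overriding allows:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    group: str           # user/device group the policy applies to (hypothetical names)
    resource_group: str  # resource group being accessed
    allow: bool          # allow or deny effect

def is_allowed(user_groups: set[str], resource_group: str,
               policies: list[Policy]) -> bool:
    """Deny by default; an explicit deny overrides any allow."""
    matching = [p for p in policies
                if p.group in user_groups and p.resource_group == resource_group]
    if any(not p.allow for p in matching):
        return False
    return any(p.allow for p in matching)

# Example rules: engineering may reach production databases, contractors may not.
policies = [
    Policy("engineering", "prod-databases", allow=True),
    Policy("contractors", "prod-databases", allow=False),
]
```

A user in no matching group gets no access at all, which is the "nothing more" half of least-privilege access.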
In addition to implementing policies, we can also improve the user experience when accessing resources by making Bowtie aware of the DNS servers that support your private networks. Under the DNS tab, you will find a subsection for adding a Managed Domain. You can think of this as the list of Bowtie-managed hostnames that your private DNS server can respond to queries for. Once configured, as long as the Controller can reach the DNS server, users will be able to access resources behind these private hostnames regardless of which network they find themselves on.
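A managed domain effectively declares which hostname suffixes should be resolved by your private DNS server rather than by public resolvers. The following sketch is a conceptual illustration of that suffix-matching decision (the domain, server address, and function are hypothetical, not Bowtie's implementation):

```python
from typing import Optional

# Hypothetical managed domain -> private DNS server reachable from the Controller
MANAGED_DOMAINS = {
    "corp.example.com": "10.10.1.53",
}

def resolver_for(hostname: str) -> Optional[str]:
    """Return the private DNS server for a managed domain, or None to
    fall through to the device's normal public resolver."""
    labels = hostname.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in MANAGED_DOMAINS:
            return MANAGED_DOMAINS[suffix]
    return None
```

Because the match is on suffixes, any host under the managed domain (wiki.corp.example.com, git.corp.example.com, and so on) resolves privately, wherever the user happens to be.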
With Access Policies now at the ready to prevent open-ended lateral movement and managed domains prepared to serve user requests, the last step is to bring on users! This starts with users installing the Bowtie client based on their device type and operating system. The latest version of the Bowtie client can be found at our platforms page. If you plan to leverage MDM or a specific fleet-management tool, let us know, and we will share a zero-configuration package that is better suited for large-scale deployments.
Once users install the Bowtie client and supply the endpoint of your Controllers in the setup wizard, they will be placed in a Pending state awaiting access. At this point, administrators can grant users access manually on the Devices page, or access can be auto-approved when the user successfully completes the configured authentication method. For more information about user authentication or standing up SSO login with Bowtie, please have a look at our SSO configuration documentation.
Growing with Bowtie
With devices now joining your Bowtie cluster, your users can start accessing the resources they need and are granted access to - and nothing more. You are in complete control of which resources your devices, users, and user groups can access; and because Bowtie scales to any network size, your policies grow with your cluster as it accommodates more sites and more networks. Policies no longer need to be created, managed, and duplicated across multiple networks: define them once in Bowtie and enable zero trust principles across all of your networks. All of this comes without the cost of re-architecting your network or sending your users’ traffic to external cloud services. Your data, your users, your networks.
If you have any questions or need further assistance setting up your Bowtie cluster, please email us or contact us directly.