Securing the existing Enterprise network between Virtual Machines and Containers with Aporeto

In the previous blog post, Simple by design; Automating per-namespace isolation with Aporeto and OpenShift, we discussed how to apply the principles of Zero Trust networking to containerized environments. This approach is easy to implement on new platforms that support user- and system-defined metadata to help make network security decisions. It is a slam dunk for greenfield applications born on a container platform, but how does this functionality extend into less metadata-rich, legacy environments?

In this blog post we will cover:

  • The challenge of container-to-VM network security policy enforcement
  • How to use Aporeto to protect legacy application workloads
  • How to automatically increase security in the communications between container and VM workloads
  • How the Aporeto namespace hierarchical model simplifies policy management at scale

The Problem: Securing Communications between Containers and Virtual Machines

A common deployment pattern in many Enterprises is to create new microservices within a container platform while leaving the data persistence layer deep within a well-protected network zone. The issue arises when these microservices must communicate with that data layer through a myriad of firewalls and access controls. Traditional network security devices aren't metadata-aware and often rely on simple IP address ranges to permit or deny communication flows. In the world of container platforms, where an IP address is just as ephemeral as the container instance and the source address is NAT'd through the host (which can change at any time), it is impossible to lock communication down to specific workloads, so the firewall must ultimately permit all traffic from the container platform. This is a big problem.
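To make the problem concrete, a traditional address-based rule ends up looking something like the hypothetical iptables sketch below (addresses are illustrative only): because the firewall cannot tell which pod originated a connection, the entire NAT'd node range of the cluster has to be allowed through to the database.

# Hypothetical, address-based rule: the firewall cannot distinguish individual
# pods, so the whole cluster's node range must be permitted to reach the database.
iptables -A FORWARD -p tcp -s 10.100.1.0/24 -d 10.100.1.192 --dport 3306 -j ACCEPT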

![overview](https://res.cloudinary.com/arctiq/image/upload/q_auto/posts/overview.png)

The Aporeto Enforcer: Not Just for Container Workloads

In the previous blog, the Aporeto Operator and Aporeto Enforcers were deployed as pods within an existing container platform. In that deployment, the Enforcer runs as a DaemonSet to ensure that every node/server has a running Aporeto Enforcer agent at all times. While this deployment method works for container platforms, a different approach is required for bare metal or virtual machine environments that are not containerized.
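For reference, a DaemonSet is exactly what gives this "one agent per node" behaviour. The manifest below is a generic, hypothetical sketch of the pattern only; the image, names, and namespace are placeholders, not Aporeto's actual manifests.

# Hypothetical sketch of a DaemonSet-style agent deployment: one agent pod is
# scheduled on every node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: enforcer-agent
  namespace: agent-system            # placeholder namespace
spec:
  selector:
    matchLabels:
      app: enforcer-agent
  template:
    metadata:
      labels:
        app: enforcer-agent
    spec:
      hostNetwork: true              # the agent observes/enforces traffic on the node itself
      containers:
        - name: enforcer-agent
          image: example.registry/enforcer-agent:latest   # placeholder image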

Aporeto’s documentation outlines a couple of different approaches for protecting individual host services:

  • Deployed as a system service
  • Deployed as a container on an individual host

It should be noted that the Aporeto Enforcer agent is currently only supported on Linux systems; however, it can be deployed in a proxy configuration to protect non-Linux workloads (such as Windows or Unix systems).

Deployed Directly on Linux Machines

The following diagram outlines a sample deployment when the target system is running a supported version of Linux.

![local agent](https://res.cloudinary.com/arctiq/image/upload/q_auto/posts/enforcer-local-agent.png)

In this diagram, the Enterprise firewall rules aren't changed; however, policy can be applied on the target host itself to further restrict which containers in the approved cluster may connect.
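As a rough illustration of the "deployed as a system service" option, a host-level agent would typically be managed by systemd. The unit below is a hypothetical sketch only; the unit name and binary path are assumptions, not Aporeto's published packaging.

# /etc/systemd/system/enforcer-agent.service (illustrative)
[Unit]
Description=Host enforcer agent (illustrative sketch)
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/enforcer-agent    # assumed binary path
Restart=always

[Install]
WantedBy=multi-user.target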

Deployed In a Proxy Model

The following diagram outlines a sample deployment when the target system cannot run the Aporeto Enforcer agent and proxy hosts are needed.

![proxy agent](https://res.cloudinary.com/arctiq/image/upload/q_auto/posts/enforcer-proxy-agent.png)

In this diagram, the Enterprise firewall rules are changed so that incoming connections to the database are only accepted from the Aporeto proxy hosts. The proxy hosts then determine which containers may communicate with the back-end database service.
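As an illustration of that firewall change (addresses reused from the environment described below; the rules themselves are purely hypothetical), the database zone would now only accept tcp/3306 from the two proxy hosts instead of the whole container platform:

# Hypothetical rules in the database's network zone: only the two proxy hosts
# may reach tcp/3306; everything else, including the container platform, is dropped.
iptables -A FORWARD -p tcp -s 10.100.1.190 -d 10.100.1.192 --dport 3306 -j ACCEPT
iptables -A FORWARD -p tcp -s 10.100.1.191 -d 10.100.1.192 --dport 3306 -j ACCEPT
iptables -A FORWARD -p tcp -d 10.100.1.192 --dport 3306 -j DROP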

Policy Configuration Steps

This blog will focus on the deployment model that includes a proxy host. In this environment:

  • Aporeto Enforcers have been deployed in an OpenShift 4 Cluster
    • Enforcers are registered into the Aporeto Namespace: /sheastewart/kubernetes-clusters/ocp4-1
  • Aporeto Enforcers have been deployed into two Linux proxy VMs
    • Aporeto Enforcers are registered into the Aporeto Namespace: /sheastewart/linux-hosts
    • Two Enforcer proxies are used to maintain availability
  • Each Linux proxy VM runs HAProxy to load-balance tcp/3306 to the back-end database

The following diagram provides more detail about this configuration:

![design](https://res.cloudinary.com/arctiq/image/upload/q_auto/posts/enforcer-proxy-db-detailed-design.png)

It should be noted that in this configuration, high availability could be achieved through a couple of simple solutions, such as round-robin DNS between the Linux proxy hosts or using keepalived to maintain a single floating IP address across them.
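For example, a minimal keepalived configuration along these lines could float a single VIP between the two proxy hosts; the interface name, router ID, and VIP below are assumptions for illustration only:

vrrp_instance VI_MYSQL_PROXY {
    state MASTER              # set to BACKUP on the second proxy host
    interface eth0            # assumption: adjust to the host's NIC name
    virtual_router_id 51
    priority 100              # use a lower priority (e.g. 90) on the BACKUP host
    advert_int 1
    virtual_ipaddress {
        10.100.1.189/24       # hypothetical floating VIP that clients would target
    }
}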

Configuring the Enforcer Proxies

HAProxy Configuration

The HAProxy configuration on these hosts is kept very simple for the purposes of this demonstration:

  • Example haproxy.cfg
global
  log /dev/log  local0
  log /dev/log  local1 notice
  stats socket /var/lib/haproxy/stats level admin
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  maxconn 256

defaults
  log global
  mode  tcp
  option  dontlognull
  timeout connect 5000
  timeout client 50000
  timeout server 50000

frontend mysqlfrontend
    bind 0.0.0.0:3306
    default_backend mysqlbackend

backend mysqlbackend
    balance roundrobin
    server db-1 10.100.1.192:3306 check
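Before layering Aporeto policy on top, the proxy itself can be sanity-checked; for example (assuming HAProxy is managed by systemd on these hosts):

# Validate the configuration file, restart the service, and confirm the
# frontend is listening on tcp/3306
$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ systemctl restart haproxy
$ ss -tlnp | grep 3306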

Aporeto Host Service Configuration

When configuring a Linux host to be protected by an Aporeto Enforcer, we must define Host Services along with Host Service Mapping Policies. In this environment, one of each will suffice. All Linux host Aporeto Enforcers are deployed into the /sheastewart/linux-hosts namespace.

  • Create the Host Service
$ cat mysql-host-services.yml
APIVersion: 0
data:
  hostservices:
    - associatedTags:
        - 'hs:name=mysql'
      name: MYSQL
      propagate: true
      services:
        - tcp/3306
identities:
  - hostservice
label: proxy-host-services

$ apoctl api import --file mysql-host-services.yml -n /sheastewart/linux-hosts
  • Create the Host Service Mapping Policy
$ cat host-service-profile-mapping.yml
APIVersion: 0
data:
  hostservicemappingpolicies:
    - name: mysql-hs-mapping
      object:
        - - 'hs:name=mysql'
      subject:
        - - $identity=enforcer
identities:
  - hostservicemappingpolicy
label: proxy-host-service-mapping

$ apoctl api import --file host-service-profile-mapping.yml -n /sheastewart/linux-hosts

The above is a very simple configuration stating that any Linux host Enforcer in this namespace should be mapped to the MySQL host service. More complex environments would add additional constraints to the subject definition.
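For example, a more restrictive variation of the mapping policy could AND an additional, user-defined tag into the subject so that only explicitly labelled Enforcers are mapped to the MySQL host service. The role=mysql-proxy tag below is purely hypothetical, included only to illustrate the shape of such a constraint:

# Hypothetical variation of host-service-profile-mapping.yml: the subject now
# requires both the enforcer identity AND an illustrative role=mysql-proxy tag.
APIVersion: 0
data:
  hostservicemappingpolicies:
    - name: mysql-hs-mapping
      object:
        - - 'hs:name=mysql'
      subject:
        - - $identity=enforcer
          - 'role=mysql-proxy'
identities:
  - hostservicemappingpolicy
label: proxy-host-service-mapping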

Configuring the Back-End Database

To support this new configuration, and to add an additional layer of security, the database can be configured to only allow traffic from the connecting proxy hosts:

  • Configure the database for access from the proxy hosts
# Grant access to Proxy Hosts
GRANT ALL PRIVILEGES ON *.* TO 'username'@'10.100.1.190' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'username'@'10.100.1.191' IDENTIFIED BY 'password';

# Create Grafana Database
CREATE DATABASE grafana;
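As a quick sanity check (assuming the mysql client is installed on the proxy hosts), a connection attempt from one of the proxies should now succeed, since the grants above only allow those two source addresses:

# Run from one of the proxy hosts (10.100.1.190 or 10.100.1.191)
$ mysql -h 10.100.1.192 -u username -p -e "SHOW DATABASES;"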

Validate that No Traffic Shall Pass

In this stage, validate that neither Grafana instance, from either project/namespace, can connect to the database:

  • Validate through the grafana logs
$ oc logs -f grafana-2-vhch8
t=2019-10-15T18:40:09+0000 lvl=info msg="Starting Grafana" logger=server version=6.4.2 commit=443a0ba branch=HEAD compiled=2019-10-08T09:10:35+0000
t=2019-10-15T18:40:09+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
t=2019-10-15T18:40:09+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.provisioning=/etc/grafana/provisioning"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.log.mode=console"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_DATABASE_TYPE=mysql"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_DATABASE_HOST=10.100.1.191:3306"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_DATABASE_USER=username"
t=2019-10-15T18:40:09+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_DATABASE_PASSWORD=*********"
t=2019-10-15T18:40:09+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
t=2019-10-15T18:40:09+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
t=2019-10-15T18:40:09+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
t=2019-10-15T18:40:09+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
t=2019-10-15T18:40:09+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
t=2019-10-15T18:40:09+0000 lvl=info msg="App mode production" logger=settings
t=2019-10-15T18:40:09+0000 lvl=info msg="Initializing SqlStore" logger=server
t=2019-10-15T18:40:09+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=mysql
t=2019-10-15T18:40:09+0000 lvl=info msg="Starting DB migration" logger=migrator
t=2019-10-15T18:42:20+0000 lvl=eror msg="Server shutdown" logger=server reason="Service init failed: Migration failed err: dial tcp 10.100.1.191:3306: connect: connection timed out"

Configuring the Aporeto Network Access Policy

There are a few ways that policies can be configured to permit traffic from the appropriate containerized workloads. In this section we will explore two options; both are valid approaches, and the choice depends on who has the authorization to create network access policies at a given level of the Aporeto namespace hierarchy.

Configuration with a Single Policy

If we have the correct authorization to create a policy within the parent namespace /sheastewart/, then this single policy can allow connectivity from our desired OpenShift namespace into the proxy hosts, thus permitting access to the back-end database.

  • Configure a single policy at the parent namespace level
$ cat permit-frontend-to-mysql.yml
APIVersion: 0
data:
  networkaccesspolicies:
    - logsEnabled: true
      name: permitted-frontend-to-legacy-mysql
      object:
        - - $namespace=/sheastewart/linux-hosts
          - 'hs:name=mysql'
      propagate: true
      subject:
        - - $namespace=/sheastewart/kubernetes-clusters/ocp4-1/permitted-frontend
identities:
  - networkaccesspolicy
label: permitted-frontend-to-legacy-mysql

$ apoctl api import --file permit-frontend-to-mysql.yml -n /sheastewart/

Configuration with a 2-Stage Policy

In the case where different teams are responsible for policy configuration in the OpenShift and Linux host resource namespaces, two complementary policies can be created:

  • Create the following policy within the OpenShift cluster using the Aporeto operator and the CRD
$ cat namespace-policy.yml
apiVersion: api.aporeto.io/vbeta1
kind: NetworkAccessPolicy
metadata:
  name: permitted-frontend-to-legacy-mysql
spec:
  logsEnabled: true
  object:
  - - $namespace=/sheastewart/linux-hosts
    - hs:name=mysql
  subject:
  - - $namespace=/sheastewart/kubernetes-clusters/ocp4-1/permitted-frontend

$ oc apply -f namespace-policy.yml -n permitted-frontend

![ocp](https://res.cloudinary.com/arctiq/image/upload/q_auto/posts/ocp-namespace-policy.png)

  • Create the corresponding policy within the /sheastewart/linux-hosts namespace using apoctl

$ cat proxy-host-mysql-access.yml
APIVersion: 0
data:
  networkaccesspolicies:
    - logsEnabled: true
      name: proxy-host-mysql-access
      object:
        - - $namespace=/sheastewart/linux-hosts
          - 'hs:name=mysql'
      subject:
        - - $namespace=/sheastewart/kubernetes-clusters/ocp4-1/permitted-frontend
identities:
  - networkaccesspolicy
label: proxy-host-mysql-access

$ apoctl api import --file proxy-host-mysql-access.yml -n /sheastewart/linux-hosts

Visualizing Flows from Unauthorized Workloads

In this stage, validate that the pods from the denied-frontend namespace are unable to communicate with the back-end database proxies:

  • Delete the grafana pod to establish a new flow
$ oc project denied-frontend
Now using project "denied-frontend" on server "https://api.ocp.cloud.lab:6443".

[root@bastion-01 ~]# oc get pods
NAME               READY   STATUS             RESTARTS   AGE
grafana-1-deploy   0/1     Completed          0          36h
grafana-1-mlbxh    0/1     CrashLoopBackOff   16         95m

$ oc delete pod grafana-1-mlbxh
pod "grafana-1-mlbxh" deleted
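A simple way to confirm the deny is holding (a plain OpenShift view rather than anything Aporeto-specific) is to watch the replacement pod; it should fail its database connection on startup and cycle back into CrashLoopBackOff:

# The replacement pod should never reach Running; its DB migration will time out
# and the pod will return to CrashLoopBackOff.
$ oc get pods -w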

The Aporeto Namespace Hierarchical Approach

It's the powerful hierarchical namespace approach that makes Aporeto policies massively scalable and easy to manage, while still providing the granular scope and access needed to fit any organizational structure. In the previous example we demonstrated that the same policy can be created in at least two different ways, depending on the level of access a user has to manage a specific resource.

In our namespace we have the following hierarchy to manage our enforcers:

└── sheastewart
    ├── kubernetes-clusters
    │   └── ocp4-1
    │       ├── enforcer-10-100-1-21.kubelet.kube-system.svc.cluster.local
    │       ├── enforcer-10-100-1-22.kubelet.kube-system.svc.cluster.local
    │       ├── enforcer-10-100-1-23.kubelet.kube-system.svc.cluster.local
    │       └── enforcer-10-100-1-24.kubelet.kube-system.svc.cluster.local
    ├── linux-hosts
    │   ├── enforcer-proxy-1
    │   └── enforcer-proxy-2

Aporeto allows us to apply authorizations at any namespace level, which enables delegation of control over child namespaces to specific teams, while broad policies and configuration applied at a parent namespace can be propagated down into the children. For example, if we create only the single policy from above, we can apply it at the parent namespace and propagate it to the children; the hierarchy would look like this:

└── sheastewart
    ├── kubernetes-clusters
    │   └── ocp4-1
    │       ├── enforcer-10-100-1-21.kubelet.kube-system.svc.cluster.local
    │       ├── enforcer-10-100-1-22.kubelet.kube-system.svc.cluster.local
    │       ├── enforcer-10-100-1-23.kubelet.kube-system.svc.cluster.local
    │       └── enforcer-10-100-1-24.kubelet.kube-system.svc.cluster.local
    ├── linux-hosts
    │   ├── enforcer-proxy-1
    │   └── enforcer-proxy-2
    └── networkaccesspolicy-permitted-frontend-to-legacy-mysql

In our other example, however, two different teams may manage each set of Enforcers independently, which means each team needs to create a network policy in its corresponding child namespace:

└── sheastewart
    ├── kubernetes-clusters
    │   └── ocp4-1
    │       ├── enforcer-10-100-1-21.kubelet.kube-system.svc.cluster.local
    │       ├── enforcer-10-100-1-22.kubelet.kube-system.svc.cluster.local
    │       ├── enforcer-10-100-1-23.kubelet.kube-system.svc.cluster.local
    │       ├── enforcer-10-100-1-24.kubelet.kube-system.svc.cluster.local
    │       └── networkaccesspolicy-permitted-frontend-to-legacy-mysql
    └── linux-hosts
        ├── enforcer-proxy-1
        ├── enforcer-proxy-2
        └── networkaccesspolicy-permitted-frontend-to-legacy-mysql

We hope this illustrates the simplicity and power of Aporeto's design, with its goal of easy configuration at massive scale to protect workloads across on-prem containers, virtual machines, and cloud environments. A single policy, configured correctly, only ever needs to be defined once and can be applied across an entire organization's computing assets.

Interested in learning more about Aporeto’s Zero Trust Security approach? Reach out to me in the comments or on social media!
