Network Security

Network Segmentation: VLANs, Microsegmentation, and Zero Trust

Flat networks are an attacker's best friend. Learn how to design security zones with VLANs, enforce boundaries with firewalls and ACLs, and implement microsegmentation to stop lateral movement.

August 15, 2025 · 7 min read · ShipSafer Team

When a breach happens on a flat network, the attacker's job is trivially easy. They compromise one machine — a developer's laptop, a contractor's VPN credentials, an unpatched internet-facing service — and then they pivot. With no internal boundaries to stop them, they scan the entire RFC1918 address space, find the database servers, find the backup systems, find the domain controller, and within hours or days they own the environment.

Network segmentation is the architectural practice of dividing a network into zones, enforcing strict boundaries between them, and requiring explicit authorization for traffic to cross those boundaries. Done well, segmentation means that compromising one zone does not automatically give an attacker access to another. It limits the blast radius of every incident.

Why Flat Networks Are Dangerous

The classic flat network gives every device on the corporate network visibility to every other device. A developer's MacBook can reach the production database. A guest WiFi device might be on the same subnet as production servers. A printer in the lobby can ARP-scan and discover internal systems.

This is not hypothetical. In the 2013 Target breach, attackers entered through an HVAC vendor with network access and pivoted to payment card systems because there was no meaningful segmentation between the vendor network and the retail point-of-sale systems.

The principle of least privilege, applied to networking, means hosts should only be able to reach the specific services they need, on the specific ports those services require, and nothing else. Network segmentation is the enforcement mechanism for that principle.

VLAN Design for Security Zones

A VLAN (Virtual Local Area Network) is a logical network boundary enforced at Layer 2 by switches. Devices on different VLANs cannot communicate directly; traffic must route through a Layer 3 device (a router or firewall), where you can apply access control lists.

A reasonable security zone model for most organizations looks like this:

VLAN  Name         Purpose                        Typical CIDR
10    Production   Production servers, databases  10.10.0.0/24
20    Staging      Pre-production environments    10.20.0.0/24
30    Development  Dev servers, build agents      10.30.0.0/24
40    Corporate    Employee workstations          10.40.0.0/22
50    Management   Network devices, IPMI, iDRAC   10.50.0.0/24
60    Guest        Guest WiFi, visitor devices    10.60.0.0/22
70    DMZ          Internet-facing services       10.70.0.0/24
80    IoT          Printers, cameras, sensors     10.80.0.0/24

The management VLAN deserves special attention. Out-of-band management interfaces (IPMI, iDRAC, Cisco CIMC) typically have administrative capabilities far beyond normal SSH access — including power cycling and BIOS modification. This VLAN should be accessible only from a dedicated jump host, never from general corporate workstations.
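As a sketch in the same Cisco IOS style, the jump-host restriction might look like the following. The jump host address 10.40.0.5 is a placeholder, not from the zone table above:

```
! Hypothetical ACL: only the jump host may reach the management VLAN
ip access-list extended MGMT_JUMPHOST_ONLY
 permit ip host 10.40.0.5 10.50.0.0 0.0.0.255
 deny   ip any 10.50.0.0 0.0.0.255 log
 permit ip any any
```

Note the ordering: the jump host is permitted first, all other traffic to 10.50.0.0/24 is denied and logged, and traffic to other destinations falls through to the final permit.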

Cisco IOS VLAN and ACL configuration example:

! Create VLANs
vlan 10
 name PRODUCTION
vlan 40
 name CORPORATE
vlan 60
 name GUEST

! Configure trunk port to firewall
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40,50,60,70,80

! Apply ACL to prevent corporate devices reaching production directly
! (inter-VLAN routing handled at firewall layer)
ip access-list extended BLOCK_CORPORATE_TO_PROD
 deny ip 10.40.0.0 0.0.3.255 10.10.0.0 0.0.0.255
 permit ip any any

In practice, inter-VLAN routing should happen at a stateful firewall, not a router, so you can log denied connections and apply stateful inspection. The ACL above is illustrative; the real enforcement belongs on the firewall.

Firewall Zones and the Default-Deny Model

A zone-based firewall assigns each interface (or VLAN) to a security zone and defines policies between zones. The default-deny model means that unless a policy explicitly permits traffic between two zones, all traffic is dropped.

A basic zone model for a mid-size organization:

INTERNET --> [Firewall] --> DMZ --> [App Load Balancer] --> PRODUCTION
                 ^
                 |
           CORPORATE --> MANAGEMENT
           STAGING --> PRODUCTION (specific ports only)
           GUEST --> (internet only, no internal access)

Example pfSense/OPNsense firewall rules for the corporate-to-production zone (this is the minimum-permission model):

# Corporate to Production — Explicit allows only
PASS  TCP  10.40.0.0/22  10.10.0.10/32  port 443  # Internal app over HTTPS
PASS  TCP  10.40.0.0/22  10.10.0.20/32  port 5432  # PostgreSQL from DBA hosts only (further restrict by source IP)
BLOCK ALL  10.40.0.0/22  10.10.0.0/24   ANY        # Everything else denied, logged

# Guest network — internet only (block all RFC1918 ranges, not just 10/8)
BLOCK ALL  10.60.0.0/22  10.0.0.0/8      ANY        # No internal access
BLOCK ALL  10.60.0.0/22  172.16.0.0/12   ANY
BLOCK ALL  10.60.0.0/22  192.168.0.0/16  ANY
PASS  ANY  10.60.0.0/22  ANY             ANY        # Internet access

Microsegmentation with Software-Defined Networking

VLANs segment at Layer 2; firewalls control inter-zone traffic at Layer 3/4. But both rely on network topology — once you're inside a zone, you have free movement within it. Microsegmentation takes segmentation down to the workload level: individual virtual machines, containers, or processes enforce policy on every connection, regardless of which subnet they share.

VMware NSX and Cisco ACI are the traditional enterprise microsegmentation platforms. They insert firewall logic at the hypervisor layer, so traffic between two VMs on the same host never leaves the hypervisor before being inspected. This makes east-west traffic (VM-to-VM within the same tier) subject to policy just like north-south traffic.

A practical alternative for organizations not running VMware enterprise stacks is host-based firewalls with centralized policy management — tools like Illumio, Akamai Guardicore, or even just well-managed iptables/nftables rules deployed via configuration management.
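For the nftables route, a host-level default-deny ruleset might look like this sketch. The addresses reuse the zone table above, but the jump host (10.40.0.5) and the specific allowed ports are illustrative assumptions:

```
# Hypothetical /etc/nftables.conf fragment for a host in the production zone:
# default-deny inbound, explicit allows only.
table inet host_policy {
  chain input {
    type filter hook input priority 0; policy drop;

    ct state established,related accept           # allow return traffic
    iif "lo" accept                               # loopback
    tcp dport 443 ip saddr 10.40.0.0/22 accept    # corporate to app over HTTPS
    tcp dport 22 ip saddr 10.40.0.5 accept        # SSH from the jump host only
    log prefix "host-drop: " drop                 # log everything else
  }
}
```

Deployed identically to every host via configuration management, a ruleset like this gives you workload-level policy without a hypervisor-layer product.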

Kubernetes Network Policies

Kubernetes clusters are particularly prone to flat-network problems. By default, every pod in a cluster can reach every other pod on any port. If a pod running a public-facing API is compromised, the attacker can directly reach pods running internal services, databases, and the Kubernetes API server itself.

Kubernetes NetworkPolicy resources let you define ingress and egress rules at the pod level. NetworkPolicies require a CNI plugin that enforces them (Calico, Cilium, or Weave Net do; kind's default kindnet, for example, does not).

# Default deny all ingress and egress in the production namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow the API server pods to receive traffic from the load balancer
# and to reach the database pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-server-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53  # Allow DNS resolution

Cilium extends Kubernetes network policy with Layer 7 awareness — you can write policies that allow HTTP GET requests but deny POST requests to the same endpoint, or allow specific gRPC methods. This level of granularity is difficult to achieve with traditional network security tools.
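As a sketch of what that looks like, here is a hypothetical CiliumNetworkPolicy that permits only HTTP GET requests against /api/ paths on the api-server pods from earlier; the frontend label on the allowed client is an assumption for illustration:

```yaml
# Hypothetical Cilium L7 policy: allow only GET requests to /api/ paths
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-server-l7
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/.*"
```

A POST to the same port from the same client would be rejected at Layer 7, even though the TCP connection itself is permitted.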

Lateral Movement Prevention

Segmentation is most valuable as a lateral movement control. When designing your segmentation model, think through common attack chains:

Scenario 1: Compromised developer laptop. The attacker has a shell on a corporate workstation. With flat networking, they can reach production databases directly. With segmentation, they can reach only the applications the corporate zone is explicitly allowed to access. They cannot SSH to production servers, access IPMI interfaces, or talk to internal APIs not intended for corporate users.

Scenario 2: Compromised web server in DMZ. The attacker has code execution on an internet-facing application server. With proper DMZ design, this server can only connect to the specific internal services it needs (a database on a specific port, an internal API on a specific port). It cannot reach corporate workstations, other application tiers, or the management network.

Scenario 3: Compromised CI/CD runner. Build systems are high-value targets because they often have broad network access and production secrets. The CI network should be its own zone, with access restricted to artifact repositories, specific deployment targets, and nothing else. If a build agent can reach your production databases directly, your segmentation model needs revision.
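Using the same rule format as the corporate-to-production example above, a CI zone policy might look like the following sketch. The CI subnet (10.90.0.0/24) and the destination addresses are placeholders not drawn from the zone table:

```
# CI/CD zone — build agents reach artifact repos and deploy targets only
PASS  TCP  10.90.0.0/24  10.10.0.30/32  port 443  # Artifact repository (hypothetical address)
PASS  TCP  10.90.0.0/24  10.10.0.40/32  port 22   # Deploy target via SSH (hypothetical address)
BLOCK ALL  10.90.0.0/24  10.0.0.0/8     ANY       # Everything else denied, logged
```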

The key discipline is to map your application dependencies honestly — what actually needs to talk to what — and build your segmentation model around those legitimate communication paths. Everything else gets blocked and logged.
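One lightweight way to keep that mapping honest is to diff observed flows against the documented allowlist. Here is a minimal Python sketch: the zone CIDRs follow the VLAN table above, while the flow tuples and the allowlist contents are illustrative assumptions (in practice the flows would come from firewall or VPC flow logs):

```python
import ipaddress

# Zone CIDRs from the VLAN table above
ZONES = {
    "production": ipaddress.ip_network("10.10.0.0/24"),
    "corporate": ipaddress.ip_network("10.40.0.0/22"),
    "guest": ipaddress.ip_network("10.60.0.0/22"),
}

# Documented legitimate paths: (src_zone, dst_zone, dst_port)
ALLOWLIST = {
    ("corporate", "production", 443),
    ("corporate", "production", 5432),
}

def zone_of(ip: str):
    """Map an IP address to its security zone, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for name, net in ZONES.items():
        if addr in net:
            return name
    return None

def unexpected_flows(flows):
    """Return observed (src, dst, port) flows not covered by the allowlist."""
    violations = []
    for src, dst, port in flows:
        path = (zone_of(src), zone_of(dst), port)
        if path not in ALLOWLIST:
            violations.append((src, dst, port))
    return violations

# Example: one legitimate flow, one guest device probing production
flows = [
    ("10.40.1.5", "10.10.0.10", 443),   # corporate -> prod HTTPS: allowed
    ("10.60.2.9", "10.10.0.20", 5432),  # guest -> prod Postgres: flagged
]
print(unexpected_flows(flows))  # [('10.60.2.9', '10.10.0.20', 5432)]
```

Run periodically, a check like this surfaces both policy gaps (legitimate traffic you forgot to document) and early signs of lateral movement.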

network-segmentation
vlans
microsegmentation
lateral-movement
kubernetes
zero-trust
