{
	"id": "28aa0acd-1fc3-4dad-b497-e7daebeadb93",
	"created_at": "2026-04-06T00:18:11.127291Z",
	"updated_at": "2026-04-10T03:20:54.047194Z",
	"deleted_at": null,
	"sha1_hash": "9fc81f9b89bbab8b26c53527d2139f3de0411398",
	"title": "Packet Mirroring",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 105311,
	"plain_text": "Packet Mirroring\r\nArchived: 2026-04-05 22:30:52 UTC\r\nPacket Mirroring Stay organized with collections Save and categorize content\r\nbased on your preferences.\r\nThis page is an overview of Packet Mirroring in the Virtual Private Cloud (VPC) network. If you want to analyze\r\nyour workloads' network traffic at scale and monitor network traffic using third-party virtual appliances, use\r\nNetwork Security Integration Packet Mirroring. For more information, see Out-of-band integration overview.\r\nPacket Mirroring clones the traffic of specified instances in your VPC network and forwards it for examination.\r\nPacket Mirroring captures all traffic and packet data, including payloads and headers. The capture can be\r\nconfigured for both egress and ingress traffic, only ingress traffic, or only egress traffic.\r\nThe mirroring happens on the virtual machine (VM) instances, not on the network. Consequently, Packet\r\nMirroring consumes additional bandwidth on the VMs.\r\nPacket Mirroring is useful when you need to monitor and analyze your security status. It exports all traffic, not\r\nonly the traffic between sampling periods. For example, you can use security software that analyzes mirrored\r\ntraffic to detect all threats or anomalies. Additionally, you can inspect the full traffic flow to detect application\r\nperformance issues. For more information, see the example use cases.\r\nHow it works\r\nPacket Mirroring copies traffic from mirrored sources and sends it to a collector destination. To configure Packet\r\nMirroring, you create a packet mirroring policy that specifies the source and destination.\r\nMirrored sources are Compute Engine VM instances that you can select by specifying subnets, network\r\ntags, or instance names. If you specify a subnet, all existing and future instances in that subnet are\r\nmirrored. 
You can specify one or more source types—if an instance matches at least one of them, it's mirrored.\r\nPacket Mirroring collects traffic from an instance's network interface in the network where the packet mirroring policy applies. In cases where an instance has multiple network interfaces, the other interfaces aren't mirrored unless another policy has been configured to do so.\r\nA collector destination is an instance group that is behind an internal load balancer. Instances in the instance group are referred to as collector instances.\r\nWhen you specify the collector destination, you enter the name of a forwarding rule that is associated with the internal passthrough Network Load Balancer. Google Cloud then forwards the mirrored traffic to the collector instances. An internal load balancer for Packet Mirroring is similar to other internal load balancers except that the forwarding rule must be configured for Packet Mirroring. Any non-mirrored traffic that is sent to the load balancer is dropped.\r\nFiltering\r\nBy default, Packet Mirroring collects all IPv4 traffic of mirrored instances. Instead of collecting all IPv4 traffic, you can use filters to expand the traffic that's collected to include all or some IPv6 traffic. You can also use filters to narrow the traffic that's mirrored, which can help you limit the bandwidth that's used by mirrored instances. You can configure filters to collect traffic based on protocol, CIDR ranges (IPv4, IPv6, or both), direction of traffic (ingress-only, egress-only, or both), or a combination.\r\nPolicy order\r\nMultiple packet mirroring policies can apply to an instance. The priority of a packet mirroring policy is always 1000 and cannot be changed. Identical policies are not supported. Google Cloud can send traffic to any of the load balancers that have been configured with identical packet mirroring policies. 
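The filter dimensions described under Filtering (protocol, IPv4/IPv6 CIDR ranges, and traffic direction) can be modeled with a small validation sketch. This is a hypothetical helper that applies the 30-filter limit noted later under Key properties; it is not a Google Cloud API:

```python
import ipaddress

def filter_count(cidr_ranges, protocols):
    # Effective filter count: address ranges multiplied by protocols.
    return max(len(cidr_ranges), 1) * max(len(protocols), 1)

def validate_filter(cidr_ranges, protocols, direction):
    # Direction can be ingress-only, egress-only, or both.
    if direction not in ('INGRESS', 'EGRESS', 'BOTH'):
        raise ValueError('invalid direction')
    for r in cidr_ranges:
        ipaddress.ip_network(r)  # raises ValueError on a malformed range
    if filter_count(cidr_ranges, protocols) > 30:
        raise ValueError('ranges x protocols exceeds the 30-filter maximum')

validate_filter(['0.0.0.0/0', '::/0'], ['tcp', 'udp'], 'INGRESS')  # ok: 4 filters
```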
To predictably and consistently send mirrored traffic to a single load balancer, create policies that have filters with non-overlapping address ranges. If ranges overlap, set unique filter protocols.\r\nDepending on each policy's filter, Google Cloud chooses a policy for each flow. If you have distinct policies, Google Cloud uses the corresponding policy that matches the mirrored traffic. For example, you might have one policy that has the filter 198.51.100.3/24:TCP and another policy that has the filter 2001:db8::/64:TCP:UDP. Because the policies are distinct, there's no ambiguity about which policy Google Cloud uses.\r\nHowever, if you have overlapping policies, Google Cloud evaluates their filters to choose which policy to use. For example, you might have two policies, one that has a filter for 10.0.0.0/24:TCP and another for 10.0.0.0/16:TCP. These policies overlap because their CIDR ranges overlap.\r\nWhen choosing a policy, Google Cloud prioritizes policies by comparing their filters' CIDR range sizes:\r\nIf policies have different but overlapping CIDR ranges and the same exact protocols, Google Cloud chooses the policy that uses the most specific CIDR range. Suppose the destination for a TCP packet leaving a mirrored instance is 10.240.1.4, and there are two policies with the following filters: 10.240.1.0/24:ALL and 10.240.0.0/16:TCP. Because the most specific match for 10.240.1.4 is 10.240.1.0/24:ALL, Google Cloud uses the policy that has the filter 10.240.1.0/24:ALL.\r\nIf policies specify the same exact CIDR range with overlapping protocols, Google Cloud chooses the policy with the most specific protocol. For example, the following filters have the same range but overlapping protocols: 10.240.1.0/24:TCP and 10.240.1.0/24:ALL. For matching TCP traffic, Google Cloud uses the 10.240.1.0/24:TCP policy. 
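These selection rules (most specific CIDR range first, then most specific protocol, where a named protocol is more specific than ALL) can be expressed as a short Python sketch. The tuple-based policy model is illustrative only:

```python
import ipaddress

def choose_policy(policies, dst_ip, protocol):
    # policies: list of (cidr, protocol) filters, where protocol is a
    # name like 'tcp' or the catch-all 'ALL'. Returns the best match:
    # longest matching prefix first, then exact protocol over ALL.
    ip = ipaddress.ip_address(dst_ip)
    candidates = []
    for cidr, proto in policies:
        net = ipaddress.ip_network(cidr)
        if ip in net and proto in (protocol, 'ALL'):
            candidates.append((net.prefixlen, proto != 'ALL', (cidr, proto)))
    return max(candidates)[2] if candidates else None

# The worked example: a TCP packet to 10.240.1.4 matches the /24:ALL
# policy because its CIDR range is more specific than /16:TCP.
policies = [('10.240.1.0/24', 'ALL'), ('10.240.0.0/16', 'tcp')]
print(choose_policy(policies, '10.240.1.4', 'tcp'))  # ('10.240.1.0/24', 'ALL')
```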
The 10.240.1.0/24:ALL policy applies to matching traffic for all other protocols.\r\nIf policies have the same exact CIDR range but distinct protocols, these policies don't overlap. Google Cloud uses the policy that corresponds to the mirrored traffic's protocol. For example, you might have a policy for 2001:db8::/64:TCP and another for 2001:db8::/64:UDP. Depending on the mirrored traffic's protocol, Google Cloud uses either the TCP or UDP policy.\r\nIf overlapping policies have the same exact filter, they are identical. In this case, Google Cloud might choose the same policy or a different policy each time that matching traffic is re-evaluated against these policies. We recommend that you avoid creating identical packet mirroring policies.\r\nVPC Flow Logs\r\nVPC Flow Logs doesn't log mirrored packets. If a collector instance is on a subnet that has VPC Flow Logs enabled, traffic that is sent directly to the collector instance is logged, including traffic from mirrored instances. That is, if the original destination IPv4 or IPv6 address matches the IPv4 or IPv6 address of the collector instance, the flow is logged.\r\nFor more information about VPC Flow Logs, see Using VPC Flow Logs.\r\nKey properties\r\nThe following list describes constraints and behaviors of Packet Mirroring that are important to understand before you use it:\r\nEach packet mirroring policy defines mirrored sources and a collector destination. You must adhere to the following rules:\r\nAll mirrored sources must be in the same project, VPC network, and Google Cloud region.\r\nA collector destination must be in the same region as the mirrored sources. 
A collector destination can be located in either the same VPC network as the mirrored sources or a VPC network connected to the mirrored sources' network using VPC Network Peering.\r\nEach mirroring policy can reference only a single collector destination. However, a single collector destination can be referenced by multiple mirroring policies.\r\nAll layer 4 protocols are supported by Packet Mirroring.\r\nYou cannot mirror and collect traffic on the same network interface of a VM instance because doing this would cause a mirroring loop.\r\nTo mirror traffic passing between Pods on the same Google Kubernetes Engine (GKE) node, you must enable Intranode visibility for the cluster.\r\nTo mirror IPv6 traffic, use filters to specify the IPv6 CIDR ranges of the IPv6 traffic that you want to mirror. You can mirror all IPv6 traffic by using a CIDR range filter of ::/0. You can mirror all IPv4 and IPv6 traffic by using the following comma-separated CIDR range filter: 0.0.0.0/0,::/0.\r\nMirroring traffic consumes bandwidth on the mirrored instance. For example, if a mirrored instance experiences 1 Gbps of ingress traffic and 1 Gbps of egress traffic, the total traffic on the instance is 1 Gbps of ingress and 3 Gbps of egress (1 Gbps of normal egress traffic and 2 Gbps of mirrored egress traffic). To limit what traffic is collected, you can use filters.\r\nThe cost of Packet Mirroring varies depending on the amount of egress traffic traveling from a mirrored instance to an instance group and whether the traffic travels between zones.\r\nPacket Mirroring applies to both the ingress and egress directions. If two VM instances that are being mirrored send traffic to each other, Google Cloud collects two versions of the same packet. 
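The bandwidth arithmetic in the preceding example can be made explicit; a minimal sketch with hypothetical helper names:

```python
def vm_egress_gbps(ingress, egress, mirror_ingress=True, mirror_egress=True):
    # Mirrored copies of packets leave the VM as additional egress traffic,
    # so mirroring adds the mirrored volume on top of normal egress.
    mirrored = (ingress if mirror_ingress else 0) + (egress if mirror_egress else 0)
    return egress + mirrored

# The example above: 1 Gbps in, 1 Gbps out, both directions mirrored.
print(vm_egress_gbps(1, 1))                        # 3 (1 normal + 2 mirrored)
print(vm_egress_gbps(1, 1, mirror_ingress=False))  # 2 (egress-only mirroring)
```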
You can alter this behavior by specifying that only ingress or only egress packets are mirrored.\r\nThere is a maximum number of packet mirroring policies that you can create for a project. For more information, see the per-project quotas on the quotas page.\r\nFor each packet mirroring policy, the maximum number of mirrored sources that you can specify depends on the source type:\r\n5 subnets\r\n5 tags\r\n50 instances\r\nThe maximum number of packet mirroring filters is 30, which is the number of IPv4 and IPv6 address ranges multiplied by the number of protocols. For example, you can specify 30 ranges and 1 protocol, which would be 30 filters. However, you cannot specify 30 ranges and 2 protocols, because that would be 60 filters, which exceeds the maximum.\r\nMirrored traffic is encrypted only if the VM encrypts that traffic at the application layer. While VM-to-VM connections within VPC networks and peered VPC networks are encrypted, the encryption and decryption happen in the hypervisors. From the perspective of the VM, this traffic is not encrypted.\r\nUse cases\r\nThe following sections describe real-world scenarios that demonstrate why you might use Packet Mirroring.\r\nEnterprise security\r\nSecurity and network engineering teams must ensure that they are catching all anomalies and threats that might indicate security breaches and intrusions. They mirror all traffic so that they can complete a comprehensive inspection of suspicious flows. Because attacks can span multiple packets, security teams must be able to get all
Because attacks can span multiple packets, security teams must be able to get all\r\npackets for each flow.\r\nFor example, the following security tools require you to capture multiple packets:\r\nIntrusion detection system (IDS) tools require multiple packets of a single flow to match a signature so that\r\nthe tools can detect persistent threats.\r\nhttps://cloud.google.com/vpc/docs/packet-mirroring\r\nPage 4 of 8\n\nDeep Packet Inspection engines inspect packet payloads to detect protocol anomalies.\r\nNetwork forensics for PCI compliance and other regulatory use cases require that most packets be\r\nexamined. Packet Mirroring provides a solution for capturing different attack vectors, such as infrequent\r\ncommunication or attempted but unsuccessful communication.\r\nApplication performance monitoring\r\nNetwork engineers can use mirrored traffic to troubleshoot performance issues reported by application and\r\ndatabase teams. To check for networking issues, network engineers can view what's going over the wire rather\r\nthan relying on application logs.\r\nFor example, network engineers can use data from Packet Mirroring to complete the following tasks:\r\nAnalyze protocols and behaviors so that they can find and fix issues, such as packet loss or TCP resets.\r\nAnalyze (in real time) traffic patterns from remote desktop, VoIP, and other interactive applications.\r\nNetwork engineers can search for issues that affect the application's user experience, such as multiple\r\npacket resends or more than expected reconnections.\r\nExample collector destination topologies\r\nYou can use Packet Mirroring in various setups. 
The following examples show the location of collector destinations and their policies for different packet mirroring configurations, such as VPC Network Peering and Shared VPC.\r\nCollector destination in the same network\r\nThe following example shows a packet mirroring configuration where the mirrored source and collector destination are in the same VPC network.\r\nA packet mirroring policy with a mirrored source and a collector destination in the same VPC network.\r\nIn the preceding diagram, the packet mirroring policy is configured to mirror mirrored-subnet and send mirrored traffic to the internal passthrough Network Load Balancer. Google Cloud mirrors the traffic on existing and future instances in the subnet. All traffic to and from the internet, on-premises hosts, or Google services is mirrored.\r\nCollector destination in a peer network\r\nYou can build a centralized collector model, where instances in different VPC networks send mirrored traffic to a collector destination in a central VPC network. That way, you can use a single collector destination.\r\nIn the following example, the collector-load-balancer internal passthrough Network Load Balancer is in the us-central1 region in the network-a VPC network in project-a. 
This collector destination can be used by two packet mirroring policies:\r\npolicy-1 collects packets from mirrored sources in the us-central1 region in the network-a VPC network in project-a and sends them to the collector-load-balancer destination.\r\npolicy-2 collects packets from mirrored sources in the us-central1 region in the network-b VPC network in project-b and sends them to the same collector-load-balancer destination.\r\nTwo mirroring policies are required because the mirrored sources exist in different VPC networks.\r\nA packet mirroring policy in a central network where the collector destination lives. The network is peered with other networks where the mirrored sources live.\r\nIn the preceding diagram, the collector destination collects mirrored traffic from subnets in two different networks. All resources (the source and destination) must be in the same region. The setup in network-a is similar to the example where the mirrored source and collector destination are in the same VPC network. policy-1 is configured to collect traffic from subnet-a and send it to collector-load-balancer.\r\npolicy-2 is configured in project-a but specifies subnet-b as a mirrored source. Because network-a and network-b are peered, the collector destination can collect traffic from subnet-b.\r\nThe networks are in different projects and might have different owners. 
Either owner can create the packet mirroring policy if they have the right permissions:\r\nIf the owners of project-a create the packet mirroring policy, they must have the compute.packetMirroringAdmin role on the network, subnet, or instances to mirror in project-b.\r\nIf the owners of project-b create the packet mirroring policy, they must have the compute.packetMirroringUser role in project-a.\r\nFor more information about enabling private connectivity across two VPC networks, see VPC Network Peering.\r\nShared VPC\r\nIn the following Shared VPC scenarios, the mirrored instances and the collector destination are all in the same Shared VPC network. Even though the resources are all in the same network, they can be in different projects, such as the host project or several different service projects. The following examples show where packet mirroring policies must be created and who can create them.\r\nIf both the mirrored sources and collector destination are in the same project, either in a host project or service project, the setup is similar to having everything in the same VPC network. The project owner can create all the resources and set the required permissions in that project.\r\nFor more information, see Shared VPC overview.\r\nCollector destination in service project\r\nIn the following example, the collector destination is in a service project that uses a subnet in the host project. In this case, the policy is also in the service project, although it could instead be in the host project.\r\nCollector destination in a service project.\r\nIn the preceding diagram, the service project contains the collector instances that use the collector subnet in the Shared VPC network. 
The packet mirroring policy was created in the service project and is configured to mirror instances that have a network interface in subnet-mirrored.\r\nService or host project users can create the packet mirroring policy. To do so, users must have the compute.packetMirroringUser role in the service project where the collector destination is located. Users must also have the compute.packetMirroringAdmin role on the mirrored sources.\r\nCollector destination in host project\r\nIn the following example, the collector destination is in the host project and the mirrored instances are in the service projects.\r\nThis example might apply to scenarios where developers deploy applications in service projects and use the Shared VPC network. They don't have to manage the networking infrastructure or Packet Mirroring. Instead, a centralized networking or security team, which has control over the host project and Shared VPC network, is responsible for provisioning packet mirroring policies.\r\nCollector destination in the host project.\r\nIn the preceding diagram, the packet mirroring policy is created in the host project, where the collector destination is located. The policy is configured to mirror instances in the mirrored subnet. VM instances in service projects can use the mirrored subnet, and their traffic is mirrored.\r\nService or host project users can create the packet mirroring policy. To do so, users in the service project must have the compute.packetMirroringUser role in the host project. Users in the host project require the compute.packetMirroringAdmin role for mirrored sources in the service projects.\r\nMulti-interface VM instances\r\nYou can include VM instances that have multiple network interfaces in a packet mirroring policy.\r\nA policy can mirror resources only from a single network. 
If your multi-NIC instance has network interfaces in different networks, you cannot create one policy to mirror traffic for all of the network interfaces. If you need to mirror additional network interfaces that are attached to different networks, you must create one packet mirroring policy for each interface.\r\nPricing\r\nYou are charged for the amount of data processed by Packet Mirroring. For details, see Packet Mirroring pricing.\r\nYou are also charged for all the prerequisite components and egress traffic that are related to Packet Mirroring. For example, the instances that collect traffic are charged at the regular rate. Also, if packet mirroring traffic travels between zones, you are charged for the egress traffic. For pricing details, see the related pricing page.\r\nWhat's next\r\nUse Packet Mirroring.\r\nMonitor Packet Mirroring.\r\nInternal passthrough Network Load Balancer overview.\r\nPacket Mirroring partner providers.\r\nOut-of-band integration overview.\r\nSource: https://cloud.google.com/vpc/docs/packet-mirroring",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://cloud.google.com/vpc/docs/packet-mirroring"
	],
	"report_names": [
		"packet-mirroring"
	],
	"threat_actors": [],
	"ts_created_at": 1775434691,
	"ts_updated_at": 1775791254,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/9fc81f9b89bbab8b26c53527d2139f3de0411398.pdf",
		"text": "https://archive.orkl.eu/9fc81f9b89bbab8b26c53527d2139f3de0411398.txt",
		"img": "https://archive.orkl.eu/9fc81f9b89bbab8b26c53527d2139f3de0411398.jpg"
	}
}