{
	"id": "d44a04b2-b60e-45f2-baa0-a10746a09265",
	"created_at": "2026-04-06T00:13:25.09263Z",
	"updated_at": "2026-04-10T03:20:56.094073Z",
	"deleted_at": null,
	"sha1_hash": "18efc99a7ac4557d1d6456c318794dedfca467cf",
	"title": "Reflections on reflection (attacks)",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 650270,
	"plain_text": "Reflections on reflection (attacks)\r\nBy Marek Majkowski\r\nPublished: 2017-05-24 · Archived: 2026-04-05 14:13:34 UTC\r\n7 min read\r\nRecently Akamai published an article about CLDAP reflection attacks. This got us thinking. We saw attacks from Connectionless LDAP servers back in November 2016 but ignored them entirely, because our systems were automatically dropping the attack traffic without any impact.\r\nhttps://blog.cloudflare.com/reflections-on-reflections/\r\nCC BY 2.0 image by RageZ\r\nWe decided to take a second look through our logs and share some statistics about the reflection attacks we see regularly. In this blog post, I'll describe the popular reflection attacks, explain how to defend against them, and explain why Cloudflare and our customers are immune to most of them.\r\nA recipe for reflection\r\nLet's start with a brief reminder of how reflection attacks (often called \"amplification attacks\") work.\r\nTo bake a reflection attack, the villain needs four ingredients:\r\nA server capable of performing IP address spoofing.\r\nA protocol vulnerable to reflection/amplification. Any badly designed UDP-based request-response protocol will do.\r\nA list of \"reflectors\": servers that support the vulnerable protocol.\r\nA victim IP address.\r\nThe general idea:\r\nThe villain sends fake UDP requests.\r\nThe source IP address in these packets is spoofed: the attacker sticks the victim's IP address in the source IP address field, not their own IP address as they normally would.\r\nEach packet is destined to a random reflector server.\r\nThe spoofed packets traverse the Internet and eventually are delivered to the reflector server.\r\nThe reflector server receives the fake packet. It looks at it carefully and thinks: \"Oh, what a nice request from the victim! I must be polite and respond!\". 
It sends the response in good faith.\r\nThe response, though, is directed to the victim.\r\nThe victim will end up receiving a large volume of response packets it never requested. With a large enough attack, the victim may end up with a congested network and an interrupt storm.\r\nThe responses delivered to the victim might be larger than the spoofed requests (hence amplification). A carefully mounted attack may amplify the villain's traffic. In the past we've documented a 300Gbps attack generated with an estimated 27Gbps of spoofing capacity.\r\nPopular reflections\r\nDuring the last six months our DDoS mitigation system \"Gatebot\" detected 6,329 simple reflection attacks (that's one every 40 minutes). Here is the list of attack vectors by popularity. An attack is defined as a large flood of packets identified by a tuple: (Protocol, Source Port, Target IP). Basically - a flood of packets with the same source port to a single target. This notation is pretty accurate - during normal Cloudflare operation, incoming packets rarely share a source port number!\r\n Count  Proto  Src port\r\n  3774  udp      123   NTP\r\n  1692  udp     1900   SSDP\r\n   438  udp        0   IP fragmentation\r\n   253  udp       53   DNS\r\n    42  udp    27015   SRCDS\r\n    20  udp       19   Chargen\r\n    19  udp    20800   Call Of Duty\r\n    16  udp      161   SNMP\r\n    12  udp      389   CLDAP\r\n    11  udp      111   Sunrpc\r\n    10  udp      137   Netbios\r\n     6  tcp       80   HTTP\r\n     5  udp    27005   SRCDS\r\n     2  udp      520   RIP\r\nSource port 123/udp NTP\r\nBy far the most popular reflection attack vector remains NTP. We have blogged about NTP in the past:\r\nUnderstanding and mitigating NTP-based DDoS attacks\r\nTechnical Details Behind a 400Gbps NTP Amplification DDoS Attack\r\nGood News: Vulnerable NTP Servers Closing Down\r\nOver the last six months we've seen 3,774 unique NTP amplification attacks. Most of them were short. 
The average attack duration was 11 minutes, with the longest lasting 22 hours (almost 1,300 minutes). Here's a histogram showing the distribution of NTP attack durations:\r\nMinutes min:1.00 avg:10.51 max:1297.00 dev:35.02 count:3774\r\nMinutes:\r\n value |-------------------------------------------------- count\r\n 0 | 2\r\n 1 | * 53\r\n 2 | ************************* 942\r\n 4 |************************************************** 1848\r\n 8 | *************** 580\r\n 16 | ***** 221\r\n 32 | * 72\r\n 64 | 35\r\n 128 | 11\r\n 256 | 7\r\n 512 | 2\r\n 1024 | 1\r\nMost of the attacks used a small number of reflectors - we've recorded an average of 1.5k unique IPs per attack. The largest attack used an estimated 12.3k reflector servers.\r\nUnique IPs min:5.00 avg:1552.84 max:12338.00 dev:1416.03 count:3774\r\nUnique IPs:\r\n value |-------------------------------------------------- count\r\n 16 | 0\r\n 32 | 1\r\n 64 | 8\r\n 128 | ***** 111\r\n 256 | ************************* 553\r\n 512 | ************************************************* 1084\r\n 1024 |************************************************** 1093\r\n 2048 | ******************************* 685\r\n 4096 | ********** 220\r\n 8192 | 13\r\nThe peak attack bandwidth was 5.76Gbps on average, with a maximum of 64Gbps:\r\nPeak bandwidth in Gbps min:0.06 avg:5.76 max:64.41 dev:6.39 count:3774\r\nPeak bandwidth in Gbps:\r\n value |-------------------------------------------------- count\r\n 0 | ****** 187\r\n 1 | ********************* 603\r\n 2 |************************************************** 1388\r\n 4 | ***************************** 818\r\n 8 | ****************** 526\r\n 16 | ******* 212\r\n 32 | * 39\r\n 64 | 1\r\nThis stacked chart shows the geographical distribution of the largest NTP attack we've seen in the last six months. You can see the number of packets per second directed to each datacenter. 
One of our datacenters (San Jose, to be precise) received about a third of the total attack volume, while the remaining packets were distributed roughly evenly across the other datacenters.\r\nThe attack lasted 20 minutes, used 527 reflector NTP servers and generated about 20Mpps / 64Gbps at peak. Dividing these numbers, we can estimate that a single packet in that attack had an average size of 400 bytes. In fact, in NTP attacks the great majority of packets have a length of precisely 468 bytes (less often 516). Here's a snippet from tcpdump:\r\n$ tcpdump -n -r 3164b6fac836774c.pcap -v -c 5 -K\r\n11:38:06.075262 IP -(tos 0x20, ttl 60, id 0, offset 0, proto UDP (17), length 468)\r\n 216.152.174.70.123 \u003e x.x.x.x.47787: [|ntp]\r\n11:38:06.077141 IP -(tos 0x0, ttl 56, id 0, offset 0, proto UDP (17), length 468)\r\n 190.151.163.1.123 \u003e x.x.x.x.44540: [|ntp]\r\n11:38:06.082631 IP -(tos 0xc0, ttl 60, id 0, offset 0, proto UDP (17), length 468)\r\n 69.57.241.60.123 \u003e x.x.x.x.47787: [|ntp]\r\n11:38:06.095971 IP -(tos 0x0, ttl 60, id 0, offset 0, proto UDP (17), length 468)\r\n 126.219.94.77.123 \u003e x.x.x.x.21784: [|ntp]\r\n11:38:06.113935 IP -(tos 0x0, ttl 59, id 0, offset 0, proto UDP (17), length 516)\r\n 69.57.241.60.123 \u003e x.x.x.x.9285: [|ntp]\r\nSource port 1900/udp SSDP\r\nThe second most popular reflection attack was SSDP, with a count of 1,692 unique events. These attacks used much larger fleets of reflector servers. On average we saw around 100k reflectors used in each attack, with the largest attack using 1.23M reflector IPs. 
Here's the histogram of the number of unique IPs used in SSDP attacks:\r\nUnique IPs min:15.00 avg:98272.02 max:1234617.00 dev:162699.90 count:1691\r\nUnique IPs:\r\n value |-------------------------------------------------- count\r\n 256 | 0\r\n 512 | 4\r\n 1024 | **************** 98\r\n 2048 | ************************ 152\r\n 4096 | ***************************** 178\r\n 8192 | ************************* 158\r\n 16384 | **************************** 176\r\n 32768 | *************************************** 243\r\n 65536 |************************************************** 306\r\n 131072 | ************************************ 225\r\n 262144 | *************** 95\r\n 524288 | ******* 47\r\n 1048576 | * 7\r\nThe attacks were also longer, with a 24-minute average duration:\r\n$ cat 1900-minutes | ~/bin/mmhistogram -t \"Minutes\"\r\nMinutes min:2.00 avg:23.69 max:1139.00 dev:57.65 count:1692\r\nMinutes:\r\n value |-------------------------------------------------- count\r\n 0 | 0\r\n 1 | 10\r\n 2 | ***************** 188\r\n 4 | ******************************** 354\r\n 8 |************************************************** 544\r\n 16 | ******************************* 342\r\n 32 | *************** 168\r\n 64 | **** 48\r\n 128 | * 19\r\n 256 | * 16\r\n 512 | 1\r\n 1024 | 2\r\nInterestingly, the bandwidth doesn't follow a normal distribution. 
The average SSDP attack was 12Gbps and the\r\nlargest just shy of 80Gbps:\r\n$ cat 1900-Gbps| ~/bin/mmhistogram -t \"Bandwidth in Gbps\"\r\nBandwidth in Gbps min:0.41 avg:11.95 max:78.03 dev:13.32 count:1692\r\nBandwidth in Gbps:\r\n value |-------------------------------------------------- count\r\n 0 | ******************************* 331\r\n 1 | ********************* 232\r\n 2 | ********************** 235\r\n 4 | *************** 165\r\n 8 | ****** 65\r\n 16 |************************************************** 533\r\n 32 | *********** 118\r\n 64 | * 13\r\nLet's take a closer look at the largest (80Gbps) attack we've recorded. Here's a stacked chart showing packets per\r\nsecond going to each datacenter. This attack was using 940k reflector IPs, generated 30Mpps. The datacenters\r\nreceiving the largest proportion of the traffic were San Jose, Los Angeles and Moscow.\r\nThe average packet size was 300 bytes. Here's how the attack looked on the wire:\r\n$ tcpdump -n -r 4ca985a2211f8c88.pcap -K -c 7\r\n10:24:34.030339 IP - 219.121.108.27.1900 \u003e x.x.x.x.25255: UDP, length 301\r\nhttps://blog.cloudflare.com/reflections-on-reflections/\r\nPage 7 of 13\n\n10:24:34.406943 IP - 208.102.119.37.1900 \u003e x.x.x.x.37081: UDP, length 331\r\n10:24:34.454707 IP - 82.190.96.126.1900 \u003e x.x.x.x.25255: UDP, length 299\r\n10:24:34.460455 IP - 77.49.122.27.1900 \u003e x.x.x.x.25255: UDP, length 289\r\n10:24:34.491559 IP - 212.171.247.139.1900 \u003e x.x.x.x.25255: UDP, length 323\r\n10:24:34.494385 IP - 111.1.86.109.1900 \u003e x.x.x.x.37081: UDP, length 320\r\n10:24:34.495474 IP - 112.2.47.110.1900 \u003e x.x.x.x.37081: UDP, length 288\r\nSource port 0/udp IP fragmentation\r\nSometimes we see reflection attacks showing UDP source and destination port numbers set to zero. This is usually\r\na side effect of attacks where the reflecting servers responded with large fragmented packets. 
Only the first IP fragment contains a UDP header, preventing subsequent fragments from being reported properly. From a router's point of view this looks like a UDP packet without a UDP header, so a confused router reports a packet from source port 0, going to port 0!\r\nThis is a tcpdump-like view:\r\n$ tcpdump -n -r 4651d0ec9e6fdc8e.pcap -c 8\r\n02:05:03.408800 IP - 190.88.35.82.0 \u003e x.x.x.x.0: UDP, length 1167\r\n02:05:03.522186 IP - 95.111.126.202.0 \u003e x.x.x.x.0: UDP, length 1448\r\n02:05:03.525476 IP - 78.90.250.3.0 \u003e x.x.x.x.0: UDP, length 839\r\n02:05:03.550516 IP - 203.247.133.133.0 \u003e x.x.x.x.0: UDP, length 1472\r\n02:05:03.571970 IP - 54.158.14.127.0 \u003e x.x.x.x.0: UDP, length 1328\r\n02:05:03.734834 IP - 1.21.56.71.0 \u003e x.x.x.x.0: UDP, length 1250\r\n02:05:03.745220 IP - 195.4.131.174.0 \u003e x.x.x.x.0: UDP, length 1472\r\n02:05:03.766862 IP - 157.7.137.101.0 \u003e x.x.x.x.0: UDP, length 1122\r\nAn avid reader will notice that the source IPs above are open DNS resolvers! Indeed, in our experience most of the attacks categorized as fragmentation are actually a side effect of DNS amplification.\r\nSource port 53/udp DNS\r\nOver the last six months we've seen 253 DNS amplification attacks. On average an attack used 7,100 DNS reflector servers and lasted 24 minutes. The average bandwidth was around 3.4Gbps, with the largest attack reaching 12Gbps.\r\nThis is a simplification though. As mentioned above, many DNS attacks were registered by our systems as two distinct vectors: one categorized as source port 53, the other as source port 0. This happened when the DNS server flooded us with DNS responses larger than the max packet size, usually about 1,460 bytes. It's easy to see if that was the case by inspecting the DNS attack packet lengths. 
Here's an example:\r\nDNS attack packet lengths min:44.00 avg:1458.94 max:1500.00 dev:208.14 count:40000\r\nDNS attack packet lengths:\r\n value |-------------------------------------------------- count\r\n 8 | 0\r\n 16 | 0\r\n 32 | 129\r\n 64 | 479\r\n 128 | 84\r\n 256 | 164\r\n 512 | 268\r\n 1024 |************************************************** 38876\r\nThe great majority of the received DNS packets were indeed close to the max packet size. This suggests the DNS responses were large and were split into multiple fragmented packets. Let's see the packet size distribution for the accompanying source port 0 attack:\r\n$ tcpdump -n -r 4651d0ec9e6fdc8e.pcap \\\r\n | grep length \\\r\n | sed -s 's#.*length \\([0-9]\\+\\).*#\\1#g' \\\r\n | ~/bin/mmhistogram -t \"Port 0 packet length\" -l -b 100\r\nPort 0 packet length min:0.00 avg:1264.81 max:1472.00 dev:228.08 count:40000\r\nPort 0 packet length:\r\n value |-------------------------------------------------- count\r\n 0 | 348\r\n 100 | 7\r\n 200 | 17\r\n 300 | 11\r\n 400 | 17\r\n 500 | 56\r\n 600 | 3\r\n 700 | ** 919\r\n 800 | * 520\r\n 900 | * 400\r\n 1000 | ******** 3083\r\n 1100 | ************************************ 12986\r\n 1200 | ***** 1791\r\n 1300 | ***** 2057\r\n 1400 |************************************************** 17785\r\nAbout half of the fragments were large, close to the max packet length, and the rest were just shy of 1,200 bytes. This makes sense: a typical max DNS response is capped at 4,096 bytes. 
A 4,096-byte response would be seen on the wire as one fragment carrying the UDP header, one max-length fragment and one fragment of around 1,200 bytes:\r\n4,096 = 1,460 + 1,460 + 1,176\r\nFor the record, the particular attack illustrated here used about 17k reflector server IPs, lasted 64 minutes, and generated about 6Gbps on the source port 53 strand and 11Gbps of source port 0 fragments.\r\nWe have blogged about DNS reflection attacks in the past:\r\nHow to Launch a 65Gbps DDoS, and How to Stop One\r\nDeep Inside a DNS Amplification DDoS Attack\r\nHow the Consumer Product Safety Commission is (Inadvertently) Behind the Internet’s Largest DDoS Attacks\r\nOther protocols\r\nWe've seen amplification using other protocols, such as:\r\nport 19 - Chargen\r\nport 27015 - SRCDS\r\nport 20800 - Call Of Duty\r\n...and many other obscure protocols. These attacks were usually small and not notable. We didn't see enough of them to provide meaningful statistics, but the attacks were automatically mitigated.\r\nPoor observability\r\nUnfortunately, we're not able to report on the contents of the attack traffic. This is notable for the NTP and DNS amplifications - without case-by-case investigations we can't report what responses were actually being delivered to us.\r\nThis is because all these attacks were stopped at the network layer. Routers are heavily optimized to perform packet forwarding and have only a limited capacity for extracting raw packets. Basically, there is no \"tcpdump\" there.\r\nWe track these attacks with netflow, and we observe them hitting our routers' firewalls. 
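The tuple-based bookkeeping described earlier - a flood identified by (Protocol, Source Port, Target IP) - can be sketched from netflow-style records. This is an illustrative sketch only: the record layout and the 100kpps threshold are assumptions made for the example, not Gatebot internals.

```python
from collections import Counter

# Aggregate flow records by (protocol, source port, target IP) and flag
# tuples whose packet rate crosses a threshold. Record layout and the
# 100kpps default are illustrative assumptions.
def detect_floods(flow_records, window_seconds, threshold_pps=100_000):
    packets = Counter()
    for proto, src_port, dst_ip, pkt_count in flow_records:
        packets[(proto, src_port, dst_ip)] += pkt_count
    return {t: c / window_seconds
            for t, c in packets.items()
            if c / window_seconds > threshold_pps}

records = [
    ('udp', 123, '198.51.100.1', 40_000_000),  # NTP-style flood
    ('udp', 53, '198.51.100.1', 1_000),        # normal DNS traffic
]
floods = detect_floods(records, window_seconds=60)
# only the port-123 tuple exceeds the threshold
```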
The tcpdump snippets shown above were actually fake, reconstructed artificially from netflow data.\r\nTrivial to mitigate\r\nWith a properly configured firewall and sufficient network capacity (which isn't always easy to come by unless you are the size of Cloudflare) it's trivial to block reflection attacks. But note that we've seen reflection attacks of up to 80Gbps, so you do need sufficient capacity.\r\nProperly configuring a firewall is not rocket science: a default DROP policy can get you quite far. In other cases you might want to configure rate limiting rules. This is a snippet from our JunOS config:\r\nterm RATELIMIT-SSDP-UPNP {\r\n    from {\r\n        destination-prefix-list {\r\n            ANYCAST;\r\n        }\r\n        next-header udp;\r\n        source-port 1900;\r\n    }\r\n    then {\r\n        policer SA-POLICER;\r\n        count ACCEPT-SSDP-UPNP;\r\n        next term;\r\n    }\r\n}\r\nBut properly configuring a firewall requires some Internet hygiene. You should avoid using the same IP for inbound and outbound traffic. For example, filtering a potential NTP DDoS will be harder if you can't just block inbound port 123 indiscriminately. If your server requires NTP, make sure it exits to the Internet over a non-server IP address!\r\nCapacity game\r\nWhile having sufficient network capacity is necessary, you don't need to be a Tier 1 network to survive an amplification DDoS. 
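As an aside, the log2 histograms and the min/avg/med summary lines quoted throughout this post come from a small helper tool (mmhistogram). A minimal Python sketch of the same power-of-two bucketing idea, purely illustrative and not the real tool:

```python
import math
import statistics

# Bucket values by powers of two and print mmhistogram-style output.
# Bar scaling and formatting are approximations of the tool's output.
def log2_histogram(values, bar_width=50):
    buckets = {}
    for v in values:
        b = 0 if v < 1 else 2 ** int(math.log2(v))
        buckets[b] = buckets.get(b, 0) + 1
    print('min:%.2f avg:%.2f med:%.2f max:%.2f count:%d' % (
        min(values), statistics.mean(values), statistics.median(values),
        max(values), len(values)))
    top = max(buckets.values())
    for b in sorted(buckets):
        bar = '*' * round(bar_width * buckets[b] / top)
        print('%8d |%-*s %d' % (b, bar_width, bar, buckets[b]))
    return buckets

# Attack sizes in Gbps (made-up sample values)
log2_histogram([0.5, 2.2, 2.9, 3.35, 6.0, 7.1, 64.4])
```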
The median attack size we've seen was just 3.35Gbps, and the average 7Gbps. Only 195 of the 6,329 attacks recorded - 3% - were larger than 30Gbps.\r\nAll attacks in Gbps: min:0.04 avg:7.07 med:3.35 max:78.03 dev:9.06 count:6329\r\nAll attacks in Gbps:\r\n value |-------------------------------------------------- count\r\n 0 | **************** 658\r\n 1 | ************************* 1012\r\n 2 |************************************************** 1947\r\n 4 | ****************************** 1176\r\n 8 | **************** 641\r\n 16 | ******************* 748\r\n 32 | **** 157\r\n 64 | 14\r\nBut not all Cloudflare datacenters have equally sized network connections to the Internet. So how do we manage? Cloudflare was architected to withstand large attacks. We are able to spread the traffic at two layers:\r\nOur public network uses Anycast. For certain attack types - like amplification - this allows us to split the attack across multiple datacenters, avoiding a single choke point.\r\nAdditionally, we use ECMP internally to spread the traffic destined to a single IP address across multiple physical servers.\r\nIn the examples above, I showed a couple of amplification attacks getting nicely distributed across dozens of datacenters around the globe. In the attacks shown, if our router firewalls had failed, our physical servers wouldn't have received more than 500kpps of attack data. A well-tuned iptables firewall should be able to cope with such a volume without special kernel offload help.\r\nInter-AS Flowspec for the rest\r\nWithstanding reflection attacks requires sufficient network capacity. Internet citizens without fat network cables should use a good Internet Service Provider that supports flowspec.\r\nFlowspec can be thought of as a protocol enabling firewall rules to be transmitted over a BGP session. 
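For illustration, a rule to drop an SSDP reflection flood aimed at a victim prefix might look like the following JunOS-style flow route. The prefix and rule name are placeholders, and the exact syntax varies by platform and software version - treat this as a sketch, not a drop-in configuration:

```
routing-options {
    flow {
        route DROP-SSDP-REFLECTION {
            match {
                destination 203.0.113.10/32;   /* attacked prefix (placeholder) */
                protocol udp;
                source-port 1900;
            }
            then discard;
        }
    }
}
```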
In theory flowspec allows BGP routers in different Autonomous Systems to share firewall rules. A rule can be set up on the attacked router and distributed to the ISP's network with BGP magic. This stops the packets closer to the source and effectively relieves network congestion.\r\nUnfortunately, due to performance and security concerns, only a handful of large ISPs allow inter-AS flowspec rules. Still, it's worth a try - check if your ISP is willing to accept flowspec from your BGP router!\r\nAt Cloudflare we maintain an intra-AS flowspec infrastructure, and we have plenty of war stories about it.\r\nSummary\r\nIn this blog post we've given details of three popular reflection attack vectors: NTP, SSDP and DNS. We discussed how the Cloudflare Anycast network helps us avoid a single choke point. In most cases dealing with reflection attacks is not rocket science, though sufficient network capacity is needed; simple firewall rules are usually enough to cope.\r\nThe types of DDoS attacks we see from other vectors (such as IoT botnets) are another matter. They tend to be much larger and require specialized, automatic DDoS mitigation. And, of course, there are many DDoS attacks that use techniques other than reflection, and not only UDP.\r\nWhether you face DDoS attacks of 10Gbps+, 100Gbps+ or 1Tbps+, Cloudflare can mitigate them.\r\nCloudflare's connectivity cloud protects entire corporate networks, helps customers build Internet-scale applications efficiently, accelerates any website or Internet application, wards off DDoS attacks, keeps hackers at bay, and can help you on your journey to Zero Trust.\r\nVisit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.\r\nTo learn more about our mission to help build a better Internet, start here. 
If you're looking for a new career direction, check out our open positions.\r\nTags: Attacks · DDoS · Reliability · Security\r\nSource: https://blog.cloudflare.com/reflections-on-reflections/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://blog.cloudflare.com/reflections-on-reflections/"
	],
	"report_names": [
		"reflections-on-reflections"
	],
	"threat_actors": [],
	"ts_created_at": 1775434405,
	"ts_updated_at": 1775791256,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/18efc99a7ac4557d1d6456c318794dedfca467cf.pdf",
		"text": "https://archive.orkl.eu/18efc99a7ac4557d1d6456c318794dedfca467cf.txt",
		"img": "https://archive.orkl.eu/18efc99a7ac4557d1d6456c318794dedfca467cf.jpg"
	}
}