By Lori MacVittie
May 1, 2014 09:00 AM EDT
This week's "bad news" with respect to information security centers on Facebook and the exploitation of HTTP caches to effect a DDoS attack. Reported as a 'vulnerability', this exploit takes advantage of the way the application protocol is designed to work. In fact, the same author who reported the Facebook 'vulnerability' has also shown you can use Google to do the same thing. Just about any site that enables you to submit content containing links and then retrieves those links for you (for caching purposes) could be used in this way. It's not unique to Facebook or Google; they just happen to have the perfect environment to make such an exploit highly effective.
The exploit works by using a site (in this case Facebook) to load content, taking advantage of the general principle of amplification to effectively DDoS a third-party site. This is a flood-style attack, meaning it attempts to overwhelm a server by inundating it with requests that voraciously consume server-side resources and slow everyone down - to the point of forcing the site to appear "down" to legitimate users.
The requests brokered by Facebook are themselves 110% legitimate requests. The requests for an image (or PDF or large video file) are well-formed, and nothing about the requests on an individual basis could be detected as being an attack. This is, in part, why the exploit works: because the individual requests are wholly legitimate requests.
How it Works
The trigger for the "attack" is the caching service. Caches are generally excellent at, well, caching static objects with well-defined URIs. A cache doesn't have a problem finding /myimage.png. It's either there, or it's not and the cache has to go to origin to retrieve it. Where things get more difficult is when requests for content are dynamic; that is, they send parameters that the origin server interprets to determine which image to send, e.g. /myimage?id=30. This is much like an old developer trick to force the reload of dynamic content when browser or server caches indicate a match on the URL. By tacking on a random query parameter, you can "trick" the browser and the server into believing it's a brand new object, and it will go to origin to retrieve it - even though the query parameter is never used. That's where the exploit comes in.
HTTP servers accept as part of the definition of a URI any number of variable query parameters. Those parameters can be ignored or used at the discretion of the application. But when the HTTP server is looking to see if that content has been served already, it does look at those parameters. The reference for a given object is its URL, and thus tacking on a query parameter forces (or tricks if you prefer) the HTTP server to believe the object has never been served before and thus can't be retrieved from a cache.
Caches act on the same principles as an HTTP server because when you get down to brass tacks, a cache is a very specialized HTTP server, focused on mirroring content so it's closer to the user.
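To make the principle concrete, here is a minimal, hypothetical sketch of a cache keyed on the full request URI, query string included. It is not any particular product's implementation, but it shows why every new parameter value sends the cache back to origin.

cache = {}  # full request URI -> cached response

def fetch_from_origin(uri):
    # Stand-in for an actual request to the origin server.
    print(f"cache miss, going to origin for {uri}")
    return f"content for {uri}"

def get(uri):
    # The query string is part of the key, so /file?r=1 and /file?r=2
    # are different objects as far as this cache is concerned.
    if uri not in cache:
        cache[uri] = fetch_from_origin(uri)
    return cache[uri]

get("/file?r=1")   # miss - goes to origin
get("/file?r=1")   # hit - served from cache
get("/file?r=2")   # miss - goes to origin, even if the origin ignores "r"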
A note crafted for the exploit might, for example, embed a long series of image tags pointing at the target, each with a different throwaway query parameter:
<img src=http://target.com/file?r=1>
<img src=http://target.com/file?r=2>
<img src=http://target.com/file?r=3>
...
<img src=http://target.com/file?r=1000>
Many, many, many, many (repeat as necessary) web applications are built using such models. Whether to retrieve text-based content or images is irrelevant to the cache. The cache looks at the request and, if it can't match it somehow, it's going to go to origin.
This is exactly what's possible with Facebook Notes and Google. By taking advantage of (exploiting) this design principle, if a note crafted with multiple image objects retrieved via dynamic queries is viewed by enough users at the same time, the origin can become overwhelmed or its network oversubscribed.
This is what makes it an exploit, not a vulnerability. There's nothing wrong with the behavior of these caches - they are working exactly as they were designed to work with respect to HTTP. The problem is that when the protocol and caching behavior were defined, such abusive behavior was not considered.
In other words, this is a protocol exploit not specific to Facebook (or Google). In fact, similar exploits have been used to launch attacks in the past. For example, consider some noise raised around WordPress in March 2014 that indicated it was being used to attack other sites by bypassing the cache and forcing a full reload from the origin server:
If you notice, all queries had a random value (like “?4137049=643182”) that bypassed their cache and force a full page reload every single time. It was killing their server pretty quickly.
But the most interesting part is that all the requests were coming from valid and legitimate WordPress sites. Yes, other WordPress sites were sending that random requests at a very large scale and bringing the site down.
The WordPress exploit was taking advantage of the way "pingbacks" work. Attackers were using sites to add pingbacks to amplify an attack on a third party site (also, ironically, a WordPress site).
It's not just Facebook, or Google - it's inherent in the way caching is designed to work.
Not Just HTTP
This isn't just an issue with HTTP. We can see similar behavior in a DNS exploit that renders DNS caching ineffective as protection against certain attack types. In the DNS case, querying a cache with a random host name results in a query to the authoritative (origin) DNS service. If you send enough random host names at the cache, eventually the DNS service is going to feel the impact and possibly choke.
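The dynamic is the same as with HTTP, just with query names instead of URIs. A minimal, hypothetical sketch (not a real resolver, just the cache-lookup logic) illustrates it:

import random
import string

resolver_cache = {}  # query name -> answer

def query_authoritative(qname):
    # Stand-in for a query to the authoritative (origin) DNS service.
    print(f"cache miss, querying authoritative server for {qname}")
    return "192.0.2.1"

def resolve(qname):
    if qname not in resolver_cache:
        resolver_cache[qname] = query_authoritative(qname)
    return resolver_cache[qname]

# Random labels never match a cached entry, so every single query
# is forwarded to the authoritative server.
for _ in range(5):
    label = "".join(random.choices(string.ascii_lowercase, k=10))
    resolve(f"{label}.example.com")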
In general, these types of exploits are based on protocol and well-defined system behavior. A cache is, by design, required to either return a matching object if found or go to the origin server if it is not. In both the HTTP and DNS case, the caching services are acting properly and as one would expect.
The problem is that this proper behavior can be exploited to effect a DDoS attack - against third parties in the case of Facebook/Google and against the domain owner in the case of DNS.
These are not vulnerabilities, they are protocol exploits. This same "vulnerability" is probably present in most architectures that include caching. The difference is that Facebook's ginormous base of users allows for what is expected behavior to quickly turn into what looks like an attack.
The general consensus right now is the best way to mitigate this potential "attack" is to identify and either rate limit or disallow requests coming from Facebook's crawlers by IP address. In essence, the suggestion is to blacklist Facebook (and perhaps Google) to keep it from potentially overwhelming your site.
The author noted in his post regarding this exploit that:
Facebook crawler shows itself as facebookexternalhit. Right now it seems there is no other choice than to block it in order to avoid this nuisance.
The post was later updated to note that blocking by agent may not be enough, hence the consensus on IP-based blacklisting.
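As a rough sketch of what that consensus mitigation looks like in practice (hypothetical filter code; the facebookexternalhit agent string comes from the author's post, but the IP range below is a placeholder, not an actual crawler range):

from ipaddress import ip_address, ip_network

# Placeholder network for illustration only; real crawler ranges would
# have to be looked up and kept current.
BLOCKED_NETWORKS = [ip_network("203.0.113.0/24")]
BLOCKED_AGENTS = ["facebookexternalhit"]

def is_blocked(client_ip, user_agent):
    # Agent-string matching alone may not be enough, so the IP check
    # backs it up.
    if any(agent in (user_agent or "") for agent in BLOCKED_AGENTS):
        return True
    addr = ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.10", "facebookexternalhit/1.1"))  # True
print(is_blocked("198.51.100.7", "Mozilla/5.0"))              # False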
The problem is that attackers could simply find another site with a large user base (there are quite a few out there with enough users to support a successful attack) and find the right mix of queries to bypass the cache (because caches are a pretty standard part of web-scale infrastructure) and voilà! Instant attack.
Blocking Facebook isn't going to stop other potential attacks, and it might seriously impede revenue-generating strategies that rely on Facebook as a channel. Rate limiting based on inbound query volume for specific content will help mitigate the impact (and ensure legitimate requests continue to be served), but it requires a service that can intermediate and monitor inbound requests and, upon seeing behavior indicative of a potential attack, intercede or apply the appropriate rate limiting policy. Such a policy could go further and blacklist IP addresses showing sudden increases in requests, or simply block requests for the URI in question and return some other content instead.
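Here is a minimal sketch of that kind of per-URI rate limiting (hypothetical thresholds and a simple sliding window; a production service would be far more sophisticated and would sit in front of the origin):

import time
from collections import defaultdict

WINDOW_SECONDS = 10    # hypothetical window
MAX_REQUESTS = 100     # hypothetical per-URI threshold

request_log = defaultdict(list)  # URI -> timestamps of recent requests

def allow(uri):
    now = time.time()
    # Keep only the requests still inside the window.
    request_log[uri] = [t for t in request_log[uri] if now - t < WINDOW_SECONDS]
    if len(request_log[uri]) >= MAX_REQUESTS:
        return False  # over the limit: serve an error or alternate content
    request_log[uri].append(now)
    return True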
Another option would be to use a caching solution capable of managing dynamic content. For example, F5 Dynamic Caching includes the ability to designate parameters as either indicative of new content or not. That is, the caching service can be configured to ignore some (or all) parameters and serve content out of cache instead of hammering on the origin server.
Let's say the URI for an image was: /directory/images/dog.gif?ver=1;sz=728X90 where valid query parameters are "ver" (version) and "sz" (size). A policy can be configured to recognize "ver" as indicative of different content while all other query parameters indicate the same content and can be served out of cache. With this kind of policy an attacker could send any combination of the following and the same image would be served from cache, even though "sz" is different and there are random additional query parameters.
/directory/images/dog.gif?ver=1;sz=728X90; id=1234
/directory/images/dog.gif?ver=1;sz=728X900; id=123456
/directory/images/dog.gif?ver=1;sz=728X90; cid=1234
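Here is a minimal sketch of that kind of cache-key normalization (hypothetical code, not F5's implementation): only the parameters designated as content-significant, in this case just "ver", contribute to the key, so every request above maps to the same cached object.

from urllib.parse import urlsplit, parse_qsl

SIGNIFICANT_PARAMS = {"ver"}  # parameters that indicate genuinely different content

def cache_key(uri):
    parts = urlsplit(uri)
    # These example URIs use ';' as the parameter separator, so normalize
    # it to '&' before parsing.
    params = parse_qsl(parts.query.replace(";", "&"))
    kept = sorted((k.strip(), v) for k, v in params if k.strip() in SIGNIFICANT_PARAMS)
    return (parts.path, tuple(kept))

# All three of the example requests produce the same key, so they are
# served from cache instead of hammering the origin server.
print(cache_key("/directory/images/dog.gif?ver=1;sz=728X90; id=1234"))
print(cache_key("/directory/images/dog.gif?ver=1;sz=728X900; id=123456"))
print(cache_key("/directory/images/dog.gif?ver=1;sz=728X90; cid=1234"))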
By placing an application-fluent caching service in front of your origin servers, you're able to handle the load when Facebook (or Google) comes knocking.
There have been no reports of an attack stemming from this exploitable condition in Facebook Notes or Google, so blacklisting crawlers from either Facebook or Google seems premature. Given that this condition is based on protocol behavior and system design, and not on a vulnerability unique to Facebook (or Google), it would be a good idea to have a plan in place to address it, should such an attack actually occur - whether from there or from some other site.
You should review your own architecture and evaluate its ability to withstand a sudden influx of dynamic requests for content like this, and put into place an operational plan for dealing with it should such an event occur.
For more information on protecting against all types of DDoS attacks, check out a new infographic we’ve put together here.