Compression: Making the Big Smaller and Faster (Part 1)
By Nilabh Mishra

How important is data compression? The sharing of information in a fast and efficient manner has been an area of constant study and research. Companies like Google and Facebook have spent a lot of time and effort developing faster and better compression algorithms. Compression algorithms have existed since the ’70s, and the ongoing research into better algorithms shows just how important compression is for the Internet and for all of us.

The Need for Data Compression
The World Wide Web (WWW) has undergone a lot of changes since it was made available to the public in 1991. Believe it or not, a copy of the world’s first website can still be browsed at info.cern.ch. Back then, webpages were very simple. Today, they are far more complex, and there is an evident need for compression algorithms that are lossless, fast, and efficient.

There are several best practices that help optimize page load times; here is a blog post that discusses webpage optimization. In this article, we will spend some time understanding the basics of compression and how it works. We will also cover a newer compression method called “Brotli” in the second part of this blog.

Encoding and Data Compression
Let’s start by understanding what data encoding and compression are:

The word “compression” comes from the Latin word compressare, which means to press together. “Encoding” is the process of placing a sequence of characters into a specialized format that allows efficient data storage as well as transmission. Per Wikipedia: “Data compression involves encoding information using fewer bits than the original representation.”

Compression plays a key role when it comes to saving bandwidth and speeding up your site. Modern-day websites involve a lot of HTTP requests and responses between the client (the browser) and the server to serve a webpage. With an overall increase in the number of HTTP requests and responses, it becomes important to ensure that these transfers take place quickly and efficiently.

HTTP works on a request-response model, as demonstrated below. In this case, we are not using any compression method on the response sent by the server:

  • The browser sends an HTTP request asking for the index.html page
  • The server looks for the requested file and responds with the requested resource and a 200 OK HTTP status message
  • The browser receives the server’s response and renders the page

As we can see, in this case there is no compression involved. The server responded with a 300 KB file (the index.html page). If the file were bigger, the response would take longer to travel over the wire, increasing the overall page load time. Please note that we are currently looking at only a single HTTP response; modern websites receive hundreds of such HTTP responses from the server to render a webpage.
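To make this exchange concrete, here is a minimal sketch using Python's standard http.client module. The host example.com is just a stand-in for any web server, and since no Accept-Encoding header is sent, the response comes back uncompressed:

```python
import http.client

# Open a connection and request a page, much as a browser would
conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")  # ask for the site's index page

# The server looks up the resource and answers with a status line and a body
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK
body = response.read()
print(len(body), "bytes received")        # size of the (uncompressed) response
conn.close()
```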

The image below shows the same HTTP request-response exchange between the browser and the server, but this time compression is used to reduce the size of the response sent by the server to the browser.

Today, complex and dynamic websites generate hundreds of HTTP requests/responses, making it important to have a mechanism that ensures fast and efficient data transfer between the server and the browser. This is when compression algorithms like Deflate and Gzip came into existence.

Introduction to Gzip
Gzip is a compression method that is used to make files smaller for storage and faster transmission over the network. Gzip is one of the most popular, powerful, and effective ways of compressing data and it can reduce the file size by up to 70%.
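As a quick illustration, here is a minimal sketch using Python's standard gzip module. The repetitive markup is a made-up payload, but it shows the kind of reduction Gzip achieves on text:

```python
import gzip

# Text-heavy payloads compress well because of repeated substrings
original = ("<div class='row'><span>Hello, world!</span></div>\n" * 200).encode("utf-8")
compressed = gzip.compress(original)

print(f"original:   {len(original)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"reduction:  {100 * (1 - len(compressed) / len(original)):.0f}%")

# Decompression restores the exact original bytes (lossless)
assert gzip.decompress(compressed) == original
```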

Gzip is based on the DEFLATE algorithm, which in turn is a combination of LZ77 and Huffman coding. Understanding how LZ77 works is essential to understand how compression methods like DEFLATE and Gzip work.

LZ77
Developed in 1977 by Abraham Lempel and Jacob Ziv, the LZ77 method of compression looks for sequences of characters that recur in a text. It performs compression by replacing recurring strings with pointers that back-reference identical strings encountered earlier in the text being compressed.

The pointer or backreference is of the form <relative jump, length>, where relative jump signifies how many bytes back the previous occurrence of the string lies, and length is the total number of identical bytes found.

Now let us understand this better with the help of an example. Assume, there is a text file with the following text:

As idle as a painted ship, upon a painted ocean.

In this file, we see the strings “as” and “painted” occurring multiple times. What the LZ77 method does is replace the repeated occurrences with the notation <relative jump, length>.

So using LZ77, the text will get encoded in the following way:

As idle <8,2> a painted ship, upon a <21,7> ocean.

To encode the text, we took the following steps:

  1. Looked at the text and found repeated occurrences of the same strings or substrings.
  2. Replaced the later occurrences of the two strings “as” and “painted” with the notation <relative jump, length>.
  3. The string “painted”, which earlier occupied 7 bytes (7 characters X 1 byte each), was compressed to occupy only 2 bytes: 2 bytes, or 16 bits, is the size of the pointer or backreference here.
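The toy encoder below (a deliberately simplified, hypothetical sketch in Python, not the real DEFLATE matcher) mimics these steps: it searches the already-seen text for the longest match and emits <relative jump, length> tokens, keeping single characters as literals otherwise. Note that a byte-level matcher is case-sensitive, so it would not pair “As” with “as” the way the hand-worked example does; the input is lowercased here for that reason.

```python
def lz77_encode(text, min_len=3, window=255):
    """Simplified LZ77: emit literal characters or (jump, length) back-references."""
    i, tokens = 0, []
    while i < len(text):
        best_len, best_jump = 0, 0
        # Search the sliding window of already-seen text for the longest match
        # (simplification: matches may not overlap the current position)
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(text) and j + k < i and text[j + k] == text[i + k]:
                k += 1
            if k > best_len:
                best_len, best_jump = k, i - j
        if best_len >= min_len:
            tokens.append((best_jump, best_len))  # <relative jump, length>
            i += best_len
        else:
            tokens.append(text[i])                # literal character
            i += 1
    return tokens

# The second "a painted " comes out as the back-reference (21, 11)
print(lz77_encode("as idle as a painted ship, upon a painted ocean."))
```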

Huffman Coding
Huffman coding is another lossless data compression algorithm. The frequency of occurrence of characters in a text file (or of pixel values in an image) forms the basis of Huffman coding. To get a deeper understanding of the algorithm, read this detailed tutorial that clearly explains how Huffman coding works.
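For a flavor of the idea, here is a minimal character-level sketch using Python's heapq: the two least frequent subtrees are repeatedly merged, so the most frequent symbols end up with the shortest bit codes.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a {symbol: bitstring} table; frequent symbols get shorter codes."""
    # Each heap entry: (subtree frequency, tie-breaker, {symbol: code-so-far})
    heap = [(freq, n, {sym: ""}) for n, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # the two least frequent subtrees...
        f2, _, right = heapq.heappop(heap)  # ...are merged under a new node
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]

codes = huffman_codes("as idle as a painted ship, upon a painted ocean.")
print(sorted(codes.items(), key=lambda kv: len(kv[1])))  # shortest codes first
```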

All modern browsers support Gzip compression for HTTP responses. With Gzip, one of the most important questions is what to compress. It works best with text-based resources like static HTML, CSS files, and JavaScript, but it is not very efficient for already compressed resources such as images. To support Gzip, the server must be configured to allow gzip compression.
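A quick way to see this is to gzip a text payload next to incompressible random bytes, which stand in here for an already compressed resource such as a JPEG (a sketch, not a benchmark):

```python
import gzip, os

text = ("body { margin: 0; padding: 0; font-family: sans-serif; }\n" * 100).encode()
already_compressed = os.urandom(len(text))  # stand-in for a JPEG or PNG payload

for label, data in [("text  ", text), ("binary", already_compressed)]:
    ratio = len(gzip.compress(data)) / len(data)
    print(f"{label}: {ratio:.0%} of original size after gzip")
```

The text shrinks dramatically, while the random bytes come out slightly larger than they went in, because there is no redundancy left to remove.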

The image above shows the impact Gzip compression can have on a text-based resource like a JavaScript file. In this case, we ran 2 instant tests using Catchpoint to the URL: https://code.jquery.com/jquery-3.2.1.js.

For the first test run, we asked for no encoding by passing the custom header Accept-Encoding: identity along with the request. The first image shows no Content-Encoding being returned for the request.

In the second image, the browser is sending Accept-Encoding: gzip, and the server is sending a gzipped file as the response.
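The same comparison can be reproduced with a small script. The sketch below uses Python's http.client, which does not transparently decompress responses, so the byte counts reflect what actually traveled over the wire (assuming the server still honors both encodings for this URL):

```python
import http.client

def fetch(encoding):
    conn = http.client.HTTPSConnection("code.jquery.com")
    # Tell the server which content-codings we accept
    conn.request("GET", "/jquery-3.2.1.js", headers={"Accept-Encoding": encoding})
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    return resp.getheader("Content-Encoding"), len(body)

print("identity:", fetch("identity"))  # no compression on the wire
print("gzip:    ", fetch("gzip"))      # far fewer bytes transferred
```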

We can clearly see how Gzip can drastically compress files and improve the data transmission rate over the wire.

Catchpoint’s Scheduled tests also highlight the difference between compressed and uncompressed content loading on webpages.

In the screenshot above, we see the difference in downloaded bytes for static content (CSS, JavaScript) when using Gzip vs. when not using any encoding.

Brotli Compression
A new compression method called Brotli was introduced not too long ago. The Brotli compression algorithm is optimized for the web, and specifically for small text documents. We will discuss what this compression method has to offer the World Wide Web community in the second part of this article.
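As a teaser, Brotli can be exercised from Python much like gzip. This sketch assumes the third-party brotli package (pip install brotli), which is not part of the standard library:

```python
import gzip
import brotli  # third-party package: pip install brotli

data = ("<p>Hello, World Wide Web!</p>\n" * 200).encode("utf-8")
print("gzip:  ", len(gzip.compress(data)), "bytes")
print("brotli:", len(brotli.compress(data, quality=11)), "bytes")
```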
