
Analyzing Load on Apache Web Server

Monitoring performance over any desired time period

Web servers play an increasingly important role in computing as the world has shifted away from traditional stand-alone desktop computing toward the Internet's client-server paradigm and its variants. Almost everything on a network is consumed through web services or web pages, and a web server is integral to both. It is therefore imperative for organizations to be able to finely monitor their servers' load and average performance at different times of day and according to the kind of resources hosted on the server. This raises the need for a solution through which organizations and individuals can monitor their web servers' performance at an arbitrary level of granularity, depending on their needs.

This article addresses that need and implements an approach in the Java programming language.

Problem Statement - Use Case Scenario
Consider the case where you want to monitor the performance of an Apache Web Server over some past time period. You would want to know how many pages the server served during that elapsed span, so as to gauge both the performance and the utilization of the web server.

The mod_status module in the Apache Web Server already provides a status file that reports the server's performance through several statistics, including the number of busy and idle workers, the total number of requests served, and the time the server has been running since it was started.

But a major disadvantage of the mod_status file is that it reports the average number of requests since the time the server was started, and this may not reflect the server's actual performance at particular instances. For example, suppose the server started at 10:00 AM and ran until 12:00 noon, serving 120 requests in those two hours. The rate reported by the mod_status file would be 1 request per minute. But suppose the server was idle between 10:00 and 11:00 AM and actually served all 120 requests within a single hour. The actual performance would then be 2 requests per minute from 11:00 to 12:00. The mod_status file offers no way to get the status for different time instances; it returns only the average from the time the server was started to the present.

In this article we will implement a way to find the server's average performance over different time periods without having to stop and restart the server again and again. We will also store those values over a fairly long period in a database so that we can build a better picture of server performance across different time periods.

Tech and Services to be Used
We are going to implement the solution in Java and will make use of the following services and technologies:

  1. Apache Web Server: We will be using an instance of the Apache Web Server in order to monitor it, sending requests to fetch the machine-readable status file generated by the mod_status module.
  2. A listener to receive the records: In this example we will showcase it through the H2 database, which will store the values fetched from the status module. Any relational database can be used instead of H2, and users are free to register another listener and implement it however they want.
  3. Servlets and JSP (optional): The implementation communicates with the Apache Web Server through a servlet hosted on a Tomcat server. The same can be achieved from the command line.


Let us now delve into the actual implementation of the application. This article uses Eclipse as a development environment and assumes familiarity with Eclipse. The article also assumes that you have an Apache web server installed and running on your system. If you do not have Apache Web Server installed, you can download and install it from http://httpd.apache.org/download.cgi

Implementation Roadmap
In this implementation, we will first get the machine readable file which contains server statistics from the Apache web server. We will store these values in a database for a number of time stamps in a convenient fashion. These values would then be used for the calculation of the average number of requests served for different time periods by the Apache Web Server. In the final step, we will allow this app to be used through a Servlet. Alternatively, users can also issue commands through a command line or design a UI for the same which allows them to specify the time frame for which they want the average number of requests for the Apache Web server.

Enable and Test Status Support for mod_status

In order to get the server statistics from mod_status, you first have to enable the module so that it can be queried to generate the status file. To do so, go to the Apache Web Server installation directory, open the httpd.conf file in the conf directory, and add the following to the file:
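A typical directive block for enabling the status handler looks like the following. This is a sketch based on the standard mod_status setup, assuming the default /server-status location and Apache 2.4's authorization syntax (on Apache 2.2, use Order/Allow directives instead of Require):

```apache
<Location "/server-status">
    SetHandler server-status
    # This grants open access; restrict it as appropriate for your environment.
    Require all granted
</Location>
```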

Also in httpd.conf, turn on extended status reporting by setting the ExtendedStatus directive:

ExtendedStatus On

You should be able to get the server statistics now. Hit the following URL in a web browser to get the status file (assuming the server runs locally on the default port, per the setup above, and the standard /server-status location):

http://localhost/server-status

The file is generated in two formats. One is human readable, which is what you get when you hit the above URL in the browser. The second is machine readable: a plain text file whose statistics an application can consume directly. For our purpose, we will call this machine-readable file from our application. It can be accessed by appending the ?auto query string:

http://localhost/server-status?auto
Create a Listener
You can implement the IApacheSnapshotProcessor interface, which has two methods. The implementation is up to the user. A sample implementation is provided through H2Persister, which implements the IApacheSnapshotProcessor interface.
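The article does not name the interface's two methods, so the sketch below assumes them (init() and process(), with a simplified process() signature; the real one receives the full set of parsed statistics). Any class that implements the interface can be registered as a listener in place of H2Persister:

```java
// Assumed method names: init() and process(); the real process() in the
// article receives all of the parsed mod_status statistics.
interface IApacheSnapshotProcessor {
    void init();                                   // e.g. open a DB connection
    void process(long totalAccesses, long uptime); // handle one status snapshot
}

// A trivial listener standing in for H2Persister: it just counts snapshots.
public class CountingProcessor implements IApacheSnapshotProcessor {
    private int snapshots = 0;

    @Override
    public void init() { /* nothing to set up */ }

    @Override
    public void process(long totalAccesses, long uptime) {
        snapshots++;
    }

    public int snapshotCount() {
        return snapshots;
    }

    public static void main(String[] args) {
        CountingProcessor p = new CountingProcessor();
        p.init();
        p.process(120, 7200);
        System.out.println(p.snapshotCount()); // prints 1
    }
}
```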

Create Schema in H2
As a second step, we will create a schema and a table for storing the values retrieved from the machine-readable file. The application will repeatedly ping the Apache Web Server URL at regular intervals to fetch the machine-readable file. We will parse the values obtained from the file and store them in the database for future use.

Download and install the H2 database from http://www.h2database.com/html/main.html.

Connect to H2 using the following credentials:

Create a schema and table in H2 to store the values. The table should contain the following fields:

1. ID (serves as the primary key)
2. TOTAL_ACCESS (total number of accesses since the server started)
3. TOTAL_KBYTES (total number of KBs served since the server started)
4. UPTIME
5. REQ_PS (requests per second)
6. BYTES_PS (bytes per second)
7. BYTES_PR (bytes per request)
8. BUSY_WORKER
9. IDLE_WORKER
10. SNAPSHOT_TIME_DIFF (an integer recording the number of seconds elapsed between the first server snapshot and the current one, as computed in the insert statement shown later. For example, if we take a server snapshot every 2 seconds starting at 10:00:00 AM, the snapshot at 10:00:00 AM gets a SNAPSHOT_TIME_DIFF of 0, the one at 10:00:02 AM gets 2, and so on.)
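Under these assumptions, a minimal H2 DDL might look like the following. The column types are assumptions, and the trailing SNAPSHOT_TIME timestamp column is inferred from the insert statement used later, which supplies a final DEFAULT beyond the ten fields listed above:

```sql
CREATE SCHEMA IF NOT EXISTS APACHE_MOD_STATUS;

CREATE TABLE APACHE_MOD_STATUS.COUNT_HANDLER (
    ID IDENTITY PRIMARY KEY,   -- filled via DEFAULT in the insert
    TOTAL_ACCESS BIGINT,
    TOTAL_KBYTES BIGINT,
    UPTIME BIGINT,
    REQ_PS DOUBLE,
    BYTES_PS DOUBLE,
    BYTES_PR DOUBLE,
    BUSY_WORKER INT,
    IDLE_WORKER INT,
    SNAPSHOT_TIME_DIFF BIGINT,
    -- assumed column for the trailing DEFAULT in the insert statement
    SNAPSHOT_TIME TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```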


Our table is now ready and we can use it to store values of all the fields which we get from the mod_status file.

Getting Values from the Machine Readable File
We now have to get the values from the machine-readable mod_status file and parse them before storing them in the database. Create a class (RequestCounter.java) that accesses the URL at regular intervals, reads the file, and parses the values into variables; these variables are then passed to a persister class that stores them in the database.

We first construct the URL in the proper format required by the Apache Web Server in a method (pool()).

A TimerTask then calls the pool() method repeatedly to fetch the Apache server status. The task runs for as long as we have specified: if we want to store values for the last 2 hours, we set the timer for that period. Once the period has elapsed, the cancel() method is called and we stop hitting the Apache Web Server URL.
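The scheduling described above can be sketched as follows. This is a simplified stand-in, not the article's RequestCounter: the task body just counts invocations where the real code would call pool() to fetch the ?auto status file:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the polling loop: a TimerTask fires at a fixed interval for a
// bounded monitoring window, after which the timer is cancelled.
public class StatusPoller {

    public static int poll(long intervalMs, long windowMs) throws InterruptedException {
        final AtomicInteger snapshots = new AtomicInteger();
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                snapshots.incrementAndGet(); // in the article, pool() runs here
            }
        }, 0, intervalMs);

        Thread.sleep(windowMs); // let the monitoring window elapse
        timer.cancel();         // stop hitting the server URL
        return snapshots.get();
    }

    public static void main(String[] args) throws Exception {
        // Poll every 100 ms for ~450 ms: roughly five snapshots.
        System.out.println(poll(100, 450));
    }
}
```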

getApacheStatus(url) calls the method that fetches the machine-readable file.

Once we have the file, we read it line by line, parse it, and store the values in private variables.
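The line-by-line parsing can be sketched as below. The "Key: value" field names in the sample are the standard keys emitted by mod_status's ?auto output; the map-based parse() helper is an illustration, not the article's exact code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: split each "Key: value" line of the ?auto output into a map,
// from which the individual statistics can be read into variables.
public class StatusParser {

    public static Map<String, String> parse(String body) {
        Map<String, String> fields = new HashMap<>();
        for (String line : body.split("\n")) {
            int colon = line.indexOf(':');
            if (colon > 0) {
                fields.put(line.substring(0, colon).trim(),
                           line.substring(colon + 1).trim());
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        String sample = "Total Accesses: 120\nTotal kBytes: 15\nUptime: 7200\n"
                      + "ReqPerSec: .0166\nBusyWorkers: 1\nIdleWorkers: 9\n";
        Map<String, String> f = parse(sample);
        System.out.println(f.get("Total Accesses")); // prints 120
        System.out.println(f.get("Uptime"));         // prints 7200
    }
}
```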

Storing Values in the Database
Once we have the parsed values from the file, we need to store them in the database. For this, create a file (H2Persister.java) that creates a connection to the database and inserts the values. Later on, we will use the same file to retrieve average performance values from the database as well.

Create a method (connect()) which will create a JDBC connection object for the database.

Class.forName("org.h2.Driver"); specifies the JDBC driver. We obtain the connection from DriverManager using the getConnection method and assign it to conn, an instance of Connection.

Create a method (process()) that takes the parsed values as arguments, prepares an SQL query, and stores those values in the database by executing that query.

Statement stmt = conn.createStatement(); creates a Statement object. We then prepare a query that inserts the values into the database.

String sql = "insert into APACHE_MOD_STATUS.COUNT_HANDLER values (DEFAULT," + TOTAL_ACCESS + "," + TOTAL_KBYTES + "," + UPTIME + "," + REQ_PS + "," + BYTES_PS + "," + BYTES_PR + "," + BUSY_WORKER + "," + IDLE_WORKER + "," + (System.currentTimeMillis() / 1000 - startupTime) + ", DEFAULT" + ")";

Afterwards, we execute the query to insert the values in the database.

We call this process() method from our previously created RequestCounter file and pass the values which we parsed there to the process() method.

hPersister.process(totalAccesses, totalKBytes, uptime, reqPerSec, bytesPerSec, bytesPerReq, busyWorkers, idleWorkers);

Getting Average Values

We now need a mechanism to retrieve the values stored in the database and process them to get the average number of requests during the time frame requested by the user. For this, create a method (getAverage) in the H2Persister.java file. This method takes the time frame as an argument. First, create an SQL query to retrieve the desired values from the database. For our purpose, we need the TOTAL_ACCESS and UPTIME fields. The following query does just that:

String sql = "SELECT ID,TOTAL_ACCESS,UPTIME FROM APACHE_MOD_STATUS.COUNT_HANDLER where SNAPSHOT_TIME_DIFF>" + (System.currentTimeMillis() / 1000 - startupTime - timeslot - 1);

Statement stmt = conn.createStatement();

ResultSet rs = stmt.executeQuery(sql);

Create an ArrayList which will store the values retrieved by the ResultSet object.

ArrayList<RPSTuple> rpsTupleList = new ArrayList<RPSTuple>();

This ArrayList holds objects of type RPSTuple, which has two private fields, totalAccess and uptime, with getter and setter methods for each.

Iterate over the ResultSet object rs and store the values into the ArrayList one by one.

Once we have the desired values in the ArrayList, let us now process them to get the average number of requests for the server over the specified period of time.

From rpsTupleList, we take the first node and the last node.

From these, we compute the average by subtracting the first node's total access count from the last node's, and dividing by the difference between the last node's and the first node's uptimes.

(lastNode.getTotalAccess() - firstNode.getTotalAccess()) / (lastNode.getUptime() - firstNode.getUptime())

This gives us the performance of an Apache Web Server in terms of the average number of requests over any desired time frame, not just the span from server startup to the present.
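The averaging step can be sketched as follows, using the article's earlier example (120 requests served between 11:00 AM and 12:00 noon). The RPSTuple here is a pared-down stand-in that simply holds the two values read back from each row:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of getAverage's final calculation over the snapshots in the window.
public class AverageDemo {

    static class RPSTuple {
        final long totalAccess;
        final long uptime; // seconds since the server started
        RPSTuple(long totalAccess, long uptime) {
            this.totalAccess = totalAccess;
            this.uptime = uptime;
        }
    }

    // Requests per second over the window spanned by the list.
    static double average(List<RPSTuple> tuples) {
        RPSTuple first = tuples.get(0);
        RPSTuple last = tuples.get(tuples.size() - 1);
        return (double) (last.totalAccess - first.totalAccess)
             / (last.uptime - first.uptime);
    }

    public static void main(String[] args) {
        List<RPSTuple> list = new ArrayList<>();
        list.add(new RPSTuple(0, 3600));   // snapshot at 11:00 AM
        list.add(new RPSTuple(120, 7200)); // snapshot at 12:00 noon
        System.out.println(average(list)); // ~0.0333 req/sec, i.e. 2 req/min
    }
}
```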

Create a Servlet
We will now create a servlet (CounterServlet.java) that displays the average number of requests obtained above.

In addition to the servlet, the same application can also be invoked from the command line.

This article described a process for monitoring Apache Web Server performance that gives users and system administrators the ability to monitor performance over any desired time period. This gives users more control over understanding the usage statistics of their servers across different periods of time, so that they can better provision their systems for peak loads and other factors.

More Stories By Peeyush Taori

Peeyush Taori is a Senior Systems Engineer with Infosys Technologies Limited and has worked in the Java technology domain for more than five years.

More Stories By Kumar Tiwari

Kumar Manava Tiwari has six years of IT experience. He has worked in Cloud Computing, software factories, enterprise application development and application frameworks development. He is currently working as a Technology Lead with Infosys Technologies Limited, India.
