Rightsizing Your Risk Architecture

A comparison of risk management and architectures that satisfy the demands and requirements of enterprises today

In today's evolving corporate ecosystem, managing risk has become a core focus and function of many enterprises. Mitigating consequences such as loss of revenue, unfavorable publicity and loss of market share, among others, is of utmost importance. Risk applications vary in a variety of ways, and the underlying technology is also quite diverse. This article will compare and contrast aspects of risk management and suggest architectures (from cloud-based solutions to high-performance solutions) that satisfy the demands and requirements of enterprises today. We will start with a review of "Risk Management," followed by a look at the term "architecture." Then we will review two specific use cases to illustrate the thought process involved in mapping risk management requirements to appropriate architectures.

Risk Management Reviewed
Risk Management is the process of identifying, classifying, and choosing strategies to deal with uncertainties in relation to the business. Given the rate of change in the world today, uncertainties abound. Moreover, they are constantly changing, making risk management a real-time, continuous and ongoing exercise. For an organization committed to achieving an integrated risk management environment, establishing a risk-aware culture is paramount. For example, meeting financial goals can be managed by identifying events that reduce revenue (negative events) as well as opportunities to increase revenue (positive events). This would be followed by prioritizing those events in terms of likelihood and impact, and then choosing appropriate strategies to deal with the risks. Strategies typically include risk avoidance (i.e., eliminating the possibility of an identified event), risk mitigation (i.e., reducing the probability of a risk event), risk deflection (e.g., buying insurance or stock options), and risk acceptance. Likewise, compliance management (e.g., Sarbanes-Oxley, ISO 9000 - Quality and ISO 27001 - Security) can be couched in similar risk management terms. For example, if the organization does not perform its due diligence to comply with regulatory or industry standard guidelines, the organization may be fined for legal violations, the corporate officers may be subject to imprisonment or customers may be lost. As you can see, every aspect of managing a business can and should be considered in terms of risk management.
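
To make the prioritization step concrete, here is a minimal sketch in Python (the event names, probabilities, and dollar figures are hypothetical, invented purely for illustration) that ranks identified risk events by a simple likelihood-times-impact exposure score and records the chosen strategy for each:

```python
from dataclasses import dataclass

@dataclass
class RiskEvent:
    name: str
    likelihood: float   # probability of occurrence, 0.0 - 1.0
    impact: float       # estimated monetary impact if the event occurs
    strategy: str       # avoid, mitigate, deflect, or accept

    @property
    def exposure(self) -> float:
        # Expected impact, used to prioritize the register.
        return self.likelihood * self.impact

# Hypothetical risk register entries.
register = [
    RiskEvent("Key supplier failure", 0.10, 2_000_000, "deflect (insurance)"),
    RiskEvent("Regulatory fine",      0.05, 5_000_000, "mitigate (compliance program)"),
    RiskEvent("New market entry",     0.30, 1_500_000, "accept (pursue opportunity)"),
]

# Prioritize by exposure, highest first.
for event in sorted(register, key=lambda e: e.exposure, reverse=True):
    print(f"{event.name:25s} exposure=${event.exposure:>10,.0f}  -> {event.strategy}")
```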

Architecture Reviewed
To understand what computer architecture is all about, let's consider building architecture. It is the job of a structural architect to define a structure for a purpose. An architect must first consider the desired use for a building as well as other limiting contextual factors such as desired construction cost, time, and quality (e.g., the building must withstand winds of 150 miles per hour). Next, using knowledge of construction materials and technologies as well as a variety of existing building architectures that can potentially be used as a template, they perform a creative process to suggest possible building designs along with their associated tradeoffs. Finally, they collaborate with the building owners to choose the design considered most appropriate given the scenario. This joint business-technical approach to systems architecture has been referred to as "total architecture," and the process has been referred to as "Total Architecture Synthesis (TAS)." In the immortal words of Paul Brown, the author of Implementing SOA: Total Architecture in Practice, "Total architecture is not a choice. It's a concession to reality. Attempts to organize business processes, people, information, and systems independently result in structures that do not fit together particularly well. The development process becomes inefficient, the focus on the ultimate business objective gets lost, and it becomes increasingly difficult to respond to changing opportunities and pressures. The only real choice you have lies in how to address the total architecture."[1]

Often, the right architecture reflects the advice of Albert Einstein: "Make everything as simple as possible, but not simpler."[2] In other words, the right architecture satisfies all of the business and contextual requirements (unless some are negotiated away) in the most efficient manner. We architects call this combination of effectiveness (it does the job) and efficiency (it uses the least resources) "elegance."

Contrast this with common expectations for risk management computer systems architecture. I am often asked to document the right risk management reference architecture. That request reflects a certain naiveté; any such reference architecture falls into Einstein's category of "too simple." In reality, there are levels of architecture and many potential reference architectures for risk management systems. The process for determining the appropriate one, as in the case of structural architecture, involves the same steps of discovery, analysis, comparison, and collaboration, culminating in the "elegant" design choice.

The remainder of this article will illustrate this process using two specific risk management scenarios. The first use case will reflect a generalized compliance management (e.g., SOX, ISO 9000 and ISO 27001) scenario. The second use case will illustrate a scenario at the other end of the risk management spectrum - managing risk for an algorithmic trading portfolio. Each scenario will include standard functional requirements as well as illustrative contextual limiting factors to be considered. Then we will review some options with associated tradeoffs in order to arrive at an "elegant" design.

Generalized Compliance Management
Compliance management typically involves defining control objectives (what do we have to control to satisfy regulatory or other requirements?), controls (what organization, policies, procedures, and tasks must be enacted to facilitate compliance?), monitoring (are our controls effective and efficient?) and reporting (we need to prove compliance to external entities - auditors and regulators).
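
As a rough illustration of those four elements (none of this reflects any particular GRC product; the objectives, controls, and statuses are hypothetical), the Python sketch below models control objectives and their controls, applies a simple monitoring rule, and produces a report suitable for an auditor:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Control:
    name: str
    effective: bool   # result of the latest monitoring test

@dataclass
class ControlObjective:
    objective: str                              # what must be controlled
    controls: List[Control] = field(default_factory=list)

    def compliant(self) -> bool:
        # An objective is satisfied only if every control tested effective.
        return bool(self.controls) and all(c.effective for c in self.controls)

# Hypothetical control objectives and controls.
objectives = [
    ControlObjective("Restrict access to financial systems",
                     [Control("Quarterly access review", True),
                      Control("Terminated-user deprovisioning", False)]),
    ControlObjective("Retain audit trails for seven years",
                     [Control("Log archival job", True)]),
]

# Reporting: summarize compliance status.
for obj in objectives:
    status = "PASS" if obj.compliant() else "FAIL"
    print(f"[{status}] {obj.objective}")
```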

Consequently, compliance management tends to be well defined and somewhat standardized. One standard framework is the "Committee of Sponsoring Organizations" (COSO) framework. A standard library of objectives is "Control Objectives for Information and related Technology" (COBIT). Another is the "Open Compliance and Ethics Group" (OCEG) "GRC Capability Model." Generalized compliance management processing is characterized by manual data entry, document management, workflow, and reporting. Accordingly, high-performance computing requirements are rare, and capacity requirements tend to increase slowly and predictably in a linear manner. Security requirements consist primarily of access control and auditing requirements. For the sake of illustration, we will assume that the client has minimal internal resources to build a custom system and a need to be fully operational in a short time. Finally, the information in the system is confidential, but would not be considered secret or top secret. That is, there would be minimal impact if information somehow leaked.

Given this set of requirements and context, we will consider a number of architectural options.

The first option is to build a homegrown application, either via coding or using an out-of-the-box platform (e.g., SharePoint). Typically, a customer with the profile above cannot consider coding a homegrown solution due to the cost and time limitations. While an out-of-the-box platform appears to be optimal on the surface, it is not unusual to discover that many changes are required to meet specific requirements.

The second option is to consider the use of commercial-off-the-shelf (COTS) software in the customer's environment. Given that compliance management is somewhat standardized, the implementation time frame is short, and few internal resources are available, this option is more attractive than building an internal application. The strength of most current COTS solutions is that they are typically highly configurable and extensible via adaptation. The term "adaptation" implies that:

  • Changes are accomplished via visual design or light scripting.
  • The solution remains fully supported by the vendor.
  • The solution remains upgradeable.

The associated challenge of the COTS approach is that internal technical and business staff needs to understand the configuration and adaptation capabilities so that an optimal COTS design can be defined. Fortunately, an elegant method exists to achieve the required level of understanding. That method consists of a joint workshop where customer technical and business staff collaborates with a COTS subject matter expert (SME) to model and define adaptations. We refer to this as an "enablement" workshop. I have observed that customer staff can become knowledgeable and proficient in the use of a COTS solution in about two weeks. A word of warning - it is tempting to try to bypass the initial learning curve in the name of efficiency. But customers are in danger of wasting many months of effort only to discover that the COTS platform provides the desired functionality out-of-the-box. For example, one customer I encountered created a custom website to allow data entry of external information into their COTS system only to discover that the application provided import/export features out-of-the-box. In this case, they wasted $100,000 and many months of effort before they discovered the feature. In the case of COTS implementations, ignorance is not bliss.

Before we consider the third option, which uses the phrase "cloud computing," we need to define the context of this phrase. In the most general context, "cloud computing" could represent any architectural option that uses any Internet protocol. That could represent an internal cloud that uses dedicated, high-performance infrastructure. However, the phrase is typically used to indicate a solution that is at least partially outsourced. Outsourcing could involve Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS). In this article we will assume that "cloud computing" indicates outsourcing to a service provider that is providing commodity infrastructure, platform, and software potentially on a virtual platform.

The third option to consider is "cloud computing" on dedicated, physical hardware. Given the customer's limited resources and the rapid-time-to-implementation requirement, this option can be quite attractive. With this option, service provider security becomes an issue. Is their environment protected from outside access as well as access from other clients? In such a case, provider certifications might be considered. For example, are they ISO 27001 certified? Likewise, one would have to consider failover and disaster recovery requirements and provider capabilities. In general, remember that the use of a service provider does not relieve you of architectural responsibility. Due diligence is required. Therefore the "Total Architect" must formulate specific Service Level Agreements (SLAs) with the business owners and corresponding Operation Level Agreements (OLAs) with the service provider.
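
One way to picture that due diligence is a simple coverage check that every business-facing SLA target is backed by an equal or stronger provider OLA. The sketch below is purely illustrative; the metric names and numbers are invented for the example:

```python
# Hypothetical business SLAs agreed with the business owners.
slas = {"availability_pct": 99.5, "recovery_time_hours": 4}

# Hypothetical OLAs committed by the service provider.
olas = {"availability_pct": 99.9, "recovery_time_hours": 2}

# Availability must be at least the SLA target;
# recovery time must be at most the SLA target.
checks = {
    "availability_pct": olas["availability_pct"] >= slas["availability_pct"],
    "recovery_time_hours": olas["recovery_time_hours"] <= slas["recovery_time_hours"],
}

for metric, covered in checks.items():
    print(f"{metric}: {'covered' if covered else 'GAP - renegotiate the OLA'}")
```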

A fourth option consists of cloud computing hosted on a virtual machine. The use of virtual machines can reduce the cost of hardware and associated costs (e.g., cooling) by sharing the physical environment. The tradeoff is that the virtual layer typically adds latency, thus reducing performance. But in this given scenario, we have deemed that performance is not a critical requirement so this is also a viable option and perhaps the "elegant" solution design.

To summarize this business-driven scenario, notice that the architectural issues could be classified as business-oriented or service-level oriented. Nowhere did we discuss bits, bytes, chips, networks or other low-level technical concerns. This illustrates that architecture is a multi-layered discipline. The next scenario will demonstrate the addition of the technical architecture layer.

Algorithmic Trading Portfolio Risk Management
This scenario represents the polar opposite of our initial scenario. Specifically, the crux of high-frequency, algorithmic trading is the use of highly proprietary algorithms and high-performance computing to exploit market inefficiencies and thus achieve large volumes of small profits.

Risk management in standard trading portfolio analysis usually looks at changes in the Value at Risk (VaR) for a current portfolio. That metric considers a held trading portfolio and runs it against many models to ascertain a worst-case loss amount over a window of time and at a given probability. In the case of high-frequency algorithmic trading, positions are only held for minutes to hours. Therefore, VaR is somewhat meaningless. Instead, risk management for algorithmic trading is less about the portfolio and more about the algorithm. Potential outcomes for new algorithms must be evaluated before they are enacted. Without getting too much into the specifics, which are the subject of many books, the challenge of managing risk for algorithmic trading strategies involves managing high volumes of "tick" data (aka market price changes) and exercising models (both historical and simulated) to estimate the risk and reward characteristics of the algorithm. Those models are often statistical, and such statistical models should consider both assuming normality and using Extreme Value Theory (EVT).
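
For readers unfamiliar with VaR, the sketch below illustrates the basic calculation by historical-style simulation. The return scenarios are randomly generated for the example; a real system would apply observed market moves (and far richer models) to the actual positions. The 99% one-day VaR is simply the loss threshold exceeded in only 1% of scenarios:

```python
import random

random.seed(42)

portfolio_value = 10_000_000      # hypothetical portfolio value, in dollars

# Hypothetical one-day return scenarios; in practice these come from
# historical or simulated market moves applied to the current positions.
scenario_returns = [random.gauss(0.0, 0.01) for _ in range(10_000)]

pnl = sorted(r * portfolio_value for r in scenario_returns)   # worst first

confidence = 0.99
index = int((1 - confidence) * len(pnl))
var_99 = -pnl[index]              # loss exceeded in only 1% of scenarios

print(f"1-day 99% VaR: ${var_99:,.0f}")
```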

Consequently, the technical architecture of risk systems deals with tools and techniques to achieve low latency, data management, and parallel processing. The remainder of this article will review those topics and potential delivery architectures. In this scenario, contextual limitations include the need for extreme secrecy around trading algorithms and the intense competition to be the fastest. Since so much money is at stake, firms are willing to invest in resources to facilitate their trading platform and strategy.

While in the previous scenario we were willing to move processing into the cloud, accept reduced performance, and reduce costs using virtualization, the right architecture in this scenario seeks to bring processing units closer together, increase performance, and accept increased cost to realize gains in trading revenue while controlling risks.

A number of tools and techniques exist to achieve low latency. One technique is to co-locate an algorithmic trading firm's servers at or near exchange data centers. Using this technique, one client reduced their latency by 21 milliseconds - a huge improvement in trading scenarios where latency is currently measured in microseconds. Most cloud applications use virtualization, so many people assume that association. But this does not have to be the case, which is why I am explicitly including references to both the cloud and virtualization. Another technique to achieve low latency is to replace Ethernet and standard TCP/IP processing with more efficient communications mechanisms. For example, one might use an InfiniBand switched fabric communications link to replace a 10 Gigabit Ethernet interconnect fabric. One estimate is that this change provides 4x lower latency. Another example replaces the traditional TCP/IP socket stack with a more efficient socket stack. Such a stack reduces latency by bypassing the kernel, using Remote Direct Memory Access (RDMA), and employing zero-copy techniques. This change can reduce latency by 80-90%. A final example replaces single-core CPUs with multi-core CPUs. In this configuration, intra-CPU communications take place over silicon rather than a bus or network. Using this tool, latency for real-time events can be reduced by an order of magnitude, say from 1-4 milliseconds to 1-3 microseconds.
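
To give a feel for the magnitudes quoted above, the sketch below strings those estimates together into a rough one-way latency budget. The baseline figures are illustrative assumptions drawn from the ranges mentioned in this article, not measurements:

```python
# Illustrative baseline one-way latencies, in microseconds (assumed figures).
baseline = {
    "wide-area hop to exchange": 21_000.0,  # removed entirely by co-location
    "10 GbE interconnect":            8.0,  # roughly 4x slower than InfiniBand
    "kernel TCP/IP stack":           20.0,  # 80-90% slower than kernel bypass
    "inter-processor event":      2_000.0,  # single-core, over a bus or network
}

optimized = {
    "co-located at the exchange":     0.0,
    "InfiniBand fabric":              baseline["10 GbE interconnect"] / 4,
    "kernel-bypass socket stack":     baseline["kernel TCP/IP stack"] * 0.15,
    "multi-core, on-silicon event":   2.0,  # order-of-magnitude reduction
}

print(f"Baseline budget : {sum(baseline.values()):>10,.1f} microseconds")
print(f"Optimized budget: {sum(optimized.values()):>10,.1f} microseconds")
```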

When it comes to data, one estimate is that a single day of tick data (individual price changes for a traded instrument) is equivalent to 30 years of daily observations, depending on how frequently an instrument is traded. Again, there are architectural options for reducing such data and replicating it to various data stores where parallelized models can process the data locally. A key tool for reducing real-time data is the use of Complex Event Processing (CEP). A number of such tools exist. They take large volumes of streaming data and produce an aggregate value over a time window. For example, a CEP tool can take hundreds of ticks and reduce them to a single metric such as the Volume-Weighted Average Price (VWAP) - the ratio of the value traded to the total volume traded over a time horizon (e.g., one minute). By reducing hundreds of events into a single event, one gains efficiency, thus making a risk management model tractable.
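
The sketch below shows the kind of reduction a CEP engine performs, computing a one-minute VWAP from a stream of ticks. The ticks are invented for the example; real CEP tools operate on live streams with sliding windows and many more operators:

```python
from collections import defaultdict

# Hypothetical ticks: (timestamp in seconds, price, volume).
ticks = [
    (0, 100.00, 200), (15, 100.05, 100), (42, 99.95, 300),
    (65, 100.10, 150), (90, 100.20, 250), (130, 100.15, 100),
]

# Bucket ticks into one-minute windows and aggregate.
windows = defaultdict(lambda: {"notional": 0.0, "volume": 0})
for ts, price, volume in ticks:
    window = ts // 60
    windows[window]["notional"] += price * volume
    windows[window]["volume"] += volume

# Emit one VWAP event per window instead of hundreds of raw ticks.
for window in sorted(windows):
    agg = windows[window]
    vwap = agg["notional"] / agg["volume"]   # value traded / volume traded
    print(f"minute {window}: VWAP={vwap:.4f}, volume={agg['volume']}")
```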

A number of possibilities exist to facilitate parallel processing - from the use of multi-core CPUs to multi-processor CPUs to computing grids. And these options are not mutually exclusive. For example, one can compute a number of trade scenarios in parallel across a grid of multi-core CPUs. The right parallel architecture is dependent on a number of dimensions. For example, "Does the computational savings gained by having local data justify the cost of data replication?" Or, "Is the cost of a multi-core CPU with specialized cores warranted compared to the use of a commodity CPU?" Or even, "Is the cost of a network communication amortized over the life of a distributed computation?"
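
As a minimal sketch of the grid idea, the example below uses Python's standard multiprocessing pool to spread hypothetical scenario evaluations across local cores. A real grid would distribute the work across machines and would have to weigh the data-replication and network-cost questions raised above; the scenario model here is a stand-in, not a real pricing model:

```python
import random
from multiprocessing import Pool

def evaluate_scenario(seed: int) -> float:
    # Stand-in for a pricing/risk model run against one market scenario;
    # here we simply turn a random shock into a hypothetical P&L figure.
    rng = random.Random(seed)
    shock = rng.gauss(0.0, 0.02)
    return 1_000_000 * shock          # hypothetical portfolio sensitivity

if __name__ == "__main__":
    with Pool() as pool:              # one worker process per available core
        results = pool.map(evaluate_scenario, range(1_000))
    print(f"Worst simulated scenario P&L: ${min(results):,.0f}")
```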

As is always the case, the architect must understand the requirements and the context, and be able to articulate the capabilities and limitations of each option in order to choose the "right" architecture. Clearly, in this case, the customer is highly concerned about secrecy, so the use of an external provider might be out of the question. Also, because virtualization adds one or more layers with corresponding latency, the use of a virtual environment might also be unacceptable. Finally, the wish to outperform competitors implies that any standardized COTS solution is out of the question.

In summary, this article has demonstrated that the "right" risk architecture is a function of the scenario. We have illustrated that there are various levels of architecture (business, context, delivery, technical, data, parallelism, etc.) to be considered depending on the scenario. But when a concrete scenario is presented, the "Total Architecture" approach can be applied and the "right" architecture becomes clear and decisions become obvious.

References

  1. Paul C. Brown, Implementing SOA: Total Architecture in Practice (Upper Saddle River, NJ: Addison-Wesley Professional, 2008), 5.
  2. Albert Einstein. (n.d.). BrainyQuote.com. Retrieved September 6, 2010, from the BrainyQuote.com website.

More Stories By Frank Bucalo

Frank Bucalo is an enterprise architect with CA Technologies specializing in risk management systems. Prior to his time at CA Technologies, he was an architect and a consultant with banking and brokerage clients. He is a member of the Global Association of Risk Professionals (GARP) and an ITIL Service Manager. Mr. Bucalo represents the new breed of “Total Architect” – knowledgeable and experienced across business, technology, and technology management domains.
