Rightsizing Your Risk Architecture

A comparison of risk management scenarios and the architectures that satisfy the demands and requirements of enterprises today

In today's evolving corporate ecosystem, managing risk has become a core focus and function of many enterprises. Mitigating consequences such as loss of revenue, unfavorable publicity, and loss of market share, among others, is of utmost importance. Risk applications vary in a variety of ways, and the underlying technology is equally diverse. This article will compare and contrast aspects of risk management and suggest architectures (from cloud-based solutions to high-performance solutions) that satisfy the demands and requirements of enterprises today. We will start with a review of "Risk Management," followed by a look at the term "architecture." Then we will review two specific use cases to illustrate the thought process involved in mapping risk management requirements to appropriate architectures.

Risk Management Reviewed
Risk management is the process of identifying, classifying, and choosing strategies to deal with uncertainties in relation to the business. Given the rate of change in the world today, uncertainties abound. Moreover, they are constantly changing, making risk management a continuous, real-time exercise. For an organization committed to achieving an integrated risk management environment, establishing a risk-aware culture is paramount. For example, meeting financial goals can be managed by identifying events that reduce revenue (negative events) as well as opportunities to increase revenue (positive events). This would be followed by prioritizing those events in terms of likelihood and impact, and then choosing appropriate strategies to deal with the risks. Strategies typically include risk avoidance (i.e., eliminating the possibility of an identified event), risk mitigation (i.e., reducing the probability of a risk event), risk deflection (e.g., buying insurance or stock options), and risk acceptance. Likewise, compliance management (e.g., Sarbanes-Oxley, ISO 9000 - Quality, and ISO 27001 - Security) can be couched in similar risk management terms. For example, if the organization does not perform its due diligence to comply with regulatory or industry-standard guidelines, it may be fined for legal violations, its corporate officers may be subject to imprisonment, or customers may be lost. As you can see, every aspect of managing a business can and should be considered in terms of risk management.
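
To make the prioritization step concrete, here is a minimal sketch in Python. The event names, probabilities, dollar figures, and chosen strategies are invented for illustration; a real risk register would be far richer.

```python
from dataclasses import dataclass

@dataclass
class RiskEvent:
    name: str
    likelihood: float  # probability of the event over the planning horizon (0..1)
    impact: float      # estimated revenue impact in dollars (negative = loss, positive = gain)
    strategy: str      # "avoid", "mitigate", "deflect" (e.g., insure), or "accept"

    @property
    def exposure(self) -> float:
        """Expected impact (likelihood x impact), used to prioritize the event."""
        return self.likelihood * self.impact

# Invented example entries -- in practice these come from the organization's risk register.
register = [
    RiskEvent("Key supplier failure", likelihood=0.10, impact=-2_000_000, strategy="deflect"),
    RiskEvent("Regulatory fine",      likelihood=0.05, impact=-5_000_000, strategy="mitigate"),
    RiskEvent("New market entry",     likelihood=0.30, impact=1_500_000,  strategy="accept"),
]

# Rank by the magnitude of expected impact, largest first.
for event in sorted(register, key=lambda e: abs(e.exposure), reverse=True):
    print(f"{event.name:22s} expected impact = {event.exposure:>12,.0f}  strategy = {event.strategy}")
```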

Architecture Reviewed
To understand what computer architecture is all about, let's consider building architecture. It is the job of a structural architect to define a structure for a purpose. An architect must first consider the desired use for a building as well as other limiting contextual factors such as desired construction cost, time, and quality (e.g., the building must withstand winds of 150 miles per hour). Next, using knowledge of construction materials and technologies as well as a variety of existing building architectures that can potentially be used as a template, they engage in a creative process to suggest possible building designs along with their associated tradeoffs. Finally, they collaborate with the building owners to choose the design considered most appropriate given the scenario. This joint business-technical approach to systems architecture has been referred to as "total architecture," and the process has been referred to as "Total Architecture Synthesis (TAS)." In the immortal words of Paul Brown, the author of Implementing SOA: Total Architecture in Practice, "Total architecture is not a choice. It's a concession to reality. Attempts to organize business processes, people, information, and systems independently result in structures that do not fit together particularly well. The development process becomes inefficient, the focus on the ultimate business objective gets lost, and it becomes increasingly difficult to respond to changing opportunities and pressures. The only real choice you have lies in how to address the total architecture."[1]

Often, the right architecture reflects the advice of Albert Einstein: "Make everything as simple as possible, but not simpler."[2] In other words, the right architecture satisfies all of the business and contextual requirements (unless some are negotiated away) in the most efficient manner. We architects call this combination of effectiveness (it does the job) and efficiency (it uses the least resources) "elegance."

Contrast this with common expectations for risk management computer systems architecture. I am often asked to document the right risk management reference architecture. That request reflects a certain naiveté; any single reference architecture falls into Einstein's category of "too simple." In reality, there are levels of architecture and many potential reference architectures for risk management systems. The process for determining the appropriate one, as in the case of structural architecture, involves the same steps of discovery, analysis, comparison, and collaboration, culminating in the "elegant" design choice.

The remainder of this article will illustrate this process using two specific risk management scenarios. The first use case will reflect a generalized compliance management (e.g., SOX, ISO 9000 and ISO 27001) scenario. The second use case will illustrate a scenario at the other end of the risk management spectrum - managing risk for an algorithmic trading portfolio. Each scenario will include standard functional requirements as well as illustrative contextual limiting factors to be considered. Then we will review some options with associated tradeoffs in order to arrive at an "elegant" design.

Generalized Compliance Management
Compliance management typically involves defining control objectives (what do we have to control to satisfy regulatory or other requirements?), controls (what organization, policies, procedures, and tasks must be enacted to facilitate compliance?), monitoring (are our controls effective and efficient?), and reporting (how do we prove compliance to external entities such as auditors and regulators?).
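
As a rough sketch of how these four activities relate, the hypothetical data model below maps control objectives to controls and rolls monitoring results up into a report. The class and field names are my own illustration and are not taken from COSO, COBIT, or any other framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Control:
    name: str
    description: str
    effective: bool = False                              # set by the monitoring process
    evidence: List[str] = field(default_factory=list)    # links to audit evidence

@dataclass
class ControlObjective:
    regulation: str        # e.g., a regulatory clause or standard section
    statement: str         # what must be controlled
    controls: List[Control] = field(default_factory=list)

    def compliant(self) -> bool:
        """An objective is satisfied only if every mapped control tests effective."""
        return bool(self.controls) and all(c.effective for c in self.controls)

def compliance_report(objectives: List[ControlObjective]) -> str:
    """Produce the kind of summary an auditor or regulator would review."""
    lines = []
    for obj in objectives:
        status = "PASS" if obj.compliant() else "FAIL"
        lines.append(f"[{status}] {obj.regulation}: {obj.statement}")
    return "\n".join(lines)

# Hypothetical usage
objective = ControlObjective(
    regulation="SOX 404 (hypothetical mapping)",
    statement="Changes to financial reporting systems are authorized and logged",
    controls=[Control("Change approval workflow", "All changes require approver sign-off", effective=True)],
)
print(compliance_report([objective]))
```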

Consequently, compliance management tends to be well defined and somewhat standardized. One standard framework comes from the Committee of Sponsoring Organizations (COSO). A standard library of objectives is Control Objectives for Information and Related Technology (COBIT). Another reference is the Open Compliance and Ethics Group (OCEG) GRC Capability Model. Generalized compliance management processing is characterized by manual data entry, document management, workflow, and reporting. Accordingly, high-performance computing requirements are rare, and capacity requirements tend to grow slowly and predictably in a linear manner. Security requirements consist primarily of access control and auditing requirements. For the sake of illustration, we will assume that the client has minimal internal resources to build a custom system and a need to be fully operational in a short time. Finally, the information in the system is confidential, but would not be considered secret or top secret. That is, there would be minimal impact if information somehow leaked.

Given this set of requirements and context, we can consider a number of architectural options.

The first option is to build a homegrown application, either via custom coding or using an out-of-the-box platform (e.g., SharePoint). Typically, a customer with the profile above cannot consider coding a homegrown solution due to the cost and time limitations. And while an out-of-the-box platform appears optimal on the surface, it is not unusual to discover that many changes are required to meet specific requirements.

The second option is to consider the use of commercial off-the-shelf (COTS) software in the customer's environment. Given that compliance management is somewhat standardized, the implementation time frame is short, and few internal resources are available, this option is more attractive than building an internal application. The strength of most current COTS solutions is that they are typically highly configurable and extensible via adaptation. The term "adaptation" implies that:

  • Changes are accomplished via visual design or light scripting.
  • The solution remains fully supported by the vendor.
  • The solution remains upgradeable.

The associated challenge of the COTS approach is that internal technical and business staff need to understand the configuration and adaptation capabilities so that an optimal COTS design can be defined. Fortunately, an elegant method exists to achieve the required level of understanding. That method consists of a joint workshop in which customer technical and business staff collaborate with a COTS subject matter expert (SME) to model and define adaptations. We refer to this as an "enablement" workshop. I have observed that customer staff can become knowledgeable and proficient in the use of a COTS solution in about two weeks. A word of warning - it is tempting to try to bypass this initial learning curve in the name of efficiency. But customers who do so are in danger of wasting many months of effort only to discover that the COTS platform provides the desired functionality out of the box. For example, one customer I encountered created a custom website to allow data entry of external information into their COTS system, only to discover that the application provided import/export features out of the box. In this case, they wasted $100,000 and many months of effort before they discovered the feature. In the case of COTS implementations, ignorance is not bliss.

Before we consider the third option, which uses the phrase "cloud computing," we need to define the context of this phrase. In the most general context, "cloud computing" could represent any architectural option that uses any Internet protocol. That could represent an internal cloud that uses dedicated, high-performance infrastructure. However, the phrase is typically used to indicate a solution that is at least partially outsourced. Outsourcing could involve Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS). In this article we will assume that "cloud computing" indicates outsourcing to a service provider that is providing commodity infrastructure, platform, and software potentially on a virtual platform.

The third option to consider is "cloud computing" on dedicated, physical hardware. Given the customer's limited resources and the rapid time-to-implementation requirement, this option can be quite attractive. With this option, service provider security becomes an issue: is their environment protected from outside access as well as from access by other clients? In such a case, provider certifications might be considered. For example, are they ISO 27001 certified? Likewise, one would have to consider failover and disaster recovery requirements and provider capabilities. In general, remember that the use of a service provider does not relieve you of architectural responsibility. Due diligence is required. Therefore, the "Total Architect" must formulate specific Service Level Agreements (SLAs) with the business owners and corresponding Operational Level Agreements (OLAs) with the service provider.
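
As one small illustration of that due diligence, the sketch below compares hypothetical business SLA targets against a provider's OLA commitments and flags any gaps. All of the terms and numbers are invented for the example.

```python
# Hypothetical SLA/OLA targets -- the term names and values are invented for illustration.
business_sla = {
    "availability_pct": 99.9,      # promised to the business owners
    "rto_hours": 4,                # recovery time objective after a disaster
    "report_latency_hours": 24,    # time to produce a compliance report
}
provider_ola = {
    "availability_pct": 99.95,     # committed by the cloud service provider
    "rto_hours": 2,
    "report_latency_hours": 12,
}

def ola_covers_sla(sla: dict, ola: dict) -> list:
    """Return the SLA terms that the provider's OLA does not cover."""
    gaps = []
    for term, target in sla.items():
        committed = ola.get(term)
        if committed is None:
            gaps.append(term)
            continue
        # Higher is better for percentages; lower is better for time-based targets.
        ok = committed >= target if term.endswith("_pct") else committed <= target
        if not ok:
            gaps.append(term)
    return gaps

print(ola_covers_sla(business_sla, provider_ola) or "OLA covers every SLA term")
```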

A fourth option consists of cloud computing hosted on virtual machines. The use of virtual machines can reduce hardware and associated costs (e.g., cooling) by sharing the physical environment. The tradeoff is that the virtual layer typically adds latency, thus reducing performance. But in this scenario we have deemed that performance is not a critical requirement, so this is also a viable option and perhaps the "elegant" solution design.

In summary of this business-driven scenario, notice that the architectural issues could be classified as business-oriented or service-level-oriented. Nowhere did we discuss bits, bytes, chips, networks, or other low-level technical concerns. This illustrates that architecture is a multi-layered discipline. The next scenario will demonstrate the addition of the technical architecture layer.

Algorithmic Trading Portfolio Risk Management
This scenario represents the polar opposite of our initial scenario. Specifically, the crux of high-frequency, algorithmic trading is the use of highly proprietary algorithms and high-performance computing to exploit market inefficiencies and thus achieve large volumes of small profits.

Risk management in standard trading portfolio analysis usually looks at changes in the Value at Risk (VaR) for a current portfolio. That metric takes a held trading portfolio and runs it against many models to ascertain a worst-case loss amount over a window of time at a given probability. In the case of high-frequency algorithmic trading, positions are held for only minutes to hours, so VaR on the held portfolio is somewhat meaningless. Instead, risk management for algorithmic trading is less about the portfolio and more about the algorithm. Potential outcomes for new algorithms must be evaluated before they are enacted. Without getting too far into the specifics, which are the subject of many books, the challenge of managing risk for an algorithmic trading strategy involves managing high volumes of "tick" data (i.e., market price changes) and exercising models (both historical and simulated) to estimate the risk and reward characteristics of the algorithm. Those models can be statistical, and such statistical models should consider both distributions that assume normality and those based on Extreme Value Theory (EVT) to capture tail behavior.
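
For concreteness, here is a minimal sketch of a historical-simulation VaR calculation of the kind described above. The scenario count, the heavy-tailed distribution, and the confidence level are illustrative assumptions rather than any particular firm's model; an EVT-based approach would model the tail more carefully.

```python
import numpy as np

def historical_var(pnl_scenarios: np.ndarray, confidence: float = 0.99) -> float:
    """
    Value at Risk via historical simulation: the loss that the portfolio is not
    expected to exceed with the given confidence over the scenario horizon.
    pnl_scenarios holds simulated or historical P&L values (positive = gain).
    """
    # The (1 - confidence) quantile of the P&L distribution is the worst-case
    # outcome at that probability; VaR is reported as a positive loss figure.
    return -np.quantile(pnl_scenarios, 1.0 - confidence)

# Hypothetical example: 10,000 one-day P&L scenarios drawn from a heavy-tailed
# Student's t distribution, since assuming normality alone understates tail risk.
rng = np.random.default_rng(42)
pnl = 10_000 * rng.standard_t(df=4, size=10_000)
print(f"99% one-day VaR: {historical_var(pnl, 0.99):,.0f}")
```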

Consequently, the technical architecture of such risk systems deals with tools and techniques to achieve low latency, data management, and parallel processing. The remainder of this article will review those topics and potential delivery architectures. In this scenario, contextual limitations include the need for extreme secrecy around trading algorithms and the intense competition to be the fastest. Since so much money is at stake, firms are willing to invest in resources to facilitate their trading platform and strategy.

While in the previous scenario we were willing to move processing into the cloud, accept reduced performance, and reduce costs using virtualization, the right architecture in this scenario seeks to bring processing units closer together, increase performance, and accept increased cost to realize gains in trading revenue while controlling risks.

A number of tools and techniques exist to achieve low latency. One technique is to co-locate an algorithmic trading firm's servers at or near exchange data centers. Using this technique, one client reduced their latency by 21 milliseconds - a huge improvement in trading scenarios where latency is currently measured in microseconds. (Most cloud applications use virtualization, so many people assume the two go together, but that does not have to be the case; that is why this article refers to the cloud and to virtualization separately.) Another technique to achieve low latency is to replace Ethernet and standard TCP/IP processing with more efficient communications mechanisms. For example, one might use an InfiniBand switched-fabric communications link to replace a 10 Gigabit Ethernet interconnect fabric; one estimate is that this change provides 4x lower latency. Another example replaces the traditional TCP/IP socket stack with a more efficient one that bypasses the kernel and uses Remote Direct Memory Access (RDMA) and zero-copy transfers. This change can reduce latency by 80-90%. A final example replaces single-core CPUs with multi-core CPUs, so that intra-CPU communications take place over silicon rather than a bus or network. Using this technique, latency for real-time events can be reduced by orders of magnitude, say from 1-4 milliseconds to 1-3 microseconds.
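
As a rough back-of-the-envelope illustration of how these techniques compound, the sketch below applies the reduction factors quoted above to an assumed latency budget. The starting figures are assumptions chosen for the example, not measurements.

```python
# Assumed starting points (illustrative only) and the reduction factors quoted above.
network_hop_us = 10.0                          # assumed 10GbE hop through a kernel TCP/IP stack
after_infiniband_us = network_hop_us / 4       # InfiniBand fabric: ~4x lower latency
after_bypass_us = after_infiniband_us * 0.15   # kernel-bypass/RDMA stack: ~80-90% reduction

event_before_us = 2_500.0                      # single-core processing: 1-4 ms per event (midpoint)
event_after_us = 2.0                           # multi-core over silicon: 1-3 microseconds (midpoint)

print(f"Network hop:      {network_hop_us:7.1f} us -> {after_bypass_us:5.2f} us")
print(f"Event processing: {event_before_us:7.1f} us -> {event_after_us:5.2f} us")
```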

When it comes to data, one estimate is that a single day of tick data (individual price changes for a traded instrument) is equivalent to 30 years of daily observations, depending on how frequently an instrument is traded. Again, there are architectural options for reducing such data and replicating it to various data stores where parallelized models can process the data locally. A key tool for reducing real-time data is Complex Event Processing (CEP). A number of such tools exist. They take large volumes of streaming data and produce an aggregate value over a time window. For example, a CEP tool can take hundreds of ticks and reduce them to a single metric such as the Volume-Weighted Average Price (VWAP) - the ratio of the value traded to the total volume traded over a time horizon (e.g., one minute). By reducing hundreds of events to a single event, one gains efficiency, making a risk management model tractable.
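
To illustrate the kind of reduction a CEP engine performs, here is a minimal sketch that collapses a stream of ticks into one VWAP value per symbol per one-minute window. The Tick structure and the sample data are invented for the example; a real CEP product would compute this incrementally over a sliding window rather than in a batch.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    timestamp: float   # seconds since the epoch
    price: float
    volume: int

def vwap_by_minute(ticks):
    """
    Reduce a stream of ticks to one VWAP value per symbol per one-minute window:
    sum(price * volume) / sum(volume), i.e., the value traded over the volume traded.
    """
    value = defaultdict(float)   # (symbol, minute) -> traded value
    volume = defaultdict(int)    # (symbol, minute) -> traded volume
    for t in ticks:
        key = (t.symbol, int(t.timestamp // 60))
        value[key] += t.price * t.volume
        volume[key] += t.volume
    return {key: value[key] / volume[key] for key in value}

# Hypothetical ticks: hundreds of events collapse to one metric per one-minute window.
ticks = [Tick("XYZ", float(i), 100 + 0.01 * i, 10) for i in range(300)]
print(vwap_by_minute(ticks))
```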

A number of possibilities exist to facilitate parallel processing - from multi-core CPUs to multi-processor servers to computing grids. And these options are not mutually exclusive: for example, one can compute a number of trade scenarios in parallel across a grid of multi-core CPUs. The right parallel architecture depends on a number of dimensions. For example, "Do the computational savings gained by having local data justify the cost of data replication?" Or, "Is the cost of a multi-core CPU with specialized cores warranted compared to the use of a commodity CPU?" Or even, "Is the cost of a network communication amortized over the life of a distributed computation?"
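
The sketch below illustrates the simplest form of this parallelism: fanning independent trade scenarios out across the cores of a single machine with Python's multiprocessing module. The scenario model is an invented stand-in; a grid deployment would replace the local pool with a distributed scheduler, which is exactly where the data-locality and communication-cost questions above come into play.

```python
import os
from multiprocessing import Pool

import numpy as np

def evaluate_scenario(seed: int) -> float:
    """Invented stand-in for a trade-scenario model: simulate one P&L path."""
    rng = np.random.default_rng(seed)
    returns = rng.normal(loc=0.0, scale=0.001, size=10_000)  # hypothetical tick returns
    return float(np.sum(returns))

if __name__ == "__main__":
    seeds = range(1_000)  # one seed per independent scenario
    # Fan the independent scenarios out across all available cores.
    with Pool(processes=os.cpu_count()) as pool:
        pnl = pool.map(evaluate_scenario, seeds)
    print(f"Worst scenario P&L: {min(pnl):.4f} across {len(pnl)} scenarios")
```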

As is always the case, the architect must understand the requirements and the context, and be able to articulate the capabilities and limitations of each option in order to choose the "right" architecture. Clearly, in this case, the customer is highly concerned about secrecy, so the use of an external provider might be out of the question. Also, because virtualization adds one or more layers with corresponding latency, the use of a virtual environment might also be unacceptable. Finally, the desire to outperform competitors implies that any standardized COTS solution is out of the question.

In summary, this article has demonstrated that the "right" risk architecture is a function of the scenario. We have illustrated that there are various levels of architecture (business, context, delivery, technical, data, parallelism, etc.) to be considered depending on the scenario. But when a concrete scenario is presented, the "Total Architecture" approach can be applied and the "right" architecture becomes clear and decisions become obvious.

References

  1. Paul C. Brown, Implementing SOA: Total Architecture in Practice (Upper Saddle River, NJ: Addison-Wesley Professional, 2008), 5.
  2. Albert Einstein, quoted at BrainyQuote.com, retrieved September 6, 2010.

About the Author

Frank Bucalo is an enterprise architect with CA Technologies specializing in risk management systems. Prior to his time at CA Technologies, he was an architect and a consultant with banking and brokerage clients. He is a member of the Global Association of Risk Professionals (GARP) and an ITIL Service Manager. Mr. Bucalo represents the new breed of “Total Architect” – knowledgeable and experienced across business, technology, and technology management domains.
