Microservices Expo: Article

SOA Is Here - Are You Ready for IT?

How loosely coupled applications and their need for stronger governance will impact your IT organization

While significant attention has been paid to the benefits offered by service-oriented architecture (SOA), which has led to an increased understanding of the challenges that SOA poses as well, far less consideration has been given to the changes that this approach will bring to the IT organization itself. With the discussions around SOA having recently shifted from "if" and "why" to "when" and "how," three important questions now need to be addressed by organizations embarking on an SOA strategy: How will you manage your SOA, how will you pay for your SOA, and how will you staff your SOA?

As most would agree, using existing services within an SOA to develop and support new applications provides IT with the opportunity to take a quantum leap forward in productivity and efficiency. As a result, enterprises can address a variety of process requirements faster and more completely than otherwise possible. However, this expanded reuse of existing assets is predicated on consistent adherence to common standards, which requires most IT organizations to demonstrate far more discipline around governance than they've delivered to date. In reality, these faster development cycles are running headfirst into the greater scrutiny required within an SOA, which significantly reduces the margin of error by eliminating many of the safety nets upon which enterprises have come to rely.

Consequently, Eric Austvold of AMR Research recently wrote [Service-Oriented Architectures: The Promise and the Challenge (October 6, 2005)], "SOA will expose the gap between the disciplined and undisciplined IT organization, creating the opportunity for fantastic success and spectacular failure." For example, competing SOA fiefdoms are rising in some organizations. At some point, mass confusion will emerge as these systems are unable to reconcile which "get credit" service is which. Instead of using SOA to streamline their operations, these organizations run the risk of adding further complexity as a new layer of middleware - the super SOA - is now needed to coordinate these various initiatives. The end result is that this "hybrid" approach further limits abstraction, cost effectiveness, and enterprise flexibility. What this means is that the approach to developing, deploying, and managing enterprise applications within an SOA needs to change to secure the promised benefits, and this process entails a variety of significant changes that impact the IT organization.

The Rise of the Shared Service Organization
Most IT organizations are already familiar with the concept of a shared service organization, which is often used to support the "common" assets of the enterprise such as mainframe computing, networks, and the corporate database infrastructure. Because applications are now becoming universal enterprise services, there is a need to increasingly view these individual services as a shared corporate asset as well. As such, the rationale of a shared service model as applied to other asset management requirements begins to make sense here as well.

For example, while many application development and deployment functions will remain closely tied to specific business units or operating groups, there is also an overriding need for the enterprise itself to govern the use of these common assets. As a matter of fact, the effectiveness of these governance efforts will be the key determinant of SOA success. Granted, some of these governance issues are technological in nature and can be solved with centralized registries, automated service monitoring, a common metadata repository, or through the use of an enterprise service bus. However, an even more fundamental need exists to simply define the standards that these technologies will use and to monitor and enforce usage requirements across the asset life cycle. To fulfill this requirement, an SOA Center of Excellence is needed.

Depending on the unique parameters of the organization and its culture, the role of the SOA Center of Excellence can range from light oversight or simple coordination through overriding responsibility for the delivery of services. In any of these scenarios, the fundamental goal should be the elimination of any doubt regarding the appropriate usage of a specific asset, and the SOA Center of Excellence should ultimately deliver the discipline and coordination needed to ensure efficient and effective operations.

As such, an SOA Center of Excellence should be entrusted with maintaining a single view of available services via a common registry along with their definitions. This organization would also be responsible for the enforcement of specific standards that govern usage such as metadata models, versioning standards, release protocols, and testing procedures.
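To make this concrete, the registry responsibilities described above could be sketched in code. This is a minimal illustration only; the `ServiceRecord` and `ServiceRegistry` names, their fields, and the "get-credit" example are hypothetical assumptions, not the API of any particular registry product:

```python
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    """One entry in a hypothetical shared service registry."""
    name: str            # canonical service name, e.g. "get-credit"
    version: str         # version string governed by the release protocol
    owner: str           # business unit accountable for the service
    description: str     # the single agreed-upon definition
    contract_url: str    # location of the service contract (e.g. WSDL)
    deprecated: bool = False

class ServiceRegistry:
    """Single view of available services, keyed by (name, version)."""
    def __init__(self):
        self._records = {}

    def register(self, record: ServiceRecord) -> None:
        """Enforce one definition per (name, version) pair."""
        key = (record.name, record.version)
        if key in self._records:
            raise ValueError(f"{record.name} v{record.version} already registered")
        self._records[key] = record

    def lookup(self, name: str) -> list:
        """Return all non-deprecated versions of a named service."""
        return [r for r in self._records.values()
                if r.name == name and not r.deprecated]
```

A registry like this is what lets the Center of Excellence answer, unambiguously, which "get credit" service is which: every version has exactly one owner and one definition, and duplicate registrations are rejected rather than silently accumulated.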

Beyond just managing these services, the SOA Center of Excellence can also be used to deliver the necessary training and additional development standards needed to ensure a common SOA development methodology as well. The most forward-looking enterprises will even look to this organization to help prioritize long-term technology investments against existing business and IT requirements with a goal of ensuring that the SOA fully supports all of these requirements.

Another important role for the SOA Center of Excellence is helping to overcome the human factors that can potentially limit service reuse. As anyone who has ever run a development shop can attest, many projects are hampered by user concerns regarding the quality or suitability of "third-party" services, or by an unwillingness to make up-front investments that might only benefit those who are able to subsequently reuse the service as a result. In regard to overcoming this grassroots resistance by developers, a variety of "carrot & stick" approaches can and should be employed, and many of these enforcement tools fall under the existing mandate for service governance. With regard to the carrot, other ways to facilitate greater reuse of existing services include the integration of registry information into the development platform to maximize awareness of available services (this approach is typically supported by other forms of educational outreach). Because the ultimate goal is to create a culture in which service reuse is recognized and appreciated, it's not unreasonable to suggest that organizations tally "reusage" and respond and reward accordingly.
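The "reusage" tally suggested above need not be elaborate; it could be as simple as a counter keyed by service name, plus a record of which distinct groups have consumed each service. This `ReuseLedger` sketch and its method names are illustrative assumptions, not an established tool:

```python
from collections import Counter, defaultdict

class ReuseLedger:
    """Hypothetical tally of service reuse, to support recognition and
    reward programs for teams whose services are widely consumed."""
    def __init__(self):
        self._tally = Counter()             # total reuse count per service
        self._consumers = defaultdict(set)  # distinct consuming groups per service

    def record_reuse(self, service_name: str, consuming_group: str) -> None:
        self._tally[service_name] += 1
        self._consumers[service_name].add(consuming_group)

    def top_reused(self, n: int = 3):
        """Most-reused services, as (name, count) pairs."""
        return self._tally.most_common(n)

    def reach(self, service_name: str) -> int:
        """How many distinct groups have reused the service."""
        return len(self._consumers[service_name])
```

Tracking both volume and reach matters: a service reused heavily by one group tells a different story than one adopted across many business units, and a reward scheme may want to favor the latter.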

Paying the Piper
Of course, these added development and management steps produce additional up-front costs that the enterprise must address if it is to enjoy the benefit of subsequent reuse. With regard to specific models for addressing development costs, a number of approaches have already begun to emerge. The most simplistic and easy to implement is what I would call the "anti-enterprise" model, in which these additional costs are solely borne by the development group because they're the ones with the most immediate need for the core functionality. The additional cost associated with service enablement simply becomes a mandated requirement for all development efforts. Unfortunately, this approach is often shortsighted because it gives little incentive outside of decree for investing the additional funds needed to ensure widespread reuse of the developed service. As such, organizations are left to pursue the bare minimum as opposed to the optimal.

Likewise, some organizations have taken a "head in the sand" approach that completely ignores the issue of added cost, arguing that service reuse is so new a concept that little data exists for developing a cost model. Therefore, the true cost of service enablement is typically ignored within the overall budget. The challenge that this approach creates is that the IT organization or business group may be subsequently unable to show effective ROI for these projects. Thus, users have an incentive to do the bare minimum possible, including avoiding this requirement altogether.

Arguably, the best approach is to recognize these costs up front because this encourages both accountability and efficiency throughout the development process. For example, the added cost for service enablement can be defined as a fixed percentage of the total project cost, with these additional costs fully borne by a dedicated source of enterprise funding. With regard to specific budget parameters, a recent study by the Aberdeen Group offers some guidance. According to the research firm, a $10 billion company with a $300 million annual IT budget can save $30 million a year within five years by service-enabling 75 percent of its applications. As such, a $2 million fund for service enablement would result in a very favorable ROI.
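A quick back-of-the-envelope check puts those figures in perspective. The dollar amounts below come from the article's Aberdeen citation; the simple ratios computed from them are my own illustration, not part of the study:

```python
# Figures cited from the Aberdeen Group example above.
it_budget = 300_000_000        # annual IT budget ($)
annual_savings = 30_000_000    # projected yearly savings after service enablement ($)
enablement_fund = 2_000_000    # dedicated service-enablement fund ($)

# Savings as a share of the annual IT budget: 10%
savings_share = annual_savings / it_budget

# Simple payback multiple of the fund against one year of savings: 15x
roi_multiple = annual_savings / enablement_fund
```

Even granting generous assumptions about ramp-up time, a fund recouped fifteen times over in a single year of steady-state savings is the "very favorable ROI" the text describes.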

In addition, enterprise budget models also need to address the costs associated with actual usage. For example, who bears the budgetary impact when a service developed by your group is subsequently employed as the cornerstone of another group's business model? For most organizations, the chargeback mechanisms or other activity-based pricing that they already employ become the model to be used for funding these ongoing costs. Specific mechanisms could include shared service units in which costs are closely tied to consumption, tiered service units that make allowance for each group's business objectives and modify pricing accordingly, or an enterprise pool model that relies upon headcount or other non-usage-based metrics. The important point to remember is that these fees are in lieu of additional development costs, and therefore represent significant savings for the business.
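The three funding mechanisms above can be sketched as simple allocation functions. The function names, parameters, and the multiplier scheme for the tiered model are illustrative assumptions rather than a standard chargeback API:

```python
def shared_service_units(total_cost, usage_by_group):
    """Chargeback proportional to each group's consumption."""
    total_usage = sum(usage_by_group.values())
    return {g: total_cost * u / total_usage
            for g, u in usage_by_group.items()}

def tiered_service_units(total_cost, usage_by_group, tier_multiplier):
    """Usage-based chargeback, weighted by each group's pricing tier
    (e.g. a strategic initiative might carry a discounted multiplier)."""
    weighted = {g: u * tier_multiplier.get(g, 1.0)
                for g, u in usage_by_group.items()}
    total = sum(weighted.values())
    return {g: total_cost * w / total for g, w in weighted.items()}

def enterprise_pool(total_cost, headcount_by_group):
    """Allocation by headcount or another non-usage-based metric."""
    total_heads = sum(headcount_by_group.values())
    return {g: total_cost * h / total_heads
            for g, h in headcount_by_group.items()}
```

The choice among these is a policy decision, not a technical one: the shared-unit model rewards frugal consumers, the tiered model lets the enterprise subsidize strategic groups, and the pool model trades precision for administrative simplicity.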

More Stories By Lance Hill

Lance Hill is the vice president of webMethods' product and solution marketing, where he leads a number of strategic initiatives focused on the development, commercialization, and adoption of webMethods' SOA-based technology. Prior to joining webMethods, he served as the vice president of enterprise engineering and later the Fusion Technology Group for National City Bank. In this capacity, he spearheaded the creation of an internal, end-to-end solution delivery and support organization with responsibilities for integration, application development, workflow, imaging, business intelligence, and portal technology.
