Archive for the 'Integration architecture' Category

Explaining the importance of a good data integration architecture to senior management

Monday, October 22nd, 2012

In the summer I ran a workshop with the Portfolio Executive, our senior decision-making body at the University for the distribution of funds to internal projects. In one half of the workshop I led a discussion about the need to regard master data as an asset and about the problems with our current data integration architecture. I argued that too high a percentage of our current system-to-system integrations are point-to-point, a position we arrived at naturally after many years of allowing technical developers to create bespoke integrations for new systems.

I discussed how this architecture is risky for the institution: the combination of a lack of documentation, the lack of a standards-based approach to data integration and too many system-to-system joins (a “spaghetti” landscape of systems) means that data model changes often propagate through our network of systems in an unmanaged way. This can result in the sudden, unplanned loss of functionality in ‘consumer’ IT systems, because the implications of changes made in a source IT system are not easy to appraise. In some cases we also suffer from a lack of agility in responding quickly and efficiently to new imperatives (such as Government requirements for Key Information Sets, or changes in REF requirements). For example, in terms of data model change, we have had a case where the organisational hierarchy was changed and unexpectedly broke a system that happened to depend on it. As far as agility is concerned, when we start to replace a system (or several systems) with a new one, we usually have to go back to the drawing board each time to deduce how many interfaces there are to the current system and which will have to be put in place for the new one. This can be overly time-consuming.

All too often we create interfaces that essentially do the same task as other interfaces, e.g. moving student data objects between systems, or transferring data about organisational units, research publications and so on. We began to tackle this duplication problem some years ago by replicating key master data in a core system called the Datahub, but this approach has not fully met requirements: the Datahub is not realtime (some data in it can be up to 24 hours old), which has caused many new system implementations to avoid it and connect direct to source data systems instead. The consequence is that we simply perpetuate the problem of having many point-to-point integrations.

Now, this is all rather technical. IT Services would like to invest in developing a new, more sophisticated architecture in which we abstract out key functionality (such as system requests of the type ‘give me all the students from department X’), develop a service-oriented architecture where appropriate, and deploy ESB technology wherever realtime data needs are most pressing. We can see the benefits this would bring: more reliable management of the knock-on effects of system changes (and so reduced system downtime), quicker project start-up times thanks to a more agile integration architecture, and a more standardised, and therefore more sustainable, system integration architecture in the longer term. However, convincing senior management to invest in the rather intangible-sounding concept of a more mature data integration architecture is difficult when constrained to non-technical language! Here is a brief summary of how I attempted to describe the concepts to the Portfolio Executive this summer:

Using a jigsaw analogy suggested to me by the Head of IT Services, I explained that we constantly try to fit new systems seamlessly into our existing systems architecture, as though they were pieces from the same jigsaw puzzle – quite a challenge:

Jigsaw Architecture problem

The more we buy third-party products, the more of a real puzzle this becomes. The yellow boxes, by the way, relate to student/researcher lifecycles – a concept the Portfolio Executive were already familiar with and which I have blogged about elsewhere (see my enterprise architecture blog).

Next I discussed how we can think of our IT systems as roughly supporting three areas: research, education and support services (such as finance and administration). Sticking with the jigsaw analogy, I described how we try to connect systems wherever we need to reuse data from one system in another. For example, we might want to copy lists of courses stored in our Student Information System over to our timetabling system, or present publications stored in our research information system on the Web via our content management system. The ‘joins’, therefore, are created wherever we need a connection through which we can pass data automatically. This keeps data in sync across systems and avoids any manual re-entry of data: the data reuse advantage. I used the diagram below to help discuss this concept:

Illustration of how we currently join IT systems together
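For the technically inclined reader, here is a minimal Python sketch of what one of these bespoke point-to-point ‘joins’ often looks like in practice: a scheduled job that copies course records straight from one system’s database into another’s. The database paths, table names and column names are all invented for illustration; this is the shape of the thing, not our actual code.

```python
import sqlite3  # a stand-in here for the two real database engines

def sync_courses(sis_db_path, timetable_db_path):
    """Nightly copy of course records from the student information
    system into the timetabling database (illustrative only)."""
    sis = sqlite3.connect(sis_db_path)
    timetable = sqlite3.connect(timetable_db_path)
    # The job is hard-wired to the schemas of BOTH systems: a column
    # rename on either side silently breaks it, which is exactly the
    # fragility that a 'spaghetti' of such scripts multiplies.
    courses = sis.execute(
        "SELECT course_code, title, dept FROM courses"
    ).fetchall()
    timetable.executemany(
        "INSERT OR REPLACE INTO tt_courses (code, name, department) "
        "VALUES (?, ?, ?)",
        courses,
    )
    timetable.commit()
    sis.close()
    timetable.close()
```

Multiply a script like this by a few hundred system pairs and the maintenance burden (and fragility) of the point-to-point approach becomes clear.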


I described how, as our requirements around information become more sophisticated, the pressure on our data integration architecture increases. For example, we need integrated views of both research and teaching data to help inform discussions about individual academic performance, and we need cross-linked information about research and teaching on our website. If our data integration architecture is not fit for purpose (i.e. if the overall approach to system ‘joins’ is not planned, standardised, organised, documented and well governed) then we will struggle to deliver these important benefits.

I used the following diagram to discuss what the future vision for the data architecture would look like:

The To Be vision of Joining Systems

This diagram is deliberately simplistic and purely illustrative. The blue ring is entirely conceptual, but it allowed me to talk about the need to decouple the systems that consume data from the master data systems they currently connect to directly (i.e. to get away from such a proliferation of point-to-point system integrations). Naturally I have only shown a small subset of our systems in these diagrams, but this was enough to explain the concepts I needed to convey. I described how the blue ring could be made up of a combination of technologies, but that we would need to standardise carefully on these, organisation-wide, in order to increase our chances of sustaining the integration architecture over time. I didn’t mention service-oriented architecture, but some of the blue ring could be composed of services that abstract out key, commonly used functionality using SOA technology. We didn’t discuss ESB either, but some of the blue ring could use ESB technology. We also have a data warehouse solution, which could be used to replicate some or even all of our master data if we wish to go that route for data reuse.
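To give a flavour of what ‘decoupling’ means in practice, here is a purely conceptual Python sketch of the publish/subscribe pattern that ESB-style middleware provides. A real deployment would use proper messaging infrastructure rather than this in-memory toy, and the names and topics are invented.

```python
from collections import defaultdict

class Bus:
    """A toy message bus illustrating the decoupling idea behind the blue ring."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publishing (master data) system knows nothing about its
        # consumers: new consumers can be added, or removed, without
        # touching the source system at all.
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
bus.subscribe("org.unit.changed", lambda m: print("Timetabling notified:", m))
bus.subscribe("org.unit.changed", lambda m: print("Web CMS notified:", m))
bus.publish("org.unit.changed", {"unit": "Department X", "parent": "Faculty Y"})
```

Contrast this with the earlier point-to-point sketch: here a change to the organisational hierarchy is announced once, and every interested consumer hears about it through the same managed channel.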

Determining the exact composition of the blue ring (i.e. the exact future data integration architecture for our institution) is not yet possible, because we are still gathering information about the “As Is” (see my blog on the Interface Catalog). When we have fuller data about our current architecture, we will be able to review it and decide by what percentage we wish to reduce groups of point-to-point integrations (replacing them with web service APIs, say), whether we might want to replace the Datahub with data warehouse technology, and so on.
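As a sketch of the kind of analysis the completed Interface Catalog should allow, consider grouping the catalogued interfaces by the entity they carry and flagging any entity moved by several separate interfaces as a candidate for consolidation behind a single web service. The rows below are invented examples, not real catalogue entries.

```python
from collections import Counter

# Invented sample rows; the real Interface Catalog holds many more fields.
interface_catalogue = [
    {"source": "SIS",  "target": "Timetabling", "entity": "Student"},
    {"source": "SIS",  "target": "VLE",         "entity": "Student"},
    {"source": "SIS",  "target": "Library",     "entity": "Student"},
    {"source": "Pure", "target": "Web CMS",     "entity": "Publication"},
]

def consolidation_candidates(catalogue, threshold=3):
    """Entities shipped across 'threshold' or more interfaces are the ones
    where a single shared API would pay for itself fastest."""
    counts = Counter(row["entity"] for row in catalogue)
    return [entity for entity, n in counts.items() if n >= threshold]

print(consolidation_candidates(interface_catalogue))  # ['Student']
```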

In order to throw more resource at developing the information needed for this analysis, we will be delivering a business case to the Portfolio Executive requesting funds to help us. I hope to describe in a future blog post how we have set out the business model in that document. Meanwhile, it is possible to continue to mature the integration architecture on a project-by-project basis.

JISC are hosting an Enterprise Architecture workshop next month at which SOA is on the agenda. I hope to have useful conversations there about where other universities are in maturing their data integration architectures. One thing I can report is that the Portfolio Executive felt that they understood the concept of the ‘blue ring’ and its importance, and are prepared to accept our forthcoming business case. This feels like an important step forward for our organisation.

Core Data – an important asset in terms of efficiency, agility and strategy

Sunday, October 21st, 2012

Earlier in this project I read the JISC infoNet “10 Steps to Improving Organisational Efficiency” with interest, and considered how this project and other activities we are undertaking at the University of Bristol fit in with the ten steps it summarises.

Step 1 contains the instruction “Review a copy of your college or university’s strategic plan”. This is indeed one of the first jobs I undertook when I started as Enterprise Architect at Bristol in 2011. Since then I have been doing some work on benefits maps to help understand our overarching strategy better – see my blog post about this at http://enterprisearchitect.blogs.ilrt.org/2012/10/13/what-are-the-benefits-of-benefits-maps/.

Step 2 recommends: “Create a basic view of the functions carried out by the institution”. There is a recommended JISC infoNet Business Classification Scheme to help with this. I am not sure we have done this step explicitly so far, although one of my early pieces of work on lifecycles looks at IT systems from a stakeholder perspective, and more recently I have been working with the Director of Planning, who is using the balanced scorecard approach; so it would be useful for me to review how fully we meet this objective.

Step 3: “Create a basic view of your IT Architecture showing the applications and services, the interfaces between them and the data transferred”. It takes very few words to describe this step, and yet it is a massive one for IT Services in my organisation. We have documented applications and services, and we are applying ITIL to good effect, using a service catalogue as a central record of all IT systems offered as services within the organisation. However, the interfaces between our systems are absolutely key for us, and they are the focus of this project. I have documented what we’re doing with our Interface Catalog in my other blog: http://enterprisearchitect.blogs.ilrt.org/category/interface-catalogue/ and we have now started a data dictionary (a summary overview of which is offered via an Entity Relationship diagram of the data model) covering the data entities exchanged and manipulated at the interface level. I will blog soon on how we are agreeing the data dictionary template as a standard, and on which data entities need to be documented in the data dictionary and why. This work is one of the main thrusts of this JISC project.
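To make this concrete, here is a hypothetical Python sketch of the sort of record the data dictionary might hold for each entity. The field names below are illustrative only, since (as noted above) the actual template is still being agreed.

```python
from dataclasses import dataclass, field

@dataclass
class DataDictionaryEntry:
    """One illustrative data dictionary record per exchanged entity."""
    entity_name: str             # e.g. "Student", "Organisational Unit"
    master_system: str           # the system of record for this entity
    description: str
    attributes: dict = field(default_factory=dict)   # attribute -> type/notes
    consumed_by: list = field(default_factory=list)  # interfaces carrying it

student = DataDictionaryEntry(
    entity_name="Student",
    master_system="Student Information System",
    description="A registered student of the University",
    attributes={"student_id": "string, unique", "programme_code": "string"},
    consumed_by=["SIS -> Timetabling", "SIS -> VLE"],
)
```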

Step 4: “Identify any ‘bloated’ or redundant applications that consume resource far in excess of their actual value to the organisation and plan to phase them out. In time you will look at the business processes that drive this.” This step is about looking at return on investment. Although I wouldn’t say we are managing it through a formal methodology at the moment, we are looking at benefit in terms of benefits maps, and we are also developing extremely useful documentation about the costs of our services (especially the developer FTEs required to manage the systems that constitute those services, or the cost of outsourced support, in addition to any software licensing costs after the upfront investment).

Step 5: “Use the IT architecture conclusions as a starting point for discussions involving management, teaching and administrative colleagues about architecture at enterprise level. ….. to build a roadmap of integrated process and ICT change.” This is a valuable step, and at the moment it is happening through benefits work taking place with the Education, Research and Support areas of our organisation. When the three sets of benefits maps come together, I think we will have the basis for an incredibly useful dialogue about the priorities for IT support of strategic requirements, considered holistically across the University.

Step 6: “Identify the likely lifespan and replacement cycle for existing applications.” This is happening as part of our application of the ITIL standard across IT Services.

Step 7: “Consider how a service-oriented approach (SOA) to your data layer could streamline the architecture and reduce the need for interfaces/data retyping. Plan to turn the ‘spaghetti’ into a ‘lasagne’.” Although there are three steps after this one, I will stop at step 7 in this blog post, because it is very much the focus of this JISC project. The documentation in our interface catalog shows us where, say, ten different interfaces are exchanging the same student entities; in such cases we could abstract out the common functionality and offer it via a single API, possibly as a Web Service. That way, should we change the structure of a student entity in our master data system, we can manage the propagation of that change through the systems that consume the data far more safely, easily and efficiently than at present. We can do this in turn for every piece of functionality that we see is heavily used. We can also consider the need for realtime data updates across our systems and where we might start deploying ESB solutions in earnest. For more information about our interface catalogue please see: http://enterprisearchitect.blogs.ilrt.org/category/interface-catalogue/.
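As an illustration of the ‘single API’ idea, here is a minimal Python sketch (using Flask) of a web service answering the ‘give me all the students from department X’ style of request mentioned earlier. The route, function names and fields are all hypothetical; the point is that consumers depend on this stable contract rather than on the master system’s internal schema.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def get_students_by_department(dept_code):
    # In a real service this would query the master student record system;
    # canned data is returned here purely for illustration.
    return [{"student_id": "s1234567", "name": "A. Student",
             "department": dept_code}]

@app.route("/students/<dept_code>")
def students(dept_code):
    # If the student entity changes shape in the master system, we adapt
    # this one service rather than ten separate interfaces.
    return jsonify({"students": get_students_by_department(dept_code)})

if __name__ == "__main__":
    app.run()
```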


Developing a new Integration Architecture – the importance of the “As Is”

Thursday, August 23rd, 2012

The core data integration project is about identifying core master data as a valuable, reusable asset within the organisation. One part of the project’s activity is designing the “to be” architecture that will support our goals for master data. However, we also need to document the “as is” architecture so that we can roadmap from the “as is” to the “to be”. This methodology is part of TOGAF.

To this end I have been working on establishing an Interface Catalog, now being contributed to by technical teams at the University, according to a defined standard to suit our purposes. You can read more about this at http://enterprisearchitect.blogs.ilrt.org/2012/07/31/the-value-of-an-interface-catalog/.

Performance Enhancement – how should technology enable it?

Friday, May 4th, 2012

The Performance Enhancement project is now fully underway at the University of Bristol. The project team comprises people with a wide-ranging set of skills – from Change Managers to User Experience and Web Design experts – and includes me as Enterprise Architect.

In an excellent initial meeting a few weeks ago, the team members met one another and explored how our different skills will complement each other in achieving the overarching goal of the project. In Enterprise Architecture, one of the steps in applying a TOGAF-based methodology is to express the architecture vision and principles of any endeavour. So, in high-level terms, just what is Performance Enhancement considered to be in the context of our project, and how do we wish to enhance or change the processes by which it is currently undertaken at Bristol?

Performance Enhancement is not considered to relate only to staff review meetings:

“Performance enhancement should be an ongoing, continuous process that is not only seen as constructive but is in fact expected of leaders and managers by staff.” (http://www.bris.ac.uk/hr/policies/performance/)

Here is a diagram from our HR department that indicates the range of processes constituting performance enhancement (please note that the word ‘tools’ here should not be confused with technology-based tools):

Aspects of Performance Enhancement

The Performance Enhancement project aims to equip leaders and managers with the skills and confidence to have appropriate and effective conversations throughout the whole range of performance enhancement mechanisms and processes in place. It will also provide leaders, managers and their staff with the information they need to enable better-informed discussions to take place around performance.

This information aspect is where the technology component of our project comes in. For the first time at Bristol, we wish to offer good quality and (as far as possible) complete data to those participating in performance enhancement conversations. The information we provide should enable a holistic view of the sorts of activities a member of academic staff participates in: from data about teaching contact and preparation hours, through research publications and participation in events, to measures of esteem or leadership roles that are not documented in our master data corporate systems but are nonetheless relevant to discussions about achievement or progression.

In terms of technical architecture, then, how are we to provide timely and accurate information from both our corporate teaching-related and research-related systems, surfaced in a view-layer tool to support performance enhancement discussions? Our current architecture involves introducing a Web Services component layered on top of our current timetabling system (in order to extract teaching contact hours in a sustainable way), plus using the Pure research information system (to provide research-related information). Over the course of the project (which will run for at least another two years) we hope to introduce new sources of information, perhaps some of them external. We also need to think carefully about the sorts of data we wish to archive and for how many years we wish to preserve them. For example, we might wish to store year-on-year the percentage FTEs of academics teaching across various courses, but we would not want to record, say, the time of day or the room in which they taught. All such assumptions are being thought about carefully at this stage, in terms of the University’s more global business intelligence requirements (for example, although performance enhancement may not need buildings and rooms data, Estates might wish to have this data archived for their own, entirely separate purposes).
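As a small illustration of that archiving decision, the Python sketch below filters a (hypothetical) timetabling record down to the fields worth keeping year-on-year for performance conversations, discarding details such as room and start time. All field names are invented.

```python
# Fields we might retain long-term for performance enhancement purposes.
ARCHIVE_FIELDS = {"academic_id", "year", "course_code", "teaching_fte_pct"}

def archive_record(timetable_row):
    """Keep the long-term fields; drop room/time details that performance
    enhancement does not need (Estates may archive those separately)."""
    return {k: v for k, v in timetable_row.items() if k in ARCHIVE_FIELDS}

row = {
    "academic_id": "ab1234", "year": 2012, "course_code": "PHYS10001",
    "teaching_fte_pct": 12.5, "room": "G.27", "start_time": "09:00",
}
print(archive_record(row))
# -> {'academic_id': 'ab1234', 'year': 2012,
#     'course_code': 'PHYS10001', 'teaching_fte_pct': 12.5}
```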

I will blog more about our technical integration architecture as it unfolds (implementation will start in earnest in the summer). Meanwhile the project is undertaking a programme of engagement with departments, to talk about the cultural and process aspects of performance enhancement as well as the introduction of our new technical tool to support those processes. Feedback from this round of engagement will help us (on the technical side) to develop the view-layer part of the tool so that it is user-friendly and appropriate to the tone of the project. We intend to use the tool to solicit feedback from a pilot group who have agreed to trial it for staff review discussions in the autumn.

A review phase of the project (at the end of this year) will include an evaluation of the pilot tool against any viable third-party solutions, should they become available. If nothing suitable is available externally, we have the option to develop our in-house pilot tool further, make it production-grade and extend its use throughout the University. Either way, the data integration work we’re doing at the middleware layer will support whichever view-layer tool is deemed suitable for performance enhancement in the long term.