Designing a SOA Roadmap for the University

August 5th, 2013

Over on my other blog, I talk about our first-pass Service Oriented Architecture (SOA) roadmap for Bristol, arising from the Master Data Integration strategy. See the description at http://enterprisearchitect.blogs.ilrt.org/2013/08/05/looking-back-on-year-2/. In brief, it contains six steps, listed as follows:

Step 1: Make the initial business case for Service Oriented Architecture (SOA)

Purpose: gaining senior level buy-in



Step 2: Complete the data dictionary and the interface catalogue to at least 80% for all master data system integrations.

Purpose: good documentation of the "As Is" data integration architecture, to speed up the analysis done ahead of any new system integration, to speed up analysis of the impact of any changes to master data structures, and to offer clear semantics to support the interpretation of business reports.


Step 3: Introduce data governance.

Purpose: ensure that changes to master data structures are managed in a controlled way as they propagate throughout our IT system landscape (avoiding unexpected system and process failure and/or damage to our organisation's reputation).


Step 4: Analyse the interface catalogue and develop a logical data model from the data dictionary.

Purpose: describe more specifically the cost savings and agility that the University stands to gain if it invests in consistently applied SOA technology (reducing the proliferation of point-to-point integrations) and design the “To Be” architecture – a precursor for step 5.

Step 5: Deploy a SOA technology solution, train IT developers to use and maintain it.

Illustrative diagram of the proposed “blue ring” integration architecture

This is just an illustrative diagram rather than a systems architecture diagram. Through this step we hope to deliver:

  • A cheaper, less complex to maintain, standardised master data integration architecture (the blue ring).
  • Higher guarantees of data security & compliance (data services to have Service Level Agreements and data security to be built in from the outset).
  • Agility to respond quickly to Cloud and Shared Services opportunities.
  • Quicker responses to changes in external reporting data structures (KIS, HESA, REF etc.).

Step 6: Train our IT developers and Business Analysts to work together using a standard set of skills and tools based around BPEL, BPM etc.

At this point we expect to reach the level of SOA sophistication where we have the ability to orchestrate and optimize end to end processes that share and manipulate data, creating efficiencies and business agility across the institution.
Suffice it to say, we are some years off Step 6! We are currently working on steps 2–4 above. This Autumn I am running a knowledge exchange workshop for HEIs working on SOA roadmaps, to gather, compare and contrast their plans, successes and lessons learned to date. I look forward to sharing the outcomes of the event.


Case Study Project Report

August 5th, 2013

This is the case study report given to the JISC, July 2013:

Project Name: Core Data Integration Project

Lead Institution: University of Bristol

Project Lead: Nikki Rogers, Enterprise Architect, IT Services

 

Background

The original proposal for this project described how we planned to augment our institutional Enterprise Architecture approach with JISC resources in order to tackle the problem of achieving full integration of data in our various learning, teaching and research IT systems. This was primarily in order to provide better quality data to support our staff Performance Management processes – a big strategic driver at the time, and an area which required the seamless combination of data from a number of separate IT systems, so that the University could view the activities of academic staff (who may also teach) in an holistic way, and so that trained managers could better support staff in achieving optimal success in research, education and citizenship. Since the time that the Core Data Integration project was conceived, the Performance Management project has scaled down its IT ambitions somewhat and focussed much more on cultural change – a very good path to take, prompted by a realisation that IT alone shouldn’t be used to drive cultural change in this sensitive area.

Nonetheless, improving data integration at our institution – whether to underpin the smooth running of the organisation’s operational processes, to provide the data platform to support our increasingly ambitious business intelligence goals, or to support the reporting and dissemination of our University’s activities via the Web and to the government – is a generic concern for any organisation. Indeed, over the lifetime of the project, whilst the Performance Management project became less of a driver for the data integration work (for completely separate reasons), other projects and strategic initiatives at the University came to bear with even greater weight on the pressing need for Bristol to improve its data integration architecture. The need to invest in the right IT architecture to support an improved data integration strategy at Bristol has become recognised at a very senior level within the institution during the lifetime of this project – an important step forward for our institution.

 

Aims and objectives

The broad aim of this project was to see to what extent the University of Bristol should and could adopt a Service Oriented Architecture (SOA) approach to existing problems of integrating master data automatically across master data systems (our student administration system, our research information administration system, our estates management system, our finance system and so on). The objectives were to:

  • document where the problems exist in our current, bespoke data integration architecture,
  • identify where costs could be reduced in terms of maintaining it,
  • describe how agility could be improved – in terms of the speed and ease with which the data integration architecture can be adapted to cope with IT system replacements, the changing requirements of government reporting and so on,
  • convince senior management at the University of the value of investing in SOA technology (if indeed we could be sure of the benefit of this approach).

Context

The growth in the number of IT systems at the University in recent years has been significant. There are now systems to support the student and research lifecycles end to end. For example, to support the student lifecycle we now have a CRM system, an applicant portal, a student portal, a timetabling system, a student administration system, a VLE solution, an alumni database and the list goes on. The automatic passing of student (and other) data between these systems is essential if we are to avoid the likely errors and significant amount of time involved in re-keying large student datasets from one system into the next.

However, these systems have typically been joined over the years using a varying set of technologies, chosen by individual developers according to what they were familiar with and had existing expertise in at the time. As the University’s systems architecture has increased in complexity, the number of joins has proliferated and the management of the joins has become increasingly cumbersome and expensive. Due to the scale of information sharing that is required to support University processes across the student and research lifecycles, the number of joins (integrations) that now exist runs into the thousands. This is a problem for many institutions and is sometimes referred to as a spaghetti-like systems architecture. Hundreds of joins are not documented at all, have been implemented using many different IT techniques often known only to the technical developer who created them, and can be hard for other developers to unravel and maintain.

We are now in a position where seemingly small system changes have had major consequences for other joined systems, and have resulted in ‘broken’ point-to-point interfaces. For example, we have had cases where an organisational hierarchy change made in our HR system caused University system breakages elsewhere, because there was no process in place to manage the safe propagation of the change throughout the set of systems that rely on this master data. Moreover, staff users report that in many cases it is no longer clear how data is being stored and used between different systems. Even expert users often know that data is transferred from one system to another ‘somehow’, but in the case of cross-functional processes they are often unable to easily ascertain to what extent this is via an automated interface, and when or how manual intervention may be required.

In the current economic climate we not only need to improve the reliability and cost-effectiveness of our data integration architecture to cope with operational processes that increasingly depend on IT systems, but we also need to respond effectively to changes in government-driven reporting requirements: for example the REF (Research Excellence Framework), KIS (Key Information Sets), HESA reporting and so on. The external demand for data puts increasing pressure on our internal ability to manage our data well.

Furthermore, the growth of Cloud opportunities such as Software as a Service (SaaS) and the possibility of shared services within the HE sector mean that, in order to be ready to take advantage of these potentially cost-saving external solutions, we need to understand how we share data across processes and IT systems better than ever. How, for example, could we move our student administration system “into the Cloud” if we can’t easily import data from our CRM into it, and export data easily from it into our alumni database?

Finally, in an increasingly competitive sector, the University is paying greater attention than ever to business intelligence. Our current strategy for measuring the University’s year-on-year performance against its strategic performance indicators depends on the quality of the data supplying the data warehouse that is used for BI reporting. If we don’t have a well managed, standardised data integration architecture, together with centrally (and unambiguously) defined semantics for our master data structures, then we will most likely fail to deliver on our BI strategy in the longer term.

The business case

University projects are the agents of change at Bristol. To initiate a project, the project proposal is taken through a two-stage business case process: Stage 0 initiates the process and involves a standard-format, initial business case document being presented to a senior-level decision-making body (our portfolio executive) for approval. We have been through this process for this project at Bristol, and in November 2012 our Stage 0 Business Case for Master Data Integration was approved, allowing us to proceed and develop the Stage 1 Business Case, a fully costed and very thorough document. We are still writing the Stage 1 business case, but it can be summarised as follows.

The master data integration project will deliver the following outcomes:

  • Completion of documentation for the data dictionary and interface catalogue
  • Full analysis of the above, to identify where we can reduce many point-to-point system integrations to a minimum number of sustainable data services,
  • The establishment of a data governance structure, to include data stewards responsible for the control of changes to data structures in master systems over time,
  • A full technical options analysis and a recommended SOA solution for the University of Bristol
  • Incremental implementation of SOA for the reuse of data in our master data systems, colloquially known at Bristol as the “blue ring”

The master data integration project will deliver the following benefits:

  • Improve the operational management of master data and how it is automatically passed between IT systems,
  • Improve our ability to carry out strategic level reporting and to easily provide management or government/funding reporting information on demand,
  • Increase the speed and ease (thereby reducing the cost) of implementation of new IT systems on an on-going basis (currently much time is spent in analysing existing system integrations ahead of every new IT system to be integrated, due to a lack of central documentation).
  • Reduce the cost and increase the reliability of maintaining our data integration architecture on an ongoing basis (note that Gartner undertook research demonstrating that 92% of the Total Cost of Ownership – http://www.gartner.com/it-glossary/total-cost-of-ownership-tco/ – of an average IT application, based on 15 years of use, is incurred after the implementation project has finished. A significant part of those costs is concerned with maintaining the application’s seamless integration within the organisation’s application architecture, so by simplifying this aspect we will save costs over time),
  • Ensure that Bristol is part of the sector-wide movement towards addressing data issues, and that we are ready to take advantage of Cloud and Shared Services opportunities

The master data integration project will cost:

To be continued! We are finalising costings for a business case to be completed in the Autumn of 2013.

Key drivers 

The key drivers have been described above but can be summarised as follows:

  • The University needs to save money on cumbersome manual data re-entry and data cleansing processes that could be automated,
  • The University needs good data in order to do business intelligence reporting,
  • The University needs to reduce the cost and unreliability of the current data integration architecture (caused by the proliferation of point-to-point system integrations),
  • The University needs to be agile in response to on-going changes in government and funder reporting requirements,
  • The University needs to be confident that data security is managed well as student and staff personal data is passed from system to system.

 

JISC resources/technology used

  • JISC infoNet “10 Steps to Improving Organisational Efficiency” (see http://coredataintegration.isys.bris.ac.uk/category/jisc-infonet-10-steps-to-improving-organisational-efficiency/ )
  • Support from the JISC Transformations Programme in networking with colleagues and arranging follow-up meetings regarding SOA and the application of an Enterprise Architecture approach to developing our master data integration strategy at Bristol
  • JISC ICT Strategic Toolkit (http://www.nottingham.ac.uk/gradschool/sict/ )
  • JISC Archi Modelling tool (http://archi.cetis.ac.uk/)
  • Data Asset Framework – audit tool for data (http://www.data-audit.eu/ )
  • The #luceroproject (http://lucero-project.info/lb/ )

 

Outcomes:

Achievements

  • Achieving senior level buy-in to the master data integration strategy that evolved from this project. This was achieved through a one-off workshop session with the University’s Portfolio Executive and a follow up Stage 0 business case which was approved.
  • Getting buy-in from within IT Services to starting and maintaining the interface catalogue (see http://coredataintegration.isys.bris.ac.uk/2013/06/16/important-documentation-for-soa-the-interface-catalogue-and-data-dictionary/ ) and similarly the enterprise data dictionary, which people beyond IT Services are helping to complete.
  • Collaboration with other Universities – useful visits to the Universities of Cardiff and Oxford to discuss their own experiences of deploying SOA solutions, with a follow-up workshop at Bristol planned for October 2013.
  • The project blog, which attracted interest and feedback from useful contacts around the sector: http://coredataintegration.isys.bris.ac.uk/category/enterprise-architect/
  • Developing the SOA roadmap, tailored to Bristol’s needs.

Benefits

  • Starting the interface catalogue and data dictionary for particular projects has brought benefit directly to those projects (for example, a project that will introduce a new ERP system to Bristol over the next year is currently using the data dictionary approach to record the data models for our finance, payroll and HR databases ready for data migration). This in itself demonstrates the value of central documentation, and has increased buy-in for the approach and an understanding that we need these two information stores to be centrally supported services, maintained in a devolved way across the organisation, on an ongoing basis. It has become clear that just by taking this step we will save costs in future when migrating to new systems. In addition, the interface catalogue has already revealed a large number of separate, point-to-point system integrations that essentially perform tasks on the same data (many are passing student data, many are passing organisational hierarchy data, many are passing staff data between systems and so on) – see the query sketch after this list. This gives us evidence that we have obvious opportunities for rationalising our integration architecture significantly.
  • Collecting examples of where data problems exist across the organisation helps strengthen the case for SOA,
  • Helping senior management to clearly understand why Bristol needs good master data management has been extremely beneficial to this project and a necessary precursor to future investment in SOA.
  • Collaboration with other Universities continues to be invaluable in acquiring lessons learned, comparing and contrasting approaches and gaining knowledge from others’ experiences of deploying SOA solutions – we are lucky to be in a sector that is so willing to share information in this way. It also paves the way to our sector potentially having more of a collective impact on vendors in the future to ensure that they meet our technical and data requirements when developing their products.
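To illustrate the kind of analysis referred to in the first benefit above: once the interface catalogue is reasonably complete, a query along the following lines can group existing interfaces by the data object they carry and so highlight candidates for consolidation into shared data services. This is only a sketch – it assumes interface catalogue tables like those described in the June 2013 post further down this page, and the table and column names here are illustrative rather than our exact schema.

    -- Sketch only: assumes an "interfaces" table whose rows reference an
    -- "objects" look-up table of data object names (student, staff,
    -- organisational unit and so on). Names are illustrative.
    SELECT o.object_name,
           COUNT(*) AS interface_count
    FROM   interfaces i
           JOIN objects o ON o.object_id = i.object_id
    GROUP  BY o.object_name
    HAVING COUNT(*) > 1              -- more than one interface moving the same object
    ORDER  BY interface_count DESC;  -- biggest consolidation candidates first

Each cluster of interfaces moving the same data object is a candidate for replacement by a single, reusable data service.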

Drawbacks

  • Getting senior level buy-in for significant investment in something that is not clearly visible to the end-user – i.e. middleware – and which may involve high up front costs and a longer time before benefits are realised is a challenge that requires a lot of strong evidence and good communication with a non-technical audience.
  • At Bristol, using Business Cases as the sole mechanism through which we initiate a project is somewhat restrictive and time-consuming, although it can be argued conversely that it enforces the kind of rigour needed to scrutinise the costs and benefits of implementing each stage of our SOA roadmap.
  • Getting consensus across IT teams can be difficult when the prevailing culture has been based around a freedom to implement point-to-point interfaces between systems. Debate also ensues as to whether SOA should be rolled out experimentally, taking a toe-in-the-water approach with open-source solutions initially, or whether we should first procure a relatively expensive commercial product and roll it out incrementally over time.
  • We anticipate that implementing data governance to support SOA will again challenge some of the prevailing organisational culture where there is a localised sense of “data ownership” in certain parts of the institution, rather than a general appreciation that IT systems may hold master data which should be managed as a corporate data asset rather than viewed as in some way owned and controlled exclusively by a particular group of stakeholders.

Key lessons

  • cultural change is necessary – both for IT Services (in terms of moving away from traditional point-to-point system integration approaches) and more widely in terms of perceptions of data “ownership” – convincing people on the ground requires identifying real pain points that they can relate to
  • it takes longer than you would expect to develop an institutional roadmap for SOA that all relevant parties (from non-technical senior management down to “techies” in IT services) can buy in to
  • hearing about real experiences from other HEIs is valuable
  • doing an institutional maturity assessment is essential in understanding where on the road to SOA you are
  • understanding the relationship of the design of a SOA plan with business processes and strategic goals is essential – hence an EA approach is highly recommended
  • describing a roadmap for the particular institution concerned is essential in getting buy-in and helping people to understand the business case for SOA

Looking ahead

The next phases of the master data integration strategy roadmap are:

  • an Autumn Knowledge Exchange Workshop on SOA to be hosted at the University of Bristol,
  • submission of the Stage 1 business case (described above) to the Portfolio Executive, recommending the procurement of a SOA solution and developer training to support its rollout,
  • beyond reusable master data services in the first instance, we would then want to get to a stage whereby we have a SOA competency centre (the right set of SOA-trained business analysts and IT developers) giving us the ability to orchestrate and optimize end to end processes that share and manipulate data, creating efficiencies across the institution (across the research and student lifecycles).
  • we would want to continue to collaborate with other Universities in the HE sector who are also developing SOA competencies, to the point where we may adopt greater sector-level data model standardisation and begin to influence vendors with a stake in our sector to encourage them to standardise their IT products according to the set of data services we require.

Sustainability
Our strategy for sustainable Master Data Management is intended to include:

  • Developing and maintaining a SOA skillset in-house (there is the opportunity to outsource this at a later stage, but certainly at this stage it makes sense to develop in-house our knowledge of the organisation’s needs regarding data services and how to implement them effectively within the institution)
  • Implementing formal process change within the appropriate governance structure such that new system integrations cannot be implemented according purely to individual developers’ programming preference, but such that the integration method has to be proposed, reviewed against the master data management strategy and formally approved before a “go live” date is agreed.
  • Implementing data governance – stewards within the organisation who will take responsibility for the safe propagation of master data structure changes throughout the organisation, and for guaranteeing the quality and security of master data that will be reused by other IT systems.
  • Developing and maintaining an institutional SOA roadmap so the organisation clearly understands its direction of travel with respect to SOA and such that it can be clear when it needs to adapt to fresh external or internal drivers as and when they should arise.


Important documentation for SOA: the Interface Catalogue and Data Dictionary

June 16th, 2013

Interface Catalogue

A few months ago I blogged about our initiation of an Interface Catalogue here at the University of Bristol (http://enterprisearchitect.blogs.ilrt.org/2012/07/31/the-value-of-an-interface-catalog/). Each row in our interface catalogue represents a single integration – a join – between two different IT systems. Despite our having established an operational datastore (called the Datahub) some years ago, we have a proliferation of point-to-point system integrations at Bristol (mainly because the Datahub is currently only updated nightly, whereas many of the integrations need to exchange data in real time or near real time). We’d like to significantly reduce our percentage of point-to-point integrations and mature our integration architecture, hence the need to do some analysis of the current integration architecture: the Interface Catalogue is giving us good “As Is” documentation to this end.

We have implemented it in an Oracle database thus far and are building a user-friendly Web frontend for the maintenance of entries in the catalogue over time. The schema is shown below:

Interface Catalogue Schema Diagram

Tables in the Interface Catalogue

Explanation:

The interfaces table is completed such that each row in it represents an interface between two systems. We use unique codenames for all our IT systems, so the services table provides a look-up for those system names.

Most interfaces transform source system data and get it into the data format that is required by the destination system. For example, we might use PL/SQL to extract data about research grants from our Finance system’s Oracle database and present the data as a database schema view (an SQL result set) for our Research Information System to access. In this example, then, we would describe our data exchange mechanism as having a transformation type of PL/SQL and a data exchange format of SQL result set. In some more complex integrations we may do repeated transformations of the data, and in this case we would link an ordered set of data exchange mechanisms (each having a transformation type and data exchange format) to the interface. Note that the Source Object is the data object being extracted from the source system – in this example it’s research grant data objects.

In an attempt to converge on a controlled vocabulary for developers to use, we have look-up tables of values for transformation type, data exchange format and objects. In the case of data objects we aim eventually to sync that table with the data dictionary, which is intended to become a complete set of documentation about the data in our systems – something we are only just embarking on and which I describe further below.

The users and groups tables are used to support access control policy in terms of who is allowed to edit what in the interface catalogue. We have currently assigned responsibility for maintaining entries to the destination system owners, mainly because those system owners are typically most likely to know specific information about the data objects they are obtaining from the source system and can add an explanation of the business purpose of that interface. Here’s an example entry in the interface catalogue (I haven’t included all fields for the sake of simplicity):

Example row in the interface catalogue
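To make the explanation above more concrete, here is a rough sketch of how the core tables could be defined. This is illustrative only – our actual Oracle schema differs in detail (and also includes the users and groups tables mentioned above), and the table and column names here are assumptions rather than our exact definitions.

    -- Illustrative sketch of the core interface catalogue tables (Oracle-style DDL).
    CREATE TABLE services (               -- look-up of unique IT system codenames
      service_id   NUMBER PRIMARY KEY,
      codename     VARCHAR2(50) NOT NULL,
      description  VARCHAR2(400)
    );

    CREATE TABLE objects (                -- controlled vocabulary of data objects,
      object_id    NUMBER PRIMARY KEY,    -- eventually to be synced with the data dictionary
      object_name  VARCHAR2(100) NOT NULL
    );

    CREATE TABLE interfaces (             -- one row per system-to-system interface
      interface_id       NUMBER PRIMARY KEY,
      source_service_id  NUMBER REFERENCES services(service_id),
      dest_service_id    NUMBER REFERENCES services(service_id),
      object_id          NUMBER REFERENCES objects(object_id),  -- the Source Object
      business_purpose   VARCHAR2(1000)   -- maintained by the destination system owner
    );

    CREATE TABLE data_exchange_mechanisms ( -- ordered transformation steps per interface
      mechanism_id          NUMBER PRIMARY KEY,
      interface_id          NUMBER REFERENCES interfaces(interface_id),
      step_order            NUMBER,          -- position in the chain of transformations
      transformation_type   VARCHAR2(50),    -- e.g. 'PL/SQL' (controlled vocabulary)
      data_exchange_format  VARCHAR2(50)     -- e.g. 'SQL result set' (controlled vocabulary)
    );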

 

Enterprise Data Dictionary

As with the Interface Catalogue, we started off building consensus around the schema for our data dictionary by using a shared wiki. In my view this is quite a nice way to browse a data dictionary, as it is possible to describe each data object on a separate wiki page and link from one to another (and also to controlled vocabularies that might be related to certain entity attributes). However, I wanted to get the data dictionary populated in a devolved way as soon as possible, and I have been taking an opportunistic approach to getting entries filled out by a range of people around the organisation who are already having to capture this sort of information for their own purposes; these people are most comfortable with using Excel spreadsheets, so I am willing to work in this way for now. We will migrate to a more sophisticated solution later this year (I come back to this below).

I currently have business users in five different areas of the organisation – Finance, Payroll, HR, Identity Management and Student Information – all collaborating with me to complete information about their data objects in the same way, in this central repository. The reason each group is already wanting to document data is that we are migrating to a new ERP system in one area of the organisation (to replace several finance, payroll and HR systems), IT Services is implementing a new Identity Management system, and we are also developing improved data structures in SITS (our student information system). Each group therefore needs, in some sense, to migrate current data, and the only way they can do that safely is to document the “As Is” data structures in our systems and then evaluate how to migrate existing data to target (“To Be”) data structures. I managed to convince each group to work with me in building up a central data store of this information that we will need to maintain over time. Having this sort of centralised data should not only help with practical tasks being undertaken right now, but also greatly reduce the cost of migrating to new systems over time as, going forward, we’ll have good documentation at our fingertips, held in a consistent format.

Our current schema is at version 0.1 and subject to further development; however, I post it here for information:

Data Dictionary Schema v 1.0
We are using a separate tab for each data object (entity) being documented (Person, Appointment, Address, Programme, Unit and so on), and each data object is being documented as per the schema above.
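For readers interested in what a database-backed version of the dictionary might eventually look like, the sketch below shows one possible shape for a table holding the entries, with one row per documented attribute of each entity. This is purely hypothetical – the fields of our actual v0.1 schema are those shown in the figure above, and the names here are assumptions for illustration only.

    -- Hypothetical sketch of data dictionary entries held in a database rather
    -- than a spreadsheet; one row per documented attribute of an entity.
    CREATE TABLE data_dictionary_entries (
      entry_id          NUMBER PRIMARY KEY,
      entity_name       VARCHAR2(100) NOT NULL,  -- e.g. Person, Appointment, Programme
      attribute_name    VARCHAR2(100) NOT NULL,  -- e.g. surname, start_date
      description       VARCHAR2(2000),          -- agreed business meaning (semantics)
      data_type         VARCHAR2(50),
      master_system     VARCHAR2(100),           -- the system holding the master copy
      controlled_vocab  VARCHAR2(400),           -- any related controlled vocabulary
      data_steward      VARCHAR2(100)            -- who controls changes over time
    );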

Clearly this spreadsheet is growing rapidly and we are pushing this current documentation approach to its limits! So, how should we host the data dictionary in such a way that it is both access controlled and also intuitive to use and maintain by non-technical as well as technical people? Well, we currently propose to take a similar approach to that which we’ve taken with the Interface Catalogue thus far (i.e. a fairly simple Oracle database backend layered with a Java Web App that presents an access-controlled administrative Web frontend, integrated with our University’s Single Sign On system). However, I am very interested in whether there are good, equally low-cost tools out there, perhaps in SOA suites, that we could evaluate for future adoption. I did a very quick trawl of suppliers of enterprise data dictionary tools and came up with this:

My next job is to talk more with our Business Intelligence team at Bristol. This team is already documenting data structures, as in effect they are what I think of as consumers of master data – they need to access data to put into our data warehouse to enable management information reporting at many levels within the organisation. They need to clearly understand the semantics of the data, as well as to integrate data structures, because the end-users who read the generated reports need to be able to interpret the meaning of the information shown in them without ambiguity. We use SAP Business Objects solutions for our business intelligence purposes at Bristol, and I plan to find out if there’s anything in this solution which will cover off our data dictionary needs and enable cross-linking with the interface catalogue. Meanwhile, if anyone reading this has useful suggestions, please let me know!

 

Explaining the importance of a good data integration architecture to senior management

October 22nd, 2012

In the summer I ran a workshop with the Portfolio Executive, which is our senior decision making body at the University regarding the distribution of funds to internal projects. In one half of the workshop I tackled a discussion about the need to regard master data as an asset and the problems with our current data integration architecture. I claimed that too high a percentage of our current system-to-system integrations are point-to-point, this position having been arrived at naturally following many years of allowing technical developers to create bespoke integrations for new systems.

I discussed how this architecture is risky for the institution: the combination of the lack of documentation, the lack of a standards-based approach to data integration and too many system-to-system joins (a “spaghetti” landscape of systems) results in data model changes often propagating through our network of systems in an unmanaged way. This can result in the sudden, unplanned loss of functionality in ‘consumer’ IT systems, because the implications of changes made in a source IT system are not easy to appraise. We also suffer in some cases from a lack of agility in terms of our ability to respond quickly and efficiently to new imperatives (such as Government requirements for Key Information Sets or changes in REF requirements and so on). So, for example, in terms of data model change we’ve had a case where the organisational hierarchy was changed and unexpectedly broke a system that happened to be depending on it.

As far as agility is concerned, we find that when we are starting to replace a system (or multiple systems) with a new one, we usually have to go back to the drawing board each time to try to deduce how many interfaces there are to the current system and which interfaces will have to be put in for the new one. This can be overly time-consuming. All too often we create interfaces that are essentially doing the same task as other interfaces, e.g. moving student data objects between systems, or transferring data about organisational units, research publications and so on. Although we began to tackle this duplication problem some years ago when we attempted to replicate key master data in a core system called the Datahub, we have not fully met requirements with this approach: for example, the Datahub is not realtime (some data in it can be up to 24 hours old), causing many new system implementations to avoid use of the Datahub and instead connect directly to source data systems. The consequence is that we simply perpetuate the problem of having many point-to-point integrations.

Now, this is all rather technical. IT Services would like to make an investment in developing a new, more sophisticated architecture, whereby we have abstracted out key functionality (such as system requests of the type ‘give me all the students from department X’ and so on), developed a service oriented architecture where appropriate and deployed ESB technology wherever realtime data needs are most pressing. We see the benefits this can bring, such as more reliable management of the knock-on effects of system changes (reduction in system downtime), quicker project start-up times due to a more agile integration architecture, and a more standardised and therefore sustainable system integration architecture in the longer term. However, convincing senior management to invest in this rather intangible-sounding concept of a more mature data integration architecture is difficult when constrained to use non-technical speak! This is a brief summary of how I attempted to describe the concepts to the portfolio executive this summer:

Using a jigsaw analogy suggested to me by the Head of IT Services, I explained that we constantly try to fit new systems into our existing systems architecture seamlessly as though they are pieces from the same jigsaw puzzle – quite a challenge:

Jigsaw Architecture problem

 

The more we buy third-party products, the more this is a real puzzle. The yellow boxes, by the way, are to do with the student/researcher lifecycles – a concept that the portfolio executive were already familiar with and which I have blogged about elsewhere (see my enterprise architecture blog).

Next I discussed how we can think of our IT systems roughly as supporting the three areas of research, education and support services (such as finance and administration). Sticking with the jigsaw analogy, I described how we try to connect systems wherever we need to reuse data from one system in another: for example, we might want to copy lists of courses stored in our Student Information System over to our timetabling system, or present publications stored in our research information system on the Web via our content management system. The ‘joins’, therefore, are created wherever we need a connection through which we can pass data automatically. This enables us to keep data in sync across systems and avoids any manual re-entry of data. It’s the data reuse advantage. I used the diagram below to help discuss this concept:

Illustration of how we currently join IT systems together

 

I described how, as our requirements around information become more sophisticated, so the pressure on our data integration architecture increases. For example, we need integrated views of both research and teaching data to help inform discussions about individual academic performance, also we need cross-linked information about research and teaching on our website etc. If our data integration architecture is not fit for purpose (i.e. if the overall approach to system ‘joins’ is not planned, standardised, organised, documented and well governed) then we will struggle to deliver important benefits.

I used the following diagram to discuss what the future vision for the data architecture would look like:

The To Be vision of Joining Systems

 

This diagram is deliberately simplistic and purely illustrative. The blue ring is totally conceptual, but what it allowed me to talk about is the need to decouple data-consuming systems from the master data systems they currently connect to directly (i.e. to get away from such a proliferation of point-to-point system integrations). Naturally I’ve only shown a small subset of our systems in these diagrams, but this was enough to explain the concepts I needed to convey. I described how the blue ring could be made up of a combination of technologies, but that we would need to standardise carefully on these, organisation-wide, in order to increase our chances of sustaining the integration architecture over time. I didn’t mention service oriented architecture, but some of the blue ring could be composed of services that abstract out key, commonly used functionality using SOA technology. We didn’t discuss ESB, but some of the blue ring could use ESB technology. We have a data warehouse solution, and this could be used to replicate some or even all of our master data if we wish to go that route for data reuse too.

Determining the exact composition of the blue ring (i.e. the exact future data integration architecture for our institution) is not possible for us to do yet, because we are still gathering information about the “As Is” (see my blog on the Interface Catalog). When we have fuller data about our current architecture we will be able to review it and decide by what percentage we wish to reduce groups of point-to-point integrations (replacing them with Web service APIs, say), how we might want to replace Datahub with data warehouse technology, and so on.

In order to throw more resource at developing the information needed for this analysis, we will be delivering a business case to the portfolio executive, requesting funds to help us. I hope to indicate how we’ve described the business model in that document in a future blog. Meanwhile, it is possible to continue to mature the integration architecture on a project by project basis.

The JISC are hosting an Enterprise Architecture workshop next month at which SOA is on the agenda. I hope to have useful conversations at that event about where other universities are in maturing their data integration architectures. One thing I can report is that the Portfolio Executive felt that they did understand the concept of the ‘blue ring’ and its importance, and are prepared to accept our forthcoming business case. This feels like an important step forward for our organisation.


Core Data – an important asset in terms of efficiency, agility and strategy

October 21st, 2012

Earlier in this project I read the JISC infoNet “10 Steps to Improving Organisational Efficiency” with interest, and considered where this project and other activities we are doing at the University of Bristol fit in with the ten steps summarised.

Step 1 contains the instruction “Review a copy of your college or university’s strategic plan”. This is indeed one of the first jobs I undertook when I started as Enterprise Architect at Bristol in 2011. I have been doing some benefits maps work since to help understand our overarching strategy better – see my blog post about this at http://enterprisearchitect.blogs.ilrt.org/2012/10/13/what-are-the-benefits-of-benefits-maps/.

Step 2 recommends: “Create a basic view of the functions carried out by the institution”. There is a recommended JISC infoNet Business Classification Scheme to help with this. I am not sure we have done this step explicitly so far, although one of my early pieces of work on Lifecycles looks at IT systems from a stakeholder perspective, and more recently I have been working with the Director of Planning, who is using the balanced scorecard approach. I think it would be useful for me to review how far we meet this objective.

Step 3: “Create a basic view of your IT Architecture showing the applications and services, the interfaces between them and the data transferred”. Well, it takes so few words to describe this step and yet this is a massive step for IT Services in my organisation. We have documented applications and services and are applying ITIL to good effect in terms of using a service catalogue as a central record for all IT systems offered as services within the organisation. However, the interfaces between our systems are absolutely key for us and the focus of this project. I have documented what we’re doing with our Interface Catalog in my other blog: http://enterprisearchitect.blogs.ilrt.org/category/interface-catalogue/ and we’ve now started a data dictionary (a summary overview of which is offered via an Entity Relationship diagram for the data model) in relation to the data entities exchanged/manipulated at the interface level. I will blog soon on how we are agreeing on the data dictionary template as a standard and what data entities need to be documented in the data dictionary and why. This work is one of the main thrusts behind this JISC project.

Step 4: “Identify any ‘bloated’ or redundant applications that consume resource far in excess of their actual value to the organisation and plan to phase them out. In time you will look at the business processes that drive this.” This step is about looking at return on investment, and although I wouldn’t say we are managing this step through a formal methodology at the moment, we are looking at benefit in terms of benefits maps, and we are also developing extremely useful documentation about the costs of our services (especially in terms of the developer FTEs required to manage the systems that constitute those services, or any outsourced support costs, in addition to any software licensing costs after upfront investment).

Step 5: “Use the IT architecture conclusions as a starting point for discussions involving management, teaching and administrative colleagues about architecture at enterprise level. ….. to build a roadmap of integrated process and ICT change.” This is a valuable step and at the moment this is happening through benefits work taking place with the Education, Research and Support areas of our organisation. When the three sets of benefits maps come together, I think we will have an incredibly useful dialogue around the priorities for IT support of strategic requirements, considered holistically at the University.

Step 6: “Identify the likely lifespan and replacement cycle for existing applications.” This is happening as part of our application of the ITIL standard across IT Services.

Step 7: “Consider how a service-oriented approach (SOA) to your data layer could streamline the architecture and reduce the need for interfaces/data retyping. Plan to turn the ‘spaghetti’ into a ‘lasagne’.” Although there are three steps after this one, I will stop at step 7 in this blog post, because this is very much the focus of this JISC project. The documentation in our interface catalog is showing us where, say, ten different interfaces are exchanging the same student entities, in which case we could abstract out the common functionality and offer it via a single API, possibly via a Web Service (a simplified sketch of this idea follows below). That way, should we change the structure of a student entity in our master data system, we can manage the propagation of that change through the systems that consume that data far more safely, easily and efficiently than at present. We can do this in turn for every piece of functionality that we see is highly used. We can also consider the need for realtime data updates across our systems and where we might start deploying ESB solutions in earnest. For more information about our interface catalogue please see: http://enterprisearchitect.blogs.ilrt.org/category/interface-catalogue/.
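To make the ‘spaghetti into lasagne’ idea a little more tangible, the sketch below shows, in deliberately simplified form, what abstracting a commonly repeated extract behind a single shared definition might look like. The table, column and view names are hypothetical, and in practice the shared definition might equally be exposed as a Web Service rather than a database view; the point is that consuming systems would all use the one definition instead of each maintaining its own point-to-point extract.

    -- Hypothetical sketch: one shared extract of student data per department,
    -- replacing many near-identical point-to-point extracts. If the structure of
    -- the student entity changes in the master system, only this one definition
    -- (and any service wrapping it) needs to change.
    CREATE OR REPLACE VIEW students_by_department_v AS
    SELECT s.student_id,
           s.surname,
           s.forename,
           s.programme_code,
           d.department_code
    FROM   students s                      -- master student records (hypothetical)
           JOIN departments d
             ON d.department_id = s.department_id;

    -- A consuming system (timetabling, VLE, CRM and so on) would then simply run:
    -- SELECT * FROM students_by_department_v WHERE department_code = 'X';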


Developing a new Integration Architecture – the importance of the “As Is”

August 23rd, 2012

The core data integration project is about identifying core master data as a valuable, reusable asset within the organisation. One part of this project’s activity is designing the “to be” architecture that will support our goals in terms of master data. However, we also need to document the “as is” architecture so that we can roadmap between the “as is” and the “to be”. This methodology is part of TOGAF.

To this end I have been working on establishing an Interface Catalog, now being contributed to by technical teams at the University, according to a defined standard to suit our purposes. You can read more about this at http://enterprisearchitect.blogs.ilrt.org/2012/07/31/the-value-of-an-interface-catalog/.

Performance Enhancement – how should technology enable it?

May 4th, 2012

The Performance Enhancement project is now fully underway at the University of Bristol. The project team comprises people with a wide ranging set of skills – from Change Managers to User Experience and Web Design experts, and includes me as Enterprise Architect.

In an excellent initial meeting a few weeks ago, the team members all met to understand better how our different skills will complement each other in achieving the overarching goal of the project. In Enterprise Architecture, one of the steps in applying a TOGAF-based methodology is to express the architecture vision and principles of any endeavour. So, in high-level terms, just what is Performance Enhancement considered to be in the context of our project, and how do we wish to enhance or change the processes by which it is currently undertaken at Bristol?

Performance Enhancement is not considered to be only related to staff review meetings:

“Performance enhancement should be an ongoing, continuous process that is not only seen as constructive but is in fact expected of leaders and managers by staff.” (http://www.bris.ac.uk/hr/policies/performance/)

Here is a diagram from our HR department that indicates the range of processes constituting performance enhancement (please note that the word ‘tools’ here does not refer to technology-based tools):

Aspects of Performance Enhancement

The Performance Enhancement project aims to equip leaders and managers with the skills and confidence to have appropriate and effective conversations throughout the whole range of performance enhancement mechanisms and processes in place. It will also provide leaders, managers and their staff with the information they need to enable better-informed discussions to take place around performance.

This information aspect is where the technology component of our project comes in. For the first time at Bristol we wish to offer good quality and complete (as far as possible) data to those participating in performance enhancement conversations. The information we wish to provide should enable an holistic view of the sorts of activities that a member of academic staff participates in, from data about teaching contact and preparation hours through to research publications and participation in events – even measures of esteem or leadership roles that are not documented in our master data corporate systems, but which are nonetheless relevant to discussions about achievement or progression.

In terms of technical architectures, then, how are we to provide timely and accurate information from both our corporate teaching-related and research-related systems, to be surfaced in a view-layer tool that supports performance enhancement discussions? Well, our current architecture involves introducing a Web Services component to layer on top of our current timetabling system (in order to extract teaching contact hours in a sustainable way), plus using the Pure research information system (to provide research-related information). Over the course of the project (which will run for at least another two years) we hope to be able to introduce new sources of information, perhaps some external. We also need to think carefully about the sorts of data we wish to archive and for how many years we wish to preserve it. For example, we might wish to store, year on year, the percentage FTEs of academics teaching across various courses, but in this scenario we wouldn’t want to record, say, the time of day or room that the academics were teaching in. However, all assumptions are being carefully thought about at this stage, in terms of the more global business intelligence requirements we have at the University (for example, although performance enhancement requirements may not include buildings and rooms data, Estates might wish to have this data archived for their own entirely separate purposes).
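Purely as an illustration of the archiving point above (the names below are hypothetical and this is not our actual design), the kind of year-on-year record we have in mind might look like the following: it keeps the percentage FTE and contact hours per member of staff, per unit, per academic year, but deliberately omits the room and time-of-day detail held in the timetabling system.

    -- Hypothetical sketch of a year-on-year teaching archive: one row per member
    -- of staff, per unit (course), per academic year. Room and time-of-day data
    -- are deliberately not retained here.
    CREATE TABLE teaching_activity_archive (
      staff_id        VARCHAR2(20) NOT NULL,
      unit_code       VARCHAR2(20) NOT NULL,   -- the course/unit taught
      academic_year   VARCHAR2(9)  NOT NULL,   -- e.g. '2012/2013'
      teaching_fte    NUMBER(5,2),             -- percentage FTE teaching the unit
      contact_hours   NUMBER(6,1),             -- extracted via the timetabling Web Services layer
      CONSTRAINT teaching_activity_archive_pk
        PRIMARY KEY (staff_id, unit_code, academic_year)
    );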

I will blog more about our technical integration architecture as it unfolds (implementation will start in earnest in the summer). Meanwhile the project is undertaking a programme of engagement with departments, to talk about the cultural and process aspects of performance enhancement as well as the introduction of our new technical tool to support processes. Feedback from this round of engagement will help us (on the technical side) to develop the view-layer part of the tool so that it is user-friendly and appropriate to the tone of the project. We intend to use the tool to solicit feedback from a pilot group who have agreed to trial it for staff review discussions in the Autumn.

A review phase of the project (at the end of this year) will include an evaluation of the pilot tool against any viable third party solutions should they become available. If nothing is available externally then we have the option to develop our in-house pilot tool further, make it production grade and to extend its use throughout the University. Either way, the data integration work we’re doing at the middleware layer will be supportive of any view layer tool that is deemed suitable for performance enhancement in the long term.


Welcome

November 25th, 2011

This is the new blog website for the Core Data Integration Project at the University of Bristol, funded by the JISC Transformations Programme.

Contact Enterprise Architect, Nikki Rogers (nikki.rogers@bristol.ac.uk) for more information. You can also see some background information on the About page.