
Information system re-engineering, a key factor in hybrid cloud IS architecture


While talk of migrating to the cloud is on everyone’s lips, the majority of small businesses and mid-caps still run hybrid infrastructures, a mixture of cloud and on-site hosting.

For these organisations, a gradual transition is often the only way to progress painlessly, shifting applications one by one. Business software is added to the cloud architecture as technical capacity allows and as requirements demand.

As SaaS applications make inroads into businesses, information systems are becoming increasingly hybrid. This mixed environment is very often intended to last, because very few businesses envisage migrating their entire IS into the cloud. Most of them must still compromise, juggling with different environments and the burdens that they impose.

Under such circumstances, structuring the IS is a necessity. Beyond organising data interchanges, information systems re-engineering can have a high impact on the performance of hybrid architectures.

What re-engineering issues emerge with IS cloudification?

1. The diversity of applications and formats

With the boom in SaaS systems running alongside on-premise applications, information systems are becoming increasingly complex. More and more versions and generations of software are consequently required to co-exist. Some of this software is very old.

All these differences create language disparities. Standards and formats differ, and some ultimately become obsolete. Communication between applications that do not speak the same language becomes strained.

These non-standardised interchanges can quickly result in erroneous or incomplete data processing. Data is however a valuable decision-support tool, and its integrity should be protected.
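One common answer to these language disparities is a canonical data model: each application is mapped once to a shared intermediate format rather than translated pairwise against every other. The sketch below illustrates the idea with hypothetical field names and formats (the legacy CSV columns and SaaS payload shape are invented for the example, not taken from any real system):

```python
from datetime import datetime

# Hypothetical adapters: each legacy or SaaS format is mapped once
# to a shared canonical record instead of point-to-point translations.
def from_legacy_csv_row(row: dict) -> dict:
    """A legacy app that exports dates as DD/MM/YYYY and amounts as strings."""
    return {
        "customer_id": row["CUST_NO"],
        "order_date": datetime.strptime(row["DATE"], "%d/%m/%Y").date().isoformat(),
        "amount": float(row["AMT"].replace(",", ".")),
    }

def from_saas_json(payload: dict) -> dict:
    """A SaaS API that already uses ISO dates but nests fields differently."""
    return {
        "customer_id": payload["customer"]["id"],
        "order_date": payload["order"]["date"],
        "amount": payload["order"]["total"],
    }

legacy = from_legacy_csv_row({"CUST_NO": "C42", "DATE": "03/01/2024", "AMT": "19,90"})
saas = from_saas_json({"customer": {"id": "C42"},
                       "order": {"date": "2024-01-03", "total": 19.90}})
assert legacy == saas  # both sources now speak the same language
```

With N applications, this reduces the number of translations to maintain from N×(N−1) point-to-point mappings to N adapters, which is one reason integrity is easier to protect.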

2. Application lifecycles

From deployment to end-of-life, applications evolve differently, as does their maintenance. This phenomenon is all the more marked when software is run as a service, where development is rapid, version changes are frequent, and maintenance is automated.


These applications are however interdependent. And the more disparate their development cycles, the harder it is to keep control over these interdependencies. If an application is upgraded late, or wrongly, the knock-on effect on the whole IS can be long-lasting.

Harmonising and mapping the IS taking due account of such dependencies will make it possible to locate sources of error and avoid crippling the business processes that make use of the data.

3. Application availability

Application availability is crucial to implementing IT processes and business processes that cut across the information system. In the case of SaaS applications, this availability can be disrupted, resulting in data transmission problems or even data loss caused by external issues (such as network failure, server downtime, etc.).

To ensure communication takes place, the business needs an IT component that guarantees data traffic continuity despite interruptions. Data persistence releases users from manual recovery work, which is easily overlooked.
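The continuity component described here is often implemented as a store-and-forward channel: messages are buffered durably and retried until delivery succeeds. A minimal sketch, assuming an in-memory buffer for illustration (a real middleware tier would back it with disk or a database):

```python
import queue

class StoreAndForwardChannel:
    """Minimal sketch: messages are kept in a buffer until delivery to the
    target application succeeds, so an outage loses nothing."""

    def __init__(self, send):
        self._send = send              # function that delivers to the target app
        self._pending = queue.Queue()  # in production: disk- or DB-backed

    def publish(self, message):
        self._pending.put(message)     # persist first, deliver after
        self.flush()

    def flush(self):
        """Retry everything still pending; called on publish and on a timer."""
        undelivered = []
        while not self._pending.empty():
            msg = self._pending.get()
            try:
                self._send(msg)
            except ConnectionError:
                undelivered.append(msg)  # keep it "on hold" for the next attempt
        for msg in undelivered:
            self._pending.put(msg)

# Simulated outage: the first delivery attempt fails, the retry succeeds.
delivered = []
attempts = {"n": 0}
def flaky_send(msg):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise ConnectionError("target application unreachable")
    delivered.append(msg)

channel = StoreAndForwardChannel(flaky_send)
channel.publish({"order_id": 1})   # delivery fails, message stays queued
channel.flush()                    # scheduled retry: delivery succeeds
assert delivered == [{"order_id": 1}]
```

The point of the pattern is that no user intervenes between the failed attempt and the retry: persistence plus scheduled flushes replace the manual re-sends mentioned above.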

4. Data location

While data location is relatively straightforward when infrastructure is on the premises, hybrid, unstructured information systems make the matter more complicated.

The shift in applications and hosting arrangements sometimes creates routing changes for data, or other access issues presenting an obstacle to data use.

In an IS that has not been re-engineered, where communication between applications is limited, the whole architecture suffers when unexpected changes occur. It must then be modified piecemeal, by hand, with all the risk of error and delay that such work entails. With unified communication, by contrast, any change is notified automatically as soon as it arises.
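Automatic notification of this kind is typically a publish/subscribe mechanism: consuming applications register once, and a relocation is announced to all of them at the same moment. A small sketch with invented system names and URLs:

```python
class ChangeNotifier:
    """Sketch of unified communication: when a data source moves, every
    subscribed application is told at once, instead of each integration
    being patched by hand."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def announce(self, change):
        for callback in self._subscribers:
            callback(change)

notifier = ChangeNotifier()
endpoints = {"crm": "https://old-host.example/api"}  # hypothetical endpoint

# Two consuming applications register once and then adapt automatically.
notifier.subscribe(lambda c: endpoints.update({c["system"]: c["new_url"]}))
log = []
notifier.subscribe(lambda c: log.append(f"{c['system']} moved to {c['new_url']}"))

notifier.announce({"system": "crm", "new_url": "https://cloud.example/api"})
assert endpoints["crm"] == "https://cloud.example/api"
assert log == ["crm moved to https://cloud.example/api"]
```

The contrast with the piecemeal approach is that the announcement reaches every dependent integration in one pass, rather than relying on someone remembering each affected connection.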

5. Lack of clarity when errors occur 

Running a growing number of applications, on-premises and in the cloud, with no overall structure muddles understanding of the IS and makes it difficult to locate the source of an error when one occurs. When communication between applications is poorly structured, those possible sources are all the more numerous.

However, business performance depends on the ability to fix errors quickly. With all processes using data, all functional business areas are affected by the need for clarity in the IS. If data flows are mapped and comprehensive supervision installed, it is easier to pinpoint failure points in data traffic.


How re-engineering the information system can efficiently meet these challenges

Re-engineering the information system helps to make the IS architecture clearer to grasp and more streamlined. Through the organisation of its information system, a business can produce a map of its data flows and how its applications fit together and then better adapt to hybrid environments.

In addition to this thorough knowledge of the IS, there needs to be smooth communication between the applications already in place, which is a major advantage for small and medium-sized companies that are not really in a position to replace their oldest applications.

Re-engineering creates a data chain between applications and therefore makes efficient action possible, whether the IS is working properly or if possible failure points need to be dealt with.

Re-engineering operates in four main areas:

  • Communication between applications: point-to-point communication is eliminated. Applications come to fetch the data they need using a central data bus. Data circulates more freely thanks to semi-connectors and the use of common formats.
  • Application lifecycles: sources of errors related to application changes, and their consequences, can be analysed. Any change is communicated to the entire system, which can swiftly adapt without needing to shut down.
  • Message persistence: re-engineering can include a middleware tier dedicated to information continuity. If communication is ever interrupted, any data interchange is put “on hold” and scheduled to re-start as soon as possible.
  • Data stream supervision: re-engineering means action can be taken more quickly, even when an error has occurred. Thanks to a more comprehensive picture of the IS, errors can be categorised and real diagnostics installed for long-term corrections.

These contributions from IS re-engineering meet the requirements of businesses that have opted for a hybrid cloud and of multi-cloud users. Multi-cloud operations bring a dose of additional complexity with potential interoperability issues and a need for greater supervision.

Blueway solutions to support re-engineering of hybrid cloud architectures

Blueway’s objective is to support and guide businesses in organising their data stream governance and optimising their processes. These two aspects are inextricably linked, as the requirements of processes always guide the path taken by data.

With business departments more involved than ever in how data circulates and is used, some thinking about process modelling is crucial. These deliberations should include all interactions with data, and enable some anticipation of requirements, and also involve functional representatives, as data’s prime users.

To this end, the Blueway platform designed for information systems re-engineering combines ESB (an application bus) and BPM solutions, to instil a philosophy focused on data flows.

Process users consequently become the key stakeholders in IS re-engineering. BPM paired with the ESB gives them the ability to act on data interchanges within the re-engineered system (email warnings, special user interface, and set roles and responsibilities for sequences of events, etc.) so as to improve data processing throughout the data’s lifecycle.

Besides re-engineering and data interchange supervision, this approach also makes the IS more readily upgradeable. The architecture is more flexible and enables complex scenarios to be envisaged, such as hybrid structures for instance. More agile, small and medium-sized firms can consequently prepare upcoming projects by making best use of both their on-premise systems and cloud-based software.


Author
Edouard Cante
Executive Vice President Product. A technical and functional expert, Edouard has specialised in IS urbanisation and data governance for nearly 20 years. A man of the field, he and his teams support customers in their projects, and don't hesitate to use this feedback to shape the product roadmap and gain in agility.
In the categories Data Integration, Interoperability