Duct Tape And IT Management

Norman Marks

The five years I spent as an IT executive (after 10 years in IT audit and before 20 years running internal audit departments) had a lasting influence on my thinking about technology and its management.

I have seen a little good and a lot of bad IT management.

I have seen very few situations where IT led the organization to strategic excellence and operational quality.

I have seen many situations where IT served as a mechanic, liberally applying duct tape to keep the infrastructure operational. The only relationship they had with the seats at the executive table involved making sure they were well oiled. They didn’t even make sure they were a matched set that looked good together!

Consider these situations:

  • As a member of the Finance leadership team, I called the senior IT director responsible for supporting the CFO and invited her to an offsite meeting. The purpose of the offsite was to lay out a vision for Finance, including how we would leverage the opportunities presented by new and emerging technology. The IT director said she would prefer that we meet without her, decide what we needed, and let her know. She would implement whatever we selected.

I had to explain to her that we needed her to understand what technology, both new and emerging, was available and what it would allow us to do. But she again declined: “Just tell me what you want.”

Not only did we not have her at the strategy table, but she demonstrated no interest in leading the organization.

  • I joined a company where the corporate IT function was engaged in selecting a new corporate-wide ERP and supporting software. The latter would be selected not only for its individual functionality, but also for its ability to integrate with the ERP and other applications.

When the evaluation project was completed, the corporate CIO obtained the approval of the board. However, the company had set up each geographical region with its own CIO, reporting to the regional leaders, not the corporate CIO. One by one, they all rejected the corporate selection and opted for different solutions – one for each region.

As a result, duct tape was rolled out to bind the regional systems together to deliver fragile enterprise-wide reporting, both operational and financial.

Total cost far exceeded what a corporate solution would have entailed, and the individual ERPs were augmented by a variety of solutions (several for the same purpose) that had tenuous integration with the ERPs and with one another.

  • At a conference, during a presentation I was delivering on the need for timely risk and performance information, one attendee said that he liked my vision but it was impossible for his company. When I asked why, he explained that they had a variety of legacy systems cobbled together with string. There was no way they could replace them with new technology without great risk and an extended timeline. So much for agility!

Consider these questions for your organization:

  • Does the CIO not only have a seat at the leadership table but occupy it? Is he part of the team that develops strategy and does the company look to leverage technology, with him as visionary, to deliver new services, products, and capabilities to the market?
  • Do the CIO and his team have effective control over the technology deployed across the organization? Does he even know what is used to run the business, or are business executives’ heads as well as their apps ‘in the cloud’? Do they ignore any need to have a consistent technology infrastructure where the needs of the whole take priority over the needs of the individual?
  • Does the technology deployed across the organization work together without duct tape? Is it clear that it will continue to do so in the future?
  • When multiple solutions are selected, from different vendors and using different technologies (including different cloud platforms and vendors), how do you expect the information security practitioners to protect the organization?
  • Does the business trust IT?
  • Is your CIO a leader or a mechanic?

Recommended for you:

The In-Memory Database Revolution

SAP Guest

By Carl Olofson, Research Vice President, Application Development and Deployment at IDC

This is a seminal moment in the history of database technology: a moment when the dominant paradigms for database management for the past 40 years are being challenged by new approaches designed to take advantage of changes in system power and architecture and shifts in the underlying cost structure.

We are seeing the convergence of very fast, multi-core processors, lower-cost main memory, and fast, configurable networks with demand for extreme transaction rates, high-speed complex queries, and operational flexibility. One area of software technology that has arisen in response to this convergence is memory-based database management.

Unlike disk-based database management, memory-based technology does not require optimization for disk storage, has no overhead for such optimization, and dramatically reduces the storage footprint while simultaneously delivering extremely high throughput rates.

Memory-based databases are optimized for manipulation in memory, with less frequently accessed data swapped out to disk. Not all memory-based databases require disk swapping, however. The fastest form of this technology holds the entire database in memory all the time; this is commonly called in-memory database (IMDB) technology.

The implications of this new technology are broad and varied. We are just seeing a glimpse of what may be done when all the data is managed in memory. Because reorganizing data on disk is slow and cumbersome, and because supporting alternate forms of access adds unacceptable overhead, disk-based databases have required fixed schematic structures that afford only a single mode of access, usually involving base tables, with views defined to offer a bit of access flexibility.

IMDB, on the other hand, allows data to be dynamically reorganized, and viewed according to multiple paradigms. As a result, an IMDB can handle on-the-fly schema changes, and in many cases can render the same data either in conventional relational table form, or as complex objects or documents as required.
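To make that concrete, here is a minimal sketch using Python’s built-in sqlite3 module with a “:memory:” database. It is only a toy stand-in for a real IMDB such as SAP HANA, but it illustrates the idea described above: the data lives entirely in memory, the same rows can be rendered either as a relational result set or as document-style objects, and a schema change requires no reorganization of data on disk.

```python
# Toy illustration only (not SAP HANA or any production IMDB): Python's
# built-in sqlite3 module can keep an entire database in memory via ":memory:",
# which makes it easy to show the same data rendered in different forms.
import sqlite3

conn = sqlite3.connect(":memory:")   # the whole database lives in RAM
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "Acme", 1200.0), (2, "Globex", 450.5)],
)

# Conventional relational rendering: a result set of rows.
rows = conn.execute("SELECT id, customer, amount FROM orders").fetchall()
print(rows)        # [(1, 'Acme', 1200.0), (2, 'Globex', 450.5)]

# Document-style rendering of the same data: one dict ("object") per row.
columns = ["id", "customer", "amount"]
documents = [dict(zip(columns, row)) for row in rows]
print(documents)   # [{'id': 1, 'customer': 'Acme', 'amount': 1200.0}, ...]

# An on-the-fly schema change: cheap here because nothing is reorganized on disk.
conn.execute("ALTER TABLE orders ADD COLUMN region TEXT")
```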

Operationally, the implications are just as profound. Database administrators no longer need to spend most of their time pondering storage allocation, index definition, and scheduling unload/reloads for data reorganization and re-indexing.

They can, instead, concentrate on building data structures and renderings that address the business needs of the enterprise, and provide higher value support to applications and users; the kind of support that gains recognition and yields professional rewards.

A number of new database technology firms have emerged over the past few years, delivering IMDB products optimized for various workloads. Some more established firms have joined in, offering new IMDB technologies that promise to disrupt the database technology marketplace. One that is already making waves with its ability to mix transactional and analytic workloads is SAP HANA, part of SAP’s “Real Time Data Platform”.

The “old guard” vendors also are evolving their technology feverishly in an IMDB direction. Anyone involved in database technology from either a data or application management perspective would be well advised to learn about these companies and initiatives; they are the future of this business. Embrace the new paradigm, and plan for it!

For more information, listen to a replay of the webinar entitled “The Key to Running in Real-Time: In-Memory Database Technology”.


Recommended for you:

What Is Software Testing?

Stacey Higginbotham

Software glitches have always caused major damage. And recently, the biggest failures haven’t been in aerospace or defense, but have increasingly affected the average consumer. Often, we hear that the software testing wasn’t done properly. But what does this mean exactly?


Every now and then, really spectacular software breakdowns occur. The opening of Heathrow Terminal 5 became a public embarrassment because the baggage system failed to function. More than 17 million customer accounts at RBS and its subsidiaries NatWest and Ulster Bank could not be accessed for some or all of the day because the installation of customer management software corrupted the entire system. One of the biggest Austrian banks paid out €21 million to appease its customers with vouchers because the new online banking software didn’t work for days on end.

Errors like these are not only damaging to a company’s brand, but can also be very costly. The goal of software testing is to avoid such incidents and their consequences. In what follows, we explore the topic of software testing and the main questions around it.

[Photo: Hans Hartmann, test director at Objentis since 2007]

We can assume that in the cases previously mentioned, the software in question was definitely tested: Banks and insurance companies know the risks of using software that has not been tested. So how can such malfunctions continue to occur? Some, but certainly not all, software glitches can be caused by storms and natural disasters. Still, this provides no explanation for the increase in software errors of late. Testing has always been done and it used to work well. And natural disasters are a known, if unpredictable, factor. So why should the tried-and-true formulas suddenly fail?

The reason is simple: Programs have become more complex. And to address this complexity, more testing is required. How much more? Take the years 2000 and 2010. In this time, the volume of data being moved around increased by a factor of 50,000. If a program was tested for two weeks in 2000, it would have to be tested for 100,000 weeks in 2010 – in other words, around two thousand years.

More interactions, not more data, increases complexity

Working and calculating this way is clearly not an option. After all, software is now more efficient, development tools allow many errors to be detected before the program is even built, and modern object-oriented software design enables developers to code neatly and in a less error-prone way. But even if the required testing effort grows by only a factor of 50, the program would still have to be tested for 100 weeks – or about two years. That simply isn’t feasible.
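The arithmetic behind these figures is easy to restate. The short Python sketch below uses only the numbers quoted above: a two-week baseline and growth factors of 50,000 and 50.

```python
# Back-of-the-envelope check of the figures above. The two-week baseline and
# the growth factors (50,000 and 50) are the article's own assumptions.
baseline_weeks = 2                    # testing effort for a program in 2000

naive_factor = 50_000                 # growth in data volume, 2000 -> 2010
print(baseline_weeks * naive_factor)           # 100000 weeks
print(baseline_weeks * naive_factor / 52)      # ~1923 years ("around two thousand")

realistic_factor = 50                 # after better tools and cleaner design
print(baseline_weeks * realistic_factor)       # 100 weeks
print(baseline_weeks * realistic_factor / 52)  # ~1.9 years -- still not feasible
```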

A difference in size and quantity alone doesn’t necessarily mean that the software has become more complex. In fact, one of the main arguments for using a computer is that it doesn’t matter whether it has to perform a calculation five times or 5,000 times. It should simply be reliable. It is not the increase in the quantity of data that causes complexity, but rather the increase in possible connections among systems.

Look at the development of mobile telephony: In Germany, the Radio Telephone Network C came first, its phones housed in cumbersome cases, followed by the much more manageable digital cellular network D-Netz. In comparison, today’s smartphones have the processing power of mainframe computers from 20 years ago. Apart from the pure advancement of technical data, think about all the things that can now be done with a smartphone. Above all, think about the number of other systems that can be tapped into – at the same time, even. It is the number of possible connections that causes the corresponding increase in complexity.

The main difference between today and yesterday is not the advancement in programming languages – even though developers may no longer code in Assembler or COBOL, these languages can still be used to write good programs today – but rather the number of possible solutions there are for a certain problem.

Take this analogy of trying to cross a river that is 30 feet wide without using a boat and without getting wet. In the past, there was one solution: system analysts would look for places where big rocks could be used to jump across the river to the other side. Today, there are 10 different bridges crossing the river, that is, 10 different ways to solve the problem.

The software architect, then, has to choose a particular solution based on whether it meets various quality requirements. Let’s say there is a highway bridge crossing the river as well as a wooden walkway. To use the highway, you need to build feeder roads. Even if the simple wooden walkway is sufficient and building feeder roads requires more effort, the software architect may still choose to use the highway with the reasoning that other people want to cross the river, too.

It’s impossible to test every combination

Here is another example: Forty years ago, when passengers would buy a train ticket from a ticket machine, they would have to answer a series of questions, one after the other. From where do you wish to depart? To where do you wish to travel? How old are you? Are you entitled to a reduced fare? In which class do you wish to travel? And so on. If they discovered while answering the questions that they didn’t have enough money, they would have to cancel the transaction and start again from the beginning.

At today’s ticket machines, passengers will find the questions slightly more hidden in different fields. Instead of entering their age, they select standard fare, half price, or other offers. Rather than typing the destination in full, they type the first few letters, and only the possible destinations are then displayed. While the layout of the input fields suggests that the information can be entered in any order, that is still not possible. For example, if users have entered a discount ticket, they cannot subsequently upgrade to first class. However, instead of getting an error message that says, “First class must be entered before you select a discount,” users will see a message like, “You must purchase your first class ticket on the train.”

In this case, it is clear that developers made some small mistakes in the process of transferring an originally linear, simple input sequence to a graphical input system. Let’s say the machine needs to process five different inputs and they can be in any order. This means there are 120 different combinations of how entries can be made. So, it is understandable that not all input options were tested before the software was implemented.

In the past, it was possible to test each individual function and then test the complete process. Now it is necessary to test the interactions between individual functions. The number of these interactions depends directly on the number of possible sequence combinations, which can easily run to a seven-digit number. If you take a smartphone, for example, the number of possible combinations surpasses the example of the ticket machines by several orders of magnitude.
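A short sketch makes that growth visible: the number of possible orderings of n independent inputs is n factorial, which already reaches seven digits at ten inputs. The input counts below are purely illustrative.

```python
# How the number of possible input orderings grows with the number of inputs.
# The counts are illustrative; they are not taken from any particular device.
from math import factorial

for inputs in (3, 5, 10):
    print(inputs, "inputs ->", factorial(inputs), "possible orderings")

# Output:
# 3 inputs -> 6 possible orderings
# 5 inputs -> 120 possible orderings       (the ticket machine example above)
# 10 inputs -> 3628800 possible orderings  (already a seven-digit number)
```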


Recommended for you:

Does It Matter If A Control is Preventive Or Detective?

Norman Marks

The traditional answer is an emphatic “Yes!”

But times, they are a-changing.

Until now, detective controls have been based on a review of reports at the end of the day, week, month, etc. They are designed to detect errors that slipped past any controls earlier in the process.

Detective controls are often, but not always, cheaper to operate; but the risk is higher that an error (deliberate or otherwise) may not be prevented and its detection may be too late to prevent a loss. Often, a combination of preventive and detective controls is desired, simply because preventive controls are rarely perfect and detective controls will stop any lasting damage.

But the latest technology can move detection to a point where it is almost immediate.

For example, there are real-time agents that run within the application, testing transactions against predefined rules and sending alerts to an operator for action.
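As a rough, hypothetical sketch of the kind of check such an agent might perform (the transaction fields, rules, and thresholds below are invented for illustration and are not any vendor’s actual API), consider:

```python
# Hypothetical sketch of an in-application agent that tests each transaction
# against predefined rules and raises alerts immediately. The rules and
# thresholds are invented for illustration; no specific product is shown.
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    approver: str
    submitter: str

# Each rule returns an alert message when a transaction looks suspicious.
RULES = [
    lambda t: f"{t.txn_id}: amount {t.amount} exceeds approval limit"
    if t.amount > 10_000 else None,
    lambda t: f"{t.txn_id}: submitter approved their own transaction"
    if t.approver == t.submitter else None,
]

def check(transaction: Transaction) -> list[str]:
    """Run every rule; return the alerts to send to an operator for action."""
    return [msg for rule in RULES if (msg := rule(transaction)) is not None]

# The alerts fire as the transaction is processed, not at month end.
print(check(Transaction("T-1001", 25_000.0, approver="pat", submitter="pat")))
```

The point is not the specific rules but the timing: detection happens while the transaction is still in flight, which is what blurs the line with prevention.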

There has also been an immense, startling increase in the speed of analytics. They can run (using in-memory platforms) as much as 300,000 times faster.

A report used for detection that used to take many hours to run can now take seconds. I saw one report from an analyst that said that potential errors or anomalies were being detected in milliseconds!

So what does this all mean?

The distinction between preventive and these ‘immediate’ detective controls has been blurred.

Those responsible for the design or assessment of controls should think again. Is it time to replace expensive preventive controls with less expensive, immediate detective controls?

I welcome your views.


Recommended for you:

Classic Rock, Analytics, And The On-Premise/Cloud Debate

Ray Rivera

Only the Velvet Underground could provide such bare contrasts between their world of lacerating noise and squalid cityscapes, and many of their peer musicians’ worlds of rainbows and escapist indulgences. Yet even as the Velvet Underground’s career as a group was drawing to a close in the early 1970s, they coolly concluded that “it was alright”.

And that is our conclusion as well regarding the increasingly wearying on-premise/cloud debate, one also marked by conspicuous contrasts between worlds. There is a story to tell about it all, but we are already way ahead of ourselves.

Despite all the computations…

Let’s begin with a more prosaic contrast between business analysis and analytics, a difference made clear by the ongoing debate about whether an organization should transfer computing operations to the cloud or remain on-premise.

Business analysis tends to model such choices in terms of value, focusing on concrete, often retrospective measures of cost objects and processes. Business analytics takes a more holistic look, considering which organizational competencies would be enhanced, while understanding that both computing options belong to a socio-technical system whose trade-offs cannot all be reduced to last century’s efficiency metrics.

With regard to IT solutions, the business analysis view is highly market-oriented, giving a lot of attention to product features and functions, and thereby reflecting confusion in the market quite accurately. Even as highly crafted messages about the latest cloud and on-premise offerings appear in print and online advertisements, the thrust of such messages is often lost on readers who might be perusing them on a Saturday afternoon, yet have only casual exposure to enterprise computing market stridency.

A better way to understand key differences between cloud and on-premise is to step away from myopic business analysis techniques. Applying an analytics approach instead, we seek to understand how each option follows the structure of the organizations that actually use them. Thus, most organizations can be divided into core and periphery, where the core contains the unique knowledge, competencies, processes, and staff, and the periphery the complementary resources which the organization needs to perform, but for whatever reason lacks.

Cloud: Throw out the hardware

Cloud computing requires a small core but a large periphery. More specifically, cloud computing requires an organization to own minimal IT resources, and provides numerous options for distributing IT operations across vendors and locations. An organization need only have a small core of administrators but can make use of a large periphery of service providers, contractors, vendors, and consultants.

A cloud arrangement can be thought of as similar to the jazz-rock group Steely Dan. Known for very high production values, precise sound, and elaborate, urbane, and sometimes impenetrable lyrics, the group has consisted for nearly three decades of only two core members, with dozens of session players at the periphery. Seldom did the core retain the same session personnel from one album to the next. Nevertheless, Steely Dan remained a durable group that successfully preserved its recognizable sound and refined musical atmosphere, although it did experience a mid-career breakup as the periphery grew excessively large and alienated from the core.

A cloud arrangement signals to outsiders that the organization lacks sufficient economy of scale to make on-premise computing worthwhile, and that the market can supply the needed services more cost-efficiently. Cloud can favor smaller or less mature business functions, such as HR in new media or recently formed tech companies, or business functions in high-growth companies or high-volatility industries where scale is critical. However, management of several vendors may be necessary, and security costs must be considered along with direct costs.

On-premise: Big wheel keep on turning

In contrast to the cloud, on-premise computing employs a large core and a small periphery, which often reflects the structure of well-established, stable organizations. On-premise requires significant investment and maintenance of IT resources, and a staff who collectively oversees a set of unique competencies. On-premise will still require a periphery of consultants, particularly for implementation, customization, and upgrades. Yet the core will likely have developed numerous applications in-house, addressing unique business processes and value drivers.

The on-premise solution can be characterized by the folk-rock group Creedence Clearwater Revival. Also famous for high production values and crispness of sound, its songs are earthy and approachable, covering Southern Gothic and working-class themes. The group defined clear roles for its core personnel, which consisted of four musicians (two of whom were brothers). Their tightness of sound came from playing together continuously as a unit since their teens, refined by some very lean periods. The group had almost no periphery, seldom working with session musicians or collaborators. Though durable in its early years, the band’s taut working arrangement later became vulnerable to personnel changes in the core. The departure of a guitarist precipitated a quick decline and dissolution of the band, which by then had sealed off its core so tightly that no one from the periphery could replace a core member.

On-premise is cost-beneficial in organizations that have sufficient resources to provide scale, yet require more custom solutions. It often favors large, mature business functions in established, hard capital-intensive industries, or firms with significant institutional memory to preserve.

Hybrid: Go your own way

Hybrid computing accommodates a changing core and periphery, which characterizes business functions in many organizations. Rapid development of new markets, technological changes, global competition, and economic uncertainty all require organizations to adapt quickly. Changing computing needs may require scaling in either direction, and competencies must be as readily available as services. Organizations and their computing resources must both be highly agile.

Most popular music groups could be seen as similar to hybrid computing. With ever-changing personnel, rapid career evolution, and reversals of fortune, the core and periphery frequently shift. Some groups, such as Fleetwood Mac, go through several reinventions during their careers before achieving great success. Others follow a life cycle of first playing in clubs and ballrooms, later in sports arenas, and finally at fairs and casinos. A flexible core and periphery permits durability, even if a group ceases to perform or record for several years. Peripheral studio musicians may join the core, or core members may depart and their roles become absorbed by the periphery. As with Steely Dan and Creedence Clearwater Revival, where disproportions between core and periphery occurred, a flexible core and periphery also carries risks. There is never a guarantee that any rearrangement of core and periphery will align resources optimally for success, adapt an organization to market changes, or not cause unintended disruptions.

Hybrid can favor business functions that are likely to expand, reorganize, or be affected by a change management initiative. Organizations that lack a cloud strategy may find hybrid a feasible alternative. While cloud and on-premise decisions might occur in concert with business cycles or as part of major planned capital investment initiatives, hybrid decisions can occur off-phase.

You set the scene

No arrangement of core and periphery need determine the destiny of an organization. Nor does the IT arrangement that works in parallel restrict the range of what an organization can perform. Each of the groups discussed earlier, though having clear organizational structures, contained rather remarkable contrasts.

Most of Steely Dan’s work contains bitterly cynical and prickly lyrics. Yet they are also capable of extraordinary warmth and devotion, as when sitting with a disconsolate friend in “Any Major Dude Will Tell You”. Everyone should be so fortunate to have that kind of friend in time of need.

Creedence Clearwater Revival portrayed a world of legend and nostalgia in much of their work. But perhaps better than any clamorous protester, they gave voice to the pressing injustices of society in “Fortunate Son”, and the dread of the times in “Bad Moon Rising”, using the very words of the people who were directly experiencing them.

And from a heritage of rather undistinguished experiments, Fleetwood Mac produced Rumours, one of the precious few rock albums that maintains enduring appeal far beyond the genre, appearing universally in music collections regardless of the prevailing taste.

Your organization probably contains all kinds of contrasts too. Knowing your core and periphery can help sort out how to deploy the computing solutions you need to perform, however contrasted your performance might be.

And it’s alright.

This story originally appeared on SAP Business Trends.

