For many IT professionals who have not experienced painful performance problems (the lucky ones), there exist several myths that allow the issue of performance to be pushed to the back of their minds and left unaddressed during a project. This article outlines some of the common myths.
Add More Hardware
Performance is simply a problem of not having enough hardware available to do the job. Therefore, if we hit a performance problem it is simply a question of purchasing more hardware. This is only a valid approach to fixing some throughput problems, i.e. the response times are acceptable but the system can’t support the required number of users. Additional hardware will help only if your application has been designed to be scalable and to operate on larger hardware. All too often projects hit performance problems, purchase more hardware and then discover the application has not been designed to utilize that additional hardware. Even well-designed applications rarely manage to double performance by doubling the number of CPUs. Figure 1 shows, for a “typical” application, the sort of performance improvement that can be achieved by adding CPUs. The improvement falls short of expectations because the overhead of managing processes increases as CPUs are introduced, and processes need to spend more time communicating with each other and with all the other processes.
Figure 1 Performance vs Number of CPUs
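The shape of the curve in Figure 1 can be illustrated with Amdahl’s law, which bounds the speedup from adding CPUs by the fraction of work that must remain serial. The law itself is standard; the 10% serial fraction below is an illustrative assumption, not a figure from the article:

```python
def amdahl_speedup(cpus, serial_fraction):
    """Amdahl's law: the speedup achievable with `cpus` processors
    when `serial_fraction` of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

# With even 10% serial work, doubling the CPUs falls well short of
# doubling the speed, and the curve flattens as CPUs are added.
for n in (1, 2, 4, 8, 16):
    print(n, round(amdahl_speedup(n, 0.10), 2))
# 1 -> 1.0, 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 16 -> 6.4
```

Note that 16 CPUs yield less than half the ideal 16x speedup; real systems often do worse still, since communication overhead grows with the number of processes.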
Not all performance problems are hardware related; some are due to a poor design that does not order or schedule tasks in an efficient or appropriate manner. For example, an on-line bank failed to generate pages within the required response times because all the calls made to the back-end system were made in sequence rather than in parallel, causing the cumulative delay to exceed the response time requirement. In this example no amount of additional hardware would solve the response time problem.
Figure 2 Bob soon discovered having the faster hardware doesn’t guarantee success
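The on-line bank example can be sketched as follows. This is a minimal illustration, not the bank’s actual code: the call names and the 200 ms latency are assumptions, with each back-end call stubbed out as a sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_backend(name, delay=0.2):
    """Stand-in for one call to a back-end banking system (hypothetical)."""
    time.sleep(delay)  # simulated host/network latency
    return f"{name}: ok"

calls = ["balance", "statements", "standing-orders", "offers"]

# Sequential: the delays accumulate, so the total is roughly 4 x 200 ms.
start = time.perf_counter()
sequential = [call_backend(c) for c in calls]
seq_time = time.perf_counter() - start

# Parallel: the independent calls overlap, so the total is roughly 200 ms.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(calls)) as pool:
    parallel = list(pool.map(call_backend, calls))
par_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s, parallel: {par_time:.2f}s")
```

The design change, not faster hardware, is what brings the page inside the response time requirement: the cumulative delay becomes the delay of the slowest call.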
Even if purchasing additional hardware can solve the problem, two issues remain: (a) the cost of the additional hardware could be prohibitively expensive; (b) your solution may already use the top-of-the-range hardware, leaving no room to grow.
We can fix it later
Often performance engineering or performance tuning has been ignored early in the design cycle in deference to the fix‑it‑later approach. The fix‑it‑later approach is to wait until the implementation is nearing completion, or even in operation, before the performance of the system is considered, and then to rectify it with a quick fix. The argument for the fix‑it‑later approach is that only a small portion of the code is performance critical and that it can therefore be optimised later.
The quick-fix approach has two problems. The first is that the fix may invalidate the design work, leading to re‑documentation and additional effort. The second, more fundamental problem is that only so much can be achieved with a code fix before a major design change is required. This is best illustrated by an analogy with energy saving in a house. An old house can be insulated against the cold by means of draught excluders and loft insulation. Further, more dramatic steps can be taken with changes to the building, such as cavity wall insulation and double glazing. Unfortunately, no matter what improvements you make to an old house, it will never be as efficient as a new house that has been designed with energy saving in mind. A new house will have extra-thick cavities and be positioned to get the best warmth from the sun.
You can’t design for performance
As traditional software and system development methodologies concentrate on achieving functional goals, many designers and project managers have never attempted, or even thought about, designing for performance. In addition, the complexity and intangible nature of software systems lead people to believe that early performance prediction is useless. In fact there are many negative responses to the initial suggestion of using performance assurance in the process, some of which are listed below:
Wishful thinking – performance problems won’t happen to me.
Pessimistic thinking – performance assurance will never work, so why bother?
Lack of scientific thinking – it’s more expeditious, less painful and often just more fun to guess what potential problems there might be than to establish what they are beyond reasonable doubt.
It’s not important – performance assurance takes valuable time away from software development and testing.
There is no time – development and test schedules are fixed, tight and shrinking.
Performance assurance is not a requirement – there is no statement in the development contract that the performance assurance process needs to be followed.
Performance Assurance does take time and requires the appropriate skills and management to be successful, but it will benefit the development process. A quotation from John F. Kennedy comes to mind: “There are risks and costs to a program of action. But they are far less than the long-range risks and costs of comfortable inaction”.
It is too expensive
A successful Performance Assurance program should begin early and continue throughout the project life cycle. For systems development, this is especially true; Figure 3 illustrates why. The cost of making a change to a system increases dramatically with time. On the other hand, the uncertainty of how a system will function and perform decreases with time; in the early stages of a project, uncertainty can be quite high. Without Performance Assurance, much of the early phase of system design is “guesswork,” increasing the chances of needing to make costly changes later in the development life cycle. Performance Assurance reduces uncertainty, allowing necessary changes to be implemented early or eliminating the need for a change altogether.
Determining the need for changes early pays handsomely. Making a change during the design phase is vastly simpler than making a change during development; re-designing a system on paper is always much easier than re-designing one in the data center. If hardware and software have already been purchased, there is less latitude in what can be done to correct a problem. Furthermore, as in Figure 3, the later a problem is found, the more costly it is to fix.
Figure 3 The Benefits of Performance Assurance
Late stage changes and fixes are costly for many reasons. The further into development, the more there is to repair if a flaw appears. Database tables, stored procedures and triggers, code routines, GUI windows and much more could all be impacted by a single change. Worse, if the system fails or needs modification during the production phase, the cost of downtime—not to mention lost business, customers, or reputation—must be factored into the cost of a fix.
In short, Performance Assurance applied early can save a great deal of time and money in the long run and boost the overall quality of an information system. But the early stage of a project is not the only time Performance Assurance delivers value.
Applying Performance Assurance throughout the project life cycle, from planning through production, is the key to a successful distributed system. While risk and uncertainty are especially high in the early stages of a project, they never disappear—every addition or modification to a system introduces new risk.
During development, the addition of new features affects existing code and data structures. The sooner bugs and errors are detected, the less costly they are to fix, so new features should be tested as they are added. Likewise, during production, the addition or removal of users, data, hardware, software and networks, and the imposition of new requirements all contribute to the need for continued risk management.