
Performance and Tuning Guide


Chapter 1

Introduction to Performance Analysis

This chapter introduces you to issues in performance tuning, and the layers at which the issues apply.

This chapter contains the following sections:

  - What Is "Good Performance"?
  - Designing for Performance
  - What Is Tuning?
  - Know the System Limits
  - Know Your Tuning Goals
  - Steps in Performance Analysis

What Is "Good Performance"?

Performance is the measure of efficiency of an application or multiple applications running in the same environment. Performance is usually measured in response time and throughput.

Response Time

Response time is the time that a single task takes to complete. You can shorten response time by:

In some cases, Adaptive Server is optimized to reduce initial response time, that is, the time it takes to return the first row to the user. This is especially useful in applications where a user may retrieve several rows with a query and then browse through them slowly with a front-end tool.
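For example, you can watch the response time of an individual query from isql by turning on the server's timing statistics. The query below is purely illustrative and uses the titles table from the pubs2 sample database; set statistics time is the standard session option that reports parse, compile, and execution times.

    -- Illustrative only: report timing for each statement in this session
    set statistics time on
    go

    select title, price
    from titles              -- titles is a table in the pubs2 sample database
    where price > 20
    go

    set statistics time off
    go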


Throughput

Throughput refers to the volume of work completed in a fixed time period. There are two ways of thinking about throughput:

Throughput is commonly measured in transactions per second (tps), but it can also be measured per minute, per hour, per day, and so on.
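As a purely illustrative calculation: a server that completes 12,000 transactions during a 60-second measurement interval is sustaining 12,000 / 60 = 200 tps; at the same rate over an hour, that is 720,000 transactions per hour.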

Designing for Performance

Most of the gains in performance derive from good database design, thorough query analysis, and appropriate indexing. The largest performance gains can be realized by establishing a good database design and by learning to work with the Adaptive Server query optimizer as you develop your applications.
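As a minimal sketch of working with the optimizer during development (the table, columns, and index name here are hypothetical), you might create an index that supports a frequent search argument and then turn on showplan to confirm that the optimizer chooses it:

    -- Hypothetical table and index; showplan output shows whether the
    -- optimizer uses the index or scans the whole table
    create index cust_lname_idx
        on customers (last_name)
    go

    set showplan on
    go

    select cust_id, last_name, city
    from customers
    where last_name = 'Greco'
    go

    set showplan off
    go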

Other considerations, such as hardware and network analysis, can locate performance bottlenecks in your installation.

What Is Tuning?

Tuning is optimizing performance. A system model of Adaptive Server and its environment can be used to identify performance problems at each layer.

Figure 1-1: Adaptive Server system model

A major part of tuning is reducing contention for system resources. As the number of users increases, contention for resources such as data and procedure caches, spinlocks on system resources, and the CPU(s) increases. The probability of lock contention on data pages also increases.
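As a quick illustration of checking for lock contention (interpreting the output is installation-specific, so treat this only as a starting point), the system procedures sp_who and sp_lock show which processes are blocked and which locks are held:

    -- List active processes; a nonzero value in the blocking column
    -- identifies the process that another task is waiting on
    sp_who
    go

    -- List the locks currently held, including exclusive locks that
    -- other tasks may be sleeping on
    sp_lock
    go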

Tuning Levels

Adaptive Server, its environment, and its applications can be broken into components, or tuning layers, so that individual parts of the system can be isolated for analysis. In many cases, two or more layers must be tuned so that they work optimally together.

In some cases, removing a resource bottleneck at one layer can reveal another problem area. On a more optimistic note, resolving one problem can sometimes alleviate other problems. For example, if physical I/O rates are high for queries, and you add more memory to speed response time and increase your cache hit ratio, you may ease problems with disk contention.
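To follow up on this example, a hedged sketch: sp_sysmon samples server activity over an interval and reports cache hit rates and physical I/O, and sp_cacheconfig shows how memory is currently divided among the data caches. The one-minute interval below is arbitrary:

    -- Sample one minute of activity; the data cache and disk I/O sections
    -- of the report show hit rates and physical read/write volumes
    sp_sysmon "00:01:00"
    go

    -- Display the current data cache configuration
    sp_cacheconfig
    go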

The tuning layers in Adaptive Server are:

  - Application layer
  - Database layer
  - Adaptive Server layer
  - Devices layer
  - Network layer
  - Hardware layer
  - Operating system layer

Application Layer

Most of this guide describes tuning queries, since most of your efforts in maintaining high Adaptive Server performance will involve tuning the queries on your server.
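When you tune an individual query, a common first step is to measure the logical and physical I/O it performs. The tables in the query below are hypothetical; set statistics io is the standard session option that reports reads and writes per statement:

    -- Report logical reads, physical reads, and writes for each statement
    set statistics io on
    go

    select o.order_id, o.order_date, c.last_name
    from orders o, customers c          -- hypothetical tables
    where o.cust_id = c.cust_id
      and o.order_date >= '2001/01/01'
    go

    set statistics io off
    go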

Issues at the application layer include the following:

Options to address these issues include:

Database Layer

Issues at the database layer include:

Options to address these issues include:

Adaptive Server Layer

Issues at the Adaptive Server layer are as follows:

Options to address these issues include:

Devices Layer

Issues at the devices layer include the following:

Options to address these issues include:

Network Layer

Virtually all users of Adaptive Server access their data via the network. Major issues with the network layer are:

Options to address these issues include:

Hardware Layer

Issues at the hardware layer include:

Options to address these issues include:

Operating System Layer

At the operating system layer, the major issues are:

Options include:

Know the System Limits

There are limits to maximum performance: the physical capabilities of the CPU, disk subsystems, and network impose them. Some of these limits can be overcome by adding memory, using faster disk drives, switching to higher-bandwidth networks, or adding CPUs.

Given a particular set of components, any individual query has a minimum possible response time, and each physical subsystem has a saturation point beyond which additional load only degrades performance.

Know Your Tuning Goals

For many systems, a performance specification developed early in the application life cycle sets out the expected response time for specific types of queries and the expected throughput for the system as a whole.
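As a hypothetical illustration (not taken from any particular specification), such a document might state that a single-row customer lookup must return in under two seconds with 50 users connected, and that the system as a whole must sustain 100 transactions per second at peak load.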

Steps in Performance Analysis

When there are performance problems, you need to determine the sources of the problems and your goals in resolving them. The steps for analyzing performance problems are:

  1. Collect performance data to get baseline measurements (a sample baseline script follows this list). For example, you might use one or more of the following tools:

  2. Analyze the data to understand the system and any performance problems. Create and answer a list of questions to analyze your Adaptive Server environment. The list might include questions such as the following:

  3. Define system requirements and performance goals:

  4. Define the Adaptive Server environment: know the configuration and limitations at all layers.

  5. Analyze application design: examine tables, indexes, and transactions.

  6. Formulate a hypothesis about possible causes of the performance problem and possible solutions, based on performance data.

  7. Test the hypothesis by implementing the solutions from the last step:

  8. Use the same tests used to collect baseline data in step 1 to determine the effects of tuning. Performance tuning is usually an iterative process.

  9. If testing shows that your hypothesis is correct, implement the solution in your development environment.
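As a sketch of step 1, one common way to capture a baseline (assuming you have permission to execute the system procedures and that the interval you choose is representative of normal load) is to run sp_sysmon from isql and save the report to a file for later comparison. The server name and file names below are hypothetical:

    -- Baseline: sample ten minutes of server activity and keep the report.
    -- Run this from isql with its output redirected to a file, for example:
    --     isql -Usa -SMYSERVER -i sysmon.sql -o baseline_sysmon.txt
    sp_sysmon "00:10:00"
    go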
