


Table of Contents

  1. Overview of Data Communications

1 Overview of Data Communications

This chapter introduces the topic of data communications and provides some historical background to the subject. It discusses the need for standards in the data communications industry in terms of the physical transfer of information and the way in which data is handled. Finally, it takes a brief look at data communications as it applies to instrumentation and control systems.

 

Objectives

After studying this chapter you will be able to:

 

  • Describe the basic principles of all communication systems
  • Describe the historical background and evolution of data communications
  • Explain the role of standards and protocols
  • Describe the communication layers of the OSI model
  • Describe four important physical standards
  • Explain the purpose of instrumentation and control systems
  • Describe the four most important control devices: DCSs, PLCs, Smart Instruments and PCs

1.1          Introduction

Data communications involves the transfer of information from one point to another. In this manual we are specifically concerned with digital data communication. In this context, ‘data’ refers to information that is represented by a sequence of zeros and ones; the same sort of data that is handled by computers. Many communications systems handle analog data; examples are telephone systems, radio, and television. Modern instrumentation is almost wholly concerned with the transfer of digital data.

 

Any communications system requires a transmitter to send information, a receiver to accept it and a link between the two. Types of link include copper wire, optical fiber, radio, and microwave. Some short-distance links use parallel connections, meaning that several wires are required to carry the signals; this sort of connection is confined to devices such as local printers. Virtually all modern data communications use serial links, in which the data is transmitted in sequence over a single circuit.

 

Digital data is sometimes transferred using a system primarily designed for analog communications. A modem, for example, uses a digital data stream to modulate an analog signal that is sent over a telephone line. At the receiving end, another modem demodulates the signal to reproduce the original digital data. The word ‘modem’ is derived from ‘modulator’ and ‘demodulator’.
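To make the modulation idea concrete, the following Python sketch (an illustration only, not taken from this manual) models a crude binary FSK ‘modem’: each bit selects one of two tones, and the receiving side decides which tone dominates each bit period. The sample rate, bit rate and tone frequencies are assumed values.

    import numpy as np

    SAMPLE_RATE = 8000            # samples per second (assumed)
    BIT_RATE = 300                # bits per second (assumed, like an early modem)
    F_MARK, F_SPACE = 1270, 1070  # tone frequencies in Hz for '1' and '0' (assumed)

    def modulate(bits):
        # Each bit becomes a short burst of the corresponding tone.
        n = SAMPLE_RATE // BIT_RATE
        t = np.arange(n) / SAMPLE_RATE
        tone = {'1': np.sin(2 * np.pi * F_MARK * t),
                '0': np.sin(2 * np.pi * F_SPACE * t)}
        return np.concatenate([tone[b] for b in bits])

    def demodulate(signal):
        # Correlate each bit period against both tones and pick the stronger one.
        n = SAMPLE_RATE // BIT_RATE
        t = np.arange(n) / SAMPLE_RATE
        mark = np.sin(2 * np.pi * F_MARK * t)
        space = np.sin(2 * np.pi * F_SPACE * t)
        bits = ''
        for i in range(0, len(signal), n):
            chunk = signal[i:i + n]
            bits += '1' if abs(chunk @ mark[:len(chunk)]) > abs(chunk @ space[:len(chunk)]) else '0'
        return bits

    print(demodulate(modulate('10110010')))   # prints 10110010

A real modem adds carrier detection, timing recovery and far more efficient modulation schemes, but the principle of mapping digital data onto an analog signal and back again is the same.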

 

There must be mutual agreement on how data is to be encoded, i.e. the receiver must be able to understand what the transmitter is sending. The set of rules that governs how devices communicate is known as a protocol. In the past decade many standards and protocols have been established, allowing data communications technology to be used more effectively in industry. Designers and users are beginning to realize the tremendous economic and productivity gains possible with the integration of discrete systems that are already in operation.

1.2          Historical Background

Although there were many early systems (such as the French chain of semaphore stations), data communications in its modern electronic form started with the invention of the telegraph. The first systems used several parallel wires, but it soon became obvious that for long distances a serial method, over a single pair of wires, was the most economical.

 

The first practical telegraph system is generally attributed to Samuel Morse. At each end of a link there was an operator with a sending key and sounder. A message was sent as an encoded series of ‘dots’ (short pulses) and ‘dashes’ (longer pulses). This became known as Morse code, which comprises about 40 characters including the complete alphabet, numbers, and some punctuation. In operation, a sender would first transmit a starting sequence that would be acknowledged by the receiver. The sender would then transmit the message and wait for a final acknowledgment. Signals could only be transmitted in one direction at a time.

 

Manual encoding and decoding limited transmission speeds and attempts were soon made to automate the process. The first development was ‘teleprinting’ in which the dots and dashes were recorded directly onto a rotating drum and could later be decoded by the operator. The next stage was a machine that could decode the signal and print the actual characters by means of a wheel carrying the typefaces. Although this system was used for many years, it suffered from synchronization problems.

 

Perhaps the most severe limitation of Morse code is its use of a variable number of elements to represent the different characters. This can vary from a single dot or dash up to several dots and/or dashes, which makes it unsuitable for an automated system. An alternative code was invented in the late 1800s by the French telegraph engineer Maurice Emile Baudot. The Baudot code was the first uniform-length binary code: each character was represented by a fixed pattern of five bits. It encoded 32 (2⁵) characters, including all the letters of the alphabet, but did not include numerals.

 

The International Telecommunication Union (ITU) later adopted the code as the standard for telegraph communications and incorporated a ‘shift’ function to accommodate a further set of 32 characters. The term ‘baud’ was coined in Baudot’s honor and is used to indicate the rate at which a signal changes state. For example, 100 baud means 100 possible signal changes per second.
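As a small worked example of the distinction between baud and bits per second (the figures below are assumptions for illustration), a line signalling at 100 baud can carry more than 100 bits per second if each signal change can take more than two states:

    import math

    baud = 100                                 # signal changes per second
    states = 4                                 # distinct signal levels (assumed)
    bits_per_change = int(math.log2(states))   # 2 bits carried by each change
    print(baud * bits_per_change)              # 200 bits per second on a 100 baud line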

 

The telegraph system used electromechanical devices at each end of a link to encode and decode a message. Later machines allowed a user to encode a message off-line onto punched paper tape, and then transmit the message automatically via a tape reader. At the receiving end, an electric typewriter mechanism printed the text. Facsimile transmission, using computer technology as well as more sophisticated encoding and communications systems, has replaced telegraph transmissions.

The steady evolution of data communications has led to the modern era of very high speed systems, built on the sound theoretical and practical foundations established by the early pioneers.

1.3          Standards

Protocols are the structures used within a communications system so that, for example, a computer can talk to a printer. Traditionally, developers of software and hardware platforms developed protocols that only their own products could use. However, in order to develop more integrated instrumentation and control systems, standardization of these communication protocols is required.

Standards may evolve from the widespread use of one manufacturer’s protocol (a de facto standard) or may be specifically developed by bodies that represent an industry. Standards allow manufacturers to develop products that can communicate with equipment already in use, which, for the customer, simplifies the integration of products from different sources.

1.4          Open Systems Interconnection (OSI) Model

The OSI model, developed by the International Organization for Standardization (ISO), has gained widespread industry support. The OSI model breaks every design and communication problem down into a number of layers, as shown in Figure 1.1. A physical interface standard such as RS-232 would fit into the ‘Physical layer’, while the higher layers are addressed by various other protocols.

 

 

Figure 1.1

Representation of the OSI model

Messages or data are generally sent in packets, which are simply sequences of bytes. The protocol defines the length of the packet, which is often fixed. Each packet requires a source address and a destination address so that the system knows where to send it, and the receiver knows where it came from. A packet starts at the top of the protocol stack, the Application layer, and passes down through the other software layers until it reaches the Physical layer. It is then sent over the link. When traveling down the stack, the packet acquires additional header information at each layer; this header is addressed to the corresponding layer at the receiving end and tells it how to handle the packet. At the receiving end the packet travels up the stack, with each piece of header information being stripped off along the way. The Application layer on the receiving end eventually receives only the data sent by the Application layer at the transmitting side.
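A minimal Python sketch of this encapsulation idea is shown below; the layer names and header strings are purely illustrative and do not form part of the OSI standard.

    # Hypothetical layer names and headers, purely for illustration.
    LAYERS = ['application', 'transport', 'network', 'data link']

    def send(data):
        packet = data
        for layer in LAYERS:                        # travel down the stack
            packet = f'[{layer} header]' + packet   # each layer prepends its header
        return packet                               # handed to the physical layer

    def receive(packet):
        for layer in reversed(LAYERS):              # travel up the stack
            packet = packet.removeprefix(f'[{layer} header]')
        return packet                               # only the original data remains

    wire = send('temperature = 85.2')
    print(wire)              # the data wrapped in four headers, outermost first
    print(receive(wire))     # prints: temperature = 85.2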

 

The arrows between layers in Figure 1.1 indicate that each layer reads the packet as coming from, or going to, the corresponding layer at the opposite end. This is known as peer-to-peer communication, although the actual packet is transported via the physical link. The middle stack in this particular case (representing a router) has only the three lower layers, which is all that is required for the correct routing of a packet between two devices.

 

The OSI model is useful in providing a universal framework for all communication systems. However, it does not define the actual protocol to be used at each layer. Groups of manufacturers in different areas of industry tend to collaborate in defining software and hardware standards appropriate to their particular industry. Those seeking an overall framework for their specific communications requirements have enthusiastically embraced the OSI model and used it as a basis for industry-specific standards such as Foundation Fieldbus and HART.

1.5          Protocols

As previously mentioned, the OSI model provides a framework within which a specific protocol can be defined. A frame (packet) could consist of the following. The first byte could consist of a string of ‘1’s and ‘0’s to synchronize the receiver, or flags to indicate the start of the frame. The second byte could contain the destination address detailing where the message is going. The third byte could contain the source address noting where the message originated. The bytes in the middle of the message could be the actual data sent from transmitter to receiver. The final byte(s) could contain a checksum for error detection, as well as some optional end-of-frame flags.

 

 

Figure 1.2

Basic structure of an information frame as defined by a protocol
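The following Python sketch builds and checks a frame of this general shape. The flag value, one-byte address fields and simple additive checksum are assumptions for illustration; a real protocol defines its own field sizes and error check.

    FLAG = 0x7E                      # assumed start-of-frame flag byte

    def checksum(data: bytes) -> int:
        # Simple 8-bit additive checksum, purely for illustration.
        return sum(data) & 0xFF

    def build_frame(dest: int, src: int, payload: bytes) -> bytes:
        body = bytes([dest, src]) + payload
        return bytes([FLAG]) + body + bytes([checksum(body)])

    def parse_frame(frame: bytes):
        if frame[0] != FLAG:
            raise ValueError('missing start flag')
        body, received = frame[1:-1], frame[-1]
        if checksum(body) != received:
            raise ValueError('checksum error: frame corrupted in transit')
        return body[0], body[1], body[2:]      # destination, source, data

    frame = build_frame(dest=0x02, src=0x01, payload=b'PV=85.2')
    print(frame.hex(' '))
    print(parse_frame(frame))    # (2, 1, b'PV=85.2')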

Protocols vary from very simple (such as ASCII-based protocols) to very sophisticated, operating at high speeds and transferring megabits of data per second. There is no right or wrong protocol; the choice depends on the particular application.

1.6          Physical Standards

RS-232 interface standard

The most recent version of the standard is TIA-232F (1997), but it is commonly referred to simply by its original designation, RS-232. The original standard was issued in the USA in 1969 to define the electrical and mechanical details of the interface between Data Terminal Equipment (DTE) and Data Communications Equipment (DCE) that employ serial binary data interchange.

 

A serial data communications system might consist of:

  • The DTE, a data sending terminal such as a computer, which is the source of the data (usually a series of characters coded into a suitable digital form)
  • The DCE, which acts as a data converter (such as a modem) to convert the signal into a form suitable for the communications link e.g. analog signals for the telephone system
  • The communications link itself, for example, a telephone system
  • A suitable receiver, such as a modem, also a DCE, which converts the analog signal back to a form suitable for the receiving terminal
  • A data receiving terminal, such as a printer, also a DTE, which receives the digital pulses for decoding back into a series of characters

 

Figure 1.3 illustrates the signal flows across a simple serial data communications link.

 

 

Figure 1.3

A typical serial data communications link

The EIA-232C interface standard described the interface between a terminal (DTE) and a modem (DCE) specifically for the transfer of serial binary digits. It left a lot of flexibility to the designers of the hardware and software protocols. With the passage of time, this interface standard has been adapted for use with numerous other types of equipment such as Personal Computers (PCs), printers, programmable controllers, Programmable Logic Controllers (PLCs), instruments and so on. To recognize these additional applications, a later version of the standard, TIA-232E, expanded the meaning of the acronym DCE from ‘Data Communications Equipment’ to the more general ‘Data Circuit-terminating Equipment’. The current version of the standard is TIA-232F.

 

RS-232 has a number of inherent weaknesses that make it unsuitable for data communications in an industrial environment. Consequently, other RS interface standards have been developed to overcome some of these limitations. The most commonly used among them for instrumentation and control systems are RS-423, RS-422 and RS-485. These will be described in more detail in the next chapter.

RS-423

The RS-423 interface standard is an unbalanced system similar to RS-232, with increased range and data transfer rates and up to 10 line receivers per line driver.

RS-422

The RS-422 interface system is a balanced system with the same range as RS-423, with increased data rates and up to 10 line receivers per line driver.

RS-485

RS-485 is a balanced system with the same range as RS-422, but with increased data rates and up to 32 transmitters and receivers per line. The RS-485 interface standard is very useful for instrumentation and control systems where several devices may be interconnected on the same multi-point network.
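As an illustration of how a PC application might exchange data over such a serial link, the sketch below uses the third-party pyserial library (an assumption, not something specified by the RS standards); the port name, settings and message bytes are hypothetical.

    import serial  # third-party 'pyserial' package, assumed to be installed

    # Open a serial port (e.g. a USB to RS-485 converter) with common settings.
    port = serial.Serial(
        port='/dev/ttyUSB0',            # hypothetical port name
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=1.0,                    # seconds to wait for a reply
    )

    port.write(b'\x02\x01PV?')          # hypothetical request to device address 2
    reply = port.read(32)               # read up to 32 bytes of the reply
    print(reply)
    port.close()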

1.7          Modern Instrumentation and Control Systems

In an instrumentation and control system, data is acquired by measuring instruments and is transmitted to a controller – typically a computer. The controller then transmits data (or control signals) to control devices, which act upon a given process.

 

Integration of a system via a network enables data to be transferred quickly and effectively between different systems in a plant along a data communications link. This eliminates the need for expensive and unwieldy wiring looms and termination points.

 

Productivity and quality are the principal objectives in the efficient management of any production activity. Management can be substantially improved by the availability of accurate and timely data. From this we can conclude that a good instrumentation and control system can facilitate both quality and productivity.

 

The main purpose of an instrumentation and control system, in an industrial environment, is to provide the following:

  • Control of the processes and alarms

      Traditionally, control of processes, such as temperature and flow, was provided by analog controllers operating on standard 4–20 mA loops. The 4–20 mA standard is used in equipment from a wide variety of suppliers. It is common for equipment from various sources to be mixed in the same control system. Stand-alone controllers and instruments have largely been replaced by integrated systems such as Distributed Control Systems (DCSs). (A simple example of 4–20 mA scaling follows this list.)

  • Control of sequencing, interlocking and alarms

      This was typically provided by relays, timers and other components hardwired into control panels and motor control centers. These hardwired sequencing, interlocking and alarm systems have largely been replaced by PLCs.

  • An operator interface for display and control

      Traditionally, process and manufacturing plants were operated from local control panels by several operators, each responsible for a portion of the overall process. Modern control systems tend to use a central control room to monitor the entire plant. The control room is equipped with computer-based operator workstations that gather data from the field instrumentation and use it for graphical display, process control, alarm monitoring, sequence control and interlocking.

  • Management information

      Management information was traditionally provided by taking readings from meters, chart recorders, counters, and transducers and from samples taken from the production process. This data is required to monitor the overall performance of a plant or process and to provide the data necessary to manage the process. Data acquisition is now integrated into the overall control system, which eliminates the manual gathering of information and reduces the time required to correlate and use the information to remove bottlenecks. Good management can achieve substantial productivity gains.
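Returning to the 4–20 mA loops mentioned in the first item above, converting a measured loop current into an engineering value is a simple linear scaling. The sketch below assumes a hypothetical transmitter ranged 0–150 °C.

    def scale_4_20ma(current_ma, low, high):
        # Map 4 mA to the low end of the range and 20 mA to the high end.
        if not 4.0 <= current_ma <= 20.0:
            raise ValueError('loop current outside 4-20 mA: possible fault')
        return low + (current_ma - 4.0) / 16.0 * (high - low)

    # Hypothetical temperature transmitter ranged 0-150 degrees C.
    print(scale_4_20ma(12.0, 0.0, 150.0))   # 12 mA -> 75.0 (mid-scale)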

 

The ability of control equipment to fulfill these requirements has depended on major advances in the fields of integrated electronics, microprocessors and data communications.

 

The four devices that have made the most significant impact on how plants are controlled are:

  • DCSs
  • PLCs
  • Smart Instruments (SIs)
  • PCs

1.8          DCSs

A DCS is a hardware- and software-based digital process control and data acquisition system. The DCS is based on a ‘data highway’ (bus) and has a modular, distributed, but integrated architecture. Each module performs a specific dedicated task such as the operator interface, analog or loop control, or digital control. There is normally an interface unit situated on the data highway, allowing easy connection to other devices such as PLCs and supervisory computers.

1.9          PLCs

PLCs were developed in the late 1960s to replace collections of electromagnetic relays, particularly in the automobile manufacturing industry. They were primarily used for sequence control and interlocking, with racks of on/off inputs and outputs called digital I/O. They are controlled by a central processor using easily written ‘ladder logic’ or ‘function block’ type programs. Modern PLCs now include analog and digital I/O modules as well as sophisticated programming capabilities similar to those of DCSs, e.g. PID loop programming. High speed inter-PLC links are also available, such as 100 Mbps Ethernet. A diagram of a typical PLC system is given in Figure 1.4.

 

Figure 1.4

A typical PLC system
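Since PID loop programming is mentioned above, the following Python sketch shows the kind of discrete PID calculation a PLC or DCS might execute on every scan. The tuning constants, setpoint and scan time are assumed values, and real controllers add features such as output limiting and anti-windup.

    class PID:
        def __init__(self, kp, ki, kd, setpoint, dt):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint, self.dt = setpoint, dt
            self.integral = 0.0
            self.previous_error = 0.0

        def update(self, measurement):
            # Classic textbook PID: proportional + integral + derivative terms.
            error = self.setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.previous_error) / self.dt
            self.previous_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Assumed tuning and a 0.1 s scan time; drive a measurement of 40 toward 50.
    loop = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0, dt=0.1)
    print(loop.update(40.0))    # controller output for the current scan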

1.10        Smart Instrumentation Systems

In the 1960s, the 4–20 mA analog interface was established as the de facto standard for instrumentation technology; consequently the manufacturers of instrumentation equipment had a standard communication interface on which to base their products. Users had a choice of instruments and sensors from a wide range of suppliers, which could be integrated into their control systems.

 

With the advent of microprocessors and the development of digital technology, the situation has changed. Most users appreciate the many advantages of digital instruments, such as more information being displayed on a single instrument, local and remote displays, reliability, economy, self-tuning, and diagnostic capabilities. There is therefore a gradual shift from analog to digital technology.

 

There are a number of intelligent digital sensors with digital communications capability for most traditional applications. These include sensors for measuring temperature, pressure, level, flow, mass (weight), density, and power system parameters. These new intelligent digital sensors are known as ‘Smart Instruments’ or ‘SIs’.

 

The main features of SIs are:

  • Intelligent, digital sensors
  • Digital data communications capability
  • Ability to be multidropped with other devices

 

There is also an emerging range of intelligent, communicating digital devices called ‘smart actuators’. Examples include variable speed drives, soft starters, protection relays and switchgear control with digital communication facilities.

 

 

Figure 1.5

A complex data communications system

 

 
