Distributed Systems
[R15A0520]
LECTURE NOTES
B III YEAR – II SEM(R15)
(2018-19)
DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
MALLA REDDY COLLEGE OF ENGINEERING &
TECHNOLOGY
(Autonomous Institution – UGC, Govt. of India)
Recognized under 2(f) and 12 (B) of UGC ACT 1956 (Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – ‘A’ Grade - ISO 9001:2015 Certified) Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India
III Year B. Tech. CSE – II Sem    L: 4   T/P/D: -/-/-   C: 3
(R15A0524) DISTRIBUTED SYSTEMS
Objectives: To learn the principles, architectures, algorithms and programming models used in distributed systems. To examine state-of-the-art distributed systems, such as Google File System. To design and implement sample distributed systems.
UNIT I Characterization of Distributed Systems: Introduction, Examples of Distributed systems, Resource sharing and web, challenges. System Models: Introduction, Architectural and Fundamental models.
UNIT II Time and Global States: Introduction, Clocks, Events and Process states, Synchronizing physical clocks, Logical time and Logical clocks, Global states, Distributed Debugging. Coordination and Agreement: Introduction, Distributed mutual exclusion, Elections, Multicast Communication, Consensus and Related problems.
UNIT III Inter Process Communication: Introduction, The API for the internet protocols, External Data Representation and Marshalling, Client-Server Communication, Group Communication, Case Study: IPC in UNIX. Distributed Objects and Remote Invocation: Introduction, Communication between Distributed Objects, Remote Procedure Call, Events and Notifications, Case study-Java RMI.
UNIT IV Distributed File Systems: Introduction, File service Architecture, Case Study1: Sun Network File System, Case Study 2: The Andrew File System. Name Services: Introduction, Name Services and the Domain Name System, Directory Services, Case study of the Global Name Service. Distributed Shared Memory: Introduction Design and Implementation issues, Sequential consistency and Ivy case study, Release consistency and Munin case study, other consistency models.
UNIT V Transactions and Concurrency Control: Introduction, Transactions, Nested Transactions, Locks, Optimistic concurrency control, Timestamp ordering, Comparison of methods for concurrency control. Distributed Transactions: Introduction, Flat and Nested Distributed Transactions, Atomic commit protocols, Concurrency control in distributed transactions, Distributed deadlocks, Transaction recovery
TEXT BOOK:
Distributed Systems: Concepts and Design, George Coulouris, J. Dollimore and Tim Kindberg, Pearson Education, 4th Edition, 2009.
REFERENCES:
- Distributed Systems: Principles and Paradigms, Andrew S. Tanenbaum and Maarten Van Steen, Second Edition, PHI.
- Distributed Systems: An Algorithmic Approach, Sukumar Ghosh, Chapman & Hall/CRC, Taylor & Francis Group, 2007.
Outcomes: Students will identify the core concepts of distributed systems: the way in which several machines orchestrate to correctly solve problems in an efficient, reliable and scalable way. Students will examine how existing systems have applied the concepts of distributed systems in designing large systems, and will additionally apply these concepts to develop sample systems.
INDEX

UNIT I: Characterization of Distributed Systems; System Models
UNIT II: Time and Global States; Coordination and Agreement
UNIT III: Inter Process Communication; Distributed Objects and Remote Invocation
UNIT IV: Distributed File Systems; Name Services; Distributed Shared Memory
UNIT V: Transactions and Concurrency Control; Distributed Transactions
DISTRIBUTED SYSTEMS
UNIT I
Characterization of Distributed Systems: Introduction, Examples of Distributed systems, Resource sharing and web, challenges. System Models: Introduction, Architectural and Fundamental models.
Examples of Distributed Systems – Trends in Distributed Systems – Focus on resource sharing – Challenges. Case study: World Wide Web.
Introduction
A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.
Distributed systems Principles
A distributed system consists of a collection of autonomous computers, connected through a network and distribution middleware, which enables computers to coordinate their activities and to share the resources of the system, so that users perceive the system as a single, integrated computing facility.
Centralised System Characteristics
- One component with non-autonomous parts
- Component shared by users all the time
- All resources accessible
- Software runs in a single process
- Single point of control
- Single point of failure
Distributed System Characteristics
- Multiple autonomous components
- Components are not shared by all users
- Resources may not be accessible
- Software runs in concurrent processes on different processors
- Multiple points of control
- Multiple points of failure
Examples of distributed systems and applications of distributed computing include the following:
- telecommunication networks: telephone and cellular networks, computer networks such as the Internet, wireless sensor networks, routing algorithms;
- network applications: the World Wide Web and peer-to-peer networks, massively multiplayer online games and virtual reality communities, distributed databases and distributed database management systems, network file systems, distributed information processing systems such as banking systems and airline reservation systems;
- real-time process control: aircraft control systems, industrial control systems;
- parallel computation: scientific computing, including cluster computing, grid computing and various volunteer computing projects (see the list of distributed computing projects), and distributed rendering in computer graphics.

Common Characteristics
Certain common characteristics can be used to assess distributed systems
- Resource Sharing
- Openness
- Concurrency
- Scalability
- Fault Tolerance
- Transparency
Resource Sharing
Ability to use any hardware, software or data anywhere in the system. A resource manager controls access, provides a naming scheme and controls concurrency. A resource-sharing model (e.g. client/server or object-based) describes how resources are provided, how they are used, and how provider and user interact with each other.
Openness
Openness is concerned with extensions and improvements of distributed systems. Detailed interfaces of components need to be published. New components have to be integrated with existing components. Differences in data representation of interface types on different processors (of different vendors) have to be resolved.
Concurrency
Components in distributed systems are executed in concurrent processes.
Components access and update shared resources (e.g. variables, databases, device drivers). The integrity of the system may be violated if concurrent updates are not coordinated:
- Lost updates
- Inconsistent analysis
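The lost-update anomaly can be shown with a small deterministic sketch (the interleaving of the two clients is written out by hand; the balance figures are made up for illustration):

```python
# Sketch: a "lost update" caused by two uncoordinated clients.
# The interleaving is simulated step by step, so the outcome is
# deterministic (a real race would be timing-dependent).
balance = 100

read_a = balance          # client A reads the shared balance: 100
read_b = balance          # client B also reads 100, before A writes back
balance = read_a + 20     # client A deposits 20 and writes 120
balance = read_b + 50     # client B deposits 50 and writes 150, overwriting A
print(balance)            # 150 - client A's deposit is lost (expected 170)
```

Coordinating the updates (for example with locks, covered in Unit V) would force one client's read-modify-write to complete before the other begins.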
Scalability
Adaptation of distributed systems to:
- accommodate more users
- respond faster (this is the hard one)
This is usually done by adding more and/or faster processors. Components should not need to be changed when the scale of a system increases; design components to be scalable.
Fault Tolerance
Hardware, software and networks fail!
Distributed systems must maintain availability even at low levels of hardware/software/network reliability. Fault tolerance is achieved by
- recovery
- redundancy
Transparency
Distributed systems should be perceived by users and application programmers as a whole rather than as a collection of cooperating components.
- Transparency has different dimensions that were identified by ANSA.
- These represent various properties that distributed systems should have.
Access Transparency
Enables local and remote information objects to be accessed using identical operations.
- Example: File system operations in NFS.
- Example: Navigation in the Web.
- Example: SQL Queries
Location Transparency
Enables information objects to be accessed without knowledge of their location.
- Example: File system operations in NFS
- Example: Pages in the Web
- Example: Tables in distributed databases
Concurrency Transparency
Enables several processes to operate concurrently using shared information objects without interference between them.
- Example: NFS
- Example: Automatic teller machine network
- Example: Database management system
Replication Transparency
Enables multiple instances of information objects to be used to increase reliability and performance without knowledge of the replicas by users or application programs
- Example: Distributed DBMS
- Example: Mirroring Web Pages.
Failure Transparency
- Enables the concealment of faults
- Allows users and applications to complete their tasks despite the failure of other components.
- Example: Database Management System
Migration Transparency
Allows the movement of information objects within a system without affecting the operations of users or application programs
- Example: NFS
- Example: Web Pages
Performance Transparency
Allows the system to be reconfigured to improve performance as loads vary.
- Example: Distributed make.
Scaling Transparency
Allows the system and applications to expand in scale without change to the system structure or the application algorithms.
- Example: World-Wide-Web
- Example: Distributed Database
Distributed Systems: Hardware Concepts
- Multiprocessors
- Multicomputers
Networks of Computers
Multiprocessors and Multicomputers
Distinguishing features:
- Private versus shared memory
- Bus versus switched interconnection
Networks of Computers
High degree of node heterogeneity:
- High-performance parallel systems (multiprocessors as well as multicomputers)
- High-end PCs and workstations (servers)
- Simple network computers (offer users only network access)
- Mobile computers (palmtops, laptops)
- Multimedia workstations
High degree of network heterogeneity:
- Local-area gigabit networks
- Wireless connections
- Long-haul, high-latency connections
- Wide-area switched megabit connections
Distributed Systems: Software Concepts
- Distributed operating system
- Network operating system
- Middleware
Distributed Operating System
Some characteristics:
- OS on each computer knows about the other computers
- OS on different computers generally the same
- Services are generally (transparently) distributed across computers
Network Operating System
Some characteristics:
- Each computer has its own operating system with networking facilities
- Computers work independently (i.e., they may even have different operating systems)
- Services are tied to individual nodes (ftp, telnet, WWW)
- Highly file oriented (basically, processors share only files)
Distributed System (Middleware)
Some characteristics:
- OS on each computer need not know about the other computers
- OS on different computers need not generally be the same
- Services are generally (transparently) distributed across computers
Need for Middleware
Motivation: too many networked applications were hard to integrate:
- Departments are running different NOSs
- Integration and interoperability only at the level of primitive NOS services
- Need for federated information systems:
  - combining different databases, but providing a single view to applications
  - setting up enterprise-wide Internet services, making use of existing information systems
  - allowing transactions across different databases
  - allowing extensibility for future services (e.g., mobility, teleworking, collaborative applications)
- Constraint: use the existing operating systems, and treat them as the underlying environment (they provide the basic functionality anyway)
Communication services: abandon primitive socket-based message passing in favor of:
- Procedure calls across networks
- Remote-object method invocation
- Message-queuing systems
- Advanced communication streams
- Event notification services

Information system services: services that help manage data in a distributed system:
- Large-scale, system-wide naming services
- Advanced directory services (search engines)
- Location services for tracking mobile objects
- Persistent storage facilities
- Data caching and replication
Control services: services giving applications control over when, where, and how they access data:
- Distributed transaction processing
- Code migration

Security services: services for secure processing and communication:
- Authentication and authorization services
- Simple encryption services
- Auditing services
Comparison of DOS, NOS, and Middleware
Networks of computers are everywhere. The Internet is one, as are the many networks of which it is composed. Mobile phone networks, corporate networks, factory networks, campus networks, home networks, in-car networks – all of these, both separately and in combination, share the essential characteristics that make them relevant subjects for study under the heading distributed systems.
Distributed systems have the following significant consequences:
Concurrency: In a network of computers, concurrent program execution is the norm. I can do my work on my computer while you do your work on yours, sharing resources such as web pages or files when necessary. The capacity of the system to handle shared resources can be increased by adding more resources (for example, computers) to the network. We will describe ways in which this extra capacity can be usefully deployed at many points in this book. The coordination of concurrently executing programs that share resources is also an important and recurring topic.
No global clock: When programs need to cooperate they coordinate their actions by exchanging messages. Close coordination often depends on a shared idea of the time at which the programs’ actions occur. But it turns out that there are limits to the accuracy with which the computers in a network can synchronize their clocks – there is no single global notion of the correct time. This is a direct consequence of the fact that the only communication is by sending messages through a network.
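One standard response to the absence of a global clock, treated in detail in Unit II, is to order events by logical time rather than physical time. A minimal sketch of a Lamport logical clock (the class and its method names are illustrative, not taken from a particular library):

```python
# Sketch: Lamport logical clocks order events without any shared
# physical clock - each process keeps a counter and merges it with
# timestamps carried on messages.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Advance the clock for a local event."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp attached to an outgoing message."""
        return self.tick()

    def receive(self, msg_time):
        """On receipt, jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()        # a's clock becomes 1; message carries timestamp 1
b.receive(t)        # b's clock becomes max(0, 1) + 1 == 2
print(a.time, b.time)
```

The receiving rule guarantees that a message's send event always has a smaller timestamp than its receive event, which is exactly the "shared idea of time" that message passing alone can provide.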
Independent failures: All computer systems can fail, and it is the responsibility of system designers to plan for the consequences of possible failures. Distributed systems can fail in new ways. Faults in the network result in the isolation of the computers that are connected to it, but that doesn’t mean that they stop running. In fact, the programs on them may not be able to detect whether the network has failed or has become unusually slow. Similarly, the failure of a computer, or the unexpected termination of a program somewhere in the system (a crash), is not immediately made known to the other components with which it communicates. Each component of the system can fail independently, leaving the others still running.
TRENDS IN DISTRIBUTED SYSTEMS
Distributed systems are undergoing a period of significant change and this can be traced back to
a number of influential trends:
the emergence of pervasive networking technology; the emergence of ubiquitous computing coupled with the desire to support user mobility in distributed systems; the increasing demand for multimedia services; the view of distributed systems as a utility.
Internet
The modern Internet is a vast interconnected collection of computer networks of many different types, with the range of types increasing all the time and now including, for example, a wide range of wireless communication technologies such as WiFi, WiMAX, Bluetooth and third-generation mobile phone networks. The net result is that networking has become a pervasive resource and devices can be connected (if desired) at any time and in any place.
A typical portion of the Internet
The Internet is also a very large distributed system. It enables users, wherever they are, to make use of services such as the World Wide Web, email and file transfer. (Indeed, the Web is sometimes incorrectly equated with the Internet.) The set of services is open-ended – it can be extended by the addition of server computers and new types of service. The figure shows a collection of intranets – subnetworks operated by companies and other organizations and typically protected by firewalls. The role of a firewall is to protect an intranet by preventing unauthorized messages from leaving or entering. A firewall is implemented by filtering incoming and outgoing messages. Filtering might be done by source or destination, or a firewall might allow only those messages related to email and web access to pass into or out of the intranet that it protects. Internet Service Providers (ISPs) are companies that provide broadband links and other types of connection to individual users and small organizations, enabling them to access services anywhere in the Internet as well as providing local services such as email and web hosting. The intranets are linked together by backbones. A backbone is a network link with a high transmission capacity, employing satellite connections, fibre optic cables and other high-bandwidth circuits.
Computers vs. Web servers in the Internet
Intranet
- A portion of the Internet that is separately administered, with a boundary that can be configured to enforce local security policies
- Composed of several LANs linked by backbone connections
- Connected to the Internet via a router
Figure: A typical intranet – desktop computers, an email server, print and other servers, a Web server and a file server on a local area network, connected to the rest of the Internet through a router/firewall.
Main issues in the design of components for use in an intranet:
- File services
- Firewall
- The cost of software installation and support
Mobile and ubiquitous computing
Technological advances in device miniaturization and wireless networking have led increasingly to the integration of small and portable computing devices into distributed systems. These devices include:
- Laptop computers.
- Handheld devices, including mobile phones, smart phones, GPS-enabled devices, pagers, personal digital assistants (PDAs), video cameras and digital cameras.
- Wearable devices, such as smart watches with functionality similar to a PDA.
- Devices embedded in appliances such as washing machines, hi-fi systems, cars and refrigerators.
The portability of many of these devices, together with their ability to connect conveniently to networks in different places, makes mobile computing possible. Mobile computing is the
performance of computing tasks while the user is on the move, or visiting places other than their usual environment. In mobile computing, users who are away from their ‘home’ intranet (the intranet at work, or their residence) are still provided with access to resources via the devices they carry with them. They can continue to access the Internet; they can continue to access resources in their home intranet; and there is increasing provision for users to utilize resources such as printers or even sales points that are conveniently nearby as they move around. The latter is also known as location-aware or context-aware computing. Mobility introduces a number of challenges for distributed systems, including the need to deal with variable connectivity and indeed disconnection, and the need to maintain operation in the face of device mobility.
Figure: Portable and handheld devices in a distributed system – a host intranet with a wireless LAN and printer, a WAP gateway, and a home intranet, all reachable via the Internet by devices such as a mobile phone, laptop and camera at the host site.
Ubiquitous computing is the harnessing of many small, cheap computational devices that are present in users’ physical environments, including the home, office and even natural settings. The term ‘ubiquitous’ is intended to suggest that small computing devices will eventually become so pervasive in everyday objects that they are scarcely noticed. That is, their computational behaviour will be transparently and intimately tied up with their physical function.
The presence of computers everywhere only becomes useful when they can communicate with one another. For example, it may be convenient for users to control their washing machine or their entertainment system from their phone or a ‘universal remote control’ device in the home. Equally, the washing machine could notify the user via a smart badge or phone when the washing is done.
Ubiquitous and mobile computing overlap, since the mobile user can in principle benefit from computers that are everywhere. But they are distinct, in general. Ubiquitous computing could benefit users while they remain in a single environment such as the home or a hospital. Similarly, mobile computing has advantages even if it involves only conventional, discrete computers and devices such as laptops and printers.

RESOURCE SHARING
- Resource sharing is the primary motivation of distributed computing.
- Resource types:
  - Hardware, e.g. printer, scanner, camera
  - Data, e.g. file, database, web page
  - More specific functionality, e.g. search engine, file
- Service: manages a collection of related resources and presents their functionality to users and applications.
- Server: a process on a networked computer that accepts requests from processes on other computers to perform a service and responds appropriately.
- Client: the requesting process.
- Remote invocation: a complete interaction between client and server, from the point when the client sends its request to when it receives the server’s response.
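A single remote invocation can be sketched in a few lines (a minimal illustration, not a real service: the server is simulated by a thread on localhost, and port 0 lets the OS choose a free port):

```python
# Sketch: one client-server remote invocation over TCP.
import socket
import threading

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    request = conn.recv(1024)            # accept the client's request
    conn.sendall(b"echo: " + request)    # perform the service and respond
    conn.close()

threading.Thread(target=serve_once).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")                 # the request starts the invocation
reply = client.recv(1024)                # the response completes it
client.close()
server.close()
print(reply.decode())
```

The interval between `sendall` and `recv` returning is exactly the "complete interaction" the definition above describes.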
The World Wide Web
- Motivation: document sharing between physicists at CERN.
- The Web is an open system: it can be extended and implemented in new ways without disturbing its existing functionality, and it is open with respect to the types of ‘resource’ that can be published and shared on it.
- Its operation is based on communication standards and document standards:
  - HyperText Markup Language (HTML): a language for specifying the contents and layout of pages
  - Uniform Resource Locators (URLs): identify documents and other resources
  - A client-server architecture with HTTP, by which browsers and other clients fetch documents and other resources from web servers
HTML
HTML text is stored in a file on a web server. A browser retrieves the contents of this file from the web server and interprets the HTML text. The server can infer the content type from the filename extension.
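This extension-to-content-type mapping can be tried directly with Python's standard mimetypes module, which consults a built-in table much like a web server's configuration does:

```python
# Sketch: inferring a content type from a filename extension,
# as a web server does before replying to a browser.
import mimetypes

content_type, _ = mimetypes.guess_type("index.html")
print(content_type)   # text/html
```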
URL
HTTP URLs are the most widely used. An HTTP URL has two main jobs:
- to identify which web server maintains the resource
- to identify which of the resources at that server is required
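The two jobs correspond to two components of the URL, which Python's urllib.parse separates cleanly (the example URL is made up for illustration):

```python
# Sketch: the two jobs of an HTTP URL - server identity and
# resource identity - exposed by parsing.
from urllib.parse import urlparse

url = urlparse("http://www.example.org/docs/notes.html")
print(url.netloc)   # which web server maintains the resource
print(url.path)     # which of the resources at that server
```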
Figures: Web servers and web browsers; HTTP URLs.
HTTP
- Defines the ways in which browsers and other types of client interact with web servers (RFC 2616)
- Main features:
  - Request-reply interaction
  - Content types: the strings that denote the type of content are called MIME types (RFC 2045, 2046)
  - One resource per request (HTTP version 1.0)
  - Simple access control
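The request-reply interaction has a simple textual shape. The messages below are written out by hand (no network access; a real client would send these bytes over a TCP connection to the server named in the URL):

```python
# Sketch: the textual shape of an HTTP request and reply.
request = (
    "GET /index.html HTTP/1.0\r\n"    # method, resource, protocol version
    "Host: www.example.org\r\n"
    "\r\n"                            # blank line ends the headers
)

reply = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"     # a MIME type names the content
    "\r\n"
    "<html>...</html>"
)

status = int(reply.split()[1])        # the status code: 200 means success
print(status)
```

Note the one-resource-per-request pattern: fetching a page with several images means repeating this exchange once per resource under HTTP 1.0.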
More features-services and dynamic pages
- Dynamic content
- Common Gateway Interface: a program that web servers run to generate content for their clients
- Downloaded code
- JavaScript
- Applet
Discussion of the Web
- Dangling links: a resource is deleted or moved, but links to it may still remain.
- Finding information easily: e.g. the Resource Description Framework, which standardizes the format of metadata about web resources.
- Exchanging information easily: e.g. XML, a self-describing language.
- Scalability: heavy load on popular web servers; more applets or many images in pages increase the download time.
THE CHALLENGES IN DISTRIBUTED SYSTEM:
Heterogeneity
The Internet enables users to access services and run applications over a heterogeneous collection of computers and networks. Heterogeneity (that is, variety and difference) applies to all of the following:
networks; computer hardware; operating systems; programming languages; implementations by different developers
Although the Internet consists of many different sorts of network, their differences are masked by the fact that all of the computers attached to them use the Internet protocols to communicate with one another. For example, a computer attached to an Ethernet has an implementation of the Internet protocols over the Ethernet, whereas a computer on a different sort of network will need an implementation of the Internet protocols for that network. Data types such as integers may be represented in different ways on different sorts of hardware – for example, there are two alternatives for the byte ordering of integers. These differences in representation must be dealt with if messages are to be exchanged between programs running on different hardware. Although the operating systems of all computers on the Internet need to include an implementation of the Internet protocols, they do not necessarily all provide the same application programming interface to these protocols. For example, the calls for exchanging messages in UNIX are different from the calls in Windows.
Different programming languages use different representations for characters and data structures such as arrays and records. These differences must be addressed if programs written in different languages are to be able to communicate with one another. Programs written by different developers cannot communicate with one another unless they use common standards, for example, for network communication and the representation of primitive data items and data structures in messages. For this to happen, standards need to be agreed and adopted – as have the Internet protocols.
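The byte-ordering difference mentioned above is easy to make concrete with Python's struct module, which packs the same 32-bit integer in either order; resolving exactly this difference is part of what marshalling (Unit III) must do:

```python
# Sketch: the two byte orderings of the same 32-bit integer.
import struct

big = struct.pack(">i", 1)     # big-endian ("network") byte order
little = struct.pack("<i", 1)  # little-endian byte order
print(big.hex())               # 00000001
print(little.hex())            # 01000000

# The bytes differ, but each side decodes its own form to the same value.
assert struct.unpack(">i", big)[0] == struct.unpack("<i", little)[0]
```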
Middleware • The term middleware applies to a software layer that provides a programming abstraction as well as masking the heterogeneity of the underlying networks, hardware, operating systems and programming languages. The Common Object Request Broker Architecture (CORBA) is an example. Some middleware, such as Java Remote Method Invocation (RMI), supports only a single programming language. Most middleware is implemented over the Internet protocols, which themselves mask the differences of the underlying networks, but all middleware deals with the differences in operating systems and hardware.
Heterogeneity and mobile code • The term mobile code is used to refer to program code that can be transferred from one computer to another and run at the destination – Java applets are an example. Code suitable for running on one computer is not necessarily suitable for running on another because executable programs are normally specific both to the instruction set and to the host operating system. The virtual machine approach provides a way of making code executable on a variety of host computers: the compiler for a particular language generates code for a virtual machine instead of
a particular hardware order code. For example, the Java compiler produces code for a Java virtual machine, which executes it by interpretation. The Java virtual machine needs to be implemented once for each type of computer to enable Java programs to run. Today, the most commonly used form of mobile code is the inclusion of JavaScript programs in some web pages loaded into client browsers.
Openness
The openness of a computer system is the characteristic that determines whether the system can be extended and reimplemented in various ways. The openness of distributed systems is determined primarily by the degree to which new resource-sharing services can be added and be made available for use by a variety of client programs.
Openness cannot be achieved unless the specification and documentation of the key software interfaces of the components of a system are made available to software developers. In other words, the key interfaces are published. This process is akin to the standardization of interfaces, but it often bypasses official standardization procedures, which are usually cumbersome and slow-moving. However, the publication of interfaces is only the starting point for adding and extending services in a distributed system. The challenge to designers is to tackle the complexity of distributed systems consisting of many components engineered by different people. The designers of the Internet protocols introduced a series of documents called ‘Requests For Comments’, or RFCs, each of which is known by a number. The specifications of the Internet communication protocols were published in this series in the early 1980s, followed by specifications for applications that run over them, such as file transfer, email and telnet by the mid-1980s.
Systems that are designed to support resource sharing in this way are termed open distributed systems to emphasize the fact that they are extensible. They may be extended at the hardware level by the addition of computers to the network and at the software level by the introduction of new services and the reimplementation of old ones, enabling application programs to share resources.
To summarize:
- Open systems are characterized by the fact that their key interfaces are published.
- Open distributed systems are based on the provision of a uniform communication mechanism and published interfaces for access to shared resources.
- Open distributed systems can be constructed from heterogeneous hardware and software, possibly from different vendors. But the conformance of each component to the published standard must be carefully tested and verified if the system is to work correctly.
Security
Many of the information resources that are made available and maintained in distributed systems have a high intrinsic value to their users. Their security is therefore of considerable importance. Security for information resources has three components: confidentiality (protection against disclosure to unauthorized individuals), integrity
(protection against alteration or corruption), and availability (protection against interference with the means to access the resources).
In a distributed system, clients send requests to access data managed by servers, which involves sending information in messages over a network. For example:
- A doctor might request access to hospital patient data or send additions to that data.
- In electronic commerce and banking, users send their credit card numbers across the Internet.
In both examples, the challenge is to send sensitive information in a message over a network in a secure manner. But security is not just a matter of concealing the contents of messages – it also involves knowing for sure the identity of the user or other agent on whose behalf a message was sent.
However, the following two security challenges have not yet been fully met:
Denial of service attacks: Another security problem is that a user may wish to disrupt a service for some reason. This can be achieved by bombarding the service with such a large number of pointless requests that the serious users are unable to use it. This is called a denial of service attack. There have been several denial of service attacks on well-known web services. Currently such attacks are countered by attempting to catch and punish the perpetrators after the event, but that is not a general solution to the problem.
Security of mobile code: Mobile code needs to be handled with care. Consider someone who receives an executable program as an electronic mail attachment: the possible effects of running the program are unpredictable; for example, it may seem to display an interesting picture but in reality it may access local resources, or perhaps be part of a denial of service attack.
Scalability
Distributed systems operate effectively and efficiently at many different scales, ranging from a small intranet to the Internet. A system is described as scalable if it will remain effective when there is a significant increase in the number of resources and the number of users. The number of computers and servers in the Internet has increased dramatically. Figure 1 shows the increasing number of computers and web servers during the 12-year history of the Web up to 2005 [zakon]. It is interesting to note the significant growth in both computers and web servers in this period, but also that the relative percentage is flattening out – a trend that is explained by the growth of fixed and mobile personal computing. One web server may also increasingly be hosted on multiple computers.
The design of scalable distributed systems presents the following challenges:
Controlling the cost of physical resources: As the demand for a resource grows, it should be possible to extend the system, at reasonable cost, to meet it. For example, the frequency with which files are accessed in an intranet is likely to grow as the number of users and computers increases. It must be possible to add server computers to avoid the performance bottleneck that would arise if a single file server had to handle all file access requests. In general, for a system with n users to be scalable, the quantity of physical resources required to support them should be
at most O(n) – that is, proportional to n. For example, if a single file server can support 20 users, then two such servers should be able to support 40 users.
Controlling the performance loss: Consider the management of a set of data whose size is proportional to the number of users or resources in the system – for example, the table with the correspondence between the domain names of computers and their Internet addresses held by the Domain Name System, which is used mainly to look up DNS names such as amazon. Algorithms that use hierarchic structures scale better than those that use linear structures. But even with hierarchic structures an increase in size will result in some loss in performance: the time taken to access hierarchically structured data is O(log n), where n is the size of the set of data. For a system to be scalable, the maximum performance loss should be no worse than this.
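The difference between linear and hierarchic structures can be made concrete with a small sketch (the functions and the fanout value are illustrative assumptions, not part of any real name service):

```python
import math

def linear_lookup_cost(n):
    """Worst-case probes to find an entry in a flat, unsorted table of n entries."""
    return n

def hierarchic_lookup_cost(n, fanout=2):
    """Worst-case probes for a balanced hierarchy with the given fanout."""
    return math.ceil(math.log(n, fanout)) if n > 1 else 1

# Growing the name table a thousandfold adds only ~10 probes to a binary
# hierarchy, while a linear scan grows a thousandfold.
for n in (1_000, 1_000_000):
    print(n, linear_lookup_cost(n), hierarchic_lookup_cost(n))
```

This is why a scalable design treats O(log n) access cost as the acceptable worst case: the performance loss grows far more slowly than the data.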
Preventing software resources running out: An example of lack of scalability is shown by the numbers used as Internet (IP) addresses (computer addresses in the Internet). In the late 1970s, it was decided to use 32 bits for this purpose, but as will be explained in Chapter 3, the supply of available Internet addresses is running out. For this reason, a new version of the protocol with 128-bit Internet addresses is being adopted, and this will require modifications to many software components.
Avoiding performance bottlenecks: In general, algorithms should be decentralized to avoid having performance bottlenecks. We illustrate this point with reference to the predecessor of the Domain Name System, in which the name table was kept in a single master file that could be downloaded to any computers that needed it. That was fine when there were only a few hundred computers in the Internet, but it soon became a serious performance and administrative bottleneck.
Failure handling
Computer systems sometimes fail. When faults occur in hardware or software, programs may produce incorrect results or may stop before they have completed the intended computation. Failures in a distributed system are partial – that is, some components fail while others continue to function. Therefore the handling of failures is particularly difficult.
Detecting failures: Some failures can be detected. For example, checksums can be used to detect corrupted data in a message or a file. It is difficult or even impossible to detect some other
failures, such as a remote crashed server in the Internet. The challenge is to manage in the presence of failures that cannot be detected but may be suspected.
Masking failures: Some failures that have been detected can be hidden or made less severe. Two examples of hiding failures:
- Messages can be retransmitted when they fail to arrive.
- File data can be written to a pair of disks so that if one is corrupted, the other may still be correct.
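Both masking techniques can be sketched in a few lines; the helper names and the dictionary-as-disk model are hypothetical stand-ins for real messaging and storage layers:

```python
def send_with_retransmit(send_once, retries=3):
    """Mask a lost message by retransmitting until an ack arrives (or give up)."""
    for attempt in range(retries):
        if send_once():          # returns True when an acknowledgement is received
            return True
    return False

def mirrored_write(disks, block_id, data):
    """Write the same block to every disk; a read can fall back to a replica."""
    for disk in disks:
        disk[block_id] = data

disk_a, disk_b = {}, {}
mirrored_write([disk_a, disk_b], 7, b"payload")
disk_a.clear()                    # simulate corruption of one disk
recovered = disk_a.get(7) or disk_b.get(7)
print(recovered)                  # the mirror still holds b'payload'
```

In both cases the failure is hidden from the caller rather than eliminated: the message may still be lost after all retries, and both disks may still fail together.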
Tolerating failures: Most of the services in the Internet do exhibit failures – it would not be practical for them to attempt to detect and hide all of the failures that might occur in such a large network with so many components. Their clients can be designed to tolerate failures, which generally involves the users tolerating them as well. For example, when a web browser cannot contact a web server, it does not make the user wait for ever while it keeps on trying – it informs the user about the problem, leaving them free to try again later. Services that tolerate failures are discussed in the paragraph on redundancy below.
Recovery from failures: Recovery involves the design of software so that the state of permanent data can be recovered or ‘rolled back’ after a server has crashed. In general, the computations performed by some programs will be incomplete when a fault occurs, and the permanent data that they update (files and other material stored in permanent storage) may not be in a consistent state.
Redundancy: Services can be made to tolerate failures by the use of redundant components. Consider the following examples:
- There should always be at least two different routes between any two routers in the Internet.
- In the Domain Name System, every name table is replicated in at least two different servers.
- A database may be replicated in several servers to ensure that the data remains accessible after the failure of any single server; the servers can be designed to detect faults in their peers; when a fault is detected in one server, clients are redirected to the remaining servers.
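The redirection of clients to the remaining servers can be sketched as follows; the `FailoverClient` class and the callables standing in for replica servers are illustrative assumptions:

```python
class FailoverClient:
    """Try replicas in turn; redirect to a peer when one server has failed."""
    def __init__(self, servers):
        self.servers = servers    # callables standing in for replica servers

    def request(self, query):
        last_error = None
        for server in self.servers:
            try:
                return server(query)
            except ConnectionError as e:   # treated as a detected server fault
                last_error = e
        raise last_error                   # every replica failed

def healthy(q):
    return f"result for {q}"

def crashed(q):
    raise ConnectionError("server down")

client = FailoverClient([crashed, healthy])
print(client.request("x"))        # redirected to the healthy replica
```

The service remains available as long as at least one replica can be reached, which is the point of the redundancy.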
Concurrency
Both services and applications provide resources that can be shared by clients in a distributed system. There is therefore a possibility that several clients will attempt to access a shared resource at the same time. For example, a data structure that records bids for an auction may be accessed very frequently when it gets close to the deadline time. The process that manages a shared resource could take one client request at a time. But that approach limits throughput. Therefore services and applications generally allow multiple client requests to be processed concurrently. To make this more concrete, suppose that each resource is encapsulated as an object and that invocations are executed in concurrent threads. In this case it is possible that several threads may be executing concurrently within an object, in which case their operations on the object may conflict with one another and produce inconsistent results.
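The auction example can be sketched with threads sharing one object; the `Auction` class is hypothetical, and the lock shows one standard way of preventing conflicting updates from producing inconsistent results:

```python
import threading

class Auction:
    """Shared bid record; a lock serializes conflicting updates."""
    def __init__(self):
        self._highest = 0
        self._lock = threading.Lock()

    def bid(self, amount):
        with self._lock:                   # one update to the record at a time
            if amount > self._highest:
                self._highest = amount

    @property
    def highest(self):
        return self._highest

auction = Auction()
threads = [threading.Thread(target=auction.bid, args=(amt,))
           for amt in (10, 30, 20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(auction.highest)   # 30, regardless of how the threads interleave
```

Without the lock, two threads could both read the old highest bid before either writes, and a lower bid could overwrite a higher one.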
Transparency
Transparency is defined as the concealment from the user and the application programmer of the separation of components in a distributed system, so that the system is perceived as a whole rather than as a collection of independent components. The implications of transparency are a major influence on the design of the system software.
Access transparency enables local and remote resources to be accessed using identical operations. Location transparency enables resources to be accessed without knowledge of their physical or network location (for example, which building or IP address).
Concurrency transparency enables several processes to operate concurrently using shared resources without interference between them. Replication transparency enables multiple instances of resources to be used to increase reliability and performance without knowledge of the replicas by users or application programmers. Failure transparency enables the concealment of faults, allowing users and application programs to complete their tasks despite the failure of hardware or software components. Mobility transparency allows the movement of resources and clients within a system without affecting the operation of users or programs. Performance transparency allows the system to be reconfigured to improve performance as loads vary. Scaling transparency allows the system and applications to expand in scale without change to the system structure or the application algorithms.
Quality of service
Once users are provided with the functionality that they require of a service, such as the file service in a distributed system, we can go on to ask about the quality of the service provided. The main nonfunctional properties of systems that affect the quality of the service experienced by clients and users are reliability, security and performance. Adaptability to meet changing system configurations and resource availability has been recognized as a further important aspect of service quality.
Some applications, including multimedia applications, handle time-critical data – streams of data that are required to be processed or transferred from one process to another at a fixed rate. For example, a movie service might consist of a client program that is retrieving a film from a video server and presenting it on the user’s screen. For a satisfactory result the successive frames of video need to be displayed to the user within some specified time limits. In fact, the abbreviation QoS has effectively been commandeered to refer to the ability of systems to meet such deadlines. Its achievement depends upon the availability of the necessary computing and network resources at the appropriate times. This implies a requirement for the system to provide guaranteed computing and communication resources that are sufficient to enable applications to complete each task on time (for example, the task of displaying a frame of video).
INTRODUCTION TO SYSTEM MODELS
Systems that are intended for use in real-world environments should be designed to function correctly in the widest possible range of circumstances and in the face of many possible difficulties and threats.
Each type of model is intended to provide an abstract, simplified but consistent description of a relevant aspect of distributed system design: Physical models are the most explicit way in which to describe a system; they capture the hardware composition of a system in terms of the computers (and other devices, such as mobile phones) and their interconnecting networks.
Architectural models describe a system in terms of the computational and communication tasks performed by its computational elements; the computational elements being individual computers or aggregates of them supported by appropriate network interconnections.
Fundamental models take an abstract perspective in order to examine individual aspects of a distributed system. The fundamental models examine three important aspects of distributed systems: interaction models, which consider the structure and sequencing of the communication between the elements of the system; failure models, which consider the ways in which a system may fail to operate correctly; and security models, which consider how the system is protected against attempts to interfere with its correct operation or to steal its data.
Architectural models
The architecture of a system is its structure in terms of separately specified components and their interrelationships. The overall goal is to ensure that the structure will meet present and likely future demands on it. Major concerns are to make the system reliable, manageable, adaptable and cost-effective. The architectural design of a building has similar aspects – it determines not only its appearance but also its general structure and architectural style (gothic, neo-classical, modern) and provides a consistent frame of reference for the design.
Software layers
The concept of layering is a familiar one and is closely related to abstraction. In a layered approach, a complex system is partitioned into a number of layers, with a given layer making use of the services offered by the layer below. A given layer therefore offers a software abstraction, with higher layers being unaware of implementation details, or indeed of any other layers beneath them.
In terms of distributed systems, this equates to a vertical organization of services into service layers. A distributed service can be provided by one or more server processes, interacting with each other and with client processes in order to maintain a consistent system-wide view of the service’s resources. For example, a network time service is implemented on the Internet based on the Network Time Protocol (NTP) by server processes running on hosts throughout the Internet that supply the current time to any client that requests it and adjust their version of the current time as a result of interactions with each other. Given the complexity of distributed systems, it is often helpful to organize such services into layers.
Two important terms, platform and middleware, are defined as follows:
A platform for distributed systems and applications consists of the lowest-level hardware and software layers. These low-level layers provide services to the layers above them, which are implemented independently in each computer, bringing the system’s programming interface up to a level that facilitates communication and coordination between processes. Intel x86/Windows, Intel x86/Solaris, Intel x86/Mac OS X, Intel x86/Linux and ARM/Symbian are major examples.
- Remote Procedure Calls – Client programs call procedures in server programs
- Remote Method Invocation – Objects invoke methods of objects on distributed hosts
- Event-based Programming Model – Objects receive notice of events in other objects in which they have interest
Middleware
- Middleware: software that allows a level of programming beyond processes and message
passing
- Uses protocols based on messages between processes to provide its higher-level abstractions such as remote invocation and events
- Supports location transparency
- Usually uses an interface definition language (IDL) to define interfaces
Figure: software layers in a distributed system, from top to bottom:
- Applications, services
- Middleware
- Operating system
- Computer and network hardware
Interfaces in Programming Languages
Current programming languages allow programs to be developed as a set of modules that communicate with each other. Permitted interactions between modules are defined by interfaces.
A specified interface can be implemented by different modules without the need to modify other modules using the interface
Interfaces in Distributed Systems
When modules are in different processes or on different hosts there are limitations on the interactions that can occur. Only actions whose parameters are fully specified and understood can be used to request or provide services between modules in different processes.
A service interface allows a client to request and a server to provide particular services
A remote interface allows objects to be passed as arguments to and results from distributed modules
Object Interfaces
An interface defines the signatures of a set of methods, including arguments, argument types, return values and exceptions. Implementation details are not included in an interface. A class may implement an interface by specifying behavior for each method in the interface. Interfaces do not have constructors.
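A minimal sketch of these ideas (the `FileService` interface and its in-memory implementation are hypothetical examples, not from the notes):

```python
from abc import ABC, abstractmethod

class FileService(ABC):
    """An interface: method signatures only – no implementation details,
    and no constructor of its own that clients could call."""
    @abstractmethod
    def read(self, name: str) -> bytes: ...

    @abstractmethod
    def write(self, name: str, data: bytes) -> None: ...

class InMemoryFileService(FileService):
    """One of possibly many modules implementing the same interface;
    other modules can be swapped in without changing the clients."""
    def __init__(self):
        self._files = {}

    def read(self, name):
        return self._files[name]

    def write(self, name, data):
        self._files[name] = data

svc: FileService = InMemoryFileService()
svc.write("a.txt", b"hello")
print(svc.read("a.txt"))          # b'hello'
```

A client written against `FileService` would work unchanged with a disk-backed or remote implementation, which is exactly the substitutability the paragraph describes.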
System architectures
Client-server: This is the architecture that is most often cited when distributed systems are discussed. It is historically the most important and remains the most widely employed. Figure 2 illustrates the simple structure in which processes take on the roles of being clients or servers. In particular, client processes interact with individual server processes in potentially separate host computers in order to access the shared resources that they manage. Servers may in turn be clients of other servers, as the figure indicates. For example, a web server is often a client of a local file server that manages the files in which the web pages are stored. Web servers and most other Internet services are clients of the DNS service, which translates Internet domain names to network addresses.
Figure 2: Clients invoke individual servers.
Another web-related example concerns search engines, which enable users to look up summaries of information available on web pages at sites throughout the Internet. These summaries are made by programs called web crawlers, which run in the background at a search engine site using HTTP requests to access web servers throughout the Internet. Thus a search engine is both a server and a client: it responds to queries from browser clients and it runs web crawlers that act as clients of other web servers. In this example, the server tasks (responding to user queries) and the crawler tasks (making requests to other web servers) are entirely independent; there is little need to synchronize them and they may run concurrently. In fact, a typical search engine would normally include many concurrent threads of execution, some serving its clients and others running web crawlers. In Exercise 2, the reader is invited to consider the only synchronization issue that does arise for a concurrent search engine of the type outlined here.
Peer-to-peer: In this architecture all of the processes involved in a task or activity play similar roles, interacting cooperatively as peers without any distinction between client and server processes or the computers on which they run. In practical terms, all participating processes run the same program and offer the same set of interfaces to each other. While the client-server model offers a direct and relatively simple approach to the sharing of data and other resources, it scales poorly.
Figure: a distributed application based on peer processes – Peer 1, Peer 2 and Peer 3 each run the application and coordinate access to a set of sharable objects.
A number of placement strategies have evolved in response to this problem, but none of them addresses the fundamental issue – the need to distribute shared resources much more widely in order to share the computing and communication loads incurred in accessing them amongst a much larger number of computers and network links. The key insight that led to the development of peer-to-peer systems is that the network and computing resources owned by the users of a service could also be put to use to support that service. This has the useful consequence that the resources available to run the service grow with the number of users.
Models of systems share some fundamental properties. In particular, all of them are composed of processes that communicate with one another by sending messages over a computer network. All of the models share the design requirements of achieving the performance and reliability characteristics of processes and networks and ensuring the security of the resources in the system.
A fundamental model should also capture these characteristics, along with the failures and security risks that the processes and networks might exhibit. In general, such a fundamental model should contain only the essential ingredients that we need to consider in order to understand and reason about some aspects of a system’s behaviour. The purpose of such a model is:
- To make explicit all the relevant assumptions about the systems we are modelling.
- To make generalizations concerning what is possible or impossible, given those assumptions. The generalizations may take the form of general-purpose algorithms or desirable properties that are guaranteed. The guarantees are dependent on logical analysis and, where appropriate, mathematical proof.
The aspects of distributed systems that we wish to capture in our fundamental models are intended to help us to discuss and reason about:
Interaction: Computation occurs within processes; the processes interact by passing messages, resulting in communication (information flow) and coordination (synchronization and ordering of activities) between processes. In the analysis and design of distributed systems we are concerned especially with these interactions. The interaction model must reflect the facts that communication takes place with delays that are often of considerable duration, and that the accuracy with which independent processes can be coordinated is limited by these delays and by the difficulty of maintaining the same notion of time across all the computers in a distributed system.
Failure: The correct operation of a distributed system is threatened whenever a fault occurs in any of the computers on which it runs (including software faults) or in the network that connects them. Our model defines and classifies the faults. This provides a basis for the analysis of their potential effects and for the design of systems that are able to tolerate faults of each type while continuing to run correctly.
Security: The modular nature of distributed systems and their openness exposes them to attack by both external and internal agents. Our security model defines and classifies the forms that such
attacks may take, providing a basis for the analysis of threats to a system and for the design of systems that are able to resist them.
Interaction model
Fundamentally distributed systems are composed of many processes, interacting in complex ways. For example:
Multiple server processes may cooperate with one another to provide a service; the examples mentioned above were the Domain Name System, which partitions and replicates its data at servers throughout the Internet, and Sun’s Network Information Service, which keeps replicated copies of password files at several servers in a local area network. A set of peer processes may cooperate with one another to achieve a common goal: for example, a voice conferencing system that distributes streams of audio data in a similar manner, but with strict real-time constraints.
Most programmers will be familiar with the concept of an algorithm – a sequence of steps to be taken in order to perform a desired computation. Simple programs are controlled by algorithms in which the steps are strictly sequential. The behaviour of the program and the state of the program’s variables is determined by them. Such a program is executed as a single process. Distributed systems composed of multiple processes such as those outlined above are more complex. Their behaviour and state can be described by a distributed algorithm – a definition of the steps to be taken by each of the processes of which the system is composed, including the transmission of messages between them. Messages are transmitted between processes to transfer information between them and to coordinate their activity.
Two significant factors affecting interacting processes in a distributed system:
- Communication performance is often a limiting characteristic.
- It is impossible to maintain a single global notion of time.
Performance of communication channels • The communication channels in our model are realized in a variety of ways in distributed systems – for example, by an implementation of streams or by simple message passing over a computer network. Communication over a computer network has the following performance characteristics relating to latency, bandwidth and jitter:
The delay between the start of a message’s transmission from one process and the beginning of its receipt by another is referred to as latency. The latency includes:
- The time taken for the first of a string of bits transmitted through a network to reach its destination. For example, the latency for the transmission of a message through a satellite link is the time for a radio signal to travel to the satellite and back.
- The delay in accessing the network, which increases significantly when the network is heavily loaded. For example, for Ethernet transmission the sending station waits for the network to be free of traffic.
- The time taken by the operating system communication services at both the sending and the receiving processes, which varies according to the current load on the operating systems.
The bandwidth of a computer network is the total amount of information that can be transmitted over it in a given time. When a large number of communication channels are using the same network, they have to share the available bandwidth.
Jitter is the variation in the time taken to deliver a series of messages. Jitter is relevant to multimedia data. For example, if consecutive samples of audio data are played with differing time intervals, the sound will be badly distorted.
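One common way to quantify jitter is as the spread of inter-arrival gaps; the function below is an illustrative sketch, not a standard library facility:

```python
def jitter(delivery_times_ms):
    """Variation in the gaps between consecutive message deliveries
    (here measured simply as max gap minus min gap, in milliseconds)."""
    gaps = [b - a for a, b in zip(delivery_times_ms, delivery_times_ms[1:])]
    return max(gaps) - min(gaps)

# Audio samples meant to arrive every 20 ms; the uneven gaps (20, 25, 15)
# are what distorts playback unless the receiver buffers the stream.
print(jitter([0, 20, 45, 60]))    # 10
print(jitter([0, 20, 40, 60]))    # 0 for a perfectly regular stream
```

This is why multimedia receivers use playout buffers: they trade extra latency for a delivery schedule with near-zero jitter.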
Computer clocks and timing events • Each computer in a distributed system has its own internal clock, which can be used by local processes to obtain the value of the current time. Therefore two processes running on different computers can each associate timestamps with their events. However, even if the two processes read their clocks at the same time, their local clocks may supply different time values. This is because computer clocks drift from perfect time and, more importantly, their drift rates differ from one another. The term clock drift rate refers to the rate at which a computer clock deviates from a perfect reference clock. Even if the clocks on all the computers in a distributed system are set to the same time initially, their clocks will eventually vary quite significantly unless corrections are applied.
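The effect of differing drift rates can be illustrated with a small calculation (the drift rates chosen here are arbitrary examples in a plausible range for quartz clocks):

```python
def clock_offset(drift_rate, elapsed_seconds):
    """Offset from a perfect reference clock after the given elapsed time,
    assuming a constant drift rate (seconds of error per second)."""
    return drift_rate * elapsed_seconds

# Two clocks set to the same time, then drifting at +1e-6 and -1e-6:
day = 86_400
divergence = clock_offset(1e-6, day) - clock_offset(-1e-6, day)
print(divergence)   # ~0.173 s apart after one day without correction
```

Even these small rates produce a divergence that matters for ordering events, which motivates the synchronization algorithms and logical clocks covered in Unit II.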
Two variants of the interaction model • In a distributed system it is hard to set limits on the time that can be taken for process execution, message delivery or clock drift. Two opposing extreme positions provide a pair of simple models – the first has a strong assumption of time and the second makes no assumptions about time:
Synchronous distributed systems: Hadzilacos and Toueg define a synchronous distributed system to be one in which the following bounds are defined:
- The time to execute each step of a process has known lower and upper bounds.
- Each message transmitted over a channel is received within a known bounded time.
- Each process has a local clock whose drift rate from real time has a known bound.
Asynchronous distributed systems: Many distributed systems, such as the Internet, are very useful without being able to qualify as synchronous systems. Therefore we need an alternative model. An asynchronous distributed system is one in which there are no bounds on:
- Process execution speeds – for example, one process step may take only a picosecond and another a century; all that can be said is that each step may take an arbitrarily long time.
- Message transmission delays – for example, one message from process A to process B may be delivered in negligible time and another may take several years. In other words, a message may be received after an arbitrarily long time.
- Clock drift rates – again, the drift rate of a clock is arbitrary.
Figure: real-time ordering of events – processes 1, 2, 3 and 4 exchange messages m1 and m2 through send and receive events.
Event ordering • In many cases, we are interested in knowing whether an event (sending or receiving a message) at one process occurred before, after or concurrently with another event at another process. The execution of a system can be described in terms of events and their ordering despite the lack of accurate clocks. For example, consider the following set of exchanges between a group of email users, X, Y, Z and A, on a mailing list:
- User X sends a message with the subject Meeting.
- User Y reads X’s message and replies with a message with the subject Re: Meeting.
- User Z reads both X’s message and Y’s reply and sends another reply, also with the subject Re: Meeting.
In real time, X’s message is sent first, and Y reads it and replies; Z then reads both X’s message and Y’s reply and sends another reply, which references both X’s and Y’s messages. But due to the independent delays in message delivery, the messages may be delivered as shown in the following figure and some users may view these two messages in the wrong order.
Figure: X sends m1; Y receives it and sends m2; Z receives both and sends m3; user A receives m3, m1 and m2 in that order at times t1, t2 and t3, seeing the replies before the original message.
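The mailing-list exchange can be ordered correctly without synchronized clocks by attaching logical timestamps to messages. The sketch below applies Lamport's logical-clock rules (tick before sending; on receipt, take the maximum of local and message timestamps, then tick) to the X, Y, Z example; the function names are illustrative:

```python
def lamport_send(local):
    """Tick the local clock and stamp the outgoing message with it."""
    local += 1
    return local, local            # (new clock value, message timestamp)

def lamport_receive(local, msg_ts):
    """On receipt, merge the message timestamp into the local clock, then tick."""
    return max(local, msg_ts) + 1

# X sends Meeting; Y receives it and replies; Z receives both and replies again.
x, y, z = 0, 0, 0
x, t1 = lamport_send(x)            # X's message m1, timestamp 1
y = lamport_receive(y, t1)
y, t2 = lamport_send(y)            # Y's reply m2, timestamp 3
z = lamport_receive(z, t1)
z = lamport_receive(z, t2)
z, t3 = lamport_send(z)            # Z's reply m3, timestamp 5
print(t1, t2, t3)                  # 1 3 5 -- consistent with the causal order
```

A mail reader sorting by these timestamps would present m1 before m2 before m3 to every user, whatever order the messages were delivered in. Logical clocks are treated in detail in Unit II.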
Failure model
In a distributed system both processes and communication channels may fail – that is, they may depart from what is considered to be correct or desirable behaviour. The failure model defines the ways in which failure may occur in order to provide an understanding of the effects of failures. Hadzilacos and Toueg provide a taxonomy that distinguishes between the failures of processes and communication channels. These are presented under the headings omission failures, arbitrary failures and timing failures.
Omission failures • The faults classified as omission failures refer to cases when a process or communication channel fails to perform actions that it is supposed to do.
Process omission failures: The chief omission failure of a process is to crash. When we say that a process has crashed we mean that it has halted and will not execute any further steps of its program ever. The design of services that can survive in the presence of faults can be simplified if it can be assumed that the services on which they depend crash cleanly – that is, their processes either function correctly or else stop. Other processes may be able to detect such a
crash by the fact that the process repeatedly fails to respond to invocation messages. However, this method of crash detection relies on the use of timeouts – that is, a method in which one process allows a fixed period of time for something to occur. In an asynchronous system a timeout can indicate only that a process is not responding – it may have crashed or may be slow, or the messages may not have arrived.
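The timeout mechanism can be sketched as follows; `suspect_crashed` and the `ping` callable are hypothetical names, and the key point is in the comment: a timeout yields suspicion, not proof:

```python
import time

def suspect_crashed(ping, timeout_s=0.1):
    """Timeout-based detection: allow a fixed period for a reply.
    In an asynchronous system a missing reply only *suspects* a crash --
    the process may merely be slow, or the message may be delayed."""
    start = time.monotonic()
    reply = ping(timeout_s)        # stand-in for an invocation with a deadline
    if reply is None or time.monotonic() - start > timeout_s:
        return True                # suspected, not proven, to have crashed
    return False

print(suspect_crashed(lambda t: "pong"))   # False: replied within the timeout
print(suspect_crashed(lambda t: None))     # True: no reply, so suspect a crash
```

This is why asynchronous-system algorithms must be written to remain correct when a suspected process later turns out to be alive.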
Communication omission failures: Consider the communication primitives send and receive. A process p performs a send by inserting the message m in its outgoing message buffer. The communication channel transports m to q’s incoming message buffer. Process q performs a receive by taking m from its incoming message buffer and delivering it. The outgoing and incoming message buffers are typically provided by the operating system.
Figure: process p performs send by placing message m in its outgoing message buffer; the communication channel transports m; process q performs receive by taking m from its incoming message buffer.
Arbitrary failures • The term arbitrary or Byzantine failure is used to describe the worst possible failure semantics, in which any type of error may occur. For example, a process may set wrong values in its data items, or it may return a wrong value in response to an invocation. An arbitrary failure of a process is one in which it arbitrarily omits intended processing steps or
takes unintended processing steps. Arbitrary failures in processes cannot be detected by seeing whether the process responds to invocations, because it might arbitrarily omit to reply.
Communication channels can suffer from arbitrary failures; for example, message contents may be corrupted, nonexistent messages may be delivered or real messages may be delivered more than once. Arbitrary failures of communication channels are rare because the communication software is able to recognize them and reject the faulty messages. For example, checksums are used to detect corrupted messages, and message sequence numbers can be used to detect nonexistent and duplicated messages.
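Both defences mentioned here – checksums against corruption and sequence numbers against nonexistent or duplicated messages – can be sketched together; the frame layout and function names are illustrative, with `zlib.crc32` as the checksum:

```python
import zlib

def make_frame(seq, payload: bytes):
    """Attach a sequence number and a CRC-32 checksum to a message."""
    return (seq, payload, zlib.crc32(payload))

def accept(frame, expected_seq):
    """Reject corrupted, duplicated or nonexistent messages."""
    seq, payload, checksum = frame
    if zlib.crc32(payload) != checksum:
        return False               # corrupted contents
    if seq != expected_seq:
        return False               # duplicate, replayed or fabricated message
    return True

f = make_frame(1, b"hello")
print(accept(f, expected_seq=1))                       # True: clean, in order
print(accept((1, b"hellp", f[2]), expected_seq=1))     # False: corrupted payload
print(accept(f, expected_seq=2))                       # False: duplicate/replay
```

This is how communication software converts most arbitrary channel failures into the simpler omission failures that higher layers are designed to handle.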
Timing failures • Timing failures are applicable in synchronous distributed systems where time limits are set on process execution time, message delivery time and clock drift rate. Timing failures are listed in the following figure. Any one of these failures may result in responses being unavailable to clients within a specified time interval. In an asynchronous distributed system, an overloaded server may respond too slowly, but we cannot say that it has a timing failure since no guarantee has been offered. Real-time operating systems are designed with a view to providing timing guarantees, but they are more complex to design and may require redundant hardware. Most general-purpose operating systems such as UNIX do not have to meet real-time constraints.
Masking failures • Each component in a distributed system is generally constructed from a collection of other components. It is possible to construct reliable services from components that exhibit failures. For example, multiple servers that hold replicas of data can continue to provide a service when one of them crashes. Knowledge of the failure characteristics of a component can enable a new service to be designed to mask the failure of the components on which it depends.