Chapter 15 EMERGING TRENDS

By changes to the environment, we mean the changes that have occurred to the technologies underlying computer hardware, system software, networking, and peripheral devices. Let us examine how the environment has changed of late. This can indicate the challenges being posed to software development principles and, in turn, give us some insight into how software engineering techniques are evolving. The important changes to the environment that have occurred in the last two decades include the following:

- The prices of computers have dropped drastically in this period. At the same time, computers have become more powerful: they can now perform computations much faster and store much larger volumes of data. The sizes of computers have shrunk, and laptops and palmtops have become popular.
- The Internet has become extremely popular. The Internet connects millions of computers world-wide and makes enormous amounts of information available to users.
- Networking techniques have made rapid progress. The speed of data transfer has increased unbelievably and, at the same time, the cost of networking computers has dropped dramatically. As an example of currently supported data transfer speeds, desktops now come with a 1 Gbps network port by default.
- Mobile phones have dramatically captured the imagination of all. The level of acceptance that mobile phones have achieved in less than a decade appears like a chapter straight out of a science fiction book. Mobile phones are rapidly transforming themselves into handheld computing devices. In addition to high-speed fixed-line connections, GPRS and wireless LANs have become commonplace.
- Over the last decade, cloud computing has become popular. In cloud computing, applications are hosted on a cloud operating on a data centre. Cloud computing is becoming more and more popular, as it helps a user run sophisticated applications without much upfront investment and also frees the user from buying and maintaining sophisticated hardware and software.

In the face of these developments, software developers are facing several challenges.

Challenges faced by software developers

Following are some of the challenges being faced by software developers:

- To cope with fierce competition, business houses are rapidly changing their business processes. This requires rapid changes to the software that supports the business process activities. Therefore, there is a pressing demand to shorten the software delivery time. However, software still takes unacceptably long to develop and is turning out to be a bottleneck in implementing rapid business process changes.
- To reduce software delivery times, software is being developed by teams working from globally distributed locations. How software can be effectively developed using globally distributed development teams is not yet clear and poses many challenges. At the same time, radical changes to software development principles are being put forward to shorten the development time.
- Business houses are getting tired of astronomical software costs, late deliveries, and poor-quality products. On the other hand, hardware costs are dropping, while at the same time hardware is becoming more powerful, sophisticated, and reliable. The hardware and software cost differentials are becoming more and more glaring. The wisdom of

accruing from adopting this concept. Let us deliberate on the important advantages of the client-server paradigm.

Advantages of client-server software

There are many reasons for the popularity of client-server software. A few important reasons are as follows:

- Concurrency: A client-server software divides the computing work among many different client and server components that may reside on different machines. Client-server solutions are therefore inherently concurrent and, as a result, offer the advantage of faster processing.
- Loose coupling: Client and server components are inherently loosely coupled, making them easy to understand and develop.
- Flexibility: A client-server software is flexible in the sense that clients and servers can be attached and removed as and when required. Also, clients can access the servers from anywhere.
- Cost-effectiveness: The client-server paradigm usually leads to cost-effective solutions. Clients usually run on cheap desktop computers, whereas servers may run on sophisticated and expensive computers. Even to use a sophisticated software product, one needs to own only a cheap client machine to invoke the server.
- Heterogeneous hardware: In a client-server solution, it is easy to have specialised servers that can efficiently solve specific problems. It is possible to efficiently integrate heterogeneous computing platforms to support the requirements of different types of server software.
- Fault-tolerance: Client-server solutions are usually fault-tolerant. It is possible to have many servers providing the same service. If one server becomes unavailable, client requests can be directed to any other working server.
- Mobile computing: Mobile computing implicitly requires use of the client-server technique. Cell phones are, of late, evolving into handheld computing and communicating devices, provided with a small processing power, a keyboard, a small memory, and an LCD display. The handhelds have limited processing power and storage capacity, and therefore can act only as clients. To perform any non-trivial task, a handheld computer can possibly only support the user interface necessary to place requests on some remote server.
- Application service provisioning: There are many application software products that are extremely expensive to own. A client-server based approach can be used to make such software products affordable to use. In this approach, an application service provider (ASP) owns the software, and users pay the ASP based on charges per unit time of usage.
- Component-based development: The client-server paradigm fits well with component-based software development. Component-based development holds out the promise of achieving substantial reductions in cost and delivery time while at the same time achieving increased product reliability. Component-based development is similar to the way hardware equipment

required.

- Compatibility: Clients and servers may not be compatible with each other. Since the client and server components may be manufactured by different vendors, they may not be compatible with respect to data types, languages, number representation, etc.
- Inconsistency: Replication of servers can potentially create problems: whenever data is replicated, there is a danger of the data becoming inconsistent.
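The fault-tolerance advantage discussed above, where client requests are redirected from a failed server to a working replica, can be sketched in a few lines. The class and function names below are purely illustrative, not any real library's API:

```python
# Sketch of client-side failover across replicated servers.
# EchoServer and call_with_failover are invented names for illustration.

class EchoServer:
    def __init__(self, name, up=True):
        self.name = name
        self.up = up

    def handle(self, request):
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name}: {request}"

def call_with_failover(servers, request):
    """Try each replica in turn; return the first successful response."""
    for server in servers:
        try:
            return server.handle(request)
        except ConnectionError:
            continue  # redirect the request to the next working replica
    raise RuntimeError("no server available")

replicas = [EchoServer("s1", up=False), EchoServer("s2")]
print(call_with_failover(replicas, "ping"))  # s2 answers because s1 is down
```

Note that replicating servers like this is exactly what creates the inconsistency risk mentioned above: each replica would need to see the same data.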

15 CLIENT-SERVER ARCHITECTURES

The simplest way to connect clients and servers is by using a two-tier

architecture shown in Figure 15(a). In a two-tier architecture, any client can get service from any server by sending a request over the network.

Limitations of two-tier client-server architecture

A two-tier architecture for client-server applications, though an intuitively obvious solution, turns out not to be practically usable. The main problem is that client and server components are usually manufactured by different vendors, who may adopt their own interfacing and implementation solutions. As a result, the different components may not interface with (talk to) each other easily.

Three-tier client-server architecture

The three-tier architecture overcomes the main limitations of the two-tier architecture. In the three-tier architecture, a middleware is added between the client and the server components, as shown in Figure 15(b). The middleware keeps track of all servers. It also translates client requests into a form the servers can understand. For example, the client can deliver its request to the middleware and disengage, because the middleware will access the data and return the answer to the client.

Figure 15: Two-tier and three-tier client-server architectures.
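The two roles the middleware plays, locating the required server and translating the request into a server-understandable form, can be sketched as below. All names are illustrative:

```python
# Minimal sketch of the three-tier idea: a middleware component keeps
# track of servers and translates client requests for them.
# Middleware and UpperCaseServer are invented names for illustration.

class Middleware:
    def __init__(self):
        self.servers = {}  # service name -> server object

    def register(self, name, server):
        self.servers[name] = server

    def request(self, text_request):
        # Translate a free-form client request ("service:argument")
        # into the form the server understands (a method call).
        service, _, argument = text_request.partition(":")
        server = self.servers[service]   # locate the required server
        return server.handle(argument)

class UpperCaseServer:
    def handle(self, argument):
        return argument.upper()

mw = Middleware()
mw.register("upper", UpperCaseServer())
print(mw.request("upper:hello"))  # HELLO
```

The client only ever talks to `mw`; which machine the server runs on, and how its interface looks, is hidden behind the middle tier.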

Functions of middleware

The important activities of the middleware include the following:

- The middleware keeps track of the addresses of servers. Based on a client request, it can therefore easily locate the required server.
- It can translate between client and server formats of data and vice versa.

Two popular middleware standards are:

- Common Object Request Broker Architecture (CORBA)
- COM/DCOM

CORBA is being promoted by the Object Management Group (OMG), a consortium of a large number of computer companies such as IBM, HP, Digital, etc. However, OMG is not a standards body; it does not have any authority to make or enforce standards. It simply tries to popularise good solutions, with the hope that a solution that becomes highly popular would ultimately become a standard. COM/DCOM is being promoted mainly by Microsoft. In the following subsections, we discuss these two important middleware standards.

15 CORBA

Common Object Request Broker Architecture (CORBA) is a specification of a standard architecture for middleware. Using a CORBA implementation, a client can transparently invoke a service of a server object, which can be on the same machine or across a network. CORBA automates many common network programming tasks, such as object registration,

- Naming Service: This allows clients to find objects based on names. The naming service is also called the white page service.
- Trading Service: This allows clients to find objects based on their properties. The trading service is also called the yellow page service. Using the trading service, a specific service can be searched for. This is akin to searching for a service, such as an automobile repair shop, in a yellow page directory.

There can be other object services as well, such as security services, life-cycle services, and so on.
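The distinction between white page (by name) and yellow page (by properties) lookup can be sketched as follows. The directory entries and function names are made up for illustration and are not the CORBA API:

```python
# Toy directory illustrating naming (white page) vs trading (yellow
# page) lookup. Entries are invented example data.

directory = [
    {"name": "repair1", "kind": "automobile repair", "city": "Pune"},
    {"name": "repair2", "kind": "automobile repair", "city": "Delhi"},
    {"name": "bank1",   "kind": "bank",              "city": "Pune"},
]

def naming_lookup(name):
    """White page service: find an object by its exact name."""
    for obj in directory:
        if obj["name"] == name:
            return obj
    return None

def trading_lookup(**properties):
    """Yellow page service: find all objects matching given properties."""
    return [obj for obj in directory
            if all(obj.get(k) == v for k, v in properties.items())]

print(naming_lookup("bank1")["kind"])                       # bank
print([o["name"] for o in trading_lookup(kind="automobile repair")])
```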

Common facilities

Like object service interfaces, these interfaces are also horizontally oriented, but unlike object services they are oriented towards end-user applications. An example of such a facility is the distributed document component facility (DDCF), a compound document common facility based on OpenDoc. DDCF allows for the presentation and interchange of objects based on a document model, for example, facilitating the linking of a spreadsheet object into a report document.

Application interfaces

These are interfaces developed specifically for a given application.

15.3 CORBA ORB Architecture

The representation of Figure 15 is simplified, since it does not show the various components of the ORB. Let us now discuss the important components of the CORBA architecture and how they operate. The ORB must support a large number of functions in order to operate consistently and effectively. In the carefully thought-out design of the ORB, much of this functionality is implemented as pluggable modules, to simplify the design and implementation of the ORB and to make it efficient.

Figure 15: CORBA ORB architecture.

ORB

CORBA’s most fundamental component is the object request broker (ORB), whose task is to facilitate communication between objects. The main responsibility of the ORB is to transmit a client request to the server and get the response back to the client. The ORB abstracts out the complexities of service invocation across a network and makes service invocation by the client seamless and easy. The ORB simplifies distributed programming by decoupling clients from the details of service invocations; this makes client requests appear to be local procedure calls. When a client invokes an operation, the ORB is responsible for finding the object implementation, transparently activating it if necessary, delivering the request to the object, and returning any response to the caller. The ORB allows objects to hide their implementation details from clients. The aspects of a program that are hidden (abstracted out) from the client include the programming language, operating system, host hardware, and object location.

Stubs and skeletons

Using a CORBA implementation, clients can communicate with the server in two ways: by using stubs or by using the dynamic invocation interface (DII). Stubs support static service invocation, where a client requests a specific service using the required parameters. In dynamic service invocation, the client need not know beforehand about the required parameters; these are determined at run time. Though dynamic service invocation is more flexible, static service invocation is more efficient than dynamic service invocation. Service invocation by a client through a stub is suitable when the interface

—the client part and the server part. Next, the exact client and server interfaces are determined. To specify an interface, the interface definition language (IDL) is used. IDL is very similar to C++ and Java, except that it has no executable statements; using IDL, only the data interface between clients and servers can be defined. It supports inheritance, so that interfaces can be reused in the same or across different applications, and it also supports exceptions. After the client-server interface is specified in IDL, an IDL compiler is used to compile the IDL specification. Depending on whether the target language in which the application is to be developed is Java, C++, C, etc., different IDL compilers such as IDL2Java, IDL2C++, IDL2C, etc. can be used as required. When the IDL specification is compiled, it generates the skeletal code for the stub and the skeleton. The stub and skeleton contain interface definitions, and only the method body needs to be written by the programmers developing the components.
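The division of labour between the generated stub (client side) and skeleton (server side) can be sketched as below. In real CORBA these classes are generated by the IDL compiler; the hand-written classes here only illustrate the idea:

```python
# Sketch of the stub/skeleton idea: the stub turns a local-looking call
# into a request, the skeleton dispatches it to the method body that
# the component developer wrote. All class names are illustrative.

class AdderSkeleton:
    """Server side: dispatches a decoded request to the implementation."""
    def __init__(self, implementation):
        self.impl = implementation

    def dispatch(self, method, args):
        return getattr(self.impl, method)(*args)

class AdderStub:
    """Client side: presents the service as an ordinary local object."""
    def __init__(self, skeleton):
        self.skeleton = skeleton  # stands in for the network and the ORB

    def add(self, a, b):
        return self.skeleton.dispatch("add", (a, b))

class AdderImpl:
    # Only this method body had to be written by the programmer;
    # stub and skeleton would be generated from the IDL interface.
    def add(self, a, b):
        return a + b

stub = AdderStub(AdderSkeleton(AdderImpl()))
print(stub.add(2, 3))  # 5
```

To the client, `stub.add(2, 3)` looks like a local procedure call, which is exactly the transparency the ORB aims for.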

Inter-ORB communication

Initially, CORBA could only integrate components running on the same LAN. However, in certain applications it becomes necessary to run the different components of an application in different networks. This shortcoming of CORBA 1 was removed by CORBA 2, which defines a general interoperability standard. The general inter-ORB protocol (GIOP) is an abstract meta-protocol. It specifies a standard transfer syntax and a set of message formats for object requests. GIOP is designed to work over many different transport protocols; in a distributed implementation, every ORB must support GIOP mapped onto its local transport. GIOP can be used over almost any connection-oriented byte stream transport.

The most popular implementation of GIOP runs over TCP/IP and is known as the internet inter-ORB protocol (IIOP).
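The idea behind a standard transfer syntax is that a request (operation name plus arguments) is encoded into an agreed byte layout that any ORB can decode. The toy format below illustrates the idea only; it bears no relation to the actual GIOP message formats:

```python
import struct

# Toy transfer syntax: a fixed byte layout both sides agree on.
# Layout: 2-byte big-endian name length, name bytes, 4-byte signed int.
# This is an invented format for illustration, not real GIOP.

def encode_request(operation, value):
    op = operation.encode("utf-8")
    return struct.pack(f">H{len(op)}si", len(op), op, value)

def decode_request(message):
    (length,) = struct.unpack_from(">H", message, 0)
    op = message[2:2 + length].decode("utf-8")
    (value,) = struct.unpack_from(">i", message, 2 + length)
    return op, value

wire = encode_request("increment", 41)
print(decode_request(wire))  # ('increment', 41)
```

Because the layout (field order, sizes, byte order) is fixed by the protocol rather than by either endpoint, heterogeneous ORBs can interoperate over any byte-stream transport carrying these messages.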

15 COM/DCOM

15.4 COM

The main idea in the component object model (COM) is that different vendors can sell binary components, and an application can be developed by integrating off-the-shelf components. COM can be used to develop component applications on a single computer. The concepts used are very similar to CORBA. The components are known as binary objects. These can be generated using languages such as Visual Basic, Delphi, Visual C++, etc.; these languages have the necessary features to create COM components. COM components are binary objects, and they exist in the form of either a .exe or a .dll (dynamic link library). The .exe components have a separate existence, but .dll COM components are in-process servers that get linked into a process. For example, ActiveX is a .dll-type server, which gets loaded on the client side.
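The notion of an in-process server, a binary component loaded into the client's own process rather than running separately, can be illustrated with plain shared-library loading. This is only a rough analogy using Python's standard ctypes module, not COM itself, and the library name resolution is platform-dependent:

```python
import ctypes
import ctypes.util

# An in-process component, like a .dll COM server, is a binary object
# loaded into the client's own process. As a rough analogy (plain
# shared-library loading, not COM), we load the C runtime library into
# this process and invoke one of its functions directly.

libc_path = ctypes.util.find_library("c")  # e.g. "libc.so.6" on Linux
libc = ctypes.CDLL(libc_path)

libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
print(libc.abs(-5))  # 5
```

The binary component keeps its own compiled implementation; the client process only sees the callable interface, much as a COM client sees only the component's interface.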

15.4 DCOM

Distributed component object model (DCOM) is the extension of the component object model (COM). The restriction that clients and servers must reside on the same computer is relaxed here. So, DCOM can operate on

There are several similarities between services and components, which are as follows:

- Reuse: Both a component and a service are reused across multiple applications.
- Generic: Components and services are usually generic enough to be useful to a wide range of applications.
- Composable: Both services and components are integrated together to develop an application.
- Encapsulated: Both components and services hide their internals and can be accessed only through their interfaces.
- Independent development and versioning: Both components and services are developed independently by different vendors and also continue to evolve independently.
- Loose coupling: Applications developed using either the component paradigm or the SOA paradigm have loose coupling inherent to them.

However, there are several dissimilarities between the component and SOA paradigms, which are as follows:

- The granularity (size) of services in the SOA paradigm is often 100 to 1,000 times larger than that of the components of the component paradigm. Services may be developed and hosted on separate machines.
- Components in the component paradigm are normally procured for use as per requirement (ownership). Services, on the other hand, are usually availed of in a pay-per-use arrangement.
- Instead of services embedding calls to each other in their source code, services use well-defined protocols that describe how services can talk to each other. This architecture enables a business process expert to tailor an application as per requirement. To meet a new business requirement, the business process expert can link and sequence services in a process known as orchestration.

SOA targets fairly large chunks of functionality to be strung together to form new services; that is, large services can be developed by integrating existing software services. The larger the chunks, the fewer the interfacings required, which leads to faster development. However, very large chunks may prove difficult to reuse.
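The orchestration idea, linking and sequencing existing services into a new composite service, can be sketched as below. The individual services and the `orchestrate` helper are invented for illustration:

```python
# Sketch of orchestration: existing services are strung together into
# a composite service. All service names here are illustrative.

def validate_order(order):
    return {**order, "valid": order["quantity"] > 0}

def price_order(order):
    return {**order, "total": order["quantity"] * order["unit_price"]}

def confirm_order(order):
    return {**order, "status": "confirmed" if order["valid"] else "rejected"}

def orchestrate(*services):
    """Link and sequence services into one composite service."""
    def composite(payload):
        for service in services:
            payload = service(payload)
        return payload
    return composite

place_order = orchestrate(validate_order, price_order, confirm_order)
result = place_order({"quantity": 3, "unit_price": 10})
print(result["total"], result["status"])  # 30 confirmed
```

Note that `orchestrate` never looks inside any service; it only sequences them, which is why a business process expert, rather than a programmer, can rearrange the flow to meet a new requirement.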

15.5 Service-oriented Architecture (SOA): Nitty Gritty

The SOA paradigm utilises services that may be hosted on different computers, and the different computers and services may be under the control of different owners. To facilitate application development, SOA must provide a means to offer, discover, interact with, and use the capabilities of the services to achieve the desired results. SOA involves statically and dynamically plugging in services to build software. Important SOA players include BEA AquaLogic, Oracle Web Services Manager, HP Systinet Registry, MS .NET, IBM WebSphere, Iona Artix, and the Java Composite Application Suite. Web services can be used to implement a service-oriented architecture: they make functional building blocks accessible over standard Internet protocols, independent of platforms and programming languages. One of the central assumptions of SOA is that once a market place for

In this context, SaaS makes a case for paying per usage of software rather than owning software for use. SaaS is a software delivery model in which customers pay for software per unit time of usage, with the price reflecting marketplace supply and demand. As we can see, SaaS shifts "ownership" of the software from the customer to a service provider. The software owner provides maintenance, daily technical operation, and support for the software, and services are provided to clients on an amount-of-usage basis. The service provider is a vendor who hosts the software and lets users execute it on demand, charging per usage unit. SaaS also shifts the responsibility for hardware and software management from the customer to the provider. The cost of providing software services reduces as more and more customers subscribe to the service. Elements of outsourcing and application service provisioning are implicit in the SaaS model; it makes the software accessible to a large number of customers who cannot afford to purchase the software outright, thereby targeting the "long tail" of small customers. If we compare SaaS to SOA, we can observe that SaaS is a software delivery model, whereas SOA is a software construction model. Despite significant differences, SOA and SaaS espouse closely related architectural models and complement each other: SaaS helps to offer components for SOA to use, and SOA helps to quickly realise SaaS. Also, the main enablers of both SaaS and SOA are the Internet and web services technologies.
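The pay-per-unit-time billing at the heart of SaaS amounts to simple usage metering. The rate and the usage figures below are hypothetical, purely to make the arithmetic concrete:

```python
# Sketch of SaaS pay-per-use billing: the provider meters usage and
# charges per unit time. Rate and usage data are invented examples.

RATE_PER_HOUR = 0.50  # hypothetical price per hour of usage

usage_log = {"alice": 10, "bob": 3}  # hours used this billing period

def monthly_charge(customer):
    return usage_log.get(customer, 0) * RATE_PER_HOUR

print(monthly_charge("alice"))  # 5.0
```

A customer who used the software for 10 hours owes 10 × 0.50 = 5.00, and a customer who never used it owes nothing, which is what makes the model attractive to the "long tail" of small customers.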
