How to understand remote object calls in web applications



To say "remote object", you must first say "remote call", that is, RPC. One of the more famous RPC frameworks is the recently popular gRPC, that is, Google's open source RPC. In addition, there are Facebook open source Thrift and so on. There are also many RPC frameworks in our factory, which are dizzying. Java also supports RMI (Remote Method Invoke: remote method request) functionality in JDK, which can also be thought of as a RPC, but this is actually more like the "remote object call" we're going to talk about now.

In most RPC systems, a call is essentially a request, sent over the network, to run a function in another process (or on another computer). Since it is a function call, we naturally pass in parameters and expect a return value. We usually only supply a function name plus parameters; the RPC framework finds a remote process, executes the corresponding function there, and passes in the arguments. The process that executes the function is considered stateless: all output depends only on the input parameters, unless part of the state is recorded in a database (a persistence device). The computation (the algorithm) and the data it works on are therefore separated; the data either comes from the parameters or from the database. Neither the requested function nor the process that hosts it guarantees any state maintenance.
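
As a minimal illustration, and not tied to any particular RPC framework, a stateless RPC contract in Java might look like the sketch below. The PriceService interface and its method are hypothetical; the point is that everything the computation needs travels in the parameters.

// Hypothetical stateless RPC contract: the result depends only on the inputs.
public interface PriceService {
    // Everything the computation needs is passed in; the hosting process
    // keeps no per-client state between calls.
    double quote(String productId, int quantity);
}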

"remote object invocation", on the other hand, is precisely in the "state" link, unlike RPC-- it is a framework that guarantees a certain state. When we make a remote object call, we need to first "find" a remote object, and then initiate a "method" (member function) call. There are two obvious differences between this and RPC:

We need to locate the object in some way, not merely by a function name. An object is a more complex remote concept: several objects of the same class (class), with identical or different state, may all exist on remote machines, so a fixed routing key (such as a class name) is not enough to find a particular one. How remote objects are routed therefore becomes a major point of difference between remote object invocation frameworks.

We do not need to ship all the data to the remote side as parameters on every request, because the same remote object can hold a large amount of in-process state. As long as we find the correct remote object, we get the state produced by previous operations. Remote objects usually live in a process's memory, so they can access their own state data very quickly, which is very useful for latency-sensitive programs.

Therefore, the most important feature of remote object calls is that data and computation are merged together, which not only gives the convenience of object-oriented programming but also greatly reduces the latency that remote calls otherwise pay for pulling data.
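
The shape of such a call can be seen in standard Java RMI. The following is only a minimal sketch: the Counter interface, the "counter/user-42" name, and the host are assumptions, but the pattern of "locate a specific object, then call methods whose results depend on that object's own state" is the point.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Remote interface shared by client and server.
public interface Counter extends Remote {
    int increment() throws RemoteException; // the result depends on this object's state
}

// Client side (inside a method): locate the specific object first, then call it.
Registry registry = LocateRegistry.getRegistry("remote-host", 1099);
Counter counter = (Counter) registry.lookup("counter/user-42");
int value = counter.increment(); // the remote object remembers its previous count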

Advantages of remote objects: relieving DB pressure, ease of use

In a traditional "request-response" distributed server, the most common data architecture is the four-tier access-logic-cache-database structure. To spread the computational load of the "logic" modules across different processes, we tend to make them "stateless", so that any logic-module process can be started or stopped at will without fear of losing user data. But while this takes the pressure off the logic modules, the "cache-database" pair that stores the state comes under heavy pressure, because every data operation has to read the data from them and then write the result back (if the operation modifies data).
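
A stateless logic module in this architecture ends up looking roughly like the sketch below (the ScoreLogic class, addScore method, and in-memory map standing in for the cache/database tier are all assumptions): every request round-trips to the data tier because the process itself remembers nothing.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stateless logic module (sketch): each request reads state from the data
// tier, computes, and writes the result back; nothing stays in this process.
class ScoreLogic {
    // Stands in for the cache/database tier; in reality this is a remote store.
    private final Map<String, Integer> dataTier = new ConcurrentHashMap<>();

    int addScore(String userId, int delta) {
        int score = dataTier.getOrDefault(userId, 0); // read state from cache/DB
        score += delta;                               // compute
        dataTier.put(userId, score);                  // write the result back
        return score;
    }
}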

1. Java EJB

A client program that wants to access an EJB object generally uses an API called JNDI to connect to it. JNDI stands for Java Naming and Directory Interface, essentially the familiar naming and directory service interface: Java unifies access to various directory servers through one set of API specifications. Every J2EE container must provide a JNDI service, and client programs reach the EJB objects inside the container through the JNDI it provides. Using JNDI basically means passing in a string and getting an object back from the API. In the J2EE context, that object is the Home interface object of the EJB object (a local image of the remote EJB object, also called the stub object). The code looks like this:

Context ctx = new InitialContext(env);
Object ejbHome = ctx.lookup("java:comp/env/ejb/HelloBean");
HelloHome empHome = (HelloHome) PortableRemoteObject.narrow(ejbHome, HelloHome.class);

The string passed to the lookup() function can be anything you define yourself, as long as the corresponding mapping is registered in the EJB container. From this code we can see that if EJB wants to provide disaster recovery, load balancing, and similar features, it can do so behind the ctx.lookup() interface. In addition, the Home interface (stub code) of the remote object needs to be deployed on the client side in advance; in the example above this is the HelloHome.class class. The Home interface class of an EJB object is generated automatically by the EJB tooling from the source EJB class definition. Compared with CORBA, Thrift, and similar technologies, EJB can generate stub code directly from .java source instead of an IDL definition, which is considerably easier.

The EJB specification defines three kinds of remote objects: the stateless session Bean, the stateful session Bean, and the message-driven Bean. These names describe how the EJB container manages the life cycle of the EJB objects. The life cycles of stateless session Beans and message-driven Beans are similar: the container may create a new Bean object per request (for message-driven Beans, per incoming JMS message), although of course it does not have to create a new object every time. In short, the container does not promise to keep any particular Bean object alive, so it can flexibly manage large numbers of Bean objects according to load. The most special kind is the "stateful session Bean": the container keeps the Bean object tied to the client's session state (the client's context object), that is, each client context corresponds to one stateful Bean. If you use the same client context and perform multiple lookup() calls, you will always reach the same EJB object. This is very convenient for services that need to stay logged in: clients get state preservation without having to manage the remote object's life cycle themselves.

Finally, a word about EJB deployment and configuration. Deploying to earlier EJB containers was extremely complex: besides writing a business Java class that inherited from specific base classes, there were a great many configuration details. Since EJB 3.0, these configurations can be written alongside the source code using Java annotations, and the business class no longer needs to implement specific interfaces or types; it can be an ordinary class (a POJO) with a few specific annotations added. The EJB container provides tools that process these annotated Java classes: on one hand the class is deployed into the container automatically, and on the other hand the client-side Home interface class files are generated for users to publish (copy) to whichever client servers need them. Some EJB containers, such as WebLogic, also provide graphical tools for Eclipse (IDE), so the whole process needs almost no extra configuration files or command-line work.
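
As a small sketch of what an EJB 3.0-style annotated POJO can look like, a stateful session Bean may be declared as below. The ShoppingCart interface and its contents are hypothetical, the interface and bean would normally live in separate files, and real deployments involve container-specific details.

import java.util.ArrayList;
import java.util.List;
import javax.ejb.Remote;
import javax.ejb.Remove;
import javax.ejb.Stateful;

// Remote business interface (hypothetical), in its own file.
@Remote
public interface ShoppingCart {
    void addItem(String item);
    List<String> getItems();
    void checkout();
}

// An ordinary class (POJO); @Stateful tells the container to keep one
// instance per client session, holding that client's state.
@Stateful
public class ShoppingCartBean implements ShoppingCart {
    private final List<String> items = new ArrayList<>();

    public void addItem(String item) { items.add(item); }

    public List<String> getItems() { return items; }

    @Remove // ends this Bean's life cycle when the client is done
    public void checkout() { items.clear(); }
}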

2. Microsoft WCF

WCF, short for Windows Communication Foundation, is a framework released by Microsoft for building service-oriented applications. Under the hood the framework builds on Windows COM+ technology, while the programming interface favors C#/VB and the .NET platform. This is similar to EJB, except that WCF remote objects do not need a virtual machine like the JVM; they are integrated with the Windows operating system.

Coincidentally, WCF's remote interface definitions are also written directly in C#/VB code, using "Attribute" annotations (similar to Java annotations) applied to a defined interface (Interface). A concrete business class only has to "implement" that interface and is otherwise no different from an ordinary class. The difference from EJB is that you still need to write an XML configuration to register the remote object's interface and lookup string with the ubiquitous IIS server. Once registered, it can be accessed through a URL such as http://xx.xx.xx.xx/servicesname/service.svc. On the client side, you run the svcutil.exe tool against the URL you just registered to generate the corresponding client stub code library. The client can then simply new an object of the generated stub type and call its methods directly, just like a local object.

// Create a client.
CalculatorClient client = new CalculatorClient();
// Call the Add service operation.
double value1 = 100.00D;
double value2 = 15.99D;
double result = client.Add(value1, value2);
Console.WriteLine("Add({0}, {1}) = {2}", value1, value2, result);

Of course, if you want to connect to different servers, the generated client code can also use a configuration file, in which you can change the address of the remote server (the same URL that was registered).

Besides exposing WCF remote objects through IIS, you can also write a standalone program with its own main() that fully controls these remote objects and serves them itself. And besides mapping a URL directly to one remote object, WCF can flexibly route calls arriving at the same URL to different remote objects by means of a hand-written "routing service". Although WCF does not provide EJB-style remote object life-cycle management, you can code any form of life-cycle management you like on top of WCF's service API and routing services.

3. IBM RMI-IIOP

IBM's RMI-IIOP service is also based on Java technology, but it is a different remote object technology from EJB; it is closer to a Java-based CORBA system. It uses the standard Java RMI interface (Remote interface) as the remote object's interface and Java's serialization/deserialization as the encoding mechanism. You then write a main() function and create an org.omg.CORBA.ORB object to build the remote server. The client locates the remote object it wants with a string such as corbaloc:iiop:1.2@localhost:8080/OurLittleClient, which contains an IP and port plus the name OurLittleClient registered when the server-side remote object was written. The remote object is prepared with a command line such as rmic -iiop Server, after which the server is launched with start java Server and the client with start java Client; these tools ship with IBM Developer Kit for Java technology v1.3.1.

As you can see, RMI-IIOP is a more primitive remote object scheme, basically a thin layer over CORBA's APIs. It is somewhat tedious to use, but the advantage is that you do not have to learn and deploy a complex container service: you can implement a complete remote object service entirely in your own code. There is no restriction on how you locate and find remote objects, nor on how you manage their life cycles; everything is written by the developers themselves.
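
For reference, a typical RMI-IIOP client lookup looks roughly like the sketch below. This assumes a CosNaming JNDI provider; the Hello interface is hypothetical, the registered name follows the article's example, and details vary by vendor.

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Point JNDI at the ORB's naming service (host and port are assumptions).
Properties props = new Properties();
props.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.cosnaming.CNCtxFactory");
props.put(Context.PROVIDER_URL, "iiop://localhost:8080");
Context ic = new InitialContext(props);

// Look up the name the server registered, then narrow to the RMI interface.
Object obj = ic.lookup("OurLittleClient");
Hello hello = (Hello) PortableRemoteObject.narrow(obj, Hello.class);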

Summary

Specification | Remote object positioning | Remote object life cycle management | Server deployment
EJB | JNDI path string lookup | Automatic management, with session-state objects | Uses container services
WCF | URL, routing service | None | Deploy to IIS, or write main() yourself
RMI-IIOP | CORBA URL location | None | Write main() yourself

For object positioning, string lookup is already the standard, and complex custom routing can be hidden beneath that lookup operation. Life-cycle management of remote objects is really server resource management; apart from EJB's container support, few solutions provide this capability, which suggests it is the hard part. For server deployment, letting users write their own main() with an API to build the server offers the greatest flexibility.

Challenges for remote objects: lifecycle management, data consistency

From the analysis above, we can see that life-cycle management of remote objects is an important and complex topic. It is difficult to design a life-cycle management scheme with a general strategy that keeps server resources stable across all kinds of business situations. Moreover, in a distributed system, load balancing requires deploying remote objects of the same type into different processes, which introduces a new problem: data consistency.

The life cycle of a remote object consumes not only the server's memory but also the routing space that records its address and the CPU time spent checking and maintaining that life cycle. If we provide automated object life-cycle management, we must also educate users about it and build defensive strategies so that object management does not break down when users make mistakes or overload the system. That is why even the EJB container only offers very simple life-cycle strategies: session-state and stateless.

For typical Internet applications, EJB's two kinds of life-cycle-managed remote objects are basically sufficient. In such applications most data is persistent and must be read from and written to a database; there is not much temporary state, mainly some per-session data produced after a user logs in, so a "Session"-style life cycle is enough. If the business is an online game, however, such a simple life cycle is not enough, because games hold a large amount of temporary state: the state of a party, of a player's room, of a dungeon instance, and so on. These temporary states require the business logic code to control and manage the corresponding object life cycles. A remote object system suited to games therefore needs to let client programs select, "create/initialize", and "destroy" remote objects themselves, as in the sketch below.
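
One way such an interface might look is a management contract that exposes creation, lookup, and destruction to the business code. The names below are purely hypothetical; the article does not prescribe an API.

// Hypothetical life-cycle contract a game-oriented remote object framework
// might expose to business logic; none of these names come from a real product.
public interface RemoteObjectManager {
    // Create and initialize a remote object bound to a business-defined key.
    <T> T create(String key, Class<T> type);

    // Find an existing remote object by its key, or return null if absent.
    <T> T find(String key, Class<T> type);

    // Destroy the remote object when the business logic decides its life is over.
    void destroy(String key);
}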

When managing remote objects, we often use "object pooling" to avoid creating and destroying objects frequently. But if these objects are stateful, the "pool" must be indexed and each object must have a key. The object also needs a "reset" method that returns it to its initial state.
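
A keyed pool of resettable objects might look like the following sketch (generic Java; the Resettable and KeyedPool types are assumptions, not part of any framework named in the article).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Objects kept in the pool must know how to return to their initial state.
interface Resettable {
    void reset();
}

// A pool indexed by key, so the same key always yields the same stateful object.
class KeyedPool<T extends Resettable> {
    private final Map<String, T> objects = new ConcurrentHashMap<>();
    private final Function<String, T> factory;

    KeyedPool(Function<String, T> factory) {
        this.factory = factory;
    }

    // Return the object bound to this key, creating it on first use.
    T acquire(String key) {
        return objects.computeIfAbsent(key, factory);
    }

    // Instead of destroying the object, reset it so it can be reused.
    void release(String key) {
        T obj = objects.get(key);
        if (obj != null) {
            obj.reset();
        }
    }
}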

In a distributed system, our object pools live on different machines, so keeping them consistent is hard. We can, however, turn this into the problem of building a "distributed object pool": if each pool stores objects according to some rule over the key, such as consistent hashing, then as long as the remote call is routed to the process that owns the object, that is, when the remote object is found through lookup(), the object can be fetched straight from that process's local pool. Combining remote object "positioning" with "consistency" in this way is a very good idea: it further simplifies the use of remote stateful objects, and users can quickly reach the correct object without caring where it lives.
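
The routing rule itself can be as simple as the consistent-hash sketch below. It uses a single ring point per process for brevity (real rings add many virtual points), and the choice of Java's hashCode() is an assumption made only to keep the example short.

import java.util.SortedMap;
import java.util.TreeMap;

// Map an object key to the process (node) that owns its pool entry.
class HashRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    void addNode(String node) {
        // One point per node keeps the sketch short; production rings add many
        // virtual points per node to spread the keys more evenly.
        ring.put(node.hashCode() & 0x7fffffff, node);
    }

    // The owner is the first node at or after the key's position on the ring.
    String lookup(String objectKey) {
        int h = objectKey.hashCode() & 0x7fffffff;
        SortedMap<Integer, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }
}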

[Remote object migration during scaling]

When some processes of the distributed object container fail, or when we need to scale out dynamically, we only have to relocate the data the objects refer to, or clear the relevant caches, and the objects can easily be redistributed. If the objects also support persistence, this relocation simply means persisting the objects that have been written; on the new machine, the objects can then be read back from the persistence device through the established caching policy.
