Tani OpcPipe protocol 1.0.5

API for OpcPipe.

Basics

This interface handles transport logically, in an OPC-like manner. It works with topics (such as connections or PLC names) and items (the contents of the PLC). When the connection is started, topic and item names are converted to ID numbers, giving good performance. Data handling is event driven; data types and data structures are supported. All functions are asynchronous, no waiting is performed. There are two assumptions about time: no timeouts in standard mode, and timeout checking in safety mode; the exception is the OPCpipe network connection itself. Application timeouts (no answer from the PLC and so on) need to be handled by the applications on both sides of the OPCpipe.

Real OPC works with synchronous calls. This logic is different, because synchronous calls over a network connection result in poor performance. All answers may return asynchronously later; they will call the specified events. So many calls can be made without waiting for results.

For every function that modifies something, an option with an acknowledge exists. This allows safety logics. The speed of the acknowledges depends on the network speed, the behaviour of the operating system and the speed of the application using the data. Gigabit networks and not completely overloaded computer systems will result in stable acknowledge times under one millisecond. Broken connections will be signalled in the same time raster.

There is no housekeeping in global calls, so WmkProtClientStopConnection() can be called without disconnecting all registered topics and items first. This needs to be done by the application itself. Only allocated memory is freed and open connections are closed.

The client implementation supports automatically reinitializing topics and items when a broken network connection is reestablished. The server stops all items and topics if the network connection is lost. In case of broken connections, all affected items will signal bad quality. The state of the connection is reflected in return values, quality values and a status call.

All logic works asynchronously. In standard mode, requests from the client side to the server side do not wait for an answer; normally no answer is sent if no error occurs (Unix: "no news is good news"). Registering items normally will give responses; this may be data or bad quality. This asynchronous logic increases speed in networks with low ping times. If a function returns a wait status although it did send its data, the next function call with data to send can continue once the previous function is ready. This can be checked by registering a confirm for this (WmkProtWaitConfirm()) or by polling with WmkProtConnStatus().

Internal timings, especially for life ack handling, can be adjusted with WmkProtTimings.

The active side (client) calls the passive side (server) via the network OPCpipe connection. The passive side sends data and status to the active side of the OPCpipe.

There is an option to use this interface for OPC UA and OPC DA. The only requirement is the OpcDaxx.so or OpcUaxx.so library.

How it works

The OPCpipe connects two applications. One side is the client, the other the server. The OPCpipe logic is similar to OPC logic. It is designed for easy handling and good performance with a large number of different elements, organized in topics and their corresponding items.

OPCpipe works similarly, but with the main difference that there is no DCOM and no shared memory inside. OPC based on DCOM was designed for use on one machine, so shared memory is fast and simple, but always synchronous. Network connections are asynchronous. Waiting for a result in every call would mean poor performance and high network loads. So topic and item creation always finishes immediately; the network is hidden from the view of the client and server application. Because of this, the OPCpipe handles the connections internally. The client can start the connection and create topics and items regardless of whether the network connection is established. When the network works, immediately or later, the pipe is created and initialized, and data can then be sent and received. If the network connection is lost and reestablished, the OPCpipe reinitializes all registered topics and items again. From the view of the OPCpipe server, all items and topics are deleted if the network connection is lost.

There are no blocking functions. Each function returns immediately. All results arrive via confirm calls.

All functions are reentrant and multiprocessor safe and thread safe, unless explicitly noted.

Because of user rights problems in shared memory, DCOM is very complex to configure. It uses multiple network connections with variable ports, and it uses a Windows-internal port for its internal registration. Applications identify themselves with their UUIDs, which need to be registered in the Windows registry. So it is bound to the Windows platform. OPCpipe does not need any of these things. It is neutral to any operating system, works on any network and only needs POSIX compatibility of the underlying system. Each OPCpipe can handle a large number of topics and items over one freely configurable network connection.

Highlights

Options

The OPCpipe works with the OPC logic, so it needs much of the functionality the client and server toolkits have. This includes data caching, optimizing for asynchronous communication lines, type conversion and endian translation.

Most OPC servers work by polling the data from the PLCs. Much of the data does not change very often, so it is unnecessary to send it over the network. Therefore there are options so that data is transmitted over the pipe only if its content has changed.

The OPC specification is old, so it contains no up-to-date browsing functionality. The OPCpipe has many options for browsing.

Normally the OPCpipe works asynchronously, but it can be set up to work synchronously.

Customer diagnostic functions are available, which no other specification has defined.

How to program a Client

The client first initializes itself as a client with WmkProtClientInit(). This registers the confirm calls used later. Then it sets up the connection with WmkProtClientConnection() (OPCpipe), WmkProtOpcClientConnection() (OPC DA classic) or WmkProtOpcUaClientConnection() (OPC UA). Now the contents of the server side can be browsed with WmkProtBrowse(). Items are created with WmkProtCreateItem(). Items default to the inactive state (unless TYP_FLAGS_CREATE_ACTIVATED is given in the create call). To receive data from them, they need to be activated with WmkProtActivateItem(). Data is received from the server with PLC_OPCPIPE_CONFIRMS::confirmCliCyclicReadData(); the server sends it with WmkProtCyclicReadData(). Data can be sent to the server with WmkProtWriteItem(). Temporarily unused items can be switched off with WmkProtDeactivateItem(), so no more data will arrive from them. All items of a topic can resend their data with WmkProtRefreshItem(). Items can be deleted with WmkProtDeleteItem(). Remote diagnostics are triggered with WmkProtDiagnostics(); the diagnostics data is received with PLC_OPCPIPE_CONFIRMS::confirmCliDiagnosticsData(). The topic and item names given when registering the elements can be requested with WmkProtDiagTopicName() and WmkProtDiagItemName() for nicer user display. An opened connection can be deleted with WmkProtClientStopConnection(), but this cancels the connection and does not delete items and topics individually. The complete OPCpipe is finished with WmkProtClientTerminate(), which works similarly to WmkProtClientStopConnection().

Creating or deleting items and changing the activation status will have a positive result even if the network line cannot handle it at that moment. The requests will be cached in internal memory and handled when network capacity becomes free again. Please note that very slow network lines may need a lot of time until everything is handled and the first data arrives. The runtime capabilities can be queried with WmkProtGetCapabilities().

File access via OPC UA

After opening an OPC UA client connection, file access works via the WmkProtRpcCall() function. First a file must be opened with RPC_ID_FILE_OPEN (using an appropriate file mode). The returned handle can be used with multiple calls of RPC_ID_FILE_READ and RPC_ID_FILE_WRITE. Each call transfers the specified amount of data. Reading may return less than requested if the file is too short (so to read a complete file, one can call read until the returned length is 0). Finally the file should be closed via RPC_ID_FILE_CLOSE. If required, the file size can be determined with RPC_ID_FILE_ATTR.

Browsing of files works via WmkProtBrowse() with WMK_PROT_BROWSE_ITEMS like any other browsing operation. Files are indicated by a datatype of TYP_USE_FILE.

More file operations are possible, see well-known functions for OPC UA RPC calls.

How to program a Server

The server first initializes itself with WmkProtServerInit(). This registers the confirm calls used later. Then it sets up the connection with WmkProtServerConnection(). Now the client may register items, which will result in the PLC_OPCPIPE_CONFIRMS::confirmSrvCreateTopic() and PLC_OPCPIPE_CONFIRMS::confirmSrvCreateItem() calls. Browsing is handled by PLC_OPCPIPE_CONFIRMS::confirmSrvBrowseItems(). Syntax and handling are similar to the client side; the parameters of the confirm functions are similar. If new values are available, the server calls WmkProtCyclicReadData() to send them to the client. At the end of a connection the server calls WmkProtServerStopConnection(). The complete OPCpipe is finished with WmkProtServerTerminate(), which works similarly to WmkProtServerStopConnection().

Server work modes

Internally, the OPCpipe library is a real-time database system. It connects topics and items together, saving computation time by using the topic and item names only on creation.

There are two major types of working:

Separate handling separates the connections in the network completely. This can be used for gateways. All elements of an open and running network connection are separated from other connections.

Global table handling may be most useful for server-end handling towards PLCs. It combines topics and their identifiers into one topic list accessing the PLC. Multiple network stations can connect to the same topic; the application using the OPCpipe server side gets only one topic with that name. So applications can be simple, and no combining of multiple connections is needed. Equal items are combined too, so sending process data once in a server implementation will dispatch it to multiple stations in your network.

Detailed Calls

All calls are asynchronous. The calls handling the partner side of the OPCpipe will respond later with one of the registered confirm calls. Because the network connection may be slower than the requests, its queue can become full. In this case the call that cannot be handled immediately will return ERR_OPCPIPE_WAIT. The next call can be made once the wait confirm has arrived, informing that the queue is open again. Calls made while the queue is overloaded will return ERR_OPCPIPE_TRY_AGAIN.

If a call returns ERR_OPCPIPE_WAIT, the data needs to stay stable because the functions will continue to use it. The data is released again once the wait confirm has signalled this.

If a confirm is given, it will be called when the data has been sent or the connection died, or later when the connection status changes.

Details