Types for Netplex
Log levels, modeled after syslog
A logger receives log messages
Receive a log message of the given level from the given component. The component string is the name of the socket service emitting the message. Optionally, one can specify a subchannel when a single component needs to write to several log files; for example, a subchannel of "access" could select the access log of a webserver. The main log file is selected by passing the empty string as the subchannel.
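For illustration, a minimal sketch of how a component could use such a logger; the labeled-argument form (component, subchannel, level, message) is an assumption based on this description, and the service name is made up:

    (* Hypothetical usage of a logger: access entries go to the "access"
       subchannel, errors to the main log file (empty subchannel). *)
    let log_request (logger : Netplex_types.logger) msg =
      logger#log_subch
        ~component:"company.product.web"   (* name of the socket service *)
        ~subchannel:"access"               (* selects the access log *)
        ~level:(`Info)
        ~message:msg

    let log_error (logger : Netplex_types.logger) msg =
      logger#log_subch
        ~component:"company.product.web"
        ~subchannel:""                     (* "" = main log file *)
        ~level:(`Err)
        ~message:msg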
Same as log_subch when subchannel is set to the empty string. This means that the message is sent to the main log file of the component.
Reopen the log files
Type of parallelization:
- `Multi_processing: on a single host
- `Multi_threading: on a single host
- `Controller_attached: the service runs within the controller. This is (for now) only allowed for controller-internal services.

A system-specific identifier of the thread/process
The state of a socket:
- `Enabled: The controller allows containers to accept connections. Note that this does not necessarily mean that such containers exist.
- `Disabled: It is not allowed to accept new connections. The socket is kept open, however.
- `Restarting b: The containers are being restarted. The boolean argument says whether the socket will be enabled after that.
- `Down: The socket is down/closed.

Such objects identify containers. As additional info, the method socket_service_name returns the name of the socket service the container implements.
The container state for workload management:
- `Accepting(n,t): The container is accepting further connections. It currently processes n connections. The last connection was accepted at time t (seconds since the epoch).
- `Busy: The container does not accept connections.
- `Starting t: The container was started at time t and is not yet ready.
- `Shutting_down: The container is being shut down.

How many connections a container can accept in addition to the existing connections:
- `Normal_quality(n,greedy): It can accept n connections with normal service quality, n > 0.
- `Low_quality(n,greedy): It can accept n connections with low service quality (e.g. because it is already quite loaded), n > 0.
- `Unavailable: No capacity free.

The greedy flag sets whether greedy accepts are allowed.
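As a sketch, a custom workload manager might compute such a value like this; the constructor shapes follow the description above, while the thresholds and the decision logic are illustrative:

    (* Hypothetical capacity computation: full-quality service up to a
       soft limit, degraded service up to a hard limit, then no capacity.
       Greedy accepts are disabled in this sketch. *)
    let capacity ~current ~soft ~hard =
      if current >= hard then `Unavailable
      else if current >= soft then `Low_quality (hard - current, false)
      else `Normal_quality (soft - current, false)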
Possible addresses:
- `Socket s: The socket at this socket address.
- `Socket_file f: The file f contains the (anonymous) port number of a socket bound to 127.0.0.1. (This is meant as a substitute for Unix Domain sockets on Win32.)
- `W32_pipe name: The Win32 pipe with this name, which must be of the form "\\.\pipe\<pname>".
- `W32_pipe_file f: The file f contains the (random) name of a Win32 pipe.
- `Container(socket_dir,service_name,proto_name,thread_id): The special endpoint of the container for service_name that is running as thread_id. It is system-dependent what this endpoint "is" in reality, usually a socket or named pipe. If any container of a service is meant, the value `Any is substituted as a placeholder for a not yet known thread_id.
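For illustration, a few such values as they might be written in OCaml; the payload types (a Unix.sockaddr for `Socket, plain strings for files and pipe names) are assumptions based on the descriptions above, and the paths are made up:

    (* Illustrative address values. Note the OCaml escaping of the
       pipe name. *)
    let tcp_address  = `Socket (Unix.ADDR_INET (Unix.inet_addr_loopback, 8080))
    let unix_address = `Socket (Unix.ADDR_UNIX "/var/run/myservice/http")
    let pipe_address = `W32_pipe "\\\\.\\pipe\\myservice"  (* \\.\pipe\<pname> *)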
The list of controlled services
Adds a new service. Containers for these services will be started soon. It is allowed to add several services with the same name (but it will be hard to distinguish them later).
Adds a message receiver. This receiver runs in the context of the controller and receives all messages sent to it. The name method must return the name.
Adds a plugin. If the plugin object has already been added, this is a no-op. Plugins must have been added before the first container is started. This is not checked, however. You are on the safe side when the plugin is added in the create_processor factory method, or in the post_add_hook of the processor.
add_admin setup: Binds another RPC program to the admin socket. The function setup is called whenever a connection to the admin socket is established, and this function can call Rpc_server.bind to bind another RPC program. By default, only the Admin interface is available, as described in netplex_ctrl.x. Note that this RPC server runs in the scope of the controller! No additional process or thread is created.
The event system used by the controller. It must not be used from a container.
Initiates a restart of all containers: All threads/processes are terminated and replaced by newly initialized ones.
Initiates a shutdown of all containers. It is no longer possible to add new services. When the shutdown has been completed, the controller will terminate itself. Note that the shutdown is performed asynchronously, i.e. this method returns immediately, and the messaging required to do the shutdown is done in the background.
send_message destination msgname msgargs: Sends a message to destination. When this method returns, it is only ensured that the receivers registered in the controller have been notified about the message (so it can be made sure that any newly forked containers know about the message). It is not guaranteed that the existing containers are notified when this method returns. This can (and usually will) happen at any time in the future.
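As a sketch, sending such a message from code that holds the controller object; the argument types (two strings and a string array) are assumptions based on this description, and the destination and message name are made up:

    (* Hypothetical broadcast: tell a component to reload its
       configuration. The empty array carries no arguments. *)
    let notify_reload (ctrl : Netplex_types.controller) =
      ctrl#send_message "company.product.worker" "reload_config" [| |]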
send_admin_message destination msgname msgargs: Sends an admin message to destination. See send_message for the notification guarantees.
let id = register_lever f: It is possible to register a function f in the controller, and run it over the internal RPC interface from any container. These functions are called levers. See activate_lever below. See also Netplex_cenv.Make_lever for a convenient way to create and use levers.
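A hedged sketch of the mechanism: the encapsulation interface used below (Netplex_encap.Make_encap with wrap/unwrap) and the exact register_lever signature are assumptions made for illustration:

    (* Hypothetical lever: runs in the controller and doubles its
       argument. Values cross the RPC boundary as encapsulated values. *)
    module Int_encap = Netplex_encap.Make_encap (struct type t = int end)

    let register_double (ctrl : Netplex_types.controller) =
      ctrl#register_lever
        (fun _ctrl arg -> Int_encap.wrap (2 * Int_encap.unwrap arg))

    (* Later, from any container:
         let r = cont#activate_lever id (Int_encap.wrap 21) in
         assert (Int_encap.unwrap r = 42)
    *)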
Lists the containers for a certain socket service name
The number of containers for a certain socket service name
Should be called when the controller is finished, in order to free resources again. E.g. plugins are unplugged, and the master sockets are closed.
The current directory at startup time
The controller is the object in the Netplex master process/thread that manages the containers, logging, and service definitions
The directory where Unix domain sockets are created. For every service a subdirectory is created, and the socket has the name of the protocol.
This is always an absolute path, even if it is only given as relative path in the config file.
Create a logger to be used for the whole Netplex system. The controller is already initialized, which makes it possible to write the logger as a Netplex service. Messages arriving during the creation are queued up and sent to the new logger afterwards.
The name of the socket_service is used to identify the service in the whole Netplex process cluster. Names are hierarchical; name components are separated by dots (e.g. "company.product.service"). The prefix "netplex." is reserved for use by Netplex. The name "netplex.controller" refers to the service provided by the controller.
A socket_service consists of a list of supported protocols which are identified by a name. Every protocol is available on a list of sockets (which may be bound to different addresses). The sockets corresponding to `Container addresses are missing here.
Shuts down the master sockets
Internal method. Called by the controller to create a new container. The container must match the parallelization type of the controller. This call is already done in the process/thread provided for the container.
Get some runtime configuration aspects from this controller. This is called when the socket service is added to the controller
The current directory at Netplex startup time (same view as controller)
The proposed name for the socket_service
Instructs the container to change the user of the process after starting the service. This is only possible in multi-processing mode. In multi-threading mode, this parameter is ignored.
After this many seconds the container must have finished the post_start_hook. It is usually 60 seconds.
An optional limit of the number of connections this container can accept. If the limit is reached, the container will not accept any further connections, and shut down when all connections are processed.
If set, idle containers run a Gc.full_major cycle.
The protocol name is an arbitrary string identifying groups of sockets serving the same protocol for a socket_service.
The addresses of the master sockets. (The socket type is always SOCK_STREAM.) The list must be non-empty.
The backlog (argument of Unix.listen)
Whether to reuse ports immediately
Whether to set the keep-alive socket option
Whether to set the TCP_NODELAY option
A user-supplied function to configure slave sockets (after accept). The function is called from the process/thread of the container.
Enables a disabled socket service again
Disables a socket service temporarily
Restarts the containers for this socket service only
Closes the socket service forever, and initiates a shutdown of all containers serving this type of service.
The name of this receiver
This function is called when a broadcast message is received. The first string is the name of the message, and the array are the arguments.
This function is called when a broadcast admin message is received. The first string is the name of the message, and the array are the arguments.
A user-supplied function that is called after the service has been added to the controller
A user-supplied function that is called after the service has been removed from the controller
A user-supplied function that is called before the container is created and started. It is called from the process/thread of the controller.
A user-supplied function that is called after the container is created and started, but before the first service request arrives. It is called from the process/thread of the container.
A user-supplied function that is called just before the container is terminated. It is called from the process/thread of the container.
A user-supplied function that is called after the container is terminated. It is called from the process/thread of the controller.
A user-supplied function that is called when the workload changes, i.e. a new connection has been accepted, or an existing connection has been completely processed. The bool argument is true if the reason is a new connection. The int argument is the number of connections. This function is called from the process/thread of the container.
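A sketch of such a hook, under the assumption that it receives the container, the bool, and the int in that order; the threshold and message are illustrative:

    (* Hypothetical workload hook: warn when many connections pile up. *)
    let workload_hook _container is_new_connection n_connections =
      if is_new_connection && n_connections > 100 then
        prerr_endline
          ("netplex: container now handles "
           ^ string_of_int n_connections ^ " connections")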
This function is called when a broadcast message is received. The first string is the name of the message, and the array are the arguments.
This function is called when a broadcast admin message is received. The first string is the name of the message, and the array are the arguments.
A user-supplied function that is called when a system shutdown notification arrives. This notification is just for information that every container of the system will soon be shut down. The system is still completely up at the time this notification arrives, so if the services of other components are required in order to go down, this is the right point in time to use them (e.g. send important data to a storage component).
A user-supplied function that is called when a shutdown notification arrives. That means that the container should terminate ASAP. There is, however, no time limitation. The termination is started by calling the when_done function passed to the process method.
This method is called when an uncaught exception would otherwise terminate the container. It can return true to indicate that the container continues running.
This method is called to get the event systems for containers. This is normally a Unixqueue.standard_event_system, but users can override it.
container_run esys: This method is called to run the event system of the containers. By default, it just runs esys#run(). Users can override it.
Processor hooks can be used to modify the behavior of a processor. See [root:Netplex_intro].servproc for some documentation about the hooks.
A user-supplied function that is called when a new socket connection is established. The function can now process the requests arriving over the connection. It is allowed to use the event system of the container, and to return immediately (multiplexing processor). It is also allowed to process the requests synchronously and to first return to the caller when the connection is terminated.
The function must call when_done to indicate that it processed this connection completely. The string argument is the protocol name.
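For illustration, a minimal synchronous sketch of such a connection function; the labeled when_done argument and the argument order are assumptions based on this description:

    (* Hypothetical connection function: write a fixed reply, close the
       descriptor, and signal that the connection is fully processed. *)
    let process ~when_done (_cont : Netplex_types.container) fd _proto =
      let ch = Unix.out_channel_of_descr fd in
      output_string ch "hello\n";
      flush ch;
      Unix.close fd;
      when_done ()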
The processor is the object that is notified when a new TCP connection is accepted. The processor has to include the protocol interpreter that reads and writes data on this connection. See [root:Netplex_intro].defproc for an example of how to define a processor.
Internal method. Called by the controller to start the container. It is the responsibility of the container to call the post_start_hook and the pre_finish_hook. The file descriptors are endpoints of RPC connections to the controller. The first serves calls of the Control program, and the second serves calls of the System program. When start returns, the container will be terminated.
Initiates a shutdown of the container.
The current number of connections
The sum of all connections so far
An RPC client that can be used to send messages to the controller. Only available while start is running. It is bound to System.V1. In multi-threaded programs, access to system must be governed by system_monitor. See [root:Uq_mt] for details on what this means.
lookup service_name protocol_name: Tries to find a Unix domain socket for the service and returns it.
lookup_container_sockets service_name protocol_name: Returns the Unix Domain paths of all container sockets for this service and protocol. These are the sockets declared with address type "container" in the config file.
List of pairs (protocol_name, path) of all container sockets of this container.
send_message service_pattern msg_name msg_arguments: Sends a message to all services and message receivers matching service_pattern. The pattern may include the wildcard *.
See the Netplex_types.controller.send_message method for the notification guarantees.
Sends a log message to the controller. The first string is the subchannel
Update the detail string output for the netplex.connections admin message.
Returns the value of a container variable, or raises Not_found. Container variables can be used by the user of a container to store additional values in the container. These values exist once per thread/process.
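A sketch of a per-container cache built on this; the accessor method names (var, set_var) and the Netplex_encap encapsulation interface are assumptions made for illustration:

    (* Hypothetical per-thread/process memoization via container
       variables: compute once, store, and reuse on later calls. *)
    module Str_encap = Netplex_encap.Make_encap (struct type t = string end)

    let cached_banner (cont : Netplex_types.container) =
      try Str_encap.unwrap (cont#var "my.banner")
      with Not_found ->
        let v = "service ready" in        (* expensive computation here *)
        cont#set_var "my.banner" (Str_encap.wrap v);
        v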
Runs a lever function registered in the controller. The int argument identifies the lever. The encap argument is the parameter, and the returned exception is the result. See also Netplex_cenv.Make_lever for a convenient way to create and use levers.
The current directory at Netplex startup time (same view as controller)
Containers encapsulate the control flow of the service components. A container is run in a separate thread or process.
Thread safety: All methods except start can be called from any thread, and provide full thread safety.
Called by the controller to notify the manager about a shutdown
This function is called by the controller at certain events to adjust the number of available containers. The manager can call start_containers and stop_containers to change the system.
The function is called right after the startup to ensure that there are containers to serve requests. It is also called when certain events occur.
Of course, the workload manager is free to adjust the load at any other time, too, not only when adjust is called.
Computes the capacity, i.e. the number of jobs a certain container can accept in addition to the existing load.
See [root:Netplex_workload] for definitions of workload managers
The RPC program structure on which the messaging is based. The program, version and procedure numbers are ignored.
This method is invoked when the plugin has been added to this controller. Note that plugins can be added to several controllers.
ctrl_receive_call ctrl cid procname procarg emit: This method is called in the controller context ctrl when a procedure named procname is called. In procarg the argument of the procedure is passed. cid is the container ID from where the call originates. To pass the result r of the call back to the caller, it is required to call emit (Some r) (either immediately, or at some time in the future). By calling emit None, an error condition is propagated back to the caller.
This method is called when a container finishes (after post_finish_hook). The boolean is true if the container is the last of the terminated socket service.
Plugins are extensions of the Netplex system that run in the controller and can be invoked from containers
Returns a system-dependent identifier for the thread:
- `Thread id: The id as returned by Thread.id.
- `Process id: The id is the process ID.

Outputs the process or thread ID
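As an illustration of both items, a minimal sketch that renders the identifier as text, matching the two variants described above:

    (* Turn the system-dependent identifier into a printable string. *)
    let string_of_sys_id = function
      | `Thread id  -> "thread "  ^ string_of_int id
      | `Process id -> "process " ^ string_of_int id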
Called by the controller if it thinks the container is down. This method must not be called outside the internal Netplex implementation!
Returns the parallelizer that created this thread. Can be used to start another thread of the same type.
Initializes the main process for usage with this parallelizer. This method must not be called outside the internal Netplex implementation!
start_thread f l_close l_share name logger: Starts a new thread or process and calls f thread in that context. Before this is done, file descriptors are closed, controlled by the parameters l_close and l_share. The descriptors in l_close are always closed. The descriptors in l_share are not closed. The implementation of the parallelizer is free to close a reasonable set of descriptors; l_close is the minimum, and all minus l_share is the maximum. There is no way to check when the thread terminates. It is allowed that the par_thread object passed to f is a different object than the returned par_thread object.
let lock, unlock = par#create_mem_locker(): Creates a mutex that is sufficient to protect process memory from uncoordinated access. The function lock obtains the lock, and unlock releases it.
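A sketch of wrapping critical sections with this pair; the locker is created once and reused, since each call returns an independent mutex:

    (* Hypothetical helper: build a "protect" combinator from the
       locker. Fun.protect guarantees unlock even on exceptions. *)
    let make_protect (par : Netplex_types.parallelizer) =
      let lock, unlock = par#create_mem_locker () in
      fun f ->
        lock ();
        Fun.protect ~finally:unlock f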
Returns the system-dependent thread identifier of the caller