1. Working with InternetGatewayDevices

An InternetGatewayDevice connects a LAN to a WAN and, through UPnP, supports the monitoring and configuration of LAN and WAN interfaces. Typically this functionality is used for NAT port mapping: A client application on the LAN wants to receive network connections from a WAN host, so it has to create a port mapping (forwarding rule) on the LAN router.

1.1. Mapping a NAT port

Cling Support contains all the necessary functionality; creating a port mapping on all NAT routers on your network requires only three statements:

PortMapping desiredMapping =
        new PortMapping(
                8123,
                "192.168.0.123",
                PortMapping.Protocol.TCP,
                "My Port Mapping"
        );

UpnpService upnpService =
        new UpnpServiceImpl(
                new PortMappingListener(desiredMapping)
        );

upnpService.getControlPoint().search();

The first statement creates a port mapping configuration with the external/internal port, an internal host IP, the protocol, and an optional description.

The second statement starts the UPnP service with a special listener. This listener will add the port mapping on any InternetGatewayDevice with a WANIPConnection or a WANPPPConnection service as soon as it is discovered. The third statement immediately starts a ControlPoint#search() for all devices on your network; this triggers a response from and discovery of all NAT routers, activating the port mapping.

The listener will also delete the port mapping when you stop the UPnP stack through UpnpService#shutdown(), usually before your application quits. If you forget to shut down the stack, the port mapping will remain on the InternetGatewayDevice: the default lease duration is 0, which means it never expires!

If anything goes wrong, messages are logged at WARNING level in the category org.teleal.cling.support.igd.PortMappingListener. You can override the PortMappingListener#handleFailureMessage(String) method to customize this behavior.

Alternatively, you can manually add and delete port mappings on an already discovered device with the following ready-to-use action callbacks:

Service service = device.findService(new UDAServiceId("WANIPConnection"));

upnpService.getControlPoint().execute(
    new PortMappingAdd(service, desiredMapping) {

        @Override
        public void success(ActionInvocation invocation) {
            // All OK
        }

        @Override
        public void failure(ActionInvocation invocation,
                            UpnpResponse operation,
                            String defaultMsg) {
            // Something is wrong
        }
    }
);

upnpService.getControlPoint().execute(
    new PortMappingDelete(service, desiredMapping) {

        @Override
        public void success(ActionInvocation invocation) {
            // All OK
        }

        @Override
        public void failure(ActionInvocation invocation,
                            UpnpResponse operation,
                            String defaultMsg) {
            // Something is wrong
        }
    }
);

1.2. Getting connection information

The current connection information, including status, uptime, and last error message, can be retrieved from a WAN*Connection service with the following callback:

Service service = device.findService(new UDAServiceId("WANIPConnection"));

upnpService.getControlPoint().execute(
    new GetStatusInfo(service) {

        @Override
        protected void success(Connection.StatusInfo statusInfo) {
            assertEquals(statusInfo.getStatus(), Connection.Status.Connected);
            assertEquals(statusInfo.getUptimeSeconds(), 1000);
            assertEquals(statusInfo.getLastError(), Connection.Error.ERROR_NONE);
        }

        @Override
        public void failure(ActionInvocation invocation,
                            UpnpResponse operation,
                            String defaultMsg) {
            // Something is wrong
        }
    }
);

Additionally, a callback for obtaining the external IP address of a connection is available:

Service service = device.findService(new UDAServiceId("WANIPConnection"));

upnpService.getControlPoint().execute(
    new GetExternalIP(service) {

        @Override
        protected void success(String externalIPAddress) {
            assertEquals(externalIPAddress, "123.123.123.123");
        }

        @Override
        public void failure(ActionInvocation invocation,
                            UpnpResponse operation,
                            String defaultMsg) {
            // Something is wrong
        }
    }
);

2. Sending messages to Samsung TVs

Many network-enabled Samsung TVs implement the proprietary samsung.com:MessageBoxService:1. The original purpose of this service was most likely connectivity with Samsung mobile phones; notification messages and alerts would appear on your TV when you are at home and your cellphone is connected to your local WiFi network (and your TV is turned on).

Cling Support delivers client classes for sending notifications to your Samsung TV via UPnP. (See this page for more information about the reverse-engineered raw message format.)

Sending messages from an Android handset
The XML parsing of messages requires Android 2.2; it won't work on any older version.

There are several message types available. The first is an SMS with sender and receiver names and phone numbers, as well as a timestamp and the message text:

MessageSMS msg = new MessageSMS(
        new DateTime("2010-06-21", "16:34:12"),
        new NumberName("1234", "The Receiver"),
        new NumberName("5678", "The Sender"),
        "Hello World!"
);

This message will appear as a "New SMS Received!" notification on your TV, with the option to reveal all message details. The other message types recognized by the TV are an incoming call notification and a calendar schedule reminder:

MessageIncomingCall msg = new MessageIncomingCall(
        new DateTime("2010-06-21", "16:34:12"),
        new NumberName("1234", "The Callee"),
        new NumberName("5678", "The Caller")
);
MessageScheduleReminder msg = new MessageScheduleReminder(
        new DateTime("2010-06-21", "16:34:12"),
        new NumberName("1234", "The Owner"),
        "The Subject",
        new DateTime("2010-06-21", "17:34:12"),
        "The Location",
        "Hello World!"
);

This is how you send a message asynchronously:

Service service = device.findService(new ServiceId("samsung.com", "MessageBoxService"));

upnpService.getControlPoint().execute(
    new AddMessage(service, msg) {

        @Override
        public void success(ActionInvocation invocation) {
            // All OK
        }

        @Override
        public void failure(ActionInvocation invocation,
                            UpnpResponse operation,
                            String defaultMsg) {
            // Something is wrong
        }
    }
);

Note that although your TV's service descriptor most likely contains a RemoveMessage action and Cling Support also ships with a RemoveMessageCallback, this action doesn't seem to be implemented by any Samsung TVs. Messages can only be deleted directly on the TV, with the remote control.

3. Accessing and providing MediaServers

The standardized UPnP AV MediaServer:1 device template describes some of the most popular UPnP services. Despite the name, these services are not about serving and accessing media data such as music, picture, or video files. They are for sharing metadata: the data about media files, such as their name, format, and size, and a locator that can be used to obtain the actual file. Transmission of the media file is outside the scope of these specifications; most of the time that is the job of a simple HTTP server and client.

ContentDirectory server or client on Android
The XML parsing of DIDL content requires Android 2.2; it won't work on any older version.

A MediaServer:1 device has at least a ContentDirectory:1 and a ConnectionManager:1 service.

3.1. Browsing a ContentDirectory

A ContentDirectory:1 service provides media resource metadata. The content format for this metadata is XML, and the schema is a mixture of DIDL, Dublin Core, and UPnP-specific elements and attributes. Usually you'd have to call the Browse action of the content directory service to get this XML metadata and then parse it manually.

The Browse action callback in Cling Support handles all of this for you:

new Browse(service, "3", BrowseFlag.DIRECT_CHILDREN) {

    @Override
    public void received(ActionInvocation actionInvocation, DIDLContent didl) {

        // Read the DIDL content either using generic Container and Item types...
        assertEquals(didl.getItems().size(), 2);
        Item item1 = didl.getItems().get(0);
        assertEquals(
                item1.getTitle(),
                "All Secrets Known"
        );
        assertEquals(
                item1.getFirstPropertyValue(DIDLObject.Property.UPNP.ALBUM.class),
                "Black Gives Way To Blue"
        );
        assertEquals(
                item1.getFirstResource().getProtocolInfo().getContentFormatMimeType().toString(),
                "audio/mpeg"
        );
        assertEquals(
                item1.getFirstResource().getValue(),
                "http://10.0.0.1/files/101.mp3"
        );

        // ... or cast it if you are sure about its type ...
        assert MusicTrack.CLASS.equals(item1);
        MusicTrack track1 = (MusicTrack) item1;
        assertEquals(track1.getTitle(), "All Secrets Known");
        assertEquals(track1.getAlbum(), "Black Gives Way To Blue");
        assertEquals(track1.getFirstArtist().getName(), "Alice In Chains");
        assertEquals(track1.getFirstArtist().getRole(), "Performer");

        MusicTrack track2 = (MusicTrack) didl.getItems().get(1);
        assertEquals(track2.getTitle(), "Check My Brain");

        // ... which is much nicer for manual parsing, of course!

    }

    @Override
    public void updateStatus(Status status) {
        // Called before and after loading the DIDL content
    }

    @Override
    public void failure(ActionInvocation invocation,
                        UpnpResponse operation,
                        String defaultMsg) {
        // Something wasn't right...
    }
};

This callback retrieves all the children of the container with the identifier '3'.

The root container identifier
You can not simply copy/paste this example code; it will most likely not return any items. The container ID '3' is just an example, and your server probably does not have a container with that identifier. If you want to browse the "root" container of the ContentDirectory, use the identifier '0': Browse(service, "0", BrowseFlag.DIRECT_CHILDREN). Although not standardized, many media servers consider the ID '0' to be the root container's identifier; if yours does not, ask your media server vendor. By listing all the children of the root container you can obtain the identifiers of sub-containers, and so on, recursively.
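The recursive traversal suggested here can be sketched without any Cling classes. BrowseSketch and its map-based fake directory below are purely hypothetical; a real implementation would issue one Browse action per container and read the returned Containers and Items:

```java
import java.util.*;

// Hypothetical stand-in for a ContentDirectory: maps a container ID to its
// child IDs. Entries starting with "c:" are containers, everything else items.
public class BrowseSketch {

    static final Map<String, List<String>> DIRECTORY = Map.of(
            "0", List.of("c:music", "c:video"), // "0" is the root container
            "c:music", List.of("101", "102"),
            "c:video", List.of("201")
    );

    // Recursively collect all item IDs below the given container, mirroring
    // "list the children of the root container, then recurse into sub-containers"
    static List<String> collectItems(String containerId) {
        List<String> items = new ArrayList<>();
        for (String child : DIRECTORY.getOrDefault(containerId, List.of())) {
            if (child.startsWith("c:")) {
                items.addAll(collectItems(child)); // sub-container: recurse
            } else {
                items.add(child);                  // item: collect
            }
        }
        return items;
    }

    public static void main(String[] args) {
        System.out.println(collectItems("0")); // [101, 102, 201]
    }
}
```

With a real server, beware that deep recursion over a large directory can mean many round trips; paginating each Browse call (see below) is usually kinder to the device.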

The received() method is called after the DIDL XML content has been validated and parsed, so you can use a type-safe API to work with the metadata. DIDL content is a composite structure of Container and Item elements, here we are interested in the items of the container, ignoring any sub-containers it might or might not have.

You can implement or ignore the updateStatus() method; it's convenient to be notified before the metadata is loaded and after it has been parsed. You can use this event to update a status message/icon of your user interface, for example.

This more complex callback instantiation shows some of the available options:

ActionCallback complexBrowseAction =
        new Browse(service, "3", BrowseFlag.DIRECT_CHILDREN,
                   "*",
                   100L, 50L,
                   new SortCriterion(true, "dc:title"),        // Ascending
                   new SortCriterion(false, "dc:creator")) {   // Descending

            // Implementation...

        };

The arguments declare filtering with a wildcard, limiting the result to 50 items starting at item 100 (pagination), and some sort criteria. It's up to the content directory provider to handle these options.
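To illustrate what a provider typically does with the pagination arguments, here is a minimal plain-Java sketch (PaginationSketch is hypothetical, not a Cling class). Note that in ContentDirectory:1 a requested count of 0 conventionally means "no limit":

```java
import java.util.*;

// Illustrative only: how a "starting index 100, maximum 50 results" window
// maps onto a result list, as a content directory provider might apply it.
public class PaginationSketch {

    static <T> List<T> page(List<T> all, long firstResult, long maxResults) {
        int from = (int) Math.min(firstResult, all.size());
        int to = (maxResults == 0)             // 0 requested count: return everything
                ? all.size()
                : (int) Math.min(from + maxResults, all.size());
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> results = new ArrayList<>();
        for (int i = 0; i < 120; i++) results.add(i);
        // Window of up to 50 items starting at index 100: only 20 remain
        System.out.println(page(results, 100, 50).size()); // 20
    }
}
```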

3.2. The ContentDirectory service

Let's switch perspective and consider the server side of a ContentDirectory. Bundled in Cling Support is a simple abstract ContentDirectory service class; the only thing you have to do is implement the browse() method:

public class MP3ContentDirectory extends AbstractContentDirectoryService {

    @Override
    public BrowseResult browse(String objectID, BrowseFlag browseFlag,
                               String filter,
                               long firstResult, long maxResults,
                               SortCriterion[] orderby) throws ContentDirectoryException {
        try {

            // This is just an example... you have to create the DIDL content dynamically!

            DIDLContent didl = new DIDLContent();

            String album = "Black Gives Way To Blue";
            String creator = "Alice In Chains"; // Required
            PersonWithRole artist = new PersonWithRole(creator, "Performer");
            MimeType mimeType = new MimeType("audio", "mpeg");

            didl.addItem(new MusicTrack(
                    "101", "3", // 101 is the Item ID, 3 is the parent Container ID
                    "All Secrets Known",
                    creator, album, artist,
                    new Res(mimeType, 123456L, "00:03:25", 8192L, "http://10.0.0.1/files/101.mp3")
            ));

            didl.addItem(new MusicTrack(
                    "102", "3",
                    "Check My Brain",
                    creator, album, artist,
                    new Res(mimeType, 2222222L, "00:04:11", 8192L, "http://10.0.0.1/files/102.mp3")
            ));

            // Create more tracks...

            // Count and total matches is 2
            return new BrowseResult(new DIDLParser().generate(didl), 2, 2);

        } catch (Exception ex) {
            throw new ContentDirectoryException(
                    ContentDirectoryErrorCode.CANNOT_PROCESS,
                    ex.toString()
            );
        }
    }

    @Override
    public BrowseResult search(String containerId,
                               String searchCriteria, String filter,
                               long firstResult, long maxResults,
                               SortCriterion[] orderBy) throws ContentDirectoryException {
        // You can override this method to implement searching!
        return super.search(containerId, searchCriteria, filter, firstResult, maxResults, orderBy);
    }
}

You need a DIDLContent instance and a DIDLParser that will transform the content into an XML string when the BrowseResult is returned. It's up to you how you construct the DIDL content; typically you'd have a backend database you'd query and then build the Container and Item graph dynamically. Cling provides many convenience content model classes for representing multimedia metadata, as defined in the ContentDirectory:1 specification (MusicTrack, Movie, etc.); they can all be found in the package org.teleal.cling.support.model.

The DIDLParser is not thread-safe, so don't share a single instance between all threads of your server application!
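One common way to honor this restriction is to create a fresh DIDLParser per call, or to keep one instance per thread with a ThreadLocal. The following sketch is self-contained, with a plain StringBuilder standing in for the parser; the ThreadLocal idiom is the point:

```java
// A non-thread-safe helper (DIDLParser in the real case, StringBuilder here)
// kept as one instance per thread instead of one shared instance.
public class PerThreadInstanceSketch {

    static final ThreadLocal<StringBuilder> PARSER =
            ThreadLocal.withInitial(StringBuilder::new);

    // Every get() on the same thread returns the same instance
    static boolean stableWithinThread() {
        return PARSER.get() == PARSER.get();
    }

    // A different thread gets its own, separate instance
    static boolean distinctAcrossThreads() {
        StringBuilder mine = PARSER.get();
        StringBuilder[] other = new StringBuilder[1];
        Thread t = new Thread(() -> other[0] = PARSER.get());
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return mine != other[0];
    }

    public static void main(String[] args) {
        System.out.println(stableWithinThread());    // true
        System.out.println(distinctAcrossThreads()); // true
    }
}
```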

The AbstractContentDirectoryService only implements the mandatory actions and state variables as defined in ContentDirectory:1 for browsing and searching content. If you want to enable editing of metadata, you have to add additional action methods.

Your MediaServer:1 device also has to have a ConnectionManager:1 service.

3.3. A simple ConnectionManager for HTTP-GET

If your transmission protocol is based on GET requests with HTTP - that is, your media player will download or stream the media file from an HTTP server - all you need to provide with your MediaServer:1 is a very simple ConnectionManager:1.

This connection manager doesn't actually manage any connections; in fact, it doesn't have to provide any functionality at all. This is how you can create and bind this simple service with the ConnectionManagerService bundled in Cling Support:

LocalService<ConnectionManagerService> service =
        new AnnotationLocalServiceBinder().read(ConnectionManagerService.class);

service.setManager(
        new DefaultServiceManager<ConnectionManagerService>(
                service,
                ConnectionManagerService.class
        )
);

You can now add this service to your MediaServer:1 device and everything will work.

Many media servers, however, provide at least a list of "source" protocols. This list contains all the (MIME) protocol types your media server might potentially have resources for. A sink (renderer) would obtain this protocol information and decide upfront if any resource from your media server can be played at all, without having to browse the content and look at each resource's type.

First, create a list of protocol information that is supported:

final ProtocolInfos sourceProtocols =
        new ProtocolInfos(
                new ProtocolInfo(
                        Protocol.HTTP_GET,
                        ProtocolInfo.WILDCARD,
                        "audio/mpeg",
                        "DLNA.ORG_PN=MP3;DLNA.ORG_OP=01"
                ),
                new ProtocolInfo(
                        Protocol.HTTP_GET,
                        ProtocolInfo.WILDCARD,
                        "video/mpeg",
                        "DLNA.ORG_PN=MPEG1;DLNA.ORG_OP=01;DLNA.ORG_CI=0"
                )
        );
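On the wire, each of these protocol info values is a string of four colon-separated fields: protocol, network, content format, and additional info. The following stand-alone sketch (ProtocolInfoSketch is hypothetical, not Cling's ProtocolInfo class) shows how such a string breaks down:

```java
// Minimal parser for the four-field protocol info format:
// protocol:network:contentFormat:additionalInfo
public class ProtocolInfoSketch {

    final String protocol, network, contentFormat, additionalInfo;

    ProtocolInfoSketch(String s) {
        // additionalInfo may contain ';' separators but no ':', so a
        // limit-4 split cleanly isolates the four fields
        String[] fields = s.split(":", 4);
        if (fields.length != 4)
            throw new IllegalArgumentException("Expected 4 fields: " + s);
        protocol = fields[0];
        network = fields[1];
        contentFormat = fields[2];
        additionalInfo = fields[3];
    }

    public static void main(String[] args) {
        ProtocolInfoSketch p = new ProtocolInfoSketch(
                "http-get:*:audio/mpeg:DLNA.ORG_PN=MP3;DLNA.ORG_OP=01");
        System.out.println(p.protocol);      // http-get
        System.out.println(p.contentFormat); // audio/mpeg
    }
}
```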

You now have to customize the instantiation of the connection manager service, passing the list of protocols as a constructor argument:

service.setManager(
    new DefaultServiceManager<ConnectionManagerService>(service, null) {
        @Override
        protected ConnectionManagerService createServiceInstance() throws Exception {
            return new ConnectionManagerService(sourceProtocols, null);
        }
    }
);

If your transmission protocol is not HTTP but for example RTSP streaming, your connection manager will have to do more work.

3.4. Managing connections between peers

You'd probably agree that the ConnectionManager is unnecessary when the media player pulls the media data with an HTTP GET request on the provided URL. Understand that the UPnP MediaServer device only provides the URL; whether it also serves the file named in the URL is outside the scope of UPnP, although that is a common system architecture.

Then again, when the source of the media data has to push the data to the player, or prepare the connection with the player beforehand, the ConnectionManager service becomes useful. In this situation two connection managers would first negotiate a connection with the PrepareForConnection action; which side initiates this is up to you. Once the media has finished playing, one of the connection managers calls the ConnectionComplete action. A connection has a unique identifier and some associated protocol information; the connection managers handle the connection as peers.

Cling Support provides an AbstractPeeringConnectionManagerService that will do all the heavy lifting for you; all you have to do is implement the creation and closing of connections. Although we are still discussing this in the context of a media server, this peer negotiation of a connection naturally also has to be implemented on the media renderer/player side. The following examples are therefore also relevant for the connection manager of a MediaRenderer.

First, implement how you want to manage the connection on both ends of the connection (this is just one side):

public class PeeringConnectionManager extends AbstractPeeringConnectionManagerService {

    PeeringConnectionManager(ProtocolInfos sourceProtocolInfo,
                             ProtocolInfos sinkProtocolInfo) {
        super(sourceProtocolInfo, sinkProtocolInfo);
    }

    @Override
    protected ConnectionInfo createConnection(int connectionID,
                                              int peerConnectionId,
                                              ServiceReference peerConnectionManager,
                                              ConnectionInfo.Direction direction,
                                              ProtocolInfo protocolInfo)
            throws ActionException {

        // Create the connection on "this" side with the given ID now...
        ConnectionInfo con = new ConnectionInfo(
                connectionID,
                123, // Logical Rendering Control service ID
                456, // Logical AV Transport service ID
                protocolInfo,
                peerConnectionManager,
                peerConnectionId,
                direction,
                ConnectionInfo.Status.OK
        );

        return con;
    }

    @Override
    protected void closeConnection(ConnectionInfo connectionInfo) {
        // Close the connection
    }

    @Override
    protected void peerFailure(ActionInvocation invocation,
                               UpnpResponse operation,
                               String defaultFailureMessage) {
        System.err.println("Error managing connection with peer: " + defaultFailureMessage);
    }
}

In the createConnection() method you have to provide the identifiers of your Rendering Control and A/V Transport logical services, responsible for the created connection. The connection ID has already been stored for you, so all you have to do is return the connection information with these identifiers.

The closeConnection() method is the counterpart, here you would tear down your logical services for this connection, or do whatever cleanup is necessary.

The peerFailure() method is not related to the two previous methods. It is only called on the connection manager that invokes the actions, not on the receiving side.

Let's create a connection between two connection manager peers. First, create the service acting as the source (let's also assume that this is the media server representing the source of the media data):

PeeringConnectionManager peerOne =
    new PeeringConnectionManager(
            new ProtocolInfos("http-get:*:video/mpeg:*,http-get:*:audio/mpeg:*"),
            null
    );
LocalService<PeeringConnectionManager> peerOneService = createService(peerOne);

You can see that it provides media metadata with several protocols. The sink (or media renderer) is the peer connection manager:

PeeringConnectionManager peerTwo =
    new PeeringConnectionManager(
            null,
            new ProtocolInfos("http-get:*:video/mpeg:*")
    );
LocalService<PeeringConnectionManager> peerTwoService = createService(peerTwo);

It plays only one particular protocol.
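A connection only makes sense when the source offers at least one protocol the sink can render. The following self-contained sketch (ProtocolMatchSketch is hypothetical; Cling's real matching logic is richer) naively compares the content-format field of source and sink protocol info strings:

```java
import java.util.*;

// Naive protocol matching: a source protocol "matches" a sink protocol when
// their content-format fields (third colon-separated field) are equal.
public class ProtocolMatchSketch {

    static String contentFormat(String protocolInfo) {
        return protocolInfo.split(":", 4)[2];
    }

    // Collect every source protocol the sink could render
    static List<String> matches(List<String> source, List<String> sink) {
        List<String> result = new ArrayList<>();
        for (String s : source)
            for (String k : sink)
                if (contentFormat(s).equals(contentFormat(k)))
                    result.add(s);
        return result;
    }

    public static void main(String[] args) {
        List<String> source = List.of("http-get:*:video/mpeg:*",
                                      "http-get:*:audio/mpeg:*");
        List<String> sink = List.of("http-get:*:video/mpeg:*");
        System.out.println(matches(source, sink)); // [http-get:*:video/mpeg:*]
    }
}
```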

The createService() method simply sets the connection manager instance on the service, after reading the service metadata from the (already provided) annotations:

public LocalService<PeeringConnectionManager> createService(final PeeringConnectionManager peer) {

    LocalService<PeeringConnectionManager> service =
            new AnnotationLocalServiceBinder().read(
                    AbstractPeeringConnectionManagerService.class
            );

    service.setManager(
            new DefaultServiceManager<PeeringConnectionManager>(service, null) {
                @Override
                protected PeeringConnectionManager createServiceInstance() throws Exception {
                    return peer;
                }
            }
    );
    return service;
}

Now one of the peers has to initiate the connection. It has to create a connection identifier, store this identifier ("managing" the connection), and call the PrepareForConnection action of the other peer. All of this is provided and encapsulated in the createConnectionWithPeer() method:

int peerOneConnectionID = peerOne.createConnectionWithPeer(
    peerOneService.getReference(),
    controlPoint,
    peerTwoService,
    new ProtocolInfo("http-get:*:video/mpeg:*"),
    ConnectionInfo.Direction.Input
);

if (peerOneConnectionID == -1) {
    // Connection establishment failed, the peerFailure()
    // method has been called already. It's up to you
    // how you'd like to continue at this point.
}
        
int peerTwoConnectionID =
        peerOne.getCurrentConnectionInfo(peerOneConnectionID).getPeerConnectionID();

int peerTwoAVTransportID =
        peerOne.getCurrentConnectionInfo(peerOneConnectionID).getAvTransportID();

You have to provide a reference to the local service, a ControlPoint to execute the action, and the protocol information you want to use for this connection. The direction (Input in this case) is how the remote peer should handle the data transmitted on this connection (again, we assume the peer is the data sink). The method returns the identifier of the new connection. You can use this identifier to obtain more information about the connection, for example the identifier of the connection assigned by the other peer, or the logical service identifier for the AV Transport service, also assigned by the remote peer.

When you are done with the connection, close it with the peer:

peerOne.closeConnectionWithPeer(
        controlPoint,
        peerTwoService,
        peerOneConnectionID
);

The peerFailure() method shown earlier will be called when an invocation of createConnectionWithPeer() or closeConnectionWithPeer() fails.

4. Accessing and providing MediaRenderers

The purpose of the MediaRenderer:1's services is remote control of a media output device. A device that implements a renderer and therefore has the necessary AVTransport:1 service can be controlled just like with a traditional infrared remote. Think about how awkward it is to control video playback on the Playstation3 with the game controller. The MediaRenderer is like a programmable universal remote API, so you could replace your infrared remote control or Playstation controller with an iPad, Android handset, touchscreen panel, laptop computer, or anything else that speaks UPnP.

(Unfortunately, the Playstation3 does not expose any MediaRenderer services. In fact, most MediaRenderer implementations in the wild, in TVs and set-top boxes, are incomplete or incompatible given a to-the-letter interpretation of the specifications. To make matters worse, instead of simplifying the UPnP A/V specifications, more rules were added in the DLNA guidelines, thus making compatibility even more difficult to achieve. A working and correctly behaving MediaRenderer seems to be an exception, not the norm.)

The procedure is simple: First you send the URL of a media resource to the renderer; how you obtained that URL, probably by browsing a media server's resource metadata, is entirely up to you. Then you control the state of the renderer: playing, pausing, stopping, recording, and so on. You can also control other properties of the audio/video content, such as volume and brightness, through the standardized RenderingControl:1 service of a media renderer.

Cling Support provides the org.teleal.cling.support.avtransport.AbstractAVTransportService class, an abstract type with all the UPnP actions and state variable mappings already in place. To implement a MediaRenderer you'd have to create a subclass and implement all methods. You should consider this strategy if you already have an existing media player, and you want to provide a UPnP remote control interface.

Alternatively, if you are writing a new media player, Cling can even provide the state management and transitions for you, so all you have to implement is the actual output of media data.

4.1. Creating a renderer from scratch

Cling Support provides a state machine for managing the current state of your playback engine. This feature simplifies writing a media player with a UPnP renderer control interface. There are several steps involved:

4.1.1. Defining the states of the player

First, define your state machine and what states are supported by your player:

package example.mediarenderer;

import org.teleal.cling.support.avtransport.impl.AVTransportStateMachine;
import org.teleal.common.statemachine.States;

@States({
        MyRendererNoMediaPresent.class,
        MyRendererStopped.class,
        MyRendererPlaying.class
})
interface MyRendererStateMachine extends AVTransportStateMachine {}

This is a very simple player with only three states: the initial state when no media is present, and the Playing and Stopped states. You can also support additional states, such as Paused and Recording, but we want to keep this example as simple as possible. (Also compare the "Theory of Operation" chapter and state chart in the AVTransport:1 specification document, section 2.5.)
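Stripped of all Cling classes, the three states and their transitions boil down to something like the following plain-Java sketch (TransportStateSketch is purely illustrative, not part of Cling):

```java
// The three-state transport as a bare state machine: setting a URI moves to
// STOPPED from any state, play/stop toggle between STOPPED and PLAYING.
public class TransportStateSketch {

    enum State { NO_MEDIA_PRESENT, STOPPED, PLAYING }

    State state = State.NO_MEDIA_PRESENT;

    void setTransportURI(String uri) {
        // As in the example states below: a new URI always lands in STOPPED
        state = State.STOPPED;
    }

    void play() {
        // Playing is only reachable once media has been set
        if (state == State.STOPPED) state = State.PLAYING;
    }

    void stop() {
        if (state == State.PLAYING) state = State.STOPPED;
    }

    public static void main(String[] args) {
        TransportStateSketch t = new TransportStateSketch();
        t.setTransportURI("http://10.0.0.1/files/101.mp3");
        t.play();
        System.out.println(t.state); // PLAYING
    }
}
```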

Next, implement the states and the actions that trigger a transition from one state to the other.

The initial state has only one possible transition and an action that triggers this transition:

public class MyRendererNoMediaPresent extends NoMediaPresent {

    public MyRendererNoMediaPresent(AVTransport transport) {
        super(transport);
    }

    @Override
    public Class<? extends AbstractState> setTransportURI(URI uri, String metaData) {

        getTransport().setMediaInfo(
                new MediaInfo(uri.toString(), metaData)
        );

        // If you can, you should find and set the duration of the track here!
        getTransport().setPositionInfo(
                new PositionInfo(1, metaData, uri.toString())
        );

        // It's up to you what "last changes" you want to announce to event listeners
        getTransport().getLastChange().setEventedValue(
                getTransport().getInstanceId(),
                new AVTransportVariable.AVTransportURI(uri),
                new AVTransportVariable.CurrentTrackURI(uri)
        );
        
        return MyRendererStopped.class;
    }
}

When a client sets a new URI for playback, you have to prepare your renderer accordingly. You typically want to change the MediaInfo of your AVTransport to reflect the new "current" track, and you might want to expose information about the track, such as the playback duration. How you do this (e.g. you could actually already retrieve the file behind the URL and analyze it) is up to you.

The LastChange object is how you notify control points about any changes of state; here we tell the control points that there is a new "AVTransportURI" as well as a new "CurrentTrackURI". You can add more variables and their values to the LastChange, depending on what actually changed - note that you should do this within a single call of setEventedValue(...) if you consider several changes to be atomic. (The LastChange will be polled and sent to control points periodically in the background, more about this later.)

The AVTransport will transition to the Stopped state after the URI has been set.

The Stopped state has many possible transitions; from here a control point can decide to play, seek, skip to the next track, and so on. The following example is really not doing much; how you implement these triggers and state transitions is completely dependent on the design of your playback engine - this is only the scaffolding:

public class MyRendererStopped extends Stopped {

    public MyRendererStopped(AVTransport transport) {
        super(transport);
    }

    public void onEntry() {
        super.onEntry();
        // Optional: Stop playing, release resources, etc.
    }

    public void onExit() {
        // Optional: Cleanup etc.
    }

    @Override
    public Class<? extends AbstractState> setTransportURI(URI uri, String metaData) {
        // This operation can be triggered in any state, you should think
        // about how you'd want your player to react. If we are in Stopped
        // state nothing much will happen, except that you have to set
        // the media and position info, just like in MyRendererNoMediaPresent.
        // However, if this would be the MyRendererPlaying state, would you
        // prefer stopping first?
        return MyRendererStopped.class;
    }

    @Override
    public Class<? extends AbstractState> stop() {
        // Same here: if you are stopped already and someone calls STOP, well...
        return MyRendererStopped.class;
    }

    @Override
    public Class<? extends AbstractState> play(String speed) {
        // It's easier to let that class's onEntry() method do the work
        return MyRendererPlaying.class;
    }

    @Override
    public Class<? extends AbstractState> next() {
        return MyRendererStopped.class;
    }

    @Override
    public Class<? extends AbstractState> previous() {
        return MyRendererStopped.class;
    }

    @Override
    public Class<? extends AbstractState> seek(SeekMode unit, String target) {
        // Implement seeking with the stream in stopped state!
        return MyRendererStopped.class;
    }
}

Each state can have two magic methods: onEntry() and onExit() - they do exactly what their names say. If you override them, don't forget to call the superclass's method!
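The ordering of these hooks during a transition can be sketched outside of Cling with a few lines of plain Java (hypothetical MiniState/MiniMachine names; Cling's state machine generates this wiring for you):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of onEntry()/onExit() ordering during a transition.
abstract class MiniState {
    final List<String> log;
    MiniState(List<String> log) { this.log = log; }
    void onEntry() { log.add("enter " + getClass().getSimpleName()); }
    void onExit()  { log.add("exit " + getClass().getSimpleName()); }
}

class StoppedState extends MiniState {
    StoppedState(List<String> log) { super(log); }
}

class PlayingState extends MiniState {
    PlayingState(List<String> log) { super(log); }
}

public class MiniMachine {
    public static void main(String[] args) {
        List<String> log = new ArrayList<String>();
        MiniState current = new StoppedState(log);
        current.onEntry();

        // Transition: exit the old state first, then enter the new one.
        current.onExit();
        current = new PlayingState(log);
        current.onEntry();

        System.out.println(log);
    }
}
```

Running this prints the hook invocations in transition order: enter StoppedState, exit StoppedState, enter PlayingState.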

Usually you'd start playback when the onEntry() method of the Playing state is called:

public class MyRendererPlaying extends Playing {

    public MyRendererPlaying(AVTransport transport) {
        super(transport);
    }

    @Override
    public void onEntry() {
        super.onEntry();
        // Start playing now!
    }

    @Override
    public Class<? extends AbstractState> setTransportURI(URI uri, String metaData) {
        // Your choice of action here, and what the next state is going to be!
        return MyRendererStopped.class;
    }

    @Override
    public Class<? extends AbstractState> stop() {
        // Stop playing!
        return MyRendererStopped.class;
    }
}

So far there wasn't much UPnP involved in writing your player - Cling just provided a state machine for you and a way to signal state changes to clients through the LastChange object.

4.1.2. Registering the AVTransportService

Your next step is wiring the state machine into the UPnP service, so you can add the service to a device and finally the Cling registry. First, bind the service and define how the service manager will obtain an instance of your player:

LocalService<AVTransportService> service =
        new AnnotationLocalServiceBinder().read(AVTransportService.class);

service.setManager(
        new DefaultServiceManager<AVTransportService>(service, null) {
            @Override
            protected AVTransportService createServiceInstance() throws Exception {
                return new AVTransportService(
                        MyRendererStateMachine.class,   // All states
                        MyRendererNoMediaPresent.class  // Initial state
                );
            }
        }
);

The constructor takes two classes, one is your state machine definition, the other the initial state of the machine after it has been created.

That's it - you are ready to add this service to a MediaRenderer:1 device and control points will see it and be able to call actions.

However, there is one more detail you have to consider: propagation of LastChange events. Whenever any player state or transition adds a "change" to LastChange, this data is accumulated. It will not be sent to GENA subscribers immediately or automatically! It's up to you how and when you want to flush all accumulated changes to control points. A common approach would be a background thread that executes this operation every second (or even more frequently):

AVTransportService avTransportService = service.getManager().getImplementation();
avTransportService.fireLastChange();
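Such a background flusher can be sketched with a ScheduledExecutorService. To keep the snippet self-contained and runnable, a CountDownLatch stands in for the real service; in a renderer the task body would call avTransportService.fireLastChange() and the period would be around one second:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LastChangeFlusher {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the real flush; in a renderer the task body
        // would be: avTransportService.fireLastChange();
        final CountDownLatch flushes = new CountDownLatch(3);
        Runnable flushTask = new Runnable() {
            public void run() {
                flushes.countDown();
            }
        };

        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Short period for the demo; a real renderer would use ~1 second.
        scheduler.scheduleWithFixedDelay(flushTask, 0, 50, TimeUnit.MILLISECONDS);

        // Wait until the task has run three times, then shut down.
        flushes.await(5, TimeUnit.SECONDS);
        scheduler.shutdownNow();
        System.out.println("flushed 3 times");
    }
}
```

scheduleWithFixedDelay() is a reasonable choice here because it measures the delay from the end of one flush to the start of the next, so a slow flush never causes overlapping executions.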

Finally, note that the AVTransport:1 specification also defines "logical" player instances. For example, a renderer that can play two URIs simultaneously would have two AVTransport instances, each with its own identifier. The reserved identifier "0" is the default for a renderer that only supports playback of a single URI at a time. In Cling, each logical AVTransport instance is represented by one instance of a state machine (with all its states) associated with one instance of the AVTransport type. None of these objects are ever shared, and they are not thread-safe. Read the documentation and code of the AVTransportService class for more information on this feature - by default it supports only a single transport instance with ID "0", you have to override the findInstance() methods to create and support several parallel playback instances.
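The relationship between instance IDs and state machines can be pictured with a small, self-contained model (hypothetical names, not the Cling API): each ID maps to its own unshared state object, created lazily on first lookup.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual model: each logical AVTransport instance ID owns its own,
// unshared state; looking up an unknown ID creates a fresh instance.
public class InstanceRegistry {

    static class TransportInstance {
        String state = "NO_MEDIA_PRESENT";
    }

    private final Map<Long, TransportInstance> instances =
            new HashMap<Long, TransportInstance>();

    TransportInstance findInstance(long id) {
        TransportInstance instance = instances.get(id);
        if (instance == null) {
            instance = new TransportInstance();
            instances.put(id, instance);
        }
        return instance;
    }

    public static void main(String[] args) {
        InstanceRegistry registry = new InstanceRegistry();

        // Changing instance "0" leaves instance "1" untouched.
        registry.findInstance(0).state = "PLAYING";
        System.out.println(registry.findInstance(0).state + " / "
                + registry.findInstance(1).state);
    }
}
```

This mirrors why the objects need no sharing: every logical playback slot carries its own complete state, so two parallel transports never interfere.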

4.2. Controlling a renderer

Cling Support provides several action callbacks that simplify creating a control point for the AVTransport service. This is the client side of your player, the remote control.

This is how you set a URI for playback:

ActionCallback setAVTransportURIAction =
        new SetAVTransportURI(service, "http://10.0.0.1/file.mp3", "NO METADATA") {
            @Override
            public void failure(ActionInvocation invocation, UpnpResponse operation, String defaultMsg) {
                // Something was wrong
            }
        };

This is how you actually start playback:

ActionCallback playAction =
        new Play(service) {
            @Override
            public void failure(ActionInvocation invocation, UpnpResponse operation, String defaultMsg) {
                // Something was wrong
            }
        };

Explore the package org.teleal.cling.support.avtransport.callback for more options.

Your control point can also subscribe with the service and listen for LastChange events. Cling provides a parser so you get the same types and classes on the control point as are available on the server - the same classes are used for sending and receiving the event data. When you receive the "last change" string in your SubscriptionCallback, you can transform it. For example, this event could have been sent by the service after the player transitioned from the NoMediaPresent to the Stopped state:

LastChange lastChange = new LastChange(
        new AVTransportLastChangeParser(),
        lastChangeString
);
assertEquals(
        lastChange.getEventedValue(
                0, // Instance ID!
                AVTransportVariable.AVTransportURI.class
        ).getValue(),
        URI.create("http://10.0.0.1/file.mp3")
);
assertEquals(
        lastChange.getEventedValue(
                0,
                AVTransportVariable.CurrentTrackURI.class
        ).getValue(),
        URI.create("http://10.0.0.1/file.mp3")
);
assertEquals(
        lastChange.getEventedValue(
                0,
                AVTransportVariable.TransportState.class
        ).getValue(),
        TransportState.STOPPED
);