Wednesday, 27 August 2014

A Change of Scenery

This is a short one, but just over two weeks ago I started a new position at The Creative Assembly as a gameplay programmer.

Many things are new to me in this job. First off, it's my first time working for a triple-A studio; all of my experience until now has been in the indie scene - and at one company at that. Working with a large codebase is also something I'm not used to: even when I didn't have a clean slate in the past, the projects were still small, with only a few programmers at most. I started my career as a professional game developer using Flash of all things, and spent the majority of the last five years making Unity games. I've always worked on C++ projects in my spare time, but now, for the first time, I'm doing it for a living. Though I find the idea slightly intimidating, I'm looking forward to really sharpening my C++ skills with a trial by fire.

I've got a busy time ahead as I try to get up to speed with all of this, and I hope I'm cut out for it. Above all though, I'm excited.

Wednesday, 11 June 2014

Integrating Mono.Cecil with Unity

Have you seen JB Evain's talk at Unite? If you haven't, then you probably can't - it has been inexplicably taken down. The talk is mostly about using a library called Mono.Cecil to alter a .NET assembly, allowing for both injecting and removing code. This gives us the ability to create C# post-processors which operate on the compiled assembly rather than the source files. I've been writing some pretty repetitive code of late, so I figured I'd try using this approach to simplify things. There is a problem though - there isn't an "on assembly built" event in Unity that we can hook into.

Options:
1) Compile our code to a managed DLL, and set up Visual Studio to run the assembly post processor on it before placing it in the output directory.

2) Hack it up.

One of the things I really hate is a shitty workflow, and (at least at the moment) I find using managed DLLs in Unity to be conducive to exactly that. For example, if I double click on a message in the console, it won't open the file in my IDE like it does when I'm letting Unity compile everything for me. In addition, I'd like to have the option of distributing what I'm working on as an Asset Store editor extension, and requiring users to compile their code to a managed DLL and import it into their project is pretty much out of the question if you ask me. So, I opted for option #2.

The first step is to download ILSpy. This great tool allows you to inspect .NET assemblies, and even view the (reconstructed) source code. You can use it to have a look at the managed Unity DLLs (found in path\to\unity\Editor\Data\Managed) for undocumented classes and functions.

The next thing is to download and compile Mono.Cecil. I compiled it for .NET 2.0, as when I opened UnityEngine.dll in ILSpy it reported that it used the 2.0 runtime, so I figured that would be best.

For the purposes of this blog post, I'll just create a simple post processor that adds logging in and out of functions which are marked with a special attribute. A tool like this could be used just for debug builds, but stripped out of release. In his talk, JB Evain showed some interesting examples like inserting sanity checks on function parameters, amongst other things.

So, here is the code for my attribute:
using System;

public class LogAttribute : Attribute
{
}

And here's a MonoBehaviour which uses it; I'll attach this to a GameObject in my scene.
using UnityEngine;
using System.Collections;

public class Test : MonoBehaviour 
{
    private void Start()
    {
        this.LogTest();
    }

    [Log]
    private void LogTest()
    {
        Debug.Log( "Here's some logic" );
    }
}

I want my post processor to insert a logging message at the start and end of any function which has the [Log] attribute on it. Ideally, I want this editor tool to run after compilation of Assembly-CSharp.dll (or any other Unity-compiled DLL). As I mentioned above, there is no official "on compilation completed" event that can be listened to, but there is (at least currently) a way to detect this. As outlined here, the [InitializeOnLoad] attribute allows us to run code when the editor launches; what isn't clearly documented is that this code will also run whenever Unity reloads the assemblies after compiling them.

The only remaining problem is that once we have post-processed the assemblies, any that have changed will be detected by Unity, and all of the assemblies will be reloaded. This means we can end up in a loop if we're not careful, so the post-processor needs a way of knowing whether an assembly has already been processed. In this example I'll do that by removing the [Log] attribute from each method as it is altered; in some more complex cases I've instead added a [HasBeenProcessed] attribute to the assembly.
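As a sketch of that second approach - this is not the code from my repo, and HasBeenProcessedAttribute is a name I'm inventing here - stamping and checking an assembly with Mono.Cecil might look something like this:

```csharp
using System;
using System.Linq;
using Mono.Cecil;

// Hypothetical marker attribute - define it somewhere your game code can reference
public class HasBeenProcessedAttribute : Attribute
{
}

public static class ProcessedCheck
{
    // Returns true if this assembly has already been stamped by the post-processor
    public static bool AlreadyProcessed( AssemblyDefinition assemblyDefinition )
    {
        return assemblyDefinition.CustomAttributes
            .Any( a => a.AttributeType.FullName == "HasBeenProcessedAttribute" );
    }

    // Stamp the assembly so the next reload skips it
    public static void MarkProcessed( AssemblyDefinition assemblyDefinition )
    {
        ModuleDefinition module = assemblyDefinition.MainModule;
        MethodReference ctor = module.Import(
            typeof( HasBeenProcessedAttribute ).GetConstructor( Type.EmptyTypes ) );
        assemblyDefinition.CustomAttributes.Add( new CustomAttribute( ctor ) );
    }
}
```

Check AlreadyProcessed() at the top of the processing function, and call MarkProcessed() just before writing the assembly back out.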

The post processor will enumerate all of the loaded assemblies, and try to process each one.
[InitializeOnLoad]
public static class AssemblyPostProcessor
{
    static AssemblyPostProcessor()
    {
        try
        {
            // Lock assemblies while they may be altered
            EditorApplication.LockReloadAssemblies();

            foreach( System.Reflection.Assembly assembly in AppDomain.CurrentDomain.GetAssemblies() )
            {
                // Only process assemblies which are in the project
                if( assembly.Location.Replace( '\\', '/' ).StartsWith( Application.dataPath.Substring( 0, Application.dataPath.Length - 7 ) ) )
                {
                    AssemblyDefinition assemblyDefinition = AssemblyDefinition.ReadAssembly( assembly.Location );
                    AssemblyPostProcessor.PostProcessAssembly( assemblyDefinition );

                    // Write the modified assembly back to disk
                    assemblyDefinition.Write( assembly.Location );
                }
            }
   
            // Unlock now that we're done
            EditorApplication.UnlockReloadAssemblies();
        }
        catch( Exception e )
        {
            Debug.LogWarning( e );
        }
    }
}

Now we need to find any functions with the [Log] attribute on them, process them, and remove the attribute.
private static void PostProcessAssembly( AssemblyDefinition assemblyDefinition )
{
    foreach( ModuleDefinition moduleDefinition in assemblyDefinition.Modules )
    {
        foreach( TypeDefinition typeDefinition in moduleDefinition.Types )
        {
            foreach( MethodDefinition methodDefinition in typeDefinition.Methods )
            {
                CustomAttribute logAttribute = null;

                foreach( CustomAttribute customAttribute in methodDefinition.CustomAttributes )
                {
                    if( customAttribute.AttributeType.FullName == "LogAttribute" )
                    {
                        // Process method here...

                        logAttribute = customAttribute;
                        break;
                    }
                }

                // Remove the attribute so it won't be processed again
                if( logAttribute != null )
                {
                    methodDefinition.CustomAttributes.Remove( logAttribute );
                }
            }
        }
    }
}

As for the code to process the method - you'll need to get familiar with IL to do this. I did it with a combination of having learned assembly in the past, and viewing decompiled assemblies in ILSpy. Test.LogTest already makes a call to Debug.Log, so let's open up that function in ILSpy and view the IL. Find "Assembly-CSharp.dll" in Library\ScriptAssemblies in the project folder, drag it into ILSpy, and have a look at the IL for Test.LogTest.

This will look quite straightforward to anyone familiar with assembly. The ldstr command is short for "load string", and it pushes a string onto the stack. When a function is called, the parameters are pushed onto the stack one by one beforehand. If you look at a non-static function being called, you'll notice that the "this" parameter is pushed before any of the parameters appearing within the brackets; since Debug.Log is a static function, that doesn't happen here. After the ldstr command is the call command, which simply calls the function. Lastly, the ret command is the IL equivalent of a "return" statement. Though return statements can be omitted from functions declared void, the compiler will insert a ret command at the end for you.
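Putting that together, the body of LogTest comes out looking roughly like this (reconstructed from the description above rather than copied from ILSpy, so the exact formatting and metadata tokens may differ):

```
ldstr "Here's some logic"
call void [UnityEngine]UnityEngine.Debug::Log(object)
ret
```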

What needs to be done is to insert a ldstr and a call at the start of the function, and again just before the ret command.
MethodReference logMethodReference = moduleDefinition.Import( typeof( Debug ).GetMethod( "Log", new Type[] { typeof( object ) } ) );

ILProcessor ilProcessor = methodDefinition.Body.GetILProcessor();

Instruction first = methodDefinition.Body.Instructions[0];
ilProcessor.InsertBefore( first, Instruction.Create( OpCodes.Ldstr, "Enter " + typeDefinition.FullName + "." + methodDefinition.Name ) );
ilProcessor.InsertBefore( first, Instruction.Create( OpCodes.Call, logMethodReference ) );

Instruction last = methodDefinition.Body.Instructions[methodDefinition.Body.Instructions.Count - 1];
ilProcessor.InsertBefore( last, Instruction.Create( OpCodes.Ldstr, "Exit " + typeDefinition.FullName + "." + methodDefinition.Name ) );
ilProcessor.InsertBefore( last, Instruction.Create( OpCodes.Call, logMethodReference ) );

So let's have a look at what the processed version of the function looks like.

And the C# version.

So it worked! This contrived example may not be all that useful, but there are some seriously cool applications for metaprogramming. In an upcoming post I hope to waffle about how I'm using this for network programming to improve workflow.

The code I've given in the snippets so far has been somewhat simplified. It should work for this example, but it's not robust enough to deal with importing methods and types, and resolving type references. For the complete code, have a look at the source on GitHub.

The only thing left for me to point out is that this does not currently process the assemblies when a project is built. For this you may need to create a special "Build Project" button which includes it, but that shouldn't be difficult to do.

So that's it - Unity + Mono.Cecil can achieve some great things. Let me know what interesting stuff you're doing with this.

Saturday, 31 May 2014

Unity Critic



I've been using Unity for about five years, and every day in my job for the last three. In that time I've become pretty comfortable with the engine. There are very few aspects which I'm not familiar with, and it's my go-to choice for game creation. 

I want to firstly say - I love Unity. I came from a background of using UDK, where in order to make a decent game you have to invest quite a bit of time in reading the huge class hierarchy, and understanding all of the inheritance intricacies and couplings which affect the code. Unity, by contrast, is a big blank slate, and far more welcoming when you're getting started. Its component-based pattern makes it great for throwing together quick prototypes, and as much as this has led to a lot of bad code - ultimately it's our responsibility as programmers to sort out what we fart out into Visual Studio at three in the morning. 

Regardless of what you think of Unity, it has certainly shaken up the game engine space. As the Unreal Engine and CryEngine have lost large numbers of bedroom developers, they've been creating indie-friendly licensing agreements and pricing models (a particularly exciting recent development being source code access to UE4 for $19 per month). Would those things have happened without this new indie-friendly kid on the block? I'm not sure they would. With that said, I still think it's important that those of us in the Unity community do our bit to hold Unity's feet to the fire when necessary, and that's what this post is all about - what aren't we happy about with Unity?



Native SDK


Most users of Unity who came from a C++ background have, at some point, wished for a native SDK of some sort. Unity's stance on this has long been "don't be ridiculous, C# code runs at 50% the speed of native code!", to which I would say "yes, and that's half as much stuff I can do in a script-heavy game".

With Unity Pro you can write native DLLs to do heavy lifting, but the issue with this is that you have to pass whatever your code computes across the managed/native border. It rather depends on your situation whether that turns out like a stroll through Calais on a Saturday afternoon, or like being led through the Sonora desert to your death.
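For anyone unfamiliar, those native DLLs are reached through P/Invoke. The library and function names below are made up for illustration; the point is that every call marshals its arguments across that border:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeHeavyLifting : MonoBehaviour
{
    // "HeavyLifting" is a hypothetical native DLL dropped into Assets/Plugins
    [DllImport( "HeavyLifting" )]
    private static extern void ProcessVertices( float[] vertices, int count );

    private void Update()
    {
        float[] vertices = new float[3000];

        // Every call here marshals the array across the managed/native border
        ProcessVertices( vertices, vertices.Length );
    }
}
```

Whether that marshalling cost is trivial or crippling depends entirely on how much data crosses the border, and how often.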

That said, in a few other engines I've used, native code is often used in order to access parts of the engine which cannot be reached through a scripting language. This is very rarely a problem with Unity - C# can reach most things. I've only ever been annoyed by the lack of low-level access to the physics engine, but I imagine you could create a wrapper for Bullet Physics if you really needed it.

Unity GUI

This is the most obvious thing to mention - Unity GUI is bad. It's slow, it's horrible to use, and almost everyone uses NGUI or some other third-party solution. In fairness, these third-party solutions are in many cases extremely well integrated with the editor. This still presents a problem though - I can't use NGUI in an open source project, or at least if I do, I have to use the super-outdated (and much clunkier) free version. That's all I can really say about this, other than that I cannot believe the built-in GUI system can still be this bad after this many years.

Nested Prefabs

This one is simple - I have a prefab of a building, and the building contains several doors and windows which are themselves also prefabs. If I update one window or door, I'd quite like all the nested instances of that prefab to update too. Currently that isn't possible in Unity without some kind of editor extension. Unity have said in the past that they don't want to do this because it would be confusing for the user to understand when a nested object is part of the prefab versus a nested instance of another prefab. In reality this would be trivial to communicate: put some kind of icon next to a nested instance in the hierarchy - job done. Unity Feedback says work on this feature has started, but they said that about Unity GUI too, and that has taken years and still hasn't shipped.

Serialisation

This isn't talked about as often as it should be in my view, but Unity's serialisation sucks! The prime example: when you have two references which point to the same object, they will be deserialised into two different objects containing identical data. Your only option is to have the class inherit from ScriptableObject, which means you can't create instances with the new operator - and that can be a real issue if you're using some kind of third-party library.

The most irritating part of this is that it's not actually difficult to do. I created a simple save game system for a game, where the file contains an array of serialised objects in JSON format. When each object is serialised, I grab the values of its fields with reflection; if the field type is a primitive, the value is written straight into the file. If, on the other hand, the field type is some kind of class, then the object's index in the array is looked up (adding it if it isn't already there) and used as the value. You might be thinking that you'd lose some human readability with this method - you'd be right. But I'd also challenge you to read any Unity .scene file of reasonable size and tell me it's human readable, while keeping a straight face.
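To illustrate the indexing idea (this is a fresh sketch written for this post, not the code from my save system, and it only emits a crude line-based format rather than JSON):

```csharp
using System;
using System.Collections.Generic;

// Reference-preserving serialisation sketch: every object gets an index in a list,
// and fields that reference another object store that index instead of a copy.
public static class ReferenceSerializer
{
    public static List<string> Serialize( object root )
    {
        var objects = new List<object> { root };
        var lines = new List<string>();

        for( int i = 0; i < objects.Count; i++ )
        {
            object current = objects[i];
            string line = current.GetType().Name;

            foreach( var field in current.GetType().GetFields() )
            {
                object value = field.GetValue( current );

                if( value != null && field.FieldType.IsClass && field.FieldType != typeof( string ) )
                {
                    // Reference: store the object's index, adding it to the list if needed
                    int index = objects.IndexOf( value );
                    if( index < 0 ) { index = objects.Count; objects.Add( value ); }
                    line += " " + field.Name + "=@" + index;
                }
                else
                {
                    // Primitive (or string): write the value straight out
                    line += " " + field.Name + "=" + value;
                }
            }

            lines.Add( line );
        }

        return lines;
    }
}
```

Two fields pointing at the same object now serialise to the same index, so a matching deserialiser can restore them as one shared instance rather than two copies.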

In addition, it would be nice to be able to use Unity's serialisation system for this, instead of having to create our own save systems. If we could save the current state of the game to essentially a .scene file that we can load up again later, then we wouldn't have to do all the annoying things with asset loading - as it stands, anything that's spawnable and also savable will probably end up in Resources. Why can't we have access to Unity's regular asset database? I know we'd have to mark assets to be included in a build if they were to be spawned dynamically, but what's so wrong with that?


Priorities



My main issue with all of the above is that Unity seem to have their priorities entirely wrong. We see update after update about shiny nonsense like mobile shadows and DirectX 11 support, while the replacement for the awful GUI system is only just being released later this year in 4.6! Making the bells and whistles more important than the fundamental backbone of the engine will only impair Unity's progress towards being a mature engine.

Anywho, this cup of tea next to me went cold about 10 minutes ago, and I feel a severe need for a slice of toast.

Friday, 30 May 2014

Fun With Bitcoin



After not having posted anything for what seems like forever, this will be just a short one. I've been interested in Bitcoin and cryptocurrencies for a couple of years now. Recently I've played around with the blockchain.info API, and decided to make a visualiser in the same vein as BitListen and so on. New transactions create an explosion of fireworks - the bigger the amount transferred, the bigger the explosion.

Click here to view in your Browser.

Windows Download

Mac Download

Linux Download

WebPlayer Download

Source Code on Github

Thursday, 21 November 2013

Review: Sir You're Being Hunted

My friends could probably tell you just how often I whinge that I don't have enough time to play games and finally start getting through my Steam backlog. That is of course a terrible excuse; you'd be quite right if you pointed out that really I just don't make time to play games. I, on the other hand, would say that anyone can win an argument with facts - it's practically cheating. Try winning an argument with bullshit if you want a real challenge.

I've decided to work through my Steam backlog alphabetically - where am I up to? Well, I'm still on the A's. At least I was, and then I got a Kickstarter backer email for Sir, You're Being Hunted. I've backed a few games now, and occasionally early access or beta builds trickle in. Some are interesting, but ultimately too unfinished to hold my attention for long. Some, on the other hand, are made of win.


I fired the game up to be told that Madam (I) was being hunted. I take it the game randomly selects between Sir and Madam when you haven't played before - inclusiveness is always welcome. Well, not welcome in general gaming culture, but I invite it into my home and offer it a cup of tea and a slice of battenberg.

In Sir, You Are Being Hunted you play as an old-timey fellow whose nondescript 'device' has exploded, scattering components across five islands - north, south, east, west, and central. You must wander the landscape in search of the pieces, scavenging whatever you can to survive, all the while being hunted mercilessly by gentleman robots. And with that, I began my adventure, armed only with a pair of binoculars, and a pie - I mean, it's a good pie, but no pie is that good.


I spent the first half an hour just roaming about looking for loot in all the houses I could find, while trying not to get shot. I had a couple of attempts at taking down robots, at first with just sharp stones, and then with a hatchet. I discovered that the stones don't seem to deal damage, and though I never entirely figured out what they were for, I imagine they're for either distracting enemies or attracting attention. Perhaps they're for luring an enemy into a trap? For that sort of thing I usually just resorted to dancing about in plain view. The hatchet approach, on the other hand, also didn't work for me at first - the robots can run away from you about as fast as you can chase them. I managed it a few times later, but there's a knack to it.

The entire thing is procedurally generated, and generated extremely well. The "rural" environment in the game reminds me heavily of growing up in the South-Easterly garden of England, and all of the biomes are decent at generating islands with memorable layouts, and set pieces which you start to recognise (and navigate by) before long.


Fairly soon I started to get the hang of things - I was cautiously creeping across the countryside looking for unguarded buildings, or sitting on a hilltop patiently observing robots patrol a piece of the device, waiting to pounce. I got into a few gun battles, but it's hard to aim properly when you're shitting yourself - and only have 4 bullets. After dying a few times, the dominant strategy that emerged for me was to lure the robots into a trap and then smash them to bits with a hatchet. The advantage of this is that the robots can't shoot back while they're trying to free themselves, and it conserves ammo. As a result, I ended up with a lot of bullets towards the end of the game, but I was pleasantly surprised to find that if I got too trigger happy I would often attract unwanted attention from more hunters in the area.

As you find more pieces of the device, security around the islands starts to intensify, and I found myself having to resort to hit-and-run tactics to stay alive. I was stashing weapons, ammo and supplies in various locations, and drawing every landmark I saw on a paper map I kept next to my keyboard. Eventually I tracked down the last few pieces of the device, and completed my first run-through of the game.


But this was not the end - this was just the beginning. Next I would start permadeath runs: if I died, I would delete my save. Though I have not yet successfully finished one, it intensifies the game. You have to be insanely careful - just running around too much can attract the attention of the hunters. Gone are the days when I would reload after a failed encounter because I'd rinsed all of my ammo. Now I have to retreat, regroup, scavenge for weapons whilst relatively defenceless, and return another day. It becomes less about beating the game, and more about creating another emergent story. Just recently I finished the tale of how I recovered about half of the pieces of the device, and then got cocky during my trap-luring routine. I got ambushed, ran into one of my own traps, and died in a hail of buckshot whilst desperately trying to free myself.


If I can put the wank-hat on for a minute - this is a wonderful game. It's everything I wanted it to be. It's a lovely mish-mash of free roaming, survival, hunting, and being hunted. It's rewarding, it's tense, it's fun. I didn't keep track of the number of times I sat in a bush while a hunting party hungrily roamed past me, I was too busy pissing myself. Sometimes when it's quiet, I still think I can hear robots in the distance, I'm hoping it goes away at some point.

Tuesday, 7 February 2012

Client-side Prediction in Unity

If you're making a multiplayer game in Unity and your networking model includes a fully authoritative server, you might have found movement to be a bit of a stumbling block. Client-side prediction is how the resulting lag tends to be hidden. Glenn Fiedler has an awesome series of articles; this one explains client-side prediction nicely.

To summarise - clients send their input (e.g. key presses) to the server, which computes their updated position in the game world and sends it back to them. To hide the lag, the client runs the movement code locally, and records its input and position each frame. When an update is received from the server, the client looks through the recorded positions and compares the relevant one with the data received from the server. If the two are out of sync by too large a margin, it retrospectively corrects the client by moving them to the correct position, and then runs through all the stored inputs, replaying the user's actions up to the present time. This "rewind and replay" system is fine under certain circumstances, but a big problem in Unity is physics.
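As a rough sketch of the client-side bookkeeping described above (all the types, names, and the error threshold here are invented for illustration - the movement step itself is passed in as a delegate):

```csharp
using System.Collections.Generic;
using UnityEngine;

// One frame of prediction history: what we pressed, and where we thought we'd be.
public struct PredictionFrame
{
    public int tick;
    public Vector3 input;
    public Vector3 position;
}

public class Predictor
{
    private readonly List<PredictionFrame> history = new List<PredictionFrame>();

    public void RecordFrame( int tick, Vector3 input, Vector3 predictedPosition )
    {
        history.Add( new PredictionFrame { tick = tick, input = input, position = predictedPosition } );
    }

    // Called when the server tells us where we actually were at serverTick.
    // simulate( position, input ) runs one step of the local movement code.
    public void OnServerUpdate( int serverTick, Vector3 serverPosition,
        System.Func<Vector3, Vector3, Vector3> simulate )
    {
        int index = history.FindIndex( f => f.tick == serverTick );
        if( index < 0 ) return;

        // Close enough? Just discard the acknowledged history
        if( ( history[index].position - serverPosition ).sqrMagnitude < 0.01f )
        {
            history.RemoveRange( 0, index + 1 );
            return;
        }

        // Otherwise rewind to the server's position...
        Vector3 corrected = serverPosition;

        // ...and replay every input recorded since that tick
        for( int i = index + 1; i < history.Count; i++ )
        {
            corrected = simulate( corrected, history[i].input );
            PredictionFrame frame = history[i];
            frame.position = corrected;
            history[i] = frame;
        }

        history.RemoveRange( 0, index + 1 );
    }
}
```

The catch, as the rest of this post explains, is that replay step: simulate() has to be something you can call on demand, which Unity's built-in physics won't let you do.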

For about two years I've been developing a Unity multiplayer FPS on and off. Naturally, players can run about, jump up and down, collide with objects, and so on. I have no way of triggering the Rigidbody component on the player to simulate a frame, so I can't rewind and replay, can I? Well, I can - if I roll my own basic Rigidbody class:

public class NetRigidbody : MonoBehaviour
{
    private Vector3 velocity;

    public void Simulate( float dt )
    {
        // s = ut + .5at^2
        Vector3 movementDelta = ( this.velocity * dt ) + 0.5f * ( Physics.gravity * dt * dt );
        this.transform.position = this.transform.position + movementDelta;
        this.velocity += Physics.gravity * dt;
    }

    public void FixedUpdate()
    {
        this.Simulate( Time.fixedDeltaTime );
    }
}

Big problem here though - collisions! Currently a GameObject with one of these attached would just clip through everything; it doesn't process any collisions, and won't receive OnCollisionEnter() messages without a standard Rigidbody attached. One way we could approach this is to make gravity zero and attach a standard Rigidbody to process collisions for us. However, this wouldn't work for rewind and replay, as we still have no ability to tell Unity when to simulate the physics.

The Physics class in Unity provides some useful static functions which would allow us to process collisions ourselves, for example Raycast, SphereCast, and CapsuleCast. I'd rather not do this myself, and there's a nice shortcut in the form of CharacterController. This class moves a GameObject by calling the Move( Vector3 motion ) method, and automatically does some collision processing (i.e. stops the CharacterController's capsule from intersecting with other colliders). It also directly calls OnControllerColliderHit() when a collision occurs - that is to say, it calls OnControllerColliderHit() before Move() exits. So the updated NetRigidbody implementation is:

public class NetRigidbody : MonoBehaviour
{
    private CharacterController _characterController;
    public CharacterController characterController
    {
        get
        {
            if( this._characterController == null )
            {
                this._characterController = this.GetComponent<CharacterController>();
            }

            return this._characterController;
        }
    }

    public Vector3 velocity;

    public void Simulate( float dt )
    {
        if( this.characterController == null )
        {
            return;
        }

        Vector3 movementDelta = ( this.velocity * dt ) + 0.5f * ( Physics.gravity * dt * dt );
        this.characterController.Move( movementDelta );

        this.velocity += Physics.gravity * dt;
    }

    public void OnControllerColliderHit( ControllerColliderHit hit )
    {
        // Cancel the velocity component along the collision normal by projecting velocity onto the normal
        this.velocity -= Vector3.Dot( this.velocity, hit.normal ) * hit.normal;
    }

    public virtual void FixedUpdate()
    {
        this.Simulate( Time.fixedDeltaTime );
    }
}

With this we can now effectively achieve rewind and replay in Unity - full client-side prediction! Huzzah! So what are the downsides? For one, using a CharacterController like this means that a GameObject which needs to rewind and replay (i.e. player-controlled entities) will only use a single CapsuleCollider for movement collisions. This is fine for something like Counter-Strike, but what if you want to put in vehicles like Battlefield? I fear that this approach is like building a structure on jelly - it might stay up, but it might all come crashing down.

What alternatives do we have?
1) Instead of using CharacterController, use Physics static functions to properly process collisions for all colliders attached to the GameObject and its children.

2) Just don't rewind and replay. When the server disagrees with the client to a significant degree, stop sending input to the server, and stop predicting movement. Wait for the server to completely catch up and process all of the input we've sent, and once that's happened, resume control of the player. This will result in very noticeable snaps, but should be fine for a game that will only be played on LANs.

3) Take a popular open source physics library such as Bullet, and either port it to .NET (or use an existing port), or Unity Pro users could compile it down to a native .DLL. You'd still need to build a bridge between Unity and that physics library. Keep in mind that Bullet (like many physics libraries) does not simulate physics objects individually, the entire system is stepped forward as a whole. So if you want to rewind and replay, you need to either create a temporary physics world containing just the things you're rewinding and replaying, or you need to rewind and replay absolutely everything.

What did I choose to do? Well, I realised that I just wanted to implement client-side prediction like this because I found it an interesting programming exercise after reading Glenn Fiedler's excellent series of articles. I'd advise others making multiplayer games in Unity to consider semi-authoritative network models. In the case of the game I'm trying to make, it's cooperative, so cheating isn't really a big issue - I could essentially just use the server as a way to connect players, and provide some validation.

But what happens if (like me) you worry that one day you might want to add another game mode where players compete directly with each other? My answer - I downloaded UDK.

Wednesday, 25 January 2012

Designing A LocalConnection Protocol in AS3

It's been quite a while since I've programmed in AS3, but something I found quite useful for communication between SWFs was the LocalConnection class. The SWFs don't need to be on the same page, or even the same browser. I suspect that the BBC iPlayer desktop application (built with AIR) uses this under the hood to trigger a download to start when you click the download button on the iPlayer website.

LocalConnection itself is fairly crude: it can listen for data coming from any other LocalConnection which knows its connection name, and it can send data to any other LocalConnection by name. There's no mechanism built in for knowing where received data has come from; that has to be implemented yourself. A quick code example:

var lc:LocalConnection = new LocalConnection();
lc.connect( "myConn" );
lc.send( "otherConn", "method", ...args );

This is a bit bare on its own, as we often want bi-directional communication, or at least to make sure that the recipient is listening before we attempt to send them data. LocalConnections behave in some ways like UDP sockets, and a nice reusable module for projects which share data in this way could benefit from something a bit more like a TCP socket. I did something like this for dPets when I was working on that; this isn't the same code, just inspired by the same idea.

So here's the basic API I would set out to use:

// Listening for incoming connections
var listenSocket:LocalSocket = new LocalSocket();
listenSocket.listen( "myConn", onClientConnected );

function onClientConnected( newSocket:LocalSocket ):void
{
     // Do something with the socket
}

// Connecting to a listen socket
var sock:LocalSocket = new LocalSocket();
sock.addEventListener( LocalSocket.LOCAL_SOCKET_CONNECTED, onSocketConnected );
sock.connect( "myConn" );

function onSocketConnected( event:Event ):void
{
     // Do something with socket
}

// Sending data on sockets
// In this example sock1 and sock2 are connected to each other
sock1.addHandler( "message", messageHandler );
sock1.addHandler( "product", productHandler );

function messageHandler( message:String ):void
{
     trace( message );
}

function productHandler( a:int, b:int ):void
{
     trace( a * b );
}

sock2.send( "message", "Hello world!" );
sock2.send( "product", 7, 6 );

So with that in mind, here goes the implementation. Just like sockets, you can only have one LocalConnection instance listening on any particular connection name. What needs to happen is that whenever a socket attempts to connect to a listen socket, they perform a handshake in which they exchange unique connection names. The listening socket has to spawn a second socket to receive data, and tell the connecting socket the new connection name to communicate with. The connecting socket likewise needs to generate a connection name, start listening on it, and send that name to the listening socket. This is actually quite similar to how TCP works.

public class LocalSocket
{
    private var _localConnection:LocalConnection;
    private var _connectionName:String;
    private var _destinationConnectionName:String;

    public function LocalSocket() 
    {
        _localConnection = new LocalConnection();
        _localConnection.client = new Object();
    }

    public function connect( connectionName:String ):void
    {
        _connectionName = String( Math.random() );
        _localConnection.client["handshakeAccepted"] = this.handshakeAccepted;

        // Listen on our own unique connection name so we can receive the reply
        _localConnection.connect( _connectionName );
        _localConnection.send( connectionName, "handshakeRequested", _connectionName );
    }

    private function handshakeAccepted( replyConnectionName:String ):void
    {
        _destinationConnectionName = replyConnectionName;
    }

    private var _onClientConnectedCallback:Function;

    public function listen( connectionName:String, onClientConnected:Function ):void
    {
        _onClientConnectedCallback = onClientConnected;
        _localConnection.client["handshakeRequested"] = this.handshakeRequested;
        _localConnection.connect( connectionName );
    }

    private function handshakeRequested( replyConnectionName:String ):void
    {
        var newSocket:LocalSocket = new LocalSocket();
        newSocket._connectionName = String( Math.random() );
        newSocket._destinationConnectionName = replyConnectionName;

        // The spawned socket listens on its own new connection name
        newSocket._localConnection.connect( newSocket._connectionName );

        _localConnection.send( replyConnectionName, "handshakeAccepted", newSocket._connectionName );

        _onClientConnectedCallback( newSocket );
    }
}

Adding in the ability to add handlers and send messages is pretty simple.

public function addHandler( functionName:String, callback:Function ):void
{
    _localConnection.client[functionName] = callback;
}

public function send( functionName:String, ...args ):void
{
    args.unshift( functionName );
    args.unshift( _destinationConnectionName );

    _localConnection.send.apply( _localConnection, args );
    // This has the effect of writing _localConnection.send( _destinationConnectionName, functionName, arg1, arg2, arg3, etc );
}

There are one or two small adjustments in the full source, for example some rudimentary support for a socket continuing to attempt to connect until a timeout is exceeded. I haven't tested the code beyond the basics, so it won't handle erroneous data at all gracefully. There's also no way to close sockets like you would with Berkeley sockets, or to detect if one end of the connection has dropped off. A nice feature to add at a later date would be repeated attempts at delivering a sent message if it fails, again with a timeout.

Full source code and a brief example can be found here.