Thursday, 21 November 2013

Review: Sir, You Are Being Hunted

My friends could probably tell you just how often I whinge that I don't have enough time to play games and finally start getting through my Steam backlog. That is, of course, a terrible excuse - you'd be quite right if you pointed out that really I just don't make time to play games. I, on the other hand, would say that anyone can win an argument with facts - it's practically cheating. Try winning an argument with bullshit if you want a real challenge.

I've decided to work through my Steam backlog alphabetically - where am I up to? Well, I'm still on the A's. At least I was, and then I got a Kickstarter backer email for Sir, You Are Being Hunted. I've backed a few games now, and occasionally early access or beta builds trickle in. Some are interesting, but ultimately too unfinished to hold my attention for long. Some, on the other hand, are made of win.


I fired the game up to be told that Madam (I) was being hunted. I take it that the game randomly selects between Sir and Madam when you haven't played before - inclusiveness is always welcome. Well, not welcome in general gaming culture, but I invite it into my home and offer it a cup of tea and a slice of Battenberg.

In Sir, You Are Being Hunted you play as an old-timey fellow whose nondescript 'device' has exploded, scattering components across five islands - north, south, east, west, and central. You must wander the landscape in search of the pieces, scavenging whatever you can to survive, all the while being hunted mercilessly by gentleman robots. And with that, I began my adventure, armed only with a pair of binoculars and a pie - I mean, it's a good pie, but no pie is that good.


I spent the first half an hour just roaming about looking for loot in all the houses I could find, while trying not to get shot. I had a couple of attempts at taking down robots, at first with just sharp stones, and then with a hatchet. I discovered that the stones don't seem to deal damage, and though I never entirely figured out what they were for, I imagine they're for distracting enemies or attracting attention - perhaps for luring an enemy into a trap? For that sort of thing I usually just resorted to dancing about in plain view. The hatchet approach, on the other hand, also didn't work for me at first - the robots can run away from you about as fast as you can chase them. I managed it a few times later, but there's a knack to it.

The entire thing is procedurally generated, and it's done extremely well. The "rural" environment in the game reminds me heavily of growing up in the South-Easterly garden of England, and all of the biomes are decent at generating islands with memorable layouts and set pieces which you start to recognise (and navigate by) before long.


Fairly soon I started to get the hang of things - I was cautiously creeping across the countryside looking for unguarded buildings, or perhaps sitting on a hilltop patiently observing robots patrol a piece of the device, waiting to pounce. I got into a few gun battles, but it's hard to aim properly when you're shitting yourself - and only have 4 bullets. After dying a few times, the dominant strategy that emerged for me was to lure them into a trap and then smash them to bits with a hatchet. The advantage of this is that the robots can't shoot back at you while they're trying to free themselves, and it conserves ammo. As a result I ended up with a lot of bullets towards the end of the game, but I was pleasantly surprised to find that if I did get too trigger happy I would often attract unwanted attention from other hunters in the area.

As you find more pieces of the device, security around the islands starts to intensify, and I found myself having to resort to hit-and-run tactics to stay alive. I was stashing weapons, ammo and supplies in various locations, and drawing every landmark I saw on a paper map I kept next to my keyboard. Eventually I tracked down the last few pieces of the device, and completed my first run-through of the game.


But this was not the end - this was just the beginning. Next I would start permadeath runs: if I died, I would delete my save. Though I have not yet successfully finished one, it intensifies the game. You have to be insanely careful - just running around too much can attract the attention of the hunters. Gone are the days when I would load up after a failed encounter because I'd rinsed all of my ammo. Now I have to retreat, regroup, scavenge for weapons whilst relatively defenceless, and return another day. It becomes less about beating the game, and more about creating another emergent story. Just recently I finished the tale of how I recovered about half of the pieces of the device, and then got cocky during my trap-luring routine. I got ambushed, ran into one of my own traps, and died in a hail of buckshot whilst desperately trying to free myself.


If I can put the wank-hat on for a minute - this is a wonderful game. It's everything I wanted it to be. It's a lovely mish-mash of free roaming, survival, hunting, and being hunted. It's rewarding, it's tense, it's fun. I didn't keep track of the number of times I sat in a bush while a hunting party hungrily roamed past me - I was too busy pissing myself. Sometimes when it's quiet, I still think I can hear robots in the distance; I'm hoping that goes away at some point.

Saturday, 30 June 2012

No Battle Plan Survives Contact With The Enemy

I have to make a confession - I like the Model-View-Controller (MVC) software design pattern. Quick MVC 101 just in case:
  • The model manages the behaviour and data of the application.
  • The view presents the model to the user.
  • The controller passes user input to the model and view.
  • In the Apple documentation where I learned about MVC, they also push the idea that part of the controller's job is to get the model onscreen - so views shouldn't see their models directly; they should be generic and get their information via the controller.
"But Joe" I hear you cry, "software design patterns are just wankery for university lecturers, and people who don't live in the real world!" Well yes, I fully agree, of course I fully agree, I just put those words in your mouth. But it's a nice idea - keeping views decoupled from models so that views are reuseable, and models don't need to worry about how they're represented. Decoupling the controllers from the models is equally prudent, as this means you can change the entire method of user input, without having to change the model at all. That said, I still have some reservations.
   Working with a particular design pattern really reminds me of what it's like to work on a code library, or a game engine. It's all neat and tidy, essentially because it's never seen the light of day. Once they're exposed to those pesky real-world scenarios, that's when the code starts to get messy, and MVC is no exception.
   I was trying to figure out how I might go about using MVC for a game, and one of the things I really like about it is that with the decoupled views, I could for example take a 2D game and make it 3D by writing some new views. But I found myself wondering, what if I wanted to unplug a view, and plug a different one in on the fly? This would possibly require having the controller support multiple types of view - but this is ugly to maintain; you end up with lots of:

if( this->view->GetType() == "View2D" )
{
   // Do 2D view stuff
}
else if( this->view->GetType() == "View3D" )
{
   // Do 3D view stuff
}

Or you put all of the user input code into one class, and then create 2 subclasses - one for the 2D view, one for the 3D view. But then what if you have a controller class hierarchy like:
  • Controller
    • PlayerController
      • PlayerControllerKeyboardAndMouse
      • PlayerControllerXboxGamepad
      • PlayerControllerTouchscreen
So I'm going to create two versions of each of the last three classes to support both view types am I? Am I bollocks. OK, so some might be thinking "who on earth is going to go from 2D to 3D on the fly", but the ability to switch views at runtime like that has proper real-world applications. Take, for example, the cooperative AC-130 mission in Call of Duty: Modern Warfare 2: both players are most likely looking at exactly the same data in terms of models, but one is seeing higher detail meshes and textures from a first-person perspective, and the other is looking down from the sky at a lower-poly, black and white representation of the same thing. Though it doesn't happen in that particular example, it's not unreasonable to imagine a game scenario where you'd want to switch perspectives on the fly.

So, did I throw MVC out the window? Well no, not entirely - I just changed it to better fit my needs. This is the thing I want to get at. Well... two things really:
  • Thing the first - never set out with a pattern in mind before thinking about the needs of the particular project; just because you have a favourite pattern doesn't mean it will be right for everything you work on.
  • Thing the second - a pattern is not a one-size-fits-all solution. Don't feel you have to stick to the almighty rules of the pattern even if they're a pain! A pattern is a starting point, it's something that worked for somebody else, somewhere else, making something else. The chances are that your particular project will have some specific needs that will need to be factored into the overall architecture of your code.
Here's the thing I wish I'd learned at university - if your nice shiny plan for your software doesn't work out, it doesn't mean you've failed. Game development is often a very iterative process; it doesn't matter how long you mess around with flow-charts and diagrams, "No battle plan survives contact with the enemy". So just get on and code, go back to the drawing board as and when you need to, and retrofit existing code to reflect architecture changes.

To be honest, that's where this post should end - from here on it's mostly rambling. For anyone bored enough to read on, or wondering what changes to MVC I felt necessary, here they are (with a rough code sketch just after the list):
  • Controllers don't directly change the model, they just provide abstract influence - e.g. the model might call this->controller->GetDesiredMovementSpeed()
  • Views are specialised, so the controller doesn't need to worry about setting them up
  • Views have read-only access to the model they are representing
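
Here's that sketch - C# for brevity, and all the names are made up rather than lifted from my actual code:

using System.Numerics;

// The controller never touches the model directly - it just exposes abstract "influence".
public abstract class Controller
{
    public abstract Vector3 GetDesiredMovementDirection();
    public abstract float GetDesiredMovementSpeed();
}

// The model pulls influence from its controller and decides how to apply it.
public class PlayerModel
{
    private readonly Controller controller;

    public Vector3 Position { get; private set; }

    public PlayerModel( Controller controller ) { this.controller = controller; }

    public void Update( float dt )
    {
        Position += controller.GetDesiredMovementDirection()
                  * controller.GetDesiredMovementSpeed() * dt;
    }
}

// A specialised view that sets itself up, with read-only access to its model.
public class PlayerView
{
    private readonly PlayerModel model; // only ever read from

    public PlayerView( PlayerModel model ) { this.model = model; }

    public void Draw()
    {
        // Stand-in for real rendering
        System.Console.WriteLine( "player at " + model.Position );
    }
}

The key point is the direction of the arrows: the model pulls influence from the controller, and the view pulls state from the model, so nothing ever writes to something it doesn't own.
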
This seems to be working well for me so far. It's a really smart solution, but I didn't come up with it - I read about this particular flavour of MVC in a great article by Jorrit Rouwe of Guerrilla Games. I had been trying to imagine how MVC might be used for a game, and I just couldn't get my head round it. If I'm playing Mario, I understood that the game world data is the model, or rather a collection of models. So Mario would have a model with his coordinates and velocity, and a controller which fed my key presses to Mario's model. All the enemies would have models with similar data, and these models would have views to display them onscreen. Most likely these would be pretty simple views, just drawing a texture on the screen at their x and y coordinates.
   What I couldn't figure out is this: what if I've got flower power and I shoot a fireball? Mario's model creates the fireball's model, but how does its view get set up? Mario's model can't do that, because that would be coupling model code with view code. I found the article by the power of Google, but it doesn't actually answer this question directly, so I sent the author an e-mail and he was kind enough to explain it to me.

"In our code, the RepresentationManager gets notified when an entity is added to the EntityManager (it could be a generic listener on the EntityManager). It then creates the EntityRepresentation.
For that you'd use a factory pattern. Something like:

map = { { "Entity", "EntityRep" },
{ "EntityTypeA", "EntityTypeARep" },
...
}

RepresentationManager::OnEntityAdded(Entity *e)
{
string rep_type = map[e->GetType()];
EntityRep *rep = factory->Create(rep_type)
...
}"

In theory this would also be possible with views which are more generic, where the map would take a model type as a key, and a view config as the value:

{
   "Mario" : { "type" : "TextureView", "texture" : "mario.png" }
}

// As opposed to..
{
   "Mario" : "MarioView"
}
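
To make that a bit more concrete, here's a rough C# sketch of the config-driven flavour - all invented names, not from the article or my own code. The manager looks up some config by the model's type name and hands it to a single generic view class, rather than needing one hand-written view subclass per model type.

using System.Collections.Generic;

// Bare-bones model types, just enough to show the wiring.
public class Entity { public float x, y; }
public class Mario : Entity { }
public class Fireball : Entity { }

// One generic view class, configured with data instead of subclassed per model type.
public class TextureView
{
    private readonly Entity model;  // read-only access to the model it represents
    private readonly string texture;

    public TextureView( Entity model, string texture )
    {
        this.model = model;
        this.texture = texture;
    }

    public void Draw()
    {
        // Stand-in for real rendering
        System.Console.WriteLine( texture + " at " + model.x + ", " + model.y );
    }
}

public class RepresentationManager
{
    // Model type name -> texture, loaded from data rather than hard-coded per type.
    private readonly Dictionary<string, string> viewConfigs = new Dictionary<string, string>
    {
        { "Mario",    "mario.png" },
        { "Fireball", "fireball.png" },
    };

    private readonly List<TextureView> views = new List<TextureView>();

    // Notified when the model layer spawns an entity (e.g. Mario creating a fireball),
    // so the models never have to know anything about views.
    public void OnEntityAdded( Entity entity )
    {
        views.Add( new TextureView( entity, viewConfigs[entity.GetType().Name] ) );
    }
}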

Anyway, quite possibly I'll eat my words in a year's time, or perhaps the game I'm working on will have an architecture even further removed from MVC. Who knows, or cares? I'm going to bed.

Tuesday, 7 February 2012

Client-side Prediction in Unity

If you're making a multiplayer game in Unity and your networking model includes a fully authoritative server, you might have found movement to be a bit of a stumbling block. Client-side prediction is how lag tends to be hidden; Glenn Fiedler has an awesome series of articles, and this one explains client-side prediction nicely.

To summarise - clients send their input (e.g. key presses) to the server, which computes their updated position in the game world and sends it back to them. To hide the lag, the client runs the movement code locally, and records its input and position each frame. When an update is received from the server, the client looks up the position it recorded for that point in time and compares it with the data received from the server. If the two are out of sync by too large a margin, it retrospectively corrects the client by moving them to the correct position, and then runs through all the stored inputs, replaying the user's actions up to the present time. This "rewind and replay" system is fine under certain circumstances, but a big problem in Unity is physics.
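
To give a feel for the bookkeeping involved, here's a rough sketch of the client-side buffer. This is simplified and the names are made up (it isn't the code from my project), and it assumes you have some simulate function that applies one frame of input to a position:

using System.Collections.Generic;
using UnityEngine;

public struct PredictedFrame
{
    public int tick;         // simulation tick this input was applied on
    public Vector3 input;    // e.g. movement keys encoded as a direction
    public Vector3 position; // the position we predicted after applying the input
}

public class PredictionBuffer
{
    private readonly List<PredictedFrame> frames = new List<PredictedFrame>();

    public void Record( PredictedFrame frame ) { frames.Add( frame ); }

    // Called when the server tells us where we actually were at 'tick'.
    // Returns a corrected position if we were too far out, otherwise null.
    public Vector3? Reconcile( int tick, Vector3 serverPosition, float tolerance,
                               System.Func<Vector3, Vector3, Vector3> simulate )
    {
        int index = frames.FindIndex( f => f.tick == tick );
        if( index < 0 )
        {
            return null; // too old, nothing recorded to compare against
        }

        Vector3 predicted = frames[index].position;
        frames.RemoveRange( 0, index + 1 ); // everything up to this tick is now acknowledged

        if( ( predicted - serverPosition ).sqrMagnitude < tolerance * tolerance )
        {
            return null; // close enough, keep our prediction
        }

        // Out of sync: rewind to the server's position and replay the unacknowledged inputs.
        Vector3 corrected = serverPosition;
        for( int i = 0; i < frames.Count; i++ )
        {
            corrected = simulate( corrected, frames[i].input );

            PredictedFrame replayed = frames[i];
            replayed.position = corrected;
            frames[i] = replayed;
        }

        return corrected;
    }
}

Each FixedUpdate you would Record() the frame you just simulated (and send the same input to the server); whenever a server state arrives you call Reconcile(), and if it returns a corrected position you snap the player there and carry on predicting from it.
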

For about two years I've been developing a Unity multiplayer FPS on and off. Naturally, players can run about, jump up and down, collide with objects, and so on. I have no way of triggering the Rigidbody component on the player to simulate a frame, so I can't rewind and replay, can I? Well, I can if I roll my own basic Rigidbody class:

using UnityEngine;

public class NetRigidbody : MonoBehaviour
{
    private Vector3 velocity;

    public void Simulate( float dt )
    {
        // s = ut + 0.5at^2
        Vector3 movementDelta = ( this.velocity * dt ) + 0.5f * ( Physics.gravity * dt * dt );
        this.transform.position = this.transform.position + movementDelta;
        this.velocity += Physics.gravity * dt;
    }

    public void FixedUpdate()
    {
        this.Simulate( Time.fixedDeltaTime );
    }
}

Big problem here though - collisions! Currently, a GameObject with one of these attached would just clip through everything; it doesn't process any collisions, and won't receive OnCollisionEnter() messages without a standard Rigidbody attached. One way we could approach this is to make gravity zero and attach a standard Rigidbody to process collisions for us. However, this wouldn't work for rewind and replay, as we still have no ability to tell Unity when to simulate the physics.

The Physics class in Unity provides some useful static functions which would allow us to process collisions ourselves, for example Raycast, SphereCast, and CapsuleCast. I'd rather not do this myself, but there's a nice shortcut in the form of CharacterController. This class moves a GameObject by calling the Move( Vector3 motion ) method, and automatically does some collision processing (i.e. stops the CharacterController's capsule from intersecting with other colliders). It also directly calls OnControllerColliderHit() when a collision occurs - that is to say, it calls OnControllerColliderHit() before Move() exits. So the updated NetRigidbody implementation is:

using UnityEngine;

public class NetRigidbody : MonoBehaviour
{
    private CharacterController _characterController;
    public CharacterController characterController
    {
        get
        {
            if( this._characterController == null )
            {
                this._characterController = this.GetComponent<CharacterController>();
            }

            return this._characterController;
        }
    }

    public Vector3 velocity;

    public void Simulate( float dt )
    {
        if( this.characterController == null )
        {
            return;
        }

        Vector3 movementDelta = ( this.velocity * dt ) + 0.5f * ( Physics.gravity * dt * dt );
        this.characterController.Move( movementDelta );

        this.velocity += Physics.gravity * dt;
    }

    public void OnControllerColliderHit( ControllerColliderHit hit )
    {
        // Remove the velocity component along the collision normal,
        // by projecting velocity onto the normal and subtracting it
        this.velocity -= Vector3.Dot( this.velocity, hit.normal ) * hit.normal;
    }

    public virtual void FixedUpdate()
    {
        this.Simulate( Time.fixedDeltaTime );
    }
}

With this we can now effectively achieve rewind and replay in Unity - full client-side prediction! Huzzah! So what are the downsides? For one, using a CharacterController like this means that a GameObject which needs to rewind and replay (i.e. player-controlled entities) will only use a single CapsuleCollider for movement collisions. This is fine for something like Counter-Strike, but what if you want to put vehicles in like Battlefield? I fear that this approach is like building a structure on jelly - it might stay up, but it might all come crashing down.

What alternatives do we have?
1) Instead of using CharacterController, use Physics static functions to properly process collisions for all colliders attached to the GameObject and its children.

2) Just don't rewind and replay. When the server disagrees with the client to a significant degree, stop sending input to the server, and stop predicting movement. Wait for the server to completely catch up and process all of the input we've sent, and once that's happened we resume control of the player. This will result in very noticeable snaps. This should be fine for a game that will only be played on LANs.

3) Take a popular open source physics library such as Bullet, and either port it to .NET (or use an existing port), or Unity Pro users could compile it down to a native .DLL. You'd still need to build a bridge between Unity and that physics library. Keep in mind that Bullet (like many physics libraries) does not simulate physics objects individually, the entire system is stepped forward as a whole. So if you want to rewind and replay, you need to either create a temporary physics world containing just the things you're rewinding and replaying, or you need to rewind and replay absolutely everything.

What did I choose to do? Well, I realised that I just wanted to implement client-side prediction like this because I found it an interesting programming exercise after reading Glenn Fiedler's excellent series of articles. I'd advise others making multiplayer games in Unity to consider semi-authoritative network models. In the case of the game I'm trying to make, it's cooperative, so cheating isn't really a big issue - I could essentially just use the server as a way to connect players, and provide some validation.

But what happens if (like me) you worry that one day you might want to add another game mode where players compete directly with each other? My answer - I downloaded UDK.

Wednesday, 25 January 2012

Designing A LocalConnection Protocol in AS3

It's been quite a while since I've programmed in AS3, but something I found quite useful for communication between SWFs was the LocalConnection class. The SWFs don't need to be on the same page, or even the same browser. I suspect that the BBC iPlayer desktop application (built with AIR) uses this under the hood to trigger a download to start when you click the download button on the iPlayer website.

LocalConnection itself is fairly crude: it can listen for data coming from any other LocalConnection which knows its connection name, and it can send data to any other LocalConnection by name. There's no mechanism built in for knowing where received data has come from - that has to be implemented yourself. A quick code example:

var lc:LocalConnection = new LocalConnection();
lc.connect( "myConn" );
lc.send( "otherConn", "method", ...args );

This is a bit bare on its own, as we often want bi-directional communication, or at least to make sure that the recipient is listening before we attempt to send them data. They behave in some ways like UDP sockets, and a nice reusable module for projects which share data in this way could benefit from something a bit more like a TCP socket. I did something a bit like this for dPets when I was working on that - this isn't the same code though, just inspired by that idea.

So here's the basic API I would set out to use:

// Listening for incoming connections
var listenSocket:LocalSocket = new LocalSocket();
listenSocket.listen( "myConn", onClientConnected );

function onClientConnected( newSocket:LocalSocket ):void
{
     // Do something with the socket
}

// Connecting to a listen socket
var sock:LocalSocket = new LocalSocket();
sock.addEventListener( LocalSocket.LOCAL_SOCKET_CONNECTED, onSocketConnected );
sock.connect( "myConn" );

function onSocketConnected( event:Event ):void
{
     // Do something with socket
}

// Sending data on sockets
// In this example sock1 and sock2 are connected to eachother
sock1.addHandler( "message", messageHandler );
sock1.addHandler( "product", productHandler );

function messageHandler( message:String ):void
{
     trace( message );
}

function productHandler( a:int, b:int ):void
{
     trace( a * b );
}

sock2.send( "message", "Hello world!" );
sock2.send( "product", 7, 6 );

So with that in mind, here goes the implementation. Just like sockets, you can only have one LocalConnection instance listening to any particular connection name. But what needs to happen is that whenever a socket attempts to connect to a listen socket, they perform a handshake process where they exchange unique connection names. The listening socket will have to spawn a second listening socket to receive data, and tell the connecting socket the new connection name to communicate with. The connecting socket will likewise need to generate a connection name, start listening on that connection, and send the connection name to the listening socket. This is actually quite similar to how TCP works.

public class LocalSocket
{
    private var _localConnection:LocalConnection;
    private var _connectionName:String;
    private var _destinationConnectionName:String;

    public function LocalSocket()
    {
        _localConnection = new LocalConnection();
        _localConnection.client = new Object();
    }

    public function connect( connectionName:String ):void
    {
        // Generate a unique connection name and start listening on it,
        // so the listen socket has somewhere to send its reply
        _connectionName = String( Math.random() );
        _localConnection.client["handshakeAccepted"] = this.handshakeAccepted;
        _localConnection.connect( _connectionName );
        _localConnection.send( connectionName, "handshakeRequested", _connectionName );
    }

    private function handshakeAccepted( replyConnectionName:String ):void
    {
        _destinationConnectionName = replyConnectionName;
    }

    private var _onClientConnectedCallback:Function;

    public function listen( connectionName:String, onClientConnected:Function ):void
    {
        _onClientConnectedCallback = onClientConnected;
        _localConnection.client["handshakeRequested"] = this.handshakeRequested;
        _localConnection.connect( connectionName );
    }

    private function handshakeRequested( replyConnectionName:String ):void
    {
        // Spawn a new socket listening on its own unique connection name,
        // then tell the connecting socket which name to send future data to
        var newSocket:LocalSocket = new LocalSocket();
        newSocket._connectionName = String( Math.random() );
        newSocket._destinationConnectionName = replyConnectionName;
        newSocket._localConnection.connect( newSocket._connectionName );

        _localConnection.send( replyConnectionName, "handshakeAccepted", newSocket._connectionName );

        _onClientConnectedCallback( newSocket );
    }
}

Adding in the ability to add handlers and send messages is pretty simple.

public function addHandler( functionName:String, callback:Function ):void
{
    _localConnection.client[functionName] = callback;
}

public function send( functionName:String, ...args ):void
{
    args.unshift( functionName );
    args.unshift( _destinationConnectionName );

    // Equivalent to writing:
    // _localConnection.send( _destinationConnectionName, functionName, arg1, arg2, arg3, etc );
    _localConnection.send.apply( _localConnection, args );
}


There are one or two small adjustments in the full source, for example some rudimentary support for a socket continuing to attempt to connect until a timeout is exceeded. I haven't tested the code beyond the basics, so it won't handle erroneous data at all gracefully. There's also no way to close sockets like you would with Berkeley sockets, or detect if one end of the connection has dropped off. A nice feature to add at a later date would be some kind of repeated attempts at delivering a sent message if it fails, again with a timeout.

Full source code and a brief example can be found here.

Thursday, 19 January 2012

Stuff Wot I've Done

Given that my horrifically awful ePortfolio is no longer on the web, and this blog has very little information about what I've actually done to date, I thought I'd talk about a few games I've made or been a part of.

The first game that I ever released was a Flash game called Sunrise Hackathon (the first programming language I learned was Flash ActionScript). I wanted to just call it "Sunrise", but that was already taken. It was a hacking game - I'm a big fan of Uplink and most things Introversion as you would probably imagine, but at times I was frustrated with how the process of hacking something in Uplink was all about running a series of programs. I thought a possible solution would be to turn each step of the hacking process into an entirely unrealistic mini game, which looked like something out of a film like Swordfish. The results were not good, as I'm sure you'll find out if you play it, but despite all the things which make me cringe now, there are still things I like about it, and if memory serves it was the first game I ever properly finished, which is an achievement in itself.

Next there was my follow-up game Neuron, which had the silly background story of the player taking the role of a Russian scientist on the run from the KGB, and resorting to uploading his consciousness to the Internet so that he might become immortal. Naturally that was all tongue-in-cheek nonsense, but the core idea of uploading one's consciousness to the Internet was something that I wanted to play with after seeing the X-Files episode "Kill Switch". This concept entirely fell by the wayside - my initial prototypes of the game quickly shifted towards what seemed to be fun, rather than what was relevant to the "story", and it turned into a Geometry Wars style shmup. This was actually by far the most popular Flash offering I ever produced, and the first game I managed to find a sponsor for. It appeared on the front page of Newgrounds for a time, and I was really pleased with the response. Though it has an aesthetic much reminiscent of Geometry Wars, at the time I had not actually played Geometry Wars, only briefly seen it played, so I maintain that I didn't intentionally "pinch" any ideas beyond the general look.

Then came Soundscape Blast, which was essentially my attempt to do AudioSurf in Flash. I don't remember what I wanted to call the game originally, but the sponsor wasn't fond of it, and "Soundscape Blast" was just a replacement which I pulled out of my arse - they liked that one better. At the time, the process of capturing the raw samples of an mp3 in Flash and performing a beat-finding analysis was quite new. Back then there were no games I was aware of in the Flash world which did this - maybe there still aren't, I'm not really sure. I just wish I'd made a game which was less shit, and less of a rip-off of AudioSurf, which I still feel bad about to be honest.

I've known for a long time that game programming is what I want to do with my life, and I enrolled to study Computer Games Programming at Staffordshire University back in 2007. I planned to study two years, then a year in industry, the final bachelor's year, plus an additional master's year. I guess I just wanted to spend as much time in education as possible before joining the real world (and maximise my chances of finding a job in the industry). When my placement year came around in 2009, things weren't looking good for quite a while. Placements specifically in the games industry were looking fairly scarce (presumably the economy had something to do with that, I'm not sure). Out of desperation I got on gamedevmap and spammed a load of studios in the south-east-ish of the UK with a copy-paste email explaining my situation of needing a placement, and that I would be happy to work unpaid in return for the opportunity. I still remember sitting on my mate John's bed watching a film.. I forget which film.. I must've been paying more attention to my computer, because there, sitting in my inbox, was a very promising-sounding email from RedBedlam. A couple of days later I had a sort of interview with my potential boss over the phone, followed by an interview in person (which involved a bowl of chips in a pub, which is always awesome). About a week later I was offered an intern position, and I moved to Brighton about a week after that, which was all a bit of a blur.

RedBedlam's speciality is MMOs, and I came into the picture just as the dPals online game was starting development - given my Flash background, this was what I was put straight onto. After my year at RedBedlam was up, they offered me a full-time job. At the time it seemed like a tough decision, but looking back on it I knew what I needed to do; I was just scared that if I didn't get a degree then I wouldn't be able to get another job in the industry if I ever left RedBedlam. Given that I'd learned such a huge amount in just a year, it was clear that I should take the job, so I did, and here I am. Development on dPals is still ongoing, and it gets better all the time. These days I'm working on our upcoming game The Missing Ink, which I probably shouldn't say much about, but I'm really looking forward to release.

I still plan on working on games in my free time, but the kind of games that I want to make has changed a lot. Relatively by accident I came across the "Rev Rant" series by Anthony Burch (then features editor at Destructoid) and his rants on "Fun Isn't Enough" and "Donate".


I was aware of the games-as-art movement at the time, and I was already a fan of some arty games like fl0w and Flower by ThatGameCompany, and Edmund McMillen's Coil, but I wasn't really that interested in art games specifically. I played those aforementioned games because I enjoyed them; I didn't really know why. After seeing those rants I realised I should take the medium more seriously if I want to see it progress. I started to analyse why I liked certain games, and why some games were just fun while others really spoke to me on a completely different level, and were far more meaningful to me. It's this sort of avenue I want to go down in future.

So, that's the story so far.

Monday, 26 September 2011

Mono and mkbundle on Windows

A problem for a fair few Mono and .NET developers is "what if the user doesn't have Mono or .NET installed?" You can bootstrap the .NET installer into an .msi installer, but if you really need to just zip up a few files and send them to someone, and have your application just work, then you might need mkbundle.

mkbundle is a great tool included in the Mono framework which allows you to embed the Mono runtime, along with your application, into a single executable. On OSX you can just use MonoMac (especially if you want to distribute something on the Mac App Store), but on Windows we'll need to crank up the good ol' command line!

Right, the first thing to say is that on Windows, mkbundle is sort of broken (at least in Mono 2.10.5, which I'm using). The code is pretty straightforward, so I might have a go at patching it up if I get some time in the next few months - I've never contributed to an open-source project before, but there's a first time for everything. Anyway, here's the list of workarounds I had to use to get an application to bundle and run:
  1. Install Mono to a path with no spaces (mine is C:\Mono-2.10.5)
  2. Install Cygwin, adding the following packages:
    • gcc-mingw
    • mingw-zlib
    • pkg-config
    • nano
  3. Start up Cygwin and type nano .bashrc
  4. At the end of the file add:
    • export PKG_CONFIG_PATH=/cygdrive/c/Mono-2.10.5/lib/pkgconfig
    • export PATH=$PATH:/cygdrive/c/Mono-2.10.5/bin
  5. Restart Cygwin
  6. Copy your application's executable (the one built against Mono/.NET) to your Cygwin home directory (C:\Cygwin\home\<user>)
  7. Copy C:\Mono-2.10.5\etc\mono\4.0\machine.config to the home directory alongside the executable
  8. Type the following into Cygwin:
    • mkbundle -c -o host.c -oo bundle.o --deps --machine-config machine.config YOUR_APPLICATION.exe -z
    • nano host.c
  9. Scroll down to find:
#ifdef _WIN32
#include <windows.h>
#endif

Change it to:

#ifdef _WIN32
#include <windows.h>
#endif
#undef _WIN32
  10. Save the file by hitting CTRL + o, and then Enter
  11. Finally, to compile the intermediate files into a final executable, type the following into Cygwin:
    • gcc -mno-cygwin -o BundledExecutable.exe -Wall host.c `pkg-config --cflags --libs mono-2|dos2unix` bundle.o -lz
  12. You'll need to copy mono-2.0.dll and zlib1.dll from C:\Mono-2.10.5\bin to wherever your final executable will be; other than that, you should be good to go.
That should do the trick, but this specific series of steps might not work (or be unnecessary) in later releases of Mono.
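
If you want something trivial to sanity-check the process with before bundling a real application, any tiny console program will do - something like the following, compiled with Mono's dmcs and then substituted for YOUR_APPLICATION.exe in step 8:

using System;

public static class Program
{
    public static void Main()
    {
        // If the bundle works, this should run on a machine with no Mono or .NET installed.
        Console.WriteLine( "Hello from a bundled Mono executable!" );
        Console.WriteLine( "Runtime version: " + Environment.Version );
    }
}
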

Tuesday, 6 September 2011

MonoDevelop and Loading Embedded Resources

I recently had an unusual amount of difficulty just loading an embedded resource in a MonoDevelop C# project. I couldn't find a great deal of useful information through googling, and in the end found out how to do it by looking through the GTK# source code of all things. So for anyone else wondering, here's how I achieved it:

Select the file in the solution view

In the properties panel, set "Build action" to "Embed as resource"

Set "Resource ID" to a unique name, one is automatically generated for you but you can change it if you wish

Then when you want to get a stream to the resource, you can do so like this:

System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceStream( "mytextfile" );

This will return an open stream to the resource. Handily you could code a .dll which accesses embedded resources in a calling assembly, by simply changing GetExecutingAssembly() to GetCallingAssembly().
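
For example, reading an embedded text file into a string (assuming you kept the Resource ID "mytextfile" from above) might look something like this:

using System.IO;
using System.Reflection;

public static class ResourceLoader
{
    public static string ReadTextResource( string resourceId )
    {
        Assembly assembly = Assembly.GetExecutingAssembly();

        using( Stream stream = assembly.GetManifestResourceStream( resourceId ) )
        {
            if( stream == null )
            {
                throw new FileNotFoundException( "No embedded resource with ID: " + resourceId );
            }

            using( StreamReader reader = new StreamReader( stream ) )
            {
                return reader.ReadToEnd();
            }
        }
    }
}

// Usage:
// string contents = ResourceLoader.ReadTextResource( "mytextfile" );
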

Hope this is of use to someone =)