Due to my day job, I’m somewhat of a Houdini power user. Considering that, it took me surprisingly long to actually get to play with Houdini Engine for Unity. Sadly, it’s just not a topic in VFX – yet.
But I’ve finally started chipping away at a prototype for a new game that requires dungeon creation, and so I got around to crossing the bridge from SOPs into Unity via the Houdini Engine.
Here’s a small demo video:
This system is set up as a single HDA, driven by a bunch of input curves. The curves come straight out of Unity and allow me to interactively adapt the level creation.
From the curves – with SOP lofting, booleans and a bit of wrangling – I generate the dungeon. I also set the attributes that connect Unity materials directly. This is my main concern, as I really want to get my world set up with a single click. I suppose I could also run a post-processing script; I’ve seen the hooks for that on the Unity HDA asset.
Works like a charm! I’m sure there will be a good use for this.
I’ve been working with a technique called Vertex Animation Textures lately. In essence, this is about using texture maps to drive the animation of a fixed number of mesh entities in a 3D scene.
I used the Houdini Game Development Tools, which allow you to export a set of packed geometries into said images. The package also comes with a Unity shader that reads those textures and applies the transformation at the vertex level.
So I created a Houdini POP sim based on some slightly terrible but free motion-capture animation I found on TurboSquid. It took a fair amount of fiddling with packed transforms to make it all work and stick, but once it was in Unity, it was fairly straightforward to get working.
I have to say, it’s a pretty neat trick to get some very complex animation of a LOT of moving parts into real-time rendering, at pretty much the cost of two additional texture lookups.
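To make the trick concrete, here is a minimal Python mock-up of the idea – my own sketch, not the actual Game Development Tools or shader code: per-frame vertex positions are normalized into the sim’s bounding box and quantized into pixels (one row per frame, one pixel per vertex), and the vertex shader does the inverse lookup.

```python
import numpy as np

def encode_vat(frames):
    """Pack per-frame vertex positions into a texture-like array.

    frames: (num_frames, num_verts, 3) array of world-space positions.
    Returns the 8-bit texture plus the bounds needed to decode it.
    """
    frames = np.asarray(frames, dtype=np.float64)
    lo = frames.min(axis=(0, 1))                   # per-axis bounding-box min
    hi = frames.max(axis=(0, 1))                   # per-axis bounding-box max
    span = np.where(hi > lo, hi - lo, 1.0)         # avoid division by zero
    norm = (frames - lo) / span                    # positions mapped to [0, 1]
    tex = np.round(norm * 255).astype(np.uint8)    # quantize to RGB pixels
    return tex, lo, span

def decode_vat(tex, lo, span, frame, vert):
    """What the vertex shader does: sample one pixel, rescale to world space."""
    return tex[frame, vert].astype(np.float64) / 255.0 * span + lo
```

With 8-bit channels the quantization error per axis is span/255, which is why higher-precision texture formats matter once the bounding box gets large.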
I played with this for a while; the outcome is a short piece about all the money in the world.
After tinkering with AR for my last post, I got somewhat hooked.
Same objective this time around: help your tiny friend get to the treasure. This time you have a (conveniently heavy) anvil, a see-saw and a wooden platform you can move around.
I managed to upgrade my camera, so I could try a few camera angles and some compelling, story-driving editing. And once again, this is all done right on my desk.
It’s quite amazing how much difference a real-world connection into the virtual scene makes – even if it’s jittery tracking of crinkled paper markers that kinda obscure half the screen. There is definitely something in this approach that – with better software and hardware – will bring a small revolution to the gaming world.
And I’m not talking PS4 shooter replacements. I’m talking playing Monopoly with your family, or Settlers of Catan, with animated characters that come to life on your living room table.
Or – I cannot spare you a World Cup reference – play the next FIFA match against your colleagues at lunchtime on the office floor, with everyone standing around the field!
Well, early days; I’ll keep playing. Let me know what you think!
The models are partially free assets from the Unity Asset Store. A few I modelled and painted with Sculptris. I just wanted to try it out, and it’s a useful – and, most importantly for the average indie game developer, free – alternative for 3D sculpting and painting.
Little man has made it through the forest in pursuit of a mighty treasure. Now, on top of a rocky peak, all that is left is to cross a mighty crevasse and be set for life in wealth and fame! But what to do – the treasure seems so close and yet so far!
Now this is the point where the player comes in, lending a divine hand by lifting a wooden board to form a bridge and letting the little guy achieve his goal.
This is the next iteration of my experiments to come up with interesting use cases for AR and games. Apologies for the crappy web cam again, which makes the tracking just pop every now and then.
I want to explore what game mechanics can be constructed around direct human interaction with a virtual world and the characters within it.
This – again – is not really AR. It is a game I can play on my living room table, but it does not interact with the real world, apart from me waving around a marker. Still, it instantly felt quite captivating to have this small guy and his struggle, and to lend him – quite literally – a hand. Since I’m doing this with a webcam that points towards me, directions are reversed. If you look closely, I messed up and hit him on the head with the board after he crossed. And I did feel rather bad! Says something about immersion, I’d say.
Well this is a start. There are tons of things that can be improved, but as a prototype I found this quite informative.
The tech behind it – in case you wondered – is once again Unity 2018, Vuforia and a bunch of amazing free 3D assets from the Asset Store.
Now that I have Unity working pretty well with Vuforia and AR, I’m toying with a few mini-game ideas.
Still getting used to how it all ties together, and struggling a bit with the discrepancies between world and content scale, as it messes with physics and lighting. But I made progress. I spent a good half hour just playing and utterly failing to get the ball into the box.
I cleaned up the scene today and presented the whole thing a bit better. I also changed some physics parameters to make it all work much more nicely.
It’s cool to interact with the world. It still feels a little too tinkerish, shuffling markers around while making sure the webcam – which is taped to my desk lamp – doesn’t fall off. It’s all a matter of time until there is a nice end-user interface and plain real-world object tracking. That’s what I’m going to start looking into next.
I started a while back on a two player snake game which I used to test out Löve2D. I’ve ported the code to Unity and set up a simple test with Vuforia.
The whole process is very straightforward, and I was quite impressed how well it tracks. A well-lit environment seems a must, though; testing with a £5 webcam, it struggled a bit.
Here’s the clip:
Apologies for the partly bad quality and clunky handling. Holding a marker and a phone while playing is beyond the range of my motor functions.
For this test the interaction is quite simple; nothing particularly tailored for AR.
It lets you play snake on any flat surface, like a floor or a table. In the video I “glued” it to a marker. Touching the left/right half of the phone screen steers the snake; a two-finger tap resets.
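The steering scheme is simple enough to sketch. This is a hypothetical mock-up in Python, not the actual Unity code: the snake keeps a grid heading, a touch on the left or right half of the screen rotates it 90 degrees, and a two-finger tap resets the run.

```python
# up, right, down, left – turning right advances through this list.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

class Snake:
    """Toy model of the touch-steered snake (names are my own)."""

    def __init__(self, start=(0, 0)):
        self.reset(start)

    def reset(self, start=(0, 0)):
        """Two-finger tap: start over from the given cell, facing up."""
        self.body = [start]
        self.heading = 0  # index into HEADINGS

    def turn(self, side):
        """side is 'left' or 'right', i.e. which screen half was touched."""
        self.heading = (self.heading + (1 if side == 'right' else -1)) % 4

    def step(self):
        """Advance one cell in the current heading (always growing, for brevity)."""
        dx, dy = HEADINGS[self.heading]
        x, y = self.body[0]
        self.body.insert(0, (x + dx, y + dy))
```

A real version would also drop the tail segment each step and check for self-collision; the point here is just the relative left/right steering.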
I have a few better ideas for some cool, more AR-esque mini games. Watch this space!
This is an early test using Unity NavMeshAgents and a simple animation state machine to generate autonomous little people running around in a rather grassy environment.
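The state-machine half can be reduced to a tiny sketch – my own simplification in Python, not the actual Mecanim setup: the agent’s current speed picks the animation clip, with the thresholds below being assumed values.

```python
# Assumed speed thresholds in m/s – placeholders, not values from the project.
WALK_THRESHOLD = 0.1
RUN_THRESHOLD = 2.0

def animation_state(speed):
    """Map a NavMeshAgent-style speed to an animation clip name."""
    if speed < WALK_THRESHOLD:
        return "idle"
    if speed < RUN_THRESHOLD:
        return "walk"
    return "run"
```

In Unity the same effect comes from feeding the agent’s velocity magnitude into an animator parameter and letting transitions key off it; the function above is just the decision logic in isolation.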
I’m planning to use this in a later project. For now I’m dipping into all the different areas I think I’ll need, just to get a better understanding of the scope and complexity of the problems ahead.
So far this has all been rather straightforward. There are open questions about how to reliably streamline animation import into Unity while keeping it all as flexible as possible.