Archive of articles classified as "server/client"


Deploying a C# app through pip

28/05/2014

“If we bundle up this C# application inside of a Python package archive we can deploy it through our internal CheeseShop server with very little work.”

That sentence got me a lot of WTF’s and resulted in one of the worst abuses of any system I’ve ever committed.

We had a C# service that would run locally and provide an HTTP REST API for huge amounts of data in our database that was otherwise difficult to work with.* We had no good way to deploy a C# application, but we needed to get this out to users so people could start building tools against it.
I refused to deploy it by running it out of source control. That is how we used to do stuff, and it caused endless problems. I also refused to accept bottlenecks like having IT deploy new versions through Software Center.

So in an afternoon, I wrote up a Python package for building the C# app, packaging it into a source distribution, and uploading to our Cheese Shop. The package also had functions for starting the packaged C# executable from the installed distribution. The package then became a requirement like anything else and was added to a requirements.txt file that was installed via pip.
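
The launcher half of that package only needed a few lines. Here is a rough, hypothetical sketch of its shape; the module layout and executable name are illustrative, not the original code:

import os
import subprocess

# The built C# executable is assumed to be bundled as package data under bin/.
_EXE_NAME = 'DataService.exe'

def get_exe_path():
    """Return the path to the bundled executable inside the installed package."""
    return os.path.join(os.path.dirname(__file__), 'bin', _EXE_NAME)

def start_service(*args):
    """Launch the packaged C# service and return the Popen handle."""
    return subprocess.Popen([get_exe_path()] + list(args))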

What I initially thought was an unsightly hack ended up being perfect for our needs. In fact it caused us to eliminate a number of excessive future plans we had around distribution. I certainly wouldn’t recommend this “technique” for anything more than early internal distribution, but that’s what we needed and that’s what it did so effectively.

Getting something up and running early was extremely useful. It’s important to walk the line between the “we need to do several weeks of work before you see any result” and “I can hack something together we’re going to regret later,” especially for infrastructure and platform work. If code is well-written, tested, documented, and explained, you shouldn’t have qualms about it going into (internal) production. If the hackiness is hidden from clients, it can be easily removed when the time comes.


* Reworking the data was not an option. By providing an abstraction, though, this service would have eventually allowed us to rework the data.


goless- Golang semantics in Python

21/04/2014

The goless library (https://github.com/rgalanakis/goless) provides Go programming language semantics built on top of Stackless Python or gevent.* Here is a Go example using channels and select, converted to goless:

import time

import goless

c1 = goless.chan()
c2 = goless.chan()

def func1():
    time.sleep(1)
    c1.send('one')
goless.go(func1)

def func2():
    time.sleep(2)
    c2.send('two')
goless.go(func2)

for i in range(2):
    case, val = goless.select([goless.rcase(c1), goless.rcase(c2)])
    print(val)

While I do not usually program in Go, I am a big fan of its style and patterns. goless provides the familiarity and practicality of Python while better enabling the asynchronous/concurrent programming style of Go. Right now it includes:

  • Synchronous/unbuffered channels (send and recv block waiting for a receiver or sender).
  • Buffered channels (send blocks if the buffer is full, recv blocks if the buffer is empty); see the sketch after this list.
  • Asynchronous channels (these do not exist in Go; send never blocks, recv blocks if the buffer is empty).
  • The select function (like reflect.Select; since Python does not have anonymous blocks, we could not replicate Go’s select statement).
  • The go function (runs a function in a tasklet/greenlet).
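
For example, buffered and asynchronous channels are created with the same chan() call as the synchronous ones above, just with a size argument. This is a hedged sketch based on my reading of the docs, so double-check the signature against the repository:

import goless

buffered = goless.chan(2)    # buffered: send only blocks once two items are waiting
unbounded = goless.chan(-1)  # asynchronous: send never blocks

buffered.send('a')
buffered.send('b')           # does not block, the buffer still has room
print(buffered.recv())       # 'a'

unbounded.send('x')
print(unbounded.recv())      # 'x'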

goless is pretty well documented and tested, but please take a look or give it a try and tell us what you think here or on GitHub’s issues. I’m especially interested in adding more Go examples converted to use goless, or other Go features replicated to create better asynchronous programs.**


*. goless was written at the PyCon 2014 sprints by myself, Carlos Knippschild, and Simon Konig, with help from Kristjan Valur Jonsson and Andrew Francis (sorry for the lack of accents here, I am on an unfamiliar computer). Carlos and I were both laid off while at PyCon; if you have an interesting job opportunity for either of us, please send me an email: rob.galanakis@gmail.com

**. We are close to getting PyPy support working through its stackless.py implementation. There are some lingering issues in the tests (though the examples and other ‘happy path’ code works fine under PyPy). I’ll post again and bump the version when it’s working.


Why I love blogging

29/03/2014

I started to write a post about how I missed multithreading and speed when going from C# to Python. I ended up realizing that the service we built, which inspired the post, was poorly designed and took far more effort than it should have. The speed and multithreading of C# made it easier to come up with an inferior design.

The service needed to be super fast, but the legacy usage pattern was the real problem. We thought it needed to run as a service, only to realize later that it would be running as a normal process. This is what happens when you focus too much on requirements and architecture instead of delivering value quickly: you create waste.

I wouldn’t have realized all of this if I didn’t sit down to write about it.


Configuration in Host the Docs

28/03/2014

With Host the Docs, I chose to eschew configuration files and use a sphinx-style “conf.py” approach (I have previously written about how I like this approach). If a conf.py file is found, it is used for configuration, allowing it to override default configuration options. I also allow configuration through environment variables, where the environment variable name is the same as the conf.py variable but with HTD_ prepended.

So for example, in hostthedocs/getconfig.py, I have code like the following:

import os

try:
    import conf
except ImportError:
    conf = None

def get(attr, default):
    # Precedence: environment variable first, then conf.py, then the default.
    result = os.getenv('HTD_' + attr.upper())
    if result is None:
        result = getattr(conf, attr, None)
    if result is None:
        result = default
    return result

welcome = get('welcome', 'Welcome to Host the Docs!')
# Other configuration values
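
That resolution order means an environment variable beats conf.py, which beats the built-in default. A quick, hypothetical way to see it in action (assuming you are in a Host the Docs checkout):

import os
os.environ['HTD_WELCOME'] = 'Docs live here'

from hostthedocs import getconfig
print(getconfig.welcome)  # 'Docs live here', regardless of conf.py or the default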

I was initially going to use a json or yaml file, but I always find their contracts a bit unclear, especially when paths are involved in the configuration (if the config uses relative paths, what are the paths relative to?). It also involves more code.

I was really happy with how using Python for configuration worked out in Host the Docs.


Maya and PyCharm

15/01/2013

Recently at work I had the opportunity to spend a day getting Maya and PyCharm working nicely together. It took some tweaking, but we’re at a point now where we’re totally happy with our Maya/Python integration.

Remote Debugger: Just follow the instructions here: http://tech-artists.org/forum/showthread.php?3153-Setting-up-PyCharm-for-use-with-Maya-2013. We ended up making a Maya menu item that will add the PyCharm helpers to the syspath and start the pydev debugger client (basically, the only in-Maya instructions in that link).

Autocomplete: To get autocomplete working, we just copied the stubs from “C:\Program Files\Autodesk\Maya2013\devkit\other\pymel\extras\completion\py” and put them into all of the subfolders under: “C:\Users\rgalanakis\.PyCharm20\system\python_stubs” (obviously, change your paths to what you need). We don’t use the mayapy.exe because we do mostly pure-python development (more on that below), and we can’t add the stubs folder to the syspath (see below), so this works well enough, even if it’s a little hacky.

Remote Commands: We didn’t set up the ability to control Maya from PyCharm, we prefer not to develop that way. Setting that up (commandPort stuff, PMIP PyCharm plugin, etc.)  was not worth it for us.

Copy/Paste: To get copy/paste from PyCharm into Maya working, we followed the advice here: http://forum.jetbrains.com/thread/PyCharm-671 (with one minor change to the code in the last post: screenshots weren’t working at all while Maya was running, so we early-out of the function if “oldMimeData.hasImage()”).

And that was it. We’re very, very happy with the result, and now we’re running PyCharm only (some of us were using WingIDE or Eclipse or other stuff to remote-debug Maya or work with autocomplete).

————————

A note about the way we work- we try to make as much code as possible work standalone (pure python), which means very little of our codebase is in Maya. Furthermore we do TDD, and we extend that to Maya, by having a base TestCase class that sends commands to Maya under the hood (so the tests run in Maya and report their results back to the pure-python host process). So we can do TDD in PyCharm while writing Maya code. It is confusing to describe but trivial to use (inherit from MayaTestCase, write code that executes in Maya, otherwise treat it exactly like regular python). One day I’ll write up how it works in more depth.
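
To make the shape of it a little more concrete, here is a rough, hypothetical sketch; the port number, the command-port bridge, and the class names are assumptions for illustration, not our actual implementation:

import socket
import unittest

MAYA_PORT = ('localhost', 7005)  # assumes Maya has opened a Python command port here

def run_in_maya(python_source):
    """Send Python source to the Maya command port and return the raw reply."""
    with socket.create_connection(MAYA_PORT) as sock:
        sock.sendall(python_source.encode() + b'\n')
        return sock.recv(65536).decode()

class MayaTestCase(unittest.TestCase):
    """Subclasses write code that executes in Maya; results come back here."""
    def assert_maya_true(self, expression):
        reply = run_in_maya('print(bool({0}))'.format(expression))
        self.assertIn('True', reply)

class CubeTests(MayaTestCase):
    def test_cube_exists(self):
        # Assumes a pCube1 node in the open Maya scene.
        self.assert_maya_true('cmds.objExists("pCube1")')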


Too ignorant to know better

11/02/2012

My first big python project last year was yet another feed aggregator (taogreggator). Before I started, I looked around at what other aggregators were available, and wasn’t happy with any of them in terms of features, complexity, or the effort of getting each one working.

Of course, 9 months later, that project is dead and I’ve successfully got the python ‘planet’ module up and running at www.tech-artists.org/planet.

Note, this blog post probably reveals what a big programming phony I am ;) Remember though that this sort of thing is well outside my usual domain of expertise.

So what happened? Why did it take so long to realize I was doing something stupid, regroup, and adopt something that actually works?

I was too ignorant to know better. Well, to be fair, I didn’t undertake this project out of hubris or to build something better; I built it mostly as a significant project I could train my python skills with.

I’m not interested in why it failed. There are 100 reasons why it failed, none of them unexpected or interesting. I’m interested in why I undertook it in the first place and took so long to trash it.

1. I didn’t know anything about the web

I still know barely anything, but trying to take an existing package and get it running was incredibly difficult, because I was so far out of my depth. I didn’t even have the vocabulary, and was unfamiliar with everything I was supposed to do and with how things were meant to work. My own project allowed me to get into it gradually.

2. Too inexperienced to know the challenges ahead of me

It wasn’t actually that difficult to get the app running locally. I even opened up a router port and ran my PC as a server for remote connections. But I had an Ubuntu server to deploy to, and knew nothing about Linux. I had never created a web app before. So at every step, I thought I was almost there. Everything was an unknown unknown to me, because I had no idea what to expect.

3. Too inexperienced with the commandline and the python environment

I talked about it in my Relearning Python series. When I started out, I didn’t really get how python works, because I came from .NET where I didn’t have to worry about any of that. I have a much, much better understanding now, and the environment is one of the early things I teach any new python programmer, because once you start importing code, or writing complex scripts, you need to know how it works. I didn’t understand the environment so I had a very difficult time getting any third-party systems set up.

4. Pythonic is more than a coding style

When I came to python, I was indoctrinated in the ways of a .NET programmer. It took me a long time to understand that ‘pythonic’ applies to more than just lines of code. It has to do with how you run your entire application. The way I run planet I’d consider entirely pythonic: I have a very thin script that generates and uploads some files. The planet module itself is pythonic; there’s some straightforward documentation, commented ini files, and templates, and you’re supposed to customize things and build a few wrapper scripts to run the stuff you need. This looseness was foreign, as I was used to a much more data-driven, rigid way of customizing an app. Being data driven is not great in all circumstances, especially when developing frameworks and apps like this, where the programmer is the user. When I compared what I had ended up with to planet, I was embarrassed by how confusing my design was (though, to be fair, it had more features planned). Without understanding how I should use modules like planet, I couldn’t use them. Such basic stuff is not covered in a readme.

So, several weeks ago, I finally made an effort to deploy my custom aggregator on an AWS Windows server. I still couldn’t get it working. And I kept running into more questions about why I had done things a certain way (I don’t think the code or design is particularly bad, but it made the app difficult to use on a server). It was a huge failure. So three days later, after an awful day at work, I regrouped and spent the entire evening figuring out existing aggregators; after struggling with various ones, I chose ‘planet’, and got pretty much everything working.

The lessons are pretty clear. You need some minimum knowledge to be able to make an informed decision. Attempt something of a very limited scope to give you that knowledge before making your decision. You will have plenty of options to reinvent the wheel when you know what you’re doing. On the other hand, if you’re pursuing a project only for educational purposes, do whatever you want :)

Next time I’m going to follow some tutorial end to end. It was fun hacking away on something way too complex, but I failed to deliver a server to the community, and, tbh, the time could have been better spent.


Everything can be a server/client!

30/01/2012

We Tech Artists can get intimidated when talking about servers and clients. They remind us of a world of frameworks and protocols we’re not familiar with, run by hardcore server programmers who seem to have a very demanding job. Fortunately, that needn’t be the case, and understanding how to turn anything into a server/client can open limitless possibilities.

You can think of server/client as a way to get two processes to communicate with each other over sockets, one that is more flexible than other means of IPC such as COM or .NET marshalling. Your server can be local, or it can be remote, and very little usually has to change. Moreover, you can define much more flexible protocols/mechanisms, so you can communicate across literally any programming language or platform.

The practical reason everything can be a server/client is because we don’t have to understand much of how anything works under the hood. You follow some examples of how to set up a server and client using the framework of your choice (I’m a huge, huge fan of ZeroMQ which has bindings for pretty much everything including python and the CLR). Once you get comfortable, you just design your interface, and implement one on the server and on the client (the client just usually sends data over to the server and returns the response). Actually I really like how WCF recommends you build your server and client types, even though I am not a big fan of the framework. And I do the same for Python even though it’s not strictly necessary ;)

So your server just needs to poll for messages in a loop, and the client sends requests to it, and the server sends back replies. So driving one app with another is as simple as creating a server on the slave and polling in a (usually non-blocking) loop, and having the client send commands to it. You can invert the relationship on a different port and now you have bi-directional communication (hello live-updating in your engine and DCC!).
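
To make that concrete, here is a minimal request/reply sketch using pyzmq; the port, the JSON message shape, and the handler names are arbitrary choices for illustration, not a prescribed design:

import zmq

def handle(request):
    # Dispatch table standing in for whatever interface you define.
    handlers = {'ping': lambda: {'ok': True}}
    return handlers[request['cmd']]()

def serve(port=5555):
    sock = zmq.Context.instance().socket(zmq.REP)
    sock.bind('tcp://*:%d' % port)
    while True:                     # the server polls for messages in a loop
        request = sock.recv_json()  # e.g. {'cmd': 'ping'}
        sock.send_json(handle(request))

def request(cmd, port=5555):
    sock = zmq.Context.instance().socket(zmq.REQ)
    sock.connect('tcp://localhost:%d' % port)
    sock.send_json({'cmd': cmd})
    return sock.recv_json()         # the client just waits for the reply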

The real power of this, I’ve found, is that I really have full control over how I want things to work. No more going through shitty COM or .NET interop, no more Windows messaging. I define the interface that declares what functionality I need, and can implement it in a sensible and native way (ie, not COM, etc.).

For example, we use this for:

  • Recreating a Maya scene in our editor, and interactively updating our editing scene by manipulating things in Maya, even though their scene graphs and everything else are nothing alike.
  • Running a headless version of our editor, so we can interact with libraries that only work inside the editor/engine, from any other exe (like regular python, Maya, or MotionBuilder).
  • Having a local server that caches and fetches asset management information, so data between tools is kept in sync for the entire machine and there are no discrepancies per-app.

If we had a need, we could easily extend this so any other programs could talk to each other. In fact this is generally how it’s done when apps talk to each other: I’m not presenting anything new, just trying to convince you it becomes really really easy.

If you’re anything like me, thinking about things in a server/client scenario can give you an entirely new perspective on how you develop tools and pipelines.


Three options for data correctness

24/01/2012

In a previous post, I linked to Rico Mariani’s performance advice for Data Access Layers. On G+, Tyler Good asked:

I just read the posts and the linked blogs, I had a question about some specific implementations. How do you deal with classes that represent another non-[in this case]-Python entity that may be updated outside of Python?

I’m not sure if this sort of case is outside of the scope of what’s being talked about in the articles, but if there’s a better way to do getting on things like p4 paths or elements in a Maya file (that may have been changed by the user since instantiating/loading the object) I’d really like some ideas about that.

You basically have three options and fortunately they line up easily on a scale:

Technique          Correctness   Difficulty
Transactions       Always        High
Fetch-on-demand    Usually       Medium
Store in memory    Maybe         Low

Let’s get on the same page first by considering three types of interaction: database access through a DAL, Perforce (or any source control) interaction, and interaction with some host application (Maya, your engine, or whatever). So what are the three approaches and how do they differ?

Store in Memory

You create a code object with a given state, and you interact with that code object. Every set either pushes changes immediately, or you can push all changes at once. So for example, if you have a tool that works with some Maya nodes, you create the python objects, one for each node, when you start the tool. When you change one of the python objects, it pushes its changes back to the Maya node.
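
For concreteness, here is a minimal sketch of that pattern, assuming maya.cmds is available; the class is illustrative, not code from a real tool:

from maya import cmds

class CachedNode(object):
    """Caches node state at construction and pushes edits back on set."""
    def __init__(self, node):
        self.node = node
        self.translate = cmds.getAttr(node + '.translate')[0]  # cached copy

    def set_translate(self, x, y, z):
        self.translate = (x, y, z)
        cmds.setAttr(self.node + '.translate', x, y, z)  # push the change
        # Nothing protects us if the node was renamed or deleted in the meantime.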

This is the simplest to reason about and implement. However, the difficulty quickly becomes managing its correctness. You need to lock people out of making changes (like deleting the Maya node a python object refers to), which is pretty much impossible. Or you need to keep the two in sync, which is incredibly difficult (especially since you have any number of systems running concurrently trying to keep things in sync). Or you just ignore the incorrectness that will appear.

It isn’t that this is always bad, more that it is a maintenance nightmare because of all sorts of race conditions and back doors. Not good for critical tools that are editing any sort of useful persistent data. And in my opinion, the difficulties with correctness are not worth the risk. While the system can be easy to reason about, it is only easy to reason about because it is very incomplete and thus deceptively simple. So what is better?

Fetch on Demand

Here, instead of storing objects in two places (your code’s memory, and where they exist authoritatively, like the Maya scene, or a Perforce database), you store them only where they exist authoritatively and create the objects when that data is queried. So instead of working with a list of python objects as with Store in Memory, you’d always query for the list of Maya nodes (and create the python object you need from it).
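
The same wrapper written Fetch on Demand style might look like this (again an illustrative sketch, assuming maya.cmds): nothing is cached, and every read goes back to the authoritative scene.

from maya import cmds

class LiveNode(object):
    """Stores only the node name; every read queries the Maya scene."""
    def __init__(self, node):
        self.node = node

    def get_translate(self):
        return cmds.getAttr(self.node + '.translate')[0]  # queried on each call

    def set_translate(self, x, y, z):
        cmds.setAttr(self.node + '.translate', x, y, z)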

This can be simple to reason about as well, but it can also be quite slow, depending on what you are querying. If you’re hitting a DB each time, it will be slow. If you need to build complex python objects from hundreds of Maya or Max calls, it will be slow. If you need to query Perforce each time, it will be slow.

I should note that this is really just a correctness improvement upon Store in Memory and the workings are really similar. The querying of data is only superior because it is done more frequently (so it is more likely to be correct). The changing of data is only more likely to be correct because it will have had less time to change since querying.

That said, in many cases the changing of data will be correct enough. In a Maya scene, for example, this will always be correct on the main thread because the underlying Maya nodes will not be modified by another thread. In the case of Perforce, it may not matter if the file has changed (let’s say, if someone has checked in a new revision when your change is to sync a file).

Transactions

Transactions should be familiar to anyone who knows about database programming or has read about Software Transactional Memory. I’m going to simplify at the risk of oversimplifying. When you use a transaction, you start the transaction, do some stuff (to a ‘copy’ of the ‘real’ data), and commit the transaction. If the ‘real’ data you are reading or updating has changed in the meantime, the whole transaction fails, and you can abort it, or keep retrying until it succeeds.

That is a massive simplification, but it should be enough for our purposes. This is, under the hood, the guaranteed behavior of SCM systems and every database I know of. The correctness is guaranteed (as long as the implementation is correct, of course). However, it is difficult to implement. It is even difficult to conceptualize in a lot of cases. There are lots of user-feedback implications: an ‘increment’ button should obviously retry a transaction, but what if it’s a spinner? Are you setting an explicit value, or just incrementing? Regardless, where you need correctness in a concurrent environment, you need transactions. The question is, do you need absolute correctness, or is ‘good enough’ good enough?
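
The retry loop itself is simple to sketch; begin and commit here are placeholders for whatever your datastore actually provides, so treat this as illustration only:

def run_transaction(work, begin, commit, max_retries=5):
    """Optimistically retry `work` until `commit` succeeds or we give up."""
    for _ in range(max_retries):
        snapshot = begin()       # take a copy/version of the 'real' data
        result = work(snapshot)  # do some stuff against the copy
        if commit(snapshot):     # fails if the 'real' data changed in the meantime
            return result
    raise RuntimeError('transaction kept conflicting; giving up')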

Recommendations

Avoid Store in Memory. If you design things this way, break the habit. It is a beginner’s mistake that I still make from time to time. Use Fetch on Demand instead. It should be your most common pattern for designing your tools.

Be careful if you think you need Transactions. Ensure they are where they need to be (database, SCM), but don’t just go around designing everything as if it needs to be transactional. If you have two programs that can edit the same file, is it OK for one or the other to just win? How likely is that to happen? How will you indicate the failed transaction to the user? I’d suggest designing your tools so transactions are not necessary, and just verifying things are correct when they cross an important threshold (checkin, export, etc.). Do your cost-benefit analysis. A highly concurrent system will need transactions; tools that only work with local data likely will not.

It should be clear, but still worth pointing out, you can mix-and-match these patterns inside of your designs.

Hope that clarifies things, Tyler.


Cloud Based Pipelines?

18/07/2011

Originally posted on AltDevBlogADay:

The rest of software is moving into The Cloud, how come we aren’t doing the same with our tools and pipeline?

I love the cloud.  Yes, I know it’s a buzz word for not quite revolutionary concepts, but I love it anyway.  I love it for the practical benefit I get, and I love it for the technological possibilities it brings.  It doesn’t just mean using web apps- it means using amazing applications that run in any browser on any platform, it means not worrying about storing data locally, it means a rich and expanding personal experience based on the connections between your data and everyone else’s.

And then I think about most of the pipelines I’ve seen and I wonder: what have we missed?  Very often, we are building some of the most incredible and expensive games ever with incredibly shitty sets of tools.  Why do we have essentially the same pipelines as we’ve had for the past 10+ years? (I recently finished a case study of Dark Angel’s pipeline, from 2001, which is remarkably similar to some I’ve seen recently.)  Game production has changed, but pipelines have not.  We’re releasing games that get downloadable content (or are continuously updated like an MMO), and the amount of content is ballooning.  Yet we’re still using essentially the same technologies and strategies as we were in 2001.  There’s something to learn by looking at Cloud technologies and concepts, buzzword or not.

Can game pipelines, too, move into the cloud?

The one essential aspect of the cloud is its basis in service-based architectures.  For the sake of simplicity and those unfamiliar, let’s say a service is a local or remote process that has some set of exposed methods that can be called by a client through a common protocol (JSON, XMLRPC, etc.).  All other aspects of cloud technologies require this service-based architecture.  You couldn’t have the characteristic web apps if there was no service behind them.  You couldn’t run the same or similar page on any platform and device if the work was happening on the client instead of the service.  You couldn’t have a backend that automatically scales if the real work was happening in a Rich Client App (RCA) instead of in a service.

Could we build our pipelines with the same service-based approach (if not the always-there distributed-ness), and would we get similar results?

[Illustration: a cloud]

Yes, we can.  But let’s consider what a service-based pipeline architecture would look like.  The biggest change is moving nearly all functionality out of DCC apps, which are RCA’s, and into libraries that can be consumed by the services.  This is what I’ve been doing for years, and I understand it may be a new thing for many people, but I guarantee you can do it and you’ll be better off because of it, not having to deal with buggy and monolithic DCC apps.  These libraries/services can use headless apps behind the scenes if necessary, to do rendering or some processing (mayabatch.exe or the like).  Avoid it if you can, but you could do it.

The DCC and its UI’s, then, become very simple shells which just call methods on the service, and contain very little functionality of their own.  The service does the processing and calls back to the client (and if the function can be done asynchronously, the user keeps working while the work happens in the background).  The service can communicate to other remote and local services to do the work it needs to do.

Conceptually it is simple, but I promise you, the implementation will be complex.  So the benefits better be worth it.

And they would be.  The first thing you get is better abstraction between systems and components.  We remove ourselves from the hacks and workarounds of programming in a DCC, and can instead concentrate on working in a sensible development environment, not having to worry about debugging in-app or making sure all our libraries work under whatever half-assed and old implementation of python Autodesk provides.  This results in being more deliberate about design decisions- not having a hundred pipeline modules available to you is actually a good thing; it forces you to get your dependencies under control, and you give more consideration to your APIs (I blogged about how server/client systems can be a useful exercise in abstraction).

These abstractions also give greater scalability.  No problem moving your code between versions of your DCC, machine architectures, python/.NET versions, etc.  It doesn’t have the ball and chain of DCC apps, because you’ve taken it all out of the DCC apps.  Compare this flexibility to something like render farms- they usually have very specific functionality and required software, and adding more functionality takes lots of engineering time.  By having ‘normal’ code that can be run on any machine, you can distribute your processing to a farm that can tackle anything, and that doesn’t require complex systems or specialized skills to manage.  This is the distributed processing capacity of cloud computing (in fact you could probably deploy this code to a cloud provider, if you had good server-fu).

These abstractions also lead to language neutrality.  That’s right, I said it.  I didn’t say it is a good idea, just that it’s possible.  Just the same way the Twitter API has been wrapped in three dozen languages, your services should have an API using a common protocol like JSON, and many services and clients can communicate together.  You’re not stuck using COM or marshalling data or any number of other bullshit techniques I’ve seen people use to glue things together.  Your client can be anything- a DCC, a web app, a mobile app- you could even run it via email if you so desired, with zero change to the pipeline itself- only the client code you need to call it.  And don’t forget that a web page hosted in a library like Qt or .NET could also run against the service.

This is software engineering as we tech artists and pipeline engineers should have been doing all along.

[Illustration: a server]

Let’s take a simple pipeline, like a character mesh exporter that includes an automatic LoD creator.  In Maya (or Max, or XSI, whatever), the user just hits ‘export selected’, and it can transfer the mesh data and the Maya filename/mesh name to the Local Service.  It can transfer the mesh data directly as a json object, or it can save it to an fbx file first and transfer the name of the fbx file, whatever- the point is that it isn’t data in the DCC, it’s data from the DCC.

At that point, Maya’s work is done and the user can go back to working while everything else happens in the background in other processes and machines.  Awesome!  Most (all?) DCC’s are still very single threaded so trying to do any real work in background threads is not practical (or stable…).

The Local Service sends the mesh data to some Remote Services to request the generation of some crunched and optimized LoD meshes.  The Local Service can call an Asset Management Service with the scene filename/mesh name, to get the export path of the final mesh file.  The Local Service can then do whatever it needs to do to ‘export’ the content: call some exe files, serialize it, whatever, it just needs to save the exported file to where the Asset Management Service said it should be.

The Remote Services can call back to the Local Service as they finish processing the LoD’s, and the Local Service can save them where they’re supposed to go as well.  All of this without the user having to wait or intervene for anything, and without bogging down his box with expensive, CPU hungry operations.
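
Here is a rough sketch of what the Local Service side of that flow could look like; every name below is an illustrative stand-in, with stub functions in place of the real Asset Management and Remote Services:

import os

def get_export_path(scene_name, mesh_name):
    # Stand-in for the Asset Management Service call.
    return os.path.join('/exports', scene_name, mesh_name + '.mesh')

def save_mesh(mesh_data, path):
    # Stand-in for whatever serialization the pipeline actually uses.
    print('writing %d bytes to %s' % (len(mesh_data), path))

def request_lods(mesh_data, levels, on_lod_ready):
    # Stand-in for the Remote Services; here the LoDs arrive synchronously.
    for level in levels:
        on_lod_ready(level, mesh_data[:max(1, len(mesh_data) // (level + 1))])

def export_selected(mesh_data, scene_name, mesh_name):
    """What the Local Service does once Maya hands over the mesh data."""
    export_path = get_export_path(scene_name, mesh_name)
    save_mesh(mesh_data, export_path)
    request_lods(mesh_data, levels=(1, 2, 3),
                 on_lod_ready=lambda lvl, lod: save_mesh(lod, '%s.lod%d' % (export_path, lvl)))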

[Illustration: a desktop PC]

Is this complex?  Yes.  Is it possible for a technically competent team to do?  Absolutely.  Pipelines are the bastard child of game technology, and it shows: we have been doing the same crappy things for a decade.  If we want to minimize the ballooning costs of content development, develop robust pipelines capable of supporting games after ship with updates and DLC, and, let’s face it, work on some inspiring and exciting new technology, we’ll take pipelines to the cloud.


What would a browser-based pipeline look like?

25/06/2011

So I’m fully on the browser-based app bandwagon, but what would that technology look like implemented in a traditional game pipeline?

You have a totally portable UI.  To some extent, you can get this with 3ds Max and .NET, or Maya and PyQt.  With both of those, though, there is still a significant platform reliance, and inevitably there are layers of quirks (I can only speak for 3ds Max, where learning how to use your C# .NET UI components inside of it was a never-ending nightmare spanning the full spectrum of problems, but I assume, based on intuition and posts on tech-artists.org, that the experience is similar in Maya).  With a browser, you have a really, truly portable UI that you can use from any app or from the browser itself.  You can just use one of the available .NET/Qt controls to host a browser inside of a control.
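
For example, hosting a web-based tool inside a Qt widget takes only a few lines; this sketch uses PyQt5’s QWebEngineView (a modern stand-in for the WebKit control that was current at the time), and the URL is a placeholder:

import sys
from PyQt5.QtCore import QUrl
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebEngineWidgets import QWebEngineView

app = QApplication(sys.argv)
view = QWebEngineView()
view.load(QUrl('http://localhost:8080/tools/lod-generator'))  # your tool's web UI
view.show()
sys.exit(app.exec_())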

You have a totally decoupled UI.  The decoupling is even more important than the portability.  Nothing you do in JavaScript or HTML is going to be dependent upon the quirks of Max and Maya, so you should really be able to use the app entirely outside of Maya/Max with few or no changes.

Well guess what, Insomniac has been doing this stuff for a while already.  And it looks fucking awesome.

How does the UI communicate with your app?  The benefits of abstracted UI’s are great when you’re just using standalone tools inside your 3d app, but what about tools that need to interact with the scene?  Well the answer here is to develop all that awesome communication infrastructure you’ve been thinking about ;)  Studios like Volition have pipelines that allow 3dsMax and python to talk to each other, and the same capabilities exist in Maya.  So your UI, hosted in your 3D app, talks to a service (local or otherwise), which then talks back to the 3D app.

Which is awesome, or redundant, depending on how exciting you are.  It seems like a redundant, and complex, step.  But to me it is a box of possibilities.  First, you can do anything on the backend- logging, for example, that is completely transparent to your tools.  But far more interesting is that you’ve introduced a layer of abstraction that can allow you to, say, farm an expensive operation out through your service.  I mean, normally the barrier to entry here is high- you’d need to set up all the client/server infrastructure.  But if you go down the browser-based pipeline, you need to have it set up by default.  So you basically get the flexibility for free.  Imagine:

You have a UI that has a ‘generate LoD group’ button and settings.  You click it.  It sends a message to a local service that says, ‘Tell Maya I want to generate an LoD group with these settings.’  Maya gets the command, and sends info back to the server- ‘Server, here is the info you need to generate the LoDs.’  The server then sends a message back to Maya and to 3 remote machines, so that each one generates an LoD.  Maya finishes and updates the scene with the generated LoD and 3 placeholders.  As the remote machines report progress, they send the LoD info back to the local service, and the local service says ‘Hey Maya, here’s that updated LoD you asked for,’ and Maya updates the scene.

That sounds complex, but think about how much of that you already have, or could use for other things.  The 3d/service layers you can use, and may already have, for any form of interop communication (like COM).  The data structures and functionality you’d need to send data to/from Maya can be used to generate LoDs, or just export meshes, or anything else you can think of doing with mesh data outside of Maya.  The remote farming ability can be used for distributed processing of anything.

So now we move much closer towards what I’ve discussed with the Object Model Pipeline, except it happens much more flexibly, naturally, and asynchronously.  Services expose the functionality to nearly all of your tools- basically anything you could want to use outside of your application- and you can write anything against those services.

Ambitious, but feasible, and not for the faint of heart.  I’ll certainly be pushing for this, and we’ll see how it goes.
