Wednesday, December 11, 2013

FGx Globe R5: New Globe Type, More Aircraft, More Thumbnails



For the past several weeks I have been working on the FGx project. I think FGx stands for Flight Gear Extras. The effort includes the design, style and content of the web pages hosted on GitHub as well as FGx Globe, FGx Aircraft Overview and FGx Airports Runways Navaids.

I have been communicating almost entirely with the other members of the project via the FGx Google Group but I realize this is silly because it's *you* I should be talking to.

All of this work is in need of feedback and comments and suggestions.

The screen grab above is from FGx Globe. It shows aircraft that are currently being flown by people using the FlightGear flight simulator. Of course the globe is in 3D, so you can zoom, pan and rotate it. Move your mouse over an aircraft and a window pops up with the flight details and a thumbnail image of the plane. Open the Crossfeed tab, click on a flight and a separate window opens showing the aircraft flying over a 2D map. And there's much more; please explore the tabs. The main thing missing from the tabs is the credits and licensing data for all the tools used to build this app, but this info is being added slowly but surely.

So FGx Globe is in a good enough state - but just for the moment.

Coming up will be fixing the issues with all the aircraft in FGx Aircraft. Some craft are missing, some are missing just a few bits (like wings or propellers ;-), and others have extra bits such as light shields or parachutes. Once that is done, we need to see if we can reattach all the logos and paint jobs.

Once the planes are in order, we can come back to FGx Globe and decide the next big thing, which is: what happens when you zoom way in? How do you get to the place where you can see the planes taking off and landing at the airports? Should the next step be inside FGx Globe or should you transition to a different app? I will be looking into both possibilities in upcoming posts.

In the meantime, happy globe-trotting!








Thursday, November 14, 2013

Leap + Three.js: Boilerplate post at Leap Motion Labs





On the 15th of October Leap Motion Labs published a post written by me:

Thinking as a Designer: What’s a Good Leap + Three.js Boilerplate?

From my point of view it's a fairly good post because it fulfills many of what I consider to be the essential requirements of a good technical post, which include:

  • An assortment of visuals
  • Access to source code easily obtainable on GitHub
  • A YouTube video
  • Plenty of links to useful information
  • And a demo app that works

And, above and beyond the specification items, there's even a fairly lively story.

So how did this post go from the original email request into a published post in about five days?

The answer has little to do with me. The answer may be surprising at first, but then becomes eminently reasonable.

Look at the publisher of the post.

labs.leapmotion.com

And when I say 'look' I mean click on the link and flip through some of the articles.

In my opinion, this site stands out as one of the best online vendor-specific tech journals currently in operation.

The articles are lengthy and yet entertaining, in-depth and yet readable and do a great job of marketing without a heavy sales pitch. I don't think you will find many other start-ups with such a well-worked out formula for disseminating what is actually very complicated stuff.

Why is the Leap Motion Lab doing such a good job when other aspects of the Leap Motion organization are quite lacking? Perhaps it's the people. The editor I worked with, Alex Colgan, in a matter of hours transformed the job of preparing the article from a task into a pleasure. Alex lives and works in Yarmouth, Nova Scotia, but the distance in time and miles did little to prevent a speedy and engaged conversation. And the Google Docs real-time collaboration was a blast.

The main thing is that Alex picked up my style of writing ever so quickly. He made a lot of edits and yet looking back at the post I can't tell if a phrase is his or mine - even in the most technical parts. I worked through the weekend to finish the post, but Alex made it easy.

So if anybody at Leap ever asks you to pen a post for the Labs journal, you should immediately place your hands over your Leap device and reply with a thumbs up.


Wednesday, October 9, 2013

Leap + Three.js: Phalanges R7 Video


Description

The goal is to build a web app with the procedures required to display - correctly and in real-time - a user-manipulated 3D hand - or claw - or appendage. This demo shows what is still a work in progress.

Source Code here: https://github.com/jaanga/gestification/tree/gh-pages/cookbook/phalanges

Live demo here: http://jaanga.github.io/gestification/cookbook/phalanges/r7/phalanges.html
- Requires a Leap Motion device

The motion is captured using a Leap Motion device. See http://leapmotion.com

The 3D graphics are generated using the Three.js JavaScript library. See http://threejs.org

The video was recorded using CamStudio. See http://camstudio.org/. More work is needed on capturing data at a better frame rate.
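For anyone who wants to see the basic plumbing behind what the video shows, here is a minimal sketch - not the actual Phalanges source - of how Leap finger data can drive simple Three.js boxes. It assumes a recent Three.js, the leap.js library, and an existing scene, camera and renderer.

// A minimal sketch (not the Phalanges code): one gray box per Leap finger.
var fingerBoxes = [];

for ( var i = 0; i < 5; i++ ) {
    var box = new THREE.Mesh(
        new THREE.BoxGeometry( 10, 10, 40 ),
        new THREE.MeshNormalMaterial()
    );
    box.visible = false;
    scene.add( box );
    fingerBoxes.push( box );
}

Leap.loop( function ( frame ) {
    for ( var i = 0; i < fingerBoxes.length; i++ ) {
        var finger = frame.fingers[ i ];
        if ( finger ) {
            // tipPosition is an [x, y, z] array in millimeters
            fingerBoxes[ i ].position.fromArray( finger.tipPosition );
            fingerBoxes[ i ].visible = true;
        } else {
            // the Leap has lost sight of this finger
            fingerBoxes[ i ].visible = false;
        }
    }
    renderer.render( scene, camera );
} );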


Phalanges R7 - Requires Leap Motion Device to operate

Transcript

Hello, this is Theo. And you're looking at the new Phalanges Release 7
Phalanges is the Latin term for finger bones
It's October 8th, 2013, here in San Francisco
What you're seeing is the movements of my hand recreated in a 3D space
I'm using the Leap Motion device to capture the actual movements of my hand and fingers as I speak
The graphics you see in the video are being generated on screen using the three.js JavaScript library
The issue in all this is that the Leap device cannot see all your fingers all the time
So whenever one of the colored blocks disappears, it means that the Leap device cannot see that finger
The objective of the code is to keep all the fingers - the gray box-like objects - visible at all times.
The second objective is to have the fingers *not* go off in crazy directions.
As you can see there's a fairly good connection, but it's not perfect.
I can make my hand pitch - roll - and yaw
I can wiggle my fingers
Mostly the fingers stay visible and not too crooked
And it's a lot better than Release 1
Anyway, all of this is very much a work in progress.
What you are looking at is example or cookbook code.
It's a program intended to be used as the basis for further development
So it's not a thing of beauty.
For example, you can see all the dummy objects used to make sure the fingers point in the right direction
They are just here for testing and won't be visible in later programs
Speaking of later programs
The next generation of code based on this work will be out very soon
Two major features will be getting into this code:
First, you will be able to use these algorithms to save data in the industry-standard BVH file format.
Secondly, you'll be able to use this code to display human-like hands, or animal claws, or robot appendages, or whatever
So there's a lot more to come out of this code.
But for the moment, this is Theo, saying 'Bye for now...'

Sunday, September 22, 2013

Skin and Bones for Leap Motion Devices ~ Update

Please see the previous post on this topic:

http://www.jaanga.com/2013/09/so-close-yet-still-so-far-skin-and.html

This morning I built and posted Phalanges R5 - a great improvement over the previous release:

http://jaanga.github.io/gestification/work-in-hand/phalanges/r5/phalanges.html

with info here:

https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges

The interesting issue in all this is the difference between the methods Leap Motion uses to expose its data and the methods normally used in character animation.

In character animation, all 'bones' are connected. If you move the upper arm then all the bones below move as well.

The Leap provides individual position and angle data for all the fingers and palms.

Quite frequently you do not have information for all the fingers.

In normal character animation, this is not much of an issue because if you move the palm then any unaccounted-for fingers will move along with the palm automatically.

But with the Leap Motion data, fingertips seen previously may end up sitting frozen in space disjointed from the hand or they may simply disappear. For some people this may be a disconcerting series of events.

[Disclosure: my left hand disappeared a number of years ago never to return, so this sort of thing is no big issue for me. ;-]

The first releases of Phalanges relied on the fingertips, finger bases and palms all moving and being controlled separately. This made for lots of fingers disappearing. The more recent releases followed the idea of all bones being connected, and this caused fingertips to move in all sorts of inhuman ways.

The current release is a hybrid. The palm and the finger bases are connected - move the palm and the bases move with it. The fingertips all move independently from each other and from the palm.  This works just fine - until the Leap Motion device decides that a fingertip no longer exists.

So what looks like the next solution to investigate is a hybrid-hybrid solution. When Leap Motion fingertip data is available, use the hybrid solution. When Leap Motion data is not available, make the Leap fingertips invisible and make a completely connected finger visible. When the Leap finger data is again available, switch out the fingers.
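To make the hybrid-hybrid idea concrete, here is a minimal sketch of how it might be wired up in Three.js. It is not the shipping Phalanges code; 'palm', 'fingerBases', 'leapTips' and 'fallbackFingers' are hypothetical meshes assumed to be created elsewhere.

// Connected part: the finger bases are children of the palm,
// so moving the palm moves the bases automatically.
fingerBases.forEach( function ( base ) { palm.add( base ); } );

// Independent part: each Leap-driven fingertip is updated only when data exists.
function updateFinger( i, leapFinger ) {
    if ( leapFinger ) {
        leapTips[ i ].visible = true;
        fallbackFingers[ i ].visible = false;
        leapTips[ i ].position.fromArray( leapFinger.tipPosition );
    } else {
        // No Leap data for this finger: hide the free-floating tip
        // and show a fully connected stand-in finger instead.
        leapTips[ i ].visible = false;
        fallbackFingers[ i ].visible = true;
    }
}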

Now all this may seem a wee bit complicated and you would think that sticking just a single joint between tip and palm would be no big deal. And you would be quite right. And you would be really, really smart, because your brain would know how to crawl in and out and all over things like inverse kinematics and be prepared to write lots more code and include more libraries.

But that sort of thing is way beyond my skill level. My brain starts to fatigue when an app is over 300 lines. The current app is at 222 lines. With a bit of luck we can have a skinnable phalanges release that even my little brain may grasp...

Link:

https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges







Friday, September 20, 2013

So Close / Yet Still So Far: Skin and Bones for Leap Motion Devices - A Progress Report

Hand image from Leap Motion documentation
2013-09-22: See also the update post that discusses the much improved Phalanges R5:
http://www.jaanga.com/2013/09/skin-and-bones-for-leap-motion-devices.html  


The above image is from the documentation for the Leap Motion device. Questions about how to produce such images, or how to access the 'raw data' that produces them, are some of the most frequently asked questions in the Leap Motion forums. The bad news is that there is no source code or coding example currently provided by Leap Motion for producing such a display.

The good news is: Wow! What an excellent coding challenge...

This post is a progress report on the ongoing effort to produce realistic-looking and realistically behaving hands that can be controlled by the Leap Motion device.

The most exciting recent development is certainly this post by Roman Liutikov:

http://blog.romanliutikov.com/post/60899246643/manipulating-rigged-hand-with-leap-motion-in-three-js

With demo file here:

http://demo.romanliutikov.com/three/10/

Roman provides very clear guidance on how to export skin and bones from Blender as a JSON file that can be read by Three.js and used to display arbitrary, real-time finger movements generated by a Leap Motion device.

An interesting side note is that the code uses a BVH-like structure to control the movement of the fingers. I recently wrote about the importance and efficacy of BVH here:

http://www.jaanga.com/2013/09/bvh-format-to-capture-motion-simply.html

The unfortunate aspect of this work is that there are a number of issues with the movement of the hand and fingers.

Nevertheless, this code is an important step forward and well worth inspecting.  I did so myself and have re-written Roman's code in my own (admittedly somewhat simplistic) style:

Demo: http://jaanga.github.io/gestification/work-in-hand/phalanges/liutikov/liutikov.html

With information and background here:

https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges/liutikov

My own work, since the publication of the post on BVH, has involved building up a notion of the best methods for positioning and angling the 'bones' inside the fingers. There are a host of issues - too many to list here - including: hands that sometimes have five fingers, or two fingers, or no fingers; finger 2 easily switches places with finger 3; the order of the fingers is 4, 2, 0, 1, 3; and so on.
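As an aside, one possible (and purely hypothetical) way of taming the jumbled finger order is simply to sort the visible fingers by their horizontal position on every frame:

// Sort a copy of the frame's fingers from left to right by tip X position.
var orderedFingers = frame.fingers.slice().sort( function ( a, b ) {
    return a.tipPosition[ 0 ] - b.tipPosition[ 0 ];
} );
// orderedFingers[ 0 ] is now the left-most visible finger.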

The latest demo (R4) is here:

http://jaanga.github.io/gestification/work-in-hand/phalanges/r4/phalanges.html

Previous releases, source code and further information are available here:

https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges

Much is working: the hand generally moves and rotates appropriately, and the fingers stay in position and don't disappear. But it is readily apparent that the tips of the fingers are still quite lost in space.

Not to worry. Eventually the light bulb will turn on. Actually the more likely thing is that a search on Google will turn up an answer or some person very smart in the ways of vectors will respond on Stackoverflow.

Also worth noting is that the people at Leap Motion gave a demo of routines at the recent developers' conference in San Francisco that may provide a satisfactory response. The interesting thing will be to see which code comes out first and which code is the more hackable.







Tuesday, September 10, 2013

BVH: A format to capture motion - simply, quickly and economically

One of the reasons that Android phones have such good voice recognition is that, under Peter Norvig's guidance, Google has acquired an immense corpus, or database, of what and how people speak. It is my contention that gestures and other non-verbal means of communication will eventually regain some of the primacy they had before primates evolved verbal communication. If this is to happen to gesturing, then we need some fast, cool, effective methods for recording the many gestures people make.

This is not a new thought. Personally and over the years, I have spent some fascinating moments exploring dance notation. And sign language is the codification of gesturing. But coming back to computers, we have all the methods used by computer games to record and replay the movements of game characters. Collada, FBX and the new glTF come to mind.

Here's the thing: gesturing can generate huge amounts of data per second. It's nearly as good (or bad - depending on your outlook) as video - if nothing else because the data gathering usually is via video. Secondly, if data scientists are ever to be able to parse our gestures they will need the data in digital format. The concept represented by the letters 'donut' is far smaller than the audio file of the sound bite, let alone the object in question.

Because of my joy in exploring the Leap Motion device, I have spent the last month or so looking into ways of registering gestures.

One of my experiments is to record all the messages sent out by the Leap Motion device and save them in JSON format. The messages are used by software developers and for testing. In normal coding such messages are typically short and sweet (or not). But even a short gesture may generate a JSON file of over a megabyte. If you have a Leap device, you can have a look at the app here:

http://jaanga.github.io/gestification/cookbook/jest-play/r1/jest-play.html

With source code and more details here:

https://github.com/jaanga/gestification/tree/gh-pages/cookbook/jest-play

Thus, as helpful as this app should be to developers and testers (especially as none of the example apps on the Leap Motion examples site can do this), it is not an app that should be used to record and replay a corpus of thousands or millions of gestures, because the file sizes are too large.

In July I wrote a paper using Google Docs about gesture recording. You can have a look at the paper here:

Skeleton API Considerations for Leap Motion Devices R2

In this document I recommend looking at the BVH format. This is not my first encounter with BVH. Previously I wrote a five-post tutorial on getting animations into Three.js by importing BVH files into Blender. I have yet to hear of or see anybody else who was able to follow - successfully - the tortuous path I proposed you should dance down. And, in the meantime, there have been so many changes that half the stuff no longer works.

Anyway, because of the paper and because of the Leap device, I decided to write a BVH reader based on code I had found (only after many searches over a long period of time), including these two examples:

https://code.google.com/p/papervision3d/source/browse/trunk/as3/trunk/src/org/papervision3d/objects/parsers/mocap/BVH.as
https://github.com/sinisterchipmunk/bvh

Even though I code a lot, I am not really a programmer, and it soon all started to get a bit daunting. When that sort of thing happens I tend to go into denial and whatever. So I did a Google search on 'Three.js BVH reader' and up came this:

http://saqoo.sh/a/labs/perfume/2/

I nearly fell out of my chair. Here was everything I wanted: A simple Three.js app that reads BVH files. And more than that, the code itself is fascinating. The methods the author uses to do 'if/then' within a 'for' loop were totally new to me.

Saqoosha: you are amazing! And thank you for your kind permission to build upon your code. Here's Saqoosha's web site:

http://saqoo.sh/a/

So in short order I had several demos up and running - each accessing a slightly different dialect of BVH. The links are at the end of this post. And now I have had several days for reading and thinking about BVH and comparing it with other methods.

And the TL;DR is that the BVH format is awesome. Accept no substitute.

You can read about BVH here and here and here.

Thing #1. The main data part of the format is about as sparse as you can get in uncompressed ASCII. It's just numbers and spaces. And, most important, it's only the numbers you actually need.

Let me try and explain. To position something like a hand or foot in space you need to specify its X, Y and Z as well as its pitch, roll and yaw angles. That's six numbers - the 'six degrees of freedom'. But a BVH file only records pitch, roll and yaw - three numbers. It assumes you can fill in the X, Y and Z yourself at runtime. How? Because the header tells you the offset distances for all the body bits. In essence, for the purpose of this app, the length of an arm or a leg is a constant, not a variable, so you don't need to repeat these values endlessly, and the actual position is calculated in real time, frame by frame. Of course, all of this is recursive, which short-circuits my tiny brain.
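Here is a small sketch - my own reading of the format, not Saqoosha's code - of how the header offsets and the per-frame angles come together. Three.js does the recursive position math for free via its scene graph; 'addJoint' and 'setJointAngles' are names I made up.

// HIERARCHY section: each OFFSET becomes a constant child position.
function addJoint( parentObject3D, offset ) {
    var joint = new THREE.Object3D();
    joint.position.set( offset[ 0 ], offset[ 1 ], offset[ 2 ] );
    parentObject3D.add( joint );
    return joint;
}

// MOTION section: per frame, only three angles per joint are read.
// BVH angles are in degrees, commonly in Zrotation Xrotation Yrotation order.
function setJointAngles( joint, zDeg, xDeg, yDeg ) {
    var d2r = Math.PI / 180;
    joint.rotation.set( xDeg * d2r, yDeg * d2r, zDeg * d2r, 'ZXY' );
}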

Anyway, the main point about BVH is that it is not possible to come up with a smaller method of recording motion than BVH. [I say this in the context of being a person often in the midst of people who understand mathematics - so wait and see awhile before accepting this assertion.]

Thing #2. Since the X, Y and Z information is all in the header, you can change it at any time - even at run time - and make the character morph as it's moving. Thus you can fairly easily adapt a BVH file to different character sizes.

Thing #3. All the movement data is in an array of strings which contain the relevant angles. At runtime you can easily splice, pull or shift the array and update the character to have a new series of motions. So you could have a character moving about for twenty minutes but be, say, just twenty seconds ahead in terms of data that needs to be loaded.
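A tiny sketch of that streaming idea, assuming a 'motionFrames' array of channel-value strings, an 'applyFrame' function and a 'loadMoreFrames' loader - all hypothetical names, not part of any particular library:

// Play one frame and keep only a short buffer of future frames loaded.
function playNextFrame() {
    if ( motionFrames.length === 0 ) return;
    var values = motionFrames.shift().trim().split( /\s+/ ).map( Number );
    applyFrame( values );
    if ( motionFrames.length < 20 * 60 ) {   // less than ~20 seconds at 60 fps
        loadMoreFrames();                    // append more frames to the array
    }
}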

Thing #4. BVH is supported by Blender, Daz, MakeHuman, Mixamo, FreeMocap and probably a number of other suppliers of 3D stuff. It's a fairly safe format - and the only commonly accepted format dedicated to motion.

Thing #5. The format is quite flexible. It can handle all the bones in the toes and fingers, or creatures with seven tentacles or just a robot arm with three moving parts. This does mean that there are a number of BVH 'dialects' out there, but my guess is that a good parser will eventually be able to identify the major types and adjust accordingly.

Thing #6. BVH data may be generated either via motion capture devices or by algorithm - and you can mix the two easily.

So is BVH perfect? Perhaps it is, but there is an issue. If BVH is the 'verb' - the thing that gets things moving - then what about the 'noun' - the data that needs to be moved about? That is the subject of a whole story in itself, and I will talk about it in an upcoming post.

In the meantime, please enjoy the code that Saqoosha wrote to get your screen to dance:

Live demo: http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha.html
Live demo: http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha-cmu-daz.html
Live demo: http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha-truebones.html

Details and source code here:
https://github.com/jaanga/cookbook/tree/gh-pages/bvh-reader

Tuesday, September 3, 2013

JavaScript App to Record, Save and Replay Leap Motion Gestures in 3D

The Leap Motion device leads the way into new computer interfaces designed specifically for our hands. Why? Because all other devices (mice, pianos, steering wheels, whatever) require that the hands touch, hold or manipulate some other thing. The Leap Motion device is the first device that captures your hand and finger movements in a completely free and unfettered manner.

Being the first device of this kind has its issues. There is not a lot of software for the device. There are not many good tools for designing software. And there really isn't even a good idea as to what the best tools should be or should do.

Frankly, I think this is amazing. This is one of the very rare occasions when we have a 'green field site' that doesn't even have a green field.

So what fun things need to be addressed first? Well, one of the main things is that there is no way of recording the movement of your hands and fingers and then replaying the gestures and being able to read the numeric data as well as view a representation in 3D. And, perhaps more interesting, there is no simple, easy-to-understand FOSS method for recording motions. Or maybe there is but I haven't seen it.

There was, however, a great first attempt. Kai Chung put together two code examples, 'Leap Motion Recorder' and 'Leap Motion Replayer', available from here:

http://js.leapmotion.com/examples.

There are issues with both apps. The recorder app provides no method for saving data, and the replayer app only replays the one file it is hardwired to replay and has no method for selecting and opening files. And in terms of helping programmers, these apps are coded to work with an early beta version of the Leap Motion software and do not seem to work when linked to the current version of the JavaScript Leap library.

But now, as of today, there is 'jestPlay'. If you have a Leap Motion device, try out the app here:

http://jaanga.github.io/gestification/cookbook/jest-play/r1/jest-play.html

There is a replay only version in the works - so that people without the device can replay gestures. It should be available shortly.

The jestPlay app enables you to record your hand movements by saving data from the device to your computer as JSON files. Once saved, you can open these files and watch a full 3D replay of the movements.

The app is a 'cookbook' style app. It is not a fully-featured or even a finished app. It does, however, provide you with a simple working example in just over two hundred lines of very simple JavaScript code that you can use to start developing your own code.

The app provides full access to your operating system's file save and file open dialog boxes - features not normally found in JavaScript, as they were only recently introduced in HTML5.
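The gist of the record-and-save idea looks something like the sketch below. This is not the jestPlay source itself, just one plausible way to do it with the leap.js library and an HTML5 Blob; 'saveRecording' is a name made up for this example.

// Record a plain object per Leap frame - just what is needed for replay.
var recording = [];

Leap.loop( function ( frame ) {
    recording.push( {
        timestamp: frame.timestamp,
        fingerTips: frame.fingers.map( function ( f ) { return f.tipPosition; } )
    } );
} );

// Offer the recording as a JSON file download.
function saveRecording() {
    var blob = new Blob( [ JSON.stringify( recording ) ], { type: 'application/json' } );
    var link = document.createElement( 'a' );
    link.href = URL.createObjectURL( blob );
    link.download = 'gesture.json';
    link.click();
}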

Based on the Three.js library, the jestPlay app allows you to zoom, pan and rotate the views of the replays - so, from another person's point of view, you can see your handiwork.

Source Code
https://github.com/jaanga/gestification/tree/gh-pages/cookbook/jest-play









Monday, September 2, 2013

The Barfolina Pavilion: Towards a Procedural Architecture

The past week or so has been exhilarating because I have been able to churn out so much interesting code. But now I have material for a dozen blog posts. So there is going to be a battle between the coding fingers and the writing fingers. For the moment, the writing fingers are in charge.

Nico B wants an app that would allow people to 'fly' over the harbor in Iquique, Chile, and he wants to do this using the motion controller from Leap Motion. How do you fly in an imaginary way - hundreds of feet over the harbor and down through the buildings - while using just one hand twirling and swirling inches above the Leap Motion device? Neither the Leap Motion sample software nor the Three.js examples have anything that does exactly this. So we needed to come up with the flying app ourselves. In order to build the app, we needed a landscape and buildings to practice with. The actual physical project in Iquique has issues, so we needed to come up with our own imaginary buildings.

The first simulation I came up with was an imaginary landscape:
http://jaanga.github.io/gestification/projects/flying-leap-3d/r1/flying-leap-3d.html

In many ways, this was just fine. But really it was just too good. All you do is float around. It's actually quite difficult to get anywhere specific.

So then Nico found the data for this castle:
http://jaanga.github.io/gestification/projects/flying-leap-3d/castle/load-castle.html 

It may take a number of seconds for it to load. There were many issues here. The biggest issue has been getting the flying speed right: when you are in the castle it's too fast and when you are flying around it's too slow. And then the walls only have textures on the outside. When you go inside, the walls are invisible, so you think you are still outside. Therefore you keep on going and then you are outside without having known you were inside. And so on.

So then Nico came up with the Barcelona Pavilion - and he even sourced a Blender 3D file for it. Conceptually the Pavilion is a perfect place to learn to fly. You can start in the landscape, move to the courtyard, then try to negotiate the narrow passages. It was a perfect fit.
But the Blender file was missing its textures. I found two other Blender files. Again there were issues.

What to do? Build my own models using Blender or SketchUp or whatever? No way. I don't build stuff using tools I did not have a hand in building myself. So I built a 3D model of the pavilion using Three.js. Every floor, wall and window is a procedure. The project took half a day. It's about 500 lines of code and about 28K in file size. Simple, tiny and fast. Perfection.

Not really. As you fly through the building you'll see a dozen or so mistakes. That's because I mostly did things by eye and feel. It will take an hour or two to fix these, but I have no fear. Unlike the old-school CAD programs, there won't be broken walls that have to be 'fixed' or dozens of items that need changing because a height has changed.
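To give a flavor of what 'every wall is a procedure' means, here is a sketch in that spirit - not the actual Barfolina source. Dimensions and colors are made up; an existing Three.js scene with some lighting is assumed.

// Not the Barfolina source - just the flavor of a procedural wall.
function addWall( width, height, thickness, x, y, z, color ) {
    var wall = new THREE.Mesh(
        new THREE.BoxGeometry( width, height, thickness ),
        new THREE.MeshLambertMaterial( { color: color } )
    );
    wall.position.set( x, y, z );
    scene.add( wall );
    return wall;
}

// Change one number and the 'building' updates - no broken walls to repair.
addWall( 20, 3, 0.2,  0, 1.5,  5, 0x886644 );
addWall( 12, 3, 0.2,  4, 1.5, -2, 0xddddcc );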

The pavilion - which I call the Barfolina Pavilion - is viewable here:
http://jaanga.github.io/gestification/projects/flying-leap-3d/barfolina-pavillion/r1/barfolina-pavillion.html

It's all still at a Release 1.0 stage, but it was so much fun thus far that there will be many more releases with things like transparent roofs and people visiting the pavilion and maybe even an exhibit or two.

But working on this project made me think a lot about the buildings of the future. These buildings, as we all know, will be built and edited and updated continuously by robots. "The grandparents are coming. We need to add a guest room." "Saturday is Tammy's birthday. Take down all the walls so there's room for the party." The robots will not want static databases of dimensions. The robots will want to know the program.

So in coding this 1929 building, perhaps I was designing for 2129...

Source Code:
https://github.com/jaanga/gestification/tree/gh-pages/projects/flying-leap-3d





Sunday, September 1, 2013

Folding Polygons the Naughty Way: in 3D, with Intersections and a Video Veneer

My recent post on displaying video on wobbly surfaces amused ArtS of Menlo Park, CA, and, to make a long story short, we had a delightful lunch together today. It's not often that I find anybody like Art in the meatworld who likes to talk about 3D.

We talked about many aspects of 3D. For example, a great introduction to JavaScript coding in 3D is here: http://www.mrdoob.com/projects/htmleditor/. Look for the word 'wireframeLinewidth' (at the end of the long sentence in the middle) and change the '2' to an '8'. Bingo! You are a programmer.

And we talked a lot about tween.js - a brilliant way of morphing all manner of stuff in 2D and 3D.

But mostly we talked about displaying video on folding polygons. Folding polygons are things that look like the images in the link provided by Art to this book: http://graphics.berkeley.edu/papers/Iben-RPP-2006-06/. Basically, if you like origami then you like folding polygons.

Many of the discussions on folding polygons relate to morphing the polygons on a 2D plane such that no vertex is 'naughty' and crosses over or intersects anybody else's line. This is certainly fun stuff. But even more fun - or fun in a different way - is the exploration of 'naughty' folding and 3D folding.

After lunch I built some demos that begin to explore the naughty bits.

Demo:
http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-5x5.html

This first demo is a version of the Three.js demo:
http://mrdoob.github.io/three.js/examples/#webgl_materials_video

The code is greatly simplified and made suitable for use as boilerplate for further apps.

The next demo is here:
http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-pixelated.html

Question: can you make a video with holes in it? This app shows the answer is 'yes!'

The fun thing here is the array that is used to lay out the positions of the holes. See below - if you look carefully you can see the word 'Art' spelled out. Now is that the Art I had lunch with or is it that thing that people do with chemicals and brushes? Who knows.

You can see that the array is laid out as a 20 x 10 grid - just as the cubes in the grid are laid out. A 1 indicates inserting the cube. A 0 indicates leaving the cube out. I enjoyed this cute, ever so simple 'Art'istic method for creating a 'pixelated' video.

var pixels = [
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,0,1,1,1,0,0,0,1,1,0,0,0,0,0,1,1,
1,1,1,0,1,0,1,1,0,1,1,0,1,1,1,0,1,1,1,1,
1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,
1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,
1,1,0,0,0,0,0,1,0,0,0,1,1,1,1,0,1,1,1,1,
1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,
1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,
1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
];
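For illustration, here is one way such an array might drive the cube placement. The demo's own loop may differ in detail; 'videoMaterial' is a hypothetical material already mapped to the video texture.

// Walk the 20 x 10 array: 1 places a video-textured cube, 0 leaves a hole.
var cols = 20, rows = 10, size = 10;

for ( var i = 0; i < pixels.length; i++ ) {
    if ( pixels[ i ] === 0 ) continue;
    var col = i % cols;
    var row = Math.floor( i / cols );
    var cube = new THREE.Mesh( new THREE.BoxGeometry( size, size, size ), videoMaterial );
    cube.position.set( ( col - cols / 2 ) * size, ( rows / 2 - row ) * size, 0 );
    scene.add( cube );
}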

Demo:
http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-deformed-planes.html

This is the actual 'naughty' folded polygon demo. You will note that the 'teeth' are splayed out in 3D, but if they were laid out flat the teeth would intersect. In other words, you could not cut this thing out of a single sheet of paper.

And then again, even if you could, you might also have some trouble displaying video on the impossibly cut sheet of paper.

If you had asked me a year ago whether a script-kiddie of my ability could code the display of a video on an impossible origami fold, I would have laughed. Actually, I am still laughing because the demo kind of sucks. Looking at the video from the back or from the side is vaguely interesting - for about four seconds. And sometimes the video feels a bit 3D-like. But, frankly, I am happiest when it's all 3D through and through. So if you do have any good "Can you do this?" challenges, I would be delighted to hear about them. Art, I am looking at you.

Source Code:
https://github.com/jaanga/cookbook/tree/gh-pages/video-folding-polygons


Tuesday, August 27, 2013

WebGL: Displaying Video on Wobbly, Moving Surfaces

Today there was a message from lyc78026 to the WebGL Developer mailing list that asked this:

I want to implement a curved surface in WebGL, and map a video texture to the surface, is this possible? 
Something like this:
https://dl.dropboxusercontent.com/u/73906326/img.png 
Thank you!
To which I responded the following in an email to lyc78026:

WebGL can be made to do almost anything, so it certainly is possible for a WebGL app to wrap a video around a cylinder as your link indicates.

Unfortunately, my skill level is not up to the necessary level of raw WebGL coding. So I tend to use libraries such as Scene.js and Three.js to do the heavy lifting.

Thus, using these three.js examples as starting points, it was not difficult to come up with the Sintel video playing on a cylindrical surface:

Note that you can use your mouse to spin the cylinder.

http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-cylinder.html
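For the curious, the basic recipe looks roughly like the sketch below, written against a recent Three.js (the 2013 demo differs in detail, and the video path is a placeholder). The usual scene, camera and renderer are assumed.

// Put a playing video onto a cylinder - a sketch, not the demo's exact code.
var video = document.createElement( 'video' );
video.src = 'sintel.ogv';   // placeholder path to the video file
video.loop = true;
video.muted = true;
video.play();

var texture = new THREE.VideoTexture( video );

var cylinder = new THREE.Mesh(
    new THREE.CylinderGeometry( 50, 50, 80, 64, 1, true ),   // open-ended cylinder
    new THREE.MeshBasicMaterial( { map: texture, side: THREE.DoubleSide } )
);
scene.add( cylinder );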

When I finally was able to get the video running, it looked a bit 'old-school'. So I wondered a bit, and here is the video playing skewed at an angle on a cylinder.

http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-skewed.html

I had no idea that you could do this. And this made me wonder some more. How about bending the video in two directions? And having twenty videos at the same time? Bingo!

http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-sphere.html

Then after looking at this for a while, I began to feel that somewhere maybe I had seen stuff like this before. So what could I build that Remi has not seen before? Remi has seen nearly everything 3D. So here is the video running inside a torus.

http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-torus.html

My guess is that once you have viewed 'Blade Runner' from inside the torus, you are good to go in the Matrix.

So now I was on a roll. The video was running like butter through my finger tips. And then I had a flash of inspiration.

When I was young, all our TVs had aerials that were two metal rods sticking up and we called them 'rabbit ears'. What I wanted to do was project the video onto this rabbit: http://mrdoob.github.io/three.js/examples/#webgl_loader_vtk.

Then I could say something like "In the old days all TVs had rabbit ears, but today all you need is the rabbit" ;-)

But that was a fail. And my attempt at projecting the video onto a 3D model of Walt Disney's head was also a fail.

And I thought, OMG, lyc78026 will be sorely disappointed if is there is not a good closing demo.

But as I was having those horrid gloomy thoughts and bad experiences, a light bulb lit up over the top of my head.

Of course, the video does not want to run on 'bunny.js' or 'WaltDisneyLo.js' because these are static objects. The Sintel video is a moving picture thing. The video is only going to run on something that is in and of itself 'running'. If this thing is going to work, the closing demo needs to be a moving picture, moving picture [sic] thing. Otherwise the Sintel video will walk off the set.

And we all know where that is going: to zz85's roller coaster ride [Toggle 'Camera Spline Animation View' to: On]

Again, I don't have Singaporean elementary school math in my kit of tools.

But, be still my heart, I do have my algorebra routines - which is algebra as made known to the world by Al Gore while holding a bra.

If algebra ever drove you to tears then these routines are truly moving pictures.

Let's see how the Sintel video takes on a transcendental function:

http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-transcend.html
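As a rough idea of how a surface can be made wobbly before the video texture is applied, here is a sketch that displaces a plane with sines and cosines - the actual demo's math is its own. A recent Three.js and an existing 'texture' and 'scene' are assumed.

// Displace a plane's vertices with a transcendental function, then texture it.
var geometry = new THREE.PlaneGeometry( 200, 100, 60, 30 );
var position = geometry.attributes.position;

for ( var i = 0; i < position.count; i++ ) {
    var x = position.getX( i );
    var y = position.getY( i );
    position.setZ( i, 10 * Math.sin( x / 15 ) * Math.cos( y / 10 ) );
}
geometry.computeVertexNormals();

var wobbly = new THREE.Mesh(
    geometry,
    new THREE.MeshBasicMaterial( { map: texture, side: THREE.DoubleSide } )
);
scene.add( wobbly );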

Pretty moving, huh?

Moving wobbly video can be done in Three.js. QED, mostly; therefore it can be done in WebGL.

lyc78026: Got any more fun, moving 3D questions?

Link to Source Code
https://github.com/jaanga/cookbook/tree/gh-pages/video-surfaces


Saturday, August 17, 2013

Leap Motion: Towards a New Linguistics

TL;DR Methods for exporting data from our brains are woefully slow and incomplete. Tools such as the new device from Leap Motion may provide an intriguing ability to extend language itself with color and other attributes.

Humans Are Not as Fast as Computers as Communicators

The laptop I am using to write this post can export data to the world at 1,000,000,000 bits of data per second - which is quite slow compared to the speed at which it can transmit data internally. But what about the speed of getting the data from my brain to the computer?

If you speak fast, you might output about one hundred sixty words per minute. At five characters per word plus a space that adds up to 960 bytes of data per minute. Typing is generally half that speed.

The world record for piano playing is currently at 765 keys played in a minute. A very good guitar player might hit 600 notes per minute.

Of course there is much metadata as well. Voice has pitch and timbre. The piano has acceleration and duration. So there's more data, but not that much more.

Using a mouse, it may well be possible to click several hundred times in a minute. A game controller is likely to produce even more, but computing an estimate of its data output per minute is beyond my skill set. Still, it can be no more than a few thousand bytes per minute.

Is there a pattern here? If there is then it is really simple: The human being, using current methods, is able to produce only a very small amount of new digital data per minute.

You can read fast, listen fast and view incredibly fast but in terms of creating or generating new information that you can share you are painfully slow when compared to the technology you have in your hand.

And, frankly, even the receiving of data into your brain is not that fast. Reading 300 words per minute - or 1,500 bytes per minute is considered to be speed reading.

These speeds have certainly improved over the centuries. We have been reading and writing for three or so thousand years and speaking and listening to words for perhaps a hundred times that long. Certainly we can gather and disseminate faster than our ancestors. Trying to do speed reading with stone tablets was probably not self-evident.

Nevertheless, the speed of reading and writing comprehension has not improved significantly in my lifetime. For example, any improvement in our brains' ability to process data has been nowhere near the gain in the ability to transmit data that the Internet brought about. In other words, your grandmother could read a letter about as fast as you can read an email.

Perhaps there is some kind of an asymptotic limit to the speed at which our brains can import and export data and we seem to be approaching it. At least we seem to be approaching some limit using the tools we have become accustomed to using over the last millennium.

Emerging Alternative Methods of Writing

But are letters and spoken words the only means we have at our disposal? Are there other ways/methods/vehicles that we could use to communicate?

Let's consider some ways.

I am intrigued by the differences between the way Westerners and East Asians tend to read. Westerners use a phonetic alphabet and East Asians tend to use ideographs. Both have their strong points.

And I begin to see a blend occurring. The East Asians have been dragged into learning the Roman alphabet. And Westerners have begun to learn a new series of ideographs that range from the home icon, reload, mute, go back and other icons as well as smileys and emoji.

I also see much increased visual complexity in the data being sent out: web pages with text and imagery and sophisticated graphic design. Whether codes such as 'lol', 'btw' and 'rofl' speed up your data output is up for grabs.

Full Body Data

Can we switch gears? Is there a new 'communiKid' on the block? Could we import and export data to and from our brains at 5K bytes per minute or more without inserting tubes and connecting wires into our brains?

If so, how would we do this?
When people develop computer games, they need to simulate body movements in order to create animated characters. They use a technology dubbed 'motion capture' to do this. A typical method of motion capture is for an actor to have a number of dots attached to their body, to be filmed going through a series of movements, and for the film to be decoded in such a way that the movement of the dots can be saved as X, Y and Z coordinates.

In this manner, hundreds of thousands of bytes of intentional data can be exported from a human being per minute.

This may be 'poetry in motion' but the data itself is not a mere poem. Every byte of data was caused by an intentional action that occurred in that actor's brain. This is a huge amount of engaged data being created and logged per minute.

When we look at ballerinas, golfers, tennis players, ju-jitsu practitioners and others we can see virtually every aspect of the body brought under control of the mind and dedicated to communicating.

Thus, conceivably, not only can we record huge amounts of data but in the right bodies much of that data could be termed significant or intentional data.

We cannot, however, all be Tiger Woods. Nor do we have access to Hollywood motion data recording studios. Are there, perhaps, other ways to capture the body's kinetic motions and transform that motion into digital data?

Dandy New Device

Golly gee. It's just arrived on my desk.

I am using the device recently released by Leap Motion. It's a tiny device, no bigger than a pack of chewing gum, that records the X, Y and Z position of every finger you move, as well as the three rotation angles of the fingers and their velocity. It is also doing the same and a bit more with the palms of both hands. Whether this is 98 data points or more or less is up for debate; nevertheless, the data is coming in at over 100 times per second, or more than 6,000 events per minute. So is this half a megabyte of data being created per minute? Who knows? All I know is that the more days I code for this thing, the more data I am getting out of it.

Is the data as erudite as a Shakespeare sonnet or as elegant as an Einstein equation? I don't need to answer that do I? But then again the first human grunts or scribbling were also probably nothing to write home about.

Thus you can probably feel safe that you will go to your grave with a keyboard and microphone. But what about the children and the grandchildren?

My intuition is that, using such devices and their successors, we will build new, extra layers of communication. The price is already cheap enough for the device to be attached to all phones or laptops.

Eventually you (or your grandchildren) will be typing in thin air - perhaps more like playing a multi-keyboard organ or air guitar - and what comes out will be some kind of multi-dimensional information stream which in turn builds into a new lexicon: a way of communicating that is phonetic, ideographic, vocal, gestural, 3D and even colorful.

A New Linguistics

If this sounds phantasmagorical, please do remember:

  • We use gestures to write down musical notation to record the details of the gestures used to create music. And then we read that notation and turn the notes back into gestures.
  • Writing is using gestures to record speech by manipulating pen and ink or by frenetic tapping on plastic keys.

We use gesturing all the time - without thinking. But we subjugate these gestures, we make them the servants of the oral and the aural.

We all know how to wave hello, make the OK sign, show a thumbs up or give the finger. The Italians seem to be able to recognize 250 gestures. These are all the most simplistic and basic events. But they are a start, and they indicate that our brains are wired to communicate using our hands - just as dogs and cats are wired to communicate with their tails and ears. And the use of appendages to communicate with other beings predates visual and auditory communication and is thus perhaps part of our oldest and deepest thought processes.

Perhaps it's time we let gestures act in their own right. Using the Leap Motion device - and its eventual successors and competitors - we will have methods of recording, editing and playing back gestures without reference to, or mediation by, any other device or instrument. [Explanations of these aspects will be provided soon.]

The changes will not happen overnight. It may even take several generations. Today's babies are learning to swipe on tablets. Tomorrow's babies may learn to swipe in the air.

And the changes may start to occur on several fronts. We are all beginning to use voice recognition. This enables us to get up from our desks and be more healthy. Since we are standing and away from keyboards, I can see gesturing and voice recognition working closely together. In the beginning by adding line breaks and formatting the text while we speak but in the future I can see you speaking while gently moving your hands and fingers. The movements are altering the pitch and timbre of what you are writing much like the gestures of the conductor shape the music of the orchestra. What appears on the screen or in our glass is a writing that we today would hardly recognize. The text is full of colors and devices that emphasize or modify the tempo. Diacritical marks indicate the importance of particular aspects to the reader or the writer. The final output is a gushing of sound, music, spoken words, gesture symbols, images. And did I mention that you will need a 3D printer in order to read stuff in the future?

Too Frightening Maybe? Consider the Possible Happiness

Actually, it all does sound quite frightful, doesn't it? Life is already complex enough. Do we really need all this extra stuff happening faster, all at once? Not likely.

But then consider this. People may knit and talk at the same time. Drive a car and chat on the phone - not. Take a shower and sing simultaneously - that's good. And when you are doing this, my guess is that you tend to be happy. The more parts of your brain you use, the happier your brain. The more your brain is fully engaged, the more fulfilled you feel.

This writing - or even the linguistics - of the future may well embody more of your body in the service of more comprehensive, faster and more fulfilling methods of communicating.

As I mentioned before, the Leap Motion device is what is opening up these thoughts. It can recognize movements of a fraction of a millimeter. It's small, and its successors will be embedded in phones, laptops and wearables. If it only costs $80 today, how little will it cost in five years?
The Leap Motion device is not a game-changer. For example, it won't significantly alter this year's holiday season technology sales. But it will change the game. In ten years or so the rules of the game we call life will be different than they are today. Or is that just me waving my hands at you?

Links

https://www.leapmotion.com/

And here is a link to some of the code I have been working on that provoked these thoughts. This code could not have been written without the support of Leap Motion and the wonderful Three.js 3D JavaScript library that enables me to access the WebGL in your browser:
http://jaanga.github.io/gestification/

If you do not have a Leap Motion device you can get a glimpse of the apps using the links on this page
http://jaanga.github.io/gestification/no-leap-view-only.html



Thursday, July 25, 2013

Leap Motion: real-time 3D data is a leap into the future

I have had a Leap Motion device on order since July 2012. Earlier this week I received an email saying that my device has been shipped. Yay!

Even better: courtesy of a friend, I was able to obtain a developer version of the device last week and I have been coding for it ever since.

I am in heaven.

My on-going dream is to code around real-time data in a 3D world.

With the Leap Motion I am reveling in an overabundance of 3D data.

Perhaps my brain is so happy because it is seeing me do what it does for itself incessantly without thinking!

I think my brain also likes it when a lot of it is being used on work that requires a lot of concentration. And, boy, coding for the Leap Motion devices requires an abundance of concentration as well.

So I have spent the last week poring over the documentation and examples - and trying to get some code out. It's been both fun and frustrating. This is always what happens when you are a newb working in a new area.

And in upcoming posts I will talk in more detail about both these aspects.

But apart from coding for the device, the other aspect is that the device presents new possibilities for communicating via computer. In other words, for designers this is terra incognita. Most of us have very little experience with 3D interaction.

So far, I have taken a rather peculiar direction with learning about this device. I decided that I would not look at applications that are already published. I want my brain to come into this device free and unfettered by existing thinking and current dogma.

I want my brain and limbs to experience this thing and I want them to inform me about what is needed and where to go. Therefore I have only looked at code in the API docs and code in the examples, and I have simply been playing with the device and trying to understand the numbers it sends out.

I am coming to see/feel several thought patterns.

1. This device is about using two hands. If you are only programming it for use by one hand, then much could be done faster and more easily with a mouse, touch pad or cursor keys.

2. The thing you are working on should be 3D. There is no advantage to using the Leap Motion for editing text or arranging photos and other flat stuff.

3. It's about moving things that are moving - particularly with a swinging or radial motion.

This device is for sculptors modelling clay, musicians playing drums, scientists folding proteins and, of course, for gamers.

If what you want to do can be done on a flat surface, then finding a good use for the Leap Motion is going to be challenging. I am not saying it can't be done, but rather that cool ways have not yet been thought of. And much as we all like Minority Report, the greater portion of what we see there could be done just as well with a touchpad. Furthermore, the Leap Motion device has, as of this writing, only a very small field of vision and must be in a position where hand activity is just above the device.

The use that jumps out at me is where you pick up an object with one hand and do something to the object with the other hand. So think of a jeweler picking up and turning a ring with one hand and using a tool to add detail with the other. Or think of an engineer picking up the Starship Enterprise with one hand, rolling it around and inspecting for hull integrity with the other. Do remember that with computers we can make anything happen. Or think of a chemist holding a grain of salt in the palm of one hand while bringing it toward the sun with the other.

The point is that there are going to be many uses for the Leap Motion device but they will only start appearing when we stop thinking with a paper and pencil mentality.

"Toto, I have a feeling we are not in Flatland anymore..."

Link to some of my code:
http://jaanga.github.io/gestification/



Saturday, June 15, 2013

Jaanga oSome Globe ~ R4 ~ Improved Resolution and Better Material Handling


This update to oSome Globe adds much improved resolution for zoom levels 1 and 2. Work on better resolution for zoom level 3 will start soon.

Also, the commands to update the meshes after editing the vertices were added, with the result that rendering quality is much improved.


Link
http://jaanga.github.io/cookbook/osome-globe/r4/index.html

Friday, June 14, 2013

Jaanga oSome Globe ~ R3 ~ Poof of Concept



Yes, I know the idiom is "proof of concept" and thus the title looks like a fail, but in this case the "poof" is intentional.

The above demo is a work in progress, a sketch. You will note that the elevations are grossly exaggerated. In real life the globe is as smooth as a billiard ball. So the 'poof' is just a means of saying that the globe has been slightly exploded.

Note also the light frustum and cubes bobbing out of the globe and other artifacts and curiosities.

BTW, the purply translucent sphere indicates the sea level. What appears inside is the sea bottom - again with a gross exaggeration of depth.

For the moment the 3D topology only goes down to zoom level 3, but I hope in a few days to have 3D all the way down to zoom level 10 or more with elevations at approximately 1,000 meter intervals.

The level of detail after that would be to get down to elevations at about 30 meter intervals. More about that possibility at some future date.

The main thing is that what you see here are the rudiments of an animated 3D globe embedded in a Blogger post with all the code hosted on a GitHub server and the data sourced from the FOSS OpenStreetMap servers.

Keep in mind, as always: "The aim of the project is to create a lightweight 3D library with a very low level of complexity — in other words, for dummies."

So watch out: we dummies are going global!

Link:
http://jaanga.github.io/cookbook/osome-globe/r3/index.html

BTW, the globe up above should work - it's a simplified version of what's available via the link.

Saturday, June 8, 2013

Jaanga oSome Globe ~ Full 3D + 18 zoom levels ~ Cookbook Demo Code

For a lot of the time since the Urdacha debacle in April, I have been working with 3D globe models of the earth for the FGx project. These have for the most part been based on the Three.js globe in Chrome Experiments.

This globe has a kind of usefulness in terms of simple DataViz. I think it can be thought of as the 3D equivalent of the pie chart. Great for beginners.

But for almost any other use, the globe code is quite limited.

So I began to think of the globe I would like to have.

The features should include:

  • Ability to access all sorts of open source and publicly available data from, say, Open Street Map, MapQuest and Google Maps
  • All runs in your browser in just a few hundred lines of code
  • Fully 3D with easy to add and remove 3D assets at runtime
  • Eighteen levels of zoom like OSM and Google Maps
  • Basic 3D topology for the entire globe
  • Full 3D camera controls
  • User interface overlay built with a standard library such as jQuery

I'm sure there's more, but in a nutshell I want a simplified Google Map and a Google Earth that I can create and edit using tools made for dummies.

So far I think it's all doable. The link below will take you to Jaanga oSome Globe r2 on GitHub which supports 18 levels of zoom. I am working on r3 and the work on 3D topology is underway.

Link
http://jaanga.github.io/cookbook/osome-globe/r2



Friday, May 31, 2013

Three.js and jQuery: Never the Twain Need Meet




The phrase "never the twain shall meet" was used by Rudyard Kipling, in his Barrack-room ballads, 1892:
"Oh, East is East, and West is West, and never the twain shall meet."
I use this as my lesson-learned from playing with Three.js and jQuery. In this post I will suggest keeping the two different libraries in quite separate places and writing in two styles.

And, actually, I am quite wrong. Jerome Etienne at learningthreejs.com is doing an amazing job of building a combined dialect which he has named tQuery. And for proof, just look at the hugely complex structures that Steve Wittens is building using tQuery.

But now let us consider the wrong way.

Three.js says it is 'a lightweight 3D library with a very low level of complexity — in other words, for dummies.'

jQuery says it 'makes things like HTML document traversal and manipulation, event handling, animation, and Ajax much simpler'.

Guess what? What is easy 4 1 is not necessarily easy 4 2 let alone 4 U.

The two libraries appear to come from two different planets.
  • Three.js is JavaScript kindergarten coding. If you can do IF's and FORs you can do THREE.JS
  • jQuery is a JavaScript takeout menu. If you can order two from Column A and three from Column 4, you are good to go on jQuery.
Which library is from Mars and which one is from Venus? Who knows?

Anyway, the thing is that both libraries are extremely popular in their niches. And it is highly likely that people (like me, for example) want to combine the user experience of jQuery with the 3D of Three.js.

The question is: can one easily combine the two styles?

The normal thing is to say: I am writing a program. My program should have a style, and to make life easier for the maintainer, that style should be consistent throughout.

So then which style should you follow? Mr Doob's or John Resig's?

Each giant is so huge that this midget can't seem to stand on both their shoulders at the same time.

And yet we need to write apps that are 3D and have calendars, that are animated but can show text in an accordion. What can small people do?

So here is another quote, this time from Saint Augustine:
Cum Romam venio, ieiuno Sabbato; cum hic sum, non ieiuno: sic etiam tu, ad quam forte ecclesiam veneris, eius morem serva, si cuiquam non vis esse scandalum nec quemquam tibi.
Basically, this Latin boils down to: "When in Rome, do as the Romans do."

The cookbook samples linked below will let you be completely schizophrenic - while maintaining total sanity throughout - as you play with both libraries together.

Here are some guidelines.

We like Google, Bing, Alpha. The more words we show in straight HTML, the more they will like us.

JavaScript and torus knots are not really SEO-able.

So let's keep two separate files. The HTML and jQuery go in the main HTML file. The Three.js goes into its own file, pulled into the main page by an IFRAME. In the main file, you write in jQuery style. In the iframe, you write in Three.js style.

So the final page is built from two completely different files. And both pages - if well-written - should display all by themselves.
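
Here is a minimal sketch of the main page - the upstairs, jQuery-style file. The file names, element ids and library versions are illustrative, not the exact cookbook code:

<!-- index.html: HTML and jQuery live here, written in jQuery style -->
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="http://code.jquery.com/ui/1.10.3/themes/smoothness/jquery-ui.css">
  <script src="http://code.jquery.com/jquery-1.10.2.min.js"></script>
  <script src="http://code.jquery.com/ui/1.10.3/jquery-ui.min.js"></script>
</head>
<body>
  <div id="panel">Wireframe line width: <span id="readout">?</span></div>
  <!-- the Three.js world is a separate, self-contained page -->
  <iframe id="ifr" src="threejs-scene.html"></iframe>
  <script>
    // jQuery style: wait for the DOM, then dress up the panel with a jQuery UI theme
    $( function() { $( "#panel" ).addClass( "ui-widget ui-widget-content" ); } );
  </script>
</body>
</html>

The file threejs-scene.html is just an ordinary stand-alone Three.js page - scene, camera, renderer, render loop - written entirely in Three.js style and blissfully unaware of jQuery.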

But "Wait, wait!" you say. "The jQuery and the Three.js need to communicate."

Here is the secret sauce.

Both libraries will accept communication from the other - as long as you wash your hands and that communication is sanitized.

When Three.js needs something from upstairs, it just prefixes the jQuery variable name with 'parent.$.'

When jQuery needs something from downstairs, it prefixes the Three.js variable name in this sort of manner:

$("#ifr")[0].contentWindow.scene.children[0].material.wireframeLinewidth.toFixed(1)

The magic word is '.contentWindow.'

Both of these 'handshakes' work for both getting and setting values in the other environment.
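
In code, the two directions look something like this. These are fragments, not complete files; the property name 'userColor', the element ids and the exposed global 'scene' are assumptions made for this sketch:

// Downstairs, inside the Three.js iframe: read a value the jQuery page has stashed on its $ object
// (assumes the parent page has done something like $.userColor = "#ff0000").
mesh.material.color.set( parent.$.userColor );

// Upstairs, in the jQuery page: reach down through the iframe to get a Three.js value...
var width = $( "#ifr" )[ 0 ].contentWindow.scene.children[ 0 ].material.wireframeLinewidth;
$( "#readout" ).text( width.toFixed( 1 ) );

// ...and setting works the same way.
$( "#ifr" )[ 0 ].contentWindow.scene.children[ 0 ].material.wireframeLinewidth = 2;

One caveat: the browser only allows this kind of reach-through when the main page and the iframe are served from the same origin, so keep both files on the same server.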

So what are you waiting for?

Welcome to my happy, easy jQThreery schizophrenia!

Link:
http://jaanga.github.io/cookbook/j3qUE/r1/





First-ever high-resolution images of a molecule as it breaks and reforms chemical bonds

I find these first images of molecular bonding to be very exciting.

You may find this curious, as this is meant to be a data visualization site where reality is modeled but rarely shown for itself.

Look at the image. It shows a single molecule displayed from an orthogonal point of view - and nothing is moving.

What will multiple molecules moving and interacting in 3D look like? For the foreseeable future, my guess is that such complex stuff will have to be modeled rather than photographed.

http://phys.org/news/2013-05-first-ever-high-resolution-images-molecule-reforms.html

We have work to do...

Saturday, May 25, 2013

The First Image Ever of a Hydrogen Atom's Orbital Structure

Here is an image of some very small events.


Eventually we will be able to algorithmically generate 3D animated depictions of such events - directly from the actual physics equations.

This in turn will enable us to consider what happens when multiple hydrogen atoms interact.

As well as visualizing what is happening at an even smaller scale.

Monday, May 20, 2013

Old 2.0: How Old becomes the New New


The San Francisco WebGL Developers Meetup held on 15 May at CBS Interactive was in many ways even more mind-boggling than our normal meetup.

The usual type of thing we see at the meetup is some sort of visualization that we have all seen before. The only difference is that we last saw the thing in a mega movie such as Minority Report or on a $75 game DVD running on a computer with dual GPUs. And now we are seeing it in a browser, for free, no plugin necessary. It's so much deja view all over again that we can hardly Yogi Berra it. It's huge and we think it's normal.

Things are moving so fast and yet it feels like we are standing still. Is there a reason for this? The present and the future are fast moving, but we actually live slowly in the present and the past. Is it because we live in our own legacy?

And that was the fun fact of this WebGL meetup. Every demo was trying to speed up the past, to bring our legacy up to modern capability, to make the past go faster miles an hour.

Let me count the ways.

First up: Tony Parisi - along with Remi Arnaud and Fabrice Robinet - showed off glTF. What does glTF do? It's like a hose that sucks 3D models out of people's hard disks and gets them up into the cloud. If we are going to move from Computer Aided Design to Internet Aided Design then we have to get the last twenty years of design data online - quickly, easily and cheaply - without coders always having to rescue stuff. And Tony gave us a presentation of an open-source and probable industry-standard method for doing so - truly a free glTF.

After Tony, up came Aleksandar (Aki) Rodic, who showed a 3D editor built using Three.js and connected to the cloud via Google Drive. I logged in to the online demo while Aki talked and happily added an icosahedron and scaled a few torus knots as he spoke. So what was the old bit here? The editor did not have much power. Apart from the amazing collaboration ability, it was more like AutoCAD 2.18 from 1983 than an editor of 2013. But that's not the point - and I will come back to Aki's demo at the end of this post.

Then came Iker Zugaza from Ludei. OMG. Now we are not just talking old. We are talking archeology. Iker showed us the Ludei game system running on old Blackberry tablets, early Kindles and even an iPad 1. Each of these ancient devices displayed 3D graphics at WebGL-like speed even if the device did not support WebGL.

More than that: you can use your wacky Android dev tool or Objective-C or whatever. Just get your stuff to run in a browser, send the blob to Ludei and they will send you back the code for just about any platform out there - living or dead. For free (well, until you get enough users).

Finally there was Robin Willis from sunglass.io. Think GitHub for designers. Use the tools you like to use - SketchUp, Inventor, SolidWorks or whatever - and then easily and quickly obtain a representation that can be shared, viewed and commented on (and blamed) by the whole design team online. Your project is no longer stuck on a hard disk in some office but is available to the world.

So all of the demos gave the old legacy data and devices a kick in the pants and said "back to the future with you!" All very cool.

But the demo I keep thinking about is Aki Rodic's collaborative 3D edit demo.

And I think I am beginning to understand why.

When we think of the past we think of all that data from 3D Max, Maya, ProE, SolidWorks, Rhino and whatever. Trillions of points and faces.

But the future is not about data, it's about code.  You don't send monster data down the interpipes, you send the code for building and animating monsters. Architects don't send you buildings, they send you the instructions for creating the building. Your DNA does not contain a miniature you, it only holds the code for re-creating you.

Three.js is not a tool for building CAD models, it's a library of WebGL tools. It's the equivalent of DNA. What Aki is sending around is just the instructions on how to use the DNA. He's transferring the Internet equivalent of RNA. I think the future of 3D on the web will be based on techniques that move code as much as data. It's a fast and proven idea.

Speaking of fast and proven, hats off to Tony Parisi for yet again showing that he 'gets' what's happening in 3D and 'puts' on a great event.

Saturday, April 27, 2013

FGx Globe r3.1 ~ Bug Fix Release - Nonetheless Very Nice Bugs

Here is the description on GitHub:

The drop-down lists have received a lot of attention. It should now be a fairly straightforward process to select a different theme or map. Also what gets closed stays closed. This includes windows that have been closed and flights that have been terminated. More world maps have been added to the globe and the TBD panel is beginning to have a healthy set of issues to fix and features to add. 
Work has also continued on bringing in 3D models of airplanes. The current issue has to do with what to show while waiting for the 3D models to load, but this appears to be a solvable issue.
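
On that model-loading wait, one simple pattern is to park a stand-in object at the flight's position right away and swap in the real model when the loader calls back. A sketch only - the loader type and the file name are assumptions, not the FGx code:

// Show a cheap wireframe box while the detailed plane model loads, then swap it out.
var placeholder = new THREE.Mesh(
    new THREE.CubeGeometry( 2, 1, 4 ),
    new THREE.MeshBasicMaterial( { color: 0xffaa00, wireframe: true } )
);
scene.add( placeholder );

var loader = new THREE.JSONLoader();
loader.load( "models/plane.js", function ( geometry, materials ) {
    var plane = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );
    plane.position.copy( placeholder.position );
    scene.remove( placeholder );
    scene.add( plane );
} );
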
And there's a lot more that could be said. I had a look at the app in Firefox and Internet Explorer. Given that I code on and for the Google Chrome browser, there are a ton of issues.

The nice thing is that Three.js - the premier WebGL library for dummies - is sort of running on Internet Explorer. It's just not very snappy. The beauty of this scenario is that people can actually see what they are missing by being on IE.

The real issue, the lesson-learned of today, is something quite different.  I know that I suck at using the Chrome JavaScript developer console. I am working on this. Last night I even fell asleep watching a Paul Irish video on this very topic. But it turns out that I suck even more at the Firefox and Internet Explorer developer consoles. And don't even begin to ask me about Safari and Opera developer consoles.

So not only is the language interpreted differently on each browser, but the available debugging consoles are also radically different.

Even more interesting was that some of the issues definitely had to do with jQuery. And jQuery is meant to be the solid working-class tool for cross-browser support.

Would anybody kindly console me?

Friday, April 26, 2013

FGx Globe r3 ~ FlightGear Planes Now In the Sky

One of my several projects over the last few weeks has been the development of a flight simulator visualizer. You may well ask: what is a 'flight simulator visualizer'?

FlightGear is a popular FOSS flight simulator program. Players like to share their in-the-air status online. A JSON feed for this data is available from FGx's Crossfeed. Their current visualizer for this flight simulator data looks like this. I thought it might be fun to wrap the data around a globe.
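
Wrapping the data around a globe mostly comes down to one small conversion: turning each flight's latitude, longitude and altitude into a 3D position on (or above) a sphere. A sketch of the usual formula - the function name and the kilometre scale are just for illustration:

// Convert latitude/longitude (degrees) and altitude to a position on a sphere of the given radius.
function latLonToVector3( lat, lon, radius, altitude ) {
    var phi = ( 90 - lat ) * Math.PI / 180;      // polar angle, measured from the north pole
    var theta = ( lon + 180 ) * Math.PI / 180;   // azimuthal angle
    var r = radius + altitude;
    return new THREE.Vector3(
        -r * Math.sin( phi ) * Math.cos( theta ),
         r * Math.cos( phi ),
         r * Math.sin( phi ) * Math.sin( theta )
    );
}

// Example: a flight at 10 km over London on a globe scaled in kilometres
var position = latLonToVector3( 51.5, -0.12, 6371, 10 );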

I have just started rev 3 of my FGx Globe. Here is what I just wrote about the new rev:
This revision begins to feel as if it's getting somewhere. The code is simpler, shorter and does more than before. In particular there is much better separation between the user interface and the 3D world. The user interface is now built using jQuery UI while the 3D all resides in a separate file embedded in an iframe. 
The user interface now provides a table that displays all the flights currently underway. For each flight there is a clickable button that opens a separate window. Multiple windows may be opened - each of which displays the current position and navigation data for a single plane. You may select from a variety of map data sources. The user interface is theme-able with all the standard 20+ jQuery UI themes available at the click of a button. All significant settings are savable as 'permalinks'. 
On the 3D side, the data-handling procedures have been re-written in a simpler and more straightforward fashion. Also, the first steps have been taken to introduce more realistic planes. 
There is still much unfinished work in this build. In particular, the only way to remove terminated flights is to reload the window and the only way to completely delete a window is to reset the permalink. And there are several issues with the drop-down lists. Please note that all building and testing has been on Google Chrome. Other browsers will have issues. Other issues are listed in the TBD panel of the main menu.
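
To give a feel for the plumbing, the update loop is roughly this shape - though the feed URL, the field names and the updatePlane helper below are all placeholders, not the real Crossfeed details:

// Poll the flight feed every few seconds; jQuery handles the fetch and the table,
// while the iframe's 3D world moves the planes. Everything named here is illustrative.
function pollFlights() {
    $.getJSON( "http://example.com/flights.json", function ( data ) {
        $( "#flight-count" ).text( data.flights.length + " flights" );
        data.flights.forEach( function ( flight ) {
            updatePlane( flight.callsign, flight.lat, flight.lon, flight.alt );
        } );
    } );
}
setInterval( pollFlights, 5000 );
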
So the new thing here is that I am using jQuery. I have gone over to the dark side. I have drunk the Kool-Aid. I say this because, for the most part, I code in a very simple and spare style.

The Three.js site says this about the code:
The aim of the project is to create a lightweight 3D library with a very low level of complexity — in other words, for dummies. 
Well, I have tried to match that style. In some ways this is because I resemble the categorization and in other ways because the style is simple and pure. But perhaps the most telling reason is that I am not a programmer. I actually don't try to write code or build a program. I am a designer. What I am trying to design is a brain visualization thingy or a stock market snooper or whatever.

I could do this using pencil and paper or clay or some such traditional media. But I just happen to like to use JavaScript as my design medium.

I plan to write more about coding while being a designer, but suffice it to say that much of my work sticks to "for", "if" and "=" along with parentheses, commas and semi-colons.

So the move into adding jQuery to my repertoire is a big one for me.  I think I will most likely become schizophrenic: Writing in Three.js style on some days and jQuery style on other days.

But not to worry, on both types of days my brain will very likely be placed squarely in the middle of the Three.js target market.




Jaanga Pivots Again

Much as I like the thought and idea of blogging in 3D, it looks like I don't do enough of it.  I am already doing more than enough 3D stuff on GitHub, so the work here on the blog is being neglected.

And, on the other hand, there are a lot of matters relating to coding and 3D and designing that I would like to be discussing but am not writing enough about.

So here's what I am thinking of doing for the time being:

Jaanga.com - this site - becomes all writing, and then all the demos and code are on GitHub - at jaanga.github.io and github.com/jaanga.

Well, the demos and code are already there. So all I need to do is link to them from time to time from posts here in this blog.

I envisage writing short text-only posts. This could easily be a Tumblr thing, but since I have enough material on Blogger I don't feel the urge to do a big switch. And let's not get into a Blogger versus WordPress thing. I love them all. All of them are adding so much to the blogosphere, both in terms of content and in ways of processing that content. Hats off to all three.