tag:blogger.com,1999:blog-13852895503971691222024-03-08T12:30:39.258-08:00jaanga<br>
<i>a web site dedicated to the visualization of huge numbers of numbers<br>
<br>
supplying all the Applets + Art + Attitude = 4U 2C Gr8...</i>Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.comBlogger82125tag:blogger.com,1999:blog-1385289550397169122.post-20769564583450273552014-12-05T00:58:00.000-08:002014-12-05T00:58:22.374-08:00The Disruptions Get Disrupted: vA3C Hacker now contends with vA3C Hackette The <a href="http://va3c.github.io/viewer/va3c-viewer-html5/latest/" target="_blank">vA3C Viewer</a> is meant to disrupt the classic closed-source, pay-for-service online 3D model viewers. And it does so quite nicely. But as I pointed out in my previous post, the code was becoming a bit gnarly, so I disrupted the disruption and started <a href="http://va3c.github.io/viewer/va3c-hacker/r2dev/va3c-hacker-r2dev.html" target="_blank">vA3C Hacker</a>. It does ( or will one day ) everything vA3C Viewer does but in a faster, cheaper, smarter way.<br />
<br />
Then this week a funny thing happened. I was looking at the code and I had a thought: well, I could use this stuff myself on a bunch of my projects, but I just need a small subset of the hacker tools. And so this is how the disruptor of the disruption itself became disrupted and "vA3C Hackette" was born.<br>
<br />
vA3C Hacker and Hackette are within a few days of being usable/playable. There are a bunch of fun things. The user interface is built using Markdown - with links that call JavaScript functions that are loaded dynamically and/or with tiny bits of HTML from time to time. Because it's so easy, the interface is very chatty. Think of newspaper columns to the left and right of a wide screen.<br>
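In outline, a Markdown-driven menu can work like this - note that the `js:` link convention and the function names below are invented for illustration, not the actual Hacker markup:

```javascript
// Sketch: turn Markdown links whose target starts with 'js:' into
// HTML anchors that call a JavaScript function. The 'js:' convention
// is invented here for illustration - not the actual Hacker syntax.
function renderMenuLinks( markdown ) {
    return markdown.replace( /\[([^\]]+)\]\(js:([A-Za-z_$][\w$]*)\)/g,
        function ( match, text, fn ) {
            return '<a href="javascript:' + fn + '()">' + text + '</a>';
        } );
}
```

So a chatty paragraph of Markdown can double as the app's entire menu system.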
<br />
And then, since Hackette is just a tiny index.html file that you can drop in anywhere on GitHub, the viewer is multi-cellular - a network of viewing stations.<br>
<br />
And a main thing is that if I can't understand a thing within a few minutes, then that thing has to be replaced by something shorter and simpler.<br>
<br />
But the really main thing is that the app can begin to do its own marketing. The app really simplifies the process of doing screen captures, and that is working well. Performances that include the camera tweening from location to location with text-to-speech voice-over have been created. I am happy with the idea that vA3C will really help with building its own demos.<br>
<br />
The thing I am not happy about is this. I can't seem to balance writing and coding. I look at the work of Jeremy Tammik on <a href="http://thebuildingcoder.typepad.com/blog/" target="_blank">The Building Coder</a> and Kean Walmesley at <a href="http://through-the-interface.typepad.com/" target="_blank">Through The Interface</a> and see two people who can mix coding and writing and a busy schedule all in the same day. Well, maybe one day when I grow up...<br>
<br />
<h4>
<b>Links</b></h4>
Live: <a href="http://va3c.github.io/viewer/va3c-hacker/latest/">http://va3c.github.io/viewer/va3c-hacker/latest/</a><br />
Code: <a href="https://github.com/va3c/viewer/tree/gh-pages/va3c-hacker">https://github.com/va3c/viewer/tree/gh-pages/va3c-hacker</a><br />
Readme: <a href="http://va3c.github.io/viewer/va3c-hacker">http://va3c.github.io/viewer/va3c-hacker</a><br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com14tag:blogger.com,1999:blog-1385289550397169122.post-15544885466061618342014-11-23T01:20:00.000-08:002014-11-23T01:20:14.296-08:00vA3C Hacker R1: A Possible Solution To A Personal Dilemma?I have been having a lot of fun with <a href="http://va3c.github.io/viewer/va3c-viewer-html5/latest/" target="_blank">vA3C Viewer</a>. It's now up to R7 and has a lot of cool features. But I have been having less and less fun with each release. As the script gets bigger and bigger, adding a new feature takes more and more time. Lately it has taken several days just to get a thing or two going.<br />
<br />
Taking time to add features is normal but it has a dreadful unintended consequence. Coding features over multiple days just about completely annihilates the possibility of writing a post. Your brain is fixated on completing the task. Thinking about a post is a distraction that gets in the way of completing the feature. Most likely the next post will be about the feature you are working on, so it might be easy to just cobble together some thoughts about the current work and get the post out of the way. But you end up coding until bed time and so no post.<br />
<br />
One idea would be to write small apps only - apps that get finished in under a day. But that prevents anything of any great power from being accomplished. Any app of any significance has a lot of code - tens of thousands of lines of code.<br />
<br />
Well, for the last few days, I have been working on an app that may help solve some of these issues.<br />
<br />
vA3C Hacker is a breakaway from vA3C viewer in both thought process and coding style.<br />
<br />
The HTML home page is 47 lines. All it does is load whatever script it is asked to load. If you don't tell it to load anything, then it just loads a page that informs you of scripts it could load for you.<br />
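The scheme can be sketched in a few lines of plain JavaScript - to be clear, this is an illustrative sketch only; the function names and the semicolon-separated hash format here are my simplifications, not the shipping 47 lines:

```javascript
// Sketch of a hash-driven script loader ( an illustration, not the
// actual Hacker source ). '#script-a.js;script-b.js' asks for two scripts.

// Turn the URL hash into an array of script URLs.
function parseHash( hash ) {
    if ( !hash || hash === '#' ) return [];
    return hash.slice( 1 ).split( ';' ).filter( Boolean );
}

// Append one <script> tag per requested URL. With no request,
// fall back to a default script that lists what could be loaded.
function loadScripts( hash, defaultScript ) {
    var urls = parseHash( hash );
    if ( urls.length === 0 ) urls = [ defaultScript ];
    urls.forEach( function ( url ) {
        var script = document.createElement( 'script' );
        script.src = url;
        document.head.appendChild( script );
    } );
    return urls;
}
```

Everything interesting lives in the scripts being loaded; the home page itself stays tiny.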
<br />
What I have been doing is writing scripts that 'feed the beast'. Most are a few dozen lines at most - projects completed in a few hours. It sort of reminds me of the early versions of WordPress. Back then, if you went to login.php or page.php you came upon virtually everything to do with that particular function. You needed to go nowhere else to see or understand what that script did.<br />
<br />
This made WordPress very easy to learn and to feel comfortable with. And it's the feeling I am getting with Hacker. Today I added two more features: the ability to drag and drop files into the app and the ability to drag 3D objects around in the 3D space. Both are totally standalone. You could quite easily look at the code of either and add one to your app without the other.<br />
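The drag-and-drop feature boils down to the standard HTML5 pattern - here is a generic sketch of that pattern ( not the Hacker code itself; the function names are mine ):

```javascript
// Sketch of a standalone HTML5 drag-and-drop file reader
// ( a generic pattern, not the actual Hacker source ).

// Pull the File objects out of a drop event.
function getDroppedFiles( event ) {
    event.preventDefault();
    return Array.prototype.slice.call( event.dataTransfer.files );
}

// Wire a DOM element as a drop target; the callback receives
// each file's name and its text contents.
function enableDrop( element, callback ) {
    element.addEventListener( 'dragover', function ( e ) { e.preventDefault(); } );
    element.addEventListener( 'drop', function ( e ) {
        getDroppedFiles( e ).forEach( function ( file ) {
            var reader = new FileReader();
            reader.onload = function () { callback( file.name, reader.result ); };
            reader.readAsText( file );
        } );
    } );
}
```

Because it touches nothing else, a script like this really can be lifted into another app as-is.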
<br />
The process I am inventing for myself has probably been invented many times before. I just don't know its name and don't know how to search for it. But eventually I will find these out and, hopefully, report back with even greater understanding and more detail.<br />
<br />
The other interesting aspect - and difference with vA3C Viewer - is the intent to stand on the shoulders of giants even more. The Viewer does a very nice job of giving access to a wide variety of content - from dozens of algebraic visualizations to dozens of models of aircraft. Hacker should continue doing this, and it also begins to give access to a wide variety of code. Of course, the basis is WebGL and Three.js - just like Viewer. But, for example, instead of trying to create a geometry editor, Hacker gives access to zz85's <a href="http://zz85.github.io/zz85-bookmarklets/threelabs.html" target="_blank">Three.js Inspector</a>.<br />
<br />
With a bit of luck, the next few days should see Hacker providing easy access to even more 'shoulders of giants'. But the real proof of the potential of the Hacker way of doing things will be the production of blog posts. The more posts you see, the more the efficacy of the system will be demonstrated.<br />
<h4>
<b>Links</b></h4>
Live: <a href="http://va3c.github.io/viewer/va3c-hacker/latest/">http://va3c.github.io/viewer/va3c-hacker/latest/</a><br />Code: <a href="https://github.com/va3c/viewer/tree/gh-pages/va3c-hacker">https://github.com/va3c/viewer/tree/gh-pages/va3c-hacker</a><br />Readme: <a href="http://va3c.github.io/viewer/va3c-hacker">http://va3c.github.io/viewer/va3c-hacker</a><br /><br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com32tag:blogger.com,1999:blog-1385289550397169122.post-50938878637293010812014-08-03T21:39:00.000-07:002014-08-03T21:39:31.503-07:00vA3C Viewer R4: Permalinks Provide Fast Easy Free Ways To Source and Save Data Online<iframe height="400px" src="http://va3c.github.io/viewer/va3c-viewer-html5/r4/va3c-viewer-html5-r4.html#autocrapdoodle" width="96%">
Visible only in HTML view here: http://va3c.github.io/viewer/va3c-viewer-html5/r4/va3c-viewer-html5-r4.html </iframe><br />
<i><a href="http://va3c.github.io/viewer/va3c-viewer-html5/r4/va3c-viewer-html5-r4.html#auto" target="_blank">vA3C Viewer HTML5 R4</a></i><br />
<br />
An interesting phenomenon of the architecture/engineering/construction (AEC) industry is the <a href="http://www.aecbytes.com/illustrations/viewpoint/2013/issue_67-images/fig7.png" target="_blank">very</a> <a href="http://images.huffingtonpost.com/2010-02-26-Productivityopt.jpg" target="_blank">low</a> <a href="http://www.wangyujian.com/wp-content/uploads/2013/03/2e.jpg" target="_blank">labor</a> <a href="http://datadrivendesignblog.files.wordpress.com/2014/01/mckinsey-report-1.png" target="_blank">productivity</a>.<br />
<br />
And, yet, we all know that in the future buildings will grow, edit and repair themselves at will - with the help of robots using 3D printers to generate Lego-like re-definable construction elements and humans to sculpt, paint and decorate the beautiful hand-crafted elements.<br />
<br />
So how do we get from that future back to the here and now as quickly as possible?<br />
<h4>
What Do We Have Now? </h4>
<ul>
<li>Building is not easy. There are codes and laws to follow. Construction techniques and design tools are difficult</li>
<li>Building is not cheap. Land, design professionals, developers all take their toll</li>
<li>Building is not fast. It takes <a href="http://www.b737.org.uk/production.htm">eleven days to build a Boeing 737</a> and months to build a house. </li>
</ul>
<h4>
What Do We Want? </h4>
We want every building on the planet under perpetual construction, repair and embellishment - with lots of jobs for everybody. We want to replace the National Registry of Historical Buildings with a registry of hysterical buildings.<br />
<br />
More seriously, we want design and management tools where:<br />
<ul>
<li>Stuff happens. Windows open, doors close and stuff happens just like living in the building. You can see every step of the construction. You can see the building aging and see what needs work or replacing. The digital model and the real-world model complement each other. It's everything in Sim City, Second Life, WoW and <a href="http://www.commandandconquer.com/" target="_blank">CandC</a>.</li>
<li>It just works. Nothing to download. Nothing to install. Nothing to learn.</li>
<li>It happens now. Click and it's here. On your computer. On your phone. Whenever, wherever.</li>
</ul>
Yes, the expensive, specialist apps still need to exist too. Good engineering and good design require professionals with professional tools.<br />
<br />
But that should not stop the cheap, fast, easy sharing, the give and take of data between everybody in real-time.<br />
<br />
And when you think of design that future way, it turns out you can do a lot of what is wanted right now.<br />
<h4>
So Show Me!</h4>
For a whimsical recreation of that desired design future click here: <a href="http://va3c.github.io/viewer/va3c-viewer-html5/r4/va3c-viewer-html5-r4.html#autocrapdoodle">AutoCrapdoodle</a><br />
<br />
Just do it.<br />
<br />
Each time you reload, the future changes.<br />
<br />
Want more hyperlinks to the future? Here are five of them:<br />
<br />
link 1 - <a href="http://goo.gl/zXE5Iv">http://goo.gl/zXE5Iv</a> - Suzanne and Monster Visit the Revit House<br />
link 2 - <a href="http://goo.gl/gHxKKy">http://goo.gl/gHxKKy</a> - Monster and A6 Visit Ms Windy Cloth<br />
link 3 - <a href="http://goo.gl/c3kRJ2">http://goo.gl/c3kRJ2</a> - Monster Visits Stemkoski Land<br />
link 4 - <a href="http://goo.gl/gBNH8m">http://goo.gl/gBNH8m</a> - Mr Wright and Mr Jeep Visit Mme Tranguloid Trefoil<br />
link 5 - <a href="http://goo.gl/NIDgvD">http://goo.gl/NIDgvD</a> - Walt and Lee, Sitting in a Tree<br />
<br />
OK. OK. For many of you the above hyperlinks will not work. ( We are looking at you, iPhone users. ) For many more there will be issues. But nonetheless the demos will work for some people somewhere.<br />
<br />
Somewhere the future is working today.<br />
<br />
Let's describe the technical aspects of what you are ( or could be ) seeing:<br />
<br />
Very large data files are sent to you with no effort. Many of the screens are over twenty megabytes in size - way beyond the size you can attach to an email. All arrive within seconds.<br />
<br />
There's no server dude or web site manager between you and the data. Nothing to load or download. No sending files into walled gardens.<br />
<br />
The files you see are sourced from Three.js, Stemkoski, FGx, Jaanga and vA3C repositories. Five groups with no connection to each other besides their presence on GitHub. GitHub is a place where you can keep data and code at no charge as long as it's open source.<br />
<br />
The app, the data, the learning curve - all cost zilch.<br />
<br />
Use your mouse or fingers and you are panning, rotating and zooming. Learning is minimal.<br />
<div>
<br /></div>
All the designs that arrive at your computer are editable. Change materials, move stuff about, throw things out. All doable.<br />
<br />
If you like what you see, and you are an old-timey sort of person, you may save your work as a file ( probably huge ) to your local hard disk.<br />
<h4>
The New New Thing in vA3C Viewer R4</h4>
On the other hand you can now save your new creations and edits as <a href="http://en.wikipedia.org/wiki/Permalink" target="_blank">permalinks</a>.
The permalinks are small and tidy, yet they actually contain all the information required to recreate your design.<br />
<br />
For example, the five short links provided above *are* the design. These designs have no other existence or presence outside of those links. The links are the designs.<br />
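The idea behind a design-as-permalink can be sketched as a round trip between a state object and a URL hash - the field names below are illustrative only, not the actual R4 permalink format:

```javascript
// Sketch of encoding a design's state into a URL hash and back.
// The state fields are illustrative - not the actual vA3C format.

// State object -> '#%7B%22url%22...' suitable for appending to a URL.
function encodeState( state ) {
    return '#' + encodeURIComponent( JSON.stringify( state ) );
}

// URL hash -> state object, ready to rebuild the display.
function decodeState( hash ) {
    return JSON.parse( decodeURIComponent( hash.slice( 1 ) ) );
}
```

Run the hash through a URL shortener and the whole design fits in a tweet.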
<br />
Of course, once on your screen you could download the design to your hard disk.<br />
<br />
But also you could create and save and publish your own design. The Permalinks tab even provides you with a link to the <a href="http://goo.gl/" target="_blank">Google URL Shortener</a>.<br />
<br />
Using these short links you can easily share your designs via email, posts, tweets or texts. Or, with a bit more work, even share live. And if the data is on GitHub all data changes can become part of the design history.<br />
<br />
All of the code is open source and written in JavaScript - a very nice language for people who are not and do not want to be computer science professionals but who do want to drive and navigate their own knowledge domain programmatically. In other words, it's written in code you could edit yourself.<br />
<br />
All of the 3D work is happening in an <span style="font-family: Courier New, Courier, monospace;">iframe</span>, thus it is easy to append to existing code and web pages while still giving you the ability to edit objects and embed custom animations inside the <span style="font-family: Courier New, Courier, monospace;">iframe</span>.<br />
<br />
**<br />
<br />
Sure, there are many issues and bugs in this current revision. And there's a ton more future stuff to be added.
<br />
<br />
But remember this: George Santayana said "Those who do not learn from history are doomed to repeat it."
<br />
<br />
Now you can also say to those AEC laggards:<br />
<br />
"It seems that those who do learn from history are also doomed to repeat it. So let's have a go at learning from the future..."
<br />
<br />
Links to vA3C Viewer R4<br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/r4/va3c-viewer-html5-r4.html#" target="_blank">vA3C Viewer R4 Live Demo</a><br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/readme-reader.html">Read Me with Feature List </a><br />
<a href="https://github.com/va3c/viewer/tree/gh-pages/va3c-viewer-html5">Source Code on GitHub</a><br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com1tag:blogger.com,1999:blog-1385289550397169122.post-82747686369574822342014-07-27T23:48:00.001-07:002014-07-27T23:48:29.286-07:00vA3C Viewer R3 Update ~ 2014-07-27 ~ No More Russian Dolls ( scenes inside scenes inside scenes )<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisbMAi87m403N0k6Nhyphenhyphenq_XrgcZjgB3q22N3qf9o7yTtQcOE7eVGlcfWiYLl8eyXFbbV2sXgMBvTrl5zcDTK3vxC3QxPG_q_jIMGRS1cbyOwc_CK3-0ORdPrctfij5sMU3-A4LA95CGqCE/s1600/va3c-attributes.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisbMAi87m403N0k6Nhyphenhyphenq_XrgcZjgB3q22N3qf9o7yTtQcOE7eVGlcfWiYLl8eyXFbbV2sXgMBvTrl5zcDTK3vxC3QxPG_q_jIMGRS1cbyOwc_CK3-0ORdPrctfij5sMU3-A4LA95CGqCE/s1600/va3c-attributes.PNG" height="460" width="640" /></a></div>
<br />
<br />
A while ago I decided that the thing I like doing the most, most, most is coding. Yes, there are truthiness issues here, but let's not ruin a good start.<br />
<br />
That does not mean, however, happiness all the time 24/7. And yesterday was one of those times when the process became loathsome. It started with an equal sign in the wrong place, but not so wrong that it caused a message in the console. So: Look. Not find. Look. Not find. Repeat. Repeat.<br />
<br />
Then I wanted to add some sparkle to the project. A good way to add sparkle is to have a light that follows the camera.<br />
<br />
The code in Three.js is easy:<br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;">var pointLight = new THREE.PointLight( 0xffffff, 0.5 );</span><br />
<span style="font-family: Courier New, Courier, monospace;">// as a child of the camera, the light travels with it</span><br />
<span style="font-family: Courier New, Courier, monospace;"></span><br />
<span style="font-family: Courier New, Courier, monospace;">camera.add( pointLight );</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The trick is that you also must add the following:</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: 'Courier New', Courier, monospace;">scene.add( camera );</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Normally you don't need to add the camera to the scene at all - the renderer works fine with a free-standing camera - but in this instance it's necessary to do so because you need to inform the scene that the camera has a new passenger.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Again: Talk to light. It not budge. Plead with camera. It not budge. Finally found the answer on one of West Langley's <a href="http://stackoverflow.com/questions/16456912/point-spotlight-in-same-direction-as-camera-three-js-flashlight">posts on StackOverflow</a>. And, of course, that brings back all the memories of all the previous times I have confronted the un-budging light. So even worse than the problem was the feeling of having been such a dummy again. </span><br />
<span style="font-family: inherit;"><br /></span>
The darkest moments, however, were brought on by the code itself. One issue with being a cowboy coder is that you can code yourself way out into the wilderness and then have trouble getting back.<br />
<br />
One of the amazing things about Three.js is that it allows you to embed a scene within a scene within a scene. It's like the Russian dolls that fit inside one another. In Three.js, however, the scenes can all be visible at once, everything co-mingled on stage.<br />
<br />
It's easy to do. All you need to say is:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">scene1.add( scene2 );</span><br />
<br />
This worked a treat in helping get the vA3C code up and running for the AEC Hackathon, but really started causing problems as in: "Hello, Object, What scene are you in?" "Duh, I dunno" is the usual reply.<br />
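One remedy for the "what scene am I in?" problem is to walk the object's parent chain until you reach the top. A sketch - plain objects stand in here for Three.js scenes and meshes, since any object with a `.parent` link behaves the same way:

```javascript
// Sketch: with nested scenes, find the outermost ancestor of an
// object by walking .parent links all the way up.
function findRootScene( object ) {
    var node = object;
    while ( node.parent ) node = node.parent;
    return node;
}
```

With only one doll, of course, the question answers itself.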
<br />
Anyway, a lot of the file opening code has been cleaned up - and the Russian dolls have morphed into one doll. You can now open files from the web using links &amp; Ajax or from your local hard disk using your OS' File Open dialog. Hats off to Mr.doob for loader.js in the Three.js editor.<br />
<br />
The only newish feature is that attributes are back in a rudimentary fashion. When you click on any item that has userData, the attributes appear in the top right corner - as you can see in the image of <a href="http://thebuildingcoder.typepad.com/">Jeremy Tammik</a>'s Revit House Project above.<br />
<br />
***<br />
<br />
On a completely different note, I am intrigued by Lawrence Kesteloot's post on <a href="http://www.teamten.com/lawrence/writings/norris-numbers.html">Norris Numbers</a> - which starts off with <a href="http://www.johndcook.com/blog/2011/11/22/norris-number/">this quote</a>:<br />
<blockquote class="tr_bq">
My friend Clift Norris has identified a fundamental constant that I call Norris’ number, the average amount of code an untrained programmer can write before he or she hits a wall. Clift estimates this as 1,500 lines. Beyond that the code becomes so tangled that the author cannot debug or modify it without herculean effort.</blockquote>
The gist of the post is that the longer the program, the greater the level of skill required to write it. He closes by wondering aloud if he could achieve a two million line program. (Linux is currently around 15 million lines.) The implication is that it takes smart and clever people to write large programs while smaller programs can be written by less smart people.<br />
<br />
Of course, I have full admiration for a person who has written several programs in the 100,000 to 200,000 line range. But on the other hand, I have just as much admiration - and perhaps more - for the programmer who makes magic happen in 100 lines of code or 10 lines of code.<br />
<br />
Similarly, the way I code as a designer differs greatly from the way a programmer codes. (For example, as a risk-taker, I deliberately leave out quotation marks in places most programmers add them.)<br />
<br />
The importance is not in the length or shortness of the code, nor in its robustness or riskiness. The true benefit derives from the diverse spirits and skills of the people writing the code, their desire and ability to share it - and their full respect for each other's gifts and talents.<br />
<br />
<br />
Links<br />
vA3C HTML5 R3<br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/r3/va3c-viewer-html5-r3.html#">Viewer Live Demo</a><br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/readme-reader.html">Read Me</a><br />
<a href="https://github.com/va3c/viewer/tree/gh-pages/va3c-viewer-html5">Source Code</a><br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com2tag:blogger.com,1999:blog-1385289550397169122.post-14419374137592657242014-07-25T17:50:00.001-07:002014-07-27T23:51:49.852-07:00vA3C Viewer R3 Update: Now with Save and Open<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggBzlcuTXDV8fxgvKtKcof3Ipf4Ip6WINaVntpqi1UpMSTBqR56qxMVOfs4-pfi-yXt_fhDFCjF57ImkmWQgwen34HT0Amep6TPu_Jhq0P9KUO5fQdaOn-tZO0LhD5MOCADBz-IPfTO4Q/s1600/fgx-three-va3c.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggBzlcuTXDV8fxgvKtKcof3Ipf4Ip6WINaVntpqi1UpMSTBqR56qxMVOfs4-pfi-yXt_fhDFCjF57ImkmWQgwen34HT0Amep6TPu_Jhq0P9KUO5fQdaOn-tZO0LhD5MOCADBz-IPfTO4Q/s1600/fgx-three-va3c.png" height="414" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Three.js Examples - FGx Aircraft - vA3C Grasshopper</td></tr>
</tbody></table>
The glorious thing about a computer is that it enables you to process data, save that data, and then - later and elsewhere - open that data for re-processing. This saving and opening thing - being such a sacred holy grail of computing - is normally programmed by the very high-priests in the various cults of data-crunchers. Moving that array of pixels on your screen down to very tiny magnetic blips onto the platter that's spinning very fast and from there to a server-farm in, say, Brea, California and from there to an array of pixels on your friend's screen in Poznan, Poland is not self-evident. It's a job best left to people with PhDs, distinguished resumes and first class brain cells.<br />
<br />
The vA3C Viewer app that's being worked on is for viewing data files. So saving is not really a thing that viewers need to do. Unfortunately, the ability to move things about and change materials started creeping into the viewer. So now there are different states: Do you want to see that pretty pony in blue? Or in pink?<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZ90M7KtuEv8hbCLceCwbBVUmzm8DFtn2oBYLeTVYnpFgCNkMKExg-LE87-1gesVRYtALXwsCknPLMhpc2pY75ujt1zXxGgzizwyo9XZNO-km8iEEPnKf6c-4aP2b2m1LpZ0xQf5EjdgM/s1600/breather-enner-boys.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZ90M7KtuEv8hbCLceCwbBVUmzm8DFtn2oBYLeTVYnpFgCNkMKExg-LE87-1gesVRYtALXwsCknPLMhpc2pY75ujt1zXxGgzizwyo9XZNO-km8iEEPnKf6c-4aP2b2m1LpZ0xQf5EjdgM/s1600/breather-enner-boys.png" height="480" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">AlgeSurf PE - Jurgen Meier Equations</td></tr>
</tbody></table>
Thus a way of registering those states for reuse would be nice. The previous release of the viewer has the ability to create permalinks - text added onto the end of a URL. In turn these were hand-massaged into text files that restore selected states.<br />
<br />
The plan was and still is to automate that process for R3. The idea is to provide a 'save' button that collects the links to all the models in the display along with their position, rotation and scale and saves these into a script file that, when loaded, recreates what you had on your display when you saved.<br />
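That lightweight 'save' can be sketched as follows - the `userData.url` property is an assumption on my part about where a model's source link would live; the position/rotation/scale properties are standard Three.js Object3D fields:

```javascript
// Sketch of a lightweight save: record each model's source URL and
// transform rather than its full geometry. Plain objects here mimic
// the shape of Three.js Object3D properties.
function captureState( models ) {
    return models.map( function ( m ) {
        return {
            url: m.userData.url, // assumed location of the source link
            position: [ m.position.x, m.position.y, m.position.z ],
            rotation: [ m.rotation.x, m.rotation.y, m.rotation.z ],
            scale: [ m.scale.x, m.scale.y, m.scale.z ]
        };
    } );
}
```

A few dozen bytes per model, instead of megabytes of faces and vertices.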
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoE91vh_-u2owjniKBQo9SAz3tGLvqxyp3pHrmEQAM67yFv21Txbu_kwq4x1tLGpCk9bCuYmssNromEBidLt4AKslcJ7a00G4KLe88sIJIn5A5BQlTd_JISE8FNMNWhb95IpuqqopomOg/s1600/klein-pony.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoE91vh_-u2owjniKBQo9SAz3tGLvqxyp3pHrmEQAM67yFv21Txbu_kwq4x1tLGpCk9bCuYmssNromEBidLt4AKslcJ7a00G4KLe88sIJIn5A5BQlTd_JISE8FNMNWhb95IpuqqopomOg/s1600/klein-pony.png" height="388" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">vA3C Revit - Three.js Examples</td></tr>
</tbody></table>
<br />
<br />
This is a simple and lightweight method and very doable. It is not the same thing as going through every item of geometry, finding every face, vertex and the associated materials and shuffling these all into a tidy bundle on your hard disk. This latter activity is for the pros. The former is OK for script kiddies such as your vA3C Viewer team.<br />
<br />
Well, upon the morning that had been appropriated to the permalink coding task, it was decided that reviewing Mr.doob's <a href="http://mrdoob.github.io/three.js/editor/">Three.js Editor</a> would be a good thing.<br />
<br />
The Editor does *not* have a save feature. It does, however, have a command that reads the data in memory and sends that data as text to a new window where you can review the data.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLrVYuWdAsS1U4zH616zH2XIJhwBJMLYwdR3IIS5LhCrniS4fZhwsHV4bfI8ITwz2UgArrOtDl4DLImWSOZvfG-JRSBm8I5Gy9yomIQ_TmjA-a3dvL73IudpB99T4HDi-bNC_-BNCCtDs/s1600/city-monster.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLrVYuWdAsS1U4zH616zH2XIJhwBJMLYwdR3IIS5LhCrniS4fZhwsHV4bfI8ITwz2UgArrOtDl4DLImWSOZvfG-JRSBm8I5Gy9yomIQ_TmjA-a3dvL73IudpB99T4HDi-bNC_-BNCCtDs/s1600/city-monster.png" height="416" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">vA3C Grasshopper - Three.js Examples</td></tr>
</tbody></table>
<br />
And, then, it turns out that HTML5 has the new ability to let you save text that's in a window to a file on your hard disk.<br />
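The usual HTML5 pattern for this - and presumably the one at work here - wraps the text in a Blob, mints a temporary object URL and clicks a throwaway download link. A sketch:

```javascript
// Sketch of the HTML5 save-text-to-disk pattern.

// Wrap a JSON string in a Blob.
function makeJsonBlob( text ) {
    return new Blob( [ text ], { type: 'application/json' } );
}

// Mint a temporary object URL for the Blob and click a throwaway
// anchor so the browser offers the text as a downloadable file.
function saveTextAsFile( text, filename ) {
    var link = document.createElement( 'a' );
    link.href = URL.createObjectURL( makeJsonBlob( text ) );
    link.download = filename;
    link.click();
    URL.revokeObjectURL( link.href );
}
```

No server, no plug-in - the browser alone writes the file.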
<br />
Somehow these two thoughts became co-mingled. Code was copied, pasted and cobbled together. A 'Save' button was added to the vA3C Viewer.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7YB26w6dZUFkvjN5SF0ftnD6yrrLA9f9OuKCnMJqQ0QhROYY4JEmbcvnntEYXY3ixnQdDTQWbz5hUeQQ0qnrLeHiRo3PRindrFIcR4mMCfDfZHNWTxcVnyr8l4hnSgtXs71M6GB9LAmg/s1600/fgx-threejs-meier.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7YB26w6dZUFkvjN5SF0ftnD6yrrLA9f9OuKCnMJqQ0QhROYY4JEmbcvnntEYXY3ixnQdDTQWbz5hUeQQ0qnrLeHiRo3PRindrFIcR4mMCfDfZHNWTxcVnyr8l4hnSgtXs71M6GB9LAmg/s1600/fgx-threejs-meier.png" height="487" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">FGx Aircraft - Three.js Examples - AlgeSurf PE Jurgen Meier Equations </td></tr>
</tbody></table>
And then, the Save button was clicked.<br />
<br />
The mental lollipops nearly fell on the floor.<br />
<br />
It worked.<br />
<br />
vA3C Viewer R3 can now take all the data that is in the display and write it out to a brand new JSON file. And these files, in turn, can be used to create further JSON files.<br />
<br />
Of course, it's not perfect. Files with shaders and other exotics don't work. Some things come in and can be edited and others can't. But many fun things do work.<br />
<br />
The images on this page show assemblies of files from the <a href="http://fgx.github.io/fgx-aircraft-overview/r4/aircraft-overview.html">FGx Aircraft</a> library, the <a href="http://jaanga.github.io/algesurf/parametric-equations/r3/algesurf-pe-r3.html">Jaanga AlgeSurf</a> parametric equations library, the <a href="http://va3c.github.io/three.js/examples/">Three.js Examples</a> library and the <a href="https://github.com/va3c/json">vA3C JSON</a> files prepared for the AEC Hackathon. It's an assembly of arbitrary colorful fantasies. <br />
<br />
So, yes, the computer has the wonderful ability to process data and to save it and reuse it. Unfortunately this does not stop the peeps with loose wing nuts from producing hyper-whimsical images.<br />
<br />
Links<br />
vA3C HTML5 R3<br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/r3/va3c-viewer-html5-r3.html#">Viewer Live Demo</a><br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/readme-reader.html">Read Me</a><br />
<a href="https://github.com/va3c/viewer/tree/gh-pages/va3c-viewer-html5">Source Code</a><br />
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com2tag:blogger.com,1999:blog-1385289550397169122.post-24088871454962835832014-07-22T14:40:00.000-07:002014-07-22T14:40:01.476-07:00vA3C Viewer: Work-in-progress Update: Digging Deep DOM into 3D Models<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYdebLWYdh_GnGkmA4b71jM5fI5bEsvdWqpFnEk0DRuXo3Ifp3ICbJC3pLf8YgqEH60k_UseFZlRjB6DNqByHZ_peAsGqMKn3fVW4gXC3ur4KFC0gLj8kuB1XWP_p7ugH9vxRAb1Qf8UE/s1600/lucy-in-sky-with-birds.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYdebLWYdh_GnGkmA4b71jM5fI5bEsvdWqpFnEk0DRuXo3Ifp3ICbJC3pLf8YgqEH60k_UseFZlRjB6DNqByHZ_peAsGqMKn3fVW4gXC3ur4KFC0gLj8kuB1XWP_p7ugH9vxRAb1Qf8UE/s1600/lucy-in-sky-with-birds.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<i>Lucy in the Sky with Birds</i></div>
<br />
<br />
One of the many joys of JavaScript is its very close binding to the HTML <a href="http://en.wikipedia.org/wiki/Document_Object_Model">Document Object Model (DOM)</a>. This enables little people like me to blast out code that is way above their given brain-grade. With very little training, you can quite quickly and easily see what is causing a hiccough or gripe in the code snippet you are cobbling together. The JavaScript console is your friend. Type in the word 'window' or 'document' and a dot or period, click on the pop-up menu items, repeat.<br />
<br />
It's a reverse Hansel and Gretel. The dots are like little stones you follow. But instead of taking you home they encourage you to visit new worlds you never knew existed. Yes, debuggers in other languages have similar sorts of capabilities - but since they are not as closely linked to the DOM they are mostly good at showing you the garbage you have created and not as good with the curated, fun stuff that's out there.<br />
<br />
But this is not a language comparison diatribe. This post is about accessing 3D models in deep DOM-like fashion.<br />
<br />
I have been looking at Mr.doob's 200+ Three.js coding examples for four years. Their display output and their lines of code are becoming etched in my mind. They become fixed pillars as I look back on my development process.<br />
<br />
Until yesterday.<br />
<br />
Mr.doob's lines of code - and any other code - are as mutable, as real-timable, as DOM-able, as fresh as we want them to be.<br />
<br />
So I am bringing Lucy into Voxel Painter and making her mesh reflect the square in the city of Pisa. I have put Suzanne in the NURBs demo and the Witch of Agnesi cylinder equation in the canvas birds demo.<br />
<br />
For sure, I could do this in a CAD program: insert a 'block' into a drawing etc. The difference is that I am not just using the program. I am creating/editing/playing with the program itself at the same time as I use it.<br />
<br />
It no longer takes teams of really smart computer science wizards many weeks to add a new feature. You can now bring all that 3D data into the DOM. You can use the JavaScript console to guide you into inventing new tricks for that data. And you can do this before breakfast.<br />
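To make "DOM-able" concrete: a parsed 3D model is just a tree of objects with `children` arrays that you can walk and rewrite at runtime, much like walking `document.body`. Here is a dependency-free sketch of that traversal (Three.js provides it built-in as `Object3D.traverse`; the toy scene below is invented for illustration):

```javascript
// Walk an object tree depth-first, calling the callback on every node,
// the same way Three.js's scene.traverse() visits a scene graph.
function traverse(object, callback) {
  callback(object);
  for (const child of object.children || []) {
    traverse(child, callback);
  }
}

// A toy scene standing in for a parsed model such as Lucy:
const scene = {
  name: 'scene',
  children: [
    { name: 'lucy', type: 'Mesh', children: [] },
    {
      name: 'lights',
      children: [{ name: 'sun', type: 'Light', children: [] }],
    },
  ],
};

// Collect the names of all meshes - the same pattern works for
// swapping materials, moving objects, or deleting them outright.
const meshNames = [];
traverse(scene, (obj) => {
  if (obj.type === 'Mesh') meshNames.push(obj.name);
});
```

In a real Three.js page you would type the equivalent straight into the JavaScript console against the live `scene` object, which is exactly the dot-following exploration described above.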
<br />
Links<br />
vA3C HTML5 R3<br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/r3/va3c-viewer-html5-r3.html#">Viewer Live Demo</a><br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/readme-reader.html">Read Me</a><br />
<a href="https://github.com/va3c/viewer/tree/gh-pages/va3c-viewer-html5">Source Code</a><br />
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com3tag:blogger.com,1999:blog-1385289550397169122.post-24446400276677361582014-07-20T01:47:00.000-07:002014-07-20T01:59:13.898-07:00vA3C Viewer R3: Processes Data from Multiple GitHub Sources<iframe height="400px" src="http://va3c.github.io/viewer/va3c-viewer-html5/r3/va3c-viewer-html5-r3.html" width="96%">
Visible only in HTML view here: http://va3c.github.io/viewer/va3c-viewer-html5/r3/va3c-viewer-html5-r3.html </iframe><br />
<i><a href="http://va3c.github.io/viewer/va3c-viewer-html5/r3/va3c-viewer-html5-r3.html">vA3C Viewer HTML5 R3</a></i><br />
<br />
The preview in today's post looks a lot like the preview in yesterday's post. But there is a huge difference. Or a tiny difference - depending on your point of view.<br />
<br />
The issue is this: The Internet is about nice people sharing data - in a free and unencumbered manner - and about naughty people manipulating data in peculiar ways. And, yes, there are other types of peeps as well.<br />
<br />
In the early days of the Internet, everybody was nice and it was fun and easy to share. Nowadays it's different. Of note here, are the increasing restrictions on <a href="http://en.wikipedia.org/wiki/Cross-origin_resource_sharing">cross origin resource sharing</a> and the increasing demand for a <a href="http://en.wikipedia.org/wiki/Same_origin_policy">same origin policy</a>. And, yes, apps on servers can bypass many of these restrictions.<br />
<br />
The result is that most of the larger resources where you can keep things for free - such as Flickr, Imgur, DropBox, GitHub and many others - disable cross origin resource sharing. This means that JavaScript apps running in your browser are stuck with obtaining data from one source at a time. And, yes, there are exceptions.<br />
<br />
Thus the situation for a CAD file viewer such as the vA3C Viewer has been fairly marginal. The only drawings the app could open were drawings from the same domain it was launched from.<br />
<br />
Until today.<br />
<br />
The new - still quite broken - R3 of the vA3C Viewer loads the files we built for it at the AEC Hackathon. And it also loads every example file and every 3D model from Mr.doob's <a href="http://mrdoob.github.io/three.js/examples/">Three.js example folders</a>. And it is loading and displaying the 170+ HTML files of math equations from <a href="http://jaanga.github.io/algesurf/parametric-equations/r3/algesurf-pe-r3.html">Jaanga AlgeSurf</a>.<br />
<br />
Not only is the app loading the files but it's also enabling you to mash-up the data from the three sources any way you want. And, yes, it's doing this real-time sharing in a time-honored, secure manner.<br />
<br />
What's the trick?<br />
<br />
It's easy peasy. Fork the repo on the GitHub server. No need to download a single byte to a local machine. Publish the repo to the GitHub gh-pages branch. Presto! Now all of its data is available to apps running from your GitHub presence.<br />
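In code terms the payoff looks like this, assuming the standard GitHub Pages URL scheme (`<user>.github.io/<repo>/<path>`): once your fork is published to gh-pages, its files are served from your own GitHub presence, so a browser app hosted under the same user is fetching same-origin. The helper and example paths below are illustrative, not part of the vA3C source:

```javascript
// Build a GitHub Pages URL for a file in a repo you have forked and
// published to the gh-pages branch. Because the viewer app and the
// data now share the <user>.github.io origin, the browser's
// same-origin policy no longer blocks the request.
function ghPagesUrl(user, repo, path) {
  return 'http://' + user + '.github.io/' + repo + '/' + path;
}

// Example (hypothetical path): a model file inside a forked repo.
const url = ghPagesUrl('va3c', 'three.js', 'examples/index.html');
// → 'http://va3c.github.io/three.js/examples/index.html'
```

From the app's point of view, nothing special is happening - it is just an ordinary XHR/fetch to its own origin, which is the whole trick.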
<br />
What happens when upstream changes? The official method is to create a pull request and merge. But that is bothersome to the upstream party. It's much easier to simply delete your fork and then re-fork.<br />
<br />
Oh, but then - as GitHub warns - it can take quite a while for the pages to start appearing. Not to worry. Open for editing any file in the new fork, make any change, save. Presto! Your new gh-pages files appear instantly.<br />
<br />
So this is a huge difference in terms of sharing. But what does this mean for GitHub? The GitHub peeps don't really want to go around replicating half-gig repos all over the place. And they don't have to. They use Git after all. A popular repo has many thousands of forks. So for GitHub your new fork - which they set up in a few seconds - is just a few new pointers and diffs. No big deal.<br />
<br />
But for you and me this could be a nice big deal. Currently the site links to something like four hundred files. The current goal is to deliver access to, say, a thousand files of at least somewhat meaningful content.<br />
<br />
The traditional on-line CAD viewer is closed-sourced, upload to a walled garden, do-what-they-allow-you-to-do place. Perhaps the time has come for an open-source, freely shared, highly programmable computer aided design enhancer.<br />
<br />
The current rev is very incomplete and has many broken bits. It does, however, allow you to place Walt Disney's head inside a warped triaxial hexatorus textured with an image of a '53 Cadillac. Welcome to the Matrix.<br />
<br />
<br />
Links<br />
vA3C HTML5 R3<br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/r3/va3c-viewer-html5-r3.html#">Viewer Live Demo</a><br />
<a href="http://va3c.github.io/viewer/va3c-viewer-html5/readme-reader.html">Read Me</a><br />
<a href="https://github.com/va3c/viewer/tree/gh-pages/va3c-viewer-html5">Source Code</a><br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com2tag:blogger.com,1999:blog-1385289550397169122.post-405351785922670022014-07-19T00:15:00.001-07:002014-07-20T00:10:42.900-07:00AlgeSurf Parametric Equations: Math in 3D - fast, pretty and easy<iframe height="400px" src="http://jaanga.github.io/algesurf/parametric-equations/r2/breather-surface/breather-surface.html" width="96%">
Visible only in HTML view here: http://jaanga.github.io/algesurf/parametric-equations/r2/breather-surface/breather-surface.html </iframe><br />
<i>The Breather Surface. Use your pointing device to pan, rotate and zoom</i><br />
<br />
Some of my best friends are mathematicians. This does not mean I necessarily like what they do. On the contrary, I find the <a href="http://en.wikipedia.org/wiki/Leonhard_Euler#Mathematical_notation">Euler-derived notation</a> difficult to parse, not easy to remember and not really pretty to look at.
And then, say you have equations printed on a piece of paper or even displayed in Wikipedia - such as the <a href="http://en.wikipedia.org/wiki/Frobenius_endomorphism">Frobenius Endomorphism</a>. Now tap or swipe the equation with your fingers. Nothing happens! People spend hours formatting text just so it ends up sitting still like this. How lame is that?<br />
<br />
Thank goodness there are alternatives. One of my favorites is computerese. Just translate the equation into Java or Python or JavaScript and now the number of people who can read what is going on increases from thousands to millions.<br />
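For instance, the Witch of Agnesi - a curve that turns up elsewhere on this blog - is usually written y = 8a&sup3; / (x&sup2; + 4a&sup2;). Translated into JavaScript it becomes a one-liner that anyone who reads code can evaluate and fiddle with (this sketch is my own, not taken from any of the AlgeSurf files):

```javascript
// The Witch of Agnesi: y = 8a^3 / (x^2 + 4a^2),
// written as an ordinary JavaScript function.
const agnesi = (x, a) => (8 * a ** 3) / (x ** 2 + 4 * a ** 2);
```

Now `agnesi(0, 1)` gives the peak of the curve, and changing `a` reshapes it instantly - no typesetting required.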
<br />
And another really nice way of displaying math is via the computer display. The display above is generated by the latest update to <a href="http://jaanga.github.io/algesurf/">AlgeSurf </a>- called AlgeSurf PE. PE stands for parametric equations.<br />
<br />
The code for the above can be seen in <a href="http://jaanga.github.io/algesurf/parametric-equations/r2/breather-surface/breather-surface.html">breather-surface.html</a>. A much refined and enhanced version is available via the new <a href="http://jaanga.github.io/algesurf/parametric-equations/r3/algesurf-pe-r3.html">Equation Browser</a>.<br />
<br />
This new display technique represents a major shift in direction from the previously released AlgeSurf <a href="http://jaanga.github.io/algesurf/marching-cubes/r2/1-Overview/Builder.html">Marching Cubes Builder</a> and <a href="http://jaanga.github.io/algesurf/marching-cubes/r2/1-Overview/Player.html">Player</a>. Both the Marching Cubes and the Parametric Equations apps serve the same purpose: to provide access to extensive libraries of well-known equations and allow you to display, edit and enhance these in 3D.<br />
<br />
The Marching Cubes app enables you to do this by accepting and parsing text you enter into an input box - with the text being as close as possible to the standard mathematician's way of writing things. A lot of time and coding went into this effort - with the emphasis being on hiding the code from your view as much as possible.<br />
<br />
In other words, the Marching Cubes app aids and abets you in trying to behave like an old-timey mathematician.<br />
<br />
The Parametric Equations app, however, is all about *coding* math. There are 170+ equations - all derived from Jurgen Meier's wonderful <a href="http://www.3d-meier.de/">web site full of math tutorials</a> written in Java. Each equation is presented as a stand-alone HTML file. The files are about 75 lines short and contain everything needed to load and view the equation in real-time 3D.<br />
<br />
You are very much encouraged to open up any of the files, change the equation and see what happens. It's a fast, fun and easy way to get going with exploring.<br />
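Stripped of the Three.js scaffolding, each of those ~75-line files boils down to one parametric function evaluated over a (u, v) grid. Here is a minimal sketch of that core - using a torus as a stand-in equation, since the breather surface itself runs quite a bit longer (the function names here are my own, not the AlgeSurf ones):

```javascript
// Evaluate a parametric equation f(u, v) -> [x, y, z] over a grid,
// producing the vertex positions that something like Three.js's
// ParametricGeometry would turn into a mesh.
function sampleSurface(equation, segments) {
  const vertices = [];
  for (let i = 0; i <= segments; i++) {
    for (let j = 0; j <= segments; j++) {
      vertices.push(equation(i / segments, j / segments));
    }
  }
  return vertices;
}

// Example equation: a torus, major radius 2, minor radius 0.5.
// Swapping in a different equation here is the whole game.
const torus = (u, v) => {
  const theta = 2 * Math.PI * u;
  const phi = 2 * Math.PI * v;
  return [
    (2 + 0.5 * Math.cos(phi)) * Math.cos(theta),
    (2 + 0.5 * Math.cos(phi)) * Math.sin(theta),
    0.5 * Math.sin(phi),
  ];
};

const vertices = sampleSurface(torus, 10);
```

Change the three lines inside `torus` and you have a new surface - which is precisely what "open up any of the files and change the equation" means in practice.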
<br />
The thing is that math can be hard and complex and very time-consuming as well. For this you have the Equation Browser. This app reads the HTML files and adds many features to the display of the equations. Features include the following:<br />
<br />
<ul>
<li>Reads, parses and displays remote Three.js HTML files</li>
<li>Supports real-time 3D pan, rotate and zoom</li>
<li>Adds access to and editing of a full complement of materials, reflections, lights, shade and shadows</li>
<li>Updates geometry parameters in real time</li>
<li>Displays wireframes, face & vertex normals</li>
<li>Selects background colors or gradients</li>
</ul>
<div>
And the wish list of future enhancements is even longer.</div>
<div>
<br /></div>
<div>
And as important as the new features might be for helping progress math and math apps, there is perhaps an even more important aspect. JavaScript and Three.js are not APIs or apps but they are languages - coming out of infancy and into broad application. Both the Marching Cubes and Parametric Equations routines are demonstrations that these tools can have highly diverse and profound mathematical application. And when you click them, stuff happens.<br />
<br />
Links:<br />
<br />
<b>AlgeSurf PE <a href="http://jaanga.github.io/algesurf/parametric-equations/r3/algesurf-pe-r3.html">Equation Browser</a></b><br />
<b>AlgeSurf PE <a href="http://jaanga.github.io/algesurf/parametric-equations/readme-reader.html">Read Me</a></b><br />
<b>AlgeSurf PE <a href="https://github.com/jaanga/algesurf/tree/gh-pages/parametric-equations">Source Code</a></b><br />
<br /></div>
Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com2tag:blogger.com,1999:blog-1385289550397169122.post-84934565092265471242014-04-30T22:27:00.001-07:002014-04-30T22:27:18.262-07:00Building Borges' Map, One Line of Three.js at a TimeIt's been so long since there's been a post, I don't know where to start.<br />
<br />
Well, let's look at the map. Good idea! unFlatLand R10 has been up for weeks:<br />
<br />
<br />
<iframe height="400px" src="http://jaanga.github.io/terrain-viewer/un-flatland/latest/index.html" width="96%">
Visible only in HTML view here: http://jaanga.github.io/terrain-viewer/un-flatland/ </iframe>
<br />
<a href="http://jaanga.github.io/terrain-viewer/un-flatland/latest/">Jaanga unFlatland</a><br />
<br />
But it's off the map where things have really been happening. The goal - the quest - keeps getting bigger, deeper, wider. Soon the map will be - like Jorge Luis Borges' map - <a href="http://en.wikipedia.org/wiki/On_Exactitude_in_Science">even bigger than the territory it covers</a>. And, unlike most computer games, the closer we get the easier it gets. Well, that's how it feels at the moment. Tomorrow may bring desolation and sorrow, but not if we can keep on the same course.<br />
<br />
So what's the goal?<br />
<br />
A little background first. There are many mapping apps out there. Some are more accurate. Some are prettier. Some are faster. But they are still all maps. Things that get you from point A to B and other old-timey mappish things. Map apps are crap apps.<br />
<br />
The goal is to take something like a 3D programming language and make cartography simply part of the language. Want people dancing in the streets? You can have it. Blue whales swimming up the Gulf stream? Good to go. Adventures on Route 66? Be my guest. Let your imagination go free.<br />
<br />
At the same time, how about a dose of reality? Google Maps shows that the distance from Union Square in San Francisco to the Fairmont Hotel is just four blocks. What Google Maps does not show you is the 500 foot height difference between the two places. Ditto the Spanish Steps in Rome. Or the Peak to the Star Ferry in Hong Kong. From space the world may be flat, but we cyclists know it isn't that flat. <br />
<br />
The goal therefore<br />
<ul>
<li>Make 3D terrain data insanely free, quick and easy to obtain</li>
<li>Build code examples that even dummies can understand</li>
<li>Go places nobody has been to</li>
</ul>
But what do these things mean?<br />
<br />
There are dozens of 2D map overlays available - all following the Tile Map Service standard. All these overlays are available at no charge as long as you are not greedy and you know the tricks. We need to de-trickify access. We need to show: "Here are the URLs that work."<br />
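The Tile Map Service convention mentioned above boils down to a couple of lines of math: longitude, latitude and zoom level map to x/y tile indices, and those indices slot into a URL template. A sketch (the URL template here is illustrative - substitute whichever overlay server you are actually using):

```javascript
// Convert longitude/latitude (degrees) and a zoom level to the
// x/y tile indices used by OpenStreetMap-style tile servers.
// At zoom z the world is a 2^z by 2^z grid of tiles.
function lonLatToTile(lon, lat, zoom) {
  const n = Math.pow(2, zoom);
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// Fill a {z}/{x}/{y} URL template - the "here are the URLs that work"
// part is just knowing which template to feed in.
function tileUrl(template, zoom, x, y) {
  return template
    .replace('{z}', zoom)
    .replace('{x}', x)
    .replace('{y}', y);
}
```

So zoom level 0 is the single tile covering the whole world, and each extra zoom level quadruples the tile count - which is why greed is the thing the free servers worry about.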
<br />
3D data is also available at no charge. But it's imprisoned up in all manner of unusual places and strange ways. We need to unlock this data. We need to put the data up on GitHub and say "Come 'n get it!"<br />
<br />
And we need to take a popular beginner code library - Three.js - and show that instead of standing on a PlaneGeometry object you could be standing on New Jersey.<br />
<br />
So future posts will talk about the 2D data with eighteen or more levels of zoom that is available from dozens of sources.<br />
<br />
The 3D data that covers the entire world - land and water - to a reasonable level of accuracy will be described. For the moment that is '30 seconds' data - which is about a data point every kilometer.<br />
<br />
Wherever there is better data, use it. Currently that means that everywhere on the globe where there is land, there is a data point every ninety meters.<br />
<br />
And in some places, such as locations in Europe and the US, it is possible to get down to '1 second' data - a data point every 30 meters.<br />
<br />
And as always, the coding examples will all be on GitHub. [And always with some broken links and stuff that got out of date before it was finished. ;-]<br />
<br />
And, as always, the code will be the code of the designer. If you understand what 'if' and 'for' and '=' can do in JavaScript then you are good to go. No maps needed...<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-73051066649870702832014-02-14T22:33:00.003-08:002014-02-14T22:33:43.705-08:00Very Simple Menu r1<iframe class="overview" height="500px" src="http://jaanga.github.io/blode/very-simple-menu/r1/index.html" width="100%">
</iframe>
<br />
The goal of the code and apps on Jaanga is to be a resource for people who know a lot about something and just a little about programming.
A hero around here is Mr.doob. He states it bluntly on his <a href="https://github.com/mrdoob/three.js">Three.js site</a> when he says:<br />
<blockquote class="tr_bq">
The aim of the project is to create a lightweight 3D library with a very low level of complexity — in other words, for dummies.
</blockquote>
We are at one with this mindset.<br />
<br />
If you understand the equal sign, an 'if-then' statement and a 'for i = 0 to i = 100', then reading the code on Jaanga should be quite straightforward.<br />
<br />
Also, you do not need to know much about HTML and CSS. The JavaScript Document Object Model, or DOM, is built into every browser, and the DOM enables you to create and control every aspect of a web page from JavaScript alone. Thus all the Jaanga apps are 100% JavaScript for dummies.<br />
<br />
Why try to simplify things?<br />
<br />
It's not about simplifying things, it's about simplifying the programming part of things.<br />
<br />
You are an architect or physicist or mathematician. Do you also need to be a wiz at jQuery or rule at Ruby?<br />
<br />
No, you do not. And, more bluntly, the more time you spend on devising elegant programming code the less time you are spending on your own discipline.<br />
<br />
Have a look at the Very Simple Menu demo and source code. Using just these few lines of code you could actually construct a Content Management System (CMS) that could access thousands of files. [And all of these files can even be hosted for you by GitHub free of charge, but that is another matter.]<br />
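As a hedged sketch of the idea - this is not the actual Very Simple Menu r1 source, and the names `menu`, `renderMenu` and `select` are invented for illustration - the core of such a menu-driven CMS is just a table of labels and functions, rendered as links that dispatch back into JavaScript:

```javascript
// Menu items are plain data: a label plus the function to run
// when the item's link is clicked.
const menu = [
  { label: 'Home', action: () => 'show home page' },
  { label: 'Terrain', action: () => 'show terrain viewer' },
];

// Render the menu as HTML anchors. In a live page the href calls
// back into JavaScript; here it is a javascript: link by index.
function renderMenu(items) {
  return items
    .map((item, i) => '<a href="javascript:select(' + i + ')">' + item.label + '</a>')
    .join('<br/>');
}

// Dispatch: look up the clicked item and run its action.
function select(i) {
  return menu[i].action();
}
```

Scaling this up to thousands of files is then just a matter of generating the `menu` array from a file listing rather than writing it by hand.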
<br />
As simple as this code is, it's worth remembering that this is just Very Simple Menu R1. Could the code be even simpler or the variables have better names? Could the commenting be more explicit?<br />
<br />
R2 should be simpler and yet do more. Why not?<br />
<br />
<br />
<b>Links</b><br />
<a href="http://jaanga.github.io/blode/very-simple-menu/r1/">Very Simple Menu Demo</a><br />
<a href="https://github.com/jaanga/blode/blob/gh-pages/very-simple-menu/r1/index.html">Very Simple Menu Source Code</a><br />
<br />
<br />
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-30641667009762651562014-02-05T18:14:00.000-08:002014-02-05T18:14:05.841-08:00Terrain & Terrain Viewer Updates<div dir="ltr">
<h2>
<span style="font-size: medium;">De Ferranti Gives A Thumb's Up</span></h2>
<div>
A large portion of the Jaanga Terrain elevation data originates from Jonathan de Ferranti's <a href="http://www.viewfinderpanoramas.org/">Viewfinder Panoramas</a> web site. It is essential therefore to have his approval for the usage and translation of the data.</div>
<div>
<br />
<div>
I emailed Jonathan de Ferranti over the weekend explaining the nature of the Terrain project. He responded quickly saying that the usage of his data in this manner is acceptable. Attribution is requested but not mandatory.</div>
<div>
<br /></div>
<div>
So please do feel free to use the Jaanga Terrain data in any way that you wish. The Jaanga portions of the effort are under an MIT license. And similarly to Jonathan's request: attribution is nice but not required.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
<span style="font-size: medium;">Terrain and Terrain Viewer Repositories Now Have Menus</span></h2>
</div>
<div>
All Jaanga material on the GitHub website is available in two ways. You can view the material as source code at <a href="https://github.com/jaanga/terrain">github.com/jaanga/terrain</a> or you can view the material as a web page (using the GitHub Pages feature) at <a href="http://jaanga.github.io/">jaanga.github.io</a>.</div>
<div>
<br /></div>
<div>
If you use the latter, there is now a nice and simple menu system that enables you to move around the web pages quickly and easily. There have been a number of previous iterations of this menu system. This one is the simplest and easiest to maintain.</div>
<div>
<br /></div>
<div>
The goals include</div>
<div>
<ul>
<li>Write everything only once</li>
<li>Everything that is written automagically appears on both the source code and the web pages</li>
<li>Write everything in Markdown format<br /> </li>
<li>Everything is turned into HTML automagically</li>
</ul>
</div>
<div>
There's quite a bit more to the system, but not that much more - or it would start to get complicated which is what we are trying to avoid.</div>
<div>
<br /></div>
<div>
All of this is worth a post or two in its own right, but for the moment just be happy to be able to roam the repos more easily.</div>
<div>
<br /></div>
<div>
<b>Links</b></div>
<div>
<a href="http://jaanga.github.io/terrain/readme-reader.html">jaanga.github.io/terrain/readme-reader.html</a></div>
<div>
<br /></div>
<div>
<a href="http://jaanga.github.io/terrain-viewer/readme-reader.html">jaanga.github.io/terrain-viewer/readme-reader.html</a></div>
<div>
<br />
<br /></div>
<div>
<h2>
<span style="font-size: medium;">New Repository: Terrain Plus</span></h2>
</div>
<div>
This repository is for smaller data sets.</div>
<div>
<br /></div>
<div>
The gazetteer with over 2000 place names with latitude and longitude has been moved here.</div>
<div>
<br /></div>
<div>
The very beautiful 'unicom' elevation data is now here. More about that data in a later post.</div>
<div>
<br /></div>
<div>
<b>Link</b></div>
<div>
<a href="https://github.com/jaanga/terrain-plus">https://github.com/jaanga/terrain-plus</a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h2>
<span style="font-size: medium;">PNG Viewer r3: Many New Features</span></h2>
</div>
<div>
There is now a dropdown that allows you to 'travel' to over 2000 locations.</div>
<div>
<br /></div>
<div>
A 'Lighten' button makes very dark PNG files much easier to read.</div>
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0ZncPM62LBQJRsej16R5SvOQVjX1U_oYe69MNlN6TxQ5WjyNqegKYmRF_VRXbCYqzxnuVuYd7a6gG3kmmkbr3sgcYTPdVoDxatQnDyTAsPyRhR2ALq5o8KyVT85qYR0dtVFbuJT_cV_4/s1600/png-viewer-places.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0ZncPM62LBQJRsej16R5SvOQVjX1U_oYe69MNlN6TxQ5WjyNqegKYmRF_VRXbCYqzxnuVuYd7a6gG3kmmkbr3sgcYTPdVoDxatQnDyTAsPyRhR2ALq5o8KyVT85qYR0dtVFbuJT_cV_4/s1600/png-viewer-places.PNG" height="472" width="640" /></a></div>
<br /></div>
<div>
The major new element is that every place in the gazetteer that is within the current tile area is now displayed on the PNG with a little red box. To the right of the box is displayed the name of the location and its elevation. Note that the elevation is just a height relative to the lowest point in the heightmap. It will take a bit more learning about de Ferranti's data to display the actual elevation. But the data should be good enough so that an object such as a building can be placed on the map and not be up in the air or totally underground.</div>
<div>
<br /></div>
<div>
Now that there is a working prototype, the next step will be to add this feature to unFlatland and start adding objects.</div>
<div>
<br /></div>
<div>
<b>Link</b></div>
<div>
<a href="http://jaanga.github.io/terrain-viewer/png-viewer/r3/png-viewer-r3.html">jaanga.github.io/terrain-viewer/png-viewer/</a></div>
<div>
</div>
<div>
<br /></div>
<div>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
</div>
</div>
</div>
Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com1tag:blogger.com,1999:blog-1385289550397169122.post-77614805459927944122014-02-02T01:32:00.000-08:002014-02-05T15:35:07.332-08:00 unFlatland r5.1: New Revision is Up. Already Hopelessly Outdated by unFlatland r6 Dev<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGDZ9_9LKDOsCSB7_H8vR58IfjrGBW5bs9_cWEumiyvkG9ccK7EIn5k85H7jojN81Ce2BRr_Cof6GrwsGsAPY5zP65GzVByOO1phgHuQHouzIByKqb5rZGHFPz_OjmbSHdBgBUJwr8ab0/s1600/hong-kong.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGDZ9_9LKDOsCSB7_H8vR58IfjrGBW5bs9_cWEumiyvkG9ccK7EIn5k85H7jojN81Ce2BRr_Cof6GrwsGsAPY5zP65GzVByOO1phgHuQHouzIByKqb5rZGHFPz_OjmbSHdBgBUJwr8ab0/s1600/hong-kong.PNG" height="328" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">unFlatland r6 Dev ~ view of Hong Kong Island </td></tr>
</tbody></table>
<br />
unFlatland r5.1 is up and it does most everything that was promised in the post on <a href="http://www.jaanga.com/2014/01/unflatland-make-maps-in-3d.html">unFlatland r4</a>.<br />
<br />
But it sucks.<br />
<br />
This revision can now display any location on earth with a height or elevation or altitude or whatever, accurate to 90 meters. It accesses the wonderful Jaanga Terrain repository of heightmaps accurate to 90 meters - anywhere on the entire Earth (thanks to J de F). It all follows the OpenStreetMap Tile Map System and zooms from the entire world - zoom level 0 - down to zoom level 15 and maybe even beyond.<br />
<br />
Old School.<br />
<br />
It's only five hundred lines of code - so it really is aimed at the target audience (which is us dummies).<br />
<br />
It has geo-referencing. Click the Placards checkbox to see the city names pop up at the correct latitude and longitude.<br />
<br />
Yawn.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBhm47cPluOBKDJmBN9aK-6M9WUN7WuWWF0Ss95v2GFj1ZkKUd3K8auKFPDBukr9-BPnd_dDenZ5-JR0SmBLYJGrboKSIFVHaZRLgIxEBPVTxF-ov1_k8J5iIga1lBNkRkOF_gGdkVQQM/s1600/un-flatland-r5.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBhm47cPluOBKDJmBN9aK-6M9WUN7WuWWF0Ss95v2GFj1ZkKUd3K8auKFPDBukr9-BPnd_dDenZ5-JR0SmBLYJGrboKSIFVHaZRLgIxEBPVTxF-ov1_k8J5iIga1lBNkRkOF_gGdkVQQM/s1600/un-flatland-r5.PNG" height="233" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">unFlatland r5 ~ view of San Francisco Bay with 'blobby' overlay</td></tr>
</tbody></table>
<br />
Well, how about this? r5 already has had its first critique. swissGuy says: 'So it remains blobby for the moment.' Obviously swissGuy is taking his own sweet time to observe. Actually unFlatland r5 has *TWELVE* overlays and the blobby overlay is just the one we happened to feature in this release. swissGuy should learn to watch how he watches.<br />
<br />
Wotcha! << London slang greeting<br />
<br />
Yes, of course, it should be finished. It must have its FGx aircraft flying around in real-time and, yes, it needs to be Leap Motion-enabled. But...<br />
<br />
But what!?!<br />
<br />
Well, these other few lines of code just sort of showed up. Kind of by accident. You know, fixing something else in another part of the forest. And there's a bit of crossover in the code. And then, who knows how, the code is co-mingled. And thus, yes goddammit, r5 is 'old school'.<br />
<br />
OK...<br />
<br />
Um, we have decided to name the codeling 'r6'.<br />
<br />
<h4>
Link</h4>
<a href="http://jaanga.github.io/terrain-viewer/un-flatland/r6/un-flatland-r6.html">unFlatland r6 Dev</a><br />
<br />
<br />
<a href="http://jaanga.github.io/terrain-viewer/un-flatland/r5/un-flatland-r5.html">unFlatLand 5.1</a><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<div>
<br /></div>
Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-15196862863859446052014-01-31T16:01:00.000-08:002014-01-31T16:11:36.161-08:00Jaanga Terrain<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://jaanga.github.io/terrain/0/0/0.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://jaanga.github.io/terrain/0/0/0.png" height="400" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><em style="font-family: sans-serif; font-size: 16px; text-align: start;">terrain/0/0/0.png - the entire globe at zoom level 0</em></td></tr>
</tbody></table>
<br />
<br />
There is now a new GitHub public repository with heightmaps for the entire globe accurate to 90 meters.<br />
<br />
Heightmaps are special image files where every color or shade represents an altitude/height/elevation. They can help you create 3D cartography quickly and easily.<br />
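As a sketch of that encoding idea - assuming a simple linear grayscale mapping between a tile's known minimum and maximum elevation, which may well differ from the exact scheme the Jaanga Terrain PNGs use - decoding a pixel back into meters is one line of arithmetic:

```javascript
// Map a grayscale pixel value (0..255) back to an elevation in
// meters, assuming a linear encoding between the tile's known
// minimum and maximum elevation. (Illustrative only: the actual
// Jaanga Terrain encoding may pack heights differently.)
function pixelToElevation(gray, minElevation, maxElevation) {
  return minElevation + (gray / 255) * (maxElevation - minElevation);
}
```

Reading the pixels out of the PNG via a canvas `getImageData` call and feeding each one through a function like this is essentially how a heightmap becomes a 3D terrain mesh.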
<br />
All of Jonathan de Ferranti's 3 Second data - all 265 gigabytes of raw binary files - have been losslessly compressed down to 2.85 gigabytes of PNG files. The files are organized in the Open Street Map way - according to the TMS standard.<br />
<br />
The files are in the GitHub pages branch so you are free to access these files from your app or use them as you wish. Everything is under an MIT license.<br />
<br />
And as a free bonus, Ferranti's 15 Second data is up and available as well.<br />
<br />
All of this is documented and described - including the tricks being used - here:<br />
<br />
<a href="http://jaanga.github.io/terrain/">Jaanga Terrain as GitHub Pages</a><br />
<br />
<a href="https://github.com/jaanga/terrain">Jaanga Terrain as GitHub Source Code</a><br />
<br />
There are also links to demo files to show you how JavaScript and libraries such as Three.js can be used to view and manipulate the data.<br />
<br />
And, if we weren't so busy with all the viewers, we would be working on the One Second data - accurate to about 30 meters.<br />
<br />
Thank you Jonathan de Ferranti, GitHub and Mr.doob for making all this possible.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-65169400074054585762014-01-09T01:35:00.000-08:002014-01-09T01:35:10.199-08:00unFlatland: Make Maps in 3D <div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg228d_pvn1_Wo6AduBGAqCNfptZ9tTq0OEJ3yNrm4jurUnZXn4rGPaFlKB2lJzGlQXYXsmQ-TeMwgSuOp35BRsAn_18dUpt2b5EZBNcGPd9ZCvuanLIwp2UpfiWiBZJisJiMN4WQTyWUs/s1600/unflatland-r4-1.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg228d_pvn1_Wo6AduBGAqCNfptZ9tTq0OEJ3yNrm4jurUnZXn4rGPaFlKB2lJzGlQXYXsmQ-TeMwgSuOp35BRsAn_18dUpt2b5EZBNcGPd9ZCvuanLIwp2UpfiWiBZJisJiMN4WQTyWUs/s1600/unflatland-r4-1.JPG" height="472" width="640" /></a></div>
<br />
<br />
The coding has been too much fun and thus the writing of posts has not been good. Even worse: the more I code the more things there are to write about and I fall even further behind. Speaking of <a href="http://en.wikipedia.org/wiki/Sisyphus">Sisyphus</a>, my previous post - on <a href="http://www.jaanga.com/2013/12/fgx-globe-r5-new-globe-type-more.html">FGx Globe</a> - was about rolling that big rock we all live aboard.<br />
<br />
The issue with the FGx Globe is that it really only shows the aircraft that are in the air. Well, aircraft do need and want to touch land from time to time. Even these virtual ones.<br />
<br />
So, what are some ways for you to quickly and easily display highly detailed 3D geography in your browser? Exploring the possibilities has been keeping me up late - and even getting me up early for weeks.<br />
<br />
So let's jump back a month or so:<br />
<br />
<a href="http://jaanga.github.io/cookbook/un-flatland/r4/index.html">UnFlatland R 4.1</a><br />
<br />
This 3D map covers the entire earth with an accuracy of one elevation point approximately every one kilometre or 43,600 x 43,600 data points.<br />
<br />
The current goals include:<br />
<ul>
<li>Attain an accuracy of a datum every one hundred meters for the entire earth.</li>
<li>Make the data sufficiently compact that it will fit in a single GitHub repository - which has a limit of about one gigabyte of data</li>
<li>Follow the TMS/Slippy Map simple proven methods</li>
<li>Have it all work in browser with nothing to download or install</li>
<li>Make it easy enough so that beginning and intermediate coders can build and edit 3D maps</li>
<li>Supply the know-how so that it is easy to add building, diagramming and more</li>
</ul>
<br />
There's probably another half-dozen cool things involved, but the main thing is to get the code up on GitHub and thus allow you to play with it.<br />
<br />
Some comments on unFlatland:<br />
<ul>
<li>Latitude & Longitude. Enter any latitude or longitude and then press 'Go'.</li>
<li>Cities dropdown. The 'Cities' dropdown takes you directly to any of 2,017 cities around the world. Machu Picchu and Kathmandu are fun places to visit.</li>
<li>Zoom levels dropdown. Currently there are only zoom-levels 7-12. Elsewhere we have zoom levels 1-7 working well and progress is being made on the higher levels.</li>
<li>Scale: The default is for a highly exaggerated map. Such exaggeration really helps with debugging and identifying issues. Some people say the display looks 'unrealistic'. A setting of one will make the map totally flat. A setting of two approximates true-to-life scale.</li>
<li>Map types. Select the type of map you want overlaid or 'draped' over the terrain.</li>
<li>Camera controllers. The first person controller allows you to fly over or through the landscape as if you are in a very high-speed helicopter. Pressing the right mouse button or holding two fingers down on the track pad allows you to fly backwards.</li>
<li>Placards. Click the checkbox to toggle the display of the name of every city in the map.</li>
</ul>
<div>
By the way, the title unFlatland has several interesting sources. See the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Flatland">Flatland</a>. Also my eldest daughter is an industrial designer. A critical requirement for industrial designers is to be able to think and communicate in 3D. While she was studying, we once had a chat about working with graphic designers and people in the print industry. And she remarked something like "Not interested, all their work is in Flatland."</div>
<div>
<br /></div>
<div>
So the title of this app, unFlatland, is a reminder that we live in a 3D world. We are in the process of leaving behind those old 2D paper maps and entering a world full of lumps and bumps. And even more importantly, it is a land where people live and things happen and our maps should reflect this activity.</div>
<div>
<br /></div>
<div>
You can see two derivatives of unFlatland that begin to show the active possibilities.</div>
<div>
<ul>
<li><a href="https://github.com/fgx/fgx-plane-spotter">FGx Plane Spotter</a></li>
<li><a href="http://jaanga.github.io/gestification/projects/flying-leap-3d/fgx-plane-spotter-leap/r1/index.html">FGx Plane Spotter ~ Leap Motion Enabled</a></li>
</ul>
</div>
<div>
FGx Plane Spotter allows you to travel to all the usual places. And you can also see who is currently flying a virtual aircraft using the FlightGear simulator. And if you have a <a href="https://www.leapmotion.com/">Leap Motion</a> device you can have a hand, so to speak, in the game yourself.</div>
<div>
<br /></div>
<div>
<br /></div>
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-49390512075082189372013-12-11T15:07:00.000-08:002013-12-11T17:51:13.072-08:00FGx Globe R5: New Globe Type, More Aircraft, More Thumbnails<div dir="ltr">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_9MMyMohyphenhyphenPge3z-nlZYRKM_rqT2dii3jH0IMcQzfFPVWEN-o5yf6qcEKY6_h96qbaxO4BIVesd2Vet2IdufQvFdP3fzWbTPX-uO0Qnksl23wrhXGHiOOyNikaMb0ikuH_kgrt-Kgy8YE/s1600/image-766846.png"><img alt="" border="0" height="412" id="BLOGGER_PHOTO_ID_5956273830739187602" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_9MMyMohyphenhyphenPge3z-nlZYRKM_rqT2dii3jH0IMcQzfFPVWEN-o5yf6qcEKY6_h96qbaxO4BIVesd2Vet2IdufQvFdP3fzWbTPX-uO0Qnksl23wrhXGHiOOyNikaMb0ikuH_kgrt-Kgy8YE/s320/image-766846.png" width="640" /></a><br />
<div>
<br /></div>
<div>
For the past several weeks I have been working on the <a href="http://fgx.github.io/">FGx</a> project. I think FGx stands for Flight Gear Extras. The effort includes the design, style and content of the web pages hosted on GitHub as well as <a href="file:///C:/Dropbox/Public/git-repos/fgx-repos/fgx-globe/fgx-globe-r5/index.html">FGx Globe</a>, <a href="http://fgx.github.io/fgx-aircraft-overview/r4/aircraft-overview.html">FGx Aircraft Overview</a> and <a href="http://fgx.github.io/fgx-globe/cookbook/air-run-nav-01/">FGx Airports Runways Navaids</a>.</div>
<div>
<br /></div>
<div>
I have been communicating almost entirely with the other members of the project via the <a href="https://groups.google.com/forum/#!forum/fgx-project">FGx Google Group</a> but realize this is silly because it's *you* I should be talking to. </div>
<div>
<br /></div>
<div>
All of this work is in need of feedback and comments and suggestions.</div>
<div>
<br /></div>
<div>
The screen grab above is from FGx Globe. It's showing aircraft that are currently being flown by people using the <a href="http://www.flightgear.org/">FlightGear</a> flight simulator. Of course the globe is in 3D and so you can zoom, pan and rotate the globe. Move your mouse over an aircraft and a window pops up with the flight details and a thumbnail image of the plane. Open the Crossfeed tab, click on a flight and a separate window opens showing the aircraft flying over a 2D map. And there's much more; please explore the tabs. The main thing missing in the tabs is the credits and licensing data for all the tools used to build this app, but this info is being added slowly but surely. </div>
<div>
<br /></div>
<div>
So FGx Globe is in a good enough state - but just for the moment.</div>
<div>
<br /></div>
<div>
Coming up will be fixing the issues with all the aircraft in FGx Aircraft. Some craft are missing, some are missing just a few bits (like wings or propellers ;-), and others have extra bits such as light shields or parachutes. Once that is done, we need to see if we can reattach all the logos and paint jobs.</div>
<div>
<br /></div>
<div>
Once the planes are in order, we can come back to FGx Globe and decide the next big thing: what happens when you zoom way in? How do you get to the place where you can see the planes taking off and landing at the airports? Should the next step be inside FGx Globe or should you transition to a different app? I will be looking into both possibilities in upcoming posts.</div>
<div>
<br /></div>
<div>
In the meantime, happy globe-trotting!</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
</div>
Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-41086282261283953642013-11-14T20:56:00.001-08:002013-11-14T20:56:42.701-08:00Leap + Three.js: Boilerplate post at Leap Motion Labs<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB9YfSuGma9hbJk2njFur3mmrZmSB4oXVKkcwXTBEPJW881yNOrCyJkf1XkE8HWT2qOXQYfQ4y9rWJ62g0Sj3S47NhWPW8wKMtK_lZMJsfDD2h5vhQNkgZOfSl02CVynl2_umUmfk-dos/s1600/labs-post-boilerplate.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB9YfSuGma9hbJk2njFur3mmrZmSB4oXVKkcwXTBEPJW881yNOrCyJkf1XkE8HWT2qOXQYfQ4y9rWJ62g0Sj3S47NhWPW8wKMtK_lZMJsfDD2h5vhQNkgZOfSl02CVynl2_umUmfk-dos/s1600/labs-post-boilerplate.png" height="478" width="640" /></a></div>
<br />
On the 15th of October Leap Motion Labs published a post written by me:<br />
<br />
<a href="http://labs.leapmotion.com/post/64166391272/thinking-as-a-designer-whats-a-good-leap-three-js">Thinking as a Designer: What’s a Good Leap + Three.js Boilerplate?</a>
<br />
<br />
From my point of view it's a fairly good post because the contents fulfill many of what I consider to be the essential requirements for a good technical post, including:<br />
<br />
<ul>
<li>An assortment of visuals</li>
<li>Access to source code easily obtainable on GitHub</li>
<li>A YouTube video</li>
<li>Plenty of links to useful information</li>
<li>And a demo app that works</li>
</ul>
<br />
And, above and beyond the specification items, there's even a fairly lively story.<br />
<br />
So how did this post go from the original email request to a published post in about five days?<br />
<br />
The answer has little to do with me. The answer may be surprising at first, but then becomes eminently reasonable.<br />
<br />
Look at the publisher of the post.<br />
<br />
<a href="http://labs.leapmotion.com/">labs.leapmotion.com</a><br />
<br />
And when I say 'look' I mean click on the link and flip through some of the articles.<br />
<br />
In my opinion, this site stands out as one of the best online vendor-specific tech journals currently in operation.<br />
<br />
The articles are lengthy and yet entertaining, in-depth and yet readable and do a great job of marketing without a heavy sales pitch. I don't think you will find many other start-ups with such a well-worked out formula for disseminating what is actually very complicated stuff.<br />
<br />
Why is the Leap Motion Lab doing such a good job when other aspects of the Leap Motion organization are quite lacking? Perhaps, it's the people. The editor I worked with, Alex Colgan, in a matter of hours transformed the job of preparing the article from being a task into being a pleasure. Alex lives/works in Yarmouth, Nova Scotia but the distance in time and miles did little to prevent a speedy and engaged conversation. And the Google Docs real-time collaboration was a blast.<br />
<br />
The main thing is that Alex picked up my style of writing ever so quickly. He made a lot of edits and yet looking back at the post I can't tell if a phrase is his or mine - even in the most technical parts. I worked through the weekend to finish the post, but Alex made it easy.<br />
<br />
So if anybody at Leap ever asks you to pen a post for the Labs journal, you should immediately place your hands over your Leap device and reply with a thumbs up.<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com1tag:blogger.com,1999:blog-1385289550397169122.post-33600911726326679092013-10-09T00:39:00.001-07:002013-10-09T00:54:30.618-07:00 Leap + Three.js: Phalanges R7 Video<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/LP3py1R91m8?feature=player_embedded' frameborder='0'></iframe></div>
<h3>
</h3>
<h3>
Description</h3>
The goal is to build a web app with the procedures required to display - correctly and in real-time - a user-manipulated 3D hand - or claw - or appendage. This demo shows what is still a work in progress.<br />
<br />
Source Code here: <a href="https://github.com/jaanga/gestification/tree/gh-pages/cookbook/phalanges">https://github.com/jaanga/gestification/tree/gh-pages/cookbook/phalanges</a><br />
<br />
Live demo here: <a href="http://jaanga.github.io/gestification/cookbook/phalanges/r7/phalanges.html">http://jaanga.github.io/gestification/cookbook/phalanges/r7/phalanges.html</a><br />
- Requires a Leap Motion device<br />
<br />
The motion is captured using a Leap Motion device. See <a href="http://leapmotion.com/">http://leapmotion.com</a><br />
<br />
The 3D graphics are generated using the Three.js JavaScript library. See <a href="http://threejs.org/">http://threejs.org</a><br />
<br />
The video was recorded using CamStudio. <a href="http://camstudio.org/">http://camstudio.org/</a> There needs to be work on capturing data at a better frame rate.<br />
<br />
<iframe src="http://jaanga.github.io/gestification/cookbook/phalanges/r7/phalanges.html" style="height: 480px; width: 640px;"></iframe><br />
<div style="text-align: center;">
<span style="font-size: x-small;">Phalanges R7 - Requires Leap Motion Device to operate</span></div>
<h3>
Transcript</h3>
Hello this is Theo. And you're looking at the new Phalanges Release 7<br />
Phalanges is the Latin term for finger bones<br />
It's October 8th 2013, here in San Francisco<br />
What you're seeing is the movements of my hand recreated in a 3D space<br />
I'm using the Leap Motion device to capture the actual movements of my hand and fingers as I speak<br />
The graphics you see in the video are being generated on screen using the three.js JavaScript library<br />
The issue in all this is that the Leap device cannot see all your fingers all the time<br />
So whenever one of the colored blocks disappears, it means that the Leap device cannot see that finger<br />
The objective of the code is to keep all the fingers - the gray box-like objects - visible at all times.<br />
The second objective is to have fingers *not* go off in crazy directions.<br />
As you can see there's a fairly good connection, but it's not perfect.<br />
I can make my hand pitch - roll - and Yaw<br />
I can wiggle my fingers<br />
Mostly the fingers stay visible and not too crooked<br />
And it's a lot better than Release 1<br />
Anyway, all of this is very much a work in progress.<br />
What you are looking at is example or cookbook code.<br />
It's a program intended to be used as the basis for further development<br />
So it's not a thing of beauty.<br />
For example, you can see all the dummy objects used to make sure the fingers point in the right direction<br />
They are just here for testing and won't be visible in later programs<br />
Speaking of later programs<br />
The next generation of code based on this work will be out very soon<br />
Two major features will be getting into this code:<br />
First, you will be able to use these algorithms to save data in the industry-standard BVH file format.<br />
Secondly, you'll be able to use this code to display human-like hands, or animal claws or robot appendages or whatever<br />
So there's a lot more to be coming out of this code.<br />
But for the moment, this is Theo, saying 'Bye for now...'<br />
<div>
<br /></div>
Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-2571486132731839022013-09-22T17:42:00.000-07:002013-09-22T17:42:31.170-07:00Skin and Bones for Leap Motion Devices ~ UpdatePlease see the previous post on this topic:<br />
<br />
<a href="http://www.jaanga.com/2013/09/so-close-yet-still-so-far-skin-and.html">http://www.jaanga.com/2013/09/so-close-yet-still-so-far-skin-and.html</a><br />
<br />
This morning I built and posted Phalanges R5 - a great improvement over the previous release:<br /><br /><a href="http://jaanga.github.io/gestification/work-in-hand/phalanges/r5/phalanges.html">http://jaanga.github.io/gestification/work-in-hand/phalanges/r5/phalanges.html</a><br /><br />with info here:<br /><br /><a href="https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges">https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges</a><br /><br />The interesting issue in all this is the difference between the methods Leap Motion uses to expose its data and the methods normally used in character animation.<br />
<br />
In character animation, all 'bones' are connected. If you move the upper arm then all the bones below move as well.<br />
<br />
The Leap provides individual position and angle data for each of the fingers and palms.<br />
<br />
Quite frequently you do not have information for all the fingers.<br />
<br />
In normal character animation, this is not much of an issue because if you move the palm then any unaccounted-for fingers will move along with the palm automatically.<br />
<br />
But with the Leap Motion data, fingertips seen previously may end up sitting frozen in space disjointed from the hand or they may simply disappear. For some people this may be a disconcerting series of events.<br />
<br />
[Disclosure: my left hand disappeared a number of years ago never to return, so this sort of thing is no big issue for me. ;-]<br />
<br />
The first releases of Phalanges relied on the fingertips, finger bases and palms all moving and being controlled separately. This made for lots of fingers disappearing. The more recent releases followed the idea of all bones being connected and this caused fingertips to move in all sorts of inhuman ways.<br />
<br />
The current release is a hybrid. The palm and the finger bases are connected - move the palm and the bases move with it. The fingertips all move independently from each other and from the palm. This works just fine - until the Leap Motion device decides that a fingertip no longer exists.<br />
<br />
So what looks like the next solution to investigate is a hybrid-hybrid solution. When Leap Motion fingertip data is available use the hybrid solution. When Leap Motion data is not available make the Leap fingertips invisible and make a completely connected finger visible. When the Leap finger data is again available, switch out the fingers.<br />
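A rough sketch of that swap, with hypothetical object names standing in for the real Three.js meshes:<br />

```javascript
// Hypothetical sketch of the hybrid-hybrid swap. 'tracked' stands in for
// the independently positioned, Leap-driven finger; 'fallback' for the
// fully connected finger that rides along with the palm.
function updateFinger(finger, leapTip) {
  if (leapTip) {
    finger.tracked.visible = true;           // Leap sees the tip: use it
    finger.tracked.position = leapTip.tipPosition;
    finger.fallback.visible = false;
  } else {
    finger.tracked.visible = false;          // tip lost: swap in the
    finger.fallback.visible = true;          // palm-connected finger
  }
  return finger;
}
```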
<br />
Now all this may seem a wee bit complicated and you would think that sticking just a single joint between tip and palm would be no big deal. And you would be quite right. And you would be really, really smart because your brain would know how to crawl in and out and all over things like <a href="http://en.wikipedia.org/wiki/Inverse_kinematics">inverse kinematics</a> and be prepared to write lots more code and include more libraries.<br />
<br />
But that sort of thing is way beyond my skill level. My brain starts to fatigue when an app is over 300 lines. The current app is at 222 lines. With a bit of luck we can have a skinnable phalanges release that even my little brain may grasp...<br />
<br />
<h4>
Link:</h4>
<a href="https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges">https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges</a><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com1tag:blogger.com,1999:blog-1385289550397169122.post-67093499602260348032013-09-20T23:04:00.001-07:002013-09-22T21:14:14.512-07:00So Close / Yet Still So Far: Skin and Bones for Leap Motion Devices - A Progress Report<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img alt="Leap_Gesture_Swipe.png" src="https://developer.leapmotion.com/documentation/Common/images/Leap_Gesture_Swipe.png" style="margin-left: auto; margin-right: auto;" /></td></tr>
<tr><td class="tr-caption">Hand image from <a href="https://developer.leapmotion.com/documentation/Languages/JavaScript/Guides/Leap_Overview.html">Leap Motion documentation</a></td></tr>
</tbody></table>
<i>2013-09-22: See also the update post that discusses the much improved Phalanges R5:</i><br />
<i><a href="http://www.jaanga.com/2013/09/skin-and-bones-for-leap-motion-devices.html">http://www.jaanga.com/2013/09/skin-and-bones-for-leap-motion-devices.html </a> </i><br />
<br />
<br />
The above image is from the documentation for the Leap Motion device. Questions as to how to produce such images, or how to access the 'raw data' behind them, are some of the most frequently asked questions in the Leap Motion forums. The bad news is that there are no source code or coding examples currently provided by Leap Motion for producing such a display.<br />
<br />
The good news is: Wow! What an excellent coding challenge...<br />
<br />
This post is a progress report on the effort to produce realistic-looking, realistically behaving hands that can be controlled by the Leap Motion device.<br />
<br />
The most exciting development is certainly this recent post by Roman Liutikov:<br />
<br />
<a href="http://blog.romanliutikov.com/post/60899246643/manipulating-rigged-hand-with-leap-motion-in-three-js">http://blog.romanliutikov.com/post/60899246643/manipulating-rigged-hand-with-leap-motion-in-three-js</a><br />
<br />
With demo file here:<br />
<br />
<a href="http://demo.romanliutikov.com/three/10/">http://demo.romanliutikov.com/three/10/</a><br />
<br />
Roman provides very clear guidance as to how to export skin and bones from Blender as a JSON file that can be read by Three.js and used to display arbitrary, real-time finger movements generated by a Leap Motion device.<br />
<br />
An interesting side note is that the code uses a BVH-like structure to control the movement of the fingers. I recently wrote about the importance and efficacy of BVH here:<br />
<br />
<a href="http://www.jaanga.com/2013/09/bvh-format-to-capture-motion-simply.html">http://www.jaanga.com/2013/09/bvh-format-to-capture-motion-simply.html</a><br />
<br />
The unfortunate aspect of this work is that there are a number of issues with the movement of the hand and fingers.<br />
<br />
Nevertheless, this code is an important step forward and well worth inspecting. I did so myself and have re-written Roman's code in my own (admittedly somewhat simplistic) style:<br />
<br />
Demo: <a href="http://jaanga.github.io/gestification/work-in-hand/phalanges/liutikov/liutikov.html">http://jaanga.github.io/gestification/work-in-hand/phalanges/liutikov/liutikov.html</a><br />
<br />
With information and background here:<br />
<br />
<a href="https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges/liutikov">https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges/liutikov</a><br />
<br />
My own work, since the publication of the post on BVH, has involved building up a notion of the best methods for positioning and angling the 'bones' inside the fingers. There are a host of issues - too many to list here - including: hands that sometimes have five fingers, or two fingers or no fingers; finger 2 easily switches places with finger 3; the order of the fingers is 4, 2, 0, 1, 3 and so on.<br />
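For what it's worth, one common trick for taming the unstable ordering - an illustration on my part, not the actual Phalanges code - is to sort the reported fingers left to right by the x coordinate of each tip:<br />

```javascript
// Sort Leap finger records left-to-right by tip x position, giving a
// stable thumb-to-pinky ordering for one hand regardless of the order
// in which the device happens to report them. The input array is copied
// rather than mutated.
function sortFingers(fingers) {
  return fingers.slice().sort(function (a, b) {
    return a.tipPosition[0] - b.tipPosition[0];
  });
}
```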
<br />
The latest demo (R4) is here:<br />
<br />
<a href="http://jaanga.github.io/gestification/work-in-hand/phalanges/r4/phalanges.html">http://jaanga.github.io/gestification/work-in-hand/phalanges/r4/phalanges.html</a><br />
<br />
Previous releases, source code and further information are available here:<br />
<br />
<a href="https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges">https://github.com/jaanga/gestification/tree/gh-pages/work-in-hand/phalanges</a><br />
<br />
Much is working: the hand generally moves and rotates appropriately, fingers stay in the same position and don't disappear. But it is readily apparent that the tips of the fingers are still quite lost in space.<br />
<br />
Not to worry. Eventually the light bulb will turn on. Actually the more likely thing is that a search on Google will turn up an answer or some person very smart in the ways of vectors will respond on Stackoverflow.<br />
<br />
Also worth noting is that the people at Leap Motion gave a demo of routines at the recent developer's conference in San Francisco that may provide a satisfactory response. The interesting thing will be to see which code comes out first and which code is the more hackable.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-44704437754169360602013-09-10T18:38:00.000-07:002013-09-10T18:44:55.337-07:00BVH: A format to capture motion - simply, quickly and economically One of the reasons that Android phones have such good voice recognition is because, under <a href="http://en.wikipedia.org/wiki/Peter_Norvig">Peter Norvig</a>'s guidance, Google has acquired an immense corpus, or database, of what and how people speak. It is my contention that gestures and other non-verbal means of communication will eventually regain some of the primacy that they had before primates evolved verbal communication. If this is to happen to gesturing, then we need some fast, cool, effective methods for recording the many gestures people make.<br />
<br />
This is not a new thought. Personally and over the years, I have spent some fascinating moments <a href="https://www.google.com/search?q=dance+notation&tbm=isch&tbo=u&source=univ&sa=X">exploring</a> <a href="http://en.wikipedia.org/wiki/Dance_notation">dance notation</a>. And <a href="https://www.google.com/search?q=sign+language&tbm=isch&tbo=u&source=univ&sa=X">sign language</a> is the codification of gesturing. But coming back to computers, we have all the methods used by computer games to record and replay the movements of game characters. Collada, FBX and the new <a href="https://github.com/KhronosGroup/glTF">glTF</a> come to mind.<br />
<br />
Here's the thing: gesturing can generate huge amounts of data per second. It's nearly as good (or bad, depending on your outlook) as video - if nothing else because the data gathering usually is via video. Secondly, if data scientists are ever to be able to parse our gestures they will need the data in digital format. The concept represented by the letters 'donut' is far smaller than the audio file of the spoken word, let alone the object in question.<br />
<br />
Because of my joy in exploring the Leap Motion device, I have spent the last month or so looking into ways of registering gestures.<br />
<br />
One of my experiments is to record all the messages sent out by the Leap Motion device and save them in JSON format. The messages are used by software developers and for testing. In normal coding such messages are typically short and sweet (or not). But even a short gesture may generate a JSON file of over a megabyte. If you have a Leap device, you can have a look at the app here:<br />
<br />
<a href="http://jaanga.github.io/gestification/cookbook/jest-play/r1/jest-play.html">http://jaanga.github.io/gestification/cookbook/jest-play/r1/jest-play.html</a><br />
<br />
With source code and more details here:<br />
<br />
<a href="https://github.com/jaanga/gestification/tree/gh-pages/cookbook/jest-play">https://github.com/jaanga/gestification/tree/gh-pages/cookbook/jest-play</a><br />
<br />
Thus, as helpful as this app should be to developers and testers (especially as none of the example apps in the Leap Motion examples site can do this), this is not an app that should be used to record and replay a corpus of thousands or millions of gestures because the file sizes are too large.<br />
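To see why the files balloon, here is a naive recorder sketched with a much-simplified frame shape (the real Leap messages carry far more fields, so real captures are even bigger):<br />

```javascript
// Record simplified frames and serialize them to JSON. Each frame holds a
// timestamp and a handful of finger tip positions; a real Leap frame also
// carries hand data, velocities, gesture records and more.
function recordFrames(frameCount, fingersPerFrame) {
  const frames = [];
  for (let i = 0; i < frameCount; i++) {
    const fingers = [];
    for (let f = 0; f < fingersPerFrame; f++) {
      fingers.push({ tipPosition: [f * 10.123, 200.5, -30.25], id: f });
    }
    frames.push({ timestamp: i * 16, fingers: fingers });
  }
  return JSON.stringify(frames);
}

// Ten seconds of capture at ~60 fps, five fingers per frame.
const json = recordFrames(60 * 10, 5);
```

Even this stripped-down shape produces well over a hundred kilobytes for ten seconds of gesturing, which is why replaying a large corpus this way is a non-starter.<br />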
<br />
In July I wrote a paper using Google Docs about gesture recording. You can have a look at the paper here:<br />
<br />
<a href="https://docs.google.com/document/d/1jVB3RP0Xnhp_py0hhbbZ8jZtHW-MSkxbGKEUPWwtMos">Skeleton API Considerations for Leap Motion Devices R2</a><br />
<br />
In this document I recommend looking at the <a href="http://en.wikipedia.org/wiki/Biovision_Hierarchy">BVH</a> format. This is not my first encounter with BVH. Previously I wrote a five-post tutorial on getting animations into <a href="http://threejs.org/">Three.js</a> by importing BVH files into Blender. I have yet to hear or see anybody else who was able to follow - successfully - the tortuous path I proposed that you should dance down. And, in the meantime, there have been so many changes that half the stuff no longer works.<br />
<br />
Anyway, because of the paper and because of the Leap device, I decided to write a BVH reader based on code I had found [only after many searches over a long period of time] including these two examples:<br />
<br />
<a href="https://code.google.com/p/papervision3d/source/browse/trunk/as3/trunk/src/org/papervision3d/objects/parsers/mocap/BVH.as?spec=svn938&r=938">https://code.google.com/p/papervision3d/source/browse/trunk/as3/trunk/src/org/papervision3d/objects/parsers/mocap/BVH.as</a><br />
<a href="https://github.com/sinisterchipmunk/bvh">https://github.com/sinisterchipmunk/bvh</a><br />
<br />
Even though I code a lot, I am not really a programmer and it soon all started to get a bit daunting. When that sort of thing happens I tend to go into denial and whatever. And I did a Google search on 'Three.js BVH reader' and up came this:<br />
<br />
<a href="http://saqoo.sh/a/labs/perfume/2/">http://saqoo.sh/a/labs/perfume/2/</a><br />
<br />
I nearly fell out of my chair. Here was everything I wanted: A simple Three.js app that reads BVH files. And more than that, the code itself is fascinating. The methods the author uses to do 'if/then' within a 'for' loop were totally new to me.<br />
<br />
Saqoosha: you are amazing! And thank you for your kind permission to build upon your code. Here's Saqoosha's web site:<br />
<br />
<a href="http://saqoo.sh/a/">http://saqoo.sh/a/</a><br />
<br />
So in short order I had several demos up and running - each reading a slightly different dialect of BVH. The links are at the end of this post. And now I have had several days to read and think about BVH and compare it with other methods.<br />
<br />
And the TL;DR is that the BVH format is awesome. Accept no substitute.<br />
<br />
You can read about BVH <a href="https://sites.google.com/a/cgspeed.com/cgspeed/motion-capture/3dsmax-friendly-release-of-cmu-motion-database/3dsmax-bvh-import-specification">here</a> and <a href="http://tech-artists.org/wiki/BVH">here</a> and <a href="http://www.mindfiresolutions.com/BVH-biovision-hierarchy.htm">here</a>.<br />
<br />
Thing #1. The main thing is that the data portion of the format is about as sparse as uncompressed ASCII can get. It's just numbers and spaces. And, most important, it's only the numbers you actually need.<br />
<br />
Let me try and explain. To position something like a hand or a foot in space, you need to specify its X, Y and Z coordinates as well as its pitch, roll and yaw angles. That's six numbers - the 'six degrees of freedom'. But a BVH file records only pitch, roll and yaw - three numbers. It assumes you can fill in the X, Y and Z yourself at runtime. How? Because the header tells you the offset distances for all the body parts. In essence, for the purposes of this app, the length of an arm or a leg is a constant, not a variable, so you don't need to repeat these values endlessly; the actual position is calculated in real time, frame by frame. Of course, all of this is recursive, which short-circuits my tiny brain.<br />
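To make that concrete, here is a toy two-bone forward-kinematics sketch in plain JavaScript - my own illustration, not code from any of the readers mentioned here. The bone lengths are the constant 'offsets' from the header; each frame supplies only angles, and positions get recomputed on the fly.

```javascript
// Toy 2D forward kinematics, BVH-style: constant bone lengths,
// per-frame joint angles, world positions computed at runtime.
function worldPositions(offsets, angles) {
  const points = [{ x: 0, y: 0 }]; // root of the chain
  let x = 0, y = 0, angle = 0;
  for (let i = 0; i < offsets.length; i++) {
    angle += angles[i]; // rotations accumulate down the chain
    x += offsets[i] * Math.cos(angle);
    y += offsets[i] * Math.sin(angle);
    points.push({ x: x, y: y });
  }
  return points;
}
```

With two bones of length 1 and angles of 0 and 90 degrees, the chain ends at roughly (1, 1) - yet only the two angles would need to be stored per frame.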
<br />
Anyway, the main point about BVH is that it is probably not possible to come up with a smaller method of recording motion. [I say this in the context of being a person often in the midst of people who understand mathematics - so wait awhile before accepting this assertion.]<br />
<br />
Thing #2. Since the X, Y and Z information is all in the header, you can change it at any time - even at runtime - and make the character morph as it's moving. Thus you can fairly easily adapt a BVH file to characters of different sizes.<br />
<br />
Thing #3. All the movement data is in an array of strings containing the relevant angles. At runtime you can easily splice, pop or shift the array and update the character with a new series of motions. So you could have a character moving about for twenty minutes while staying, say, just twenty seconds ahead in terms of data that needs to be loaded.<br />
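A hedged sketch of that streaming idea - hypothetical helpers, not code from any of the demos above: MOTION lines go in at the back of a buffer and replay shifts them off the front, so only a short look-ahead ever needs to be loaded.

```javascript
// Sketch: treat BVH MOTION lines as a stream of frames.
const frameBuffer = [];

function pushFrames(lines) {
  frameBuffer.push(...lines); // append newly loaded frames
}

function nextFrame() {
  const line = frameBuffer.shift(); // consume the oldest frame
  return line === undefined ? null : line.trim().split(/\s+/).map(Number);
}
```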
<br />
Thing #4. The BVH format is supported by Blender, Daz, MakeHuman, Mixamo, FreeMocap and probably a number of other suppliers of 3D stuff. It's a fairly safe format - and the only commonly accepted format dedicated to motion.<br />
<br />
Thing #5. The format is quite flexible. It can handle all the bones in the toes and fingers, or creatures with seven tentacles or just a robot arm with three moving parts. This does mean that there are a number of BVH 'dialects' out there, but my guess is that a good parser will eventually be able to identify the major types and adjust accordingly.<br />
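One small step toward such a parser - reading each joint's CHANNELS line instead of assuming a fixed ordering - might look like this illustrative fragment (not taken from any of the readers above):

```javascript
// Each BVH joint declares its own channel count and order, e.g.
// "CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation".
// Reading that line lets a parser adapt to different dialects.
function parseChannels(line) {
  const parts = line.trim().split(/\s+/);
  const count = Number(parts[1]);
  return parts.slice(2, 2 + count); // the channel names, in file order
}
```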
<br />
Thing #6. BVH data may be generated either via motion capture devices or by algorithm - and you can mix the two easily.<br />
<br />
So is BVH perfect? Perhaps it is, but there is an issue. If BVH is the 'verb' - the thing that gets things moving - then what about the 'noun', the data that needs to be moved about? That is a whole story in itself, and I will talk about it in an upcoming post.<br />
<br />
In the meantime, please enjoy the code that Saqoosha wrote to get your screen to dance:<br />
<br />
Live demo: <a href="http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha.html">http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha.html</a> <br />
Live demo: <a href="http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha-cmu-daz.html">http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha-cmu-daz.html</a><br />
Live demo: <a href="http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha-truebones.html">http://jaanga.github.io/cookbook/bvh-reader/r1/bvh-reader-saqoosha-truebones.html</a><br />
<br />
Details and source code here:<br />
<a href="https://github.com/jaanga/cookbook/tree/gh-pages/bvh-reader">https://github.com/jaanga/cookbook/tree/gh-pages/bvh-reader</a> <br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-44709828489577427822013-09-03T17:00:00.000-07:002013-09-03T17:00:45.673-07:00JavaScript App to Record, Save and Replay Leap Motion Gestures in 3DThe <a href="http://leapmotion.com/">Leap Motion</a> device leads the way into new computer interfaces designed specifically for our hands. Why? Because all other devices (mice, pianos, steering wheels, whatever) require that the hands touch, hold or manipulate some other thing. The Leap Motion device is the first that captures your hand and finger movements in a completely free and unfettered manner.<br />
<br />
Being the first device of this kind has its issues. There is not a lot of software for the device. There are not many good tools for designing software. And there really isn't even a good idea as to what the best tools should be or should do.<br />
<br />
Frankly, I think this is amazing. This is one of the very rare occasions when we have a 'green field site' that doesn't even have a green field.<br />
<br />
So what fun things need to be addressed first? Well, one of the main ones is that there is no way of recording the movement of your hands and fingers and then replaying the gestures - being able to read the numeric data as well as view a representation in 3D. And, perhaps more interesting, there is no simple, easy-to-understand FOSS method for recording motions. Or maybe there is, but I haven't seen it.<br />
<br />
There was, however, a great first attempt. Kai Chung put together two code examples, 'Leap Motion Recorder' and 'Leap Motion Replayer', available from here:<br />
<br />
<a href="http://js.leapmotion.com/examples">http://js.leapmotion.com/examples</a>.<br />
<br />
There are issues with both apps. The recorder app provides no method for saving data, and the replayer app replays just the one file it is hardwired to replay - it has no method for selecting and opening files. And in terms of helping programmers, these apps are coded to work with an early beta version of the Leap Motion software and do not seem to work when linked to the current version of the JavaScript Leap library.<br />
<br />
But now, as of today, there is 'jestPlay'. If you have a Leap Motion device, try out the app here:<br />
<br />
<a href="http://jaanga.github.io/gestification/cookbook/jest-play/r1/jest-play.html">http://jaanga.github.io/gestification/cookbook/jest-play/r1/jest-play.html</a><br />
<br />
There is a replay-only version in the works - so that people without the device can replay gestures. It should be available shortly.<br />
<br />
The jestPlay app enables you to record your hand movements by saving data from the device to your computer as JSON files. Once saved, you can open these files and watch a full 3D replay of the movements.<br />
<br />
The app is a 'cookbook'-style app. It is not a fully featured or even a finished app. It does, however, provide you with a simple working example - just over two hundred lines of very simple JavaScript code - that you can use to start developing your own code.<br />
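The core record/replay loop boils down to very little code. Here is a simplified, hypothetical sketch of the idea - not the actual jestPlay source - with the Leap-specific drawing left out:

```javascript
// Record incoming frames, serialize to JSON for saving,
// then parse and step through the saved frames to replay.
const recording = [];

function recordFrame(frame) {
  // keep just what is needed to redraw the hands later
  recording.push({ t: frame.timestamp, hands: frame.hands });
}

function serialize() {
  return JSON.stringify(recording);
}

function replay(json, drawFrame) {
  JSON.parse(json).forEach(drawFrame); // redraw each saved frame
}
```

In the real app, the serialized JSON is what gets handed to the file save dialog, and the replay side is fed by the file open dialog.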
<br />
The app provides full access to your operating system's file save and file open dialog boxes - features not normally found in JavaScript, as they were only recently introduced with HTML5.<br />
<br />
Based on the <a href="http://threejs.org/">Three.js</a> library, the jestPlay app allows you to zoom, pan and rotate the views of the replays - so you can see your handiwork from another person's point of view.<br />
<br />
<b>Source Code</b><br />
<a href="https://github.com/jaanga/gestification/tree/gh-pages/cookbook/jest-play">https://github.com/jaanga/gestification/tree/gh-pages/cookbook/jest-play</a><br />
<br />
<br />
Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-2528239486634878642013-09-02T22:43:00.000-07:002013-09-02T22:50:51.921-07:00The Barfolina Pavilion: Towards a Procedural ArchitectureThe past week or so has been exhilarating because I have been able to churn out so much interesting code. But now I have material for a dozen blog posts. So there is going to be a battle between the coding fingers and the writing fingers. For the moment, the writing fingers are in charge.<br />
<br />
Nico B wants an app that would allow people to 'fly' over the harbor in Iquique, Chile, and he wants to do this using the motion controller from <a href="http://leapmotion.com/">Leap Motion</a>. How do you fly in an imaginary way - hundreds of feet over the harbor and down through the buildings - while using just one hand twirling and swirling inches above the Leap Motion device? Neither the Leap Motion sample software nor the Three.js examples have anything that does exactly this. So we needed to come up with the flying app ourselves. In order to build the app, we needed a landscape and buildings to practice with. The actual physical project in Iquique has issues, so we needed to come up with our own imaginary buildings.<br />
<br />
The first simulation I came up with was an imaginary landscape:<br />
<a href="http://jaanga.github.io/gestification/projects/flying-leap-3d/r1/flying-leap-3d.html">http://jaanga.github.io/gestification/projects/flying-leap-3d/r1/flying-leap-3d.html</a><br />
<br />
In many ways, this was just fine. But really it was just too good. All you do is float around. It's actually quite difficult to get anywhere specific.<br />
<br />
So then Nico found the data for this castle:<br />
<a href="http://jaanga.github.io/gestification/projects/flying-leap-3d/castle/load-castle.html">http://jaanga.github.io/gestification/projects/flying-leap-3d/castle/load-castle.html </a><br />
<br />
It may take a number of seconds to load. There were many issues here. The biggest has been getting the flying speed right: when you are in the castle it's too fast, and when you are flying around it's too slow. And then the walls only have textures on the outside. When you go inside, the walls are invisible, so you think you are still outside. Therefore you keep on going, and then you are outside without having known you were inside. And so on.<br />
<br />
So then Nico came up with the Barcelona Pavilion - and he even sourced a Blender 3D file for it. Conceptually the Pavilion is a perfect place to learn to fly. You can start in the landscape, move to the courtyard, then try to negotiate the narrow passages. It was a perfect fit.<br />
But the Blender file was missing its textures. I found two other Blender files. Again there were issues.<br />
<br />
What to do? Build my own models using Blender or SketchUp or whatever? No way. I don't build stuff using tools I did not have a hand in building myself. So I built a 3D model of the pavilion using Three.js. Every floor, wall and window is a procedure. The project took half a day. It's about 500 lines of code and about 28K in file size. Simple, tiny and fast. Perfection.<br />
<br />
Not really. As you fly through the building you'll see a dozen or so mistakes. That's because I mostly did things by eye and feel. It will take an hour or two to fix them, but I have no fear. Unlike with old-school CAD programs, there won't be broken walls that have to be 'fixed' or dozens of items that need changing because a height has changed.<br />
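A minimal sketch of that 'every wall is a procedure' idea, with hypothetical names - the real file builds its Three.js meshes from parameters in much the same spirit, but this is my illustration, not its code:

```javascript
// Walls are generated from parameters, not stored as static geometry.
// Change one parameter and every dependent wall regenerates.
function wall(x, z, length, height, thickness) {
  return { type: 'wall', x: x, z: z, length: length,
           height: height, thickness: thickness || 0.2 };
}

function pavilion(wallHeight) {
  // one shared height parameter drives all the walls
  return [
    wall(0, 0, 10, wallHeight),
    wall(0, 4, 6, wallHeight),
    wall(8, 2, 3, wallHeight)
  ];
}
```

Bump the height argument and every wall comes back taller - no broken geometry to repair by hand.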
<br />
The pavilion - which I call the Barfolina Pavilion - is viewable here:<br />
<a href="http://jaanga.github.io/gestification/projects/flying-leap-3d/barfolina-pavillion/r1/barfolina-pavillion.html">http://jaanga.github.io/gestification/projects/flying-leap-3d/barfolina-pavillion/r1/barfolina-pavillion.html</a><br />
<br />
It's all still at a Release 1.0 stage, but it has been so much fun thus far that there will be many more releases, with things like transparent roofs and people visiting the pavilion and maybe even an exhibit or two.<br />
<br />
But working on this project made me think a lot about the buildings of the future. These buildings, as we all know, will be built and edited and updated continuously by robots. "The grandparents are coming. We need to add a guestroom." "Saturday is Tammy's birthday. Take down all the walls so there's room for the party." The robots will not want static databases of dimensions. The robots will want to know the program.<br />
<br />
So in coding this 1929 building, perhaps I was designing for 2129...<br />
<br />
<b>Source Code</b>:<br />
<a href="https://github.com/jaanga/gestification/tree/gh-pages/projects/flying-leap-3d">https://github.com/jaanga/gestification/tree/gh-pages/projects/flying-leap-3d</a><br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-58730625127471261102013-09-01T23:09:00.000-07:002013-09-01T23:27:07.843-07:00Folding Polygons the Naughty Way: in 3D, with Intersections and a Video VeneerMy recent post on <a href="http://www.jaanga.com/2013/08/webgl-displaying-video-on-wobbly-moving.html">displaying video on wobbly surfaces</a> amused ArtS of Meno Park CA and to make a long story short we had a delightful lunch together today. It's not often that I find anybody like Art in the meatworld that likes to talk about 3D.<br />
<br />
We talked about many aspects of 3D. For example, a great introduction to JavaScript coding in 3D is here: <a href="http://www.mrdoob.com/projects/htmleditor/">http://www.mrdoob.com/projects/htmleditor/</a>. Look for the word 'wireframeLinewidth' (at the end of the long sentence in the middle) and change the '2' to an '8'. Bingo! You are a programmer.<br />
<br />
And we talked a lot about <a href="https://github.com/sole/tween.js/">tween.js</a> - a brilliant way of morphing all manner of stuff in 2D and 3D.<br />
<br />
But mostly we talked about displaying video on folding polygons. Folding polygons are things that look like the images in the link Art provided to this paper: <a href="http://graphics.berkeley.edu/papers/Iben-RPP-2006-06/">http://graphics.berkeley.edu/papers/Iben-RPP-2006-06/</a>. Basically, if you like origami then you like folding polygons.<br />
<br />
Many of the discussions of folding polygons relate to morphing the polygons on a 2D plane such that no vertex is 'naughty' and crosses over or intersects anybody else's line. This is certainly fun stuff. But even more fun - or fun in a different way - is the exploration of 'naughty' folding and 3D folding.<br />
<br />
After lunch I built some demos that begin to explore the naughty bits.<br />
<br />
Demo:<br />
<a href="http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-5x5.html">http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-5x5.html</a><br />
<br />
This first demo is a version of the Three.js demo: <br />
<a href="http://mrdoob.github.io/three.js/examples/#webgl_materials_video">http://mrdoob.github.io/three.js/examples/#webgl_materials_video</a> <br />
<br />
The code is greatly simplified and made suitable for use as boilerplate for further apps.<br />
<br />
The next demo is here:<br />
<a href="http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-pixelated.html">http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-pixelated.html </a><br />
<br />
Question: can you make a video with holes in it? This app shows the answer is 'yes!' <br />
<br />
The fun thing here is the array that is used to layout the position of the holes. See below - if you look carefully you can see the word 'Art' spelled out. Now is that the Art I had lunch with or is it that thing that people do with chemicals and brushes? Who knows.<br />
<br />
You can see that the array is laid out as a 20 x 10 grid - just as the cubes are laid out. A 1 indicates inserting a cube; a 0 indicates leaving the cube out. I enjoyed this cute, ever so simple 'Art'istic method for creating a 'pixelated' video.<br />
<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>var pixels = [<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,1,1,0,1,1,1,0,0,0,1,1,0,0,0,0,0,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,1,0,1,0,1,1,0,1,1,0,1,1,1,0,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,0,0,0,0,0,1,0,0,0,1,1,1,1,0,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,0,1,1,1,0,1,0,1,1,0,1,1,1,0,1,1,1,1,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>];<br />
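Turning the array above into cube placements takes only a few lines of plain JavaScript. This is an illustrative helper, not the demo's exact code:

```javascript
// Map a flat 0/1 array onto grid coordinates: a 1 places a cube,
// a 0 leaves a hole in the video surface.
function cubePositions(pixels, cols, size) {
  size = size || 1;
  const positions = [];
  pixels.forEach(function (p, i) {
    if (p === 1) {
      positions.push({
        x: (i % cols) * size,            // column
        y: -Math.floor(i / cols) * size  // row, top to bottom
      });
    }
  });
  return positions;
}
```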
<br />
Demo:<br />
<a href="http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-deformed-planes.html">http://jaanga.github.io/cookbook/video-folding-polygons/r1/video-folding-polygons-deformed-planes.html</a> <br />
<br />
This is the actual 'naughty' folded polygon demo. You will note that the 'teeth' are splayed out in 3D, but if they were laid out flat the teeth would intersect. In other words, you could not cut this thing out of a single sheet of paper.<br />
<br />
And then again, even if you could, you might also have some trouble displaying video on the impossibly cut sheet of paper.<br />
<br />
If you asked me a year ago if a script-kiddie of my ability could code the display of a video on an impossible origami fold, I would have laughed. Actually, I am still laughing because the demo kind of sucks. Looking at the video from the back or from the side is vaguely interesting - for about four seconds. And sometimes the video feels a bit 3D-like. But, frankly, I am happiest when it's all 3D through and through. But if you do have any good "Can you do this?" challenges, I would be delighted to hear about them. Art, I am looking at you.<br />
<br />
<b>Source Code:</b><br />
<a href="https://github.com/jaanga/cookbook/tree/gh-pages/video-folding-polygons">https://github.com/jaanga/cookbook/tree/gh-pages/video-folding-polygons</a><br />
<div>
<br />
<br /></div>
Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-78754354009350974822013-08-27T22:10:00.000-07:002013-09-08T11:42:28.342-07:00WebGL: Displaying Video on Wobbly, Moving SurfacesToday there was <a href="https://groups.google.com/forum/#!topic/webgl-dev-list/i3MziMkL2ZU">a message</a> from lyc78026 to the WebGL Developer mailing list that asked this:<br />
<br />
<blockquote class="tr_bq">
I want to implement a curved surface in WebGL, and map a video texture to the surface, is this possible? </blockquote>
<blockquote class="tr_bq">
Something like this:<br />
<a href="https://dl.dropboxusercontent.com/u/73906326/img.png">https://dl.dropboxusercontent.com/u/73906326/img.png </a></blockquote>
<blockquote class="tr_bq">
Thank you!</blockquote>
To which I responded with the following in an email to lyc78026:<br />
<br />
WebGL can be made to do almost anything, so it certainly is possible for a WebGL app to wrap a video around a cylinder as your link indicates.<br />
<br />
Unfortunately, my skill level is not up to the necessary level of WebGL coding, so I tend to use libraries such as Scene.js and Three.js to do the heavy lifting.<br />
<br />
Thus, using these <a href="http://mrdoob.github.io/three.js/examples/#webgl_materials_video">three.js</a> <a href="http://mrdoob.github.io/three.js/examples/#canvas_materials_video">examples</a> as starting points, it was not difficult to get the Sintel video playing on a cylindrical surface:<br />
<br />
Note that you can use your mouse to spin the cylinder.<br />
<br />
<a href="http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-cylinder.html">http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-cylinder.html</a><br />
<br />
When I finally got the video running, it looked a bit 'old-school'. So I wondered a bit, and here is the video playing skewed at an angle on a cylinder.<br />
<br />
<a href="http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-skewed.html">http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-skewed.html</a><br />
<br />
I had no idea that you could do this. And this made me wonder some more. How about bending the video in two directions? And having twenty videos at the same time? Bingo!<br />
<br />
<a href="http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-sphere.html">http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-sphere.html</a><br />
<br />
Then, after looking at this for a while, I began to feel that maybe I had seen stuff like this somewhere before. So what could I build that Remi has not seen before? Remi has seen nearly everything 3D. So here is the video running inside a torus.<br />
<br />
<a href="http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-torus.html">http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-canvas-torus.html</a><br />
<br />
My guess is that once you have viewed 'Blade Runner' from inside the torus, you are good to go in the Matrix.<br />
<br />
So now I was on a roll. The video was running like butter through my finger tips. And then I had a flash of inspiration.<br />
<br />
When I was young, all our TVs had aerials that were two metal rods sticking up, and we called them 'rabbit ears'. What I wanted to do was project the video onto this rabbit: <a href="http://mrdoob.github.io/three.js/examples/#webgl_loader_vtk">http://mrdoob.github.io/three.js/examples/#webgl_loader_vtk</a>.<br />
<br />
Then I could say something like "In the old days all TVs had rabbit ears, but today all you need is the rabbit" ;-)<br />
<br />
But that was a fail. And my attempt at projecting the video onto a 3D model of Walt Disney's head was also a fail.<br />
<br />
And I thought, OMG, lyc78026 will be sorely disappointed if is there is not a good closing demo.<br />
<br />
But as I was having those horrid gloomy thoughts and bad experiences, a light bulb lit up over the top of my head.<br />
<br />
Of course, the video does not want to run on 'bunny.js' and 'WaltDisneyLo.js' because these are static objects. The Sintel video is a moving-picture thing. The video is only going to run on something that is in and of itself 'running'. If this thing is going to work, the closing demo needs to be a moving picture, moving picture [sic] thing. Otherwise the Sintel video will walk off the set.<br />
<br />
And we all know where that is going: to zz85's <a href="http://mrdoob.github.io/three.js/examples/#webgl_geometry_extrude_splines" target="_blank">roller coaster ride</a> [Toggle 'Camera Spline Animation View' to: On]<br />
<br />
Again, I don't have Singaporean elementary school math in my kit of tools.<br />
<br />
But, be still my heart, I do have my algorebra routines - which is algebra as made known to the world by Al Gore while holding a bra.<br />
<br />
If algebra ever drove you to tears then these routines are truly moving pictures.<br />
<br />
Let's see how the Sintel video takes on a transcendental function:<br />
<br />
<a href="http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-transcend.html">http://jaanga.github.io/cookbook/video-surfaces/r1/threejs-video-surface-transcend.html</a><br />
<br />
Pretty moving, huh?<br />
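At heart, the wobble is just a transcendental function evaluated at every vertex of the video surface, once per animation frame. Something along these lines - a hypothetical formula, the demo uses its own:

```javascript
// A time-varying height field: sample this at each vertex of the
// plane carrying the video texture, every animation frame.
function surfaceHeight(x, y, t) {
  return Math.sin(x + t) * Math.cos(y + t); // transcendental wobble
}
```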
<br />
Moving wobbly video can be done in Three.js. QED, mostly; therefore it can be done in WebGL.<br />
<br />
lyc78026: Got any more fun, moving 3D questions?<br />
<br />
<b>Link to Source Code</b><br />
<a href="https://github.com/jaanga/cookbook/tree/gh-pages/video-surfaces">https://github.com/jaanga/cookbook/tree/gh-pages/video-surfaces</a><br />
<br />
<br />Theohttp://www.blogger.com/profile/02877421856947529794noreply@blogger.com0tag:blogger.com,1999:blog-1385289550397169122.post-58883542503536475632013-08-17T16:45:00.000-07:002013-08-17T17:45:18.912-07:00Leap Motion: Towards a New Linguistics<div>
<div>
<b>TL;DR</b> <i>Methods for exporting data from our brains are woefully slow and incomplete. Tools such as the new device from Leap Motion may provide an intriguing ability to extend language itself with color and other attributes.</i></div>
</div>
<div>
<br /></div>
<h4>
Humans Are Not as Fast as Computers as Communicators</h4>
The laptop I am using to write this post can export data to the world at 1,000,000,000 bits of data per second - which is quite slow compared to the speed it can transmit data internally. But what about the speed of getting the data from my brain to the computer?<br />
<br />
If you speak fast, you might output about <a href="http://en.wikipedia.org/wiki/Words_per_minute">one hundred sixty words per minute</a>. At five characters per word plus a space, that adds up to 960 bytes of data per minute. Typing is generally half that speed.<br />
<br />
The world record for piano playing is currently at <a href="http://www.berklee-blogs.com/2012/10/17957/">765 keys played in a minute</a>. A very good guitar player might hit <a href="http://recordsetter.com/world-record/guitar-player/7285">600 notes per minute</a>.<br />
<br />
Of course there is much metadata as well. Voice has pitch and timbre. The piano has acceleration and duration. So there's more data, but not that much more.<br />
<br />
Using a mouse, it may well be possible to click several hundred times in a minute. A game controller is likely to produce even more, but computing an estimate of the data output per minute is beyond my skill set. Even so, it can be no more than a few thousand bytes per minute.<br />
<br />
Is there a pattern here? If there is then it is really simple: The human being, using current methods, is able to produce only a very small amount of new digital data per minute.<br />
<br />
You can read fast, listen fast and view incredibly fast but in terms of creating or generating new information that you can share you are painfully slow when compared to the technology you have in your hand.<br />
<br />
And, frankly, even the receiving of data into your brain is not that fast. Reading 300 words per minute - or 1,500 bytes per minute - is considered speed reading.<br />
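Collecting the back-of-the-envelope arithmetic from the last few paragraphs in one place:

```javascript
// Rough human data rates, using the figures quoted above.
const speakingBytesPerMin = 160 * (5 + 1); // 160 wpm × (5 chars + 1 space) = 960
const typingBytesPerMin = speakingBytesPerMin / 2; // typing ≈ half of speaking
const readingBytesPerMin = 300 * 5; // speed reading: 300 wpm ≈ 1,500 bytes/min
```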
<br />
These speeds have certainly improved over the centuries. We have been reading and writing for three or so thousand years, and speaking and listening to words for perhaps a hundred times that long. Certainly we can gather and disseminate faster than our ancestors. Speed reading with stone tablets was probably not self-evident.<br />
<br />
Nevertheless, the speed of reading and writing comprehension has not improved significantly in my lifetime. Any improvement in our brains' ability to process data has been nowhere near the gain in the ability to transmit data that the Internet brought about. In other words, your grandmother could read a letter about as fast as you can read an email.<br />
<br />
Perhaps there is some kind of an asymptotic limit to the speed at which our brains can import and export data and we seem to be approaching it. At least we seem to be approaching some limit using the tools we have become accustomed to using over the last millennium.<br />
<div>
<br />
<h4>
Emerging Alternative Methods of Writing</h4>
But are letters and spoken words the only means we have at our disposal? Are there other ways/methods/vehicles that we could use to communicate?<br />
<br />
Let's consider some ways.<br />
<br />
I am intrigued by the differences between the ways Westerners and East Asians tend to read. Westerners use a phonetic alphabet and East Asians tend to use ideographs. Both have their strong points.<br />
<br />
And I begin to see a blend occurring. East Asians have been dragged into learning the Roman alphabet. And Westerners have begun to learn a new series of ideographs - the home icon, reload, mute, go back and others - as well as smileys and emoji.<br />
<br />
I also see much increased visual complexity in the data being sent out: web pages blend text and imagery and sophisticated graphic design. Whether codes such as 'lol', 'btw' and 'rofl' speed up your data output is up for grabs.<br />
<br />
<h4>
Full Body Data</h4>
Can we switch gears? Is there a new 'communiKid' on the block? Could we import and export data to and from our brains at 5K bytes per minute or more without inserting tubes and connecting wires into our brains?<br />
<br />
If so, how would we do this?<br />
When people develop computer games, they need to simulate body movements in order to create animated characters. They use a technology dubbed 'motion capture' to do this. A typical method of motion capture is for an actor to have a number of dots attached to their body and be filmed going through a series of movements; the film is then decoded in such a way that the movement of the dots can be saved as X, Y and Z coordinates.<br />
<br />
In this manner, hundreds of thousands of bytes of intentional data can be exported from a human being per minute.<br />
<br />
This may be 'poetry in motion', but the data itself is not a mere poem. Every byte of data was caused by an intentional action that occurred in that actor's brain. This is a huge amount of engaged data being created and logged per minute.<br />
<br />
When we look at ballerinas, golfers, tennis players, ju-jitsu practitioners and others we can see virtually every aspect of the body brought under control of the mind and dedicated to communicating.<br />
<br />
Thus, conceivably, not only can we record huge amounts of data but in the right bodies much of that data could be termed significant or intentional data.<br />
<br />
We cannot, however, all be Tiger Woods. Nor do we have access to Hollywood motion data recording studios. Are there, perhaps, other ways to capture the body's kinetic motions and transform them into digital data?<br />
<br />
<h4>
Dandy New Device</h4>
Golly gee. It's just arrived on my desk.<br />
<br />
I am using the device recently released by <a href="https://www.leapmotion.com/">Leap Motion</a>. It's a tiny device, no bigger than a pack of chewing gum, that records the X, Y and Z position of every finger you move, as well as each finger's three rotation angles and its velocity. It does the same and a bit more with the palms of the two hands. Whether this is 98 data points or more or less is up for debate; nevertheless, the data is coming in over 100 times per second - more than 6,000 events per minute. So is this half a megabyte of data being created per minute? Who knows? All I know is that the more days I code for this thing, the more data I am getting out of it.<br />
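For what it's worth, those guesses do hang together arithmetically. All of the numbers below are this post's estimates, not Leap Motion specifications:

```javascript
// If ~98 tracked values arrive ~100 times per second...
const valuesPerFrame = 98;     // estimated data points per frame
const framesPerMin = 100 * 60; // ~6,000 frames per minute
const valuesPerMin = valuesPerFrame * framesPerMin; // 588,000 values/minute
// ...then at even one byte per value, that is over half a megabyte per minute.
```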
<br />
Is the data as erudite as a Shakespeare sonnet or as elegant as an Einstein equation? I don't need to answer that, do I? But then again, the first human grunts or scribblings were probably nothing to write home about either.<br />
<br />
Thus you can probably feel safe that you will go to your grave with a keyboard and microphone. But what about the children and the grandchildren?<br />
<br />
My intuition is that, using such devices and their successors, we will build new, extra layers of communication. The price is already low enough for such a device to be attached to every phone or laptop.<br />
<br />
Eventually you (or your grandchildren) will be typing in thin air - perhaps more like playing a multi-keyboard organ or air guitar - and what comes out will be some kind of multi-dimensional information stream which in turn builds into a new lexicon: a way of communicating that is phonetic, ideographic, vocal, gestural, 3D and even colorful.<br />
<br />
<h4>
A New Linguistics</h4>
If this sounds phantasmagorical, please do remember:<br />
<br />
We use gestures to write down musical notation, recording the details of the gestures used to create music. And then we read that notation and turn the notes back into gestures.</div>
<div>
Writing is using gestures to record speech by manipulating pen and ink or by frenetic tapping on plastic keys.<br />
<br />
We use gesturing all the time - without thinking. But we subjugate these gestures, we make them the servants of the oral and the aural.<br />
<br />
We all know how to wave hello, make the OK sign, show a thumbs up or give the finger. The <a href="http://www.nytimes.com/2013/07/01/world/europe/when-italians-chat-hands-and-fingers-do-the-talking.html">Italians seem to be able to recognize 250 gestures</a>. These are all simple, basic events. But they are a start, and they indicate that our brains are wired to communicate using our hands - just as dogs and cats are wired to communicate with their tails and ears. The use of appendages to communicate with other beings predates visual and auditory communication and is thus perhaps part of our oldest and deepest thought processes.<br />
<br />
Perhaps it's time we let gestures act in their own right. Using the Leap Motion device - and its eventual successors and competitors - we will have methods of recording, editing and playing back gestures without reference to or mediation by any other device or instrument. [Explanations of these aspects will be provided soon.]<br />
<br />
The changes will not happen overnight. It may even take several generations. Today's babies are learning to swipe on tablets. Tomorrow's babies may learn to swipe in the air.<br />
<br />
And the changes may start to occur on several fronts. We are all beginning to use voice recognition. This lets us get up from our desks and be healthier. Since we are standing and away from our keyboards, I can see gesturing and voice recognition working closely together: in the beginning, adding line breaks and formatting the text while we speak, but in the future, you speaking while gently moving your hands and fingers. The movements alter the pitch and timbre of what you are writing, much as the gestures of a conductor shape the music of the orchestra. What appears on the screen, or in our glasses, is a writing that we today would hardly recognize. The text is full of colors and devices that emphasize or modify the tempo. Diacritical marks indicate the importance of particular aspects to the reader or the writer. The final output is a gushing of sound, music, spoken words, gesture symbols and images. And did I mention that you will need a 3D printer in order to read stuff in the future?<br />
<br />
<h4>
Too Frightening Maybe? Consider the Possible Happiness</h4>
Actually, it all does sound quite frightful, doesn't it? Life is already complex enough. Do we really need even more stuff happening even faster, all at once? Not likely.<br />
<br />
But then consider this. People may knit and talk at the same time. Drive a car and chat on the phone - not. Take a shower and sing simultaneously - that's good. And when you do this, my guess is that you tend to be happy. The more parts of your brain you use, the happier your brain. The more your brain is fully engaged, the more fulfilled you feel.<br />
<br />
This writing - or even the linguistics - of the future may well embody more of your body in service of more comprehensive, faster and more fulfilling methods of communicating.<br />
<br />
As I mentioned before, the Leap Motion device is what is opening up these thoughts. It can recognize movements of a fraction of a millimeter. It's small, and its successors will be embedded in phones, laptops and wearables. If it costs only $80 today, how little will it cost in five years?</div>
<div>
The Leap Motion device is not a game-changer. For example, it won't significantly alter this year's holiday-season technology sales. But it will change the game. In ten years or so, the rules of the game we call life will be different from what they are today. Or is that just me waving my hands at you?<br />
<br />
<div>
<h4>
Links</h4>
<a href="https://www.leapmotion.com/">https://www.leapmotion.com/</a><br />
<br />
And here is a link to some of the code I have been working on that provoked these thoughts. This code could not have been written without the support of Leap Motion and the wonderful <a href="http://en.wikipedia.org/wiki/Threejs">Three.js</a> 3D JavaScript library that enables me to access <a href="http://en.wikipedia.org/wiki/WebGL">WebGL</a> in your browser:<br />
<a href="http://jaanga.github.io/gestification/">http://jaanga.github.io/gestification/</a></div>
<div>
<br />
If you do not have a Leap Motion device, you can get a glimpse of the apps using the links on this page:<br />
<a href="http://jaanga.github.io/gestification/no-leap-view-only.html">http://jaanga.github.io/gestification/no-leap-view-only.html</a><br />
<br />
<br />
<br /></div>
</div>