Archiving Windows demos. Take two.
This is something I've spent a fair amount of time on already. Last year I wrote about some of my findings, but the truth is that it didn't go anywhere.

The main problem is that, for whatever reason, Lagarith is not finding its way into ffmpeg, and rumour says Youtube uses ffmpeg for the transcoding. So if you tried to upload that 2GB video file to Youtube you'd end up getting an "unknown format" error.

Early this month I checked the status of all this, and it seemed that nothing had changed, so I thought about looking for other lossless codecs. This time I tried Huffyuv. The video files are bigger, but at least ffmpeg supports it.

The idea is to let Youtube host the uncompressed files for me (5GB-20GB each). I don't know if that's what actually happens, but I would think such a service must keep the original.

This is what the process looks like:

1. kkapture the demo using Huffyuv for encoding.
Note: I have antialiasing enabled in my nvidia config, as that compresses better than non-antialiased footage.
2. You'll now have a bunch of .avi files that you need to merge. To do so, just use VirtualDub. While you're at it, go ahead and remove the aspect ratio black bars.
3. Upload the file to Youtube. This may take a while; it takes about 24 hours per video here.

And that's it. Now it's up to Youtube to keep updating their videos as technology evolves. It's a slow process, but if the theory is right it only needs to be done once.

You can see the ones I've managed to do already in the (just re-opened) demoscene section.

no comments
Moving back to Spain. Update.
That was a slow year on this blog. Only 3 posts...

It's been one year since we decided to move back to Spain. Since then some people have asked if we had moved already or what. Well... not yet.

There are a few things you need to deal with when moving. Especially if you own a property in the country you're leaving that you want to keep. On top of that, it was my first year as a freelancer. Things were unstable and it was also quite hectic.

2011 seems to be a bit more stable, so it's more likely to happen. Once again we aim for June/July. We'll see how it goes...

Anyway, have a great 2011! :D

2 comments
3 point gradient trick and vertex colors
Continuing with the three.js development: after implementing multiple lights, flat shading was starting to become quite a limitation.



In order to get smooth faces I needed to figure out a way to create what is called a 3 point gradient. I took a look at the usual Flash 3D engines and, to my surprise, I found out that they tend to be limited to just 1 light (correct me if I'm wrong). This is probably because by supporting just 1 light they only need to create a light map per material. Something like this:


That's indeed a fast approach, but it's limited to just 1 light, and then you need to update the map if the ambient light or the colour of the material changes... not really up my street.

The old way of doing this with OpenGL is by using Vertex Colors. Basically, I was after this:



I wondered if there was a way to create this kind of gradient with the Canvas API. I googled a bit to see what people had come up with, but all the approaches seemed quite CPU intensive. Like this one from nicoptere.

Then a crazy idea popped into my mind. What about having a 2x2 image and changing the colour of each pixel depending on the vertex colors, doing that at render time per polygon? The browser would then stretch that image and create the whole gradient in between those 3-4 pixels.

This is the 2x2 image:


Can't see it? Ok, this is the same image scaled up to 256x256 with no filtering (using Gimp):


However, by default (whether you like it or not) browsers filter scaled images. This is what you get in the browsers:


Each browser produces slightly different results, but that's pretty much what you get. This wasn't exactly what I was after; there is too much colour in the corners. However, by looking at the results from all the browsers I realised that the center part of the image was the only part that was always similar.


Then I realised that that's the actual gradient I was after!


Here's the code:

var QUALITY = 256;

var canvas_colors = document.createElement( 'canvas' );
canvas_colors.width = 2;
canvas_colors.height = 2;

var context_colors = canvas_colors.getContext( '2d' );
context_colors.fillStyle = 'rgba(0,0,0,1)';
context_colors.fillRect( 0, 0, 2, 2 );

var image_colors = context_colors.getImageData( 0, 0, 2, 2 );
var data = image_colors.data;

var canvas_render = document.createElement( 'canvas' );
canvas_render.width = QUALITY;
canvas_render.height = QUALITY;
document.body.appendChild( canvas_render );

var context_render = canvas_render.getContext( '2d' );
// Scaled up, the 2x2 image becomes 512x512; offsetting it by half the canvas
// size keeps only the center 256x256, where the gradient is cleanest.
context_render.translate( - QUALITY / 2, - QUALITY / 2 );
context_render.scale( QUALITY, QUALITY );

data[ 0 ] = 255; // Top-left, red component
data[ 5 ] = 255; // Top-right, green component
data[ 10 ] = 255; // Bottom-left, blue component

context_colors.putImageData( image_colors, 0, 0 );
context_render.drawImage( canvas_colors, 0, 0 );

So it was just a matter of changing 3-4 pixels, scaling up and cropping, and then using that as a texture for each polygon. Crazy? Yes. But it works! And fast enough! What's more, it's not limited to just 3 points; a 4th point comes for free (quads).
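
For quads, that 4th point is just the remaining pixel of the 2x2 image. Continuing the snippet above, with a made-up white for the fourth vertex:

data[ 12 ] = 255; // Bottom-right, red component
data[ 13 ] = 255; // Bottom-right, green component
data[ 14 ] = 255; // Bottom-right, blue component

context_colors.putImageData( image_colors, 0, 0 );
context_render.drawImage( canvas_colors, 0, 0 );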


Here's an example of the first material I've applied the technique to (use the arrow keys and ASWD to navigate).

At this point I was surprised that I hadn't thought of this before, and that I hadn't seen it done in any of the usual 3D engines. I did another search and found this snippet from Pixelero that uses the same concept. Good to know I'm not the only one with crazy ideas! :)

Thanks to this, I'm now able to do smooth materials (Gouraud, Phong) that support multiple lights, fog, even SSAO :) We just need the browsers to become faster (which they already seem to be working on).

Of course, if your browser supports WebGL you should be using that instead, but if it doesn't, at least you'll have something better than a text message.

18 comments
three.js r28
For the last few weeks I've been quite focused on developing the engine and I think the API is starting to get quite stable. I'm still unsure about which parts of the API should be included in the actual build and which parts should be kept outside.

For instance, primitives are something you don't want to have in the compiled .js file. If you need a Cube, a Sphere, ... I think it's easier to have them in /js/geometry/primitives/*. Otherwise we'd end up with a 100 kbytes file just for drawing a bunch of particles. I really want to avoid that; right now it's 60 kbytes. But that includes all the renderers, so if you're only going to use CanvasRenderer, you can save 20-30kb by removing the SVGRenderer, WebGLRenderer, ... logic. These scripts do that automatically.

Mr. AlteredQualia has been doing an awesome job with the WebGLRenderer these past weeks. If you have WebGL enabled, these 400k polys are waiting for you.

There is still quite a bit of work to do here and there, especially on the materials side (mapping types, blending, Gouraud, Phong...) but it'll all come in good time :)

7 comments
How do you debug JavaScript?
Joe Parry suggested by email that I write a follow-up to the post about JavaScript IDEs. That's also one of the common questions, and I intended to include it in the first post but I forgot :S

I've heard many people refer to Firebug as the best way to debug JavaScript, but I've never got around to trying it out properly as I started coding directly with/for Chrome. So in my case I mainly use the WebKit Inspector/Developer Tools panel.

However, rather than debugging the code I usually just log stuff, for which console.log(), console.error(), console.warn() and console.info() give me way more than I need. Especially coming from Flash, where the console is so basic that everyone keeps building their own logging library.
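
In case you haven't played with them, this is the kind of thing I mean (the values here are made up):

var fps = 24;

console.log( 'fps:', fps ); // plain output
console.info( 'using canvas renderer' ); // informational note
console.warn( 'fps below 30' ); // highlighted as a warning
console.error( 'context lost' ); // highlighted as an error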

At this last Google I/O I saw this presentation about the Developer Tools that left me quite impressed by how much stuff I was missing.


15 comments
stats.js bookmarklet
Last week Matthew Lein shared a very interesting tip on Twitter.

Apart from FFFOUND!'s bookmarklet I haven't found myself using many of these, as I never felt they were that useful. However, this case is different and, once again, the possibilities of JavaScript amaze me.

Just drag and drop this link to your bookmarks toolbar:

Display Stats

By clicking the saved bookmark you'll be able to insert the Stats.js widget into any website and monitor its FPS/MS/MEM \:D/.
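
If you're curious about how these work: a bookmarklet is just a javascript: URL, so it can inject a script into whatever page you're looking at. A minimal sketch (the URL here is a placeholder, not the actual one):

javascript:(function(){var s=document.createElement('script');s.src='http://example.com/stats.js';document.body.appendChild(s)})();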

7 comments
What IDE do you use for Javascript coding?
I've been asked this question a few times over the past week, so I thought I'd make a "public" answer.

At first I tested some IDEs like NetBeans and Aptana, but somehow I never got to like them. Especially because they tend to add "hidden" folders all around.

Believe it or not, I simply use Ubuntu's default text editor, gedit. Sometimes the syntax highlighting is wrong, and editing one-line files (compressed scripts) is incredibly slow, but apart from that it doesn't get in the way. Simple and fast.

However, auto completion is something I was missing from my FDT/AS3 days. It turns out that auto completion based on text already written in the loaded files is all I need.

26 comments
Making of The Wilderness Downtown


I think it was about one year ago, right after the Google Sphere experiment, that Aaron started talking about the idea of a music video in the browser and asked if I would be up for it. If you consider my background, you'll understand how excited I was about such an opportunity. They (the Google Creative Lab guys) then started looking for a band to work with and a director. Months later we got busy with The Johnny Cash Project, and seeing how well it worked out, it was clear Chris could do a great job directing this project too. Chris happened to be friends with Arcade Fire, who also seemed interested in joining the party.

As the project was considerably big, we also needed a production company to handle the design and development process, and B-Reel seemed a good fit. Although they had some in-house developers, we really needed people with HTML5/JavaScript experience. Finding people with these skills turned out to be a hard task, as most of the people we knew were stuck with either jQuery-kind-of-JavaScript, AS3, or were fully employed/unavailable... As an act of desperation I tweeted this. An old friend of mine answered the call. I knew he had already played around with <canvas> and that his know-how would make him invaluable for the team. At this point we had a band, a track, Chris+Aaron had the script ready, and the team was all set.

At first I didn't realise how clever the idea of using the Google Maps/Street View data set was. It wasn't until we had the first test of the kid running around the neighbourhood that it made an impact on me and brought back old memories. Kudos to Chris and Aaron for envisioning that :)

Production time

Reading the script, I realised how valuable the JavaScript libraries I had been developing for the past year were going to be. We had a sequencer ready to add and remove effects in sync with a tune, a 3D renderer, and Harmony (as it was referred to in the script).

I'm sure most people will think that the drawing tool is basically what I did... not really. Although it used Harmony as the base code, George was in charge of that part. He did a great job creating a new brush out of it, a recorder and repainter and, at the last minute, some keyboard-based input with the letters being drawn using that brush.

Eduard was responsible for the whole mainframe and for making sure everyone was producing compatible code. The sequencer already provided a basic template for the show/hide behaviour of effects, but we also needed pre-loading, sharing data between effects and more things I'm probably not aware of. If that wasn't enough, he created a sequencing tool (in JavaScript) so the director and art director could easily set when each window and effect would appear, and where on the screen.

Jaime took on the maps beast and the geocoding utils. We couldn't just use the embeddable Google Maps directly; the maps were supposed to have some tilting of the camera, so a new maps data viewer was required. We also had to figure out possible routes for the runner to reach the user's home in sync with the music. The Directions API provided the routes, but we had to implement it properly. All this takes a lot of time, research and testing.

I was going to work on all the Street View scenes and also the CGI version of the runner. However, I didn't have skinning nor any animation code for three.js yet, so as these scenes weren't interactive nor customisable, it was quickly decided that B-Reel would create videos for those parts instead, which ended up really great too! In the end I also added the flocking birds and the birds landing on the drawing to my plate.

Now... I can't speak much about the challenges other people faced, but you can get the idea from the ones I did.

(Fast) Street View

At first we intended to simply use Google's Street View. I did a test integrating three.js with it and everything seemed fluid. However, what I didn't know was that, with WebGL enabled, Google's Street View would already be using it. So what other people were seeing was considerably slower than what I was seeing. When WebGL is not enabled, Street View uses a three.js-like renderer. That was fine on Windows and Linux, but not so much on MacOS. It turns out Google Chrome internally uses a different graphics library on MacOS than on Windows and Linux: CoreGraphics on MacOS, Skia on Windows and Linux. Each library has its own pros and cons, but CoreGraphics is especially slow when transforming and clipping big images. Street View would run at 30fps on Windows/Linux while getting 1fps on MacOS.

Like with the maps, we had to build a custom Street View data viewer. Jaime had encountered the same problem while doing the 3D maps with three.js, so he started researching other ways of drawing the maps data that would create the same effect. An additional challenge was that with <canvas> you don't have access to the pixel data of images loaded from another domain. Otherwise we could have just used this technique and called it a day. However, although pixel access is forbidden, context.drawImage() is allowed for copying areas from images hosted on other domains.

By stitching all the tiles the API provides for each panorama we get this image:



After zooming in to a part of the image we get this:



Somehow we need to apply this distortion:



We can do that by cropping columns from the original image and positioning them one after the other horizontally, each one with some vertical scaling depending on its proximity to the center. We get this:



Now we just need to widen the columns a bit to hide the gaps:



The distortion isn't perfect, but it's close enough. This approach turned out to be quite fast on all platforms, and all that was left was to apply the good old Perlin noise to the movement to give it some human feeling.
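
In code, the column trick boils down to something like this. This is a simplified sketch with made-up names and an arbitrary scaling curve, not the production code:

function drawDistorted( context, panorama, columns ) {

	var sliceWidth = panorama.width / columns;

	for ( var i = 0; i < columns; i ++ ) {

		// 0 at the center of the image, 1 at the edges.
		var distance = Math.abs( ( ( i + 0.5 ) / columns ) - 0.5 ) * 2;

		// Columns near the center get stretched vertically.
		var height = panorama.height * ( 1 + ( 1 - distance ) * 0.5 );

		// Crop one column and draw it 2px wider to hide the gaps.
		context.drawImage( panorama,
			i * sliceWidth, 0, sliceWidth, panorama.height,
			i * sliceWidth - 1, ( panorama.height - height ) / 2, sliceWidth + 2, height );

	}

}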

The right heading

Or so I thought. We were missing an important bit. For each Street View we had to place the camera target at a specific position. For instance, the Street View that does the 360 right in front of your house had to start spinning right from your house. But how do you know where to look? Where is the user's house? The Street View service doesn't give any information about that. After studying all the data the API provided, and directly debugging Google Maps, I noticed that each panorama comes with its lat/lng position, and I also had the lat/lng of the house. On top of that, the panorama does provide the angle at which north points.

Very long story short... subtract the lat/lng position of the panorama from the lat/lng position of the house, get the angle of that vector, and combine it with the angle of where north is in the panorama. Voila! :)
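
Roughly like this (names made up; for such short distances it's fine to treat lat/lng as a flat plane and ignore the cos(latitude) correction):

function headingToHouse( house, panorama, northDeg ) {

	var dLng = house.lng - panorama.lng;
	var dLat = house.lat - panorama.lat;

	// Bearing of the house as seen from the panorama, 0 = north.
	var bearing = Math.atan2( dLng, dLat ) * 180 / Math.PI;

	// Correct by where north points to in the panorama.
	return bearing - northDeg;

}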

Birds



Although Guille and Michael had made great progress with the birds, we felt we could do better. After considering the options, my approach was to use a 3-polygon mesh: one polygon for each wing and one for the body. Then I animated it by sinusoidally moving the vertices at the tips of the wings up and down. Although it didn't look like a crow, it gave, once again, a close-enough effect. Especially when you have a bunch of them following a boids simulation.
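
The flapping itself is almost embarrassingly simple. Something along these lines (illustrative names; frequency and amplitude are guesses):

function flap( wingTipVertices, time ) {

	var y = Math.sin( time * 0.01 ) * 4;

	// Both wing tips share the same vertical offset.
	wingTipVertices[ 0 ].y = y;
	wingTipVertices[ 1 ].y = y;

}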

Tinting

This one is going to be controversial... The Street View and Maps footage needed to be colour corrected, because the action is supposed to take place in the morning, so some yellowish tint was needed.



Again, we can't access the pixel data of images hosted on another domain, so the only option was to layer a colour on top of the image and play with blending modes. However, take a look at the blending modes available... only lighter could be of some use here. So we first tried drawing a rectangle on top with a yellow colour and the lighter blending mode enabled. That kind of worked, but it washed the footage out.



There is another blending mode though... darker. It was taken out of the specification, but it still remains in WebKit, and I hope they put it back because this is a good example of why it's useful. By drawing that yellow colour using the darker blending mode on top of the image, and then drawing the original image on top using the lighter blending mode, we achieved a really nice yellow tint and contrast that worked quite well for simulating the light you get in the morning.



Notice how the darks stay dark. Here's the actual snippet:

var context = texture_mod.getContext( '2d' );

// Original footage first.
context.drawImage( texture, 0, 0 );

// Sepia rectangle with the (now WebKit only) 'darker' mode. Darks stay dark.
context.globalAlpha = 0.5;
context.globalCompositeOperation = 'darker';
context.fillStyle = '#704214';
context.fillRect( 0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT );

// The original image again with 'lighter' to bring the highlights back.
context.globalCompositeOperation = 'lighter';
context.drawImage( texture, 0, 0 );

Tween and Manual Tween

Most of the movements in the video use the well-known Penner easing equations.

Sole had been working on a simplified JavaScript tweening library some months before this project, and it proved really useful. You probably know how tweening libraries work... you define an animation with the properties to be animated, its delay and so on... then you start it. However, we needed to be able to go backwards at any point of the video (at least I needed it ;P). So with the Manual Tween alternative we were able to move freely to any point of the virtual timeline.
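
The gist of a manual tween (a made-up sketch, not Sole's actual API) is that, instead of the library stepping the animation from its own timer, you hand it a position on the timeline, so it can be evaluated at any time, in any direction:

function manualTween( from, to, easing, position ) { // position goes from 0 to 1

	return from + ( to - from ) * easing( position );

}

// Scrubbing backwards is just calling it with a smaller position.
var x = manualTween( 0, 100, function ( k ) { return k * k; }, 0.25 ); // 6.25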

Now that I look back, instead of having 2 different tweening libraries, the manual one should probably have been part of the sequencer code... hmmm... something to consider...

Optimising

Launch day was approaching and the server guys were giving us recommendations on changes we could make to keep the server happy. One of them concerned the tree animation I was using in the last Street View part. It was something I had intended to do but hadn't had time for yet: instead of having 63 separate images for a growing tree animation, it's better to pack them all into a single image.



We have seen this in previous projects, haven't we? ;)
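
Playing the animation back from the combined image is then just a matter of cropping with drawImage()'s source rectangle. A sketch with assumed frame sizes:

var FRAME_WIDTH = 128, FRAME_HEIGHT = 256; // made-up dimensions

function drawTreeFrame( context, sheet, frame, x, y ) {

	// Crop frame number 'frame' out of the horizontal strip...
	context.drawImage( sheet,
		frame * FRAME_WIDTH, 0, FRAME_WIDTH, FRAME_HEIGHT,
		x, y, FRAME_WIDTH, FRAME_HEIGHT ); // ...and draw it on the canvas.

}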

The second one was about the JavaScript files... we had 40+ of them. The more files you have, the more server resources each user consumes. A server has a limited number of connections available; if that number is, say, 40, and each user needs to open 40 files to see the website, then only 1 user at a time can see the website. Merge those 40 files into 1 and 40 users will be able to visit at a time. We weren't using the compressed/compiled index by launch time, and I believe that was one of the main reasons we suffered some downtimes.
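
The combining step can be as simple as concatenating the files in dependency order. A minimal Node-style sketch (file names are placeholders):

var fs = require( 'fs' );

var files = [ 'js/sequencer.js', 'js/renderer.js', 'js/main.js' ];

var merged = files.map( function ( file ) {

	return fs.readFileSync( file, 'utf8' );

} ).join( ';\n' ); // the ';' avoids breakage between files that don't end in one

fs.writeFileSync( 'index.js', merged );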

This script shows pretty much how to combine and minify easily.

Ok, that's enough

I know, that was quite long, wasn't it? I hope this is of some use to someone, and I hope you liked the actual piece too. Now, let's move on to WebGL ;)

PS: If you wonder about any other technical details, feel free to use the comments and I'll try to address them.

23 comments
Deleted myself from Facebook
In case you were wondering... and man... it feels good. So much space in my head now :)

7 comments
Javascript size coding challenges
Last Friday MIX Online and An Event Apart launched an unusual JavaScript contest. 10K Apart pushes developers to reduce their code so it fits in 10,000 bytes. This sounds like a nice challenge, but strangely they allow specific external libraries, which, in my opinion, overcomplicates things.

However, just to see what could be done, I quickly checked how many bytes I would need for a simple and easy to use 3D engine. That turned out to be about 1,000 bytes, which left me with 9,000 bytes. Now, this may sound great, but it's kind of discouraging. Now I understand why the 64 kbyte coders always say that 64 kbytes are way harder to do than 4 kbytes: the fewer bytes you have, the fewer possibilities there are to deal with. Anyway, if I come up with a good idea I may try doing something for the contest.

Today I found out about JS1k, which seems to be Peter van der Zee's response to the former contest (no external libs, just 1,024 bytes of plain JavaScript), and it already had a few submissions.

Once I got the 3D engine working I got a bit code-addicted and ended up doing a plasma in 3D in 1,464 bytes. It was already nice looking, so it was just a matter of shaving off those extra 440 bytes. After learning some tricks and testing things here and there, it got down to 996 bytes. Here's the result:



Looking forward to seeing what p01 has to "say" about all this... :)

EDIT: After reading Diego's post and finding it interesting to see which tricks he used, I thought I should also share the non-obfuscated code of my entry.

( function () {

	var res = 25, res3 = res * res * res,
	i = 0, x = 0, y = 0, z = 0, s, size, sizeHalf,
	vx, vy, vz, rsx, rcx, rsy, rcy, rsz, rcz,
	xy, xz, yx, yz, zx, zy,
	cx = 0, cy = 0, cz = 1, rx = 1, ry = 1, rz = 0,
	t, t1, t2, t3,
	sin = Math.sin, cos = Math.cos, pi = Math.PI * 3,
	mouseX = 0, mouseY = 0, color,
	doc = document, body = doc.body,
	canvas, context, mesh = [],
	width = innerWidth,
	height = innerHeight,
	widthHalf = width / 2,
	heightHalf = height / 2;

	body.style.margin = '0px';
	body.style.overflow = 'hidden';

	canvas = doc.body.children[0];
	canvas.width = width;
	canvas.height = height;

	context = canvas.getContext( '2d' );
	context.translate( widthHalf, heightHalf );

	doc.onmousemove = function ( event ) {

		mouseX = ( event.clientX - widthHalf ) / 1000;
		mouseY = ( event.clientY + heightHalf ) / 1000;

	};

	// Fill the mesh with a res x res x res grid of points centered around the origin.
	while ( i++ < res3 ) {

		mesh.push( x / res - 0.5 );
		mesh.push( y / res - 0.5 );
		mesh.push( z / res - 0.5 );

		z = i % res;
		y = !z ? ++y % res : y;
		x = !z && !y ? ++x : x;

	}

	setInterval( function () {

		context.clearRect( - widthHalf, - heightHalf, width, height );

		cx += ( mouseX - cx ) / 10;
		cz += ( mouseY - cz ) / 10;

		t = new Date().getTime();
		t1 = sin( t / 20000 ) * pi;
		t2 = sin( t / 10000 ) * pi;
		t3 = sin( t / 15000 ) * pi;

		rx = t / 10000;

		rsx = sin( rx ); rcx = cos( rx );
		rsy = sin( ry ); rcy = cos( ry );
		rsz = sin( rz ); rcz = cos( rz );

		i = 0;

		while ( ( i += 3 ) < res3 * 3 ) {

			x = mesh[ i ];
			y = mesh[ i + 1 ];
			z = mesh[ i + 2 ];
			// Plasma: a sum of sines decides which points are visible and how big they are.
			s = sin( t1 + x * t1 ) + sin( t2 + y * t2 ) + sin( t3 + z * t3 );

			if ( s >= 0 ) {

				xy = rcx * y - rsx * z;
				xz = rsx * y + rcx * z;

				yz = rcy * xz - rsy * x;
				yx = rsy * xz + rcy * x;

				zx = rcz * yx - rsz * xy;
				zy = rsz * yx + rcz * xy;

				vx = zx - cx;
				vy = zy - cy;
				vz = yz + cz;

				if ( vz > 0 ) {

					color = ( 64 / vz ) >> 0;
					context.fillStyle = 'rgb('+ ( color - 16 ) + ','+ ( color * 2 - 128 ) + ','+ ( color + 64 ) + ')';

					size = s * 30 / vz;
					sizeHalf = size / 2;

					context.fillRect( ( vx / vz ) * widthHalf - sizeHalf, ( vy / vz ) * widthHalf - sizeHalf, size, size );

				}

			}
		}

	}, 16 );

} )();

Which, after compression, ends up like this:

var O=24,d=O*O*O,X=0,U=0,T=0,S=0,W,j,L,o,m,k,b,q,ac,n,ab,l,K,I,r,p,Z,Y,C=0,A=0,w=1,G=1,F=1,E=0,V,P,N,M,u=Math.sin,f=Math.cos,v=Math.PI*3,R=0,Q=0,H,g=document,D=g.body,h,aa,B=[],a=innerWidth,e=innerHeight,J=a/2,c=e/2;D.style.margin="0px";D.style.overflow="hidden";h=g.body.children[0];h.width=a;h.height=e;aa=h.getContext("2d");aa.translate(J,c);g.onmousemove=function(i){R=(i.clientX-J)/1e3;Q=(i.clientY+c)/1e3};while(X++<d){B.push(U/O-.5);B.push(T/O-.5);B.push(S/O-.5);S=X%O;T=!S?++T%O:T;U=!S&&!T?++U:U}setInterval(function(){aa.clearRect(-J,-c,a,e);C+=(R-C)/10;w+=(Q-w)/10;V=new Date().getTime();P=u(V/2e4)*v;N=u(V/1e4)*v;M=u(V/15e3)*v;G=V/1e4;b=u(G);q=f(G);ac=u(F);n=f(F);ab=u(E);l=f(E);X=0;while((X+=3)<d*3){U=B[X];T=B[X+1];S=B[X+2];W=u(P+U*P)+u(N+T*N)+u(M+S*M);if(W>=0){K=q*T-b*S;I=b*T+q*S;p=n*I-ac*U;r=ac*I+n*U;Z=l*r-ab*K;Y=ab*r+l*K;o=Z-C;m=Y-A;k=p+w;if(k>0){H=(64/k)>>0;aa.fillStyle="rgb("+(H-16)+","+(H*2-128)+","+(H+64)+")";j=W*30/k;L=j/2;aa.fillRect((o/k)*J-L,(m/k)*J-L,j,j)}}}},16);

14 comments