


Making of Starbucks Love Project


Over the past few weeks I've been busy working on the Starbucks Love Project website for Tool and BBDO.

I had the pleasure to work with a great team: Aaron Koblin, Hello Enjoy, Medios y Proyectos and We Are Mammoth.

While developing my bits of the site I faced some little challenges that I thought I would share, so other people can consider my solutions as options for similar problems.

The website had many release dates. This is something I thought was going to be hard to handle, but the guys at WAM managed it smoothly. Essentially, though, the project had two phases.

Phase I - Love Gallery & Drawing Tool

My responsibilities for the first phase were to build the love gallery and the drawing tool.



The idea behind the love gallery was to collect love-related drawings submitted by people, very much like Aaron Koblin's The Sheep Market. Unlike The Sheep Market, though, there was no limit to the number of submissions; that alone was quite a challenge to start with.

The visualisation part wasn't decided at the start; interfaces like Google Maps (tiling) were considered, but in the end the guys at BBDO decided it had to be similar to Cooliris. As if supporting an infinite number of items wasn't challenging enough, now we had to emulate with Flash (a software renderer) what others had done with OpenGL (a hardware renderer).

Instead of using Papervision3D or Away3D I decided to use my own engine, as it was easier for me to tweak and to strip out computations that weren't needed. You can't have many 3D planes (especially not an infinite number of them), so the way it works is by having a 40x10 grid of 3D planes whose x positions are offset depending on the camera position. From each plane's x,y position I can figure out which item ID it is supposed to show and then load the data from the VO nicely set up by the WAM guys. This sounds straightforward, but the code ends up being quite messy.
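As a rough illustration, here is the recycling idea boiled down to a sketch (in JavaScript rather than the original AS3, with made-up names like PLANE_WIDTH, itemsVO and setItem standing in for the real things):

// A fixed 40x10 pool of planes is reused forever. For every virtual column
// currently visible, the plane responsible for it is the one sharing the same
// column index modulo 40; it gets repositioned and told which item it now shows.
var COLS = 40, ROWS = 10;
var PLANE_WIDTH = 160, PLANE_HEIGHT = 120;	// made-up sizes

function updateWall(planes, cameraX, itemsVO) {
	var firstColumn = Math.floor(cameraX / PLANE_WIDTH);

	for (var col = firstColumn; col < firstColumn + COLS; col++) {
		for (var row = 0; row < ROWS; row++) {
			var plane = planes[(((col % COLS) + COLS) % COLS) * ROWS + row];
			plane.x = col * PLANE_WIDTH - cameraX;	// slides with the camera
			plane.y = row * PLANE_HEIGHT;

			// The item ID falls straight out of the virtual grid position
			// (assuming column-major numbering, which may not match the real VO).
			plane.setItem(itemsVO.getItem(col * ROWS + row));
		}
	}
}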

Loading lots of little images from the server was another challenge. You spend more time asking the server for an image than actually receiving it. To make this a bit smoother we composed big images (Big Ass Images, as the Americans liked to call them) containing hundreds of little images.



So with just one request to the server we would receive lots of images inside a single image. However, we made some bad decisions here. The dimensions of the big images are way too big (2800x2850), which means too much load time and hurts the experience. And the one that annoys me most is that I didn't realise in time (thanks anyway Mike!) that saving those big images as .png or even .gif would not only have made them look better but would also have produced smaller files. Look at the images, they look horrible! Oh well... next time.
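Format gripes aside, the client side of the atlas idea is trivial. A sketch (again JavaScript/canvas instead of the original AS3, with placeholder cell sizes; the real layout was defined server-side):

var CELL = 140;					// placeholder cell size, not the real one
var SHEET_COLS = 20, SHEET_ROWS = 20;
var PER_SHEET = SHEET_COLS * SHEET_ROWS;	// drawings packed into one request

// sheets is an array of loaded Image objects, one per "big ass image".
function drawItem(ctx, sheets, id, dx, dy) {
	var sheet = sheets[Math.floor(id / PER_SHEET)];
	var index = id % PER_SHEET;
	var sx = (index % SHEET_COLS) * CELL;
	var sy = Math.floor(index / SHEET_COLS) * CELL;
	ctx.drawImage(sheet, sx, sy, CELL, CELL, dx, dy, CELL, CELL);	// copy just that cell
}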

Next was interaction. People had to be able to select items on the 3D wall, which means unprojecting 2D screen points into 3D coordinates. That's something I didn't have in my engine, nor was it something my mind could easily grasp, so I took a look at how Papervision3D and others did it. When I finally managed to understand and implement it, it still wasn't working 100% well; some angle computations lagged behind and time was running out. So I went down the good old Color ID route.

What? Color ID? Yes, Color ID. What we're after is knowing which 3D object is under the mouse. So every frame we render the scene twice. The first pass uses a big plane covering the whole wall, textured with a bitmap of big "pixels" (one per plane), each with a unique color. Mapping the blue component to X and the green component to Y, the top-left plane gets the color 0,0,0, the second from the top 0,1,0, the first plane of the second column 0,0,1, and so on.



This is how the first pass of the render looks. The user never sees it, because as soon as it's done we copy it to a bitmap and draw the real scene on top. If you look closely you can see that the right side of the wall is getting blue; that's because the Color ID is reaching 40. With this in a BitmapData, it's just a matter of doing a getPixel(mouseX, mouseY); from the color we get the grid coordinates, apply the camera offset and we have the ID. Then we know which plane to animate/select. No complicated 3D-to-2D formulas, just one getPixel per frame. Fast.

(Actually, after writing this I realise it could have been even simpler: just use the plane ID as the color instead of mapping the blue component to X and the green component to Y... live and learn.)
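For what it's worth, here is the whole trick condensed into a small sketch (JavaScript/canvas instead of the original AS3 BitmapData, using the simpler plane-ID-as-color variant; screenQuad is a made-up name for a plane's projected rectangle):

var pickCanvas = document.createElement('canvas');
pickCanvas.width = 800;
pickCanvas.height = 600;
var pickCtx = pickCanvas.getContext('2d');

// First pass: draw every plane flat-coloured with its ID, on a canvas nobody sees.
function renderPickingPass(planes) {
	pickCtx.fillStyle = '#000';
	pickCtx.fillRect(0, 0, pickCanvas.width, pickCanvas.height);
	for (var i = 0; i < planes.length; i++) {
		var id = i + 1;			// keep 0 for "nothing under the mouse"
		pickCtx.fillStyle = 'rgb(0,' + ((id >> 8) & 0xff) + ',' + (id & 0xff) + ')';	// ID packed into green/blue
		var q = planes[i].screenQuad;	// hypothetical projected rectangle {x, y, width, height}
		pickCtx.fillRect(q.x, q.y, q.width, q.height);
	}
}

// Picking: one pixel read per frame instead of unprojection maths.
function getPlaneUnderMouse(mouseX, mouseY) {
	var px = pickCtx.getImageData(mouseX, mouseY, 1, 1).data;	// [r, g, b, a]
	return ((px[1] << 8) | px[2]) - 1;				// -1 means no plane
}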

There was yet another little challenge. This one was more design than dev, but somehow nobody managed to come up with a scrollbar that could scale to an infinite number of items. Once we had 15k drawings, moving the scrollbar one pixel would move the whole wall by 100+ items, which meant a lot of movement and, especially, a lot of bandwidth, because the user kept reaching areas where more items needed to be loaded. The solution only got implemented at the end of Phase II, after I found this article.



The drawing tool was an easier task, especially because I already had a drawing tool started for another project, so many of the challenges were already sorted. Explaining all the tricks used in the tool would require an article of its own, so maybe I'll write about it once I finish that other project.

Phase II - Map & Video Wall

For the second phase the site was going to collect videos sent from many countries of people singing along to 'All You Need Is Love'. (I wonder what it is with this song; I'm hearing it in every ad campaign this Christmas, maybe it's a trend.) My tasks for this phase were to build a map and another infinite wall (this time 2D).



Here I had a good idea. We needed to place arrows where the countries are. An easy way would have been to build a little tool so anyone could place them, but if we then had to change the shape of the map (which we did), we would have had to remap all the arrows. Instead we used longitude/latitude values for each country (which are easy to find on the net).

There are usually two kinds of world maps: one with the poles distorted (the Equirectangular projection) and one without (the Mercator projection, like Google Maps). The first one is good for mapping onto a sphere, but please stop using it on 2D interfaces.

The only thing left to figure out was the formula to translate latitude/longitude into x,y. The Equirectangular one is easy:

private function getPoint(lat : Number, lon : Number, mapwidth : uint, mapheight : uint) : Point
{
	return new Point(((lon+180) / 360) * mapwidth, ((90-lat) / 180) * mapheight);
}

The Mercator one took a bit longer to work out, but here it is, easy to reuse in AS3:

private function getMercatorPoint(lat : Number, lon : Number, mapwidth : uint, mapheight : uint) : Point
{
	return new Point(((lon+180) / 360) * mapwidth, ((90-GudermannianInv(lat)) / 180) * mapheight);
}

private function GudermannianInv(lat : Number) : Number
{
	// atanh(sin(latitude)) scaled by 90/PI (= 28.6478898) so the result is in "map degrees":
	// roughly +/-85 degrees of latitude reach the top/bottom edges, just like Google Maps.
	// 0.0174532925 converts degrees to radians.
	var sign : Number = lat > 0 ? 1 : -1;
	var sin : Number = Math.sin(lat * 0.0174532925 * sign);
	return sign * (Math.log((1 + sin) / (1 - sin)) / 2) * 28.6478898;
}

With this sorted out, as long as the designers didn't distort the map, placing the arrows would always be an easy task.
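Just to make the maths concrete, here is the same Mercator mapping in JavaScript with a quick check for London (the 1024x1024 map size and the arrow placement are only for illustration):

function getMercatorPoint(lat, lon, mapWidth, mapHeight) {
	var sin = Math.sin(lat * Math.PI / 180);
	var mapLat = (Math.log((1 + sin) / (1 - sin)) / 2) * 28.6478898;	// inverse Gudermannian, in "map degrees"
	return { x: ((lon + 180) / 360) * mapWidth, y: ((90 - mapLat) / 180) * mapHeight };
}

var p = getMercatorPoint(51.5, -0.12, 1024, 1024);	// London
// p is roughly { x: 512, y: 340 }; an arrow sprite just gets arrow.x = p.x, arrow.y = p.y.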

At this point I needed to add animations. Coincidentally, eaze had just been released, so I used this as an excuse to give it a try. Verdict: perfect.



Then it was back, yet again, to optimising a wall with an infinite number of entries. However, I didn't stress about it too much this time, because there were 200 walls instead of just one, so the items would spread across them, making each wall less CPU intensive.

I used eaze again to animate everything, and again it proved to be great.

All in all, it was quite a big project done in a very short time, and I think we were lucky that we all had quite a bit of experience; otherwise we wouldn't have been able to pull this one off.

I couldn't finish this extremely long post without sending some props to the WAM guys for their DoneDone tool, which I hope they make free for public projects at some point. It proved very efficient. Think of it as the Basecamp of issue trackers.

Branching is fun.
It all started with this animated gif I found lurking on the internets some months ago...



I saved it for the day I was inspired enough to make the effect with Javascript. That day was yesterday...



I know, I know, this isn't really branching; it's just a disgusting worms kind of effect I got while playing with it. The code is simple: a bunch of particles, and every step you generate a random value that you apply to the direction of each particle (a.k.a. a random walk). To get branching you then randomly spawn more particles at the positions of the existing ones. Here's the result:
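If the description is too terse, this is the whole thing in a stripped-down form (it assumes a <canvas id="canvas"> on the page; the probabilities and step sizes are mine, not the original source's):

var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
var particles = [{ x: canvas.width / 2, y: canvas.height / 2, angle: 0 }];

setInterval(function () {
	for (var i = 0; i < particles.length; i++) {
		var p = particles[i];
		p.angle += (Math.random() - 0.5) * 0.5;	// random walk: nudge the direction
		p.x += Math.cos(p.angle);
		p.y += Math.sin(p.angle);
		ctx.fillRect(p.x, p.y, 1, 1);

		// Branching: occasionally spawn a new particle at this position.
		if (Math.random() < 0.01 && particles.length < 500) {
			particles.push({ x: p.x, y: p.y, angle: p.angle + (Math.random() - 0.5) * 2 });
		}
	}
}, 1000 / 60);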



I posted the effects on Twitter and minutes later @thespite emailed me a modified version of this last iteration.



That was interesting! I didn't know <canvas> had a method for blitting. It gave the effect of algae continuously growing while the camera moved back. To enhance the effect I changed the path drawing to thicker circles.
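I don't know exactly what the modified version does under the hood, but the blitting idea amounts to something like this (again assuming a <canvas id="canvas">, and copying through an offscreen canvas rather than drawing the canvas onto itself):

var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
var buffer = document.createElement('canvas');
buffer.width = canvas.width;
buffer.height = canvas.height;
var bufferCtx = buffer.getContext('2d');

// Every frame: blit the current picture back slightly smaller, then keep drawing on top.
function zoomOut(amount) {
	var w = canvas.width, h = canvas.height;
	bufferCtx.clearRect(0, 0, w, h);
	bufferCtx.drawImage(canvas, 0, 0);	// snapshot the current frame
	ctx.clearRect(0, 0, w, h);
	ctx.drawImage(buffer, amount, amount, w - amount * 2, h - amount * 2);	// paste it back a bit smaller, centred
}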



That looked nice, but it was starting to get too visually complex (and CPU intensive). I liked the 2D version better, and then I wondered how it would look if I wired some of the values to the sound amplitude of a tune, using <audio> again. Here's the result.



I spent 10x more time looking for a track that suited the effect than doing the effect itself. In the end I found a nice track at the always-interesting enoughrecords netlabel.

And... that is all for today... as usual, with Javascript, the source code is one right click and one left click away. Have fun!

PS: It was nice to see that most of the effects worked on my Android phone. I guess they also work on iPhone? :D

Setting up Ubuntu 9.10 on a Eee PC 1101HA
Skip to the steps.

The iBook G4 wasn't pleasing me anymore. Because not many Linux distros support PowerPC these days, I was stuck with MacOS (which, remember, I'm not a fan of). So it was time for a replacement.

After looking at some netbooks I decided to go the Asus Eee PC route, the 1101HA to be precise. I found it quite cheap for the features it has (decent resolution, 160GB, 10 hours of battery... all for £256). I bought it from Amazon and a couple of days later it was on my desk.



Unfortunately it comes with a bloated Windows XP. I don't mind Windows XP, but I don't like having Microsoft Live, Office Student Edition and all the ASUS crap installed, so the first thing you end up doing is uninstalling all of that. Unfortunately again, the recovery mode reinstalls all that crap too. Then I uninstalled too many things and had to learn how to reinstall the system from a USB key.

Anyway, the biggest disappointment was the graphics chip (GMA 500). The hardware side of it seems good, but the software side (the drivers) is poor. Basically, the chip supports OpenGL 2.0, but the driver doesn't, and you're not able to play HD videos. All because of shitty drivers. Lame.

There's not much you can do on Windows but wait until they release proper drivers (I won't even go into the details of why they're not doing it). So the next thing was installing Ubuntu alongside Windows XP.

I installed Ubuntu 9.10. The process was as simple as it always is. However, once installed, the resolution wasn't the 1366x768 it was supposed to be; it was 1024x768 instead, with no option to change it. A couple of tweaks were needed. Here they are:

Steps

1. Install the Poulsbo graphics driver. Just use the PPA script from here.
2. Replace NetworkManager with WICD. With NetworkManager, the wireless stops working randomly. WICD can easily be installed from the Synaptic Package Manager.

And that's it. The result is a very robust system, with 2D acceleration (HD videos play smoothly) and 3D acceleration (OpenGL works). Funny that, for the first time, Linux performs better than the OS that came with the device. Things are changing!

Eaze
Thanks to the HIDIHO! blog I found out about a new tween library developed by Philippe Elsass.

Yes, yet another tween library. My favourite at the moment is BetweenAS3 and I thought no library could change that, but this one brings a fresh syntax.

eaze(target).to(duration, { x:dx, y:dy })
    .onUpdate(handler, param1, etc)
    .onComplete(handler, param1, param2, etc);

I really miss the (not so old) days of Zeh Fernando's Tweener, when everything was simple and clear, without twists and clubs... BetweenAS3 gave me those days back, but it was good to read that Philippe shared my thoughts.

Now, let's end the drama and start having fun again!
http://code.google.com/p/eaze-tween/

svg tag + audio tag = 3D Waveform


As always, you just need a bit of practice with a language to start using it in nice ways. Now that I had that little 3D engine working, and with <audio> around, it was time to produce an idea I've always had in mind: a 3D interpretation of a waveform.

I'm sure the first thing you'll think after checking this experiment is... What? I didn't know I could analyse the sound signal with the <audio> tag?! ... Well, you can't. If you check the source code (*hint* right click -> view source) you'll see an array of numbers. These are the sound level values of the waveform, sampled at 30 fps.
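Since there's no real-time analysis, keeping the visuals in sync is just a matter of indexing that array with the <audio> element's playback position. A stripped-down version (assuming the array is called amps and holds 30 values per second):

var audio = document.getElementsByTagName('audio')[0];
var amps = [ /* precomputed sound levels, 30 per second of audio */ ];

function getCurrentAmplitude() {
	// currentTime is in seconds; the array holds 30 values per second.
	var index = Math.floor(audio.currentTime * 30);
	return amps[Math.min(index, amps.length - 1)] || 0;
}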

I got these values using the BASS library for Linux. Unfortunately, my C skills aren't that good (example) and I can't seem to get the values at exactly the fps I want without desynchronisation. The first part of the visualisation is spot on, but by the end things aren't that impressive. I'll keep researching this and update the experiment with a new array of values whenever I crack it.

Dean McNamee showed me the way. Forget C and BASS. All you need is Python, the tune as a .wav, and 23 lines of magic.

# python script.py audio.wav > output.txt

import math
import struct
import wave
import sys

w = wave.open(sys.argv[1], 'rb')
# We assume 44.1k @ 16-bit, can test with getframerate() and getsampwidth().
sum = 0
value = 0
amps = [ ]

for i in xrange(0, w.getnframes()):
	# Assume stereo, mix the channels.
	data = struct.unpack('<hh', w.readframes(1))
	sum += (data[0]*data[0] + data[1]*data[1]) / 2
	# 44100 / 30 = 1470
	if (i != 0 and (i % 1470) == 0):
		value = int(math.sqrt(sum / 1470.0) / 10)
		amps.append(value)
		sum = 0

print amps

I've updated the experiment with the new values; now it's perfectly in sync all the time, which makes you appreciate the little sounds at the end much more. Thanks Dean!

As you'll see in the source code, the rest is very simple. Create a bunch of cubes, place them one in front of the other, and modify their scaleY depending on the waveform value they relate to at that step.

The end result is quite interesting, I think, and I hope to do more like this. Hopefully more interactive next time.

What turned this from just a nice experiment into an awesome one was that the eedl guys allowed me to use their (great) tunes for my experimentation.

Thanks once again!

More and more Javascript
Seems like I'm still hooked on Javascript. For the last few months, in my spare time, I've been toying more and more with it, creating little pieces that will serve as personal benchmarks for browser performance improvements.

For the next few links I recommend using a WebKit-based browser; otherwise your browser may crash.

The first idea I wanted to try was creating a canvas that uses checkboxes as pixels and then displaying animations with it, similar to textmode renderers.
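The checkbox "canvas" itself takes very little code. A stripped-down sketch (the grid size and the setPixel name are mine):

// Build a grid of checkboxes and expose a setPixel(x, y, on) on top of it.
var COLS = 80, ROWS = 25;
var boxes = [];

for (var y = 0; y < ROWS; y++) {
	for (var x = 0; x < COLS; x++) {
		var box = document.createElement('input');
		box.type = 'checkbox';
		document.body.appendChild(box);
		boxes.push(box);
	}
	document.body.appendChild(document.createElement('br'));
}

function setPixel(x, y, on) {
	boxes[y * COLS + x].checked = on;	// a checked checkbox is a lit "pixel"
}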



I posted the link on Twitter and minutes later Aaron sent me a drawing done with the checkboxes by Valdean Klump. On Firefox, the experiment didn't work correctly; it just created a grid anyone could pixelate in. Because of this, I turned it into a drawing tool. I thought it would be fun to also generate links on the fly so people could easily share their checkbox-based drawings. This is how it ended up:



(Press Shift for the eraser tool)
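As for the links generated on the fly: I won't pretend this is how the actual experiment encodes them, but reusing the boxes array from the sketch above, a naive version could simply pack the grid state into the URL hash:

// One simple (hypothetical) encoding: one character per pixel in location.hash.
function gridToHash() {
	var bits = '';
	for (var i = 0; i < boxes.length; i++) bits += boxes[i].checked ? '1' : '0';
	location.hash = bits;	// the resulting URL is the shareable link
}

function hashToGrid() {
	var bits = location.hash.slice(1);
	for (var i = 0; i < bits.length && i < boxes.length; i++) {
		boxes[i].checked = bits.charAt(i) === '1';
	}
}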

Some days later, Joa Ebert ported some Strange Attractor code to Silverlight to compare its performance with Flash. I knew that Javascript <canvas> was going to be way slower than either, but I was curious by how much.



I believe the code could get some platform-specific performance optimisations, but my interest was in comparing exactly the same code. 7 fps seemed like a good start.

At this point I felt it was about time to go back to toying with 3D worlds. Many people were working on engines using <canvas> as the renderer, but I thought I could try using <svg> as the renderer instead.

The differences between <canvas> and <svg> are pretty much the same as between BitmapData and Graphics. With <canvas>, performance drops as the viewport grows; it's easier to fill the pixels of a triangle at 10x10 than at 1000x1000. With <svg> the bottleneck is the number of vectors being drawn: because they don't get rasterised onto a bitmap, the nodes stay in memory and things get slower the more of them there are. However, viewport size isn't much of a problem here, which ticks my fullscreen-action requirement.

How do you draw with <svg>?

Good question. An <svg> is basically XML. Every frame you fill it with a node for each polygon, with its properties. Then, on the next frame, you delete all the nodes and repeat the process. Sounds crazy, but it works. Here is the proof.
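In code, the per-frame <svg> routine is only a few lines (a sketch of the general technique, assuming an inline <svg id="scene"> element; this isn't the engine's actual code):

var svg = document.getElementById('scene');

// Wipe everything drawn last frame.
function clearScene() {
	while (svg.firstChild) svg.removeChild(svg.firstChild);
}

// Add one <polygon> node for a projected face.
function drawPolygon(points, color) {
	var node = document.createElementNS('http://www.w3.org/2000/svg', 'polygon');
	node.setAttribute('points', points.join(' '));		// points is an array of "x,y" strings
	node.setAttribute('fill', color);
	node.setAttribute('shape-rendering', 'crispEdges');	// antialiasing off = big speed win
	svg.appendChild(node);
}

// Every frame: clearScene(), project the 3D faces, then drawPolygon() for each of them.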



As you've probably observed, a benefit of <svg> is that you can disable antialiasing (which you can't with <canvas>). That alone makes <svg> outperform <canvas>.

Now, don't get the wrong impression. At the moment I don't even know whether texturing is possible with <svg>; I haven't investigated that yet. Something tells me that, if it is possible, it'll be horribly slow.

While working on all this I came up with some things I wanted to do for which flat colors were good enough. One of these was a QR code in 3D. Now that I own an Android phone I keep seeing more and more QR codes. The shape of a QR code looked like a little city to me, and extruding it was something that excited my mind (yes, I'm weird).



I learned quite a bit just from doing this piece: Inkscape for vectorising, Blender for extruding and colorising, Python for exporting from Blender, and I refactored my engine once again. Performance-wise, it's like going back to the ActionScript 2 days. But, hopefully, WebGL will arrive soon and I'll be able to play with more polygons. Until then, I can create a bunch of things just with these.

Feel free to study the source for each piece if you're interested. If you find something wrong or something that could be improved, please let me know.

I'm a Windows... ssss... sseven! ... PC...


Let's just ignore the freak speaking in the video.

Marketing-wise, am I the only one who thinks Microsoft and Apple work together? If they were really rivals, would Microsoft adopt the "I'm a PC" thing? I guess they need to work together so the free-of-charge option doesn't make them lose too much money. That would explain why they never mention Linux as a competitor but do mention the "rival": the money ends up in the same place that way.

Yes, I love conspiracies.

EDIT:

Decent SVN Client for Ubuntu / Linux
Since I moved to Linux, one of my major frustrations has been the lack of a decent Subversion client. I tried many different clients during my time on Windows and MacOS. My favourite of them all, by far, was TortoiseSVN.

On Linux I've been using both RapidSVN and Eclipse's Subversive. I can't even begin to count how much time I've spent trying to fix corrupted folders, recursively removing ".svn" folders and copying files on top of a clean checkout... I even considered giving up on SVN, as such a bad experience made it feel like a waste of time.

I don't know why, but I thought about searching for options again. You know, an app that changes your whole computing experience may appear any day. One of the search results was NautilusSVN, which, if I recall correctly, Spite suggested as an option some time ago. It turns out NautilusSVN has been renamed to RabbitVCS; it seems they don't want to limit themselves to Subversion only (which is good).

When I tried it some months ago I found it quite rudimentary... command-line installation as a plug-in for Nautilus, everything becoming super slow, crash after crash... Surprisingly, this time it has been a totally different experience. It's straightforward to install, the file browser still performs fast enough, and it certainly brings the TortoiseSVN days back.

Yes I know, "Use the SVN command line like a real man!" yada yada... not!

EDIT: As of version 0.12, Nautilus can get pretty slow over time, to the point that you have to wait 30 seconds to go up a folder. Apparently this is being fixed in 0.13.

Mr.doob left Hi-ReS!
As I posted on Twitter, I'm no longer commuting from Victoria to Old St every day, nor having Franco's Fruit Salad every morning, nor randomly enjoying Franco's Spaghetti Carbonara, nor randomly enjoying BLT's Caesar Wrap, nor randomly enjoying Kick's Goat Cheese Salad, nor playing foosball matches at the end of the day, nor having beers and Brick Lane Bagels on Fridays and, most importantly, I'm no longer working at a studio with many talented and passionate people.

It's hard to believe that it's already been two years since I joined them. When you're having fun, time goes really fast.

Thanks to everyone there, and especially to Theo and Mike for their constant support. It has been a great experience. :')

60fps' Cube Clock
I forgot to mention here that my first Android app is already on the Market (it took me less than a month from getting the device; beat that, Apple).



So, for such a simple thing I had to learn OpenGL|ES, create my own little Tween engine, and so on...

Thanks a lot to all the people that were kind enough to answer my noob questions!

Now... let's get going with the next App :)
