New AR app.
Jul. 24th, 2009 02:50 pm
Last summer I read "Rainbows End" (highly recommended), and after studying the state of the art in AR, an idea for a tabletop AR app emerged: http://izard.livejournal.com/21450.html
Smart people know that an idea is nearly worthless, while an implementation or even a prototype has some value. So I developed a prototype for my Nokia in J2ME in January; it was only good enough to show off in the office.
Now everyone and their dog are doing this, and it will be even more popular in 2011. (As usual, I was off by a year.) http://graphics.cs.columbia.edu/projects/goblin/goblinXNA.htm, to add one more link.
http://vivifypicture.com
I wish this entry had had more luck in the Google Android Challenge.
80% of the code is not mine, but the remaining 20% is based on what I developed in 2008. I wrote a FAST edge detector and an optimized Hough transform for feature selection, feature recognition code, and camera position and orientation calculation. All of it is simple and fast enough to run in mobile Java.
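To give a rough flavour of the kind of processing involved, here is a minimal sketch of a Hough transform that picks the strongest line out of a binary edge map. This is not my original 2008 code; the class and method names are made up for illustration, and it sticks to integer arithmetic only because that is what you want on a phone.

```java
/*
 * Minimal Hough transform sketch (illustrative only, not the original code).
 * Edge pixels vote into a (theta, rho) accumulator; the cell with the most
 * votes corresponds to the strongest straight line in the image.
 */
public class HoughSketch {

    // number of discrete angles: 1-degree steps over [0, 180)
    private static final int THETAS = 180;

    /**
     * edges         - binary edge map, edges[y * width + x] != 0 means edge pixel
     * width, height - image size
     * returns {bestThetaDegrees, bestRho, votes} for the strongest line
     */
    public static int[] strongestLine(byte[] edges, int width, int height) {
        int maxRho = (int) Math.sqrt((double) (width * width + height * height));
        // accumulator indexed by [theta][rho + maxRho] so rho may be negative
        int[][] acc = new int[THETAS][2 * maxRho + 1];

        // precompute sin/cos scaled by 1024 to stay in integer arithmetic
        int[] cosT = new int[THETAS];
        int[] sinT = new int[THETAS];
        for (int t = 0; t < THETAS; t++) {
            double a = Math.PI * t / THETAS;
            cosT[t] = (int) (Math.cos(a) * 1024);
            sinT[t] = (int) (Math.sin(a) * 1024);
        }

        // every edge pixel votes for all lines passing through it
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (edges[y * width + x] == 0) continue;
                for (int t = 0; t < THETAS; t++) {
                    int rho = (x * cosT[t] + y * sinT[t]) >> 10; // undo the 1024 scale
                    acc[t][rho + maxRho]++;
                }
            }
        }

        // pick the accumulator cell with the most votes
        int bestT = 0, bestR = 0, bestVotes = 0;
        for (int t = 0; t < THETAS; t++) {
            for (int r = 0; r <= 2 * maxRho; r++) {
                if (acc[t][r] > bestVotes) {
                    bestVotes = acc[t][r];
                    bestT = t;
                    bestR = r - maxRho;
                }
            }
        }
        return new int[] { bestT, bestR, bestVotes };
    }
}
```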
Now I am thinking about ways to extend this idea to be a bit (but not too much) more generic. E.g. to create an API/platform that would let anybody use this idea: define small 3D tabletop worlds and their interaction, partially server side. And to add QR codes besides the simple handwriting recognition forms used now.
Quite similar to ARTag, but with a focus on mobile tabletop recognition only, and without explicit AR tags.
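Very roughly, the kind of API I have in mind might look something like the sketch below. None of these interfaces exist yet; all the names (MarkerSource, CameraPose, TabletopWorld, WorldRenderer) are hypothetical, just to make the idea concrete.

```java
/*
 * Hypothetical platform API sketch (nothing here is implemented).
 */

// Where recognised features come from: handwriting forms today, QR codes later.
interface MarkerSource {
    /** Returns an id for the marker found in the frame, or null if none. */
    String detectMarker(byte[] grayscaleFrame, int width, int height);
}

// The estimated camera position and orientation relative to the table plane.
class CameraPose {
    double x, y, z;          // translation, in marker units
    double yaw, pitch, roll; // orientation, in radians
}

// A small 3D world anchored to the tabletop, possibly fetched from a server.
interface TabletopWorld {
    /** Called once when the marker that owns this world is first seen. */
    void onEnter(String markerId);

    /** Called every frame with the current camera pose; updates world state. */
    void onFrame(CameraPose pose);

    /** Called when the user taps the screen at pixel (x, y). */
    void onTap(int x, int y);
}

// Glue: the platform feeds camera frames in and draws the world out.
interface WorldRenderer {
    void render(TabletopWorld world, CameraPose pose);
}
```

The point of splitting MarkerSource from TabletopWorld is that the recognition front end (handwriting, QR, whatever) could be swapped without touching the world definitions, and the world logic itself could live partly on a server.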