Article: Interview with Qi Pan about his Webcam 3D scanner proForma
I think one of the problems with 3D imaging in the past has been the prohibitive cost of custom 3D cameras. The expense comes from some literally being two cameras sandwiched together and others requiring an expensive custom lens. Some systems use multiple independent cameras, and while that has advantages, such as being able to see all sides of the target, I think the cost and complexity would be even greater than for the previously mentioned methods.
Not only is Qi Pan's software able to use a common USB webcam for input, but it updates the position and shape of the model in real time once it has been scanned! His project is called proFORMA, which stands for Probabilistic Feature-based On-line Rapid Model Acquisition. The software works under the assumption that the camera is stationary and that the object of interest is in the center of view.
I think if proFORMA were freely licensed, under the BSD/MIT license for instance, with source code available, it could become quite a valuable tool for both open source and commercial interests. There is the possibility of using it to enhance video compression for video conferencing. Even though it assumes the camera is stationary, I think it may be possible for the software to work if the changes in the camera's location were known, making it usable for robotics applications.
The article mentioned that one of the software's current drawbacks is that the object needs to be sufficiently textured or proFORMA will be unable to model it correctly. I think that is probably because it uses the colors of the object as reference points when figuring out the 3D shape.
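To illustrate why texture matters, here is a toy Python sketch (my own illustration, not proFORMA's actual algorithm) showing that a feature detector based on local intensity gradients finds plenty of trackable points on a textured surface and none on a flat one:

```python
import numpy as np

def count_trackable_points(img, threshold=0.01):
    """Count pixels whose local gradient energy is high enough to track.

    A toy stand-in for a corner/feature detector: structure-from-motion
    systems can only triangulate points they can re-identify between
    frames, and that requires strong local intensity variation.
    """
    gy, gx = np.gradient(img.astype(float))
    energy = gx**2 + gy**2
    return int(np.count_nonzero(energy > threshold))

rng = np.random.default_rng(0)
textured = rng.random((64, 64))        # richly textured surface
flat = np.full((64, 64), 0.5)          # uniform, untextured surface

print(count_trackable_points(textured) > 0)   # True: plenty of features
print(count_trackable_points(flat))           # 0: nothing to track
```

On the flat patch every gradient is zero, so there is simply nothing for a tracker to lock onto, which matches the article's point about untextured objects.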
Qi Pan states that processing power is one of the main things holding the system back from capturing much larger models such as entire scenes. I think that could be partially remedied by using a computer with accelerator cards such as those made by Tilera, which can run regular C/C++ code using Linux as the operating system. The advantage is that slightly modified but otherwise normal code runs on 36 to 100 or more cores for vastly increased performance in multithreaded applications.
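As a rough illustration of that kind of parallelism (ordinary Python multiprocessing here, nothing Tilera-specific, and the per-tile "work" is made up), splitting an embarrassingly parallel per-tile workload across cores looks like this:

```python
from multiprocessing import Pool

def score_tile(tile):
    """Pretend per-tile work, e.g. matching features in one image region."""
    return sum(x * x for x in tile)

if __name__ == "__main__":
    # Split a "frame" into independent tiles, one work item per tile.
    frame = list(range(10000))
    tiles = [frame[i:i + 1000] for i in range(0, len(frame), 1000)]

    with Pool() as pool:                      # one worker per CPU core
        parallel = sum(pool.map(score_tile, tiles))

    serial = sum(score_tile(t) for t in tiles)
    print(parallel == serial)   # True: same result, work spread over cores
```

The point is only that vision workloads often decompose into independent regions, which is exactly the shape of problem a many-core chip rewards.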
Another cool possibility is being able to import objects you own into games to further personalize the experience. Perhaps games would no longer have to be entirely canned experiences but could be modified and enhanced by the players. One such game is Garry's Mod; I don't think it is actually a game with goals in the conventional sense, but it is fun nonetheless due to being a giant user content scratchpad of sorts. From generating a model of your favorite teapot to keeping a video chat stable on a weak wifi connection, I think proFORMA has a lot of possibilities to explore.
Video of proFORMA in action.
Friday, November 27, 2009
Sunday, November 22, 2009
Journal 14, November 22
IBM makes supercomputer significantly smarter than cat
While I don't think IBM has literally built a supercomputer as smart as or smarter than a cat, they have completed a supercomputer capable of modeling neural simulations 4.5 times as complex as a cat brain. According to the researchers, the simulation doesn't yet run in real time.
The purpose of whole brain simulations is to allow researchers to experiment with a model they can directly manipulate. The simulation allows them to run reproducible tests and create snapshots of activity with greater resolution than with real test subjects. While the simulations aren't real brains, they are based on observations of how real neurons and brain tissue interact.
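A toy example of that snapshot-and-replay idea (a single made-up leaky integrate-and-fire neuron, nowhere near the scale or fidelity of IBM's simulator):

```python
import copy

class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron (a toy, not IBM's model)."""
    def __init__(self, leak=0.9, threshold=1.0):
        self.v = 0.0              # membrane potential
        self.leak = leak
        self.threshold = threshold
        self.spikes = 0

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.spikes += 1
            self.v = 0.0          # reset after firing

neuron = LIFNeuron()
snapshot = copy.deepcopy(neuron)      # capture the exact state

for _ in range(10):
    neuron.step(0.3)
ran_once = neuron.spikes

restored = copy.deepcopy(snapshot)    # roll the simulated "brain" back
for _ in range(10):
    restored.step(0.3)

print(restored.spikes == ran_once)    # True: identical, reproducible run
```

That kind of perfect state capture and replay is exactly what real test subjects can never offer.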
Some of the research is aimed at understanding chemical interactions within the brain, while other researchers are working on understanding how the brain actually works. If researchers could uncover how the brain actually functions, advances similar to the fly eye algorithm might be possible for brain simulations. Assuming such algorithms exist in the brain, perhaps even ordinary computers would be capable of supporting strong artificial intelligence.
I think the notion that researchers should be careful with these sorts of simulations because they might be alive is rather nonsensical. Such simulations are, after all, just simulations, and if they cease to be just simulations it will be painfully obvious. Another reason is that the simulations can be reversed or reset to their original state, whereas real brains clearly can't be.
Even though it seems nonsensical to me, the ethical questions arise due to so-called emergent behavior of these systems that is as yet not understood, similar to the fly eye algorithm in my past post, though on a much higher order obviously. I don't think such emergent behavior in a man-made system is grounds to call it alive or anything of that nature unless there is other significant evidence, such as a human level of intelligence, which would be a bit scary to begin with. I thought it was interesting that one person pointed out in the comments of the article that this brain simulation may be more complex than a cat's brain, but a plain old bucket of slime is also more complex and diverse than a human even if it isn't intelligent. I think the underlying problem with the whole scheme is that the models may not reveal any emergent behavior at all if they do not incorporate all the needed components or are in the wrong configuration, similar to how the fly eye algorithm required all the components to be in place before it worked.
Blog post referenced by the Ars Technica post: The Cat is Out of the Bag and BlueMatter
Saturday, November 14, 2009
Journal 12, November 14
Secret Math of Fly Eyes Could Overhaul Robot Vision
When things go right, robot vision is perhaps the most attention-grabbing feat in computer science. I think that has to do with people's long-running fascination with designing human-like machines and computers.
While most techniques for machine vision require massive amounts of processing power, recent developments in studying fly vision have shown that much simpler systems can also be effective. An example of current complex computer vision cited in the article is the Lucas-Kanade method, which is extremely computationally intensive because it has to compare individual pixel changes each time the image updates.
The fly-inspired computer vision algorithm is much more efficient: it ignores areas that don't change in color and focuses on the changing patterns. This narrower approach allows more efficient implementation of commonly needed computer vision tasks such as obstacle avoidance and detection. According to the researchers, the algorithm is a feedback loop that creates a cascading nonlinear system of equations; it is not fully understood, but it works.
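A crude sketch of the change-only idea (my own toy frame-differencing code, not the researchers' actual algorithm): skip static regions entirely and only spend effort on pixels that moved.

```python
import numpy as np

def changed_fraction(prev, curr, threshold=0.05):
    """Return the fraction of pixels worth processing this frame.

    A crude nod to the fly-eye approach: static regions are skipped
    entirely, and only pixels whose intensity changed get analyzed.
    """
    moved = np.abs(curr - prev) > threshold
    return moved.mean()

prev = np.zeros((100, 100))
curr = prev.copy()
curr[40:50, 40:50] = 1.0        # a small object moved into view

frac = changed_fraction(prev, curr)
print(frac)                     # 0.01: only 1% of the frame needs work
```

Compare that to a dense method like Lucas-Kanade, which in its basic form does per-pixel work across the whole frame regardless of how little actually changed.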
I think this sort of vision system might be useful in the automotive industry for self-guided vehicles once the algorithms are better understood. A drawback might be that some of the information from the camera is seemingly discarded by this design, information that might be needed for some applications, so the system may have to rely on more conventional computer vision techniques anyway.
Examples of current computer vision systems in robots are Domo, developed at MIT CSAIL, and Honda's ASIMO robots. The Domo robot has been demonstrated on video interacting with a visually complex environment for specific tasks. ASIMO is mainly a walking demo robot with basic balancing and obstacle avoidance. Both of these robots are fairly good examples of the state of the art in vision and environment interaction, which is needed for human-like robots. The main drawback of both systems is still high computational requirements, with Domo using a powerful networked compute cluster of 15+ computers.
Perhaps if more algorithms similar to the fly vision algorithm could be discovered by experimentation and observation of nature, faster and more efficient ways to control robotic systems could be developed. Interestingly, early versions of the algorithm have already allowed the creation of tiny self-guided flying robots.
I think this development is similar to other advances in math: it is quicker to multiply 10 x 10 than to add 10 ten times, and a similar gain is made here, where a new way of doing things via the fly vision algorithm has allowed the implementation of complex systems on much less powerful computers.
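The multiplication analogy can be made concrete by counting operations, with the gap growing as the numbers do:

```python
def add_repeatedly(x, n):
    """Compute x * n the slow way, counting the additions it takes."""
    total, ops = 0, 0
    for _ in range(n):
        total += x
        ops += 1
    return total, ops

slow_result, slow_ops = add_repeatedly(10, 10)
fast_result, fast_ops = 10 * 10, 1     # one multiply instruction

print(slow_result == fast_result)      # True: same answer
print(slow_ops, fast_ops)              # 10 1: a tenth of the work
```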
Thursday, November 5, 2009
Journal 11, November 5
Nvidia Making x86 CPU With Ex-Transmeta Brains?
The x86 ISA (instruction set architecture), originally developed by Intel, has had a life of continual legal battles, with wins and losses on both sides of the Intel fence. Intel has long had competitors that also produce x86 designs under license, namely AMD and Cyrix (owned by VIA).
Intel has recently made a bold move that upsets the balance a bit: Intel's latest designs integrate a memory controller onto the CPU die. The impact is that other companies would be forced to use a separate memory controller or license Intel's on-chip memory controller, assuming Intel would be willing to license it at all. Nvidia made the next move by directly accessing Intel's memory controller in their latest chipset designs. Intel of course retaliated, and Nvidia has subsequently ceased chipset development, as far as is known to the public.
An exception would be Fujitsu, which still produces SPARC-based designs for high performance computing needs and whose latest chip is touted as the fastest CPU.
Nvidia has never been in the CPU business but has lots of high-performance design experience. Like the article, I think that Nvidia aims to add support for executing x86 binaries on their hardware.
There has been speculation for some time that they would do this, and Nvidia's CEO has even threatened it a time or two! I find it rather intriguing that Nvidia has hired many former Transmeta employees, possibly to work on x86 compatibility for their GPUs. In my opinion, Transmeta's biggest development was an x86-compatible processor that did not use the x86 instruction set in hardware. This allowed them to translate x86, or any instruction set within reason, into their own instruction format with good performance due to their design.
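A toy sketch of that translation idea (every instruction name here is invented for illustration; Transmeta's real code-morphing into a VLIW format was vastly more involved):

```python
# Translate a tiny x86-like instruction stream into a made-up internal
# format, then execute the translated form, never the guest ISA itself.

TRANSLATIONS = {
    "mov": "ld_imm",    # guest op -> native internal op
    "add": "alu_add",
    "sub": "alu_sub",
}

def translate(guest_code):
    """Rewrite guest instructions into the host's internal format."""
    return [(TRANSLATIONS[op], *args) for op, *args in guest_code]

def execute(native_code):
    """Run the host-format instructions on a toy register file."""
    regs = {}
    for op, dst, src in native_code:
        if op == "ld_imm":
            regs[dst] = src
        elif op == "alu_add":
            regs[dst] = regs[dst] + regs[src]
        elif op == "alu_sub":
            regs[dst] = regs[dst] - regs[src]
    return regs

guest = [("mov", "eax", 5), ("mov", "ebx", 3), ("add", "eax", "ebx")]
regs = execute(translate(guest))
print(regs["eax"])   # 8
```

The key property, and why the approach generalizes beyond x86, is that the hardware only ever sees the internal format; swapping the translation table swaps the supported guest ISA.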
In my opinion, if Nvidia were to use translation technology similar to Transmeta's and implement it in their GPUs, it might spur progress in parallel processing, since in theory programs running on the main CPU could be migrated directly onto the GPU if they were determined to be multi-threaded programs capable of benefiting from the GPU's massively parallel architecture.
Similar ideas are also in the works at Intel with the Larrabee project and at AMD with the Bulldozer project. It's good to see that Nvidia is not going to lie down on this one and let Intel and AMD get too far ahead.
Saturday, October 31, 2009
Journal 10, October 31
Article: Important update about Gallium3D
Recently AROS, a mostly AmigaOS 3.1 compatible operating system, gained hardware accelerated 3D on Nvidia hardware ranging from the GeForce 2 up to the 7000 series; this is nearly unheard of in hobby and amateur operating systems!
Gallium3D currently has a few bugs on AROS, which can be seen by comparing the videos of a demo running in software mode and again with the hardware driver. The hardware driver seems to have trouble rendering some textured 3D objects from what I can tell.
Gallium3D is the next generation core of the Mesa 3D Graphics Library, which runs on most operating systems, allowing cross platform 3D development whether it be games, 3D CAD tools, virtual reality or technical demos. The big advantage of Gallium3D is that it is much more modular than previous versions of Mesa, meaning that to port Mesa to a new OS all that is needed is to write the operating system specific components and add support for the hardware drivers. This was possible with older versions of Mesa, but now with Gallium3D the same drivers can accelerate multiple APIs such as OpenCL, OpenVG, Clutter, OpenGL ES and of course OpenGL, whereas in the past it would have taken much more code to enable all those APIs even with mere software rendering.
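Here is how I picture that modularity, as a toy sketch (the class names are simplified stand-ins for Gallium3D's real state tracker/driver/winsys split, and the "commands" are made up):

```python
class Winsys:
    """OS-specific glue: the only part a new OS port has to provide."""
    def __init__(self, os_name):
        self.os_name = os_name

class HardwareDriver:
    """One driver, written once against the common interface."""
    def __init__(self, winsys):
        self.winsys = winsys

    def submit(self, commands):
        return f"{len(commands)} commands on {self.winsys.os_name}"

class StateTracker:
    """Translates one API (OpenGL, OpenVG, ...) into common commands."""
    def __init__(self, api, driver):
        self.api = api
        self.driver = driver

    def draw(self):
        commands = [f"{self.api}_cmd"]       # pretend API translation
        return self.driver.submit(commands)

driver = HardwareDriver(Winsys("AROS"))      # port the winsys once per OS
for api in ["OpenGL", "OpenGL ES", "OpenVG"]:
    print(StateTracker(api, driver).draw())  # every API reuses the driver
```

Adding an OS means writing one small Winsys-like layer; adding an API means writing one state tracker; the hardware drivers stay untouched either way.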
While most Gallium3D/Mesa development happens on the Linux platform, another OS besides AROS that is already getting a port is the Haiku operating system. Haiku is a BeOS-like operating system that aims to please desktop and workstation users. It has good compatibility with most older BeOS software and has some tricks of its own now as well; I may blog more specifically about it sometime in the future. I have been building test images of Haiku for quite some time since I found out about it last year, and the developers are progressing quite quickly even though it is a small project. A screenshot of the Gallium3D software renderer on Haiku can be found here.
One of the coolest things about AROS is how fast it is; for instance, it takes mere seconds to do a warm boot! AROS also has a WebKit based browser, the same engine as in Apple's Safari, that supports most websites, although Adobe Flash only works on Windows, Linux and Solaris, or an operating system that emulates those such as FreeBSD.
If you would like to try out AROS, you can even run it inside Windows as an application with Windows as the host! Or you can test out the more complete AROS derived distro called Icaros.
Sunday, October 25, 2009
Journal 9, October 25
Article: LLVM 2.6 Released, Clang Is Now Production Ready
While it won't directly impact most people, LLVM's latest release is a significant accomplishment. I think the feature that stands out the most is the vastly improved error messages, which will help developers write better code faster. I have found from my own experience that a subtle error in a program can take far more time to figure out than writing the bulk of the program itself. I think that is often due to poor wording of errors or just plain not giving an error message.
In its latest iteration, LLVM 2.6 offers production quality C and Objective-C support with compilation up to 3 times faster than GCC 4, the current standard compiler for many projects across a variety of operating systems, so developers can not only find bugs faster but rebuild their projects with fixes faster too.
Although LLVM, which stands for Low Level Virtual Machine, only fully supports C and Objective-C at the moment, other projects are also making progress, such as C++ support and even more unusual efforts like compiling PHP code with Roadsend PHP to native binaries for increased speed, or in other words lower CPU requirements for heavily used websites.
What makes LLVM so desirable for many projects is the way it breaks components down into modules: to add support for a new language, all that is needed is to write a front end for that language instead of a complete compiler. And of course, when your frontend is finished, you also get LLVM's optimizations for free. The same goes for the backend: if you want to add support for a new type of processor, once the backend is complete you can compile code written in any language LLVM supports.
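A sketch of why that module split pays off: with a shared intermediate representation, N languages and M targets need N + M components instead of N x M complete compilers. The "IR" below is a made-up string form, not real LLVM IR:

```python
# Each frontend lowers source to one shared IR; each backend consumes
# that same IR. Adding a language or a target means adding one function.

def c_frontend(source):
    return ["ir_op:" + tok for tok in source.split()]

def objc_frontend(source):
    return ["ir_op:" + tok for tok in source.split()]

def x86_backend(ir):
    return [op.replace("ir_op:", "x86_") for op in ir]

def arm_backend(ir):
    return [op.replace("ir_op:", "arm_") for op in ir]

frontends = {"c": c_frontend, "objc": objc_frontend}
backends = {"x86": x86_backend, "arm": arm_backend}

def compile_source(lang, target, source):
    ir = frontends[lang](source)     # language-specific half
    return backends[target](ir)      # target-specific half

print(compile_source("c", "arm", "load add store"))
# ['arm_load', 'arm_add', 'arm_store']
```

Any frontend pairs with any backend automatically, and shared optimization passes would slot in between the two halves, improving every language and target at once.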
The push to use LLVM is huge, with Apple already using it for optimization of their OpenGL graphics stack. FreeBSD and DragonFlyBSD are actively working to get their entire OS compiled with LLVM, mostly due to better features than GCC and also more compatible licensing.
LLVM is for most people a behind-the-scenes change, but those changes affect everyone as well. With its BSD-like open source license, which allows both open source contribution and closed source modification, it may even be adopted into commercial compiler suites as it becomes more stable. So if you need a headache-free C/Objective-C compiler or want to modify one for your own in-house use, check out LLVM!
Read all about it at llvm.org
Sunday, October 18, 2009
Journal 8, October 18
Article: Line sharing best solution for slow, expensive US broadband
The Internet has become quite pervasive in our way of life in the US and in other countries. But the fact of the matter is that Internet service in the US isn't what it should be. For instance, up until a month or two ago, the fastest Internet connection I could get for under $50 was dialup, which has been obsolete for years. Even now, with 1 Mbps cable Internet available for about $25, there isn't any competition going on since there aren't any other providers.
Policy and legislation aside, I think infrastructural challenges are what have kept faster Internet from coming my way for a good while, since I am beyond the maximum range for DSL from the central office. I am not alone here either, since nearly half the population in my area lives out of town and must either rely on cable Internet, usually with only one choice of provider, or in the absence of cable use dialup or prohibitively expensive satellite Internet.
The reason I think 1 Mbps Internet at that price is still outrageously expensive is that people living in the UK can get TV, Internet and phone service for less than $30 total from Sky. In the US, the prices on comparable services are at least 3 times higher! I don't think the prices are high due to a lack of competing technologies but rather due to a lack of competing service providers. The technology used really matters very little once you think about it.
Another misconception cleared up by the report was that higher population density is the primary contributor to faster Internet. According to the article, reports show that some countries such as Japan, Korea and the Netherlands are far outperforming what mere population density advantages would predict.
I think that if the US were to adopt more open and competition-inducing policies, we would see faster Internet service and better broadband availability. Companies that currently aren't really competing would be forced into competition; in my case, for example, I have no choice for broadband except one company. While the Internet is not the answer for everything and certainly has its rough spots, it's a shame that many areas in the US are getting left behind technologically. The Internet was designed to be, and still is, an excellent educational tool, without which many homes will likely be poorly equipped to complement learning done at school.
Original Berkman Center Research Paper