Sunday, December 6, 2009
Journal 15, December 6
Social networking has been a big hit on the Internet, with Myspace and Facebook being two of the better known contenders in the USA. Orkut, a Google service, is widely used in Brazil but hasn't seen as much success elsewhere. These services are great for keeping in touch with friends, but most of them also offer other features.
Facebook allows members to play games with their friends online without separate registration. Personally, I think that most of these games are very repetitive with little real interaction between players. However, that hasn't stopped a game called Farmville from becoming possibly the most played computer game ever with over 69 million people playing it!
The article speaks of the addictive nature of the game. Players find themselves spending considerable amounts of time on these games, either playing for long stretches to reach the next level or revisiting the game often to check on progress and make decisions that affect it. Farmville, like most social networking games, frequently asks to post invitations to your friends so that they too will join the game.
The way Farmville and other online social games spread reminds me very much of how computer viruses spread, except in this case it is not the computer that becomes less productive but the person playing the game. I think this is becoming a problem for several reasons, such as poor performance at work or in school from being mentally exhausted by gaming. There have even been deaths attributed to game addiction; one example is a Korean man who played an online game in a cyber cafe for 50 hours straight and died of cardiac arrest and exhaustion.
While such games are not necessarily bad, too much of a good thing usually isn't a good thing. Perhaps game developers should place limits on how long their games can be played under the same username to help deter excessively long gaming stints. I think there is a good possibility that the idea that playing games improves motor skills and prepares children for the modern workforce is not going to work out the way people thought. Instead we end up with lazy teens and adults who would rather be playing a game than doing meaningful work or studies. Obviously this is not the case for everyone, but for many people it truly has become a problem.
Friday, November 27, 2009
Journal 15, November 27
I think that one of the problems with 3D imaging in the past has been the prohibitive cost of custom 3D cameras. The expense comes from some literally being two cameras sandwiched together and others having an expensive custom lens. Some setups use multiple independent cameras, and while that has advantages, such as being able to see all sides of the target, the cost and complexity are even greater than for the previously mentioned methods.
Not only is Qi Pan's software able to use a common USB webcam for input, but it updates the position and shape of the model in real time once it has been scanned! His project is called proFORMA, which stands for Probabilistic Feature-based On-line Rapid Model Acquisition. The software works under the assumption that the camera is stationary and that the object of interest is in the center of view.
I think that if proFORMA were freely licensed, under the BSD/MIT license for instance, with source code available, it could become quite a valuable tool for both open source and commercial interests. I think there is the possibility of using it to enhance video compression for video conferencing. Even though it assumes the camera is stationary, I think it may be possible for the software to work if the changes in the camera's location were known, making it usable for robotics applications.
The article mentioned that one of the current drawbacks of the software is that the object needs to be sufficiently textured or proFORMA will be unable to model it correctly. I think that is probably because it uses the colors of the object as reference points when figuring out the 3D shape.
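To get a feel for why texture matters, here is a minimal sketch (my own illustration using OpenCV, not proFORMA's actual code) of the kind of feature detection and tracking a webcam-based reconstruction pipeline depends on; the webcam index and parameter values are placeholder assumptions.

```python
# Rough illustration only: detect and track corner-like features from a plain
# USB webcam, the raw material a system like proFORMA builds its model from.
import cv2

cap = cv2.VideoCapture(0)                      # any ordinary USB webcam
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# A smooth, untextured object yields very few of these corners,
# which leaves little for the reconstruction to triangulate.
corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Track the same features into the next frame; with the camera fixed,
# the way these points move tells you how the object is rotating.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
print("features tracked:", int(status.sum()))
```

A plain white mug gives almost no trackable corners, while a patterned teapot gives hundreds, which matches the texture limitation the article describes.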
Qi Pan states that processing power is one of the main things holding the system back from capturing much larger models such as entire scenes. I think that could be partially remedied by using a computer with accelerator cards such as those made by Tilera, which run regular C/C++ code under Linux. The advantage is that slightly modified but otherwise normal code runs on 36 to 100 or more cores, for vastly increased performance in multithreaded applications.
Another cool possibility is being able to import objects you own into games to further personalize the experience. Perhaps games would no longer have to be an entirely canned experience but could be modified and enhanced by the players. An example of one such game would be Garry's Mod; I don't think it is actually a game with goals in the conventional sense, but it is fun nonetheless because it is a giant user content scratchpad of sorts. From generating a model of your favorite teapot to keeping a video chat stable on a weak WiFi connection, I think proFORMA has a lot of possibilities to explore.
Video of proFORMA in action.
Sunday, November 22, 2009
Journal 14, November 22
While I don't think that IBM has literally built a supercomputer as smart as a cat or smarter, they have completed a supercomputer capable of running neural simulations 4.5 times as complex as a cat brain. According to the researchers, the simulation doesn't yet run in real time.
The purpose of whole brain simulations is to allow researchers to experiment with a model they can directly manipulate. The simulation allows them to run reproducible tests and create snapshots of activity with greater resolution than is possible with real test subjects. While the simulations aren't real brains, they are based on observations of how real neurons and brain tissue interact.
Some of the research is aimed at understanding chemical interactions within the brain, while other researchers are working on understanding how the brain actually works. If they could uncover how the brain functions, advances similar to the fly eye algorithm might become possible for brain simulations. Assuming such algorithms exist in the brain, perhaps even ordinary computers would be capable of supporting strong artificial intelligence.
I think the notion that researchers should be careful with these sorts of simulations because they might be alive is rather nonsensical. Such simulations are, after all, just simulations, and if they cease to be just simulations it will be painfully obvious. Another reason is that the simulations can be reversed or reset to their original state, whereas real brains clearly can't be.
Even though it seems nonsensical to me, the ethical questions arise because of so-called emergent behavior of these systems that is not yet understood, similar to the fly eye algorithm in my past post, though obviously of a much higher order. I don't think such emergent behavior in a man-made system is grounds to call it alive or anything of that nature unless there is other significant evidence, such as a human level of intelligence, which would be a bit scary to begin with. I thought it was interesting that one person in the article's comments pointed out that this brain simulation may be more complex than a cat's brain, but a plain old bucket of slime is also more complex and diverse than a human even though it isn't intelligent. I think the underlying problem with the whole scheme is that the models may not reveal any emergent behavior at all if they do not incorporate all the needed components or have them in the wrong configuration, similar to how the fly eye algorithm required all its components to be in place before it worked.
Blog post referenced by the Ars Technica article: The Cat is Out of the Bag and BlueMatter
Saturday, November 14, 2009
Journal 12, November 14
When things go right, robot vision is perhaps the most attention-grabbing feat in computer science. I think that has to do with the long-running fascination people have with designing human-like machines and computers.
While most techniques for machine vision require massive amounts of processing power, recent developments in studying fly vision have shown that much simpler systems can also be effective. An example of current complex computer vision cited in the article was the Lucas-Kanade method, which is extremely computationally intensive because it has to compare individual pixel changes each time the image updates.
The fly-inspired computer vision algorithm is much more efficient: it works by ignoring areas that don't change in color and focusing on the changing patterns. This narrower approach allows more efficient implementation of commonly needed computer vision tasks such as obstacle detection and avoidance. According to the researchers, the algorithm is a feedback loop that creates a cascading nonlinear system of equations; it is not fully understood, but it works.
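As a rough sketch of the change-driven idea (my own toy code, not the researchers' algorithm), the snippet below only looks at pixels whose intensity changed between frames instead of comparing every pixel densely; the threshold and frame sizes are arbitrary assumptions.

```python
# Toy change-based motion cue: ignore everything that stayed the same,
# summarize only the regions that changed between two frames.
import numpy as np

def changed_mask(prev_frame, frame, threshold=15):
    """Boolean mask of pixels whose intensity changed noticeably."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_cue(prev_frame, frame):
    mask = changed_mask(prev_frame, frame)
    if not mask.any():
        return None                          # nothing moved, nothing to compute
    ys, xs = np.nonzero(mask)
    # Centroid of the change plus how much of the view changed; a real
    # obstacle-avoidance loop would feed something like this into steering.
    return xs.mean(), ys.mean(), mask.mean()

prev_frame = np.zeros((120, 160), dtype=np.uint8)
frame = prev_frame.copy()
frame[40:60, 80:100] = 200                   # an object moves into view
print(motion_cue(prev_frame, frame))
```

The point is simply that the work scales with how much of the scene changes, not with the total number of pixels, which is where the efficiency comes from.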
I think this sort of vision system might be useful in the automotive industry for self-guided vehicles once the algorithms are better understood. A drawback might be that some of the information from the camera is seemingly discarded by this design; if an application needs that information, the system may have to fall back on more conventional computer vision techniques anyway.
Examples of current computer vision systems in robots are Domo, developed at MIT CSAIL, and Honda's ASIMO robots. The Domo robot has been demonstrated on video interacting with a visually complex environment for specific tasks. ASIMO is mainly a walking demo robot with basic balancing and obstacle avoidance. Both of these robots are fairly good examples of the state of the art in vision and environment interaction, which is needed for human-like robots. The main drawback is still the high computational requirements of both systems, with Domo using a networked compute cluster of 15+ computers.
Perhaps if more algorithms similar to the fly vision algorithm could be discovered through experimentation and observation of nature, faster and more efficient ways to control robotic systems could be developed. Interestingly, early versions of the algorithm have already allowed the creation of tiny self-guided flying robots.
I think this development is similar to other developments in math: just as it is quicker to multiply 10 x 10 than to add 10 ten times, a similar gain is made here, where a new way of doing things via the fly vision algorithm has allowed complex systems to be implemented on much less powerful computers.
Thursday, November 5, 2009
Journal 11, November 5
The x86 ISA (instruction set architecture), originally developed by Intel, has had a life of continual legal battles, with wins and losses on both sides of the Intel fence. Intel has long had competitors that also produce x86 designs under license, namely AMD and Cyrix (now owned by VIA).
Intel has recently made a bold move that upsets the balance a bit. Intel's latest designs integrate a memory controller onto the CPU die. The impact is that other companies would be forced to use a separate memory controller or license Intel's on-chip memory controller, assuming Intel would even be willing to license it at all. Nvidia made the next move by directly accessing Intel's memory controller in their latest chipset designs. Intel of course retaliated, and Nvidia has subsequently ceased chipset development, as far as is known to the public.
An exception to that rule would be Fujitsu, which still produces SPARC-based designs for high performance computing and whose latest chip is touted as the fastest CPU.
Nvidia has never been in the CPU business but has lots of high performance design experience. Like the article's author, I think that Nvidia aims to add support for executing x86 binaries on their hardware.
There has been speculation for some time that they would do this, and Nvidia's CEO has even threatened it a time or two! I find it intriguing that Nvidia has hired many former Transmeta employees, possibly to work on x86 compatibility for their GPUs. In my opinion Transmeta's biggest development was an x86-compatible processor that did not use the x86 instruction set in hardware. This allowed them to translate x86, or within reason any instruction set, into their own instruction format with good performance, thanks to their design.
In my opinion, if Nvidia were to use a translation technology similar to Transmeta's and implement it in their GPUs, it might spur progress in parallel processing, since in theory programs running on the main CPU could be migrated directly onto the GPU whenever it was determined that a multi-threaded program could benefit from the GPU's massively parallel architecture.
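Just to make the idea concrete, here is a toy sketch of instruction translation, nothing like Transmeta's real code-morphing software or any actual ISA, only a made-up guest format turned into a made-up internal format and then executed:

```python
# Toy illustration of dynamic binary translation: guest instructions are
# rewritten on the fly into a different internal format, then executed there.
GUEST_PROGRAM = [          # imaginary x86-like guest code
    ("mov", "eax", 5),
    ("add", "eax", 7),
    ("mov", "ebx", "eax"),
]

def translate(guest_ops):
    """Translate guest instructions into a made-up internal micro-op format."""
    internal = []
    for op, dst, src in guest_ops:
        if op == "mov":
            internal.append(("LOAD", dst, src))
        elif op == "add":
            internal.append(("ALU_ADD", dst, src))
    return internal

def execute(internal_ops):
    """Run the translated micro-ops on a tiny register machine."""
    regs = {}
    for op, dst, src in internal_ops:
        val = regs[src] if isinstance(src, str) else src
        if op == "LOAD":
            regs[dst] = val
        elif op == "ALU_ADD":
            regs[dst] += val
    return regs

print(execute(translate(GUEST_PROGRAM)))   # {'eax': 12, 'ebx': 12}
```

The real trick, of course, is doing this fast enough and caching the translations so the overhead disappears, which is what made Transmeta's approach interesting.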
Similar ideas are also in the works at Intel with the Larrabee project and at AMD with the Bulldozer project. It's good to see that Nvidia is not going to lie down on this one and let Intel and AMD get too far ahead.
Saturday, October 31, 2009
Journal 10, October 31
AROS, a mostly AmigaOS 3.1 compatible operating system, has recently gained hardware accelerated 3D on Nvidia hardware ranging from the GeForce2 up to the 7000 series, which is unheard of in hobby and amateur operating systems!
Gallium3D currently has a few bugs on AROS, which can be seen by comparing the videos of a demo running in software mode and again with the hardware driver. The hardware driver seems to have trouble rendering some textured 3D objects from what I can tell.
Gallium3D is the next generation core of the Mesa 3D Graphics Library, which runs on most operating systems and allows cross platform 3D development, whether for games, 3D CAD tools, virtual reality or technical demos. The big advantage of Gallium3D is that it is much more modular than previous versions of Mesa, meaning that to port Mesa to a new OS all that is needed is to write the operating system specific components and add support for the hardware drivers. This was possible with older versions of Mesa, but now with Gallium3D the same drivers can accelerate multiple APIs such as OpenCL, OpenVG, Clutter, OpenGL ES and of course OpenGL, whereas in the past it would have taken much more code to enable all those APIs even with mere software rendering.
While most Gallium3D/Mesa development goes on for the Linux platform, another OS besides AROS that is already getting a port is the Haiku operating system. Haiku is a BeOS-like operating system that aims to please desktop and workstation users. It has good compatibility with most older BeOS software and has some tricks of its own now as well; I may blog more specifically about it sometime in the future. I have been building test images of Haiku for quite some time since I found out about it last year, and the developers are progressing quite quickly even though it is a small project. A screenshot of the Gallium3D software renderer on Haiku can be found here.
One of the coolest things about AROS is how fast it is; for instance, it takes mere seconds to do a warm boot! AROS also has a WebKit-based browser, using the same engine as Apple's Safari, that supports most websites, although Adobe Flash only works on Windows, Linux and Solaris, or an operating system that emulates those such as FreeBSD.
If you would like to try out AROS you can even run it inside Windows as an application, with Windows as the host! Or you can test out the more complete AROS-derived distro called Icaros.
Sunday, October 25, 2009
Journal 9, October 25
While it won't directly impact most people, LLVM's latest release is a significant accomplishment. I think the feature that stands out the most is the vastly improved error messages, which will help developers write better code faster. I have found from my own experience that a subtle error in a program can take far more time to figure out than writing the bulk of the program itself. I think that is often due to poorly worded error messages, or no error message at all.
In its latest iteration, LLVM 2.6 offers production quality C and Objective-C support with compilation speeds up to 3 times faster than GCC 4, which is the current standard compiler for many projects across a variety of operating systems, so developers can not only find bugs faster but rebuild their projects with fixes faster too.
Although LLVM, which stands for Low Level Virtual Machine, only fully supports C and Objective-C at the moment, other efforts are also making progress, such as C++ support and even more unusual projects such as compiling PHP code to native binaries with Roadsend PHP for increased speed, or in other words lower CPU requirements for heavily used websites.
What makes LLVM so desirable for many projects is the way it breaks the compiler down into modules: to add support for a new language, all that is needed is to write a frontend for that language instead of a complete compiler. And of course when your frontend is finished you also get LLVM's optimizations for free. The same goes for the backend: if you want to add support for a new type of processor, once the backend is complete you can compile code written in any language LLVM supports.
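As a small illustration of what a "frontend" boils down to, here is a sketch using llvmlite, one of the Python bindings to LLVM, chosen purely for brevity (a real frontend is of course far more involved): it emits LLVM IR for a trivial add function and leaves optimization and code generation to LLVM's shared machinery.

```python
# Minimal "frontend" sketch: build LLVM IR for add(a, b) = a + b and hand it
# off to LLVM, which supplies the optimizers and processor backends for free.
from llvmlite import ir

module = ir.Module(name="demo")
int32 = ir.IntType(32)

fn_type = ir.FunctionType(int32, (int32, int32))
fn = ir.Function(module, fn_type, name="add")
block = fn.append_basic_block(name="entry")
builder = ir.IRBuilder(block)

a, b = fn.args
builder.ret(builder.add(a, b, name="sum"))

print(module)   # textual LLVM IR, usable by any backend LLVM supports
```

Everything below the IR, the optimization passes and the x86, ARM or other code generators, is shared by every language that targets LLVM, which is exactly the modularity described above.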
The push to use LLVM is huge, with Apple already using it for optimization in their OpenGL graphics stack. FreeBSD and DragonFlyBSD are actively working to get their entire OS compiled with LLVM, mostly due to better features than GCC and also more compatible licensing.
LLVM is for most people a behind-the-scenes change, but those affect everyone as well. With its BSD-like open source license, which allows both open source contribution and closed source modification, it may even be adopted into commercial compiler suites as it becomes more stable. So if you need a headache-free C/Objective-C compiler, or want to modify one for your own in-house use, check out LLVM!
Read all about it at llvm.org
Sunday, October 18, 2009
Journal 8, October 18
The Internet has become quite pervasive in our way of life in the US and in other countries. But the fact of the matter is that Internet service in the US isn't what it should be. For instance, up until a month or two ago the fastest Internet connection I could get for under $50 was dialup, which has been obsolete for years. Even now, with 1 Mbps cable Internet available for about $25, there isn't any competition going on since there aren't any other providers.
Policy and legislation aside, I think infrastructure challenges are what have kept faster Internet from coming my way for a good while, since I am beyond the maximum range for DSL from the central office. I am not alone here either, since nearly half the population in my area lives out of town and must either rely on cable Internet, usually with only one choice of provider, or in the absence of cable use dialup or prohibitively expensive satellite Internet.
The reason I think 1 Mbps Internet is still outrageously priced is that for that price people living in the UK can get TV, Internet and phone service for less than $30 total from Sky. In the US the prices on comparable services are at least 3 times higher! I don't think the prices are high due to a lack of competing technologies but rather due to a lack of competing service providers. The technology used really matters very little once you think about it.
Another misconception cleared up by the report was that higher population density is the primary contributor to faster Internet. According to the article, reports show that some countries such as Japan, Korea and the Netherlands are far outperforming what mere population density advantages would predict.
I think that if the US were to adopt more open, competition-inducing policies, we would see faster Internet service and better broadband availability. Companies that currently aren't really competing would be forced into competition; in my case, for example, I have no choice for broadband except one company. While the Internet is not the answer for everything and certainly has its rough spots, it's a shame that many areas in the US are getting left behind technologically. The Internet was designed to be, and still is, an excellent educational tool without which many homes will likely be poorly equipped to complement learning done at school.
Original Berkman Center Research Paper
Friday, October 9, 2009
Journal 7, October 11
At Harvard University an ambitious project has been started to create colonies of tiny robotic bees. The RoboBee Project, as they are calling it, has been granted 10 million dollars from the National Science Foundation toward its goals.
The project seems to have similar goals as the DelFly, only on a far smaller scale. The small size would make the robots less noticeable, and their low visual profile could make them useful in covert military operations.
I imagine that for systems such as this to see any widespread use other than as children's toys, the amount of time that can be spent in the air must be vastly improved. For instance, the DelFly II can only hover for 8 minutes, or manage 15 minutes of horizontal flight. A recent leap in battery technology, which I bet has left many chemical engineers slapping their heads that they hadn't thought of it sooner, may allow for this: it's called lithium-air battery technology, and the air around the battery is used as part of the cathode for a theoretical gain of up to 10x in capacity over standard high capacity lithium-ion batteries. The major gain for small flying robots comes from the fact that the cathode is air and doesn't weigh down the robot.
There could be some dangerous implications if the technology got into the wrong hands, such as remote spreading of infectious disease without notice. Of course that doesn't mean we should live in fear; quite the contrary, life and progress must go on. I mean, even today a flyby of a model aircraft spreading its payload over a small area might not be noticed at all.
Other than terrorism, I think the obvious danger of such small remote devices is privacy. Imagine if they were deployed everywhere, similar to how CCTV is in Britain. I think security cameras are fine things to have in stores, and they are often used by police to help track down thieves, but when the camera starts following you around you could hardly be called paranoid if it makes you feel uneasy.
On the other hand, if these RoboBees were unleashed into a field and equipped with cameras and lasers, I think it would make for a great online flying laser tag experience. It would probably have to be hosted locally, however, due to latency issues over the Internet. I think it would still be more feasible than OnLive, which would have ridiculous hardware costs for rendering and streaming game content, requiring hosting near the players to keep the enormous amounts of data off the Internet backbones and maintain low latencies.
While I don't think they will be of much practical use other than surveillance, I think they would probably sell like hot cakes for a couple of years, assuming they cost under $100, and raise the technological bar set in people's minds yet again.
So have fun making Robobees Harvard but please don't give them stingers!
Saturday, October 3, 2009
Journal 6 , October 3
RedHat, a Linux vendor based here in North Carolina, is stepping up to the plate and asking the Supreme Court to recognize that software is not patentable. RedHat has a long history of innovation and contribution to the free software community.
Though many people may not notice, patents affect everyone. On nearly everything you buy there is printed a patent number, several patent numbers, or even "patent pending" if it went into production before the patent registration was complete. The US Patent and Trademark Office says that "Patents protect inventions, and improvements to existing inventions." Although the patent office defines what a patent is, there is still some debate about patents and software.
Many people think that patents should only apply to machines or devices that take some input and provide some output, which is how people generally think of patents. However, in past years patents have also attempted to cover software. I personally see this as a problem and many others do as well.
The problem with patenting software is that it limits innovation and progress, which is exactly what patents were originally designed to encourage. The reason I think software patenting inhibits growth is that a patented software feature should be reimplementable in a different way, unlike today, where if a feature is patented it can't be used by other software at all. A recent example would be OpenGL 3 support in the popular Mesa software renderer, which has hit a snag in fully supporting OpenGL 3 because floating point textures and a few other features are patented.
I guess this is all due to the mentality that whatever you see on the screen is the software. The problem with that is that software is much more than what appears on screen; a lot goes on behind the scenes, and if a company wants to reimplement a piece of software with different internal workings they should be able to do that.
Of course you can look at the Wine project and see a healthy example of this very thing happening. Microsoft owns the copyrights to the Windows source code and can do whatever it likes with it. However, since the software falls under copyright, it can be reimplemented differently by someone else. This lessens monopolization to some degree and I think is healthy for the software ecosystem.
A few examples of the sort of things that pop up in software patents:
Google's launch page
Apple's 3d desktop patent
Patent on drawing a cursor you can always see with the XOR function
Patent on saving the image data behind a window
As you can see, another problem with software patents is they often cover the obvious best method of doing something. How can progress continue if we are constantly being forced to discover different ways to do things worse? And in the case of Google's web page, surely they should have applied for a trademark on their logo so people would be able to tell it is their page, rather than patenting the web page itself. As it stands, every kid on the block with a text editor and a homepage is liable to be sued should Google find their page a bit too much like theirs. I personally think it's a free country, and should I wish to make a webpage with a logo, a search box and two buttons, I should be able to do that without hearing from the likes of Google.
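The XOR cursor patent above is a good example of how simple the covered technique really is. Here is a little numpy sketch of the trick (my own toy framebuffer, not anyone's patented code): drawing the cursor with XOR a second time restores the background exactly, so nothing behind it ever has to be saved.

```python
# XOR drawing: paint and un-paint a cursor without saving the pixels under it.
import numpy as np

framebuffer = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
before = framebuffer.copy()
cursor = np.full((3, 3), 0xFF, dtype=np.uint8)   # a solid 3x3 cursor shape

framebuffer[2:5, 2:5] ^= cursor   # draw: cursor shows up against any background
framebuffer[2:5, 2:5] ^= cursor   # draw again: the XOR cancels itself out

print(np.array_equal(framebuffer, before))       # True, background fully restored
```

That a trick this small can be fenced off for twenty years is exactly the kind of thing that makes software patents feel broken to me.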
I applaud RedHat and the Open Invention Network that is also working toward freeing software from patents. It will allow developers to once again develop software without the worry they are encroaching on some company's IP. Writing software will return to its rightful status as an art form like writing a book or painting a picture and not like designing a piece of hardware.
Sunday, September 27, 2009
Journal 5, September 27
Article: Slashdot says.. AU Government To Build "Unhackable" Netbooks
The Australian government has taken it upon itself to hand out 240,000 laptops to students in grade 9 and up over the next 4 years, which I think is a commendable initiative. The catch is that all the laptops will be preloaded with security measures to prevent students from installing their own programs or changing the laptop configuration in any way.
The laptops are loaded with Windows 7 Enterprise and use its AppLocker functionality to allow only approved programs to be installed. I think this is a very bad idea, since students will be locked into a limited set of software that they will likely have to switch away from once they are out in the real world.
An example would be Symphony, the OpenOffice-based office suite that IBM just completed a mandatory company-wide migration to from MS Office. And there are many other companies that see the advantage of choosing alternative software as well; Lowe's, for instance, uses Linux-based thin clients for all its point of sale computers. The advantage for both IBM and Lowe's is that the software comes at a much lower cost, since the initial licensing is free and optionally paying developers to add needed features should be much cheaper than proprietary software.
While I am obviously pro free-licensed software, I don't think that the fact that these systems are preloaded with non-free software is their main fault. It is the fact that the laptops are locked into that non-free software with no option to change. I think it is a rather severe encroachment on freedom of choice and quite possibly a monopolistic venture on the part of Microsoft, Apple and Adobe, all of which have their software loaded and locked in on the laptops. Another severe fault is the LoJack-style tracking software installed on the laptops, making them privacy invasion cases waiting to happen.
For the cost of $500, the laptops have quite low usability and flexibility for the students. Considering that they are standard run-of-the-mill Lenovo netbooks that cost around $300 retail, the Aussie government could have upped the specs to around $450, which brings slightly more powerful graphics chips and larger screens into the picture. The other $50 per laptop could have gone directly to custom software development, totaling around 12,000,000 dollars, enough to pay 25 developers 120,000 dollars a year for 4 years. The advantage is that if they chose to load a free operating system on the laptops, they could have content filtering built in with DansGuardian, plus the frequent security updates that are a hallmark of open source software.
Their claim of the laptops being “unhackable” is also disputable, since after all they are just standard netbooks with easily flashable BIOS firmware, and even Windows 7 has supposedly unfixable exploits.
Saturday, September 19, 2009
Journal 4, September 20
Osnews.com: ARM_Pushes_Envelope_with_New_Multicore_Chips
While you might initially think that Intel and AMD lead the computer processor market, if you stop to think about all the computers, not just desktops and laptops but also the processors such as ARM, MIPS, PowerPC and SH found in the embedded market, you can easily see that embedded computers far outsell personal computers.
Embedded computers can be found in most electronic devices. Examples using the ARM processor include 98% of cellphones, game consoles (Game Boy Color and newer models), PDAs, mp3 players (for instance my Sansa e200 media player has an ARM CPU), and even calculators.
The ARM processor was originally designed for the Acorn RISC Machine computer, but that machine was overtaken in the market by IBM compatibles and Apple computers. Since then ARM has become one of the most popular processor architectures in the embedded computer world, selling around 90 processors a minute! The popularity of ARM processors in embedded devices is largely due to their power efficiency and not their speed, since you don't need much number crunching on a cellphone; battery life is more important. Recently, however, ARM processors have made large strides in performance while maintaining low power requirements. Intel has also developed lower power processors for PCs designed for web browsing or long mobile battery life, but they are still no match for the ARM design, which does not carry as much backward compatibility baggage as Intel's x86 and so can stay highly power efficient. For instance, Texas Instruments' OMAP3 ARM processor can boast: “during average operation the OMAP 3 processors draw only 25.6 mW, about 11 to 16 percent as much as the 160 to 220 mW required by the best x86 solution in similar conditions“ [1]. A yet to be released handheld computer called the Pandora uses the single core ARM Cortex-A8 CPU at 600MHz by default and up to 900MHz, offering 10+ hours of web browsing or well over 20 hours of mp3 playback.
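A quick back-of-the-envelope calculation (my own assumed numbers, not official Pandora specs) shows why a roughly 26 mW application processor makes that kind of battery life plausible: once the CPU draws so little, the screen, RAM and radios dominate.

```python
# Rough battery-life estimate with assumed figures; only the 25.6 mW CPU draw
# comes from the TI white paper quoted above.
battery_wh = 4.0 * 3.7        # assume a ~4000 mAh pack at 3.7 V, about 14.8 Wh
cpu_w = 0.0256                # OMAP3 average draw (25.6 mW)
rest_w = 0.5                  # assumed screen + RAM + WiFi draw, which dominates

hours = battery_wh / (cpu_w + rest_w)
print(round(hours, 1))        # roughly 28 hours on these assumptions
```

Swap in a 200 mW x86 chip and the runtime drops noticeably, which is the whole argument for ARM in handhelds.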
In the topic article, ARM has announced that it will release new 2GHz multi-core versions of its ARM Cortex-A9 design, which will further close the performance gap. ARM processors will then be fast enough to compete directly with Intel and AMD desktop processors while consuming power comparable to their mobile processors.
The impact, once ARM processors start showing up in laptops and perhaps media center computers, is that people will realize ARM processors make good desktop processors as well. It could lead to an entire market shift from x86 to ARM if game companies take notice. The fact that many games and applications are already available for ARM Linux is sure to help. While you would probably lose compatibility with most of your purchased software if ARM became popular on the PC again, some manufacturers would likely offer patches to get your software working on ARM, or free alternatives could be found. For instance, IBM has recently made it mandatory for all employees to switch to Symphony, their modified version of the OpenOffice suite, instead of using Microsoft's costly office suite. Another example is K3b, an excellent CD burning application available on Linux and BSD which has proven itself better than non-free CD burning programs in my opinion.
I think that in general people would be blown away by what a multi-core ARM computer is capable of. Already single core ARM processors have been shown to be capable of playing HD video and handling normal web browsing with Firefox, Midori (a WebKit browser like Apple's Safari) or Google Chrome. No more hot laptops burning your lap. Extreme battery life would be achievable: with a large battery, 24 hours or more would be no problem. It might not be long at all until you can buy a new laptop at Walmart or Best Buy running a processor similar to the iPhone's, so look out Intel and AMD!
[1] http://focus.ti.com/pdfs/wtbu/ti_mid_whitepaper.pdf
Sunday, September 13, 2009
Journal 3, September 13
In these days of increasing competition between web browsers we have witnessed the JavaScript performance wars, increasing compliance with W3C standards, the advent of tabs in nearly every modern browser and even support for video without Flash in the most recent Firefox release. But there is one feature that could make all the other tricks seem like old news. WebGL will allow web browsers to display 3D content accelerated by your graphics card directly in nearly any web page!
There are ways of displaying 3D content in a browser already, but they all require a different plugin. For instance, Flash 10 from Adobe can to some degree display 3D content but it has had little uptake, and id Software has its own plugin that integrates their popular game Quake into a web interface called Quake Live that can be played anywhere you have a fast Internet connection, free of charge. I have also seen some software 3D done in an extension of HTML, but it was quite limited and I haven't found anything related to it since. The drawback to these is that there is no open spec for all the browser makers to implement. WebGL is the answer: it gives the browsers a standard to rally around that doesn't require clumsy-to-install or, in some cases, bloated plugins.
So besides out of the box support for 3D in at least several browsers (Google Chrome, Firefox and Opera support the standard), what will this mean? For starters, it means that many of the small Flash games people enjoy would no longer have to be written in Flash but could be written in plain old HTML/JavaScript and WebGL, which would be even more powerful for developers since it would perform faster and have inherently better 3D capabilities than today's predominately 2D Flash games.
The Khronos Group, which maintains WebGL as well as the well known OpenGL standard, predicts the release of WebGL sometime in the first half of 2010. That quick release schedule is backed up by the fact that the WebKit browser engine used by Apple's Safari, Google Chrome and Midori already has at least a partial implementation working.
The possibilities of 3D in the browser are endless, from games to more fluid rendering of your webmail inbox to 3D avatars in your chat list. I think WebGL will probably mark the biggest change in the web since Web 2.0. A change this big could change a lot of things people do; for instance, it may become increasingly less common to go to the store and pick up a video game. Why bother when you can play hundreds on the web?
Sunday, September 6, 2009
Journal 2 , September 6
The encryption on cellphones has been the subject of attack for the past few years. The most recent attacks use the newfound power of GPGPU computation. GPGPU stands for General Purpose computation on Graphics Processing Units, and most of the new dedicated graphics cards can do it pretty well. The cards from ATI and NVIDIA are the main supporters of this technology.
To give you some perspective on the amount of computational power this gives programmers access to: one of Intel's fastest CPUs on the market, the Core i7 Extreme 965, tops out at 69 GigaFLOPS (69 billion floating point operations per second) in double precision, while an Nvidia Tesla C1060 can perform 1 TeraFLOP (1 trillion floating point operations per second) in single precision. And up to 4 Nvidia GPGPU cards can be installed in a single custom PC for nearly 4 TeraFLOPS of computational power.
The goal of the hackers is to force cellphone companies to upgrade their encryption, because for quite some time the GSM encryption that cellphones use could already be broken with a large multi-FPGA system such as the one described on hackaday.com.
The implication is that GSM encryption is breakable now if you have enough cash to buy an expensive computer setup and the expertise to build it. However, with the release of the encryption key lookup table currently being computed on graphics processors, anyone with a fast gaming laptop could break the encryption, listen in on cellphone calls and even intercept text messages.
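To illustrate the time-memory trade-off behind those lookup tables, here is a toy sketch (vastly simplified: a stand-in hash instead of the real A5/1 cipher, a 16-bit keyspace, and a plain dictionary rather than the rainbow-table tricks the real project uses): the expensive precomputation happens once, and afterwards each key falls out of a single lookup.

```python
# Toy time-memory trade-off: precompute keystream -> key once, then recover
# keys instantly. The real GSM tables work on the same principle at a vastly
# larger scale, which is where the GPU horsepower comes in.
import hashlib

def keystream(key: int) -> str:
    # Stand-in for the real cipher: anything deterministic per key will do here.
    return hashlib.sha1(key.to_bytes(4, "big")).hexdigest()[:12]

# Precomputation phase (the slow, GPU-friendly part), done once.
table = {keystream(k): k for k in range(2 ** 16)}   # toy 16-bit keyspace

# Attack phase: observe some keystream and simply look the key up.
observed = keystream(0x1234)
print(hex(table[observed]))                         # 0x1234, recovered instantly
```

Scale the keyspace up to a real cipher and the table becomes terabytes in size, which is exactly why the project is distributing the precomputation across volunteers' graphics cards.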
Of course many people will never notice this; as their phone contracts expire and they get new phones, those phones are likely to include better encryption to keep calls private. The people who do know how quickly and cheaply a cell phone can be snooped on will likely change their communication habits drastically.
I recently moved into town for the summer and was surprised that even in the small town I was in there were nearly 5 WiFi base stations in the area. Most of them were using weak WEP or WPA encryption and not WPA2. This sort of situation, where anyone with a little know-how can hack into your PC, is entirely preventable.
Our society today is predominantly unaware of the dangers people open themselves up to by not keeping their computers and means of communication secure. Imagine what kind of chaos would ensue if credit card numbers were stolen when people buy products on their cellphones; you might never even see the thief!
Perhaps the government could create a technology security site making available information about how to better secure your wireless devices and computers. The options for improving cellphone security are currently limited and varied, since not all phones are capable of providing better encryption. The situation is better for PCs, since free operating systems like Linux and BSD are known for their security, and Windows Vista is also more secure than Windows XP when its security settings are turned up. Wireless networks in the home can also be secured with WPA2 encryption, although that standard will eventually need to be replaced just like GSM's.
Be safe and don't give out private information over the phone ;-) .
Saturday, August 29, 2009
Journal 1, August 30
http://www.osnews.com/story/22064/FSF_Launches_Windows_7_Sins_Campaign
Society as a whole uses free software whether they know it or not. Even if you are using MS Windows that you bought and paid for, there is still a very high chance you are using free software. Even Mac users use open source, since Mac OS X is itself partially open source. Approximately 50% of the web is hosted by the open source Apache web server software, and many of those servers are also running open source operating systems. So why doesn't your home computer run open source software? That is where the Free Software Foundation's Windows 7 Sins campaign comes in and lays the blame on Microsoft.

While one of their points seems a bit over-paranoid, claiming that the Windows Genuine Advantage tool used to validate a Windows installation violates your privacy, most of them make interesting food for thought. For instance, how many people do you know who have ever gotten a refund for the preinstalled copy of Windows that came with their computer if they didn't want it? I don't think I have ever met anyone who has done this. And there are the ever increasing operating system requirements imposed by Microsoft on its customers, which are totally bogus. I myself have an ancient 300MHz dual processor computer with a measly 512MB of RAM that by Microsoft standards should have been thrown away years ago, and yet it gives me good service daily and runs the very latest versions of nearly all the programs in the Arch Linux distribution of the GNU/Linux operating system quickly and with RAM to spare.

I find it slightly amazing that society has reached the point where it is generally believed that viruses, worms and trojans are normal everyday things, whereas they hardly even exist on open source operating systems. People no longer think to ask which word processor you use; it is just assumed that everyone has MS Word, which has nearly monopolized the market with its proprietary formats and deflected attempts to standardize on open formats like ODF. Sure, Windows is pretty easy to use, but in recent years Linux has grown a lot in usability, and now my grandparents use it at home mostly for email, web browsing and genealogy. Linux is the big alternative right now, but there are others too, such as Haiku OS and BSD, which are catching up to Linux and in some ways have already passed it.

Imagine how different it would be if, instead of being plopped down in front of a Windows PC or an Apple Macintosh as I was in grade school, kids were allowed to bring their own OS from home on their USB key. Such a shift is entirely possible, especially with USB drives costing less than 10 bucks. So who is going to put their foot down and put a bit of fun and choice back into computing?
http://windows7sins.org/